Smart Mobility and Autonomous Driving

“Smart Mobility” stands as a pivotal concept of the Fourth Industrial Revolution. Driven by rapid advances in artificial intelligence and autonomous driving technologies, it has taken shape in a variety of services. The mobility industry has also harnessed synergies with the expanding electric vehicle market, contributing to safer and more sustainable urban development. This transformation has been further facilitated by regulatory relaxation and increased infrastructure investment.
Professor Seung-Woo Seo, from the School of Electrical and Computer Engineering at Seoul National University, established the Intelligent Vehicle IT Research Center in 2009 and has been a pioneer in smart mobility research for over 15 years. In 2015, he introduced South Korea’s first urban autonomous vehicle, “SNUver,” alongside its expanded counterpart, “SNUvi,” designed for unstructured roads. As shown in Figure 1, these vehicles operated on city roads in Yeouido, South Korea, for approximately two years as a pilot autonomous driving service.[1] To achieve this, he has continuously conducted research aimed at developing and enhancing core software technologies, including sensor data processing, high-precision localization, path planning, and high-precision map generation.
Urban autonomous driving faces significant challenges, including congested traffic and changing weather conditions. Professor Seo’s team has addressed these challenges by developing adaptive and robust perception technologies, including sensor fusion,[2] which maintains stable performance even under adverse weather and lighting conditions. In this research, they use a permutohedral lattice to fuse camera and LiDAR information effectively, exploiting the geometric characteristics of objects to compensate for changes in color. They further demonstrate that fusion performance is maximized by using an attention mechanism to assess the significance of each lattice point and applying bilateral filtering to preserve terrain boundaries.
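The core idea of attention-weighted camera–LiDAR fusion can be sketched roughly as follows. This is a simplified illustration rather than the team’s published permutohedral-lattice implementation: the module names, feature shapes, and the crude bilateral-style smoothing below are assumptions made for the example.

```python
# Minimal sketch (not the published implementation): fuse a dense camera feature
# map with a sparse LiDAR depth map using learned per-pixel attention weights,
# then apply an edge-preserving (bilateral-style) smoothing guided by image features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveFusion(nn.Module):
    def __init__(self, img_channels=16, out_channels=16):
        super().__init__()
        # Encode the sparse depth together with a validity mask (1 where LiDAR hits exist).
        self.depth_enc = nn.Conv2d(2, out_channels, kernel_size=3, padding=1)
        self.img_enc = nn.Conv2d(img_channels, out_channels, kernel_size=3, padding=1)
        # Attention head: predicts how much to trust each modality at each pixel.
        self.attn = nn.Conv2d(2 * out_channels, 2, kernel_size=1)

    def forward(self, img_feat, sparse_depth):
        mask = (sparse_depth > 0).float()
        d = self.depth_enc(torch.cat([sparse_depth, mask], dim=1))
        c = self.img_enc(img_feat)
        w = torch.softmax(self.attn(torch.cat([c, d], dim=1)), dim=1)
        fused = w[:, :1] * c + w[:, 1:] * d          # attention-weighted fusion
        # Edge-preserving smoothing: weight 3x3 neighbors by image-feature similarity,
        # a crude stand-in for bilateral filtering on a lattice.
        B, C, H, W = fused.shape
        neigh = F.unfold(fused, kernel_size=3, padding=1).view(B, C, 9, H, W)
        guide = F.unfold(c, kernel_size=3, padding=1).view(B, C, 9, H, W)
        sim = torch.softmax(-((guide - c.unsqueeze(2)) ** 2).mean(1), dim=1)  # (B, 9, H, W)
        return (neigh * sim.unsqueeze(1)).sum(dim=2)

# Toy usage: fuse = AttentiveFusion()
# out = fuse(torch.randn(1, 16, 64, 64),
#            torch.rand(1, 1, 64, 64) * (torch.rand(1, 1, 64, 64) > 0.9).float())
```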
Furthermore, the team has undertaken various research initiatives[3, 4] aimed at enhancing visual recognition performance, including depth estimation and image segmentation. In their depth estimation research, the team identified a locality issue: most convolutional neural networks operate only within fixed frequency bands and therefore fail to incorporate global information effectively. To tackle this issue, they introduced an attention mechanism that extends the network’s receptive field across different frequency bands through adaptive frequency-domain operations, enabling it to capture additional useful information.
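A rough sketch of such a frequency-domain attention module, assuming a PyTorch setting, is given below. It is not the published AFF-CAM code; the filter shape, reduction ratio, and pooling choices are illustrative assumptions.

```python
# Illustrative sketch of frequency-domain channel attention: features are moved
# to the frequency domain with an FFT, re-weighted by a learned filter so the
# module can emphasize frequency bands beyond a fixed receptive field, then
# mapped back to per-channel attention weights.
import torch
import torch.nn as nn

class FrequencyChannelAttention(nn.Module):
    def __init__(self, channels, h, w):
        super().__init__()
        # Learnable gain per frequency bin of the real 2-D FFT grid.
        self.freq_filter = nn.Parameter(torch.ones(channels, h, w // 2 + 1))
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum, (B, C, H, W//2+1)
        spec = spec * self.freq_filter           # adaptively re-weight frequency bands
        filtered = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        # Global pooling of the filtered response -> channel attention weights.
        weights = self.fc(filtered.mean(dim=(-2, -1)))
        return x * weights.unsqueeze(-1).unsqueeze(-1)

# Toy usage: attn = FrequencyChannelAttention(channels=32, h=64, w=64)
# y = attn(torch.randn(2, 32, 64, 64))
```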
In the field of decision-making and path planning for autonomous driving, Professor Seo’s team has made significant contributions, particularly through the application of reinforcement learning. Their primary focus has been on incorporating reinforcement learning safely into autonomous vehicles. Instead of the conventional approach of bootstrapping reinforcement learning with supervised learning, the team aims to acquire essential driving skills through unsupervised learning and then systematically compose those skills to navigate unfamiliar environments.[5, 6] As part of this research, they have learned a generalized reward function through inverse reinforcement learning to solve various tasks in new environments.[7] Because this reward function remains effective even as the environment changes, retraining a single policy with it allowed the team to perform multiple tasks in environments the policy had never encountered before. They have also designed policies that account for the gap between the current situation and the states seen during training, which helps prevent errors and mitigate catastrophic outcomes under uncertainty.[8]
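The uncertainty-conditioning idea can be illustrated with a minimal sketch: an ensemble of policy heads measures disagreement on the current state, and the action is blended toward a conservative fallback as that disagreement grows. The networks, threshold, and fallback below are assumptions for illustration, not the published UNICON implementation.

```python
# Minimal sketch of an uncertainty-conditioned policy: ensemble disagreement
# serves as an epistemic-uncertainty proxy that gates a conservative fallback.
import torch
import torch.nn as nn

class UncertaintyConditionedPolicy(nn.Module):
    def __init__(self, state_dim=32, action_dim=2, n_heads=5, threshold=0.1):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
            for _ in range(n_heads)
        ])
        self.threshold = threshold

    def forward(self, state, fallback_action):
        actions = torch.stack([h(state) for h in self.heads])   # (n_heads, B, action_dim)
        mean_action = actions.mean(dim=0)
        # Disagreement across ensemble heads approximates how unfamiliar the state is.
        uncertainty = actions.std(dim=0).mean(dim=-1, keepdim=True)
        # Blend toward the conservative fallback as uncertainty approaches the threshold.
        alpha = torch.clamp(uncertainty / self.threshold, max=1.0)
        return (1 - alpha) * mean_action + alpha * fallback_action

# Toy usage: policy = UncertaintyConditionedPolicy()
# a = policy(torch.randn(4, 32), fallback_action=torch.zeros(4, 2))
```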
Recently, the scope of Professor Seo’s research has expanded to harsh, unstructured environments for autonomous driving. In this research, the team is developing technology to detect obstacles in areas that are difficult to define in advance and to determine drivability in those regions. They are also building a system that continuously improves its driving capabilities through self-exploration and learning in unfamiliar, uncharted environments. Professor Seo emphasizes the importance of actively adapting to previously undefined or unlearned environments; this has long been a goal of autonomous driving research and still lacks a complete solution. While such adaptability is essential in well-defined urban scenarios, it becomes even more critical when dealing with non-standard terrain and obstacles, such as off-road conditions. As a step toward this challenge, Professor Seo’s team has proposed a robust fusion method[9] that handles the continuously changing vehicle orientation as the vehicle interacts with the terrain. In this research, they employ a contrastive learning approach to determine transformation functions that align images and point clouds based on the vehicle’s motion patterns. Their findings indicate that estimating the vehicle’s pose transformation matrix from detected vibration patterns and correcting the image-to-point-cloud registration accordingly yields stable fusion results.
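The correction step can be illustrated with a short sketch: an estimated pose correction is applied to the LiDAR points before they are projected into the camera image, stabilizing the alignment between the two modalities. This is not the published EFGHNet pipeline; the intrinsics, transforms, and helper function below are placeholder assumptions.

```python
# Illustrative sketch: apply a 4x4 pose correction (e.g. estimated from the
# vehicle's vibration pattern) to LiDAR points, transform them into the camera
# frame, and project them onto the image plane with pinhole intrinsics K.
import numpy as np

def project_points(points_lidar, T_correction, T_lidar_to_cam, K):
    """Return pixel coordinates and depths of LiDAR points after pose correction."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # (N, 4)
    pts_cam = (T_lidar_to_cam @ T_correction @ pts_h.T).T[:, :3]            # (N, 3)
    in_front = pts_cam[:, 2] > 0                 # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division -> pixel coords
    return uv, pts_cam[:, 2]

# Placeholder usage with an identity correction and a toy pinhole camera:
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
uv, depth = project_points(np.random.rand(100, 3) * 10, np.eye(4), np.eye(4), K)
```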
These research initiatives are poised to play a vital role in accelerating the commercialization of autonomous driving technology, paving the way for the era of smart mobility, an era that promises not only greater safety and convenience for individuals but also seamless, uninterrupted transportation.

Figure 1. “SNUver,” South Korea’s first urban self-driving car, and “SNUvi,” its expanded counterpart for unstructured roads.

References

1. Kim, Seong-Woo, et al. "Autonomous campus mobility services using driverless taxi." IEEE Transactions on Intelligent Transportation Systems 18.12 (2017): 3513-3526.

2. Jeon, Yurim, Hwichang Kim, and Seung-Woo Seo. "ABCD: Attentive bilateral convolutional network for robust depth completion." IEEE Robotics and Automation Letters 7.1 (2021): 81-87.

3. Yang, DongWook, Min-Kook Suh, and Seung-Woo Seo. "AFF-CAM: Adaptive Frequency Filtering Based Channel Attention Module." Proceedings of the Asian Conference on Computer Vision. 2022.

4. Suh, Min-Kook, and Seung-Woo Seo. "Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels." International Conference on Machine Learning. PMLR, 2023.

5. Lee, Sang-Hyun, and Seung-Woo Seo. "Learning compound tasks without task-specific knowledge via imitation and self-supervised learning." International Conference on Machine Learning. PMLR, 2020.

6. Lee, Sang-Hyun, and Seung-Woo Seo. "Unsupervised Skill Discovery for Learning Shared Structures across Changing Environments." International Conference on Machine Learning. PMLR, 2023.

7. Yoo, Se-Wook, and Seung-Woo Seo. "Learning Multi-Task Transferable Rewards via Variational Inverse Reinforcement Learning." 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.

8. Kim, Chan, et al. "UNICON: Uncertainty-Conditioned Policy for Robust Behavior in Unfamiliar Scenarios." IEEE Robotics and Automation Letters 7.4 (2022): 9099-9106.

9. Jeon, Yurim, and Seung-Woo Seo. "EFGHNet: A Versatile Image-to-Point Cloud Registration Network for Extreme Outdoor Environment." IEEE Robotics and Automation Letters 7.3 (2022): 7511-7517.