Autonomous Driving Group


The Autonomous Driving group studies autonomous and semi-autonomous control systems for vehicles. Topics covered by this subgroup so far include autonomous navigation functions (such as object recognition and path planning), sensor data compression for V2X, and learning-based control. We also actively contribute to "Autoware", an open-source software platform for autonomous vehicles.



HMI research software platform for Autonomous Vehicles (OPERA Project 4-5)

We are developing an HMI (Human-Machine Interaction) research software platform for autonomous vehicles. (JST/OPERA, JST/COI)

Open Source Integrated Planner for Autonomous Vehicles

Path planning is one of the key functions of autonomous vehicles. In this research, we develop an open-source path planner for autonomous vehicles. The planner covers most of the functions needed to reach a destination: a global planner, a behavior state machine, and obstacle avoidance.
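To illustrate how the behavior layer sits between the global planner and obstacle avoidance, here is a minimal Python sketch of a behavior state machine. The three states and the decision rules are hypothetical simplifications for illustration, not the actual state set of the planner described in the publications below.

```python
from enum import Enum, auto

class State(Enum):
    """Hypothetical behavior states; the real planner has a richer state set."""
    FOLLOW_LANE = auto()
    OBSTACLE_AVOIDANCE = auto()
    STOP = auto()

def decide_behavior(obstacle_ahead, goal_reached):
    """Toy transition rules: stop at the goal, swerve while an obstacle
    blocks the lane, otherwise follow the lane given by the global plan."""
    if goal_reached:
        return State.STOP
    if obstacle_ahead:
        return State.OBSTACLE_AVOIDANCE
    return State.FOLLOW_LANE
```

In the full planner, the selected behavior state then parameterizes local trajectory generation (e.g. lateral offsets for avoidance), while the global planner supplies the route being followed.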


  • Hatem Darweesh, Eijiro Takeuchi, Kazuya Takeda, Yoshiki Ninomiya, Adi Sujiwo, Luis Yoichi Morales, Naoki Akai, Tetsuo Tomizawa, Shinpei Kato. "Open source integrated planner for autonomous navigation in highly dynamic environments," Journal of Robotics and Mechatronics (JRM), vol. 29, no. 4, pp. 668-684, 2017.

  • Hatem Darweesh, "Estimating the Probabilities of Surrounding Vehicles' Intentions and Trajectories using a Behavior Planner," IJAE, vol. 10, no. 4, 2019.

LiDAR Point Cloud Compression

LiDAR is one of the key sensors for autonomous vehicles. This research proposes various methods for compressing LiDAR point cloud data so that it can be shared with other vehicles and stored efficiently. The proposed methods achieve compression ratios of more than 100:1.
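The first of the methods below treats a LiDAR sweep as an image. The following Python sketch illustrates the general idea under stated assumptions: project points into a quantized range image and compress it with a general-purpose codec. Here `zlib` stands in for the image codecs used in the papers, and the 16×360 grid, ±15° vertical field of view, and 8-bit quantization are illustrative parameters, not the published ones.

```python
import math
import zlib

def to_range_image(points, h=16, w=360, max_range=100.0):
    """Project (x, y, z) points into a 2D range image via spherical projection.
    Grid size, field of view, and quantization are illustrative assumptions."""
    img = [[0] * w for _ in range(h)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0 or r > max_range:
            continue
        yaw = math.atan2(y, x)      # horizontal angle -> column
        pitch = math.asin(z / r)    # vertical angle -> row
        col = min(w - 1, int((yaw + math.pi) / (2 * math.pi) * w))
        row = int((pitch + math.radians(15.0)) / math.radians(30.0) * h)
        if 0 <= row < h:
            img[row][col] = min(255, int(r / max_range * 255))  # 8-bit range
    return img

def compress_range_image(img):
    """Flatten the range image to bytes and apply a generic entropy coder
    (zlib here, as a stand-in for the image codecs used in the papers)."""
    raw = bytes(v for row in img for v in row)
    return zlib.compress(raw, 9)
```

Because neighboring beams hit nearby surfaces, the range image is smooth and compresses far better than a raw point list; the published methods additionally exploit temporal prediction between consecutive sweeps (SLAM-based and learned).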


  • Chenxi Tu, Eijiro Takeuchi, Chiyomi Miyajima, Kazuya Takeda, "Compressing continuous point cloud data using image compression methods," 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2016. doi: 10.1109/ITSC.2016.7795789. (Best Paper Award)

  • Chenxi Tu, Eijiro Takeuchi, Chiyomi Miyajima, Kazuya Takeda, "Continuous point cloud data compression using SLAM based prediction," IEEE 2017 Intelligent Vehicles Symposium (IV '17), pp. 1744–1751, June 2017.

  • Chenxi Tu, Eijiro Takeuchi, Alexander Carballo, Kazuya Takeda, “Point Cloud Compression for 3D LiDAR Sensor using Recurrent Neural Network with Residual Blocks”, 2019 IEEE International Conference on Robotics and Automation (ICRA2019)

  • Chenxi Tu, Eijiro Takeuchi, Alexander Carballo and Kazuya Takeda, "Real-time Streaming Point Cloud Compression for 3D LiDAR Sensor Using U-net," IEEE Access, vol. 7, pp. 113616-113625, 2019.

Learning-based Autonomous Driving

Autonomous driving systems need to handle a wide variety of situations. In this research, we study learning-based autonomous driving systems that can adapt to various conditions. The system learns "how to drive" from driving data using deep learning techniques.
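The core idea of learning a driving signal from data can be sketched with a toy stand-in: the snippet below fits a linear map from sensor features to a steering command by stochastic gradient descent on squared error. The actual research uses deep networks on camera/LiDAR input; the feature vectors, learning rate, and epoch count here are illustrative assumptions only.

```python
def train_steering_model(samples, lr=0.1, epochs=300):
    """Fit w, b so that w . x + b approximates the recorded steering command y
    for each (x, y) in samples, using per-sample gradient descent."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b  # forward pass
            err = pred - y                                   # d(loss)/d(pred)
            for i in range(n):
                w[i] -= lr * err * x[i]                      # gradient step
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned linear map to a new feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```

A deep network replaces the linear map with stacked nonlinear layers, but the training loop (forward pass, loss gradient, parameter update over recorded driving data) has the same shape.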


  • Shunya Seiya, Daiki Hayashi, Eijiro Takeuchi, Chiyomi Miyajima, and Kazuya Takeda, “Evaluation of deep learning-based driving signal generation methods for vehicle control,” 4th International Symposium on Future Active Safety Technology toward zero traffic accidents (FAST-zero ’17), 6 pages, Sept. 2017.

  • Alexander Carballo, Shunya Seiya, Jacob Lambert, Hatem Darweesh, Patiphon Narksri, Luis Yoichi Morales, Naoki Akai, Eijiro Takeuchi, Kazuya Takeda, "End-to-End Autonomous Mobile Robot Navigation with Model-Based System Support," Journal of Robotics and Mechatronics, vol. 30, no. 4, pp. 563-583, 2018.

  • Shunya Seiya, Alexander Carballo, Eijiro Takeuchi, Chiyomi Miyajima and Kazuya Takeda, "End-to-End Navigation with Branch Turning Support using Convolutional Neural Network," ROBIO 2018.

Occlusion-Aware Planning for Autonomous Vehicles

Autonomous driving in an urban area is a challenging task. Urban areas are usually full of structures such as houses, buildings, and bridges, which often cause occlusions. Although autonomous vehicles are usually equipped with various types of sensors, occlusions caused by these structures prevent them from fully observing their surroundings. This research proposes behavior decision methods that use visibility information to realize safe autonomous driving.
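The core visibility computation can be sketched as a ray cast on a 2D occupancy grid: a cell (for example, the position where an approaching vehicle might be) counts as visible only if no occupied cell (building, wall) lies on the line of sight from the ego vehicle. The Bresenham-style Python sketch below is a simplified illustration; the grid layout and cell semantics are assumptions, not the paper's method.

```python
def is_visible(grid, start, target):
    """Ray cast on an occupancy grid (grid[y][x] == 1 means occupied).
    Returns False if any occupied cell lies strictly between start and target."""
    (x0, y0), (x1, y1) = start, target
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        if (x, y) != (x0, y0) and grid[y][x]:
            return False  # line of sight blocked by an occupied cell
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return True
```

A behavior planner can run this check against the possible positions of approaching vehicles at a blind intersection and, for example, creep forward until enough of the conflicting lane becomes visible before committing to cross.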


  • Patiphon Narksri, Eijiro Takeuchi, Yoshiki Ninomiya, and Kazuya Takeda, "Crossing Blind Intersections from a Full Stop Using Estimated Visibility of Approaching Vehicles," 2019 22nd International Conference on Intelligent Transportation Systems (ITSC 2019), TuE-T3.1, 2019.

Recognition Assistance Interface for Autonomous Vehicles

To achieve fully autonomous driving, many problems remain to be solved. One important issue is how to accurately recognize obstacles in the surrounding environment and safely turn vehicle control over to the passenger. In this study, we propose a recognition assistance interface that enables the passenger to assist the recognition system of the autonomous driving system.


  • Atsushi Kuribayashi, Alexander Carballo, Eijiro Takeuchi and Kazuya Takeda, Recognition Assistance Interface for Autonomous Vehicle, 5th International Symposium on Future Active Safety Technology toward zero traffic accidents (FAST-zero ’19), Sept. 2019.

LIBRE: LiDAR Benchmarking and Reference dataset

A first-of-its-kind dataset featuring several 3D LiDARs, covering a range of manufacturers, models, and laser configurations. For each LiDAR in this dataset, we captured data for several scenarios: static targets, where objects were placed at known distances and measured from a fixed position and within a controlled environment; adverse weather, where static obstacles were measured from a moving vehicle, captured in a weather chamber where LiDARs were exposed to different adverse conditions (fog, rain, strong light) with controlled intensities; and finally, dynamic traffic, where dynamic objects were captured from a vehicle driven on public urban roads, at different times of the day, and including supporting sensors such as cameras, infrared imaging, and odometry devices.

The first stage of our LIBRE project features different mechanical-type legacy and cutting-edge 3D LiDARs and we make their data openly available to the community. The second stage of this ambitious project will add solid-state LiDARs, different wavelengths, multi-spectral LiDARs, different scanning technologies, etc.

Our data provides a means for a fair comparison of currently available LiDARs, and facilitates the improvement of existing self-driving vehicles and robotics-related software, in terms of development and tuning of LiDAR-based perception algorithms.

More information about our work is available on the project's introductory website:

Related Publications

  1. J. Lambert, A. Carballo, A. Monrroy Cano, P. Narksri, D.R. Wong, E. Takeuchi, K. Takeda, "Performance Analysis of 10 Models of 3D LiDARs for Automated Driving," in IEEE Access, vol. 8, pp. 131699-131722, 2020, doi: 10.1109/ACCESS.2020.3009680. [link]

  2. A. Carballo, A. Monrroy, D.R. Wong, P. Narksri, J. Lambert, Y. Kitsukawa, E. Takeuchi, S. Kato, K. Takeda, "Characterization of Multiple 3D LiDARs for Localization and Mapping using Normal Distributions Transform," arXiv preprint arXiv:2004.01374, 2020. [link]

  3. A. Carballo, J. Lambert, A. Monrroy, D.R. Wong, P. Narksri, Y. Kitsukawa, E. Takeuchi, S. Kato, K. Takeda, "LIBRE: The Multiple 3D LiDAR Dataset," IEEE Intelligent Vehicles Symposium (IV), October 20-23, 2020. [link]