Program

Final program

9:00 Opening

Christian Laugier (CHROMA, Inria), Philippe Martinet (CHORALE, Inria), Marcelo Ang (NUS, Singapore)

9:10-10:45 Session 1: Machine & Deep Learning

Chairman: Danwei Wang  (NTU, Singapore)

  • Title: The road towards perception for autonomous driving: methods, challenges, and the data required  Presentation  9:10-9:55
    Keynote speaker: Roland Meertens (AID, Munich, Germany)

    Abstract: Self-driving cars are expected to make a big impact on our daily lives within a couple of years. However, first we should solve the most interesting Artificial Intelligence (AI) problem of this century: perception. We will look at the problem of perception for autonomous vehicles, the sensors which are used to solve this problem, and the methods which are currently state of the art. We will also take a look at the available data: a crucial thing we need to teach machines about the world.

  • Title: Transformation-adversarial network for road detection in LIDAR rings, and model-free evidential road grid mapping  paper, presentation, video1, video2  9:55-10:20
    Authors: E. Capellier, F. Davoine, V. Cherfaoui, Y. Li

    Abstract: We propose a deep learning approach to perform road detection in LIDAR scans, at the point level. Instead of processing a full LIDAR point cloud, LIDAR rings can be processed individually. To account for the geometrical diversity among LIDAR rings, a homothety rescaling factor can be predicted during the classification, to realign all the LIDAR rings and facilitate the training. This scale factor is learnt in a semi-supervised fashion. Strong classification performance can then be achieved with a relatively simple system. Furthermore, evidential mass values can be generated for each point from an observation of the conflict at the output of the network, which enables the classification results to be fused into evidential grids. Experiments are done on real-life LIDAR scans that were labelled from a lane-level centimetric map, to evaluate the classification performance.
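The fusion step described in this abstract can be illustrated with Dempster's rule of combination on the two-class frame {road, not-road}. The sketch below is a toy example of that kind of per-cell evidential fusion, not the authors' exact formulation; the function name and mass values are illustrative only.

```python
# Minimal sketch of evidential fusion with Dempster's rule on the frame
# {road (R), not-road (N)}. Mass functions are dicts over {"R", "N", "RN"},
# where "RN" is the ignorance mass assigned to the whole frame. This is a
# toy illustration of per-cell evidential grid fusion, not the paper's code.

def dempster_combine(m1, m2):
    """Combine two mass functions on the frame {R, N} with Dempster's rule."""
    # Conflict: mass jointly assigned to contradictory singletons.
    conflict = m1["R"] * m2["N"] + m1["N"] * m2["R"]
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 / (1.0 - conflict)  # normalisation factor
    return {
        "R": k * (m1["R"] * m2["R"] + m1["R"] * m2["RN"] + m1["RN"] * m2["R"]),
        "N": k * (m1["N"] * m2["N"] + m1["N"] * m2["RN"] + m1["RN"] * m2["N"]),
        "RN": k * (m1["RN"] * m2["RN"]),
    }

# Two scans observe the same grid cell: one fairly confident it is road,
# one more uncertain. Fusion reinforces the "road" hypothesis.
scan1 = {"R": 0.7, "N": 0.1, "RN": 0.2}
scan2 = {"R": 0.4, "N": 0.2, "RN": 0.4}
fused = dempster_combine(scan1, scan2)
```

Because the two sources mostly agree, the fused road mass exceeds either individual one, while the ignorance mass shrinks — the behaviour that lets per-scan classifications accumulate into a confident evidential grid.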

  • Title:  End-to-End Deep Neural Network Design for Short-term Path Planning paper, presentation  video   10:20-10:45
    Authors: M.Q. Dao, D. Lanza, V. Frémont

    Abstract: Early attempts at imitating human driving behavior with deep learning were implemented in a reactive navigation scheme that directly maps sensory measurements to control signals. Although this approach has successfully delivered the first half of the driving task – predicting steering angles – learning vehicle speed in an end-to-end setting requires significantly larger and more complex networks, as well as the accompanying datasets. Motivated by the rich literature in trajectory planning, which timestamps a geometrical path under some dynamic constraints to provide the corresponding velocity profile, we propose an end-to-end architecture for generating a non-parametric path given an image of the environment in front of a vehicle. The resulting path reaches an accuracy of 70%. The first and foremost benefit of our approach is the ability to incorporate deep learning into the navigation pipeline. This is desirable because the neural network can ease the difficulty of developing the see-think-act scheme, while the trajectory planning at the end adds a level of safety to the final output by ensuring that it obeys static and dynamic constraints.

10:45-11:15 Coffee break

11:15-12:50 Session 2: Perception & Situation awareness

Chairman: Marcelo Ang (NUS, Singapore)

  • Title: Sim to Real: Using Simulation for 3D Perception and Navigation  Presentation  11:15-12:00
    Keynote speaker: Ruigang Yang (Baidu, China)

    Abstract: The importance of simulation, in both robotics and more recently autonomous driving, has been more and more widely recognized. In this talk, I will present the fairly extensive line of simulation research at Baidu's Robotics and Autonomous Driving Lab (RAL), from low-level sensor simulation, such as LIDAR, to high-level behavior simulation, such as drivers and pedestrians. These different simulation tools are designed either to produce an abundant amount of annotated data to train deep neural networks, or to directly provide an end-to-end environment to test all aspects of the movement capabilities of robots and autonomous vehicles.

  • Title:  Feature Generator Layer for Semantic Segmentation Under Different Weather Conditions for Autonomous Vehicles  paper, presentation  12:00-12:25
    Authors: O. Erkent, C. Laugier

    Abstract: Adaptation to new environments, such as semantic segmentation in different weather conditions, is still a challenging problem. We propose a new approach to adapt the segmentation method to diverse weather conditions without requiring the semantic labels of either the known or the new weather conditions. We achieve this by inserting a feature generator layer (FGL) into a deep neural network (DNN) that was previously trained in the known weather conditions. We only update the parameters of the FGL, which are optimized with two losses. One loss minimizes the difference between the input and output of the FGL for the known weather domain, to ensure the similarity between generated and non-generated known-weather-domain features; the other loss minimizes the difference between the distribution of the known weather condition features and that of the new weather condition features. We test our method on the SYNTHIA dataset, which covers several different weather conditions, with a well-known semantic segmentation network architecture. The results show that adding an FGL improves the accuracy of semantic segmentation for the new weather condition and does not reduce the accuracy of semantic segmentation for the known weather condition.
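The two-loss training scheme in this abstract can be sketched numerically. Below, feature maps are stand-in lists of floats, the function names are hypothetical, and simple moment matching stands in for whatever distribution distance the paper actually uses — this is a toy illustration of the idea, not the authors' implementation.

```python
# Toy numeric sketch of the two losses that train the feature generator
# layer (FGL). Real features are DNN tensors; here they are short lists.
# Function names (fgl_identity_loss, fgl_alignment_loss) are illustrative.

def fgl_identity_loss(known_feats, fgl_out):
    """Mean squared difference between FGL input and output on the known
    weather domain, keeping generated features close to the originals."""
    return sum((a - b) ** 2 for a, b in zip(known_feats, fgl_out)) / len(known_feats)

def fgl_alignment_loss(known_feats, new_feats):
    """Crude distribution-matching proxy: match the first and second
    moments of known-domain and new-domain features (a stand-in for the
    distribution distance used in the paper)."""
    def moments(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return mean, var
    m1, v1 = moments(known_feats)
    m2, v2 = moments(new_feats)
    return (m1 - m2) ** 2 + (v1 - v2) ** 2

known = [0.9, 1.1, 1.0, 0.95]          # features from the known (e.g. clear) domain
fgl_known = [0.88, 1.12, 1.01, 0.97]   # FGL output on the known domain
new = [0.5, 0.7, 0.6, 0.55]            # features from the new (e.g. rainy) domain

# Training would adjust only the FGL parameters to drive both terms down.
total = fgl_identity_loss(known, fgl_known) + fgl_alignment_loss(known, new)
```

The key design point carried over from the abstract: both losses depend only on features, not on semantic labels, which is what makes the adaptation label-free for both domains.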

  • Title:  An Edge-Cloud Computing Model for Autonomous Vehicles   paper, presentation  12:25-12:50
    Authors: Y. Sasaki, T. Sato, H. Chishiro, T. Ishigooka, S. Otsuka, K. Yoshimura, S. Kato

    Abstract: Edge-cloud computing for autonomous driving has been a challenge due to the lack of fast and reliable networks able to handle a large amount of data, and due to the traffic cost. The recent development of the 5th Generation (5G) mobile network allows us to consider an edge-cloud computing model for autonomous vehicles. However, previous work did not strongly focus on such a model in a 5G mobile network. In this paper, we present an edge-cloud computing model for autonomous vehicles using a software platform called Autoware. Using a 1 Gbit/s simulated network as a stand-in for a 5G mobile network, we show that the presented edge-cloud computing model for Autoware-based autonomous vehicles reduces the execution time and deadline miss ratio despite the latencies caused by communication, compared to an edge computing model.

12:50-14:00 Lunch Break

14:00-15:45 Session 3: Planning & Navigation

Chairman: Christian Laugier (CHROMA, Inria, France)

  • Title: Intelligent Perception, Navigation and Control for Multi-robot Systems  Presentation 14:00-14:45
    Keynote speaker: Danwei Wang (NTU, Singapore)

    Abstract: While tremendous progress has been made in the development of localization and navigation algorithms for single robots, the operation of multi-robot systems has recently garnered significant attention. This talk aims to report recent advancements in multi-robot systems research developed by Prof Wang Danwei's group at Nanyang Technological University, Singapore. Emphasis is placed on intelligent perception, navigation and control technologies that enable autonomous systems to operate in cluttered and GPS-denied environments. The talk will introduce a systematic multi-robot framework that contains core functions such as multi-sensor data fusion, complex scene understanding, multi-robot localization and mapping, moving object reasoning, and formation control.

  • Title: miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework  paper, presentation  14:45-15:10
    Authors: J. Dong, Z. Lv

    Abstract: Many problems in computer vision and robotics can be phrased as non-linear least squares optimization problems represented by factor graphs; examples include simultaneous localization and mapping (SLAM), structure from motion (SfM), motion planning, and control. We have developed miniSAM, an open-source C++/Python framework for solving such factor-graph-based least squares problems. Compared to most existing least squares solver frameworks, miniSAM has (1) a full Python/NumPy API, which enables more agile development and easy binding with existing Python projects, and (2) a wide range of sparse linear solvers, including CUDA-enabled ones. Our benchmarking results show that miniSAM offers comparable performance on various types of problems, with a more flexible and smoother development experience.
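As a rough illustration of the factor-graph least squares problems miniSAM targets, consider a 1-D pose chain with one prior factor and two odometry factors, solved with Gauss-Newton. The sketch below is plain Python, not miniSAM's actual API; it only shows the residual / Jacobian / normal-equation structure such a framework assembles.

```python
# Miniature factor-graph least squares: three scalar poses x0, x1, x2,
# a prior factor on x0 and odometry factors between consecutive poses.
# A real framework like miniSAM handles general manifolds and sparse
# solvers; this only illustrates the structure of the problem.

def gauss_solve(A, b):
    """Dense Gauss-Jordan elimination with partial pivoting (a stand-in
    for the sparse linear solvers a real framework would use)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * q for a, q in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_pose_chain(prior, odom, iters=5):
    """Gauss-Newton over the factor graph: minimize the sum of squared
    factor residuals by repeatedly solving J^T J dx = -J^T r."""
    n = len(odom) + 1
    x = [0.0] * n  # initial guess for the poses
    for _ in range(iters):
        H = [[0.0] * n for _ in range(n)]  # J^T J, accumulated per factor
        b = [0.0] * n                      # -J^T r
        # Prior factor: residual r = x0 - prior, Jacobian row = e_0.
        r = x[0] - prior
        H[0][0] += 1.0
        b[0] += -r
        # Odometry factors: residual r = (x[i+1] - x[i]) - odom[i].
        for i, d in enumerate(odom):
            r = (x[i + 1] - x[i]) - d
            H[i][i] += 1.0; H[i + 1][i + 1] += 1.0
            H[i][i + 1] -= 1.0; H[i + 1][i] -= 1.0
            b[i] += r; b[i + 1] += -r
        dx = gauss_solve(H, b)
        x = [xi + di for xi, di in zip(x, dx)]
    return x

poses = solve_pose_chain(prior=0.0, odom=[1.0, 1.0])
```

Each factor touches only a few variables, so H is sparse — that sparsity is exactly what the framework's sparse linear solvers exploit on large SLAM problems.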

  • Title:  Linear Camera Velocities and Point Feature Depth Estimation Using Unknown Input Observer   paper, presentation  15:10-15:35
    Authors:  R. Benyoucef, L. Nehaoua, H. Hadj-Abdelkader, H. Arioui

    Abstract: In this paper, we propose a new approach to estimate the missing 3D information of a point feature during camera motion and to reconstruct the linear velocity of the camera. This approach is intended to solve the problem of relative localization and to compute the distance between two Unmanned Aerial Vehicles (UAVs) within a formation. An Unknown Input Observer is designed for the considered system, described by a quasi-linear parameter-varying (qLPV) model with unmeasurable variables, to estimate the kinematics from motion. An observability analysis is performed to ensure that the state variables can be reconstructed. Sufficient conditions for designing the observer are derived in terms of Linear Matrix Inequalities (LMIs), based on Lyapunov theory. Simulation results are discussed to validate the proposed approach.

15:45-16:15 Coffee break

16:15-18:00 Session 4: Human vehicle interaction

Chairman: Marcelo  Ang (NUS, Singapore)  

  • Title: The Effect of Vehicle Automation on Road Safety  Presentation   16:15-17:00
    Keynote speaker: Cristina Olaverri (Johannes Kepler Universitat, Austria)

    Abstract: The feasibility of incorporating new technology-driven functionality into vehicles has played a central role in automotive design. The overall diffusion of digital technologies makes it possible to design systems whose functioning is based on intelligent technologies that simultaneously reside in multiple, interconnected applications. Consequently, the development of intelligent road-vehicle systems such as cooperative advanced driver assistance systems (co-ADAS), and with them the degree of vehicle automation, is rapidly increasing. The advent of vehicle automation promises a reduction of the driver workload. However, depending on the automation level, consequences for the passengers such as out-of-the-loop states can be foreseen. The protection of Vulnerable Road Users (VRUs) has also been an active research topic in recent years. A variety of responses exhibiting several levels of trust, uncertainty and a certain degree of fear when interacting with driverless vehicles has been observed. In this context, P2V (Pedestrian-to-Vehicle) and V2P (Vehicle-to-Pedestrian) communication have become crucial technologies to minimize potential dangers, due to the high detection rates and high user-satisfaction levels they achieve. This presentation gives an overview of the impact of such technologies on traffic awareness, towards improving driving performance and reducing road accidents. Furthermore, the benefits and potential problems regarding vehicle automation will be outlined.

  • Round table: Human vehicle interaction 17:00-18:00

           Participants:

        •  Henriette Cornet (TUMCREATE, Singapore) slides
        •  Li Haizhou (National University of Singapore) slides
        •  Cristina Olaverri (Johannes Kepler Universitat, Austria) slide
        •  Juraj Kabzan (Nutonomy, Singapore) slides

18:00 Closing
