Program

8:50 Opening 

Christian Laugier (CHROMA, Inria), Philippe Martinet (CHORALE, Inria), Urbano Nunes (ISR/University of Coimbra, Portugal), Miguel Angel Sotelo (University of Alcala, Madrid, Spain)

9:00-9:40 Session 1: Deep Learning

Chairman: Philippe Martinet (CHORALE, Inria)

  • Title:  ISA2: Intelligent Speed Adaptation from Appearance  paper  presentation 9:00-9:20
    Authors: C. Herranz-Perdiguero and R. J. Lopez-Sastre

    Abstract: In this work we introduce a new problem named Intelligent Speed Adaptation from Appearance (ISA2). Technically, the goal of an ISA2 model is to predict, for a given image of a driving scenario, the proper speed of the vehicle. Note that this problem is different from predicting the actual speed of the vehicle: it defines a novel regression problem where the appearance information has to be mapped directly to a prediction of the speed at which the vehicle should go, taking into account the traffic situation. First, we release a novel dataset for the new problem, where multiple driving video sequences are provided with the adequate speed annotated per frame. We then introduce two deep-learning-based ISA2 models, which are trained to perform the final regression of the proper speed given a test image. We end with a thorough experimental validation, where the results show the level of difficulty of the proposed task. The dataset and the proposed models will all be made publicly available to encourage much-needed further research on this problem.

  • Title:  Classification of Point Cloud for Road Scene Understanding with Multiscale Voxel Deep Network  paper   presentation 9:20-9:40 
    Authors: X. Roynard and J.E. Deschaud and F. Goulette

    Abstract: In this article we describe a new convolutional neural network (CNN) to classify 3D point clouds of urban scenes. Solutions are given to the problems encountered when working on scene point clouds, and a network is described that allows point classification using only the positions of points in a multiscale neighborhood. This network enables the classification of the 3D point clouds of road scenes needed to create maps for autonomous vehicles, such as HD maps. On the reduced-8 Semantic3D benchmark [1], this network, ranked second, beats the state of the art among point classification methods (those not using an additional regularization step such as a CRF). Our network has also been tested on a new dataset of labeled urban 3D point clouds for semantic segmentation.
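
    A minimal, illustrative sketch (not the authors' code) of the multiscale idea in the abstract above: for a query point, occupancy grids of its neighbourhood are built at several scales and stacked as the input of a 3D CNN. The grid size, the scales and the use of plain binary occupancy are assumptions made here for illustration.

        import numpy as np

        def voxel_occupancy(points, center, scale, grid=16):
            """Binary occupancy grid of the neighbourhood of `center` at a given scale (metres)."""
            local = (points - center) / scale                # normalise neighbourhood to [-0.5, 0.5]
            keep = np.all(np.abs(local) < 0.5, axis=1)
            idx = ((local[keep] + 0.5) * grid).astype(int).clip(0, grid - 1)
            occ = np.zeros((grid, grid, grid), dtype=np.float32)
            occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
            return occ

        def multiscale_input(points, center, scales=(0.5, 1.0, 2.0)):
            """Stack one occupancy grid per scale into a multi-channel tensor for a 3D CNN."""
            return np.stack([voxel_occupancy(points, center, s) for s in scales])

        cloud = np.random.rand(10000, 3) * 20.0              # stand-in for a LiDAR scan
        sample = multiscale_input(cloud, cloud[0])           # shape: (3, 16, 16, 16)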

09:40-11:00 Session 2: Navigation, Decision, Safety

Chairman: Christian Laugier (CHROMA, Inria) 

  • Title: Towards Fully Automated Driving: Results and Open Challenges in Intelligent Decision Making, Planning, and Maps  9:40-10:20
    Keynote speaker: Dominik Maucher (Bosch, Germany)

    Abstract: The current space race of autonomous driving is in full swing. We will attempt to paint a picture of how far we have gotten and what some of the most pressing current challenges are. We will touch upon some of the unanswered questions in the areas of intelligent decision making, planning, and maps. We will also give you a glimpse into the joint Bosch-Mercedes automated driving project, which aims to bring autonomous cars to the roads at the beginning of the next decade.

  • Title:  Statistical Model Checking Applied on Perception and Decision-making Systems for Autonomous Driving   paper  presentation 10:20-10:40
    Authors: J. Quilbeuf, M. Barbier, L.  Rummelhard, C. Laugier, A. Legay, B. Baudouin, T. Genevois, J. Ibanez-Guzman, and O. Simonin

    Abstract: Automotive systems must undergo a strict process of validation before their release on commercial vehicles. The currently used methods are not adapted to the latest autonomous systems, which increasingly rely on probabilistic approaches. Furthermore, real-life validation, when it is even possible, often implies prohibitive costs. New methods for validation and testing are necessary.
    In this paper, we propose a generic method to evaluate complex automotive-oriented systems for automation (perception, decision-making, etc.). The method is based on Statistical Model Checking (SMC), using specifically defined Key Performance Indicators (KPIs) expressed as temporal properties depending on a set of identified metrics. By feeding our statistical model checker with the values of these metrics over a large number of simulations, together with the properties representing the KPIs, we evaluate the probability of meeting the KPIs. We applied this method to two different subsystems of an autonomous vehicle: a perception system (the CMCDOT framework) and a decision-making system. An overview of the two systems is given to explain the related validation challenges. We show that the methodology is suited to efficiently evaluating some critical properties of automotive systems, and also to exposing their limitations. (A minimal illustrative sketch of the SMC estimation step is given at the end of this session's listing.)

  • Title:  Automatically Learning Driver Behaviors for Safe Autonomous Vehicle Navigation  paper   presentation 10:40-11:00
    Authors: E. Cheung, A. Bera, E. Kubin, K. Gray, and D. Manocha

    Abstract: We present an autonomous driving planning algorithm that takes into account neighboring drivers' behaviors and achieves safer and more efficient navigation. Our approach leverages the advantages of a data-driven mapping that is used to characterize the behavior of other drivers on the road. Our formulation also takes into account pedestrians and cyclists and uses psychology-based models to perform safe navigation. We demonstrate the benefits of our approach over previous methods: safer behavior in avoiding dangerous neighboring drivers, pedestrians and cyclists, and efficient navigation around careful drivers.
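
    A minimal, illustrative sketch of the Statistical Model Checking step referred to in the Quilbeuf et al. abstract above: the probability that a KPI (a property over a trace of metrics) is met is estimated by sampling many simulation runs. The toy simulator, the example KPI and the sample size below are assumptions made for illustration, not the authors' setup.

        import random, math

        def simulate_run(steps=100):
            """Hypothetical stand-in simulator: returns a trace of a time-to-collision (TTC) metric."""
            ttc, trace = 5.0, []
            for _ in range(steps):
                ttc = max(0.0, ttc + random.gauss(0.0, 0.3))
                trace.append(ttc)
            return trace

        def kpi_holds(trace, threshold=1.0):
            """Example KPI as a bounded temporal property: the TTC never drops below the threshold."""
            return all(ttc >= threshold for ttc in trace)

        def smc_estimate(n_runs=2000):
            """Monte Carlo estimate of P(KPI) with a 95% normal-approximation confidence interval."""
            successes = sum(kpi_holds(simulate_run()) for _ in range(n_runs))
            p = successes / n_runs
            half_width = 1.96 * math.sqrt(p * (1 - p) / n_runs)
            return p, half_width

        p, hw = smc_estimate()
        print("P(KPI satisfied) ~= %.3f +/- %.3f" % (p, hw))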

11:00-11:30 Coffee break

11:30-12:50 Session 3: Perception

Chairman: Alberto Broggi (Ambarella, USA)

  • Title: SoC for ultra HD mono and stereo-vision processing  presentation  11:30-12:10
    Keynote speaker: Alberto Broggi (Ambarella/USA, VisLab/Italy)

    Abstract: Ambarella has designed a sensing suite for autonomous vehicles that relies on input from stereovision and monocular cameras. The cameras, each based on the new Ambarella CV1 SoC (System on Chip), provide a perception range of over 150 meters for stereo obstacle detection and over 180 meters for monocular classification to support autonomous driving and deliver a viable, high-performance alternative to lidar technology.

    Ambarella’s solution not only recognizes visual landmarks, but also detects obstacles without training and runs commonly used CNNs for classification. Additional features include automatic stereo calibration, terrain modeling, and vision-based localization.

    The presentation will emphasize the underlying architecture of Ambarella’s CV1-based solution for vision-based autonomous vehicle driving systems and provide insight into the main technological breakthrough that made it possible.

  • Title:  Intelligent feature selection method for accurate laser-based mapping and localisation in self-driving cars  paper    presentation   12:10-12:30
    Authors: N. Hernandez, I. G. Daza, C. Salinas, I. Parra, J. Alonso, D. Fernandez-Llorca, M.A. Sotelo

    Abstract: Robust 3D mapping has become an attractive field of research with direct application in the booming domain of self-driving cars. In this paper, we propose a new method for feature selection in laser-based point clouds with a view to achieving robust and accurate 3D mapping. The proposed method follows a two-stage approach to map building. In the first stage, the method compensates for the point-cloud distortion using a rough estimate of the 3-DOF vehicle motion, given that range measurements are received at different times during continuous LIDAR motion. In the second stage, the 6-DOF motion is accurately estimated and the point cloud is registered using a combination of distinctive point-cloud features. The appropriate combination of such features, namely vertical poles, road curbs, plane surfaces, etc., proves to be a powerful tool for achieving accurate mapping and robustness to aggressive motion and temporarily low feature density. We show and analyse the results obtained after testing the proposed method on a dataset collected in our own experiments on the Campus of the University of Alcala (Spain) using the DRIVERTIVE vehicle equipped with a Velodyne-32 sensor. In addition, we evaluate the robustness and accuracy of the method for laser-based localisation in a self-driving application.

  • Title:  LiDAR based relative pose and covariance estimation for communicating vehicles exchanging a polygonal model of their shape    paper   presentation  12:30-12:50
    Authors: E. Héry, P. Xu and P. Bonnifait

    Abstract: Relative localization between autonomous vehicles is an important issue for accurate cooperative localization. It is also essential for obstacle avoidance and platooning. Thanks to communication between vehicles, additional information, such as the vehicle model and dimensions, can be transmitted to facilitate this relative localization process. In this paper, we present and compare different algorithms to solve this problem based on LiDAR points and the pose and model communicated by another vehicle. The core of the algorithm relies on iterative minimization, tested with two methods and different model associations using point-to-point and point-to-line distances. This work compares the accuracy, the consistency and the number of iterations needed to converge for the different algorithms in different scenarios, e.g. straight-lane, two-lane and curved-lane driving.
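
    A rough sketch of the point-to-line minimisation mentioned in the abstract above, under simplifying assumptions (2D, a rectangular vehicle model, a derivative-free optimiser instead of the paper's iterative methods): the relative pose is found by minimising the summed squared distances from the LiDAR points to the edges of the communicated polygonal model.

        import numpy as np
        from scipy.optimize import minimize

        MODEL = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 1.8], [0.0, 1.8]])   # polygon vertices (m)
        EDGES = list(zip(MODEL, np.roll(MODEL, -1, axis=0)))

        def point_to_segment(p, a, b):
            """Euclidean distance from point p to segment [a, b]."""
            ab, ap = b - a, p - a
            t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
            return np.linalg.norm(p - (a + t * ab))

        def cost(pose, lidar_points):
            """Sum of squared point-to-edge distances, with the points expressed in the model frame."""
            x, y, yaw = pose
            c, s = np.cos(yaw), np.sin(yaw)
            pts = (lidar_points - [x, y]) @ np.array([[c, -s], [s, c]])
            return sum(min(point_to_segment(p, a, b) for a, b in EDGES) ** 2 for p in pts)

        # Hypothetical scan: points sampled on the model edges, placed at a 'true' pose in the world
        true_pose = np.array([10.0, 5.0, 0.3])
        t = np.linspace(0.0, 1.0, 25)[:, None]
        edge_pts = np.vstack([a + t * (b - a) for a, b in EDGES])
        c, s = np.cos(true_pose[2]), np.sin(true_pose[2])
        scan = edge_pts @ np.array([[c, s], [-s, c]]) + true_pose[:2]

        res = minimize(cost, x0=[9.5, 5.5, 0.0], args=(scan,), method="Nelder-Mead")
        print(res.x)   # estimated relative pose (x, y, yaw), close to true_pose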

13:00-14:00 Lunch Break

14:00-15:00 Session 4: Interactive session

Chairmen: Urbano Nunes (ISR/University of Coimbra, Portugal) and Philippe Martinet (CHORALE team, Inria)

  • Title:  Vehicle Detection in UAV Aerial Video  paper   presentation
    Authors: H. Zhang, C. Meng, P.  Guo, X. Ding and Z. Li

    Abstract: In recent years, unmanned aerial vehicles (UAVs) have gradually been applied to lane detection, vehicle detection, vehicle classification and related tasks. There are many challenges in UAV aerial video detection, such as camera shake, interfering targets and a wide range of scene changes. To address these problems, a new vehicle detection system suited to UAV aerial video is proposed. We utilize the bit plane to extract the lane surface, and we use the extracted lane information to limit the detection area. We improve the ViBe algorithm so that it can be used in dramatically changing scenarios. In addition, a moving-target screening strategy is proposed to screen moving vehicles. This paper is the first to introduce bit planes into such a detection method. Experiments show that our system is superior to other existing detection algorithms in terms of accuracy and computation time.

  • Title:  MOMDP solving algorithms comparison for safe path planning problems in urban environments  paper   presentation
    Authors: J.A. Delamer and Y. Watanabe and C. P. Carvalho Chanel

    Abstract: This paper tackles the problem of safe UAV path planning in an urban environment where onboard sensors can become unavailable, for instance due to GPS occlusion. The key idea is to plan the UAV path jointly with its navigation and guidance modes, where each mode uses a different set of sensors whose availability and performance are environment-dependent. A priori knowledge is assumed to be available in the form of Gaussian mixture maps of obstacles and sensor availabilities. These maps allow the use of an Extended Kalman Filter (EKF) to obtain an accurate state estimate. This paper proposes a planner model based on a Mixed-Observability Markov Decision Process (MOMDP) and the EKF, which allows the planner to propagate such probability-map information along the future path in order to choose the action minimizing the expected cost.

  • Title:  On finding low complexity heuristics for path planning in safety relevant applications    paper  presentation
    Authors: R. Krutsch

    Abstract: In many robotics applications, path planning has safety implications that need to be addressed and understood. Approaches based purely on learning algorithms today offer no strong guarantee that the path found is safe, even given a perfect environment model. In contrast, search-based methods have strong theoretical guarantees but are significantly slower and hard to parallelize. In this paper we present a method for obtaining heuristics for search-based algorithms that aims to reduce search complexity by combining the strengths of the two paradigms. We show that a complexity reduction of more than 30% is achievable with less than a 1% drop in path optimality. As a consequence of the complexity reduction, we also measure a performance boost of more than 30%.

  • Title:  Enhancing the educational process related to autonomous driving    paper   presentation
    Authors: N. Sarantinoudis, P. Spanoudakis, L. Doitsidis and N. Tsourveloudis

    Abstract: Autonomous driving is one of the major areas of interest for the automotive industry. This constantly evolving field requires the involvement of a wide range of engineers with complementary skills. The education of these engineers is a key issue for the further development of the field. Current engineering curricula lack platforms that can help engineers train in and further develop the required skills. The current practice is to use either small robotic devices or full-scale prototypes in order to understand and experiment with autonomous driving principles. Each approach has disadvantages, ranging from the lack of realistic conditions to the cost of the devices that are used. In this paper we present a low-cost modular platform which can be used for experimentation and research in the area of autonomous cars and driving. The functionality of the suggested system is verified by extensive experimentation in conditions very close to real traffic.

  • Title:  CoMapping: Efficient 3D-Map Sharing: Methodology for Decentralized cases    paper   presentation
    Authors: L. F. Contreras-Samame, S. Dominguez-Quijada, O. Kermorgant and P. Martinet

    Abstract: CoMapping is a framework to efficiently manage, share, and merge 3D map data between mobile robots. The main objective of this framework is to implement collaborative mapping for outdoor environments in which GPS data cannot be used at all times. The framework is structured in two stages. In the first, the Pre-Local Mapping Stage, each robot constructs in real time a pre-local map of its environment using laser rangefinder data and, only in certain situations, low-cost GPS information. Afterwards, in the Local Mapping Stage, the robots share their pre-local maps and merge them in a decentralized way in order to improve them; the merged maps are then called local maps. An experimental study for the case of decentralized cooperative 3D mapping is presented, where tests were conducted using three intelligent cars equipped with LiDARs and GPS receivers in urban outdoor scenarios. We also discuss the performance of the whole cooperative system in terms of map alignment.

  • Title:  Single-View Place Recognition under Seasonal Changes    paper  presentation
    Authors: D. Olid, J. M. Facil and J. Civera

    Abstract: Single-view place recognition, which we can define as finding an image that corresponds to the same place as a given query image, is a key capability for autonomous navigation and mapping. Although there has been a considerable amount of research on the topic, the high degree of image variability (with viewpoint, illumination or occlusions, for example) makes it a research challenge. One of the particular challenges, which we address in this work, is weather variation. Seasonal changes can produce drastic appearance changes that classic low-level features do not model properly. Our contributions in this paper are twofold. First, we pre-process and propose a partition of the Nordland dataset, which is frequently used for place-recognition research without any consensus on the partitions. Second, we evaluate several neural network architectures, such as pre-trained, siamese and triplet networks, on this problem. Our best results outperform the state of the art of the field. A video showing our results can be found at https://youtu.be/VrlxsYZoHDM.

  • Title:  Future Depth as Value Signal for Learning Collision Avoidance  paper  presentation
    Authors: K. Kelchtermans and T. Tuytelaars

    Abstract: The constant miniaturization of robots reduces the array of available sensors. This raises the need for robust algorithms that allow robots to navigate without collision based solely on monocular camera input. In this work we propose a new learning-based method for this task: a self-supervised neural network policy for monocular collision avoidance that does not require actual collisions during training. To this end, we demonstrate that a neural network is capable of evaluating an input image and action by predicting the expected depth in the future. In this sense, the future depth can be seen as an action-value signal. In comparison with our baseline model, which is based on predicting collision probabilities, we show that using depth requires less data and leads to more stable training without the need for actual collisions. The latter can be especially useful if the autonomous robot is fragile and cannot withstand collisions (e.g. aerial robots). The proposed method is evaluated thoroughly in simulation in a ROS-Gazebo-TensorFlow framework and will be made available on publication.

  • Title:  Automatic generation of ground truth for the evaluation of obstacle detection and tracking techniques    paper   presentation
    Authors: H. Hajri, E. Doucet, M.  Revilloud, L. Halit, B.  Lusetti, M.C. Rahal

    Abstract: As automated vehicles are getting closer to becoming a reality, it will become mandatory to be able to characterise the performance of their obstacle detection systems. This validation process requires large amounts of ground-truth data, which are currently generated by manual annotation. In this paper, we propose a novel methodology to generate ground-truth kinematics datasets for specific objects in real-world scenes. Our procedure requires no annotation whatsoever, human intervention being limited to sensor calibration. We present the recording platform used to acquire the reference data, along with a detailed and thorough analytical study of the propagation of errors in our procedure. This allows us to provide detailed precision metrics for each and every data item in our datasets. Finally, some visualisations of the acquired data are given.

  • Title:  Autonomous navigation using visual sparse map      paper  presentation
    Authors: S. Hong and H. Cheng

    Abstract: This paper presents an autonomous navigation system that uses only a sparse visual map. Although a dense map provides detailed information about the environment, most of that information is redundant for autonomous navigation; in addition, a dense map incurs high storage and management costs. To tackle these challenges, we propose autonomous navigation using a sparse visual map. We leverage visual Simultaneous Localization and Mapping (SLAM) to generate the sparse map and to localize the robot in it. Using the robot's position in the map, the robot navigates by following a reference line generated from the sparse map. We evaluated the proposed method on two robot platforms in indoor and outdoor environments. Experimental results show successful autonomous navigation in both environments. The experimental video is available at https://drive.google.com/file/d/1DlDa6lkrQA6Zi2XAbKn0hZsR8cXLyXKO/view?usp=sharing.

  • Title:  Socially Invisible Navigation for Intelligent Vehicles      paper  presentation
    Authors: A. Bera, T. Randhavane, E. Kubin, A. Wang, K.  Gray, and D.  Manocha

    Abstract: We present a real-time, data-driven algorithm to enhance the social invisibility of autonomous vehicles within crowds. Our approach is based on prior psychological research, which reveals that people notice, and importantly react negatively to, groups of social actors when they have high entitativity, moving in a tight group with similar appearances and trajectories. In order to evaluate that behavior, we performed a user study to develop navigational algorithms that minimize entitativity. This study establishes a mapping between emotional reactions and multi-robot trajectories and appearances, and further generalizes the finding across various environmental conditions. We demonstrate the applicability of our entitativity modeling for trajectory computation for active surveillance and dynamic intervention in simulated robot-human interaction scenarios. Our approach empirically shows that robots with various levels of entitativity can be used to both avoid and influence pedestrians without eliciting strong emotional reactions, giving multi-robot systems social invisibility.

  • Title:  An Egocubemap Based Algorithm for Quadrotors Obstacle Avoidance Using a Single Depth Camera    paper  presentation
    Authors: T. Tezenas Du Montcel, A. Negre , M. Muschinowski , E. Gomez-Balderas and N. Marchand

    Abstract: A fast obstacle avoidance algorithm is a necessary condition for enabling safe flight of Unmanned Aerial Vehicles (UAVs), possibly at high speed. Large UAVs usually have many sensors and ample computational resources, which allow complex algorithms to run fast enough to navigate safely. By contrast, small UAVs face many constraints, such as computation and sensor limitations, forcing algorithms to retain only a few key points of their environment. This paper proposes an obstacle avoidance algorithm for quadrotors using a single depth camera. Taking advantage of the possibilities offered by embedded GPUs, a cubic world representation centered on the robot, called Egocubemap, is used, while the whole obstacle detection and avoidance algorithm is light enough to run at 10 Hz on board. Numerical and experimental validations are provided.

  • Title:  DisNet: A novel method for distance estimation from monocular camera    paper  presentation
    Authors: M. Abdul Haseeb, J. Guan, D.  Ristić-Durrant, A. Gräse

    Abstract: In this paper, a machine learning setup is presented that provides an obstacle detection system with a method to estimate the distance from a monocular camera to the objects it views. In particular, we give the preliminary results of ongoing research aimed at allowing the on-board multi-sensor system under development within the H2020 Shift2Rail project SMART to learn distances to objects, and possible obstacles, on the rail tracks ahead of the locomotive. The presented distance estimation system is based on a multi-hidden-layer neural network, named DisNet, which is used to learn and predict the distance between an object and the camera sensor. DisNet was trained using supervised learning, where the input features were manually calculated parameters of the object bounding boxes produced by the YOLO object classifier, and the outputs were accurate 3D laser-scanner measurements of the distances to the objects in the recorded scene. The DisNet-based distance estimation system was evaluated on images of railway scenes as well as on images of a road scene. The results demonstrate the general nature of the proposed DisNet system, which enables its use for estimating distances to objects imaged with different types of monocular cameras.
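
    A small, illustrative sketch of the idea in the DisNet abstract above: a multi-hidden-layer network regressing object distance from hand-crafted bounding-box features. The features, layer sizes and synthetic training data below are assumptions made for illustration; the paper trains on YOLO boxes supervised by 3D laser-scanner distances.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def box_features(w, h, img_w=1280, img_h=720):
            """Simple relative-size features of a detection box (a stand-in for the paper's features)."""
            return [w / img_w, h / img_h, (w * h) / (img_w * img_h)]

        # Synthetic training set: apparent box size roughly inversely proportional to distance
        rng = np.random.default_rng(0)
        dists = rng.uniform(5.0, 80.0, size=500)
        boxes = np.array([box_features(1200.0 / d, 3000.0 / d) for d in dists])
        targets = dists + rng.normal(0.0, 1.0, size=dists.shape)

        disnet = MLPRegressor(hidden_layer_sizes=(100, 100, 100), max_iter=2000, random_state=0)
        disnet.fit(boxes, targets)
        print(disnet.predict([box_features(24.0, 60.0)]))   # predicted distance (m) for one detection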

15:00-16:30 Session 5: Control, planning

Chairman: Miguel Angel Sotelo (University of Alcala, Madrid, Spain)

  • Title: Integration of Cooperative Services with Autonomous Driving  presentation  15:00-15:40
    Keynote speaker: Jose Eugenio Naranjo Hernandez (Universidad Politecnica de Madrid, Spain)

    Abstract: Cooperative Systems (C-ITS) are based on the generation of safety and efficiency information in road transport and its dissemination through V2X vehicular communication networks. Several C-ITS pilots have been developed, mainly focused on sending traffic information to vehicles via Vehicle-to-Infrastructure (V2I) communications, so that the same information shown on variable-message panels is presented to the driver inside the vehicle through a Human Machine Interface (HMI). There are also pilots in which vehicles exchange information with one another using vehicle-to-vehicle (V2V) communication services. In this way, the visual-horizon limit of vehicles and drivers is exceeded, and information on the entire driving environment that may affect safety becomes available.
    The European ITS Platform has published the first two sets of cooperative services that are currently available for deployment, named Day 1 and Day 1.5 in reference to their implementation term. Among the C-ITS Day 1 services we find Emergency vehicle approaching (V2V), Hazardous location notification (V2I), Road works warning (V2I) and Weather conditions (V2I); among the C-ITS Day 1.5 services we find Cooperative collision risk warning (V2V) and Zone access control for urban areas (V2I). This set of services is currently being deployed in projects such as C-ROADS (https://www.c-roads.eu), where C-ITS corridors are being enabled across Europe.
    On the other hand, it is clear that autonomous driving in the strict sense, at SAE level 3 or higher, does not make sense unless cooperative and connectivity components are incorporated into the ego-vehicle, since the limited horizon of the on-board visual sensors subjects the vehicle to the same limitations as human drivers.
    In this keynote, we present the results of the European project AUTOCITS – Regulation Study for Interoperability in the Adoption of Autonomous Driving in European Urban Nodes (https://www.autocits.eu/), which seeks synergies between C-ITS and connected, autonomous vehicles in order to enable cooperative autonomous driving. AUTOCITS is developing the architecture that allows Traffic Management Centres (TMCs) to generate C-ITS Day 1 messages and transmit them to the road via V2X communications; this architecture has been deployed at three pilot sites located in the urban accesses of three European cities belonging to the Atlantic Corridor of the TEN-T road network: Paris, Madrid and Lisbon.

  • Title:  Multi-Sensor-Based Predictive Control for Autonomous Backward Perpendicular and Diagonal Parking    paper   presentation  15:40-16:00
    Authors: D. Perez-Morales, O. Kermorgant, S. Dominguez-Quijada and P. Martinet

    Abstract: This paper explores the feasibility of a Multi-Sensor-Based Predictive Control (MSBPC) approach for addressing backward nonparallel (perpendicular and diagonal) parking problems of car-like vehicles, as an alternative to more classical (e.g. path-planning-based) approaches. The results of a few individual cases are presented to illustrate the behavior and performance of the proposed approach, together with results from exhaustive simulations assessing its convergence and stability. The preliminary results are encouraging, showing that the vehicle is able to park successfully from virtually any sensible initial position.

  • Title:  Towards Uncertainty-Aware Path Planning for Navigation on Road Networks Using Augmented MDPs    paper    presentation 16:00-16:20
    Authors: L. Nardi, C. Stachniss

    Abstract: Although most robots use probabilistic algorithms to solve state estimation problems such as localization, path planning is often performed without considering the uncertainty about the robot's position. Uncertainty, however, matters in planning. In this paper, we investigate the problem of path planning considering the uncertainty in the robot's belief about the world, in its perceptions and in its action execution. We propose the use of an uncertainty-augmented Markov Decision Process to approximate the underlying Partially Observable Markov Decision Process, and we employ a localization prior to estimate how the uncertainty about the robot's belief propagates through the environment. This yields a planning approach that generates navigation policies able to make decisions according to different degrees of uncertainty while remaining computationally tractable. We implemented our approach and thoroughly evaluated it on different navigation problems. Our experiments suggest that we are able to compute policies that are more effective than approaches ignoring the uncertainty, and that also outperform policies that always take the safest actions.
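
    A toy sketch of the uncertainty-augmented planning idea in the abstract above: the planning state is (node, uncertainty level) on a small road graph, traversing a poorly localisable edge raises the uncertainty level, and higher uncertainty raises the expected traversal cost. The graph, the costs and the uncertainty dynamics are invented here for illustration.

        import itertools

        NODES = ["A", "B", "C", "D"]
        EDGES = {                      # edge -> (length in m, localisation quality along the edge)
            ("A", "B"): (4.0, "good"), ("B", "D"): (4.0, "good"),
            ("A", "C"): (3.0, "poor"), ("C", "D"): (3.0, "poor"),
        }
        UNC_LEVELS = [0, 1, 2]         # discretised belief uncertainty (0 = well localised)
        GOAL = "D"

        def step(node, unc, edge):
            """Deterministic transition: good edges reduce uncertainty, poor edges increase it."""
            length, quality = EDGES[edge]
            new_unc = max(unc - 1, 0) if quality == "good" else min(unc + 1, UNC_LEVELS[-1])
            cost = length * (1.0 + 0.5 * new_unc)      # uncertain traversal is penalised
            return edge[1], new_unc, cost

        # Value iteration over the augmented state space (node, uncertainty level)
        V = {(n, u): 0.0 if n == GOAL else 100.0 for n, u in itertools.product(NODES, UNC_LEVELS)}
        for _ in range(50):
            for n, u in itertools.product(NODES, UNC_LEVELS):
                if n == GOAL:
                    continue
                options = [step(n, u, e) for e in EDGES if e[0] == n]
                if options:
                    V[(n, u)] = min(c + V[(n2, u2)] for n2, u2, c in options)

        # The expected cost to the goal, and hence the best route, depends on the initial uncertainty
        print(V[("A", 0)], V[("A", 2)])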

16:30-17:00 Coffee break

17:00-18:00 Round table: Robot taxis & Autonomous Shuttles

Chairman: Urbano Nunes (ISR/University of Coimbra, Portugal)

Participants:

  • Robert Krutsch (Intel Deutschland GmbH, Germany)
  • Moritz Werling (BMW, Germany)
  • Nicole Camous (NavyaTech, France)
  • Dominik Maucher (Bosch, Germany)

18:00 Closing
