

{"id":109,"date":"2018-04-24T17:41:40","date_gmt":"2018-04-24T15:41:40","guid":{"rendered":"https:\/\/project.inria.fr\/ppniv18\/?page_id=109"},"modified":"2019-07-30T09:22:21","modified_gmt":"2019-07-30T07:22:21","slug":"program","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/ppniv18\/program\/","title":{"rendered":"Program"},"content":{"rendered":"<h4><strong><span style=\"font-family: 'Source Sans Pro';\">8:50 Opening\u00a0<\/span><\/strong><\/h4>\n<p><span style=\"font-family: 'Source Sans Pro';\">Christian Laugier (CHROMA, Inria),\u00a0<\/span>Philippe Martinet (CHORALE, Inria), Urbano Nunes (ISR\/University of Coimbra, Portugal), Miguel Angel Sotelo (University of Alcala, Madrid, Spain)<\/p>\n<h4><strong>9:00-9:40 Session 1: Deep Learning<\/strong><\/h4>\n<p>Chairman:\u00a0Philippe Martinet (CHORALE, Inria)<\/p>\n<ul>\n<li><b>Title: <\/b>\u00a0ISA2: Intelligent Speed Adaptation from Appearance\u00a0 <em><a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper12.pdf\">paper<\/a>\u00a0\u00a0<a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentation_sastre.pdf\">presentation<\/a>\u00a0<\/em><em>9:00-9:20<\/em><i><\/i><br \/>\n<b>Authors:\u00a0<\/b>C. Herranz-Perdiguero and <strong>R. J. Lopez-Sastre<\/strong><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: In this work we introduce a new problem named\u00a0Intelligent Speed Adaptation from Appearance (ISA2). Technically,\u00a0the goal of an ISA2 model is to predict for a given\u00a0image of a driving scenario the proper speed of the vehicle.\u00a0Note that this problem is different from predicting the actual speed\u00a0of the vehicle. It defines a novel regression problem where the\u00a0appearance information has to be directly mapped to a\u00a0prediction of the speed at which the vehicle should go, taking\u00a0into account the traffic situation. 
First, we release a novel\u00a0dataset for the new problem, where multiple driving video\u00a0sequences, with the adequate speed annotated per frame, are\u00a0provided. We then introduce two deep learning based ISA2\u00a0models, which are trained to perform the final regression of\u00a0the proper speed given a test image. We end with a thorough\u00a0experimental validation, where the results show the level of\u00a0difficulty of the proposed task. The dataset and the proposed\u00a0models will all be made publicly available to encourage much\u00a0needed further research on this problem.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Classification of Point Cloud for Road Scene Understanding with Multiscale Voxel Deep Network\u00a0 <em><a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper13.pdf\">paper<\/a>\u00a0\u00a0<\/em><em>\u00a0<\/em><em><a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentation.pdf\">presentation<\/a>\u00a0<\/em><em>9:20-9:40\u00a0<\/em><i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>X. Roynard<\/strong>, J.E. Deschaud and F. Goulette<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: In this article we describe a new convolutional\u00a0neural network (CNN) to classify 3D point clouds of urban\u00a0scenes. Solutions are given to the problems encountered working\u00a0on scene point clouds, and a network is described that allows for\u00a0point classification using only the position of points in a multiscale\u00a0neighborhood. This network enables the classification of\u00a03D point clouds of road scenes necessary for the creation of\u00a0maps for autonomous vehicles such as HD-Maps.\u00a0On the reduced-8 Semantic3D benchmark [1], this network,\u00a0ranked second, beats the state of the art of point classification\u00a0methods (those not using an additional regularization step such as\u00a0CRF). 
Our network has also been tested on a new dataset of\u00a0labeled urban 3D point clouds for semantic segmentation.<\/p>\n<\/li>\n<\/ul>\n<h4><strong><span style=\"font-family: 'Source Sans Pro';\">09:40-11:00 Session 2: Navigation, Decision, Safety<\/span><\/strong><\/h4>\n<p>Chairman: <span style=\"font-family: 'Source Sans Pro';\">Christian Laugier (CHROMA, Inria)\u00a0<\/span><\/p>\n<ul>\n<li><b>Title: Towards Fully Automated Driving: Results and Open Challenges in Intelligent Decision Making, Planning, and Maps\u00a0<\/b> <em>9:40-10:20<\/em><i><\/i><br \/>\n<b>Keynote speaker:\u00a0Maucher Dominik (Bosch, Germany)<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: The current Space Race of autonomous driving is in full swing. We will attempt to paint a picture of how far we have gotten and what some of the most pressing current challenges are. We will touch upon some of the unanswered questions in the areas of intelligent decision making, planning, and maps. We will also give you a glimpse into the joint Bosch-Mercedes automated driving project, which aims to bring autonomous cars to the roads at the beginning of the next decade.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Statistical Model Checking Applied on Perception and Decision-making Systems for Autonomous Driving\u00a0<em>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper16.pdf\">paper<\/a> \u00a0<a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationBarbierr.pdf\">presentation\u00a0<\/a><\/em>10<em>:20-10:40<\/em><i><\/i><br \/>\n<b>Authors:\u00a0<\/b>J. Quilbeuf, <strong>M. Barbier<\/strong>, L.\u00a0 Rummelhard, C. Laugier, A. Legay, B. Baudouin, T. Genevois, J. Ibanez-Guzman, and O. Simonin<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Automotive systems must undergo a strict process\u00a0of validation before their release on commercial vehicles. 
The\u00a0currently-used methods are not adapted to the latest autonomous\u00a0systems, which increasingly use probabilistic approaches. Furthermore,\u00a0real-life validation, when even possible, often implies\u00a0costs which can be prohibitive. New methods for validation and\u00a0testing are necessary.<br \/>\nIn this paper, we propose a generic method to evaluate\u00a0complex automotive-oriented systems for automation (perception,\u00a0decision-making, etc.). The method is based on Statistical Model\u00a0Checking (SMC), using specifically defined Key Performance\u00a0Indicators (KPIs), as temporal properties depending on a set of\u00a0identified metrics. By feeding the values of these metrics during\u00a0a large number of simulations, and the properties representing\u00a0the KPIs to our statistical model checker, we evaluate the\u00a0probability of meeting the KPIs. We applied this method to two\u00a0different subsystems of an autonomous vehicle: a perception\u00a0system (CMCDOT framework) and a decision-making system.\u00a0An overview of the two systems is given to understand the related\u00a0validation challenges. We show that the methodology is suited to\u00a0efficiently evaluating some critical properties of automotive systems,\u00a0but also to identifying their limitations.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Automatically Learning Driver Behaviors for Safe Autonomous Vehicle Navigation\u00a0<em> <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper19.pdf\">paper\u00a0<\/a> \u00a0<a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationCheungr.pdf\">presentation\u00a0<\/a><\/em>10<em>:40-11:00<\/em><br \/>\n<b>Authors:\u00a0<\/b><strong>E. Cheung<\/strong>, A. Bera, E. Kubin, K. Gray, and D. Manocha<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: We present an autonomous driving planning algorithm\u00a0that takes into account neighboring drivers\u2019 behaviors\u00a0and achieves safer and more efficient navigation. 
Our approach\u00a0leverages the advantages of a data-driven mapping that is used\u00a0to characterize the behavior of other drivers on the road. Our\u00a0formulation also takes into account pedestrians and cyclists and\u00a0uses psychology-based models to perform safe navigation. We\u00a0demonstrate the benefits of our approach over previous methods: safer behavior\u00a0in avoiding dangerous neighboring drivers, pedestrians and\u00a0cyclists, and efficient navigation around careful drivers.<\/p>\n<\/li>\n<\/ul>\n<h4><strong>11:00-11:30 Coffee break<\/strong><\/h4>\n<h4><strong>11:30-12:50 Session 3: Perception<\/strong><\/h4>\n<p>Chairman: Alberto Broggi (Ambarella, USA)<\/p>\n<ul>\n<li><b>Title: SoC for ultra HD mono and stereo-vision processing<\/b>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/keynoteBroggi.pdf\">presentation<\/a>\u00a0\u00a011<em>:30-12:10<\/em><i><\/i><br \/>\n<b>Keynote speaker: Alberto Broggi (Ambarella\/USA, <b>VisLab<\/b>\/Italy)<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Ambarella has designed a sensing suite for autonomous vehicles that relies on input from stereovision and monocular cameras. The cameras, each based on the new Ambarella CV1 SoC (System on Chip), provide a perception range of over 150 meters for stereo obstacle detection and over 180 meters for monocular classification to support autonomous driving and deliver a viable, high-performance alternative to lidar technology.<\/p>\n<p>Ambarella\u2019s solution not only recognizes visual landmarks, but also detects obstacles without training and runs commonly used CNNs for classification. 
Additional features include automatic stereo calibration, terrain modeling, and vision-based localization.<\/p>\n<p>The presentation will emphasize the underlying architecture of Ambarella\u2019s CV1-based solution for vision-based autonomous vehicle driving systems and provide insight into the main technological breakthrough that made it possible.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Intelligent feature selection method for accurate laser-based mapping and localisation in self-driving cars\u00a0<em> <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper10.pdf\">paper<\/a> \u00a0 \u00a0<a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationHernandez.pdf\">presentation<\/a> \u00a0<\/em>\u00a012<em>:10-12:30<\/em><i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>N. Hernandez<\/strong>, I. G. Daza, C. Salinas, I. Parra, J. Alonso, D. Fernandez-Llorca, M.A. Sotelo<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Robust 3D mapping has become an attractive field\u00a0of research with direct application in the booming domain\u00a0of self-driving cars. In this paper, we propose a new method\u00a0for feature selection in laser-based point clouds with a view\u00a0to achieving robust and accurate 3D mapping. The proposed\u00a0method follows a double stage approach to map building. In a\u00a0first stage, the method compensates for the point cloud distortion\u00a0using a rough estimation of the 3-DOF vehicle motion, given\u00a0that range measurements are received at different times during\u00a0continuous LIDAR motion. In a second stage, the 6-DOF motion\u00a0is accurately estimated and the point cloud is registered using a\u00a0combination of distinctive point cloud features. The appropriate\u00a0combination of such features, namely vertical poles, road curbs,\u00a0plane surfaces, etc. proves to be a powerful tool for achieving\u00a0accurate mapping and robustness to aggressive motion and\u00a0temporary low density of features. 
We show and analyse the\u00a0results obtained after testing the proposed method with a\u00a0dataset collected in our own experiments on the Campus of the\u00a0University of Alcala (Spain) using the DRIVERTIVE vehicle\u00a0equipped with a Velodyne-32 sensor. In addition, we evaluate\u00a0the robustness and accuracy of the method for laser-based\u00a0localisation in a self-driving application.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0LiDAR based relative pose and covariance estimation for communicating vehicles exchanging a polygonal model of their shape\u00a0\u00a0<em>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper21.pdf\">paper<\/a> \u00a0\u00a0<a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationHery.pdf\">presentation<\/a> \u00a0<\/em>12<em>:30-12:50<\/em><i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>E. H\u00e9ry<\/strong>, P. Xu and P. Bonnifait<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Relative localization between autonomous vehicles\u00a0is an important issue for accurate cooperative localization. It\u00a0is also essential for obstacle avoidance or platooning. Thanks\u00a0to communication between vehicles, additional information,\u00a0such as vehicle model and dimension, can be transmitted to\u00a0facilitate this relative localization process. In this paper, we\u00a0present and compare different algorithms to solve this problem\u00a0based on LiDAR points and the pose and model communicated\u00a0by another vehicle. The core part of the algorithm relies on\u00a0iterative minimization tested with two methods and different\u00a0model associations using point-to-point and point-to-line distances.\u00a0This work compares the accuracy, the consistency and\u00a0the number of iterations needed to converge for the different\u00a0algorithms in different scenarios, e.g. 
straight lane, two lanes\u00a0and curved lane driving.<\/p>\n<\/li>\n<\/ul>\n<h4><strong>13:00-14:00 Lunch Break<\/strong><\/h4>\n<h4><strong>14:00-15:00 Session 4: Interactive session<\/strong><\/h4>\n<p>Chairmen: Urbano Nunes (ISR\/University of Coimbra, Portugal) and Philippe Martinet (CHORALE team, Inria)<\/p>\n<ul>\n<li><b>Title: <\/b>\u00a0Vehicle Detection in UAV Aerial Video\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper1.pdf\">paper<\/a> \u00a0 presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>H. Zhang<\/strong>, <strong>C. Meng<\/strong>, P.\u00a0 Guo, X. Ding and Z. Li<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0In recent years, unmanned aerial vehicles (UAVs) have gradually been applied to lane detection, vehicle detection, vehicle classification, etc. There are many challenges in UAV aerial video detection, such as camera shake, interfering targets and a wide range of scene changes. To solve these problems, a new vehicle detection system which is suitable for detection in UAV aerial video is proposed. We utilize the bit plane to extract the lane surface and we use the extracted lane information to limit the detection area. We improve the ViBe algorithm so that it can be used in dramatically changing scenarios. In addition, a moving target screening strategy is proposed to screen moving vehicles. This paper is the first to introduce bit planes into a detection method. Experiments show that our system is superior to other existing detection algorithms in terms of accuracy and computation time.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0MOMDP solving algorithms comparison for safe path planning problems in urban environments\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper2.pdf\">paper<\/a> \u00a0 presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>J.A. Delamer<\/strong> and Y. Watanabe and C. P. 
Carvalho Chanel<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0This paper tackles the problem of safe UAV path\u00a0planning in an urban environment where the onboard sensors\u00a0can become unavailable, for instance due to GPS occlusion. The key idea\u00a0is to perform UAV path planning along with its navigation\u00a0and guidance mode planning, where each of these modes uses\u00a0a different set of sensors whose availability and performance\u00a0are environment-dependent. A-priori\u00a0knowledge is assumed in the form of Gaussian mixture maps of obstacles and\u00a0sensor availabilities. These maps allow the use of an Extended\u00a0Kalman Filter (EKF) to obtain an accurate state estimate. This\u00a0paper proposes a planner model based on a Mixed Observability\u00a0Markov Decision Process (MOMDP) and an EKF. It allows the\u00a0planner to propagate such probability map information to the\u00a0future path for choosing the best action minimizing the expected\u00a0cost.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0On finding low complexity heuristics for path planning in safety relevant applications\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper3.pdf\">paper<\/a>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationKrutsch.pdf\">presentation<\/a><br \/>\n<b>Authors: <\/b><strong>R. Krutsch<\/strong><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0In many robotics applications path planning has\u00a0safety implications that need to be addressed and understood.\u00a0Approaches based purely on learning algorithms have today no\u00a0strong guarantees that the path found, even given a perfect\u00a0environment model, is safe. In contrast, search based methods\u00a0have strong theoretical guarantees but are significantly slower\u00a0and harder to parallelize. 
In this paper we present a method of<br \/>\nobtaining heuristics for search based algorithms aiming to\u00a0reduce the search complexity by combining the strengths of the\u00a0two paradigms. We show that a complexity reduction of more\u00a0than 30% is achievable with less than a 1% drop in path\u00a0optimality. As a consequence of the complexity reduction we\u00a0also measure a performance boost of more than 30%.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Enhancing the educational process related to autonomous driving\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper6.pdf\">paper<\/a> \u00a0 presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>N. Sarantinoudis<\/strong>, P. Spanoudakis, L. Doitsidis and <strong>N. Tsourveloudis<\/strong><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Autonomous driving is one of the major areas of\u00a0interest for the automotive industry. This constantly evolving\u00a0field requires the involvement of a wide range of engineers with\u00a0complementary skills. The education of these engineers is a key\u00a0issue for the further development of the field. Currently in\u00a0engineering curricula, there is a lack of related platforms\u00a0that can assist engineers to train in and further develop the\u00a0required skills. The current practice is to use either small\u00a0robotic devices or full scale prototypes in order to understand\u00a0and experiment with autonomous driving principles. Each\u00a0approach has disadvantages ranging from the lack of realistic\u00a0conditions to the cost of the devices that are used. In this paper<br \/>\nwe present a low cost modular platform which can be used\u00a0for experimentation and research in the area of autonomous\u00a0cars and driving. 
The functionality of the suggested system is\u00a0verified by extensive experimentation in &#8211; very close to &#8211; real\u00a0traffic conditions.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0CoMapping: Efficient 3D-Map Sharing: Methodology for Decentralized cases\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper7.pdf\">paper<\/a> \u00a0 presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>L. F. Contreras-Samame<\/strong>, S. Dominguez-Quijada, O. Kermorgant and P. Martinet<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0CoMapping is a framework to efficiently manage,\u00a0share, and merge 3D map data between mobile robots. The\u00a0main objective of this framework is to implement Collaborative\u00a0Mapping for outdoor environments where GPS data cannot be used all the\u00a0time. The framework structure is based on 2 stages.\u00a0In the first one, the Pre-Local Mapping Stage, each robot constructs\u00a0in real-time a pre-local map of its environment using Laser\u00a0Rangefinder data and low cost GPS information only in certain\u00a0situations. Afterwards, in the Local Mapping Stage, the robots\u00a0share their pre-local maps and merge them in a decentralized way\u00a0in order to improve their new maps, now called local maps.\u00a0An experimental study for the case of decentralized cooperative\u00a03D mapping is presented, where tests were conducted using 3\u00a0intelligent cars equipped with lidars and GPS receiver devices in\u00a0urban outdoor scenarios. We also discuss the performance of the\u00a0whole cooperative system in terms of map alignment.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Single-View Place Recognition under Seasonal Changes\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper8.pdf\">paper<\/a>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationOlid.pdf\">presentation<\/a><br \/>\n<b>Authors:\u00a0<\/b><strong>D. Olid<\/strong>, J. M. 
Facil and J. Civera<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Single-view place recognition, which we can define\u00a0as finding an image that corresponds to the same place as\u00a0a given query image, is a key capability for autonomous\u00a0navigation and mapping. Although there has been a considerable\u00a0amount of research on the topic, the high degree of\u00a0image variability (with viewpoint, illumination or occlusions\u00a0for example) makes it a research challenge.\u00a0One of the particular challenges, which we address in this\u00a0work, is weather variation. Seasonal changes can produce\u00a0drastic appearance changes that classic low-level features do\u00a0not model properly. Our contributions in this paper are twofold.\u00a0First we pre-process and propose a partition for the Nordland\u00a0dataset, frequently used for place recognition research without<br \/>\nconsensus on the partitions. And second, we evaluate several\u00a0neural network architectures such as pre-trained, siamese and\u00a0triplet for this problem. Our best results outperform the state\u00a0of the art of the field. A video showing our results can be found\u00a0at https:\/\/youtu.be\/VrlxsYZoHDM.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Future Depth as Value Signal for Learning Collision Avoidance\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper14-1.pdf\">paper<\/a>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationKelchtermans.pdf\">presentation<\/a><br \/>\n<b>Authors:\u00a0<\/b><strong>K. Kelchtermans<\/strong> and T. Tuytelaars<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0The constant miniaturization of robots reduces\u00a0the array of available sensors. This raises the need for robust\u00a0algorithms that allow robots to navigate without collision based\u00a0solely on monocular camera input. In this work we propose\u00a0a new learning-based method for this task. 
We propose a\u00a0neural network policy for monocular collision avoidance with\u00a0self-supervision that does not require actual collisions during\u00a0training. To this end, we demonstrate that a neural network is\u00a0capable of evaluating an input image and action by predicting<br \/>\nthe expected depth in the future. In this sense, the future depth\u00a0can be seen as an action-value signal. In comparison with our\u00a0baseline model that is based on predicting collision probabilities,\u00a0we show that using depth requires less data and leads to more\u00a0stable training without the need for actual collisions. The latter can\u00a0be especially useful if the autonomous robot is more fragile\u00a0and not capable of dealing with collisions (e.g. aerial robots). The\u00a0proposed method is evaluated thoroughly in simulation in a\u00a0ROS-Gazebo-TensorFlow framework and will be made available\u00a0on publication.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Automatic generation of ground truth for the evaluation of obstacle detection and tracking techniques\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper15.pdf\">paper<\/a> \u00a0 presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>H. Hajri<\/strong>, E. Doucet, M.\u00a0 Revilloud, L. Halit, B.\u00a0 Lusetti, M.C. Rahal<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0As automated vehicles are getting closer to becoming\u00a0a reality, it will become mandatory to be able to characterise the\u00a0performance of their obstacle detection systems. This validation\u00a0process requires large amounts of ground-truth data, which\u00a0is currently generated by manual annotation. In this paper,\u00a0we propose a novel methodology to generate ground-truth\u00a0kinematics datasets for specific objects in real-world scenes.\u00a0Our procedure requires no annotation whatsoever, human\u00a0intervention being limited to sensor calibration. 
We present\u00a0the recording platform which was used to acquire the\u00a0reference data and a detailed and thorough analytical study\u00a0of the propagation of errors in our procedure. This allows us\u00a0to provide detailed precision metrics for each and every data\u00a0item in our datasets. Finally, some visualisations of the acquired\u00a0data are given.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Autonomous navigation using visual sparse map\u00a0 \u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper17.pdf\">paper<\/a> \u00a0presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>S. Hong<\/strong> and H. Cheng<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0This paper presents an autonomous navigation\u00a0system using only a visual sparse map. Although a dense map\u00a0provides detailed information about the environment, most of the\u00a0information in the dense map is redundant for autonomous\u00a0navigation. In addition, a dense map demands high costs for\u00a0storage and management. To tackle these challenges, we propose\u00a0autonomous navigation using a visual sparse map. We leverage\u00a0visual Simultaneous Localization and Mapping (SLAM) to\u00a0generate the visual sparse map and localize a robot in the map.\u00a0Using the robot position in the map, the robot navigates by\u00a0following a reference line generated from the visual sparse map.\u00a0We evaluated the proposed method using two robot platforms in\u00a0indoor and outdoor environments. Experimental\u00a0results show successful autonomous navigation in both\u00a0environments. 
The experimental video is available at\u00a0https:\/\/drive.google.com\/file\/d\/1DlDa6lkrQA6Zi2XAbKn0hZsR8cXLyXKO\/view?usp=sharing.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Socially Invisible Navigation for Intelligent Vehicles\u00a0 \u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper18.pdf\">paper<\/a> \u00a0presentation<i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>A. Bera<\/strong>, T. Randhavane, E. Kubin, A. Wang, K.\u00a0 Gray, and D.\u00a0 Manocha<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0We present a real-time, data-driven algorithm to\u00a0enhance the social invisibility of autonomous vehicles within\u00a0crowds. Our approach is based on prior psychological research,\u00a0which reveals that people notice and\u2013importantly\u2013react negatively\u00a0to groups of social actors when they have high entitativity,\u00a0moving in a tight group with similar appearances and\u00a0trajectories. In order to evaluate that behavior, we performed\u00a0a user study to develop navigational algorithms that minimize\u00a0entitativity. This study establishes a mapping between emotional\u00a0reactions and multi-robot trajectories and appearances, and further generalizes the finding across various environmental\u00a0conditions. We demonstrate the applicability of our entitativity\u00a0modeling for trajectory computation for active surveillance\u00a0and dynamic intervention in simulated robot-human interaction\u00a0scenarios. 
Our approach empirically shows that various levels\u00a0of entitative robots can be used to both avoid and influence\u00a0pedestrians while not eliciting strong emotional reactions, giving\u00a0multi-robot systems social invisibility.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0An Egocubemap Based Algorithm for Quadrotors Obstacle Avoidance Using a Single Depth Camera\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper20.pdf\">paper<\/a>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationMontcel.pdf\">presentation<\/a><br \/>\n<b>Authors:\u00a0<\/b><strong>T. Tezenas Du Montcel<\/strong>, A. Negre, M. Muschinowski, E. Gomez-Balderas and N. Marchand<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0A fast obstacle avoidance algorithm is a necessary\u00a0condition to enable safe flights of Unmanned Aerial Vehicles\u00a0(UAVs), eventually at high speed. Large UAVs usually have many\u00a0sensors and available computational resources which allow\u00a0complex algorithms to run fast enough to navigate safely. On the\u00a0contrary, small UAVs face many difficulties, like computation\u00a0and sensor limitations, forcing algorithms to retain only a\u00a0few key points of their environment. This paper proposes\u00a0an obstacle avoidance algorithm for quadrotors using a single\u00a0depth camera. Taking advantage of the possibilities offered by\u00a0embedded GPUs, a cubic world representation centered on the\u00a0robot &#8211; called Egocubemap &#8211; is used while the whole obstacle\u00a0detection and avoidance algorithm is light enough to run at\u00a010 Hz on-board. 
Numerical and experimental validations are\u00a0provided.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0DisNet: A novel method for distance estimation from monocular camera\u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper22.pdf\">paper<\/a>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationHaseeb.pdf\">presentation<\/a><br \/>\n<b>Authors:\u00a0<\/b><strong>M. Abdul Haseeb<\/strong>, J. Guan, D.\u00a0 Risti\u0107-Durrant, A. Gr\u00e4se<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0In this paper, a machine learning setup that provides the obstacle detection system with a method to estimate the distance from the monocular camera to the object viewed with the camera is presented. In particular, the preliminary results of on-going research to allow the on-board multisensory system, which is under development within the H2020 Shift2Rail project SMART, to learn distances to objects, possible obstacles, on the rail tracks ahead of the locomotive are given. The presented distance estimation system is based on a multi hidden-layer neural network, named DisNet, which is used to learn and predict the distance between the object and the camera sensor. DisNet was trained using a supervised learning technique where the input features were manually calculated parameters of the object bounding boxes resulting from the YOLO object classifier and the outputs were the accurate 3D laser scanner measurements of the distances to objects in the recorded scene. The presented DisNet-based distance estimation system was evaluated on images of railway scenes as well as on images of a road scene. 
The shown results demonstrate the general nature of the proposed DisNet system, which enables its use for the estimation of distances to objects imaged with different types of monocular cameras.<\/p>\n<\/li>\n<\/ul>\n<h4><strong>15:00-16:30 Session 5: Control, planning<\/strong><\/h4>\n<p>Chairman: Miguel Angel Sotelo (University of Alcala, Madrid, Spain)<\/p>\n<ul>\n<li><b>Title: Integration of Cooperative Services with Autonomous Driving<\/b>\u00a0 <em><a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/PresentationNaranjo.pdf\">presentation<\/a><\/em>\u00a0 15<em>:00-15:40<\/em><i><\/i><br \/>\n<b>Keynote speaker: Jose Eugenio Naranjo Hernandez (Universidad Politecnica de Madrid, Spain)<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Cooperative Systems (C-ITS) are based on the generation of safety and efficiency information in road transport and its diffusion through the use of V2X vehicular communications networks. Several C-ITS experiences have been developed, which were mainly focused on sending traffic information via Vehicle-to-Infrastructure (V2I) communications to vehicles, in such a way that the same information shown in the variable information panels is presented to the driver inside the vehicle through a Human Machine Interface (HMI). Also, there are experiences where there is an exchange of information between some vehicles and others using vehicle-to-vehicle (V2V) communications services. In this way, the limit of the visual horizon of the vehicles is exceeded, giving drivers information on the entire driving environment that may affect safety.<br \/>\nThe European ITS Platform has published the first two sets of cooperative services that are currently available for deployment and have been named Day 1 and Day 1.5, in reference to their implementation term. 
Within the C-ITS Day 1 services we find Emergency vehicle approaching (V2V), Hazardous location notification (V2I), Road works warning (V2I) and Weather conditions (V2I); within the C-ITS Day 1.5 services we find Cooperative collision risk warning (V2V) and Zone access control for urban areas (V2I). This set of services is currently being deployed in projects such as C-ROADS (https:\/\/www.c-roads.eu), where C-ITS corridors are being enabled across Europe.<br \/>\nOn the other hand, it is clear that autonomous driving in the strict sense, at SAE level 3 or higher, does not make sense unless cooperative and connectivity components are incorporated in the ego-vehicle, since the limited horizon of the on-board visual sensors subjects the vehicles to the same limitations as human drivers.<br \/>\nIn this keynote, we present the results of the European project AUTOCITS &#8211; Regulation Study for Interoperability in the Adoption of Autonomous Driving in European Urban Nodes (https:\/\/www.autocits.eu\/), which seeks synergies between C-ITS and autonomous and connected vehicles, enabling Cooperative Autonomous Driving.
To this end, AUTOCITS is developing the full architecture that allows Traffic Management Centres (TMCs) to generate C-ITS Day 1 messages and transmit them to the road via V2X communications, deployed in three pilot sites located on the urban accesses of three European cities belonging to the Atlantic Corridor of the TEN-T road network: Paris, Madrid and Lisbon.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Multi-Sensor-Based Predictive Control for Autonomous Backward Perpendicular and Diagonal Parking\u00a0\u00a0<em>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper9.pdf\">paper<\/a> \u00a0\u00a0<a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationPerez.pdf\">presentation<\/a> \u00a0<\/em>15<em>:40-16:00<\/em><i><\/i><br \/>\n<b>Authors:\u00a0<\/b><strong>D. Perez-Morales,<\/strong>\u00a0O. Kermorgant, S. Dom\u00ednguez-Quijada and P. Martinet<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: This paper explores the feasibility of a Multi-Sensor-Based Predictive Control (MSBPC) approach for addressing backward nonparallel (perpendicular and diagonal) parking problems of car-like vehicles, as an alternative to more classical (e.g. path-planning-based) approaches.
The results of a few individual cases are presented to illustrate the behavior and performance of the proposed approach, together with results from exhaustive simulations that assess its convergence and stability. These preliminary results are encouraging, showing that the vehicle is able to park successfully from virtually any sensible initial position.<\/p>\n<\/li>\n<li><b>Title: <\/b>\u00a0Towards Uncertainty-Aware Path Planning for Navigation on Road Networks Using Augmented MDPs\u00a0\u00a0<em>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/paper11.pdf\">paper<\/a> \u00a0\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv18\/files\/2018\/10\/presentationNardi.pdf\">presentation<\/a>\u00a0<\/em>16<em>:00-16:20<\/em><i><\/i><br \/>\n<b>Authors:\u00a0L. Nardi<\/b>, C. Stachniss<\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Although most robots use probabilistic algorithms to solve state estimation problems such as localization, path planning is often performed without considering the uncertainty about the robot\u2019s position. Uncertainty, however, matters in planning. In this paper, we investigate the problem of path planning considering the uncertainty in the robot\u2019s belief about the world, in its perceptions and in its action execution. We propose the use of an uncertainty-augmented Markov Decision Process to approximate the underlying Partially Observable Markov Decision Process, and we employ a localization prior to estimate how the uncertainty about the robot\u2019s belief propagates through the environment. This yields a planning approach that generates navigation policies able to make decisions according to different degrees of uncertainty while remaining computationally tractable. We implemented our approach and thoroughly evaluated it on different navigation problems.
Our experiments suggest that we are able to compute policies that are more effective than approaches that ignore the uncertainty, and also to outperform policies that always take the safest actions.<\/p>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h4><strong>16:30-17:00 Coffee break<\/strong><\/h4>\n<h4><strong>17:00-18:00\u00a0<\/strong><strong>Round table:\u00a0Robot taxis &amp; Autonomous Shuttles<\/strong><\/h4>\n<p>Chairman: Urbano Nunes (ISR\/University of Coimbra, Portugal)<\/p>\n<p><b>Participants: <\/b><\/p>\n<ul>\n<li><strong>Robert Krutsch (Intel Deutschland GmbH, Germany)<\/strong><\/li>\n<li><strong>Moritz Werling (BMW, Germany)<\/strong><\/li>\n<li><strong>Nicole Camous (NavyaTech, France)<\/strong><\/li>\n<li><strong>Dominik Maucher (Bosch, Germany)<\/strong><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h4><strong><span style=\"font-family: 'Source Sans Pro';\">18:00 Closing<\/span><\/strong><\/h4>\n","protected":false},"excerpt":{"rendered":"<p>8:50 Opening\u00a0 Christian Laugier (CHROMA, Inria),\u00a0Philippe Martinet (CHORALE, Inria), Urbano Nunes (ISR\/University of Coimbra, Portugal), Miguel Angel Sotelo (University of Alcala, Madrid, Spain) 9:00-9:40 Session 1: Deep Learning Chairman:\u00a0Philippe Martinet (CHORALE, Inria) Title: \u00a0ISA2: Intelligent Speed Adaptation from Appearance\u00a0 paper\u00a0\u00a0presentation\u00a09:00-9:20 Authors:\u00a0C. Herranz-Perdiguero and R. J.
Lopez-Sastre Abstract: In this work\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/ppniv18\/program\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1372,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-109","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/pages\/109","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/users\/1372"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/comments?post=109"}],"version-history":[{"count":95,"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/pages\/109\/revisions"}],"predecessor-version":[{"id":308,"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/pages\/109\/revisions\/308"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/ppniv18\/wp-json\/wp\/v2\/media?parent=109"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}