

{"id":87,"date":"2020-02-21T09:11:03","date_gmt":"2020-02-21T08:11:03","guid":{"rendered":"https:\/\/project.inria.fr\/ppniv20\/?page_id=87"},"modified":"2020-10-25T06:01:51","modified_gmt":"2020-10-25T05:01:51","slug":"program","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/ppniv20\/program\/","title":{"rendered":"PROGRAM"},"content":{"rendered":"<h4 style=\"font-family: 'Source Sans Pro';\"><strong>Invited Keynotes<\/strong><\/h4>\n<ul>\n<li><strong>Title:\u00a0 Self-Supervised Learning for Perception Tasks in Automated Driving (<a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/10\/PPNIV20-slides-Self-Supervised-Learning-for-Perception-Tasks-in-Automated-Drivingr.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Self-Supervised Learning for Perception Tasks in Automated Driving.mp4\">video<\/a>)\u00a0<em>8:00 AM (Las Vegas time)<\/em><\/strong><br \/>\n<strong>Keynote speaker:\u00a0Wolfram Burgard \u00a0(University of Frieburg, Germany)\u00a0<\/strong><\/p>\n<p style=\"text-align: justify;\"><strong><i>Abstract<\/i><\/strong>:\u00a0At the Toyota Research Institute we are following the one-system-two-modes approach to building truly automated cars. More precisely, we simultaneously aim for the L4\/L5 chauffeur application and the the guardian system, which can be considered as a highly advanced driver assistance system of the future that prevents the driver from making any mistakes. TRI aims to equip more and more consumer vehicles with guardian technology and in this way to turn the entire Toyota fleet into a giant data collection system. To leverage the resulting data advantage, TRI performs substantial research in machine learning and, in addition to supervised methods, particularly focuses on unsupervised and self-supervised approaches. 
In this presentation, I will present three recent results on self-supervised methods for perception problems in the context of automated driving, including novel approaches to inferring depth from monocular images and a new approach to panoptic segmentation.<\/p>\n<p style=\"text-align: justify;\"><strong><i>Biography<\/i><\/strong>: Wolfram Burgard is VP for Automated Driving Technology at the Toyota Research Institute. He is on leave from his professorship at the University of Freiburg, where he heads the research group for Autonomous Intelligent Systems. Wolfram Burgard is known for his contributions to mobile robot navigation, localization and SLAM (simultaneous localization and mapping). He has published more than 350 papers in the overlapping area of robotics and artificial intelligence.<\/p>\n<\/li>\n<li><strong><strong>Title: Understanding Risk and Social Behavior Improves Decision Making for Autonomous Vehicles (<a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/10\/PPNIV20-slides-Risk-and-social-behavior-for-decision-making-for-autonomous-vehicles.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Risk and social behavior for decision making for autonomous vehicles.mp4\">video<\/a>)\u00a0<em>8:45 AM (Las Vegas time)<\/em><\/strong><\/strong><br \/>\n<strong>Keynote speaker:\u00a0 \u00a0Daniela Rus (MIT, USA)<\/strong><\/p>\n<p style=\"text-align: justify;\"><strong><i>Abstract<\/i><\/strong>:\u00a0Deployment of autonomous vehicles on public roads promises increases in efficiency and safety, and requires evaluating risk, understanding the intent of human drivers, and adapting to different driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. This talk describes how to integrate risk and behavior analysis in the control loop of an autonomous car. 
I will describe how Social Value Orientation (SVO), which captures how an agent\u2019s social preferences and cooperation affect their interactions with others by quantifying the degree of selfishness or altruism, can be integrated into decision making, and provide recent examples of developing and deploying self-driving vehicles with adaptation capabilities.<\/p>\n<p style=\"text-align: justify;\"><strong><i>Biography<\/i><\/strong>: Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. She is also a visiting fellow at Mitre Corporation. \u00a0Rus&#8217;s research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, and a member of the National Academy of Engineering and of the American Academy of Arts and Sciences. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.<\/p>\n<\/li>\n<li><strong>Title:\u00a0\u00a0Safe Autonomous Driving and Humans:\u00a0Perception and Transitions (<a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/10\/PPNIV20-slides-Safe-Autonomous-Driving-abd-Humansr.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Safe-Autonomous-Driving-abd-Humans.mp4\">video<\/a>) <em>9:30 AM (Las Vegas time)<\/em><\/strong><br \/>\n<strong>Keynote speaker:\u00a0Mohan M Trivedi (University of California, USA)<\/strong><\/p>\n<p style=\"text-align: justify;\"><strong><i>Abstract<\/i><\/strong>:\u00a0These are truly exciting times, especially for researchers and scholars active in robotics and\u00a0intelligent systems fields. 
The fruits of their labor are enabling transformative changes in the daily\u00a0lives of the general public. In this presentation, we will focus on changes affecting our mobility on\u00a0roads with highly automated intelligent vehicles. We specifically discuss issues related to the\u00a0understanding of human agents interacting with the automated vehicle, either as occupants of\u00a0such vehicles or as those in the near vicinity of the vehicles: pedestrians, cyclists, or occupants of\u00a0surrounding vehicles. These issues require deeper examination and careful resolution to assure\u00a0safety, reliability and robustness of these highly complex systems for operation on public\u00a0roads. The presentation will highlight recent research dealing with the understanding of activities,\u00a0behavior, and intentions of humans, specifically in the context of autonomous driving and\u00a0transition controls.<\/p>\n<p style=\"text-align: justify;\"><strong><i>Biography<\/i><\/strong>:\u00a0 Mohan Trivedi is a Distinguished Professor of Engineering and founding director of the\u00a0Computer Vision and Robotics Research Laboratory, as well as the Laboratory for Intelligent\u00a0and Safe Automobiles (LISA) at the University of California San Diego. These labs have\u00a0played significant roles in the development of human-centered safe autonomous driving,\u00a0advanced driver assistance systems, vision systems for intelligent transportation, homeland\u00a0security, assistive technologies, and human-robot interaction. Trivedi has received the\u00a0IEEE Intelligent Transportation Systems (ITS) Society\u2019s Outstanding Researcher Award and\u00a0LEAD Institution Award, as well as the Meritorious Service Award of the IEEE Computer\u00a0Society. He is a Fellow of IEEE, SPIE, and IAPR. He regularly serves as a consultant to\u00a0industry and government agencies in the USA and abroad. 
Trivedi frequently participates in panels dealing with\u00a0technological, strategic, privacy, and ethical issues surrounding the research areas he is involved in.<\/p>\n<p><em><strong>Links to Related Papers<\/strong><\/em>: http:\/\/cvrr.ucsd.edu\/publications\/index.html<\/p>\n<\/li>\n<li><strong>Title:\u00a0Decision Making Architectures for Safe Planning and Control of Agile Autonomous Vehicles\u00a0 (<a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/10\/PPNIV20-slides-Decision-Making-Architectures-for-Safe-Planning-and-Control-of-Agile-Autonomous-Vehicles-2spp.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Decision-Making-Architectures-for-Safe-Planning-and-Control-of-Agile-Autonomous-Vehicle.mp4\">video<\/a>) <em>10:15 AM (Las Vegas time)<\/em><\/strong><br \/>\n<strong>Keynote speaker:\u00a0 Evangelos Theodorou\u00a0(Georgia Institute of Technology, USA)<\/strong><\/p>\n<p style=\"text-align: justify;\"><strong><i>Abstract<\/i><\/strong>:\u00a0In this talk, I will present novel algorithms and decision-making architectures for safe planning and control of terrestrial and aerial vehicles operating in dynamic environments. These algorithms incorporate different representations of robustness for high-speed navigation and bring together concepts from stochastic contraction theory, robust adaptive control, and dynamic stochastic optimization using augmented importance sampling techniques.\u00a0 I will present demonstrations on simulated and real robotic systems and discuss future research directions.<\/p>\n<p style=\"text-align: justify;\"><strong><i>Biography<\/i><\/strong>:\u00a0Evangelos Theodorou is an Associate Professor with the School of Aerospace Engineering, Georgia Institute of Technology, and is also the director of the Autonomous Control and Decision Systems (ACDS) laboratory. 
He is also affiliated with the Institute of Robotics and Intelligent Machines and the Center for Machine Learning Research at Georgia Tech.\u00a0 His interests are at the intersection of stochastic control and optimization, machine learning, statistical physics, and dynamical systems theory. Applications of his research include robotic and aerospace systems, applied physics, networked systems, and bio-engineering.<\/p>\n<\/li>\n<\/ul>\n<div>\n<h4 style=\"font-family: 'Source Sans Pro';\"><strong>Accepted Papers<\/strong><\/h4>\n<\/div>\n<ul>\n<li><b>Title:<em><strong>\u00a0Marker-Based Mapping and Localization for Autonomous Valet Parking\u00a0\u00a0<\/strong><\/em><\/b><em><a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Marker-Based-Mapping-and-Localization-for-Autonomous-Valet-Parking_final.pdf\">paper<\/a><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Marker-Based-Mapping-and-Localization-for-Autonomous-Valet-Parking_final.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Marker-Based Mapping and Localization for Autonomous Valet Parking_final.mp4\">video<\/a>\u00a0<\/em><\/em><em>\u00a0\u00a0<\/em><br \/>\n<strong>Authors:\u00a0Zheng Fang, Yongnan Chen, <u>Ming Zhou<\/u>, Chao Lu<\/strong><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Autonomous valet parking (AVP) is one of the\u00a0most important research topics of autonomous driving in low-speed\u00a0scenes, with accurate mapping and localization being its\u00a0key technologies. Traditional vision-based methods easily suffer\u00a0localization failure in long-term applications due to changes in illumination and in the appearance of the scene. 
In order\u00a0to solve this problem, we introduce visual fiducial markers\u00a0as artificial landmarks for robust mapping and localization\u00a0in parking lots. Firstly, the absolute scale information is\u00a0acquired from fiducial markers, and a robust and accurate\u00a0monocular mapping method is proposed by fusing wheel\u00a0odometry. Secondly, on the basis of the map of fiducial markers\u00a0that are sparsely placed in the parking lot, we propose a\u00a0robust and efficient filtering-based localization method, which\u00a0realizes accurate real-time localization of vehicles in the parking\u00a0lot. Compared with traditional visual localization methods,\u00a0we adopt artificial landmarks, which have strong stability and\u00a0robustness to illumination and viewpoint changes. Meanwhile,\u00a0because the fiducial markers can be selectively placed on the\u00a0columns and walls of the parking lot, they are not easily\u00a0occluded, unlike ground-level information, which ensures the\u00a0reliability of the system. We have verified the effectiveness\u00a0of our methods in real scenes. 
The experimental results show\u00a0that the average localization error is about 0.3 m in a typical\u00a0autonomous parking operation at a speed of 10 km\/h.<\/p>\n<\/li>\n<li><b>Title:<em><strong>\u00a0\u00a0Parameter Optimization for Loop Closure Detection in Closed\u00a0Environments \u00a0<\/strong><\/em> <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Parameter-Optimization-for-Loop-Closure-Detection-in-Closed-Envornments.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Parameter-Optimization-for-Loop-Closure-Detection-in-Closed-Envornments.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Parameter Optimization for Loop Closure Detection in Closed Envornments.mp4\">video<\/a>\u00a0\u00a0 \u00a0<\/em><br \/>\n<b>Authors:\u00a0<u>Nils Rottmann<\/u>, Ralf Bruder, Honghu Xue, Achim Schweikard, Elmar Rueckert<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: Tuning parameters is crucial for the performance\u00a0of localization and mapping algorithms. In general, the tuning\u00a0of the parameters requires expert knowledge and is sensitive to\u00a0information about the structure of the environment. In order\u00a0to design truly autonomous systems, the robot has to learn the\u00a0parameters automatically. Therefore, we propose a parameter\u00a0optimization approach for loop closure detection in closed environments\u00a0that requires neither prior information, e.g., robot\u00a0model parameters, nor expert knowledge. It relies on several path\u00a0traversals along the boundary line of the closed environment. We\u00a0demonstrate the performance of our method in challenging real-world scenarios with limited sensing capabilities. 
These scenarios\u00a0are representative of a wide range of practical applications, including\u00a0lawn mowers and household robots.<\/p>\n<\/li>\n<li><b>Title:\u00a0<em><strong>\u00a0 Radar-Camera Sensor Fusion for Joint Object Detection\u00a0and Distance Estimation in Autonomous Vehicles\u00a0<\/strong><\/em> <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Radar-Camera-Sensor-Fusion-For-Joint-Object-Detection-And-Distance-Estimation-In-Autonomous-Vehicles.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-RadarCameraSensorFusionForJointObjectDetectionAndDistanceEstimationInAutonomousVehicles.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-RadarCameraSensorFusionForJointObjectDetectionAndDistanceEstimationInAutonomousVehicles.mp4\">video<\/a>\u00a0<\/em><br \/>\n<b>Authors:\u00a0<u>Ramin Nabati<\/u>, Hairong Qi<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0In this paper, we present a novel radar-camera\u00a0sensor fusion framework for accurate object detection and\u00a0distance estimation in autonomous driving scenarios. The proposed\u00a0architecture uses a middle-fusion approach to fuse the\u00a0radar point clouds and RGB images. Our radar object proposal\u00a0network uses radar point clouds to generate 3D proposals\u00a0from a set of 3D prior boxes. These proposals are mapped\u00a0to the image and fed into a Radar Proposal Refinement (RPR)\u00a0network for objectness score prediction and box refinement.\u00a0The RPR network utilizes both radar information and image\u00a0feature maps to generate accurate object proposals and distance\u00a0estimations.<br \/>\nThe radar-based proposals are combined with image-based\u00a0proposals generated by a modified Region Proposal Network\u00a0(RPN). The RPN has a distance regression layer for estimating the\u00a0distance of every generated proposal. 
The radar-based and\u00a0image-based proposals are merged and used in the next stage for\u00a0object classification. Experiments on the challenging nuScenes\u00a0dataset show that our method outperforms other existing radar-camera\u00a0fusion methods in the 2D object detection task while\u00a0accurately estimating objects\u2019 distances.<\/p>\n<\/li>\n<li><b>Title:<em><strong>\u00a0\u00a0 SalsaNext: Fast, Uncertainty-aware Semantic Segmentation\u00a0of LiDAR Point Clouds for Autonomous Driving\u00a0<\/strong><\/em> <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-SalsaNext-Fast-Uncertainty-aware-Semantic-Segmentation-of-LiDAR-Point-Clouds-for-Autonomous-Driving.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-SalsaNext-Fast-Uncertainty-aware-Semantic-Segmentation-of-LiDAR-Point-Clouds-for-Autonomous-Driving.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-SalsaNext Fast Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving.mp4\">video<\/a>\u00a0\u00a0<\/em><br \/>\n<b>Authors:\u00a0<u>Tiago Cortinhal<\/u>, George Tzelepis, Eren Erdal Aksoy<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0In this paper, we introduce SalsaNext for the\u00a0uncertainty-aware semantic segmentation of a full 3D LiDAR\u00a0point cloud in real-time. SalsaNext is the next version of SalsaNet\u00a0[1], which has an encoder-decoder architecture consisting\u00a0of a set of ResNet blocks. In contrast to SalsaNet, we introduce\u00a0a new context module, replace the ResNet encoder blocks with a\u00a0new residual dilated convolution stack with gradually increasing\u00a0receptive fields, and add a pixel-shuffle layer in the decoder.\u00a0Additionally, we switch from strided convolution to average\u00a0pooling and also apply a central dropout treatment. 
To directly\u00a0optimize the Jaccard index, we further combine the weighted\u00a0cross-entropy loss with the Lov\u00e1sz-Softmax loss [2]. We finally inject\u00a0a Bayesian treatment to compute the epistemic and aleatoric\u00a0uncertainties for each LiDAR point. We provide a thorough\u00a0quantitative evaluation on the Semantic-KITTI dataset [3],\u00a0which demonstrates that SalsaNext outperforms the previous\u00a0networks and ranks first on the Semantic-KITTI leaderboard.<\/p>\n<\/li>\n<li><b>Title:<em><strong>\u00a0\u00a0SDVTracker: Real-Time Multi-Sensor Association and Tracking for\u00a0Self-Driving Vehicles \u00a0<\/strong><\/em> <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-SDVTracker-Real-Time-Multi-Sensor-Association-and-Tracking-for-Self-Driving-Vehicles.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-SDVTracker-Real-Time-Multi-Sensor-Association-and-Tracking-for-Self-Driving-Vehicles.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-SDVTracker - Real-Time Multi-Sensor Association and Tracking for Self-Driving Vehicles.mp4\">video<\/a>\u00a0\u00a0 \u00a0<\/em><br \/>\n<b>Authors:\u00a0<u>Shivam Gautam<\/u>, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Brian C. Becker<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Accurate motion state estimation of Vulnerable\u00a0Road Users (VRUs) is a critical requirement for autonomous\u00a0vehicles that navigate in urban environments. Due to their\u00a0computational efficiency, many traditional autonomy systems\u00a0perform multi-object tracking using Kalman Filters, which\u00a0frequently rely on hand-engineered association. However, such\u00a0methods fail to generalize to crowded scenes and multi-sensor\u00a0modalities, often resulting in poor state estimates which cascade\u00a0to inaccurate predictions. 
We present a practical and\u00a0lightweight tracking system, SDVTracker, that uses a deep-learned\u00a0model for association and state estimation in conjunction\u00a0with an Interacting Multiple Model (IMM) filter. The\u00a0proposed tracking method is fast, robust, and generalizes across\u00a0multiple sensor modalities and different VRU classes. In this\u00a0paper, we detail a model that jointly optimizes both association\u00a0and state estimation with a novel loss, an algorithm for determining\u00a0ground-truth supervision, and a training procedure.\u00a0We show this system significantly outperforms hand-engineered\u00a0methods on a real-world urban driving dataset while running\u00a0in less than 2.5 ms on CPU for a scene with 100 actors, making\u00a0it suitable for self-driving applications where low latency and\u00a0high accuracy are critical.<\/p>\n<\/li>\n<li><b>Title:\u00a0 <em><strong>\u00a0Situation Awareness at Autonomous Vehicle Handover: Preliminary\u00a0Results of a Quantitative Analysis\u00a0 <\/strong><\/em><a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Situation-Awareness-at-Autonomous-Vehicle-Handover-Preliminary-Results-of-a-Quantitative-Analysis.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Situation-Awareness-at-Autonomous-Vehicle-Handover-Preliminary-Results-of-a-Quantitative-Analysis.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Situation Awareness at Autonomous Vehicle Handover - Preliminary Results of a Quantitative Analysis.mp4\">video<\/a>\u00a0\u00a0\u00a0<\/em><br \/>\n<b>Authors:\u00a0Tamas D. Nagy, Daniel A. Drexler, Nikita Ukhrenkov, <u>Arpad Takacs<\/u>, Tamas Haidegger<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Enforcing system-level safety is a key research\u00a0domain within self-driving technology. 
Current general development\u00a0efforts aim for Level 3+ autonomy, where the vehicle controls\u00a0both lateral and longitudinal motion of the dynamic driving\u00a0task, while the driver is permitted to divert their attention,\u00a0as long as she\/he is able to react properly to a handover request\u00a0initiated by the vehicle. Consequently, situation awareness of\u00a0the human driver has become one of the most important metrics\u00a0of handover safety. In this paper, the preliminary results of a\u00a0user study are presented to quantitatively evaluate emergency\u00a0handover performance, using a custom-designed experimental\u00a0setup built upon the Master Console of the da Vinci Surgical\u00a0System and the CARLA driving simulator. The measured\u00a0control signals and the questionnaire filled out by participants\u00a0were analyzed to gain further knowledge on the situation\u00a0awareness of drivers during handover at Level 3 autonomy. The\u00a0supporting custom open-source platform is available<br \/>\nat https:\/\/github.com\/ABC-iRobotics\/dvrk_carla.<\/p>\n<\/li>\n<li><b>Title:<em><strong>\u00a0\u00a0Towards Context-Aware Navigation for\u00a0Long-Term Autonomy in Agricultural Environments \u00a0<\/strong><\/em> <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Towards-Context-Aware-Navigation-for-Long-Term-Autonomy-in-Agricultural-Environments.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Towards-Context-Aware-Navigation-for-Long-Term-Autonomy-in-Agricultural-Environments.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Towards Context-Aware Navigation for Long-Term Autonomy in Agricultural Environments.mp4\">video<\/a>\u00a0\u00a0 \u00a0<\/em><br \/>\n<b>Authors:\u00a0Mark Hollmann, <u>Benjamin Kisliuk<\/u>, Jan Christoph Krause, Christoph Tieben, Alexander Mock, Sebastian Putz,\u00a0Felix Igelbrink, 
Thomas Wiemann, Santiago Focke Martinez, Stefan Stiene, Joachim Hertzberg<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Autonomous surveying systems for agricultural applications\u00a0are becoming increasingly important. Currently, most\u00a0systems are remote-controlled or rely on a single global\u00a0map representation. Over the last years, several use-case-specific\u00a0representations for path and action planning in different contexts\u00a0have been proposed. However, solely relying on fixed representations\u00a0and action schemes limits the flexibility of autonomous\u00a0systems. Especially in agriculture, the surroundings in which\u00a0autonomous systems are deployed may change rapidly during vegetation periods, and the complexity of the environment may\u00a0vary depending on farm size and season. In this paper, we\u00a0propose a context-aware system implemented in ROS that allows\u00a0the representation, planning strategy, and execution\u00a0logic to be changed based on a spatially grounded semantic context. Our\u00a0vision is to build up an autonomous system called the Autonomous\u00a0Robotic Experimental Platform (AROX) that is able to generate\u00a0crop maps over a whole vegetation period without any user\u00a0intervention. 
To this end, we built the hardware infrastructure\u00a0for storing and charging the robot, as well as the software needed\u00a0to realize context-awareness using available ROS packages.<\/p>\n<\/li>\n<li><b>Title:<em><strong>\u00a0\u00a0Exploiting Continuity of Rewards\u00a0 &#8211; Efficient Sampling in POMDPs with Lipschitz Bandits<\/strong><\/em>\u00a0 <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Exploiting-Continuity-of-Rewards-Efficient-Sampling-in-POMDPs-with-Lipschitz-Bandits.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Exploiting-Continuity-of-Rewards-Efficient-Sampling-in-POMDPs-with-Lipschitz-Bandits.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/\/PPNIV20-video-Exploiting Continuity of Rewards - Efficient Sampling in POMDPs with Lipschitz Bandits.mp4\">video\u00a0<\/a><\/em><br \/>\n<b>Authors:\u00a0<u>\u00d6mer Sahin Tas<\/u>, Felix Hauser, Martin Lauer<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Decision making under uncertainty can be framed\u00a0as a partially observable Markov decision process (POMDP).\u00a0Finding exact solutions of POMDPs is generally computationally\u00a0intractable, but the solution can be approximated by\u00a0sampling-based approaches. These approaches rely on multi-armed\u00a0bandit (MAB) heuristics, which assume the outcomes\u00a0of different actions to be uncorrelated. In some applications,\u00a0like motion planning in continuous spaces, similar actions yield\u00a0similar outcomes. In this paper, we use variants of MAB\u00a0heuristics that make Lipschitz continuity assumptions on the\u00a0outcomes of actions to improve the efficiency of sampling-based\u00a0planning approaches. 
We demonstrate the effectiveness of this\u00a0approach in the context of motion planning for automated\u00a0driving.<\/p>\n<\/li>\n<li>\n<p style=\"text-align: justify;\"><b>Title:\u00a0<em><strong>\u00a0 Impact of Traffic Lights on Trajectory Forecasting of Human-driven\u00a0Vehicles Near Signalized Intersections\u00a0<\/strong><\/em> <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Impact-of-Traffic-Lights-on-Trajectory-Forecasting-of-Human-driven-Vehicles-Near-Signalized-Intersections.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Impact-of-Traffic-Lights-on-Trajectory-Forecasting-of-Human-driven-Vehicles-Near-Signalized-Intersections.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Impact of Traffic Lights on Trajectory Forecasting of Human-driven Vehicles Near Signalized Intersections.mp4\">video<\/a>\u00a0\u00a0\u00a0<\/em><br \/>\n<b>Authors:\u00a0<u>Geunseob Oh<\/u>, Huei Peng<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Forecasting trajectories of human-driven vehicles\u00a0is a crucial problem in autonomous driving. Trajectory forecasting\u00a0in urban areas is particularly hard due to complex\u00a0interactions with cars and pedestrians, and with traffic lights (TLs).\u00a0Unlike the former, which have been widely studied, the impact\u00a0of TLs on trajectory prediction has rarely been discussed.\u00a0Our contribution is twofold. First, we identify the potential\u00a0impact qualitatively and quantitatively. Second, we present a\u00a0novel resolution that is mindful of the impact, inspired by\u00a0the fact that humans drive differently depending on signal\u00a0phase and timing. Central to the proposed approach are Human\u00a0Policy Models, which model how drivers react to various states\u00a0of TLs by mapping a sequence of states of vehicles and\u00a0TLs to a subsequent action of the vehicle. 
We then combine\u00a0the Human Policy Models with a known transition function\u00a0(system dynamics) to conduct a sequential prediction; thus our\u00a0approach can be viewed as Behavior Cloning. One novelty of our\u00a0approach is the use of vehicle-to-infrastructure communications\u00a0to obtain the future states of TLs. We demonstrate the impact\u00a0of TLs and the proposed approach using an ablation study for\u00a0longitudinal trajectory forecasting tasks on real-world driving\u00a0data recorded near a signalized intersection. Finally, we propose\u00a0probabilistic (generative) Human Policy Models which provide\u00a0probabilistic contexts and capture competing policies, e.g., pass\u00a0or stop in the yellow-light dilemma zone.<\/p>\n<\/li>\n<li><b>Title:\u00a0<em><strong>\u00a0 Semantic Grid Map based LiDAR Localization in Highly Dynamic\u00a0Urban Scenarios\u00a0 <\/strong><\/em><a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-paper-Semantic-Grid-Map-based-LiDAR-Localization-in-Highly-Dynamic-Urban-Scenarios.pdf\">paper<\/a><\/b><em>, <a href=\"https:\/\/project.inria.fr\/ppniv20\/files\/2020\/09\/PPNIV20-slides-Semantic_Grid_Map_based_LiDAR_Localization_in_Highly_Dynamic_Scenarios.pdf\">slides<\/a>, <a href=\"http:\/\/www-sop.inria.fr\/members\/Philippe.Martinet\/ppniv20\/PPNIV20-video-Semantic_Grid_Map_based_LiDAR_Localization_in_Highly_Dynamic_Scenarios.mp4\">video<\/a>\u00a0\u00a0<\/em><br \/>\n<b>Authors:\u00a0<u>Chenxi Yang<\/u>, Lei He, Hanyang Zhuang, Chunxiang Wang, Ming Yang<\/b><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Objects that change over time, such as pedestrians and\u00a0vehicles, remain challenging for scan-to-map pose estimation\u00a0using 3D LiDAR in the field of autonomous driving because\u00a0they lead to incorrect data association and structural occlusion.\u00a0This paper proposes a novel semantic grid map (SGM) and\u00a0corresponding algorithms to estimate the pose of observed scans\u00a0in 
such scenarios to improve robustness and accuracy. The algorithms\u00a0consist of a Gaussian mixture model (GMM) to initialize\u00a0the pose, and a grid probability model to keep estimating the\u00a0pose in real time. We evaluate our algorithm thoroughly in\u00a0two scenarios. The first scenario is an express road with heavy\u00a0traffic, to demonstrate performance under dynamic interference.\u00a0The second scenario is a factory, to confirm compatibility.\u00a0Experimental results show that the proposed method achieves\u00a0higher accuracy and smoothness than mainstream methods, and\u00a0is compatible with static environments.<\/p>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Invited Keynotes Title:\u00a0 Self-Supervised Learning for Perception Tasks in Automated Driving (slides, video)\u00a08:00 AM (Las Vegas time) Keynote speaker:\u00a0Wolfram Burgard \u00a0(University of Freiburg, Germany)\u00a0 Abstract:\u00a0At the Toyota Research Institute we are following the one-system-two-modes approach to building truly automated cars. 
More precisely, we simultaneously aim for the L4\/L5 chauffeur application\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/ppniv20\/program\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1372,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-87","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/pages\/87","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/users\/1372"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/comments?post=87"}],"version-history":[{"count":49,"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/pages\/87\/revisions"}],"predecessor-version":[{"id":220,"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/pages\/87\/revisions\/220"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/ppniv20\/wp-json\/wp\/v2\/media?parent=87"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}