

{"id":73,"date":"2019-01-27T18:02:13","date_gmt":"2019-01-27T17:02:13","guid":{"rendered":"https:\/\/project.inria.fr\/ppniv19\/?page_id=73"},"modified":"2019-11-12T15:15:35","modified_gmt":"2019-11-12T14:15:35","slug":"program","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/ppniv19\/program\/","title":{"rendered":"Program"},"content":{"rendered":"<h4><strong><span style=\"font-family: 'Source Sans Pro';\">Final program<\/span><\/strong><\/h4>\n<h4><strong><span style=\"font-family: 'Source Sans Pro';\">9:00 <a href=\"https:\/\/project.inria.fr\/ppniv19\/files\/2019\/11\/presentation-PPNIV19.pdf\">Opening<\/a><\/span><\/strong><\/h4>\n<p><span style=\"font-family: 'Source Sans Pro';\">Christian Laugier (CHROMA, Inria),\u00a0<\/span>Philippe Martinet (CHORALE, Inria), Marcelo Ang (NUS, Singapore)<\/p>\n<h4><strong>9:10-10:45 Session 1: Machine &amp; Deep Learning<\/strong><\/h4>\n<p>Chairman: <span style=\"font-family: 'Source Sans Pro';\">Danwei Wang\u00a0 (NTU, Singapore)<\/span><\/p>\n<ul>\n<li><strong>Title: The road towards perception for autonomous driving: methods, challenges, and the data required<\/strong>\u00a0 <em><a href=\"https:\/\/project.inria.fr\/ppniv19\/files\/2019\/11\/IROS-presentation-1036-Roland.pdf\">Presentation<\/a>\u00a0\u00a09:10-9:55<\/em><br \/>\n<strong>Keynote speaker: Roland Meertens (AID, Munich, Germany)<\/strong><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>:\u00a0Self-driving cars are expected to make a big impact on our daily lives within a couple of years. However, first we should solve the most interesting Artificial Intelligence (AI) problem of this century: perception. We will look at the problem of perception for autonomous vehicles, the sensors which are used to solve this problem, and the methods which are currently state of the art. We will also take a look at the available data: a crucial thing we need to teach machines about the world.<\/p>\n<\/li>\n<\/ul>\n<ul>\n<li><b>Title:\u00a0<strong>Transformation-adversarial network for road detection in LIDAR rings,\u00a0and model-free evidential road grid mapping<\/strong><em><strong>\u00a0<\/strong> <\/em><\/b> <em><a href=\"https:\/\/project.inria.fr\/ppniv19\/files\/2019\/11\/PPNIV19-paper_Cappelier.pdf\">paper<\/a><em>, <a href=\"https:\/\/project.inria.fr\/ppniv19\/files\/2019\/11\/PPNI19-presentation-Cappellier.pdf\">presentation<\/a>,\u00a0<a href=\"https:\/\/project.inria.fr\/ppniv19\/files\/2019\/11\/demo_next_steps.mp4\">video1<\/a><\/em><\/em><em>, <a href=\"https:\/\/project.inria.fr\/ppniv19\/files\/2019\/11\/demo_tadnet.mp4\">video2<\/a>\u00a0\u00a0<\/em>9:55<em>-10:20<\/em><br \/>\n<strong>Authors: <u>E. Capellier<\/u>, F. Davoine, V. Cherfaoui, Y. Li<\/strong><\/p>\n<p style=\"text-align: justify;\"><i>Abstract<\/i>: We propose a deep learning approach to perform\u00a0road-detection in LIDAR scans, at the point level. Instead of\u00a0processing a full LIDAR point-cloud, LIDAR rings can be\u00a0processed individually. To account for the geometrical diversity\u00a0among LIDAR rings, an homothety rescaling factor can be\u00a0predicted during the classification, to realign all the LIDAR\u00a0rings and facilitate the training. This scale factor is learnt\u00a0in a semi-supervised fashion. A performant classification can\u00a0then be achieved with a relatively simple system. 
<ul>
<li><strong>Title: End-to-End Deep Neural Network Design for Short-term Path Planning</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-paper_Dao.pdf">paper</a>, <a href="https://project.inria.fr/ppniv19/files/2019/11/MQ-Dao-ppniv_E2E_slide.pdf">presentation</a>, <a href="https://project.inria.fr/ppniv19/files/2019/11/end_to_end_deep_neural_network_design_for_short_term_path_planning_udacity_dataset_demo_X2fi2xVr2jE_240p.mp4">video</a> 10:20-10:45</em><br/>
<strong>Authors: <u>M.Q. Dao</u>, D. Lanza, V. Frémont</strong>
<p><i>Abstract</i>: Early attempts at imitating human driving behaviour with deep learning were implemented as a reactive navigation scheme that directly maps sensory measurements to control signals. Although this approach has successfully delivered the first half of the driving task &#8211; predicting steering angles &#8211; learning vehicle speed in an end-to-end setting requires significantly larger and more complex networks, as well as the accompanying dataset. Motivated by the rich literature on trajectory planning, which timestamps a geometrical path under dynamic constraints to provide the corresponding velocity profile, we propose an end-to-end architecture for generating a non-parametric path given an image of the environment in front of a vehicle. The accuracy of the resulting path is 70%. The first and foremost benefit of our approach is the ability to incorporate deep learning into the navigation pipeline. This is desirable because the neural network eases the difficulty of developing the see-think-act scheme, while the trajectory planning at the end adds a level of safety to the final output by ensuring it obeys static and dynamic constraints.</p>
</li>
</ul>
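<p>A non-parametric path here means the network regresses discrete waypoints rather than curve parameters. The PyTorch sketch below shows one possible shape of such a model; the architecture is assumed for illustration and is not the paper's network:</p>
<pre>
# Minimal sketch (assumed architecture, not the paper's network): an
# end-to-end model mapping a front-facing camera image to a non-parametric
# short-term path, i.e. K discrete (x, y) waypoints in the vehicle frame.
# A trajectory-planning stage would then timestamp these waypoints.
import torch
import torch.nn as nn

class ShortTermPathNet(nn.Module):
    def __init__(self, num_waypoints: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_waypoints * 2)  # regress K (x, y) pairs
        self.num_waypoints = num_waypoints

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        features = self.backbone(image)
        return self.head(features).view(-1, self.num_waypoints, 2)

model = ShortTermPathNet()
path = model(torch.randn(1, 3, 120, 160))   # dummy image batch
print(path.shape)                           # torch.Size([1, 10, 2])
</pre>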
<h4>10:45-11:15 Coffee break</h4>

<h4>11:15-12:50 Session 2: Perception &amp; Situation awareness</h4>
<p>Chairman: Marcelo Ang (NUS, Singapore)</p>

<ul>
<li><strong>Title: Sim to Real: Using Simulation for 3D Perception and Navigation</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/Baidu-RAL-2019-tech.pdf">Presentation</a> 11:15-12:00</em><br/>
<strong>Keynote speaker: Ruigang Yang (Baidu, China)</strong>
<p><i>Abstract</i>: The importance of simulation, in both robotics and, more recently, autonomous driving, is increasingly recognized. In this talk, I will present the fairly extensive line of simulation research at Baidu&#8217;s Robotics and Autonomous Driving Lab (RAL), from low-level sensor simulation, such as LIDAR, to high-level behaviour simulation, such as drivers and pedestrians. These simulation tools are designed either to produce an abundance of annotated data to train deep neural networks, or to directly provide an end-to-end environment for testing all aspects of the movement capabilities of robots and autonomous vehicles.</p>
</li>
</ul>
<ul>
<li><strong>Title: Feature Generator Layer for Semantic Segmentation Under Different Weather Conditions for Autonomous Vehicles</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-paper_Erkent.pdf">paper</a>, <a href="https://project.inria.fr/ppniv19/files/2019/11/Erkent-2019-IROSw_pres_lo.pdf">presentation</a> 12:00-12:25</em><br/>
<strong>Authors: <u>O. Erkent</u>, C. Laugier</strong>
<p><i>Abstract</i>: Adaptation to new environments, such as semantic segmentation under different weather conditions, is still a challenging problem. We propose a new approach that adapts the segmentation method to diverse weather conditions without requiring semantic labels for either the known or the new weather conditions. We achieve this by inserting a feature generator layer (FGL) into a deep neural network (DNN) previously trained on the known weather conditions; only the parameters of the FGL are updated. The FGL parameters are optimized with two losses. One loss minimizes the difference between the input and output of the FGL for the known weather domain, to ensure similarity between generated and non-generated known-domain features; the other minimizes the difference between the distribution of known-weather features and new-weather features. We test our method on the SYNTHIA dataset, which includes several weather conditions, using a well-known semantic segmentation network architecture. The results show that adding an FGL improves semantic segmentation accuracy for the new weather condition without reducing accuracy for the known weather condition.</p>
</li>
</ul>
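<p>The two-loss scheme in this abstract can be sketched in a few lines of PyTorch. The exact losses in the paper may differ; the distribution distance below is simple feature moment matching, chosen only for brevity, and the layer shape is assumed:</p>
<pre>
# Illustrative sketch of the two FGL losses (assumptions: a 1x1-conv FGL and
# a moment-matching distribution loss). Only the FGL parameters are trained;
# the surrounding, previously trained segmentation network stays frozen.
import torch
import torch.nn as nn

fgl = nn.Conv2d(64, 64, kernel_size=1)      # hypothetical feature generator layer

def identity_loss(feat_known):
    # Keep the FGL close to identity on known-weather features.
    return ((fgl(feat_known) - feat_known) ** 2).mean()

def alignment_loss(feat_known, feat_new):
    # Pull generated new-weather feature statistics towards the
    # known-weather statistics (crude stand-in for a distribution loss).
    gen = fgl(feat_new)
    mu_gap = (gen.mean(dim=(0, 2, 3)) - feat_known.mean(dim=(0, 2, 3))) ** 2
    var_gap = (gen.var(dim=(0, 2, 3)) - feat_known.var(dim=(0, 2, 3))) ** 2
    return mu_gap.mean() + var_gap.mean()

opt = torch.optim.Adam(fgl.parameters(), lr=1e-4)  # only FGL parameters update
f_known = torch.randn(4, 64, 32, 32)               # dummy encoder features
f_new = torch.randn(4, 64, 32, 32)
loss = identity_loss(f_known) + alignment_loss(f_known, f_new)
loss.backward()
opt.step()
</pre>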
<ul>
<li><strong>Title: An Edge-Cloud Computing Model for Autonomous Vehicles</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-paper_Chishiro.pdf">paper</a>, <a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-YuSasaki-slides.pdf">presentation</a> 12:25-12:50</em><br/>
<strong>Authors: <u>Y. Sasaki</u>, T. Sato, H. Chishiro, T. Ishigooka, S. Otsuka, K. Yoshimura, S. Kato</strong>
<p><i>Abstract</i>: Edge-cloud computing for autonomous driving has been challenging due to the lack of fast, reliable networks able to handle large amounts of data, and due to traffic cost. The recent development of 5th Generation (5G) mobile networks allows us to consider an edge-cloud computing model for autonomous vehicles. However, previous work has not focused strongly on such a model in a 5G mobile network. In this paper, we present an edge-cloud computing model for autonomous vehicles using the software platform Autoware. Using a simulated 1 Gbit/s network as a stand-in for a 5G mobile network, we show that the presented edge-cloud computing model for Autoware-based autonomous vehicles reduces execution time and deadline miss ratio, despite the latencies introduced by communication, compared to an edge computing model.</p>
</li>
</ul>

<h4>12:50-14:00 Lunch Break</h4>

<h4>14:00-15:45 Session 3: Planning &amp; Navigation</h4>
<p>Chairman: Christian Laugier (CHROMA, Inria, France)</p>

<ul>
<li><strong>Title: Intelligent Perception, Navigation and Control for Multi-robot Systems</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/Danwei-WANG-IROS-Talk.pdf">Presentation</a> 14:00-14:45</em><br/>
<strong>Keynote speaker: Danwei Wang (NTU, Singapore)</strong>
<p><i>Abstract</i>: While tremendous progress has been made in the development of localization and navigation algorithms for single robots, the operation of multi-robot systems has recently garnered significant attention. This talk aims to report recent advances in multi-robot systems research developed by Prof Wang Danwei&#8217;s group at Nanyang Technological University, Singapore. Emphasis is placed on intelligent perception, navigation and control technologies that enable autonomous systems to operate in cluttered and GPS-denied environments. The talk will introduce a systematic multi-robot framework containing core functions such as multi-sensor data fusion, complex scene understanding, multi-robot localization and mapping, moving object reasoning, and formation control.</p>
</li>
</ul>
<ul>
<li><strong>Title: miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-paper_Dong.pdf">paper</a>, <a href="https://project.inria.fr/ppniv19/files/2019/11/miniSAM.pdf">presentation</a> 14:45-15:10</em><br/>
<strong>Authors: <u>J. Dong</u>, Z. Lv</strong>
<p><i>Abstract</i>: Many problems in computer vision and robotics can be phrased as non-linear least squares optimization problems represented by factor graphs, for example simultaneous localization and mapping (SLAM), structure from motion (SfM), motion planning, and control. We have developed miniSAM, an open-source C++/Python framework for solving such factor-graph-based least squares problems. Compared to most existing least squares solver frameworks, miniSAM has (1) a full Python/NumPy API, which enables more agile development and easy binding with existing Python projects, and (2) a wide range of sparse linear solvers, including CUDA-enabled ones. Our benchmarking results show that miniSAM offers comparable performance on various types of problems, with a more flexible and smoother development experience.</p>
</li>
</ul>
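<p>The class of problems miniSAM targets can be shown on a toy example. The sketch below solves a three-pose 1-D pose graph (one prior factor, two odometry factors) with plain NumPy Gauss-Newton; it deliberately does not use the miniSAM API:</p>
<pre>
# Toy illustration of a factor-graph least-squares problem (plain NumPy;
# not the miniSAM API). Three 1-D poses, one prior and two odometry factors.
import numpy as np

x = np.zeros(3)                        # initial estimate of poses x0, x1, x2
prior, odom = 0.0, [1.0, 1.0]          # prior on x0; measured displacements

def residuals(x):
    return np.array([
        x[0] - prior,                  # prior factor on x0
        (x[1] - x[0]) - odom[0],       # odometry factor x0 -> x1
        (x[2] - x[1]) - odom[1],       # odometry factor x1 -> x2
    ])

J = np.array([[ 1.0,  0.0, 0.0],       # Jacobian is constant for this
              [-1.0,  1.0, 0.0],       # linear toy problem
              [ 0.0, -1.0, 1.0]])

for _ in range(5):                     # Gauss-Newton iterations
    r = residuals(x)
    dx = np.linalg.solve(J.T @ J, -J.T @ r)
    x += dx
print(x)                               # -> [0. 1. 2.]
</pre>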
<ul>
<li><strong>Title: Linear Camera Velocities and Point Feature Depth Estimation Using Unknown Input Observer</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-paper_Benyoucef.pdf">paper</a>, <a href="https://project.inria.fr/ppniv19/files/2019/11/PPNIV19-presentation-Benyoucef.pdf">presentation</a> 15:10-15:35</em><br/>
<strong>Authors: <u>R. Benyoucef</u>, L. Nehaoua, H. Hadj-Abdelkader, H. Arioui</strong>
<p><i>Abstract</i>: In this paper, we propose a new approach to estimate the missing 3D information of a point feature during camera motion and to reconstruct the linear velocity of the camera. The approach is intended to solve the problem of relative localization and to compute the distance between two Unmanned Aerial Vehicles (UAVs) within a formation. An Unknown Input Observer is designed for the considered system, described by a quasi-linear parameter varying (qLPV) model with unmeasurable variables, to achieve kinematics-from-motion estimation. An observability analysis ensures that the state variables can be reconstructed, and sufficient conditions for designing the observer are derived as Linear Matrix Inequalities (LMIs) based on Lyapunov theory. Simulation results are discussed to validate the proposed approach.</p>
</li>
</ul>
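<p>For readers unfamiliar with unknown input observers, the generic structure behind such designs can be sketched as follows; the notation is assumed for illustration and is not taken from the paper:</p>
<pre>
% Generic unknown-input-observer sketch (assumed notation, not the paper's):
% a qLPV plant with unknown input d(t), and an observer whose estimation
% error is made insensitive to d(t).
\begin{align*}
  \dot{x} &= A(\rho)\,x + B(\rho)\,u + E\,d, \qquad y = C\,x,\\
  \dot{z} &= N(\rho)\,z + G(\rho)\,u + L(\rho)\,y, \qquad \hat{x} = z + H\,y.
\end{align*}
% The decoupling condition P E = 0, with P = I - H C, removes d from the
% error dynamics; convergence is then certified by a Lyapunov LMI such as
\[
  \exists\, X \succ 0:\quad N(\rho)^{\top} X + X\, N(\rho) \prec 0
  \quad \text{for all admissible } \rho .
\]
</pre>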
<h4>15:45-16:15 Coffee break</h4>

<h4>16:15-18:00 Human vehicle interaction</h4>
<p>Chairman: Marcelo Ang (NUS, Singapore)</p>

<ul>
<li><strong>Title: The Effect of Vehicle Automation on Road Safety</strong> <em><a href="https://project.inria.fr/ppniv19/files/2019/11/KeynotePPNIV_-IROS-2019.pdf">Presentation</a> 16:15-17:00</em><br/>
<strong>Keynote speaker: Cristina Olaverri (Johannes Kepler Universitat, Austria)</strong>
<p><i>Abstract</i>: The feasibility of incorporating new technology-driven functionality into vehicles has played a central role in automotive design. The broad diffusion of digital technologies makes it possible to design systems whose functioning is based on intelligent technologies residing simultaneously in multiple, interconnected applications. Consequently, the development of intelligent road-vehicle systems, such as cooperative advanced driver assistance systems (co-ADAS), and with them the degree of vehicle automation, is increasing rapidly. The advent of vehicle automation promises a reduction in driver workload; however, depending on the automation level, consequences for the passengers, such as out-of-the-loop states, can be foreseen. The protection of Vulnerable Road Users (VRUs) has also been an active research topic in recent years: when interacting with driverless vehicles, people exhibit a variety of responses reflecting different levels of trust, uncertainty, and a certain degree of fear. In this context, P2V (Pedestrian-to-Vehicle) and V2P (Vehicle-to-Pedestrian) communication have become crucial technologies for minimizing potential dangers, due to the high detection rates and high user-satisfaction levels they achieve. This presentation gives an overview of the impact of such technologies on traffic awareness, towards improving driving performance and reducing road accidents. The benefits and potential problems of vehicle automation will also be outlined.</p>
</li>
<li>
<p><strong>Round table: <a href="https://project.inria.fr/ppniv19/files/2019/11/2019-11-04-IROS2019-RdTable-HVI-r1.pdf">Human vehicle interaction</a></strong> <em>17:00-18:00</em></p>
<p><b>Participants:</b></p>
<ul>
<li><strong>Henriette Cornet</strong> (TUMCREATE, Singapore) <a href="https://project.inria.fr/ppniv19/files/2019/11/2019-11-04-IROS2019-RdTable-henriette.pdf">slides</a></li>
<li><strong>Li Haizhou</strong> (National University of Singapore) <a href="https://project.inria.fr/ppniv19/files/2019/11/2019-11-04-IROS2019-RdTable-Haizhou.pdf">slides</a></li>
<li><strong>Cristina Olaverri</strong> (Johannes Kepler Universitat, Austria)</li>
<li><strong>Juraj Kabzan</strong> (Nutonomy, Singapore) <a href="https://project.inria.fr/ppniv19/files/2019/11/2019-11-04-IROS2019-RdTable-Juraj.pdf">slides</a></li>
</ul>
</li>
</ul>
<h4>18:00 <a href="https://project.inria.fr/ppniv19/files/2019/11/presentation-PPNIV19.pdf">Closing</a></h4>