

{"id":246,"date":"2019-05-10T13:22:38","date_gmt":"2019-05-10T11:22:38","guid":{"rendered":"https:\/\/project.inria.fr\/semapolis\/?page_id=246"},"modified":"2019-10-11T13:35:06","modified_gmt":"2019-10-11T11:35:06","slug":"results","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/semapolis\/fr\/results\/","title":{"rendered":"R\u00e9sultats"},"content":{"rendered":"<p>The Semapolis project has produced scientific results at three levels: methodologies (new learning methods), computer vision and computer graphics tasks (recognition, detection, segmentation, reconstruction, rendering), and applications (specialization and demonstration on semantic and 3D city modeling). Many of the <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/\">publications<\/a> come with <a href=\"https:\/\/project.inria.fr\/semapolis\/public-code-and-data\/\">code and data<\/a>, with supporting experiments involving urban data.<\/p>\n<h4 style=\"padding-left: 30px;\">Learning, with Weak, Little or No Supervision<\/h4>\n<p>Over the last years, Convolutional Neural Networks (CNNs) have transformed the field of computer vision thanks to their unparalleled capacity to learn high-level semantic image features. However, to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. 
Therefore, semantic feature learning with little or no supervision\u00a0is of crucial importance in order to successfully harvest the vast amount of visual data available today.<\/p>\n<p><strong>Weakly Supervised Learning.<\/strong> Successful methods for visual object recognition are typically trained on datasets containing lots of images with rich annotations, which are both expensive to create and subjective. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Oquab-et-al-CVPR-2015\">[Oquab et al. CVPR 2015]<\/a> propose\u00a0a weakly-supervised CNN for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. It\u00a0performs comparably to its fully-supervised counterpart.<\/p>\n<p><strong>Semi-Supervised Learning.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Kozinski-et-al-NeurIPSw-2017\">[Kozi\u0144ski et al. NeurIPSw 2017]<\/a> propose an approach based on a\u00a0Generative Adversarial Network (GAN) for the semi-supervised training of structured-output neural networks. Initial experiments in image segmentation show performance on par with a fully supervised scenario, while using only half as many annotations.<\/p>\n<p><strong>Low-Shot Learning.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-CVPR-2018\">[Gidaris &amp; Komodakis CVPR 2018]<\/a> define an attention-based\u00a0few-shot visual learning system that, at test time, is able to efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained.<\/p>\n<p><strong>Unsupervised Feature Learning.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-ICLR-2018\">[Gidaris &amp; Komodakis ICLR 2018]<\/a> propose to learn image features by training CNNs to recognize the 2D rotation that is applied to the image 
that it gets as input. They demonstrate that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning, yielding\u00a0state-of-the-art performance\u00a0in various unsupervised feature learning benchmarks (for recognition, detection, segmentation), only a few points below the supervised case.<\/p>\n<p><strong>Unsupervised Learning &amp; Architecture Style Discovery. <\/strong><a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Lee-et-al-ICCP-2015\">[Lee et al. ICCP 2015]<\/a> explore\u00a0whether visual patterns can be discovered automatically in\u00a0the particular domain of architecture, using huge collections of street-level imagery. They find visual patterns that correspond to semantic-level architectural elements distinctive to specific time periods.\u00a0This analysis makes it possible both to date buildings and to discover how functionally-similar architectural elements (e.g. windows, doors, balconies, etc.) have changed over time due to evolving style.<\/p>\n<p><strong>Domain Adaptation for Unsupervised Learning from Synthetic Data.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Massa-et-al-CVPR-2016\">[Massa et al. CVPR 2016]<\/a>\u00a0shows how to map\u00a0features from real images and features from CAD-rendered views into the same feature space, thus allowing training on a large amount of synthetic data, without the need for manual annotation, i.e., without supervision.<\/p>\n<p><strong>CNN Understanding.<\/strong> To help understand the \u00ab\u00a0black box\u00a0\u00bb of a neural network,\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Aubry-Russell-ICCV-2015\">[Aubry &amp; Russell ICCV 2015]<\/a> introduce an approach for analyzing the variation of features generated by CNNs w.r.t. 
scene factors that occur in natural images, including object style, 3D viewpoint, color, and scene lighting configuration.<\/p>\n<h4 style=\"padding-left: 30px;\">Object Detection<\/h4>\n<p>One of the basic tasks in scene understanding is object detection, which requires both recognizing objects and localizing them in an image.<\/p>\n<p><strong>Accurate Localization.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-ICCV-2015\">[Gidaris &amp; Komodakis ICCV 2015]<\/a> propose an accurate object detection system that relies on a multi-region CNN that encodes semantic segmentation-aware features\u00a0capturing a diverse set of discriminative appearance factors to enhance\u00a0localization sensitivity. \u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-CVPR-2016\">[Gidaris &amp; Komodakis CVPR 2016]<\/a> show how to boost\u00a0the localization accuracy of object detectors, with a probabilistic estimation of the boundaries of an\u00a0object of interest inside a region.\u00a0It\u00a0can achieve high detection accuracy even when given as input a set of sliding windows, thus showing that it is independent of box proposal methods.<\/p>\n<p><strong>Objectness.<\/strong> More generally, many\u00a0computer vision tasks rely on\u00a0category-agnostic bounding box proposals. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-BMVC-2016\">[Gidaris &amp; Komodakis BMVC 2016]<\/a>\u00a0propose a new approach to tackle this problem, based on an active strategy for generating box proposals.<\/p>\n<h4 style=\"padding-left: 30px;\">Low-level Semantic Segmentation<\/h4>\n<p>Semantic segmentation is applicable both to 2D (images) and 3D data (depth maps, point clouds, meshes). It can be low-level, at pixel, vertex or face level, as provided for instance by random forests or CNNs, possibly with regularization based on MRFs or deep learning. 
Or it can be high-level, with a structured, possibly hierarchical representation, as provided by parsing with a shape grammar (its counterpart being procedural modeling).<\/p>\n<p><strong>Pixelwise Segmentation with Auto-context.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gadde-et-al-PAMI-2018\">[Gadde et al. PAMI 2018]<\/a> train a sequence of boosted decision trees using auto-context features and stacked generalization, yielding a segmentation accuracy that is better than or comparable to all previously published methods, not only for 2D images but also for 3D point clouds and meshes. (Preliminary results on images only were reported in <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Jampani-et-al-WACV-2015\">[Jampani et al. WACV 2015]<\/a>.)<\/p>\n<p><strong>Penalizing the Total Variation or the Total Boundary Size.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Landrieu-Obozinski-SIIMS-2017\">[Landrieu &amp; Obozinski SIIMS 2017]<\/a> propose working-set\/greedy algorithms to efficiently solve problems penalized respectively by the total variation on a general weighted graph and its <em>l<\/em><sub>0<\/sub> counterpart, the total level-set boundary size, when the piecewise-constant solutions have a small number of distinct level-sets, which is the case for semantic segmentation. They\u00a0obtain significant speed-ups over state-of-the-art algorithms.<\/p>\n<p><strong>Accurate Labeling.<\/strong> To achieve accurate pixelwise image labeling, <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-CVPR-2017\">[Gidaris &amp; Komodakis CVPR 2017]<\/a> train a deep neural network that, given as input an initial estimate of the output labels and the input image, is able to predict a new refined estimate for the labels. The method\u00a0achieves state-of-the-art results on\u00a0dense disparity estimation. 
It can also be applied to unordered semantic labels for semantic segmentation tasks.<\/p>\n<p><strong>Relaxed Calibration for Multiview Segmentation.<\/strong> In a multi-view video setting, object segmentation\u00a0methods for dynamic scenes usually rely on geometric calibration to impose spatial shape constraints between viewpoints.\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Djelouah-et-al-3DV-2016\">[Djelouah et al. 3DV 2016]<\/a>\u00a0show\u00a0that the calibration constraint can be relaxed while still getting competitive segmentation results. The method relies on new multi-view cotemporality constraints through motion correlation cues, in addition to common appearance features used by co-segmentation methods to identify co-instances of objects.<\/p>\n<h4 style=\"padding-left: 30px;\">Structured Semantic Segmentation<\/h4>\n<p><strong>Top-Down Parsing with Graph Grammars and MRFs.<\/strong>\u00a0One of the main challenges of top-down parsing with shape grammars is the exploration of a large search space combining the structure of the object and the positions of its parts,\u00a0requiring randomized or greedy algorithms that do not produce repeatable results. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Kozinski-Marlet-WACV-2014\">[Kozi\u0144ski &amp; Marlet WACV 2014]<\/a>\u00a0propose to\u00a0encode the possible object structures in a graph grammar and,\u00a0for a given structure, to infer\u00a0the position of parts using standard MAP-MRF techniques.\u00a0This restricts the less reliable greedy or randomized optimization algorithms to structure inference alone.<\/p>\n<p><strong>Learning Shape Grammars.<\/strong> Parsing methods based on shape grammars suffer from the limits of handwritten rules, which are prone to errors and not scalable. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gadde-et-al-IJCV-2016\">[Gadde et al. 
IJCV 2016]<\/a> propose a method to automatically learn shape grammars from segmentation samples. The learned grammars offer faster parsing convergence while producing equally or more accurate parsing results compared to handcrafted grammars as well as to grammars learned by other methods.<\/p>\n<p><strong>Relaxing Parsing Constraints and Dealing with Occlusions.<\/strong> Instead of\u00a0exploring the procedural space of shapes derived from a grammar,\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Kozinski-et-al-ACCV-2014\">[Kozi\u0144ski et al. ACCV 2014]<\/a>\u00a0formulate parsing as a linear binary program with\u00a0user-defined shape priors. The algorithm produces plausible approximations of globally optimal segmentations without grammar sampling. Pushing the idea further, <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Kozinski-et-al-CVPR-2015\">[Kozi\u0144ski et al. CVPR 2015]<\/a> propose\u00a0a new shape prior formalism for segmenting images with regularities such as facade images.\u00a0It combines the simplicity of split grammars with unprecedented expressive power: the capability of encoding simultaneous alignment in two dimensions, facade occlusions and irregular boundaries between facade elements. The method is extended to simultaneously segment the visible and occluding objects, and recover a plausible structure of the occluded object.<\/p>\n<p><strong>Urban Procedural Modeling.\u00a0<\/strong>It is often desirable to design 3D models of buildings by hand, which is a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Nishida-et-al-TOG-2016\">[Nishida et al. 
TOG 2016]<\/a> propose an approach that associates sketch-based modeling and procedural modeling, which allows non-expert users to generate complex buildings in just a few minutes.<\/p>\n<h4 style=\"padding-left: 30px;\"><strong>2D\/3D Correspondence and Alignment<\/strong><\/h4>\n<p>Finding correspondences and alignments in images (2D-2D) or between images with three-dimensional information (2D-3D) is a key component in the visual analysis of urban data, with direct applications such as place recognition and object detection. These tasks face two main challenges: (1) huge variations due to age, lighting or change of seasons, if not structure, and (2) the size of the search space due to the extent of cities and the variety of viewpoints. We have developed and improved a number of methods that address these challenges, with applications to place recognition, object detection and pose estimation.<\/p>\n<p><strong>2D-2D Correspondence and Alignment. <\/strong>Repetitive structures are notoriously hard to deal with, in particular for establishing correspondences\u00a0in multi-view geometry or for bag-of-visual-words representations. Yet they constitute\u00a0an important distinguishing feature for many places. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Torii-et-al-PAMI-2015\">[Torii et al. PAMI 2015]<\/a> propose a specific representation of repeated structures suitable for scalable retrieval and geometric verification. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Rocco-et-al-CVPR-2017\">[Rocco et al. CVPR 2017]<\/a> address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Dalens-et-al-submitted\">[Dalens et al. 
submitted]<\/a> identify visual differences between objects over time.<\/p>\n<p><strong>2D-3D Alignment at Scene level.<\/strong>\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Aubry-et-al-TOG-2014\">[Aubry et al. TOG 2014]<\/a> represent a 3D model of a scene by a small set of discriminative visual elements that are automatically learnt from rendered views and that can reliably be matched in 2D depictions, offering robust and scalable 2D-3D alignments, even with non-photorealistic representations. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Aubry-et-al-chapter-LSVG-2015\">[Aubry et al. chapter LSVG 2015]<\/a> provides more details and experiments.\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Torii-et-al-CVPR-2015\">[Torii et al. CVPR 2015]<\/a>, extended in <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Torii-et-al-PAMI-2017\">[Torii et al. PAMI 2017]<\/a>, observe that alignment becomes easier when both a query image and a database image depict the scene from approximately the same viewpoint. They develop a new place recognition approach that combines an efficient synthesis of novel views with a compact indexable image representation. Also\u00a0with applications to place recognition and image retrieval,\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Arandjelovic-et-al-CVPR-2016\">[Arandjelovic et al. CVPR 2016]<\/a>, extended in <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Arandjelovic-et-al-PAMI-2017\">[Arandjelovic et al. PAMI 2017]<\/a>, propose a CNN-based approach that can be\u00a0applied on very large-scale weakly labeled datasets.<\/p>\n<p><strong>2D-3D Alignment at Object Level.<\/strong>\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Aubry-et-al-CVPR-2014\">[Aubry et al. 
CVPR 2014]<\/a> pose object category detection in images as a type of part-based 2D-to-3D alignment problem, and learn\u00a0correspondences between real photographs and synthesized views of 3D CAD models.\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Massa-et-al-CVPR-2016\">[Massa et al. CVPR 2016]<\/a>\u00a0shows how to adapt the features of natural images to better align with those of CAD-rendered views, which is critical to the success of\u00a02D-3D exemplar detection.\u00a0<a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Massa-et-al-BMVC-2016\">[Massa et al. BMVC 2016]<\/a> compare various approaches for\u00a0estimating object\u00a0viewpoint in a unified setting, and consequently propose\u00a0a new training method for joint\u00a0detection and viewpoint estimation.<\/p>\n<h4 style=\"padding-left: 30px;\">3D Reconstruction of <strong>Man-made Environments<\/strong><\/h4>\n<p>Urban structures, in particular buildings, come with strong shape priors and regular patterns, which can be leveraged for analysis <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Torii-et-al-PAMI-2015\">[Torii et al. PAMI 2015]<\/a> and enforced for reconstruction. They also come with specific issues,\u00a0 such as textureless and specular areas, that break traditional 3D reconstruction methods.<\/p>\n<p><strong>Patch-Based Reconstruction.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Bourki-et-al-WACV-2017\">[Bourki et al. WACV 2017]<\/a> propose an efficient patch-based method for the Multi-View Stereo (MVS) reconstruction of highly regular man-made scenes from calibrated, wide-baseline views and a sparse Structure-from-Motion (SfM) point cloud. The method is robust to textureless and specular areas.<\/p>\n<p><strong>Piecewise-Planar Reconstruction.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Boulch-et-al-CGF-2014\">[Boulch et al. 
CGF 2014]<\/a> propose an effective method to impose a meaningful prior on piecewise-planar, watertight surface reconstruction, based on the regularization of the reconstructed surface w.r.t. the length of edges and the number of corners. This method is also particularly good at surface completion for unseen areas.<\/p>\n<p><strong>Shape Merging.<\/strong> Surface reconstruction from point clouds often relies on a primitive extraction step, which may be followed by a merging step because of a possible over-segmentation. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Boulch-Marlet-ICPR-2014\">[Boulch &amp; Marlet ICPR 2014]<\/a>\u00a0propose statistical criteria, based on a single intuitive parameter, to decide whether or not two given surfaces (not just primitives) are to be considered as the same, and thus can be merged.<\/p>\n<p><strong>Reconstruction from Heterogeneous Data.<\/strong> New algorithms were developed to combine the three main sources of urban data (lidar point clouds, aerial images and ground-level images) and manage the differences in resolution between aerial and ground pictures, improving the accuracy and overall quality of the generated 3D models <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Keriven-techreport-2019\">[Keriven techreport 2019]<\/a>.<\/p>\n<p><strong>Shape simplification.<\/strong> A hybrid geometry generation algorithm has also been designed. It preserves the details of the geometry while simplifying flat surfaces. This geometric simplification also improves the ability to correctly semantize the model <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Keriven-techreport-2019\">[Keriven techreport 2019]<\/a>.<\/p>\n<h4 style=\"padding-left: 30px;\">Image-Based Rendering for Virtual Navigation<\/h4>\n<p>A 3D model makes it possible to navigate virtually through a building or city. 
However, high-quality navigation requires a level of 3D accuracy, completeness, compactness and knowledge of materials that is far beyond the reach of current data capture and 3D reconstruction techniques. Yet smooth, real-time virtual navigation in an urban environment is possible using\u00a0Image-based Rendering (IBR) techniques, relying only on partial information regarding 3D geometry and\/or semantics.<\/p>\n<p><strong>Real-Time Quality IBR.<\/strong> The various IBR\u00a0algorithms generate high-quality photo-realistic imagery but\u00a0have different strengths and weaknesses, depending on 3D reconstruction quality and scene content. Using\u00a0a Bayesian approach, <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Ortiz-Cayon-et-al-3DV-2015\">[Ortiz-Cayon et al. 3DV 2015]<\/a> propose a\u00a0principled approach to select the algorithm with the best quality\/speed trade-off in each image region.\u00a0<span lang=\"EN-US\">This reduces the cost of rendering significantly, allowing IBR to be run on a mobile device.<\/span><\/p>\n<p><strong><span lang=\"EN-US\">Dealing with Occlusion with Close Objects and View-Dependent Texturing<\/span>.<\/strong> For\u00a0indoor scenes, two important challenges are the compromise between compactness and fidelity of 3D information, especially regarding\u00a0occlusion relationships when viewed up close, and the performance cost of using many photographs for\u00a0view-dependent texturing of\u00a0man-made materials. <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Hedman-et-al-TOG-2016\">[Hedman et al. TOG 2016]<\/a> propose a method based on different representations at different scales as well as tiled IBR, giving real-time performance while hardly sacrificing quality.<\/p>\n<p><strong>Dealing with Thin Structures.<\/strong> Another challenge concerns thin structures such as fences, which generate occlusion artifacts. 
Based on simple geometric information provided by the\u00a0user, <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Thonat-et-al-CGF-2018\">[Thonat et al. CGF 2018]<\/a> propose a multi-view segmentation algorithm for thin structures that\u00a0extracts multi-view mattes together with clean background images and geometry. These are used by a multi-layer rendering algorithm that allows free-viewpoint navigation, with significantly improved quality compared to previous solutions.<\/p>\n<p><strong>Dealing with Reflective Surfaces.<\/strong> IBR allows good-quality free-viewpoint navigation in urban scenes, but suffers from artifacts on poorly reconstructed objects, e.g., reflective surfaces such as cars. To alleviate this problem, <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Ortiz-Cayon-et-al-3DV-2016\">[Ortiz-Cayon et al. 3DV 2016]<\/a> propose a method that automatically identifies stock 3D models (using a previous result from the project <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Gidaris-Komodakis-ICCV-2015\">[Gidaris &amp; Komodakis ICCV 2015]<\/a>), aligns them in the 3D scene (leveraging another result from the project <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Massa-et-al-BMVC-2016\">[Massa et al. BMVC 2016]<\/a>) and performs morphing to better capture image contours.<\/p>\n<p><strong>Perspective and Multi-View Inpainting\u00a0<span lang=\"EN-US\">for Scene Editing<\/span>.<\/strong> <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Thonat-et-al-3DV-2016\">[Thonat et al. 3DV 2016]<\/a>\u00a0propose <span lang=\"EN-US\">an inpainting-based method to remove objects such as people, cars and motorbikes from urban images (as detected by [Gidaris &amp; Komodakis ICCV 2015]), with multi-view consistency. It enables a form of scene editing, with free-viewpoint IBR in the edited scenes,<\/span> as well as editing operations such as limited displacement of real objects. 
Building on this idea, <a href=\"https:\/\/project.inria.fr\/semapolis\/publications\/#Philip-Drettakis-I3D-2018\">[Philip &amp; Drettakis I3D 2018]<\/a>\u00a0provide inpainting results with correct perspective and multi-view coherence<span lang=\"EN-US\"> that scale to large scenes.<\/span> The method is <span lang=\"EN-US\">based on a local planar decomposition, yielding better coherence and quality<\/span>.<\/p>","protected":false},"excerpt":{"rendered":"<p>The Semapolis project has produced scientific results at three levels: methodologies (new learning methods), computer vision and computer graphics tasks (recognition, detection, segmentation, reconstruction, rendering), and applications (specialization and demonstration on semantic and 3D city modeling). Many of the publications come with code and data, with supporting experiments &hellip; <\/p>\n<p><a class=\"more-link btn\" href=\"https:\/\/project.inria.fr\/semapolis\/fr\/results\/\">Lire la 
suite<\/a><\/p>\n","protected":false},"author":377,"featured_media":0,"parent":0,"menu_order":20,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-246","page","type-page","status-publish","hentry","nodate","item-wrap"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/pages\/246","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/users\/377"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/comments?post=246"}],"version-history":[{"count":56,"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/pages\/246\/revisions"}],"predecessor-version":[{"id":377,"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/pages\/246\/revisions\/377"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/semapolis\/fr\/wp-json\/wp\/v2\/media?parent=246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}