

{"id":359,"date":"2012-06-07T17:24:39","date_gmt":"2012-06-07T15:24:39","guid":{"rendered":"http:\/\/project.inria.fr\/keops\/?page_id=359"},"modified":"2012-07-24T17:17:40","modified_gmt":"2012-07-24T15:17:40","slug":"recherche-application","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/keops\/recherche-2\/recherche-application\/","title":{"rendered":"Research : Application"},"content":{"rendered":"<p><strong><span style=\"color: #ff0000; font-size: large;\">Integration of these new dynamic sensory modules into a visual architecture, and experimental study of their performance on degraded visual sources.<\/span><\/strong><\/p>\n<p><strong>Objective:<\/strong> Validate the non-standard bio-inspired early-vision front-end on a realistic data set, targeting low-vision (e.g. underwater) applications.<\/p>\n<p><strong>Methods:<\/strong> Since an innovative early-vision front-end will be made available thanks to the previous tasks, the final step is to validate it embedded in a larger biologically inspired visual system, targeting an application for which state-of-the-art visual methods partially fail.<br \/>\nIn addition to these benchmarks, the results of the non-standard bio-inspired early-vision front-end will be compared to high-level machine-learning mechanisms, e.g. novelty detectors (Kassab et al, 2009), in order to quantify the obtained performance.<br \/>\nSince we want to determine to what extent such an improved early-vision front-end contributes to visual perception, we will not only take basic visual-cue detection into account, but also experiment on high-level visual functions. Concrete demonstrations of cognitive-task enhancement will include:<br \/>\n(i) Non-linear static\/dynamic cue detection: computation of maps of e.g. colored texture \/ background motion \/ object motion, with segmentation of uniform regions w.r.t. 
the cue.<br \/>\n(ii) Gesture recognition: discriminate between two different displacements (e.g. walking versus marching, or crowd behaviour).<br \/>\n(iii) Image category recognition: recognize the general category of an image (e.g. a natural versus an artificial scene, an animal versus a manufactured object).<br \/>\n(iv) Detection of unexpected events: recognize an unexpected displacement (i.e. a failure of prediction in a local motion detector).<br \/>\n(v) Image segmentation from categorization: when a categorization is performed, the retinal units with a non-negligible contribution to this categorization process provide a cue about the part of the image corresponding to it, thus allowing segmentation.<br \/>\nThis means that, at the biologically inspired modelling level, extra-cortical functions connected with the responses of non-standard retinal cells will be studied, including focus of attention and motivated vision.<\/p>\n<p><strong>Task steps:<\/strong><br \/>\n(i) Specification of a set of benchmark validation tests (object-category recognition, novelty detection).<br \/>\n(ii) Deployment of the benchmarking platform data and software.<br \/>\n(iii) Realization of a benchmarking test set against general, non-biologically-constrained algorithms.<\/p>","protected":false},"excerpt":{"rendered":"<p>Integration of these new dynamic sensory modules into a visual architecture, and experimental study of their performance on degraded visual sources. Objective: Validate the non-standard bio-inspired early-vision front-end on a realistic data set, targeting low-vision (e.g. underwater) applications. 
Methods: Since an innovative early-vision front-end will be made available thanks to the &hellip; <\/p>\n<p><a class=\"more-link btn\" href=\"https:\/\/project.inria.fr\/keops\/recherche-2\/recherche-application\/\">Continue reading<\/a><\/p>\n","protected":false},"author":36,"featured_media":0,"parent":128,"menu_order":5,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-359","page","type-page","status-publish","hentry","nodate","item-wrap"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/pages\/359","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/users\/36"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/comments?post=359"}],"version-history":[{"count":11,"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/pages\/359\/revisions"}],"predecessor-version":[{"id":374,"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/pages\/359\/revisions\/374"}],"up":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/pages\/128"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/keops\/wp-json\/wp\/v2\/media?parent=359"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}