

{"id":151,"date":"2018-07-05T09:43:11","date_gmt":"2018-07-05T07:43:11","guid":{"rendered":"https:\/\/project.inria.fr\/ftv360\/?page_id=151"},"modified":"2018-07-10T13:14:03","modified_gmt":"2018-07-10T11:14:03","slug":"acquisition-procedure","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/ftv360\/informations\/acquisition-procedure\/","title":{"rendered":"Acquisition Procedure"},"content":{"rendered":"<p><\/p>\n<h5><strong>New system based on 360\u00b0 cameras<\/strong><\/h5>\n<p style=\"padding-left: 30px;\">Ultimate Free Viewpoint Navigation enables a user to freely change the position, \\(\\displaystyle\\mathbf{t} = [x, y, z] \\in \\mathbb{R} \\),\u00a0<strong>and<\/strong> the angle, \\(\\displaystyle\\mathbf{r} = [\\alpha, \\beta, \\gamma] \\in[-\\pi\/2,\\pi\/2]\\times[-\\pi,\\pi]\\times[-\\pi,\\pi] \\) of his viewpoint.<\/p>\n<p style=\"padding-left: 30px;\">Naturally, it is impossible in practice to sample the light rays coming from <strong>every<\/strong> direction at <strong>every<\/strong> position. The challenge for an acquisition system is yet to make this sampling as dense as possible. For that purpose, perspective cameras have shown their limitation since each of them captures the light rays\u00a0at <strong>one<\/strong> given position coming from\u00a0<strong>some\u00a0<\/strong>directions. Recently, omnidirectional (or 360\u00b0) cameras have been introduced in the public market. 
Their strength is that they are able to record the light rays at <strong>one<\/strong> given position coming from <strong>every<\/strong> direction.<\/p>\n<p style=\"padding-left: 30px;\">This has motivated the following acquisition procedure, used to record the data made available on this website: <span style=\"color: #ff0000;\">a set of omnidirectional cameras is spread inside a scene, and the cameras synchronously film its content.\u00a0<\/span><\/p>\n<p style=\"padding-left: 30px;\"><a href=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-158 aligncenter\" src=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1-1024x304.png\" alt=\"\" width=\"900\" height=\"267\" srcset=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1-1024x304.png 1024w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1-300x89.png 300w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1-768x228.png 768w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1-150x44.png 150w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/AcquisitionProcedure-1.png 1753w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<p style=\"padding-left: 30px;\">It is clear that, if a user is navigating through the recorded video, he is able to translate discretely in the scene, <em>i.e.<\/em>,\u00a0\(\displaystyle\mathbf{t} \in \{ \delta_i\} \), and at each translation position, he is able to look in any direction he desires, <em>i.e.<\/em>,\u00a0\u00a0\(\displaystyle \forall \ i, \ \mathbf{r}(\delta_i) \in [-\pi\/2,\pi\/2]\times[-\pi,\pi]\times[-\pi,\pi] \) (see figure above).<\/p>\n<h5><strong>A Capture in practice<\/strong><\/h5>\n<p style=\"padding-left: 30px;\">On this website, a <strong>Capture<\/strong> consists of the following steps:<\/p>\n<p 
style=\"padding-left: 60px;\">&#8212; We position a certain number of omnidirectional cameras (typically 40) in a scene. The distance between neighbouring cameras lies between 1m and 3m.<br \/>\n<a href=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/20170921_102241.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-186 aligncenter\" src=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/20170921_102241-300x225.jpg\" alt=\"\" width=\"300\" height=\"225\" srcset=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/20170921_102241-300x225.jpg 300w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/20170921_102241-768x576.jpg 768w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/20170921_102241-1024x768.jpg 1024w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/20170921_102241-150x113.jpg 150w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p style=\"padding-left: 60px;\">&#8212; We record one or several calibration sequences, in which a chessboard is moved through the scene. 
The recorded videos are then used to estimate the <a href=\"https:\/\/project.inria.fr\/ftv360\/informations\/calibration-parameters\/\">calibration parameters<\/a> with an algorithm detailed and available for download <a href=\"https:\/\/project.inria.fr\/ftv360\/download\/calibration\/\">here<\/a>.<\/p>\n<p style=\"padding-left: 60px;\"><span style=\"font-family: Futura, 'Century Gothic', AppleGothic, sans-serif;\"><a href=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/IMG_20180629_115631.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-187 aligncenter\" src=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/IMG_20180629_115631-300x225.jpg\" alt=\"\" width=\"300\" height=\"225\" srcset=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/IMG_20180629_115631-300x225.jpg 300w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/IMG_20180629_115631-768x576.jpg 768w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/IMG_20180629_115631-1024x768.jpg 1024w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/IMG_20180629_115631-150x113.jpg 150w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/span><\/p>\n<p style=\"padding-left: 60px;\">&#8212; We record several <strong>Sequences<\/strong> with the same camera arrangement (and thus the same calibration parameters). 
In each sequence, a small scene is acquired by all the synchronized cameras.<\/p>\n<p style=\"padding-left: 60px;\"><span style=\"font-family: Futura, 'Century Gothic', AppleGothic, sans-serif;\"><a href=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-189 aligncenter\" src=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture-300x188.png\" alt=\"\" width=\"300\" height=\"188\" srcset=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture-300x188.png 300w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture-768x480.png 768w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture-1024x640.png 1024w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture-150x94.png 150w, https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/capture.png 1920w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/span><\/p>\n<p style=\"padding-left: 30px;\"><span style=\"font-family: Futura, 'Century Gothic', AppleGothic, sans-serif;\">To summarize, the shared data has the following structure:<br \/>\n<strong>Capture<\/strong>\u00a0\u2192 <strong>Sequence<\/strong>\u00a0\u2192 <strong>Video<\/strong><\/span><\/p>\n<p style=\"padding-left: 30px;\">A <strong>Capture\u00a0<\/strong>defines a set of Sequences that have been acquired with the same camera arrangement.<br \/>\nA <strong>Sequence<\/strong> corresponds to the recording of a given scene with several omnidirectional cameras.<br \/>\nA <strong>Video\u00a0<\/strong>is the file recorded by one of the cameras in a Sequence.<\/p>\n<p style=\"padding-left: 30px;\">For each <strong>Capture,\u00a0 <\/strong>the positions and orientations of each camera have been estimated and are given in the calibration data.<\/p>\n<p style=\"padding-left: 30px;\">The data are available for download <a href=\"https:\/\/project.inria.fr\/ftv360\/download\/download-ftv-data\/\">here<\/a>.<\/p>\n<h5><strong>Omnidirectional 
cameras<\/strong><\/h5>\n<p style=\"padding-left: 30px;\">The acquisitions are done with <a href=\"https:\/\/www.samsung.com\/fr\/wearables\/gear-360-c200\/\">Samsung Gear 360<\/a> cameras.<a href=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/samsung-gear-360.jpeg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-thumbnail wp-image-190 alignright\" src=\"https:\/\/project.inria.fr\/ftv360\/files\/2018\/07\/samsung-gear-360-150x150.jpeg\" alt=\"\" width=\"150\" height=\"150\" \/><\/a><br \/>\n<span style=\"font-family: Futura, 'Century Gothic', AppleGothic, sans-serif;\">They are made of two fisheye lenses, each spanning slightly more than 180 degrees. The raw footage consists of the image captured by the two lenses, without any geometrical correction, written side by side on the same frame. The 360\u00b0 footage has a resolution of\u00a03840 x 1920 pixels at 30 fps.<br \/>\nMore information on the data format can be found <a href=\"https:\/\/project.inria.fr\/ftv360\/informations\/video-data-format\/\">here<\/a>.<\/span><\/p>\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>New system based on 360\u00b0 cameras Ultimate Free Viewpoint Navigation enables a user to freely change the position, ,\u00a0and the angle, of his viewpoint. Naturally, it is impossible in practice to sample the light rays coming from every direction at every position. 
The challenge for an acquisition system is yet\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/ftv360\/informations\/acquisition-procedure\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1433,"featured_media":0,"parent":105,"menu_order":2,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-151","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/pages\/151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/users\/1433"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/comments?post=151"}],"version-history":[{"count":45,"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/pages\/151\/revisions"}],"predecessor-version":[{"id":341,"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/pages\/151\/revisions\/341"}],"up":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/pages\/105"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/ftv360\/wp-json\/wp\/v2\/media?parent=151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}