

<h3><strong>Calibration parameters</strong></h3>
<h5><strong>Unified Spherical Model</strong></h5>
<p style="padding-left: 30px;">The relationship between a point \(\displaystyle \mathbf{P} \) in the 3D domain and the corresponding pixel \(\displaystyle \mathbf{u} \) in the recorded image is described with the help of the <a href="http://www.robots.ox.ac.uk/~cmei/articles/single_viewpoint_calib_mei_07.pdf">Unified Spherical Model</a>. It consists of two steps:</p>
<p><img class="aligncenter size-full wp-image-249" src="https://project.inria.fr/ftv360/files/2018/07/spherical_model-1.png" alt="The two projection steps of the Unified Spherical Model" width="899" height="330" /></p>
<p style="padding-left: 60px;">1) <strong>Projection on the sphere:</strong> the point \(\displaystyle \mathbf{P} \) is first projected onto the sphere of center \(\displaystyle O \) and radius \(\displaystyle 1 \).
The projected point is called \(\displaystyle \mathbf{P}_s \) and is given by:</p>
<p style="text-align: center;">\(\displaystyle \mathbf{P}_s = \frac{1}{||\mathbf{P}||} \mathbf{P} = \left[ \begin{array}{c} \frac{X}{\sqrt{X^2+Y^2+Z^2}}\\ \frac{Y}{\sqrt{X^2+Y^2+Z^2}} \\ \frac{Z}{\sqrt{X^2+Y^2+Z^2}} \end{array}\right] \)</p>
<p style="padding-left: 60px;">2) <strong>Perspective projection:</strong> the point \(\displaystyle \mathbf{P}_s \) is then projected onto the camera sensor with a perspective projection of center \(\displaystyle O_s \) (translated from \(\displaystyle O \) by a distance of \(\displaystyle \xi \)) and of parameters \(\displaystyle \mathbf{K} \). The pixel position of the projection is called \(\displaystyle \mathbf{u} \) and is given by:</p>
<p style="text-align: center;">\(\displaystyle\left[ \begin{array}{c} u_x\\ u_y \\ 1 \end{array} \right] \equiv \mathbf{K} \left(\mathbf{P}_s +\left[ \begin{array}{c} 0\\ 0 \\ \xi \end{array} \right] \right) = \mathbf{K} \left[ \begin{array}{c} \frac{X}{\sqrt{X^2+Y^2+Z^2}}\\ \frac{Y}{\sqrt{X^2+Y^2+Z^2}} \\ \frac{Z}{\sqrt{X^2+Y^2+Z^2}} + \xi \end{array}\right] \)</p>
<p style="padding-left: 60px;">Note that the sign \(\displaystyle \equiv \) denotes equality in homogeneous coordinates: both vectors must be divided by their last element before the equality holds.</p>
<h5><strong>Intrinsic parameters format</strong></h5>
<p style="padding-left: 30px;">The parameters of the model have been estimated through a calibration procedure (explained and available for download <a href="https://project.inria.fr/ftv360/download/calibration/">here</a>).
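The two projection steps above can be sketched in Python with NumPy. This is a minimal illustration of the model, not the project's reference implementation; the matrix K and the value of ξ below are placeholder values, not the dataset's actual intrinsic parameters.

```python
import numpy as np

def project_usm(P, K, xi):
    """Project a 3D point to a pixel with the Unified Spherical Model.

    P  : 3D point [X, Y, Z] in the camera frame (sphere center O at the origin).
    K  : 3x3 perspective projection matrix.
    xi : distance between the sphere center O and the projection center O_s.
    """
    P = np.asarray(P, dtype=float)
    # Step 1: projection on the unit sphere centered at O.
    Ps = P / np.linalg.norm(P)
    # Step 2: perspective projection of center O_s, i.e. shift by xi along z,
    # then apply K.
    h = K @ (Ps + np.array([0.0, 0.0, xi]))
    # Homogeneous equality: divide by the last element.
    return h[:2] / h[2]

# Placeholder intrinsics, for illustration only:
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])
xi = 1.2
u = project_usm([0.5, -0.2, 2.0], K, xi)
```

A quick sanity check: any point on the optical axis, e.g. [0, 0, Z] with Z &gt; 0, lands exactly on the principal point (the last column of K), since the shift by ξ cancels in the homogeneous division.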
The intrinsic parameters are shared on the <a href="https://project.inria.fr/ftv360/download/download-ftv-data/">video downloading page</a> or directly <a href="ftp://ftp.irisa.fr/local/sirocco-ftv360/share/Capture1/intrinsic_parameters.txt">here</a>. A distortion vector is also given, following the convention of the <a href="https://docs.opencv.org/3.2.0/dd/d12/tutorial_omnidir_calib_main.html">OpenCV omnidirectional calibration toolbox</a>; it is a null vector, meaning that no distortion should be added to this model. These parameters are the same for all the cameras used in our captures.</p>
<h5><strong>Extrinsic parameters format</strong></h5>
<p style="padding-left: 30px;">In order to maintain good calibration accuracy, we consider the two semi-spherical fisheye lenses that compose the Samsung Gear 360 camera as two independent cameras. In other words, each one has its own calibration parameters, and thus its own reference system. The axis convention is drawn below.</p>
<p style="padding-left: 30px;"><a href="https://project.inria.fr/ftv360/files/2018/07/extrinsic.png"><img class="aligncenter wp-image-286 size-full" src="https://project.inria.fr/ftv360/files/2018/07/extrinsic.png" alt="Axis convention of the front and rear lenses" width="400" height="158" /></a></p>
<p style="padding-left: 30px;">The extrinsic parameters shared with the video data detail, for each camera, its translation and rotation (matrix or Rodrigues angle) with respect to the reference camera.</p>
<p style="padding-left: 30px;">Let us take the simple example of the two lenses of a camera (considered as two cameras).
As can be seen, the difference between the front and rear coordinate systems is a simple negative translation along the z axis and a rotation around the y axis. In other words, if the reference coordinate system is \(\displaystyle [x_{\rm f},\ y_{\rm f},\ z_{\rm f}] \), then the translation vector is \(\displaystyle [0 ,\ 0,\ -\delta] \) and the rotation angles are \(\displaystyle [0 ,\ \pi,\ 0] \) (in the x-y-z convention).</p>
<p style="padding-left: 30px;">Two types of extrinsic calibration file are available on the <a href="https://project.inria.fr/ftv360/download/download-ftv-data/">download webpage</a>. Both give, as a first piece of information, the name of the reference camera:</p>
<pre style="padding-left: 30px;">reference_camera: 729_rear</pre>
<p style="padding-left: 30px;">Another common point of the two files is that the translation of each camera is given as follows (with respect to the reference camera):</p>
<pre>camera_name: 719_front
position: [-0.0117221, 0.10309, -3.29996]</pre>
<p style="padding-left: 30px;">What differs between the two files is the convention used to represent the angles. One file format gives the rotation as a <a href="https://docs.opencv.org/3.3.1/d9/d0c/group__calib3d.html#ga61585db663d9da06b68e70cfbf6a1eac">Rodrigues angle</a> (compliant with OpenCV):</p>
<pre>camera_name: 719_front
orientation: [-0.0376518, 3.13225, 0.0147664]</pre>
<p style="padding-left: 30px;">The other gives the rotation in its matrix form.
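For readers not using OpenCV, the Rodrigues vector of the first file format can be converted to a rotation matrix by hand. A minimal NumPy sketch of the axis-angle (Rodrigues) formula, following the same convention as OpenCV's cv::Rodrigues (the vector's direction is the rotation axis, its norm is the angle in radians):

```python
import numpy as np

def rodrigues_to_matrix(rvec):
    """Convert a Rodrigues (axis-angle) vector to a 3x3 rotation matrix."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)   # rotation angle in radians
    if theta < 1e-12:
        return np.eye(3)           # no rotation
    k = rvec / theta               # unit rotation axis
    # Skew-symmetric cross-product matrix of the axis.
    Kx = np.array([[   0., -k[2],  k[1]],
                   [ k[2],    0., -k[0]],
                   [-k[1],  k[0],    0.]])
    # Rodrigues' rotation formula.
    return np.eye(3) + np.sin(theta) * Kx + (1.0 - np.cos(theta)) * (Kx @ Kx)

# The front/rear example above is a half-turn about the y axis:
R = rodrigues_to_matrix([0.0, np.pi, 0.0])
```

For the rotation angles \([0,\ \pi,\ 0]\) of the front/rear example, this yields a matrix very close to \(\mathrm{diag}(-1,\ 1,\ -1)\), i.e. the x and z axes are flipped while y is preserved, which matches the drawn axis convention.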
If the angles are \(\displaystyle [\alpha, \ \beta, \ \gamma ] \), the rotation matrix is:</p>
<p style="padding-left: 30px;">\(\displaystyle \mathbf{R}=\left[\begin{array}{ccc}\cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0&0&1\end{array}\right]\left[\begin{array}{ccc} \cos\beta & 0 & \sin\beta\\ 0&1&0\\ -\sin\beta &0& \cos\beta\end{array} \right]\left[\begin{array}{ccc} 1&0&0\\0&\cos\alpha&-\sin\alpha\\0&\sin\alpha&\cos\alpha\end{array}\right] \)</p>
<p style="padding-left: 30px;">Here is an example of an R matrix given in our calibration parameter files:</p>
<pre style="padding-left: 30px;">camera_name: 719_front
orientation: [-0.9996697906484845, -0.02407973187231039, 0.008970851563409689;
 -0.02399408075792513, 0.9996666183951203, 0.009536045042234481;
 -0.00919748625425762, 0.009317648814105252, -0.9999142901605016]</pre>
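The matrix product above is easy to build numerically. A minimal NumPy sketch that assembles \(\mathbf{R} = \mathbf{R}_z(\gamma)\,\mathbf{R}_y(\beta)\,\mathbf{R}_x(\alpha)\) for angles \([\alpha,\ \beta,\ \gamma]\) in the x-y-z convention (radians):

```python
import numpy as np

def euler_xyz_to_matrix(alpha, beta, gamma):
    """Rotation matrix R = Rz(gamma) @ Ry(beta) @ Rx(alpha),
    for angles [alpha, beta, gamma] in the x-y-z convention (radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta),  np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1.,  0.,  0.],
                   [0.,  ca, -sa],
                   [0.,  sa,  ca]])
    Ry = np.array([[ cb, 0., sb],
                   [ 0., 1., 0.],
                   [-sb, 0., cb]])
    Rz = np.array([[cg, -sg, 0.],
                   [sg,  cg, 0.],
                   [0.,  0., 1.]])
    return Rz @ Ry @ Rx

# Front/rear lens example above: rotation angles [0, pi, 0].
R = euler_xyz_to_matrix(0.0, np.pi, 0.0)
```

With \([\alpha,\ \beta,\ \gamma] = [0,\ \pi,\ 0]\) this reduces to \(\mathbf{R}_y(\pi)\), which flips the x and z axes as expected for the rear lens.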