

{"id":73,"date":"2016-09-29T14:07:25","date_gmt":"2016-09-29T12:07:25","guid":{"rendered":"http:\/\/project.inria.fr\/IBC\/?page_id=73"},"modified":"2023-04-04T12:19:49","modified_gmt":"2023-04-04T10:19:49","slug":"data","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/IBC\/data\/","title":{"rendered":"Documentation &#038; Data"},"content":{"rendered":"<p><span style=\"color: #ff00ff;\"><strong><a href=\"https:\/\/individual-brain-charting.github.io\/docs\/\"><span style=\"text-decoration: underline;\"><span style=\"font-size: 18pt;\"><span style=\"font-size: 24pt;\">Documentation<\/span><\/span><\/span><\/a><\/strong><\/span><\/p>\n<h6><span style=\"font-size: 24pt;\"><strong>Data<\/strong><\/span><\/h6>\n<table style=\"border-collapse: collapse; width: 103.769%; height: 730px;\" border=\"1\">\n<tbody>\n<tr>\n<td style=\"width: 56.2764%;\">\n<ul>\n<li>Source data: <a href=\"https:\/\/openneuro.org\/datasets\/ds002685\/versions\/1.0.0\">OpenNeuro<\/a> and <a href=\"https:\/\/search.kg.ebrains.eu\/instances\/Dataset\/f968dc40-2058-4178-bcf7-d1ce8db2d7cc\">EBRAINS<\/a><\/li>\n<li>Preprocessed data: <a href=\"https:\/\/search.kg.ebrains.eu\/instances\/3ca4f5a1-647b-4829-8107-588a699763c1\">EBRAINS<\/a><\/li>\n<li>Contrast maps of task-fMRI data: NeuroVault collections <a href=\"https:\/\/identifiers.org\/neurovault.collection:6618\">6618<\/a> and <a href=\"https:\/\/identifiers.org\/neurovault.collection:4438\">4438<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/hbp-brain-charting\/public_protocols\">Behavioural protocols<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/hbp-brain-charting\/public_analysis_code\">Analysis scripts<\/a><\/li>\n<\/ul>\n<p><span style=\"font-size: 12pt;\"><sup><a href=\"http:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-391\" src=\"http:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods-1024x716.png\" alt=\"\" width=\"750\" height=\"524\" 
srcset=\"https:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods-1024x716.png 1024w, https:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods-300x210.png 300w, https:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods-768x537.png 768w, https:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods-150x105.png 150w, https:\/\/project.inria.fr\/IBC\/files\/2019\/10\/methods.png 1415w\" sizes=\"auto, (max-width: 750px) 100vw, 750px\" \/><\/a> <\/sup><span style=\"font-size: 14pt;\">All data (raw and processed) in the IBC dataset follow the\u00a0<a href=\"http:\/\/bids.neuroimaging.io\">BIDS\u00a0<\/a>specification. <\/span><\/span><span style=\"font-size: 14pt;\"><a href=\"https:\/\/github.com\/hbp-brain-charting\/public_analysis_code\/blob\/master\/ibc_data\/all_contrasts.tsv\">Contrast maps<\/a> derived from the task fMRI data are\u00a0labeled using\u00a0<a href=\"http:\/\/www.cognitiveatlas.org\">Cognitive Atlas<\/a>.<\/span><\/td>\n<td style=\"width: 35.3371%;\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-size: 24pt;\"><strong>fMRI tasks<span style=\"font-size: 24pt;\">: <\/span><\/strong><\/span><strong><span style=\"font-size: 24pt;\">Currently available data<\/span><\/strong><\/p>\n<table style=\"height: 547px; width: 101.31%; border-collapse: collapse; background-color: #e8fcf3;\" border=\"1\">\n<tbody>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\"><strong>TASK\/BATTERY<\/strong><\/td>\n<td style=\"width: 13.6468%; text-align: center;\"><strong>#TASKS<\/strong><\/td>\n<td style=\"width: 54.1775%; height: 24px;\"><strong>DOMAIN(S) PROBED<\/strong><\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">ARCHI<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">4<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Visuomotor, language, arithmetic, social and emotional<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\"><a 
href=\"https:\/\/www.humanconnectome.org\/\">HCP<\/a><\/td>\n<td style=\"width: 13.6468%; text-align: center;\">7<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Motor, emotional, social, relational, gambling, working memory<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">RSVP Language<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Language (sentence comprehension)<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Mental Time Travel<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">2<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Space\/Time representation<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Positive Incentive Value<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">4<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Incentive\/Reward system<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\"><a href=\"https:\/\/saxelab.mit.edu\/localizers\">Theory of Mind<\/a><\/td>\n<td style=\"width: 13.6468%; text-align: center;\">3<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Representation of beliefs, facts, observed pain<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Visual Short-Term Memory<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Short-term memory, numerosity<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 27.5229%;\">Enumeration<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%;\">Enumeration, numerosity<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Self<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Encoding and retrieving representation of 
<em>self<\/em> and <em>others<\/em><\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Bang<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Unconstrained audio-visual stimulation<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Clips<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Unconstrained visual stimulation<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Classic Retinotopy<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">2<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Retinotopy<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.5229%; height: 24px;\">Raiders<\/td>\n<td style=\"width: 13.6468%; text-align: center;\">1<\/td>\n<td style=\"width: 54.1775%; height: 24px;\">Unconstrained audio-visual stimulation<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3><strong>fMRI tasks: <\/strong><strong>Upcoming<\/strong><\/h3>\n<table style=\"height: 101px; width: 104.765%; border-collapse: collapse; background-color: #d9f8fc;\">\n<tbody>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.0064%; height: 24px;\"><strong>TASK\/BATTERY<\/strong><\/td>\n<td style=\"width: 13.8207%; text-align: center; height: 24px;\"><strong>#TASKS<\/strong><\/td>\n<td style=\"width: 61.6177%; height: 24px;\"><strong>DOMAIN(S) PROBED<\/strong><\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.0064%; height: 24px;\"><a href=\"https:\/\/labex-cortex.universite-lyon.fr\/\">Lyon<\/a><\/td>\n<td style=\"width: 13.8207%; text-align: center; height: 24px;\">8<\/td>\n<td style=\"width: 61.6177%; height: 24px;\">Auditory, visual category, working memory, salience<\/td>\n<\/tr>\n<tr style=\"height: 12px;\">\n<td style=\"width: 27.0064%; height: 12px;\"><a 
href=\"https:\/\/www.maastrichtuniversity.nl\/e.formisano\">Realistic Sounds<\/a><\/td>\n<td style=\"width: 13.8207%; text-align: center; height: 12px;\">1<\/td>\n<td style=\"width: 61.6177%; height: 12px;\">Auditory perception of human and non-human sounds<\/td>\n<\/tr>\n<tr style=\"height: 17px;\">\n<td style=\"width: 27.0064%; height: 17px;\"><a href=\"https:\/\/poldracklab.stanford.edu\/\">Stanford<\/a><\/td>\n<td style=\"width: 13.8207%; text-align: center; height: 17px;\">9<\/td>\n<td style=\"width: 61.6177%; height: 17px;\">Risk-associated decision making, motor inhibition, planning, vigilance<\/td>\n<\/tr>\n<tr style=\"height: 24px;\">\n<td style=\"width: 27.0064%; height: 24px;\"><a href=\"https:\/\/www.changlab.hk\/\">Biological motion<\/a><\/td>\n<td style=\"width: 13.8207%; text-align: center; height: 24px;\">1<\/td>\n<td style=\"width: 61.6177%; height: 24px;\">Perception of biological motion<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>Documentation Data Source data: OpenNeuro and EBRAINS preprocessed data: EBRAINS Contrast-maps of task-fMRI data: NeuroVault collections 6618 and 4438 Behavioural protocols Analysis scripts All data (raw and processed) in the IBC dataset follow the\u00a0BIDS\u00a0specification. Contrast maps derived from the task fMRI data are\u00a0labeled using\u00a0Cognitive Atlas. 
fMRI tasks: Currently available data\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/IBC\/data\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":880,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-73","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/pages\/73","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/users\/880"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/comments?post=73"}],"version-history":[{"count":169,"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/pages\/73\/revisions"}],"predecessor-version":[{"id":818,"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/pages\/73\/revisions\/818"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/IBC\/wp-json\/wp\/v2\/media?parent=73"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}