

{"id":75,"date":"2016-11-28T15:41:45","date_gmt":"2016-11-28T14:41:45","guid":{"rendered":"https:\/\/project.inria.fr\/2016visdata\/?page_id=75"},"modified":"2023-12-05T22:56:49","modified_gmt":"2023-12-05T21:56:49","slug":"presentations","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/bigvisdata\/presentations\/","title":{"rendered":"Presentations"},"content":{"rendered":"<h3>Lecture d&#8217;articles<\/h3>\n<p>En parall\u00e8le du cours, nous proposons de compl\u00e9ter les notions abord\u00e9es par la lecture d&#8217;articles de recherche, portant sur des sujets directement abord\u00e9s en cours, ou donnant un contexte \u00e9clairant sur celui-ci.<\/p>\n<h4>Consignes li\u00e9es \u00e0 l&#8217;\u00e9valuation:<\/h4>\n<ul>\n<li><strong>Pr\u00e9sentation des articles:<\/strong>\n<ul>\n<li>Chaque article est pr\u00e9sent\u00e9 par un ou deux \u00e9tudiants volontaires\n<ul>\n<li>La pr\u00e9sentation est \u00e9valu\u00e9e (avec bienveillance) et donne des points bonus<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<ul>\n<li><strong>Certaines semaines, un quizz not\u00e9<\/strong> est donn\u00e9 sur le ou les articles de la semaine\n<ul>\n<li>Cela demande, bien entendu, que tous les \u00e9tudiants aient lu cet article <strong>avant <\/strong>le cours lors duquel a lieu le quizz<\/li>\n<li>Pensez \u00e0 les imprimer ou les t\u00e9l\u00e9charger sur votre ordinateur si vous souhaitez les consulter pendant le quizz.\u00a0Acc\u00e8s \u00e0 internet et utilisation du t\u00e9l\u00e9phone interdit pendant le quizz.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h4>D\u00e9tails des consignes:<\/h4>\n<ul>\n<li>Les pr\u00e9sentations peuvent se faire seul ou par groupe de deux. Dans le cas d&#8217;un bin\u00f4me, les deux \u00e9tudiants doivent pr\u00e9senter.<\/li>\n<li>Chaque pr\u00e9sentation dure <strong>maximum 10 minutes. <\/strong>Les pr\u00e9sentations sont accompagn\u00e9es d&#8217;un support visuel: classiquement quelques transparents, mais une pr\u00e9sentation au tableau est aussi possible. Pensez que du mat\u00e9riel fourni par les auteurs des papiers est souvent d\u00e9j\u00e0 disponible et peut vous aider (transparents, posters, d\u00e9mo, etc). Mais dans ce cas, pensez \u00e0 donner le cr\u00e9dit !<\/li>\n<li>Soyez auto-suffisant pour vos supports (les salles n&#8217;ont pas d&#8217;ordinateur \u00e0 disposition)<\/li>\n<li>Les articles peuvent \u00eatre lus d\u00e8s les premiers cours, il n&#8217;est pas n\u00e9cessaire d&#8217;attendre le dernier moment ! Et vous en comprendrez d&#8217;autant mieux les cours.<\/li>\n<\/ul>\n<h4>Liste des articles sur lesquels porte l&#8217;\u00e9valuation<\/h4>\n<ul>\n<li>Article 1 &#8211; <strong>Unsupervised Domain Adaptation by Backpropagation <\/strong>(UDA). ICML 2015 [<a href=\"http:\/\/sites.skoltech.ru\/compvision\/projects\/grl\/files\/paper.pdf\">pdf<\/a>] (pr\u00e9sent\u00e9 par Damien &amp; Theophanis)<\/li>\n<li>Article 2 &#8211; <strong>Unsupervised Representation Learning by Predicting Image Rotations<\/strong> (RotNet). ICLR 2018 [<a href=\"https:\/\/arxiv.org\/abs\/1803.07728\">pdf<\/a>] (pr\u00e9sent\u00e9 par Fei)<\/li>\n<li>Article 3 &#8211; <strong>Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset <\/strong>(Quo Vadis). CVPR 2017 [<a href=\"https:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Carreira_Quo_Vadis_Action_CVPR_2017_paper.pdf\">pdf<\/a>] (pr\u00e9sent\u00e9 par Eden)<\/li>\n<li>Article 4 &#8211; <strong>Momentum Contrast for unsupervised visual representation learning<\/strong> (MoCo). 
CVPR 2020 [<a href=\"https:\/\/arxiv.org\/abs\/1911.05722\">pdf<\/a>] (pr\u00e9sent\u00e9 par Ahmad)<\/li>\n<li>Article 5 &#8211; <strong>Learning Transferable Visual Models From Natural Language Supervision<\/strong> (CLIP). ICML 2021 [<a href=\"https:\/\/arxiv.org\/abs\/2103.00020\">pdf]<\/a> (only sections 1, 2, 3.1.1, 3.1.2, 3.1.3) (pr\u00e9sent\u00e9 par Loris)<\/li>\n<li>Article 6 &#8211; <strong>Masked Autoencoders Are Scalable Vision Learners<\/strong> (MAE). CVPR 2022 [<a href=\"https:\/\/arxiv.org\/pdf\/2111.06377.pdf\">pdf<\/a>] (pr\u00e9sent\u00e9 par El Hassan)<\/li>\n<li>Article 7 &#8211; <strong>Learning without Forgetting <\/strong>(LwF). ECCV 2016 [<a href=\"https:\/\/www.semanticscholar.org\/reader\/8f3b80ddc0dd62e6c3369fabb1715990c29e9b9a\">pdf<\/a>] (pr\u00e9sent\u00e9 par Tom)<\/li>\n<li>Article 8 &#8211; <strong>Incremental Learning of Object Detectors without Catastrophic Forgetting <\/strong>(IncDet). ICCV 2017 [<a href=\"http:\/\/thoth.inrialpes.fr\/~alahari\/papers\/shmelkov17.pdf\">pdf<\/a>] (pr\u00e9sent\u00e9 par Juliette &amp; Sibylle)<\/li>\n<\/ul>\n<h3>Pour aller plus loin<\/h3>\n<p>Pour compl\u00e9ter certains sujets abord\u00e9s en cours, n&#8217;h\u00e9sitez pas \u00e0 \u00e9galement parcourir les articles suivants qui sont\u00a0 pertinents pour ce cours. Ces articles ne sont pas couverts par les quizz, mais peuvent vous donner des bases solides pour la compr\u00e9hension du cours.<\/p>\n<ul>\n<li>Datasets\n<ul>\n<li><strong>ImageNet Large Scale Visual Recognition Challenge<\/strong>. IJCV 2015 [<a href=\"https:\/\/arxiv.org\/abs\/1409.0575\">pdf<\/a>][<a href=\"http:\/\/www.image-net.org\/\">dataset page<\/a>]<\/li>\n<li><strong>Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations<\/strong><em>.<\/em> IJCV 2017 [<a href=\"https:\/\/visualgenome.org\/static\/paper\/Visual_Genome.pdf\">pdf<\/a>][<a href=\"https:\/\/visualgenome.org\">dataset page<\/a>]<\/li>\n<\/ul>\n<\/li>\n<li>Deep architectures\n<ul>\n<li><strong>Deep residual learning for image recognition<\/strong> (ResNet). CVPR 2016 [<a href=\"https:\/\/www.cv-foundation.org\/openaccess\/content_cvpr_2016\/papers\/He_Deep_Residual_Learning_CVPR_2016_paper.pdf\">pdf<\/a>][<a href=\"http:\/\/kaiminghe.com\/icml16tutorial\/index.html\">Tutorial<\/a>]<\/li>\n<\/ul>\n<\/li>\n<li>Self-supervised learning\n<ul>\n<li><strong>Concept Generalization in Visual Representation Learning <\/strong>(CoG)<strong>.<\/strong> ICCV 2021[<a href=\"https:\/\/arxiv.org\/abs\/2012.05649\">pdf<\/a>][<a href=\"https:\/\/europe.naverlabs.com\/research\/computer-vision\/cog-benchmark\/\">project page<\/a>]<\/li>\n<\/ul>\n<\/li>\n<li>Continual learning\n<ul>\n<li><strong>Learning without Forgetting<\/strong><em>. 
<\/em>PAMI 2017 [<a href=\"https:\/\/arxiv.org\/pdf\/1606.09282.pdf\">pdf<\/a>][<a href=\"https:\/\/github.com\/lizhitwo\/LearningWithoutForgetting\">github<\/a>]<\/li>\n<li>\n<div id=\"papertitle\"><strong>iCaRL: Incremental Classifier and Representation Learning.<\/strong> CVPR 2017 [<a href=\"https:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Rebuffi_iCaRL_Incremental_Classifier_CVPR_2017_paper.pdf\">pdf<\/a>] [<a href=\"https:\/\/openaccess.thecvf.com\/content_cvpr_2017\/supplemental\/Rebuffi_iCaRL_Incremental_Classifier_2017_CVPR_supplemental.pdf\">supp <\/a>[<a href=\"https:\/\/openaccess.thecvf.com\/content_cvpr_2017\/poster\/739_POSTER.pdf\">poster<\/a>]<\/div>\n<\/li>\n<\/ul>\n<\/li>\n<li>Retrieval\n<ul>\n<li><strong>Deep Image Retrieval: Learning global representations for image search.<\/strong> ECCV 2016 [<a href=\"https:\/\/arxiv.org\/abs\/1604.01325\">pdf<\/a>]<\/li>\n<li><strong>Learning Deep Structure-Preserving Image-Text Embeddings.<\/strong> CVPR 2016<em><br \/>\n<\/em>[<a href=\"https:\/\/slazebni.cs.illinois.edu\/publications\/cvpr16_structure.pdf\">pdf<\/a>]<\/li>\n<\/ul>\n<\/li>\n<li>Detection and Segmentation\n<ul>\n<li>\u00a0<strong>Histograms of oriented gradients for human detection<\/strong> (HOG)<em>.<\/em> CVPR 2005 [<a href=\"https:\/\/hal.inria.fr\/inria-00548512\/document\">pdf<\/a>]<\/li>\n<li><strong>Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.<\/strong> ICLR 2015 [<a href=\"https:\/\/arxiv.org\/abs\/1412.7062\">pdf<\/a>]<\/li>\n<li><strong>You Only Look Once<\/strong> (YOLO). CVPR 2016 [<a href=\"https:\/\/arxiv.org\/pdf\/1506.02640.pdf\">pdf<\/a>][<a href=\"https:\/\/pjreddie.com\/darknet\/yolo\/\">project page<\/a>]<\/li>\n<li><strong>Mask R-CNN<\/strong><em>.<\/em> ICCV<em>,<\/em> 2017 [<a href=\"https:\/\/arxiv.org\/pdf\/1703.06870.pdf\">pdf<\/a>][<a href=\"https:\/\/github.com\/facebookresearch\/maskrcnn-benchmark\">project page<\/a>]<\/li>\n<\/ul>\n<\/li>\n<li>Other tasks\n<ul>\n<li><strong>Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks<\/strong><em>. <\/em>ICCV 2017 [<a href=\"https:\/\/arxiv.org\/pdf\/1703.10593.pdf\">pdf<\/a>][<a href=\"https:\/\/junyanz.github.io\/CycleGAN\/\">project page<\/a>]<\/li>\n<li><strong>GQA: Visual Reasoning in the Real World <\/strong>[<a href=\"https:\/\/arxiv.org\/pdf\/1902.09506.pdf\">pdf<\/a>][<a href=\"https:\/\/cs.stanford.edu\/people\/dorarad\/gqa\/index.html\">project page<\/a>]<\/li>\n<li><strong>LCR-Net: Localization-Classification-Regression for Human Pose<\/strong>. CVPR 2017 [<a href=\"https:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Rogez_LCR-Net_Localization-Classification-Regression_for_CVPR_2017_paper.pdf\">pdf<\/a>]<\/li>\n<\/ul>\n<\/li>\n<li>AI Ethics\n<ul>\n<li><strong>Practical Data Ethics<\/strong> \u2013 Fast AI [<a href=\"https:\/\/ethics.fast.ai\/\">course page]<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Lecture d&#8217;articles En parall\u00e8le du cours, nous proposons de compl\u00e9ter les notions abord\u00e9es par la lecture d&#8217;articles de recherche, portant sur des sujets directement abord\u00e9s en cours, ou donnant un contexte \u00e9clairant sur celui-ci. 