

{"id":4,"date":"2011-12-08T11:55:34","date_gmt":"2011-12-08T11:55:34","guid":{"rendered":"http:\/\/project.inria.fr\/template1\/?page_id=4"},"modified":"2024-04-05T08:43:02","modified_gmt":"2024-04-05T06:43:02","slug":"home","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/sharp\/","title":{"rendered":"Home"},"content":{"rendered":"<div class=\"page\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>The amount of resources needed to train and deploy state-of-the-art deep neural models is enormous, from CPUs, GPUs and storage to bandwidth and dataset acquisition (with possible annotations). More importantly, it steadily increases at an unsustainable pace, leading to a growing and alarming environmental footprint and to significant societal tensions. This raises serious concerns, not only in flagship domains such as computer vision (CV) and natural language processing (NLP), where all actors become dependent on a few dominant worldwide players (with the risk of an AI oligopoly), but also in the many domains where data is scarce and resource-hungry models cannot be successfully trained with current know-how.<\/p>\n<p>The major challenge of the SHARP project is to achieve a leap forward in frugality by designing, analyzing and deploying intrinsically efficient models (neural or not) able to match the versatility and performance of the best models while requiring only a vanishing fraction of the resources currently needed.<\/p>\n<div class=\"page\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>To achieve this vision, SHARP will design a principled theoretical and algorithmic framework that leverages prior knowledge and modern incarnations of the notion of sparsity, applied to predictors and\/or algorithms, and establish a new paradigm of computational representation learning that bypasses current technical and computing bottlenecks.
Two showcase demonstrations of the impact of SHARP will be the frugal training of compact transformers with negligible performance loss, and the development of effective representation learning models on small unlabeled datasets, for a selected downstream application. To achieve such frugality in AI, SHARP will rely on three pillars: frugal architectures, frugal learning principles, and learning with small and scarce datasets.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>Keywords<\/strong>: Statistical learning, algorithmic efficiency, sparsity, deep learning, computer vision, natural language processing<\/p>\n<p>A project funded by the <a href=\"https:\/\/anr.fr\/fr\/france-2030\/france-2030\/\">France 2030 program, managed by the ANR<\/a>, project <em><strong>ANR-23-PEIA-0008<\/strong><\/em>, in the context of the <a href=\"https:\/\/www.pepr-ia.fr\/projet\/sharp\/\">PEPR IA<\/a>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n<figure class=\"wp-block-image size-large is-resized\"><a href=\"https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet-1024x262.jpg\" alt=\"Logo ANR\" class=\"wp-image-100\" width=\"383\" height=\"97\" srcset=\"https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet-1024x262.jpg 1024w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet-300x77.jpg 300w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet-768x197.jpg 768w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet-1536x394.jpg 1536w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet-150x38.jpg 150w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/ANR-logo-2021-complet.jpg 1939w\" sizes=\"auto, (max-width: 383px) 100vw, 383px\" \/><\/a><figcaption>Logo ANR<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><a
href=\"https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-1024x568.jpg\" alt=\"Logo France 2030 &#8211; PEPR IA\" class=\"wp-image-99\" width=\"262\" height=\"145\" srcset=\"https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-1024x568.jpg 1024w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-300x166.jpg 300w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-768x426.jpg 768w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-1536x851.jpg 1536w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-2048x1135.jpg 2048w, https:\/\/project.inria.fr\/sharp\/files\/2024\/04\/FR2030_Intelligence-artificielle_Couleur-jpeg-150x83.jpg 150w\" sizes=\"auto, (max-width: 262px) 100vw, 262px\" \/><\/a><figcaption>Logo France 2030 &#8211; PEPR IA<\/figcaption><\/figure>","protected":false},"excerpt":{"rendered":"<p>The amount of resources needed to train and deploy state-of-the-art deep neural models is enormous, from CPUs, GPUs and storage to bandwidth and dataset acquisition (with possible annotations).
More importantly, it steadily increases at an unsustainable pace, leading to a growing and alarming environmental footprint, and results\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/sharp\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-4","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/pages\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":14,"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/pages\/4\/revisions"}],"predecessor-version":[{"id":112,"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/pages\/4\/revisions\/112"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/sharp\/wp-json\/wp\/v2\/media?parent=4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}