

{"id":4,"date":"2011-12-08T11:55:34","date_gmt":"2011-12-08T11:55:34","guid":{"rendered":"http:\/\/project.inria.fr\/template1\/?page_id=4"},"modified":"2025-11-14T22:35:47","modified_gmt":"2025-11-14T21:35:47","slug":"home","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/llm4code\/","title":{"rendered":"About"},"content":{"rendered":"<p>Generative AI, and in particular recent Large Language Models (LLMs), shows great promise for software development. Specialized models are now able to perform an impressive variety of programming tasks: solving programming exercises, assisting software developers, or even generating mechanized proofs. Yet, many challenges still need to be addressed to build reliable and productive LLM-based coding assistants: improving the quality of the generated code, increasing developers&#8217; confidence in the generated code, enabling interaction with other software development tools (verification, testing), and providing new capabilities (automated migration and evolution of software).<\/p>\n\n\n\n<p>The goal of the D\u00e9fi Inria LLM4Code is to leverage LLM capabilities to build code assistants that enhance both reliability and productivity. 
The d\u00e9fi is organized around three work packages:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Self-improving code generation<\/li>\n\n\n\n<li>Evolution of existing software<\/li>\n\n\n\n<li>Interactive tools with AI-in-the-loop<\/li>\n<\/ol>\n\n\n\n<p>The D\u00e9fi Inria LLM4Code started in July 2024 and will run for four years.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Co-lead<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.mathieuacher.com\/\">Mathieu Acher<\/a>, <a href=\"https:\/\/www.diverse-team.fr\/\">DIVERSE<\/a>, Rennes<\/li>\n\n\n\n<li><a href=\"https:\/\/guillaume.baudart.eu\/\">Guillaume Baudart<\/a>, <a href=\"https:\/\/www.irif.fr\/equipes\/picube\/index\">PICUBE<\/a>, Paris<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Teams<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.di.ens.fr\/argo\/\">ARGO<\/a>, Paris<\/li>\n\n\n\n<li><a href=\"https:\/\/www.ens-lyon.fr\/LIP\/CASH\/\">CASH<\/a>, Lyon<\/li>\n\n\n\n<li><a href=\"https:\/\/deducteam.gitlabpages.inria.fr\/\">DEDUCTEAM<\/a>, Saclay<\/li>\n\n\n\n<li><a href=\"https:\/\/www.diverse-team.fr\/\">DIVERSE<\/a>, Rennes<\/li>\n\n\n\n<li><a href=\"https:\/\/www.inria.fr\/en\/evref\">EVREF<\/a>, Lille<\/li>\n\n\n\n<li><a href=\"https:\/\/flowers.inria.fr\/\">FLOWERS<\/a>, Bordeaux<\/li>\n\n\n\n<li><a href=\"https:\/\/gallinette.gitlabpages.inria.fr\/website\/\">GALLINETTE<\/a>, Nantes<\/li>\n\n\n\n<li><a href=\"https:\/\/www.labri.fr\/\">LaBRI<\/a>, Bordeaux<\/li>\n\n\n\n<li><a href=\"https:\/\/team.inria.fr\/mnemosyne\/fr\/\">MNEMOSYNE<\/a>, Bordeaux<\/li>\n\n\n\n<li><a href=\"https:\/\/www.irif.fr\/equipes\/picube\/index\">PICUBE<\/a>, Paris<\/li>\n\n\n\n<li><a href=\"https:\/\/team.inria.fr\/spirals\/\">SPIRALS<\/a>, Lille<\/li>\n\n\n\n<li><a href=\"https:\/\/team.inria.fr\/stamp\/\">STAMP<\/a>, Sophia<\/li>\n\n\n\n<li><a href=\"https:\/\/www.softwareheritage.org\/\">Software Heritage<\/a>, Paris<\/li>\n\n\n\n<li><a href=\"https:\/\/www.soprasteria.com\">Sopra 
Steria<\/a><\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Generative AI, and in particular recent Large Language Models (LLMs), shows great promise for software development. Specialized models are now able to perform an impressive variety of programming tasks: solving programming exercises, assisting software developers, or even generating mechanized proofs. Yet, many challenges still need to be addressed to build\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/llm4code\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-4","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/pages\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":25,"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/pages\/4\/revisions"}],"predecessor-version":[{"id":133,"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/pages\/4\/revisions\/133"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/llm4code\/wp-json\/wp\/v2\/media?parent=4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}