

<h1>Results</h1>

<h2 class="wp-block-heading">Software</h2>

<p>MPTorch is a low/mixed-precision training and inference simulation framework built on top of the popular PyTorch deep learning library. It allows users to test the effect of low-precision arithmetic operators (both floating-point and fixed-point) in their deep learning workflows. Designed as a research prototype, it favors exploration and experimentation. Currently, it reimplements the underlying computations of layers commonly used in CNNs (e.g., matrix multiplication and 2D convolution) using user-specified floating-point formats for each elementary operation (e.g., addition, multiplication). All operations are performed internally in IEEE-754 32-bit floating-point arithmetic, with the results rounded to the specified format.</p>

<p>More information and examples can be found on our GitHub <a href="https://github.com/mptorch/mptorch">repository</a>.</p>
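MPTorch's actual API is documented in the repository linked above; the following is only a standalone NumPy sketch of the simulation idea described here — carry out every operation in FP32, then round each intermediate result to a reduced-precision floating-point format. The helper names (`float_quantize`, `qmatmul`) and the choice of mantissa width are illustrative assumptions, not the MPTorch interface.

```python
import numpy as np

def float_quantize(x, man_bits):
    """Round FP32 values to a float format with `man_bits` fractional
    mantissa bits, using round-to-nearest-even on the mantissa.
    (Illustrative helper: exponent range limits / overflow are ignored.)"""
    x = np.asarray(x, dtype=np.float32)
    mant, exp = np.frexp(x)               # x = mant * 2**exp, mant in [0.5, 1)
    scale = np.float32(2.0 ** (man_bits + 1))
    mant = np.round(mant * scale) / scale  # np.round is round-half-to-even
    return np.ldexp(mant, exp).astype(np.float32)

def qmatmul(A, B, man_bits=7):
    """Matrix product where inputs, every elementwise product, and every
    accumulation step are each rounded to the reduced format — mimicking
    per-operation precision control in a compute-in-FP32 simulation."""
    A = float_quantize(A, man_bits)
    B = float_quantize(B, man_bits)
    out = np.zeros((A.shape[0], B.shape[1]), dtype=np.float32)
    for t in range(A.shape[1]):
        prod = float_quantize(np.outer(A[:, t], B[t, :]), man_bits)  # rounded multiply
        out = float_quantize(out + prod, man_bits)                   # rounded accumulate
    return out
```

For example, with 2 mantissa bits the value 1.0625 is not representable and rounds to 1.0, while 1.25 survives exactly; with a wide-enough mantissa, `qmatmul` reproduces `np.matmul` on exactly representable inputs.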