MPTorch is a low/mixed-precision training and inference simulation framework built atop the popular PyTorch deep learning library. It allows users to test the effect of low-precision arithmetic operators (in both floating-point and fixed-point) on their deep learning workflows. It is designed as a research prototype, favoring exploration and experimentation. At present, it reimplements the underlying computations of layers commonly used in CNNs (e.g., matrix multiplication and 2D convolution) with user-specified floating-point formats for each elementary operation (e.g., addition, multiplication). All operations are performed internally in IEEE-754 32-bit floating-point arithmetic, with the results rounded to the specified format.
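To illustrate the idea of "compute in fp32, round to a target format", here is a minimal, self-contained sketch of such a rounding step in plain Python. It is not MPTorch's actual API (the function name and signature are invented for illustration): it rounds a value to the nearest number representable with a given number of mantissa and exponent bits, using round-to-nearest-even, and ignores subnormals for brevity.

```python
import math

def round_to_format(x: float, man_bits: int, exp_bits: int) -> float:
    """Hypothetical helper (not part of MPTorch): round x to the nearest
    value representable in a custom float format with `man_bits` fractional
    mantissa bits and `exp_bits` exponent bits. Subnormals are ignored."""
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        return x
    m, e = math.frexp(x)                 # x = m * 2**e, with 0.5 <= |m| < 1
    # Keep the implicit leading bit plus man_bits fractional bits,
    # rounding to nearest (ties to even, which is Python's round()).
    scaled = m * (1 << (man_bits + 1))
    y = math.ldexp(round(scaled), e - (man_bits + 1))
    # Overflow to infinity past the format's largest finite value
    # (exponent bias = 2**(exp_bits - 1) - 1, all-ones exponent reserved).
    emax = (1 << (exp_bits - 1)) - 1
    if abs(y) > (2 - 2.0 ** -man_bits) * 2.0 ** emax:
        return math.copysign(math.inf, y)
    return y
```

For example, with the binary16 parameters (10 mantissa bits, 5 exponent bits), `round_to_format(0.1, 10, 5)` yields `0.0999755859375`, the nearest half-precision neighbor of 0.1. In a simulation framework, a step like this is applied after each fp32 operation to mimic the target arithmetic.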

More information and examples can be found on our GitHub repository.
