Memory Safe Computations with XLA Compiler

Date: November 24, 2022
Authors: Artem Artemev, Yuze An (Imperial College London), Tilman Roeder (Imperial College London), Mark van der Wilk (Imperial College London)

Software packages like TensorFlow and PyTorch are designed to support linear algebra operations, and their speed and usability determine their success. However, by prioritising speed, they often neglect memory requirements. As a consequence, implementations of memory-intensive algorithms that are convenient in terms of software design often cannot be run for large problems due to memory overflows. Memory-efficient solutions require complex programming approaches with significant logic outside the computational framework. This impairs the adoption and use of such algorithms. To address this, we developed an XLA compiler extension that adjusts the computational data-flow representation of an algorithm according to a user-specified memory limit. We show that k-nearest-neighbour search, sparse Gaussian process regression methods and Transformers can be run on a single device at a much larger scale, where standard implementations would have failed. Our approach leads to better use of hardware resources. We believe that further focus on removing memory constraints at a compiler level will widen the range of machine learning methods that can be developed in the future.
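As an illustration of the kind of memory blow-up the compiler extension targets, consider a naive k-nearest-neighbour query in TensorFlow. The sketch below is ours, not code from the paper: knn_naive and the example sizes are hypothetical, and the full n x m distance matrix it materialises is exactly the sort of intermediate that exceeds device memory at scale.

    import tensorflow as tf

    def knn_naive(queries, points, k):
        """Indices of the k nearest points for each query (naive version).

        queries: [n, d] float tensor, points: [m, d] float tensor.
        Materialises the full [n, m] squared-distance matrix: for
        n = m = 10**6 in float32 that intermediate alone is ~4 TB,
        even though the returned indices are tiny.
        """
        sq_q = tf.reduce_sum(queries ** 2, axis=1, keepdims=True)  # [n, 1]
        sq_p = tf.reduce_sum(points ** 2, axis=1)                  # [m]
        cross = tf.matmul(queries, points, transpose_b=True)       # [n, m]
        d2 = sq_q - 2.0 * cross + sq_p                             # [n, m] blow-up
        # top_k over negated distances selects the k smallest distances.
        return tf.math.top_k(-d2, k=k).indices

    queries = tf.random.normal([1024, 8])
    points = tf.random.normal([4096, 8])
    nearest = knn_naive(queries, points, k=5)  # shape [1024, 5]

A memory-safe variant would process queries in blocks, but that restructuring is precisely the hand-written logic outside the framework that the abstract argues against. The compiler extension instead rewrites the data-flow graph so that oversized intermediates such as d2 are evaluated in pieces that respect the user-specified memory limit, leaving code like the above unchanged.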

View the paper
