Periodic Reporting for period 1 - Entrans (Energy Efficient Transprecision Techniques for Linear Solver)
Reporting period: 2018-06-01 to 2020-05-31
All computations that use floating-point arithmetic are built on the fundamental assumption that the computation remains meaningful when only limited precision is employed. The required precision typically varies throughout the computation and depends on the values actually represented. Many computations use more bits of precision than necessary, which reduces performance and increases energy consumption. The Entrans project aims to use the optimal precision at each point of a computation without loss of accuracy, resulting in higher execution speed and lower energy consumption. This idea, dubbed transprecision computing, requires novel results in numerical analysis and in runtime decision making to adapt the precision of a computation on the fly. The results of this project will benefit society by extending the battery life of mobile phones, by saving energy in satellite navigation systems, and by enhancing the scalability of numerical simulations performed in high-performance computing facilities.
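The core of transprecision computing is the observation that a value can be rounded to fewer significand (mantissa) bits whenever the surrounding computation tolerates the resulting error. As an illustrative sketch (not code from the project), the helper below simulates an arbitrary reduced precision in pure Python by rounding a double to t significand bits:

```python
import math

def round_to_bits(x, t):
    """Round x to t significand bits, simulating a reduced-precision format."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**t) / 2**t, e)

# 0.5 needs only one significand bit, so it survives any such rounding,
# while 1/3 is not exactly representable and picks up roughly 2**-t
# of relative error when rounded to t bits.
```

The relative rounding error is bounded by roughly 2**-t, and t is precisely the knob a transprecision runtime would turn as precision requirements change during a computation.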
Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far
The Fellow has improved on the work from his dissertation and developed a new iterative refinement algorithm using arbitrary dynamic precision, which solves a linear system at a lower mantissa cost than other state-of-the-art iterative refinement methods. This work has been published in Parallel Computing (Elsevier): https://www.sciencedirect.com/science/article/pii/S0167819120300569?via%3Dihub . In a second line of work, exploring the application of mixed precision arithmetic to a kernel-method machine learning algorithm, the Fellow received a "reject and resubmit" decision (comparable to a major revision) from IEEE Transactions on Neural Networks and Learning Systems, one of the leading journals in machine learning. A revised draft has been submitted and is currently under review. The Fellow disseminated the MSCA project by presenting the work at a workshop ("Adaptive Mixed Precision Kernel Recursive Least Squares", Adaptive Many-Core Architecture and Systems Workshop: https://www-users.york.ac.uk/~mt540/graceful-ws/ ) and a summer school ("Transprecision Techniques for Linear Solvers and Non-linear Regressions", NiPS Summer School: https://www.nipslab.org/summerschool2018/ ). Since the transprecision techniques developed in this project reduce the energy required by linear solvers and machine learning applications, they can be exploited by IT companies concerned with energy and power consumption (e.g. Apple and Google).
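Mixed-precision iterative refinement of the general kind discussed above can be sketched as follows. This is a minimal toy illustration, not the Fellow's published algorithm: the expensive part of the solve runs at low precision (here simulated by rounding the matrix to few significand bits), while residuals are computed in full double precision, so the cheap low-precision solver is reused every iteration yet the final answer reaches full accuracy:

```python
import math

def round_to_bits(x, t):
    """Round x to t significand bits (simulated reduced precision)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    return math.ldexp(round(m * 2**t) / 2**t, e)

def matvec(A, x):
    """Matrix-vector product in full double precision."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def solve2(A, b):
    """Solve a 2x2 system directly via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def refine(A, b, t_low=8, iters=20):
    """Iterative refinement: low-precision solves, full-precision residuals."""
    # "Factorize" once at low precision: here we simply round A to t_low bits.
    A_low = [[round_to_bits(a, t_low) for a in row] for row in A]
    x = solve2(A_low, b)                                  # cheap initial solution
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # residual in double
        d = solve2(A_low, r)                              # cheap correction solve
        x = [xi + di for xi, di in zip(x, d)]
    return x
```

Each iteration contracts the error by roughly the low-precision rounding level times the condition number of A, so a handful of cheap iterations recovers full double-precision accuracy; the dynamic-precision question the project addresses is how to choose the working precision of each solve rather than fixing it in advance.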
Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)
Prior to this project, state-of-the-art transprecision techniques for linear solvers employed dynamic precision only among single, double, and double-double precision arithmetic. This project developed transprecision techniques for linear solvers that employ arbitrary dynamic precision arithmetic, reducing mantissa cost further than previously proposed transprecision linear solver algorithms. The transprecision algorithms developed in this project will contribute to energy savings in linear solver applications across computational science, signal processing, and machine learning. These energy savings will benefit society by extending the battery life of mobile phones, by saving energy in satellite navigation systems, and by enhancing the scalability of numerical simulations performed in high-performance computing facilities.
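To illustrate what "arbitrary dynamic precision" buys over a fixed menu of single/double/double-double formats, the sketch below (a hedged toy with an assumed linear precision schedule, not the project's algorithm) raises the working precision of each correction solve step by step, so early iterations spend only a few significand bits and only the final iterations pay for high precision:

```python
import math

def round_to_bits(x, t):
    """Round x to t significand bits (simulated reduced precision)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    return math.ldexp(round(m * 2**t) / 2**t, e)

def matvec(A, x):
    """Matrix-vector product in full double precision."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def solve2(A, b):
    """Solve a 2x2 system directly via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def refine_dynamic(A, b, iters=8):
    """Refinement whose correction solves follow a rising precision schedule."""
    x = [0.0] * len(b)
    bits_spent = 0
    for k in range(iters):
        t = 4 + 4 * k                    # hypothetical schedule: 4, 8, 12, ... bits
        A_t = [[round_to_bits(a, t) for a in row] for row in A]
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # full-precision residual
        d = solve2(A_t, r)               # correction at only t significand bits
        x = [xi + di for xi, di in zip(x, d)]
        bits_spent += t
    return x, bits_spent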