Graphcore is leading calls for an industry-wide standard for 8-bit floating point compute in artificial intelligence (AI), as systems-makers and AI practitioners look to take advantage of the performance and efficiency gains offered by lower-precision numerical representations.
Graphcore has created an 8-bit floating point format designed for AI, which we propose be adopted by the IEEE working group tasked with defining a new binary arithmetic notation for use in machine learning.
To facilitate adoption and build strong support for a common standard, we believe AI computing is best served by an open, freely licensable specification. We are also offering the specification to other industry players until such time as the IEEE formalises a standard.
Simon Knowles, CTO and co-founder of Graphcore, said: “The advent of 8-bit floating point offers tremendous performance and efficiency benefits for AI compute. It is also an opportunity for the industry to settle on a single, open standard, rather than ushering in a confusing mix of competing formats.”
Mike Mantor, Corporate Fellow and Chief GPU Architect at AMD, said: “This 8-bit floating point format will allow AMD to deliver dramatically improved training and inference performance for many types of AI models. As a strong supporter of industry standards, AMD is advocating for the adoption of this format as the new standard for 8-bit floating point notation with the IEEE.”
John Kehrli, Senior Director of Product Management at Qualcomm Technologies, Inc. said: “This proposal has emerged as a compelling format for 8-bit floating point compute, offering significant performance and efficiency gains for inference, and it can help reduce training and inference costs for cloud and edge deployments. We are supportive of Graphcore’s proposal for 8-bit floating point as an industry standard for relevant applications.”
Setting the standard
The use of lower- and mixed-precision notations in computation, such as mixed 16-bit and 32-bit, is commonplace in AI, maintaining high levels of accuracy while delivering efficiencies that help counter the waning influence of Moore’s Law and Dennard Scaling.
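As a purely illustrative sketch of the mixed-precision pattern described above (not a Graphcore-specific implementation), the following Python snippet stores operands in 16-bit floats while accumulating in 32 bits; the array size and random data are arbitrary assumptions.

```python
import numpy as np

# Illustrative mixed-precision dot product: operands held in fp16,
# accumulation performed in fp32, with an fp64 result as a reference.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float16)  # low-precision operands
b = rng.standard_normal(1024).astype(np.float16)

acc_fp32 = np.dot(a.astype(np.float32), b.astype(np.float32))  # 32-bit accumulate
acc_fp64 = np.dot(a.astype(np.float64), b.astype(np.float64))  # high-precision reference

print(f"fp16 inputs, fp32 accumulate: {acc_fp32:.6f}")
print(f"fp64 reference:               {acc_fp64:.6f}")
```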
With the move to 8-bit floating point, there is an opportunity for all of those involved in advancing artificial intelligence to coalesce around a standard that is AI-native and that will allow seamless interoperability across systems for both training and inference.
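To make the idea of an 8-bit floating point encoding concrete, here is a minimal, hypothetical decoder that assumes an E4M3-style layout (1 sign bit, 4 exponent bits, 3 mantissa bits) with an IEEE-754-like bias. This is an illustration only, not the format proposed to the IEEE: the proposal’s bias, special values and subnormal handling may differ.

```python
# Hypothetical illustration only: decode a byte as an 8-bit float assuming an
# E4M3-style layout. Real FP8 variants typically reserve some encodings for
# NaN/Inf, which this sketch ignores for simplicity.

def decode_fp8_e4m3(byte: int) -> float:
    """Interpret `byte` (0-255) as a sign/exponent/mantissa 8-bit float."""
    sign = -1.0 if (byte >> 7) & 0x1 else 1.0
    exponent = (byte >> 3) & 0xF          # 4 exponent bits
    mantissa = byte & 0x7                 # 3 mantissa bits
    bias = 7                              # 2**(4 - 1) - 1

    if exponent == 0:                     # zero and subnormal numbers
        return sign * (mantissa / 8) * 2.0 ** (1 - bias)
    return sign * (1 + mantissa / 8) * 2.0 ** (exponent - bias)

# A few sample encodings and their decoded values
for code in (0x00, 0x38, 0x40, 0xC4, 0x7F):
    print(f"0x{code:02X} -> {decode_fp8_e4m3(code)}")
```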
You can find more details of our proposal in the paper we’ve published on the 8-bit floating point format.
Any companies interested in licensing the technology until an industry standard is set can reach out to Graphcore at legalteam@graphcore.ai. We also encourage all vendors in the industry to contribute to and join this standardisation effort.