For the first time, researchers at Imperial College London have demonstrated that Bundle Adjustment, a classical computer vision problem, can be solved and accelerated using Gaussian Belief Propagation on the Intelligence Processing Unit (IPU). Gaussian Belief Propagation is an effective algorithmic framework for spatial AI problems where estimates are needed in real time with new measurements constantly being fed into the algorithm.
The Robot Vision group at Imperial College London, led by Professor Andrew Davison, focuses on the study of intelligent embodied devices such as robots that need to understand the spaces around them. In their recently published paper, Imperial College London revealed the results of research in which Graphcore’s IPU was used to develop new algorithms to optimise spatial AI efficiency.
Solving Bundle Adjustment on the IPU
Bundle adjustment is a computer vision algorithm that allows a robot, or any device with a camera on board, to understand its motion through space and the geometry of its environment. To achieve this, 2D measurements of features matched between images are bundled together with camera constraints to create a large system of non-linear equations that must be solved to determine the 3D structure of the environment and the camera’s motion within that space.
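Concretely, bundle adjustment is usually posed as a non-linear least-squares problem over camera poses and point positions. The following is the standard textbook formulation, not notation taken from the paper itself:

$$ \min_{\{T_i\},\{x_j\}} \sum_{(i,j)} \left\| \pi(T_i, x_j) - z_{ij} \right\|^2_{\Sigma_{ij}} $$

where $T_i$ is the pose of camera $i$, $x_j$ is the 3D position of point $j$, $\pi$ projects a point into a camera’s image plane, and $z_{ij}$ is the observed 2D feature measurement, weighted by its uncertainty $\Sigma_{ij}$.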
“When we solve bundle adjustment in the classical way on a CPU, we have to build big matrices which embody all of those constraints and then use iterative non-linear optimisation techniques to come up with a solution as to where the points are and where the cameras are,” Professor Andrew Davison explains.
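For intuition, this classical pipeline amounts to repeatedly linearising the constraints and solving the resulting normal equations. Here is a minimal sketch in Python/NumPy; the residual and Jacobian functions are assumed to be supplied by the caller, and this illustrates the general technique rather than the Ceres implementation:

```python
import numpy as np

def gauss_newton_step(residual_fn, jacobian_fn, params, damping=1e-6):
    """One damped Gauss-Newton update for a non-linear least-squares problem."""
    r = residual_fn(params)   # stacked residuals of all measurement constraints
    J = jacobian_fn(params)   # Jacobian of r with respect to the parameters
    # The "big matrix" step: form and solve the normal equations
    # (J^T J + damping*I) * delta = -J^T r for the update delta.
    H = J.T @ J + damping * np.eye(J.shape[1])
    delta = np.linalg.solve(H, -(J.T @ r))
    return params + delta
```

In a real bundle adjustment problem this matrix is enormous (and sparse), and forming and factorising it centrally is exactly the cost that the message-passing approach described below avoids.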
“On the IPU, we can do something quite different because we can lay out all of those points and all of those camera estimates in parallel on the tiles of an IPU and then solve the whole problem using an algorithm called Gaussian Belief Propagation by passing messages iteratively between all the tiles. By doing that, we can solve the whole problem in parallel and very efficiently.”
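To make the message-passing idea concrete, here is a minimal Gaussian Belief Propagation example on a toy problem: a 1D chain of scalar variables linked by noisy offset measurements, with a prior anchoring the first variable. The setup and names are illustrative assumptions in plain Python/NumPy, not the IPU implementation from the paper:

```python
import numpy as np

# Toy 1D factor graph: variables x0..x2 in a chain, linked by "odometry"
# factors x_{i+1} - x_i ~ d[i], plus a strong prior anchoring x0 at 0.
# Gaussians are kept in information form: precision lam, information eta = lam*mean.
n = 3
prior_eta = np.array([0.0, 0.0, 0.0])
prior_lam = np.array([10.0, 0.0, 0.0])   # only x0 has a prior
d = np.array([1.0, 1.0])                 # measured offsets between neighbours
lam_m = 4.0                              # measurement precision

# msg[(i, j)] = (eta, lam): message sent from variable i to neighbour j.
msg = {(i, j): (0.0, 0.0)
       for i in range(n) for j in (i - 1, i + 1) if 0 <= j < n}

for _ in range(30):  # iterate message passing (exact on a chain)
    new_msg = {}
    for (i, j) in msg:
        # Belief at i from its prior and all neighbours except j.
        eta, lam = prior_eta[i], prior_lam[i]
        for k in (i - 1, i + 1):
            if 0 <= k < n and k != j:
                eta += msg[(k, i)][0]
                lam += msg[(k, i)][1]
        # Fold in the pairwise factor and marginalise out x_i
        # (standard Gaussian marginalisation in information form).
        off = d[min(i, j)] * (1.0 if j > i else -1.0)
        s = lam_m + lam
        new_msg[(i, j)] = (lam_m * off + lam_m * (eta - lam_m * off) / s,
                           lam_m - lam_m ** 2 / s)
    msg = new_msg

# Posterior marginal at each variable: prior plus all incoming messages.
for i in range(n):
    eta, lam = prior_eta[i], prior_lam[i]
    for k in (i - 1, i + 1):
        if 0 <= k < n:
            eta += msg[(k, i)][0]
            lam += msg[(k, i)][1]
    print(f"x{i}: mean = {eta / lam:.3f}, variance = {1 / lam:.3f}")
```

Every update here is purely local: a variable needs only its own prior and the messages from its neighbours. That locality is what maps so naturally onto the IPU, where each variable and factor can live on its own tile and messages travel over the on-chip interconnect instead of through one global matrix.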
Why Spatial AI Needs New Hardware
Despite the growth of computer vision use cases such as augmented reality and robotics, there remains a large gap between the performance these applications require and what current hardware can deliver.
In spatial AI, the model of a scene has to be continually updated using data from many different sources. These data types can vary widely, ranging from labelling output from a neural network to geometric measurements from a sensor. Storing, transferring and processing these different data types can lead to significant performance lags for legacy processors.
To process spatial AI algorithms effectively, a processor must provide efficient, massively parallel computation with low power consumption. In-place computation must be maximised, and storage and processing must be distributed, as this unlocks efficiency by minimising data transfer.
New hardware such as the IPU, which has been designed with massive parallelism and efficiency for machine intelligence applications in mind, is opening up possibilities to explore approaches to computer vision that were previously deemed too compute-intensive.
In Imperial College London’s implementation, the 1216 cores on Graphcore’s IPU chip were used to solve a real bundle adjustment problem in under 40ms – roughly a 36x speedup over the 1450ms taken by the widely used Ceres solver library running on a CPU.
Implications for the Future of Spatial AI
The ultimate aim in spatial AI is to provide a robot with an intelligent understanding of the world around it. These results from Imperial College London are an important step towards this objective.
“We’re excited about the work we’ve done so far because we really see it as a starting point towards a much more general approach to spatial AI,” says Professor Andrew Davison. “The bundle adjustment problem we’ve solved is a purely geometric problem – estimates of the motion of the camera and the position of a set of points – but what a robot or other intelligent device really needs is an understanding of the world including the identities and positions of objects.”
In future, the Robot Vision group will look to extend their research by building a more in-depth representation of the scene’s geometry, incorporating factors such as smoothness constraints on surfaces or physical properties of a scene. Graphcore’s processor is well suited to this, as the fully connected nature of its tiles provides the flexibility to implement these arbitrary factor graphs.
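In factor-graph terms, this means extending the posterior beyond purely geometric reprojection factors to arbitrary local factors. Schematically (a generic formulation, not notation from the paper):

$$ p(X) \propto \prod_s f_s(X_s) $$

where each factor $f_s$ constrains a small subset of variables $X_s$ – a reprojection factor, a smoothness factor between neighbouring surface points, or a physical constraint – and GBP applies the same local message-passing machinery to all of them.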
Accelerated by the IPU, this research paves the way for robots to interact and behave in intelligent ways that more closely resemble how human beings interpret a scene.
The full paper from Imperial College London’s Joseph Ortiz, Stefan Leutenegger and Andrew J. Davison and Graphcore’s Mark Pupilli is available to read now, and a video summary of the paper is also available.
Accessing IPUs
For researchers based in Europe, like Professor Andrew Davison, who need to process their data securely within the EU or EEA, the smartest way to access IPUs is through G-Core’s IPU Cloud.
With locations in Luxembourg and the Netherlands, G-Core Labs provides IPU instances quickly and securely on demand through a pay-as-you-go model with no upfront commitment. This AI IPU cloud infrastructure reduces hardware and maintenance costs while providing the flexibility of multitenancy and integration with cloud services.
The G-Core AI IPU cloud allows researchers to:
- Get results quickly with state-of-the-art IPU processors specifically designed for machine learning acceleration
- Process large-scale data with cloud AI infrastructure suitable for complex calculations with a wide range of variables
- Use AI for research projects to reveal patterns, increase the speed and scale of data analysis, and form new hypotheses