
Graphcore

Explore

IPU-POD

Ideal for exploration, the IPU-POD16 gives you the power, performance and flexibility you need to fast-track your IPU prototypes and move from pilot to production. IPU-POD16 is your easy-to-use starting point for building better, more innovative AI solutions with IPUs, whether you're focused on language and vision, exploring GNNs and LSTMs, or creating something entirely new.

  • Fast & efficient for dense matmul models
  • Excels at sparse & fine-grained computation
  • Expert support to get you up and running quickly
IPU-POD16

We saw a roughly 5X speed gain. That means a researcher can now run potentially five times more experiments, which means we accelerate the whole research and development process and ultimately end up with better models in our products.

Razvan Ranca

Tractable

Healthcare


Biotech, pharma and healthcare providers are choosing IPU-POD16 to refuel their AI-driven business transformation


Finance

Banks, insurance companies and asset managers can supercharge their AI labs with IPU-POD16 systems

Manufacturing


Bringing intelligence to industry to find flaws in materials and equipment that the human eye can't detect

World-class results, whether you want to explore innovative models and new possibilities, cut time to train, raise throughput, or maximise performance per TCO dollar.

MLPerf v1.1 Training Results | MLPerf ID: 1.1-2039, 1.1-2088

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity, AI infrastructure efficiency and ease of use.

IPUs: 16x GC200 IPUs
IPU-M2000s: 4x IPU-M2000s
Memory: 14.4GB In-Processor-Memory™ and up to 1024GB Streaming Memory
Performance: 4 petaFLOPS FP16.16 / 1 petaFLOPS FP32
IPU Cores: 23,552
Threads: 141,312
IPU-Fabric: 2.8Tbps
Host-Link: 100 GE RoCEv2
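The core and thread totals in the table above follow directly from the per-chip figures for the GC200 IPU. As a hedged sanity check (the per-chip numbers of 1,472 tiles and 6 hardware threads per tile come from Graphcore's published GC200 spec, not from this page):

```python
# Arithmetic check of the IPU-POD16 core/thread counts in the spec table.
# Assumed per-chip figures for the GC200 IPU (from Graphcore's GC200 spec):
#   1,472 processor tiles per IPU, 6 hardware threads per tile.
NUM_IPUS = 16
TILES_PER_IPU = 1472
THREADS_PER_TILE = 6

total_cores = NUM_IPUS * TILES_PER_IPU          # matches "IPU Cores: 23,552"
total_threads = total_cores * THREADS_PER_TILE  # matches "Threads: 141,312"
print(f"cores={total_cores}, threads={total_threads}")
```

The two products reproduce the table's 23,552 cores and 141,312 threads exactly.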
Software

Poplar

TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO

OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, and Grafana

Slurm, Kubernetes

OpenStack, VMware ESG

System Weight: 66kg + host server
System Dimensions: 4U + host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Thermal: Air-cooled
Optional Switched Version: Contact Graphcore Sales

The MLPerf name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved.

Unauthorized use strictly prohibited.

For more performance results, visit our Performance Results page