<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=145304570664993&amp;ev=PageView&amp;noscript=1">

91ÊÓƵAPP

Build

Bow Pod64

Built for multi-tenancy and concurrency, Bow Pod64 is a powerful, flexible building block for the enterprise datacenter and the private or public cloud. With cloud-native capabilities to support multiple users and mixed workloads across several smaller VPods (Virtual Pods), or run as a single system for large training workloads, Bow Pod64 gives you faster time to business value for today's models and unlocks a world of new AI applications.

Also available as a Bow Pod128 system

  • Ease of use & flexibility
  • Faster time to business value
  • Support from AI experts so you're up and running fast

“We are pushing the boundaries of machine learning and graph neural networks to tackle scientific problems that have been intractable with existing technology. For instance, we are pursuing applications in and . This year, Graphcore systems have allowed us to significantly reduce both training and inference times from days to hours for these applications. This speed-up shows promise in helping us incorporate the tools of machine learning into our research mission in meaningful ways. We look forward to extending our collaboration with this newest generation technology.”

Sutanay Choudhury, Deputy Director

Pacific Northwest National Laboratory

Natural Language Processing

Natural language processing (NLP) delivers business value today for organisations from finance firms to biotech leaders and from scale-ups to hyper-scalers, improving internet search, sentiment analysis, fraud detection, chatbots, drug discovery and more. Choose Bow Pod64 whether you are running large BERT models in production or starting to explore GPT-class models and GNNs.

Computer Vision

State-of-the-art computer vision is driving breakthroughs in medical imaging, claims processing, cosmology, smart cities, self-driving cars, and more. Traditional powerhouses like ResNet-50 and high-accuracy emerging models like EfficientNet are ready to run on Bow Pod64 with world-leading performance.

Scientific Research

National labs, universities, research institutes and supercomputing centres around the world are making scientific breakthroughs on Bow Pod64 by maximising the IPU's fine-grained compute to deploy emerging models like Graph Neural Networks (GNNs) and probabilistic models, explore sparsity and make the convergence of HPC and AI a reality.

World-class results, whether you want to explore innovative models and new possibilities, achieve faster time to train, drive higher throughput, or improve performance per TCO dollar.

[Chart] EfficientNet Training Throughput: Bow Pod platforms, preliminary results (pre-SDK 2.5), G16-EfficientNet-B4 training.

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency and making the system easier to use.
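As a rough illustration of the PyTorch path through that stack, here is a minimal sketch using Graphcore's PopTorch library. The toy model and batch sizes are made up for illustration; it assumes the poptorch package from the Poplar SDK is installed and an IPU (or the IPU Model emulator) is available.

```python
import torch
import poptorch  # PyTorch-on-IPU integration shipped with the Poplar SDK


class ClassifierWithLoss(torch.nn.Module):
    """Toy model: PopTorch expects the loss to be computed inside
    forward() and returned alongside the output when training."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(784, 10)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        out = self.fc(x)
        if labels is None:
            return out                       # inference path
        return out, self.loss(out, labels)   # training path


opts = poptorch.Options()  # execution options: replication, device iterations, etc.

model = ClassifierWithLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Compile the model for the IPU and run one training step on a random batch.
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)
x = torch.randn(16, 784)
labels = torch.randint(0, 10, (16,))
output, loss = training_model(x, labels)
print(loss)
```

The same poptorch.Options() object is also where data-parallel scaling across more of the Pod's IPUs would be configured, for example via opts.replicationFactor(...).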

Bow Pod64 specifications:

Processors: 64x Bow IPUs
1U blade units: 16x Bow-2000 machines
Memory: 57.6GB In-Processor-Memory™; up to 4.1TB Streaming Memory™
Performance: 22.4 petaFLOPS FP16.16; 5.6 petaFLOPS FP32
IPU Cores: 94,208
Threads: 565,248
Host-Link: 100 GE RoCEv2
Software: Poplar
  ML frameworks: TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO
  System management: OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, Grafana
  Orchestration: Slurm, Kubernetes
  Virtualization: OpenStack, VMware ESXi
System Weight: 450kg + host servers and switches
System Dimensions: 16U + host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Storage: Selection of approved systems from Graphcore partners
Thermal: Air-cooled
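As a quick sanity check, the pod-level figures in the table divide evenly across the 64 Bow IPUs; the per-IPU breakdown below is derived arithmetic rather than an official per-chip spec.

```python
# Per-IPU figures implied by the Bow Pod64 totals above (derived, not quoted).
NUM_IPUS = 64

cores_per_ipu       = 94_208 // NUM_IPUS   # 1,472 IPU cores (tiles) per Bow IPU
threads_per_ipu     = 565_248 // NUM_IPUS  # 8,832 threads per Bow IPU
threads_per_core    = 565_248 // 94_208    # 6 hardware threads per IPU core
fp16_pflops_per_ipu = 22.4 / NUM_IPUS      # 0.35 petaFLOPS FP16.16 per IPU
fp32_pflops_per_ipu = 5.6 / NUM_IPUS       # 0.0875 petaFLOPS FP32 per IPU
sram_gb_per_ipu     = 57.6 / NUM_IPUS      # 0.9 GB In-Processor Memory per IPU

print(cores_per_ipu, threads_per_ipu, threads_per_core,
      fp16_pflops_per_ipu, fp32_pflops_per_ipu, sram_gb_per_ipu)
```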

For more performance results, visit our Performance Results page.