• NVIDIA Ampere architecture
• Third-generation Tensor Cores
• Multi-Instance GPU (MIG)
• Next-generation NVLink
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™.
Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.
Peak FP64: 9.7 TFLOPS
Peak FP64 Tensor Core: 19.5 TFLOPS
Peak FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS
BFLOAT16 Tensor Core: 312 TFLOPS
FP16 Tensor Core: 312 TFLOPS
INT8 Tensor Core: 624 TOPS
Memory: 80 GB HBM2e
Memory Bandwidth: 1,935 GB/s
Max Thermal Design Power (TDP): 300 W
Multi-Instance GPU: Up to 7 MIGs @ 10 GB
Form Factor: PCIe dual-slot air-cooled or single-slot liquid-cooled
Interconnect: NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
Server Options: Partner and NVIDIA-Certified Systems™ with 1–8 GPUs
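As a sanity check on the table above, the Peak FP32 figure can be reproduced from first principles. The CUDA core count (6,912) and boost clock (~1.41 GHz) below are assumptions taken from NVIDIA's published A100 specifications, not from this table; each core can retire one fused multiply-add (2 FLOPs) per cycle.

```python
# Back-of-the-envelope check of the Peak FP32 figure in the table above.
# Assumed values (not in the table, per NVIDIA's published A100 specs):
cuda_cores = 6912            # FP32 CUDA cores on the A100
boost_clock_hz = 1.41e9      # ~1.41 GHz boost clock
flops_per_core_per_cycle = 2 # one fused multiply-add counts as 2 FLOPs

peak_fp32_tflops = cuda_cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
print(f"Peak FP32 ≈ {peak_fp32_tflops:.1f} TFLOPS")  # → Peak FP32 ≈ 19.5 TFLOPS
```

The result matches the 19.5 TFLOPS Peak FP32 entry; the Tensor Core figures are higher because Tensor Cores perform many more multiply-accumulates per cycle than the general-purpose FP32 pipeline.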