NVIDIA A100 Tensor Core: The Most Powerful Compute Platform

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest performing elastic data centers for AI, data analytics, and high performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20X higher performance over the prior NVIDIA Volta™ generation.

NVIDIA A100 Advantages:

Innovations:

• NVIDIA Ampere architecture
• Third-generation Tensor Cores
• Multi-Instance GPU (MIG)
• Next-generation NVLink

Incredible Performance Across Workloads

A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™.

Unprecedented Acceleration at Every Scale

As part of the most powerful end-to-end AI and HPC platform for data centers, A100 allows researchers to deliver real-world results and deploy solutions into production at scale.

01 NVIDIA Ampere Architecture
Whether using MIG to partition an A100 GPU into smaller instances or NVLink to connect multiple GPUs to speed large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100’s versatility means IT managers can maximize the utility of every GPU in their data center, around the clock.
02 Multi-Instance GPU (MIG)
An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
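As an illustrative sketch (not a verified recipe from this page), an administrator might partition an A100 into seven instances with the standard `nvidia-smi` tool; MIG profile IDs vary by driver and memory size, so check the profile listing on your own system first:

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver exposes
sudo nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances (profile ID assumed here; confirm via -lgip)
# and a compute instance inside each (-C)
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each resulting instance appears as its own CUDA device with its dedicated slice of memory, cache, and compute cores.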
03 Third-Generation Tensor Cores
NVIDIA A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That’s 20X the Tensor floating-point operations per second (FLOPS) for deep learning training and 20X the Tensor tera operations per second (TOPS) for deep learning inference compared to NVIDIA Volta GPUs.
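The headline 20X figure can be sanity-checked with simple arithmetic. Assuming V100's widely quoted peak FP32 rate of 15.7 TFLOPS (a number taken from NVIDIA's V100 datasheet, not from this page), A100's 312 TFLOPS of TF32 Tensor Core throughput with structured sparsity works out to roughly 20X:

```python
# Peak rates in TFLOPS; the V100 figure is assumed from NVIDIA's V100 datasheet
v100_fp32 = 15.7        # V100 standard FP32 peak
a100_tf32_sparse = 312  # A100 TF32 Tensor Core peak with structured sparsity

speedup = a100_tf32_sparse / v100_fp32
print(f"Training speedup: {speedup:.1f}x")  # ~19.9x, rounded up to "20X"
```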
04 Next-Generation NVLink
NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec), unleashing the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.
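The 600 GB/s figure follows from the third-generation NVLink layout: each A100 exposes 12 links, each carrying 50 GB/s of bidirectional bandwidth (link count and per-link rate here are assumed from NVIDIA's public Ampere architecture material, not stated on this page):

```python
# Third-generation NVLink on A100 (figures assumed from NVIDIA's Ampere whitepaper)
links_per_gpu = 12
gb_per_sec_per_link = 50  # bidirectional bandwidth per link

total_bw = links_per_gpu * gb_per_sec_per_link
print(f"Total NVLink bandwidth per GPU: {total_bw} GB/s")  # 600 GB/s
```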

Specifications:

Peak FP64

9.7 TFLOPS

Peak FP64 Tensor Core

19.5 TFLOPS

Peak FP32

19.5 TFLOPS

Tensor Float 32 (TF32)

156 TFLOPS

BFLOAT16 Tensor Core

312 TFLOPS

FP16 Tensor Core

312 TFLOPS

INT8 Tensor Core

624 TOPS

Memory

80GB HBM2e

Memory Bandwidth

1,935GB/s

Max Thermal Design Power (TDP)

300W

Multi-Instance GPU

Up to 7 MIGs @ 10GB

Form Factor

PCIe dual-slot air cooled or single-slot liquid cooled

Interconnect

NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s

Server Options

Partner and NVIDIA-Certified Systems™ with 1-8 GPUs


Choose Your Solution: Configurations

Start with Amwerk
$1299
  • Objectively integrate core competencies
  • Process-centric communities
  • Evisculate holistic innovation
  • Incubate intuitive opportunities
Amwerk full service
$2599
  • Efficiently cost-effective products
  • Synthesize principle-centered information
  • Innovate open-source infrastructures
  • Integrate enterprise-wide strategies
  • Productize premium technologies
Amwerk advanced
$1799
  • Leverage existing premium innovation
  • E-business collaboration and idea-sharing
  • Converge inter-mandated networks
  • Engage fully tested process improvements
Have questions?

Contact us if you have any questions related to the configuration options or the technical characteristics of the model.