NVIDIA A30 Tensor Core GPU: Versatile compute acceleration for mainstream enterprise servers.

The NVIDIA A30 Tensor Core GPU is the most versatile mainstream compute GPU for AI inference and mainstream enterprise workloads.

NVIDIA A30 Advantages and Innovations:

• NVIDIA Ampere architecture
• Third-generation Tensor Cores
• Multi-Instance GPU (MIG)
• Next-generation NVLink

Incredible Performance Across Workloads

The A30 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™.

01 NVIDIA Ampere Architecture
Whether using MIG to partition an A30 GPU into smaller instances or NVIDIA NVLink to connect multiple GPUs to speed up larger workloads, the A30 can readily handle diverse acceleration needs, from the smallest job to the biggest multi-node workload. The A30's versatility means IT managers can maximize the utility of every GPU in their data center with mainstream servers, around the clock.
02 Multi-Instance GPU (MIG)
An A30 GPU can be partitioned into as many as four GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications. And IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
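The partitioning options above can be sanity-checked against the 24 GB of GPU memory. The sketch below is illustrative only (the dictionary layout and function names are not an NVIDIA API); the profile names follow NVIDIA's MIG naming convention, assumed here to match the A30's supported layouts:

```python
# Hypothetical sketch: enumerate the MIG partitioning options listed in
# the A30 spec table (4 x 6 GB, 2 x 12 GB, 1 x 24 GB) and verify each
# layout exactly fills the GPU's memory. Not an NVIDIA API.
TOTAL_MEMORY_GB = 24

MIG_PROFILES = {            # layout name -> (instances, GB per instance)
    "4 x 1g.6gb": (4, 6),
    "2 x 2g.12gb": (2, 12),
    "1 x 4g.24gb": (1, 24),
}

def validate_profiles(profiles, total_gb):
    """Check that every listed layout exactly fills the GPU memory."""
    return all(n * gb == total_gb for n, gb in profiles.values())

print(validate_profiles(MIG_PROFILES, TOTAL_MEMORY_GB))  # -> True
```

Each instance gets its own slice of memory and compute, which is why the per-instance sizes multiply out to exactly 24 GB.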
03 Third-Generation Tensor Cores
The NVIDIA A30 delivers 165 teraFLOPS (TFLOPS) of TF32 deep learning performance. That’s 20X more AI training throughput and over 5X more inference performance than the NVIDIA T4 Tensor Core GPU. For HPC, the A30 delivers 10.3 TFLOPS of FP64 performance, nearly 30 percent more than the NVIDIA V100 Tensor Core GPU.
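A quick back-of-the-envelope check of these claims. The A30 figures come from the spec table on this page; the T4 (8.1 TFLOPS FP32, 65 TFLOPS FP16) and V100 (7.8 TFLOPS FP64) figures are assumptions taken from NVIDIA's public datasheets for those cards:

```python
# Verify the quoted speedups from peak-throughput numbers.
# A30 numbers are from this page's spec table (sparse figures);
# T4 and V100 numbers are assumed from their public datasheets.
A30_TF32_SPARSE = 165.0   # TFLOPS, with structural sparsity
A30_FP16_SPARSE = 330.0   # TFLOPS, with structural sparsity
A30_FP64_TENSOR = 10.3    # TFLOPS

T4_FP32 = 8.1             # TFLOPS (assumed datasheet figure)
T4_FP16 = 65.0            # TFLOPS (assumed datasheet figure)
V100_FP64 = 7.8           # TFLOPS (assumed datasheet figure)

training_speedup = A30_TF32_SPARSE / T4_FP32    # ~20x
inference_speedup = A30_FP16_SPARSE / T4_FP16   # ~5x
hpc_gain = A30_FP64_TENSOR / V100_FP64 - 1.0    # ~30% more

print(f"training:  {training_speedup:.1f}x")
print(f"inference: {inference_speedup:.1f}x")
print(f"HPC:       +{hpc_gain:.0%}")
```

Under these assumptions the ratios land at roughly 20x, 5x, and +32%, consistent with the marketing claims (which compare sparse A30 throughput against dense T4/V100 throughput).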
04 Next-Generation NVLink
NVIDIA NVLink in the A30 delivers 2X higher throughput than the previous generation. Two A30 PCIe GPUs can be connected via an NVLink bridge to deliver 330 TFLOPS of deep learning performance.
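The 330 TFLOPS figure is simply two bridged cards' sparse TF32 throughput combined, and the bridge bandwidth comes from the spec table; a minimal sketch of that arithmetic:

```python
# Aggregate figures for two NVLink-bridged A30s, using only numbers
# from this page's spec table.
PER_GPU_TF32_TFLOPS = 165   # TF32 with structural sparsity, per GPU
NVLINK_GBPS = 200           # third-gen NVLink bridge bandwidth
PCIE_GEN4_GBPS = 64         # PCIe Gen4 bandwidth

pair_tflops = 2 * PER_GPU_TF32_TFLOPS
print(pair_tflops)                    # -> 330
print(NVLINK_GBPS / PCIE_GEN4_GBPS)   # NVLink vs PCIe bandwidth ratio
```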

Specifications:

Peak FP64: 5.2 TFLOPS
Peak FP64 Tensor Core: 10.3 TFLOPS
Peak FP32: 10.3 TFLOPS
TF32 Tensor Core: 82 TFLOPS | 165 TFLOPS*
BFLOAT16 Tensor Core: 165 TFLOPS | 330 TFLOPS*
Peak FP16 Tensor Core: 165 TFLOPS | 330 TFLOPS*
Peak INT8 Tensor Core: 330 TOPS | 661 TOPS*
Peak INT4 Tensor Core: 661 TOPS | 1,321 TOPS*
Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
GPU memory: 24 GB HBM2
Memory bandwidth: 933 GB/s
Interconnect: PCIe Gen4: 64 GB/s; third-generation NVIDIA® NVLink®: 200 GB/s**
Form factor: 2-slot, full height, full length (FHFL)
Max thermal design power (TDP): 165 W
Multi-Instance GPU (MIG): 4 MIGs @ 6 GB each, 2 MIGs @ 12 GB each, or 1 MIG @ 24 GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise, NVIDIA Virtual Compute Server


Choose your solution: Configurations

Start with Amwerk
$1299
  • Objectively integrate core competencies
  • Process-centric communities
  • Evisculate holistic innovation
  • Incubate intuitive opportunities
Amwerk full service
$2599
  • Efficiently deliver cost-effective products
  • Synthesize principle-centered information
  • Innovate open-source infrastructures
  • Integrate enterprise-wide strategies
  • Productize premium technologies
Amwerk advanced
$1799
  • Leverage existing premium innovation
  • E-business collaboration and idea-sharing
  • Converge inter-mandated networks
  • Engage fully tested process improvements
Have questions?

Contact us if you have any questions related to the configuration options or the technical characteristics of the model.