
Nvidia A100 80GB Tensor Core Workstation Graphics Card

AED 99,000.00 (in UAE)
Pay as low as AED 8,250.00 per month
In stock
SKU
900-21001-0020-100

Peak Performance
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1248 TOPS*
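A quick sanity check of the peak figures above (plain arithmetic on the listed numbers, not a benchmark): the double-precision Tensor Cores deliver roughly twice the throughput of the standard FP64 units, and each starred figure, which NVIDIA's datasheets attribute to structured sparsity, is exactly double its unstarred counterpart.

```python
# Peak figures copied from the spec list above; arithmetic only.
FP64_TFLOPS = 9.7
FP64_TC_TFLOPS = 19.5
TF32_TFLOPS, TF32_SPARSE = 156, 312
FP16_TC_TFLOPS, FP16_SPARSE = 312, 624

# FP64 Tensor Cores are about 2x the standard FP64 rate.
print(f"FP64 Tensor Core vs FP64: {FP64_TC_TFLOPS / FP64_TFLOPS:.2f}x")

# Every starred (sparsity) figure is exactly double the dense figure.
assert TF32_SPARSE == 2 * TF32_TFLOPS
assert FP16_SPARSE == 2 * FP16_TC_TFLOPS
```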

GPU Memory
80GB HBM2e

GPU Memory Bandwidth
1,935 GB/s

Max Thermal Design Power (TDP)
300W

Interconnect
NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s**
PCIe Gen4: 64 GB/s
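To put the interconnect numbers above in perspective, here is a small sketch comparing how long it would take to move the card's full 80 GB at each listed peak rate. The bandwidth figures come straight from the spec list; the calculation is idealized peak-rate arithmetic and ignores protocol overhead.

```python
# Bandwidths (GB/s) taken from the spec list above.
MEMORY_GB = 80

def transfer_seconds(size_gb: float, bandwidth_gb_s: float) -> float:
    """Idealized transfer time at peak bandwidth (no overhead)."""
    return size_gb / bandwidth_gb_s

for name, bw in [("HBM2e (on-card)", 1935),
                 ("NVLink Bridge", 600),
                 ("PCIe Gen4", 64)]:
    print(f"{name:16s} {transfer_seconds(MEMORY_GB, bw):6.3f} s")
```

At peak rates, moving 80 GB takes 1.25 s over PCIe Gen4 but well under a second over the NVLink Bridge, which is why multi-GPU workloads favor NVLink.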


The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, A100 is the engine of the NVIDIA data center platform.

The Most Powerful End-to-End AI and HPC Data Center Platform
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to rapidly deliver real-world results and deploy solutions into production at scale.

Deep Learning Training
NVIDIA A100 Tensor Cores with Tensor Float 32 (TF32) provide up to 20X higher performance over the NVIDIA Volta generation with zero code changes, and an additional 2X boost with automatic mixed precision and FP16. When combined with NVIDIA® NVLink®, NVIDIA NVSwitch™, PCIe Gen4, NVIDIA® InfiniBand®, and the NVIDIA Magnum IO™ SDK, it’s possible to scale to thousands of A100 GPUs.
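A brief illustration of the TF32 format mentioned above: TF32 keeps FP32's 8-bit exponent but shortens the mantissa from 23 bits to 10 (FP16's mantissa width), which is how Tensor Cores gain large matrix-math speedups with zero code changes. This pure-Python emulation is for intuition only, not NVIDIA's exact hardware rounding behavior.

```python
import struct

def to_tf32(x: float) -> float:
    """Truncate a float32 value's mantissa to 10 bits (TF32-style)."""
    # Reinterpret the float32 bit pattern as an unsigned 32-bit integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Zero the low 13 mantissa bits (23 - 10), keeping sign and exponent.
    bits &= ~((1 << 13) - 1)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(3.14159265))  # -> 3.140625: pi with only 10 mantissa bits
```

Values whose mantissas already fit in 10 bits (powers of two, small integers) pass through unchanged; everything else loses only the low-order precision.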

High-Performance Computing
NVIDIA A100 introduces double-precision Tensor Cores to deliver the biggest leap in HPC performance since the introduction of GPUs. Combined with 80GB of the fastest GPU memory, researchers can reduce a 10-hour, double-precision simulation to under four hours on A100. HPC applications can also leverage TF32 to achieve up to 11X higher throughput for single-precision, dense matrix-multiply operations.

High-Performance Data Analytics
Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers.

Enterprise-Ready Utilization
A100 with MIG maximizes the utilization of GPU-accelerated infrastructure. With MIG, an A100 GPU can be partitioned into as many as seven independent instances, giving multiple users access to GPU acceleration. With A100 40GB, each MIG instance can be allocated up to 5GB, and with A100 80GB’s increased memory capacity, that size is doubled to 10GB.
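The MIG partitioning arithmetic described above can be sketched as follows. The numbers (up to seven instances, 5 GB per instance on the 40 GB card and 10 GB on the 80 GB card) come from the paragraph; the `MIG_PROFILES` mapping is illustrative, not NVIDIA's API.

```python
# Illustrative table of the MIG sizing described above (not NVIDIA's API).
MIG_PROFILES = {
    "A100 40GB": {"max_instances": 7, "gb_per_instance": 5},
    "A100 80GB": {"max_instances": 7, "gb_per_instance": 10},
}

for card, p in MIG_PROFILES.items():
    # Total memory usable when the card is fully partitioned.
    usable = p["max_instances"] * p["gb_per_instance"]
    print(f"{card}: {p['max_instances']} x {p['gb_per_instance']} GB "
          f"= {usable} GB across instances")
```

Doubling the card's memory doubles each instance's slice rather than the instance count, so seven users each get a 10 GB partition on the 80 GB card.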