GPU Server
As demanding workloads such as scientific computing, data analytics, machine-learning inference, and video transcoding grow in prominence, these servers deliver computing resources that produce data quickly and accurately, ready for a diversified business environment. Configurations scale from 2 up to 10 GPUs connected in parallel, letting you select the resources best suited to your workload.


HGX-ADG4U A100 40GB x4


| HPC Server

| 2U DP SXM4 A100 4 GPU Server

| Application: AI Training, AI Inference & HPC


● Supports NVIDIA HGX™ A100 with 4 x SXM4 GPUs
● Supports NVIDIA® NVLink® technology
● Up to 600GB/s GPU-to-GPU interconnect
● Dual AMD EPYC™ 7002 series processors
● 8-channel RDIMM/LRDIMM DDR4 per processor, 16 x DIMMs
● 2 x 1Gb/s LAN ports (Intel® I350-AM2)
● 1 x dedicated management port
● 4 x 2.5" Gen4 U.2 NVMe/SATA/SAS hot-swappable HDD/SSD bays
● Ultra-fast M.2 with PCIe Gen4 x4 interface
● 6 x low-profile Gen4 x16 expansion slots
● 1 x OCP 3.0 Gen4 x16 mezzanine slot
● 3000W 80 PLUS Platinum redundant power supply

AMD EPYC™ 7002 Series Processor (Rome)

The next generation of AMD EPYC has arrived, providing incredible compute, I/O and bandwidth capability, designed to meet the huge demand for more compute in big data analytics, HPC and cloud computing.


● Built on 7nm advanced process technology, allowing for denser compute capabilities with lower power consumption

● Up to 64 cores per CPU, built using Zen 2 high-performance cores and AMD’s innovative chiplet architecture

● Supporting PCIe® 4.0 with a bandwidth of up to 64GB/s, twice that of PCIe 3.0

● Embedded security protection to help defend your CPU, applications, and data

Up to 2X performance per socket & I/O bandwidth vs. AMD EPYC Naples
Up to 4X floating point performance per socket vs. AMD EPYC Naples

Product Overview

System Block Diagram

High Density HPC


Supports NVIDIA HGX™ A100 4-GPU

Massive datasets, exploding model sizes, and complex simulations require multiple GPUs with extremely fast interconnections. NVIDIA HGX A100 combines NVIDIA A100 Tensor Core GPUs with high-speed interconnects to form the world’s most powerful servers. With A100 80GB GPUs, a single HGX A100 has up to 1.3 terabytes (TB) of GPU memory and over 2 terabytes per second (TB/s) of memory bandwidth, delivering unprecedented acceleration.
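The 600GB/s GPU-to-GPU figure quoted in the spec list above follows from the per-link NVLink numbers NVIDIA publishes for the A100: each GPU exposes 12 third-generation NVLink links at 50GB/s of bidirectional bandwidth per link. A minimal arithmetic sketch:

```python
# Aggregate NVLink bandwidth per A100 GPU, from NVIDIA's quoted
# per-link figures: 12 third-gen NVLink links, 50 GB/s per link
# (25 GB/s in each direction).
links_per_gpu = 12
gb_per_link_bidirectional = 50

total_gbs = links_per_gpu * gb_per_link_bidirectional
print(f"{total_gbs} GB/s aggregate NVLink bandwidth per A100")
```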


PCIe® 4.0 Ready


AMD EPYC Rome is ready to support PCIe® 4.0 with a bandwidth of 64GB/s, twice that of PCIe 3.0. This doubles the bandwidth available from the CPU to peripheral devices such as graphics cards, storage devices and high speed network cards. GIGABYTE’s AMD EPYC 7002 Series server platforms are ready to be used with a new generation of PCIe 4.0 devices such as AMD’s Radeon MI50 GPGPU.

* The PCIe 4.0 standard supports a 16 GT/s bit rate (roughly 2 GB/s per lane), as opposed to the 8 GT/s bit rate of PCIe 3.0 (roughly 1 GB/s per lane).
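The per-lane and per-link figures above can be reproduced from the raw transfer rate. A small sketch (the 64GB/s headline counts both directions of an x16 link; 128b/130b is the line coding both PCIe 3.0 and 4.0 use):

```python
# Theoretical PCIe throughput from the quoted transfer rates.
# PCIe 4.0: 16 GT/s per lane; PCIe 3.0: 8 GT/s per lane.
# Both generations use 128b/130b encoding, so usable bits are
# 128/130 of the raw bit rate.

def pcie_gbs_per_direction(gt_per_s: float, lanes: int) -> float:
    """Usable GB/s in one direction after 128b/130b line coding."""
    usable_bits_per_s = gt_per_s * 1e9 * (128 / 130) * lanes
    return usable_bits_per_s / 8 / 1e9  # bits -> bytes -> GB

gen4_x16 = pcie_gbs_per_direction(16, 16)  # ~31.5 GB/s each way
gen3_x16 = pcie_gbs_per_direction(8, 16)   # ~15.8 GB/s each way

print(f"PCIe 4.0 x16: {gen4_x16:.1f} GB/s per direction, "
      f"{2 * gen4_x16:.1f} GB/s bidirectional")
```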



Higher Memory Speed


AMD EPYC 7002 Series (Rome) processors feature faster 8-channel DDR4 memory, supporting RDIMM or LRDIMM memory modules at speeds of up to 3200MHz (1 DIMM per channel).
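The peak bandwidth this configuration implies can be worked out from the channel count and transfer rate. A minimal sketch, assuming the standard 64-bit (8-byte) DDR4 channel width:

```python
# Theoretical peak DDR4 bandwidth for the 8-channel, DDR4-3200
# configuration above (1 DIMM per channel). Each DDR4 channel is
# 64 bits (8 bytes) wide; DDR4-3200 performs 3200 mega-transfers
# per second.

def ddr4_peak_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak memory bandwidth in GB/s (decimal GB)."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

per_socket = ddr4_peak_gbs(3200, channels=8)  # 204.8 GB/s per CPU
dual_socket = 2 * per_socket                  # 409.6 GB/s for the DP system
print(f"{per_socket:.1f} GB/s per socket, {dual_socket:.1f} GB/s total")
```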


More M.2 Storage


GIGABYTE’s AMD EPYC 7002 Series server platforms feature more M.2 drive capacity for ultra-fast NVMe storage – both onboard M.2 slots and extra capacity via optional riser cards.


OCP 3.0 Add-On Card Ready


GIGABYTE’s AMD EPYC Rome server platforms feature an onboard OCP 3.0 mezzanine slot for the next generation of PCIe Gen 4.0 add-on cards.

Compared to previous OCP 2.0 type cards, advantages of this new type include:

● Easier Serviceability: simply slot in / pull out the card without needing to open the server chassis; tool-less design

● Larger Thermal Envelope: more space for a heat sink provides an increased power budget for new & emerging capabilities


Data Security


TPM 2.0 Module

GIGABYTE’s AMD EPYC servers are designed to support Trusted Platform Modules (TPM): discrete on-board cryptographic processors.