Typical configuration
Reference 4× H100
- CPU: AMD EPYC 9554
- GPU: 4× NVIDIA H100 (PCIe)
- RAM: 512GB DDR5 ECC
- Storage: 4× NVMe U.3 (3.84TB each, scratch RAID0)
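Some quick arithmetic implied by this parts list (nominal figures only; the even split of host RAM across GPUs is an assumption for illustration):

```python
# Rough capacity math for the reference build above.
# Figures come from the listed parts; treat them as nominal, not guaranteed.

GPUS = 4
RAM_GB = 512
NVME_COUNT = 4
NVME_TB = 3.84  # per-drive capacity

# RAID0 stripes across all members: capacity is the simple sum,
# with no redundancy (scratch data only).
scratch_tb = NVME_COUNT * NVME_TB  # 15.36 TB of scratch

# Host RAM per GPU if pinned staging buffers are split evenly (assumed).
ram_per_gpu_gb = RAM_GB / GPUS     # 128 GB per accelerator
```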
Enterprise rack servers
A practical sweet spot for many AI and HPC deployments, balancing density against thermals.
- Supports 4× PCIe datacenter GPUs, validated against power and thermal envelopes for sustained utilisation in training and inference.
- Optimised NVMe staging path that cuts data-loading latency and keeps accelerators compute-bound.
- DDR5 ECC memory bandwidth matched to batch throughput to avoid CPU-side bottlenecks.
- 240V / high-amp power options for sustained full-load operation.
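The staging pattern described above can be sketched as a bounded prefetch queue: a background thread reads the next batches from NVMe scratch while the GPU consumes the current one, so compute never waits on storage. This is a minimal sketch; `read_batch` is a hypothetical stand-in for a real loader.

```python
import queue
import threading

def read_batch(i):
    # Stand-in for an NVMe read; replace with real I/O.
    return bytes(16)

def prefetching_batches(num_batches, depth=2):
    """Yield batches while a background thread stages the next ones."""
    q = queue.Queue(maxsize=depth)  # bounded queue caps staging memory
    SENTINEL = object()

    def producer():
        for i in range(num_batches):
            q.put(read_batch(i))    # blocks when compute falls behind
        q.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        yield item
```

Usage is a drop-in loop, e.g. `for batch in prefetching_batches(100): train_step(batch)`; the bounded `depth` is the design choice that keeps staging memory fixed while still overlapping I/O with compute.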
PCIe bandwidth & expansion
PCIe lane allocation and slot topology designed for 4× datacenter GPUs, minimising interconnect stalls and raising throughput.
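As a rough illustration of why lane topology matters: per-GPU host bandwidth follows directly from the negotiated link speed and width, which Linux exposes under `/sys/bus/pci/devices/*/current_link_speed` and `current_link_width`. The per-lane figures below are the standard PCIe usable data rates after line encoding; `link_bandwidth_gbps` is an illustrative helper, not vendor tooling.

```python
# Approximate usable GB/s per lane, per direction, after line encoding
# (8b/10b for Gen1/2, 128b/130b from Gen3 onward).
GB_PER_LANE = {
    "2.5 GT/s": 0.25,    # Gen1
    "5.0 GT/s": 0.5,     # Gen2
    "8.0 GT/s": 0.985,   # Gen3
    "16.0 GT/s": 1.969,  # Gen4
    "32.0 GT/s": 3.938,  # Gen5 (H100 PCIe)
}

def link_bandwidth_gbps(speed, width):
    """One-direction bandwidth in GB/s for a negotiated PCIe link.

    `speed` accepts raw sysfs strings; newer kernels append " PCIe".
    """
    key = speed.replace(" PCIe", "").strip()
    return GB_PER_LANE[key] * int(width)
```

A Gen5 x16 slot, as negotiated by an H100 PCIe card, works out to roughly 63 GB/s each way; a card that trains in a lane-starved x8 slot gets half that, which is the stall the topology above is meant to avoid.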
GPU support & density
Accommodates 4× PCIe datacenter GPUs with sufficient power and thermal headroom for sustained utilisation.
Cooling & power considerations
240V / high-amp power options engineered to maintain stable thermal and electrical margins under sustained load.
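To make the electrical margin concrete, a back-of-envelope budget for this build. The TDPs are the published board and socket limits; the 20% platform overhead and the 30A circuit are assumptions for illustration, not measurements.

```python
GPU_TDP_W = 350   # NVIDIA H100 PCIe board power limit
CPU_TDP_W = 360   # AMD EPYC 9554 default TDP
OVERHEAD = 1.20   # assumed fans, drives, and PSU losses

# Peak sustained draw for 4 GPUs plus the host.
peak_w = (4 * GPU_TDP_W + CPU_TDP_W) * OVERHEAD  # ~2.1 kW

# Current on a 240 V feed.
amps_240v = peak_w / 240                         # ~8.8 A

# NEC continuous-load rule: stay under 80% of breaker rating,
# so an assumed 30 A / 240 V circuit supplies 24 A continuously.
fits_30a = amps_240v <= 0.8 * 30
```

Even at full TDP on all four cards, the build draws well under the continuous rating of a single 30A / 240V circuit, which is the headroom the sustained-load claim rests on.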
Representative configurations: every build is tailored to your workload and environment.
Get a tailored quote and lead time from our engineering team.