
AMAX Accelerates Next-Gen AI Computing With NVIDIA A100 GPUs

Extends Portfolio of Multi-GPU HPC Servers for the Data Center



FREMONT, Calif. - October 5, 2020 - (Newswire.com)

AMAX’s HPC and AI Solutions Group announced its series of next-generation NVIDIA A100-powered server systems that bring AI training, inference, and analytics into a consolidated yet scalable platform. The flexible system design accommodates standard full-length, full-height 250W PCIe 4.0 cards or 400W SXM A100 modules for high-capacity, multi-instance burstable workloads, making these systems ideal building blocks for modern AI data centers.

AMAX’s AceleMax reference-design data center servers offer single- and dual-socket AMD EPYC 7002 CPU options and four or eight NVIDIA A100 PCIe GPUs for up to 10 PetaOPS of AI performance, with direct-attach PCIe 4.0 x16 CPU-to-GPU lanes for the lowest latency and highest bandwidth. These systems also support up to two additional high-performance PCIe 4.0 expansion slots for a variety of uses, including SAS interface cards, NVIDIA Mellanox 200 Gb/s InfiniBand adapters, or 200 Gigabit Ethernet, to meet the demands of high-bandwidth applications.

“Enterprise value improves when data science teams and IT teams align to improve overall productivity and results without having to worry about the infrastructure,” said Dr. Rene Meyer, VP of Technology and Product Development at AMAX. “NVIDIA is taking AI computing to new levels through the power of collaborative partner ecosystems that work. We have a long partner history together, and our rack integration hubs have the capacity to build and integrate the next generation of A100-based solutions into data centers of all sizes.”

“The AceleMax and its upgraded features help simplify and improve AI computing productivity in enterprise AI environments,” said Paresh Kharya, senior director of product management for accelerated computing at NVIDIA. “Adding the power and flexibility of the NVIDIA A100 PCIe GPU and NVIDIA InfiniBand and Ethernet networking further enables AMAX customers to optimize their enterprises for high utilization and lower cost.”

AMAX’s AceleMax series of NVIDIA A100 GPU systems, powered by AMD EPYC 7002 series processors, includes:

· AceleMax DGS-214A: 2U single-socket server with 8x 3.5”/2.5” hot-swap SSD/HDD drive bays and 4x NVIDIA A100 PCIe GPUs, delivering up to 4 PetaOPS of performance

· AceleMax DGS-224A: 2U dual-socket server with 8x 3.5”/2.5” hot-swap SSD/HDD drive bays, 2x SATA-DOM, and 4x NVIDIA A100 PCIe GPUs, delivering up to 4 PetaOPS of performance

· AceleMax DGS-224AS: 2U dual-socket server with 4x 2.5” hot-swap SATA/NVMe hybrid drive bays and 4x NVIDIA A100 SXM GPUs, delivering up to 4 PetaOPS of performance

· AceleMax DGS-428A: 4U dual-socket server with up to 24x 2.5” hot-swap SAS/SATA drive bays, 4x 2.5” NVMe bays, and 8x NVIDIA A100 PCIe GPUs, delivering up to 10 PetaOPS of performance

· AceleMax DGS-428AS: 4U dual-socket server with up to 6x U.2 NVMe and 2x M.2 NVMe drive bays and 8x NVIDIA A100 SXM GPUs, delivering up to 10 PetaOPS of performance

As an NVIDIA Elite Partner, AMAX offers a comprehensive line of GPU-integrated solutions optimized for deep learning at any scale. To schedule a technical consultation, please contact AMAX at info@amax.com.

About AMAX:

AMAX is an award-winning global leader in application-tailored cloud, data center, open architecture, HPC, deep learning, and OEM server manufacturing solutions designed for the highest efficiency and optimal performance. Whether you are a Fortune 1000 company seeking significant cost savings through better efficiency for your global data centers, or a software startup seeking an experienced manufacturing partner to design and launch your flagship product, AMAX is your trusted solutions provider, delivering the results you need to meet your specific metrics for success.




Press Release Service by Newswire.com

Original Source: AMAX Accelerates Next-Gen AI Computing With NVIDIA A100 GPUs