NVIDIA Reveals Enterprise Reference Architectures

Global enterprises use reference architectures for high-performance, scalable data centers.

NVIDIA Enterprise RAs provide full-stack hardware and software recommendations, and guidance on server, cluster and network configurations. Image courtesy of NVIDIA.


NVIDIA is unveiling Enterprise Reference Architectures (Enterprise RAs). These blueprints help NVIDIA systems partners and joint customers build artificial intelligence (AI) factories—high-performance, scalable, secure data centers for manufacturing intelligence, the company reports.

Building AI Factories

NVIDIA Enterprise RAs provide full-stack hardware and software recommendations, and guidance on server, cluster and network configurations for modern AI workloads. Each Enterprise RA includes recommendations for:

  • Accelerated infrastructure based on an optimized NVIDIA-Certified server configuration, featuring the latest NVIDIA GPUs, CPUs and networking technologies.
  • AI-optimized networking with the NVIDIA Spectrum-X AI Ethernet platform and NVIDIA BlueField-3 DPUs, along with guidance on optimal network configurations.
  • NVIDIA AI Enterprise software platform for production AI, including NVIDIA NeMo and NVIDIA NIM microservices for building and deploying AI applications (see the usage sketch after this list).
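
NVIDIA's announcement does not include application code, but NIM microservices expose OpenAI-compatible HTTP endpoints once deployed, so applications can query a hosted model with standard client libraries. The following is a minimal illustrative sketch, not part of the announcement: the localhost:8000 endpoint, the placeholder API key and the example model name are all assumptions for a locally running NIM container.

  # Illustrative sketch: querying a locally deployed NVIDIA NIM microservice
  # through its OpenAI-compatible API. The endpoint, port and model name are
  # assumptions for a local test deployment, not details from the announcement.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
      api_key="not-used",                   # local deployments typically ignore the key
  )

  response = client.chat.completions.create(
      model="meta/llama-3.1-8b-instruct",   # hypothetical example model
      messages=[{"role": "user", "content": "Summarize this maintenance report."}],
      max_tokens=128,
  )

  print(response.choices[0].message.content)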

Businesses that deploy AI workloads on partner solutions based upon Enterprise RAs may benefit from:

  • Accelerated time to market: By using NVIDIA’s structured approach and recommended designs, enterprises can deploy AI solutions faster.
  • Performance: Build upon tested and validated technologies so AI workloads run at peak performance.
  • Scalability and manageability: Develop AI infrastructure while incorporating design best practices.
  • Security: Run workloads securely on AI infrastructure that supports confidential computing and is optimized for the latest cybersecurity AI innovations.
  • Reduced complexity: Accelerate deployment timelines through optimal server, cluster and network configurations for AI workloads.

Sources: Press materials received from the company and additional information gleaned from the company’s website.

About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
