Designing highly resilient AI fabrics: Networking architectures for large-scale model training

Oluwatosin Oladayo Aramide *

NetApp Ireland Limited, Ireland.
 
Research Article
World Journal of Advanced Research and Reviews, 2024, 23(03), 3291-3303
Article DOI: 10.30574/wjarr.2024.23.3.2632
 
Publication history: 
Received on 18 July 2024; revised on 21 September 2024; accepted on 27 September 2024
 
Abstract: 
The rapid development of large AI models, most notably large language models (LLMs), has placed unprecedented demands on networking infrastructure. As training scales to hundreds or even thousands of GPUs in distributed systems, the resilience, efficiency, and performance of AI fabrics become paramount to sustained throughput and reliability. This paper discusses architectural design principles and emerging technologies for building resilient AI fabrics for large-scale model training. We discuss the use of high-bandwidth interconnects such as RoCEv2 and 800G / 1.6T Ethernet, examine topology-aware routing schemes, and evaluate how network-level fault-tolerance mechanisms contribute to resilience. Through case studies and benchmarking, we highlight both strengths and shortcomings of existing AI training networks. Our findings offer guidance for future-proofing AI networking architectures so they can scale with the complexity of next-generation models.
 
Keywords: 
AI Fabric; Distributed Training; RoCEv2; Resilient Networking; 800G Ethernet; High-Performance Data Center Computing; Network Fault Tolerance; Large Language Models; SmartNIC; Data Center Interconnects
 