Research on Efficient Training Scheme for AI Large Model Based on Storage Remote and Lossless Network

  • China Mobile Communications Group Design Institute Co., Ltd.

Online published: 2025-06-27

Abstract

With the rapid development of AI large models, the demand for computing power has increased sharply. Addressing two pain points, low utilization of computing resources and the security risks of sensitive data leaving the campus, this article proposes a solution based on remote storage and lossless network technology. By constructing an intelligent computing experimental network and adopting an innovative mode of simultaneous transmission and training, the scheme achieves real-time data processing and rapid model updates. The experimental results show that it significantly improves computing power utilization, reducing single-task training time by 50%, raising the data transmission rate to 7.3 Gbps, and maintaining efficient operation over distances of up to 200 km. Challenges such as data security and privacy protection still need to be addressed in future work. This study provides new ideas and methods for meeting the computing power demands of the large-model era.

Cite this article

QI Yu, HUANG Jia, WANG Li-qiu, TU Yan-li, CHEN Zi-yu. Research on Efficient Training Scheme for AI Large Model Based on Storage Remote and Lossless Network[J]. Computer & Telecommunication, 2025, 1(1-2): 9. DOI: 10.15966/j.cnki.dnydx.2025.01.005
