Research on Efficient Training Scheme for AI Large Model Based on Storage Remote and Lossless Network

QI Yu, HUANG Jia, WANG Li-qiu, TU Yan-li, CHEN Zi-yu

Computer & Telecommunication ›› 2025, Vol. 1 ›› Issue (1-2) : 9. DOI: 10.15966/j.cnki.dnydx.2025.01.005


Abstract

With the rapid development of AI large models, the demand for computing power has increased sharply. Addressing the pain points of low utilization of computing-power resources and the security concerns raised when sensitive data leaves the campus, this article proposes a solution based on remote storage and lossless network technology. By constructing an intelligent-computing experimental network and adopting an innovative simultaneous-transmission-and-training mode, real-time data processing and rapid model updates are achieved. Experimental results show that the scheme significantly improves computing-power utilization: it reduces single-task training time by 50%, raises the data transmission rate to 7.3 Gbps, and maintains efficient operation over distances of up to 200 km. Challenges such as data security and privacy protection still need to be addressed in future work. This study provides new ideas and methods for meeting the computing-power demands of the large-model era.

Key words

AI / large models / remote storage / simultaneous transmission and training / lossless network

Cite this article

Download Citations
QI Yu, HUANG Jia, WANG Li-qiu, TU Yan-li, CHEN Zi-yu. Research on Efficient Training Scheme for AI Large Model Based on Storage Remote and Lossless Network[J]. Computer & Telecommunication, 2025, 1(1-2): 9. https://doi.org/10.15966/j.cnki.dnydx.2025.01.005
