In the tide of technological change, the rise of domestic GPUs is an important marker of China's push for technological self-reliance. Today, Moore Threads and Wuwen Xinqiong jointly announced that they have successfully trained a 3B-parameter large language model, "MT-infini-3B", on domestic full-function GPUs. The achievement marks a significant breakthrough for domestic GPUs in large-model training and opens a new chapter of deep cooperation between domestic large language models and domestic GPUs.

Thousand-Card Training on a Domestic GPU Cluster

The model was trained jointly on Moore Threads' domestic full-function GPU, the MTT S4000, and Wuwen Xinqiong's AIStudio PaaS platform. The full training run took 13.2 days and remained highly stable throughout: cluster training stability reached 100%, and scaling efficiency for thousand-card training exceeded 90%. These results demonstrate the reliability of the Kua'e thousand-card intelligent computing cluster for large-model training workloads.

Leading Performance Among Models of Its Scale

The performance of MT-infini-3B is equally striking. On three benchmark suites, C-Eval, MMLU, and CMMLU, MT-infini-3B achieves leading results, placing it at the front of models of the same scale, including those trained on mainstream international hardware. The achievement demonstrates the strength of domestic GPUs and injects new momentum into the development of domestic AI technology.

"M x N" Intermediate Layer Products, Efficient Deployment on Multiple Chips

Xia Lixue, co-founder and CEO of Wuwen Xinqiong, said the company is committed to building an "M x N" middleware layer between "M models" and "N chips", enabling efficient, unified deployment of diverse large-model algorithms across multiple chips. Through this in-depth strategic cooperation with Moore Threads, "MT-infini-3B" has become the industry's first end-to-end case of training a large model from 0 to 1 on domestic GPU chips.

Conclusion

The cooperation between Moore Threads and Wuwen Xinqiong sets a new milestone for the development of domestic GPUs and sketches a promising blueprint for the future of China's AI technology. As domestic GPU technology continues to advance and innovate, there is good reason to believe that domestic GPUs will play an ever-larger role in AI large-model training and help drive Chinese technology toward an even brighter future.