Inspur Open-Sources Trillion-Parameter Multimodal Large Model, Significantly Improving Computing Efficiency

2026-03-10

On March 9, 2026, Inspur Information released and open-sourced a trillion-parameter multimodal large model built on a Mixture of Experts (MoE) architecture. The model improves pre-training compute efficiency by 49%, raises inference speed by 35%, and cuts costs by 40%. It performs strongly in enterprise scenarios such as document understanding, table analysis, and multimodal interaction, helping move domestic large models from the lab to large-scale industrial deployment.
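Inspur has not published implementation details, but the efficiency gains the article attributes to the MoE architecture come from sparse activation: a router selects only a few experts per token, so compute scales with the number of active experts rather than the total parameter count. The following is a minimal NumPy sketch of top-k MoE routing; all names, sizes, and the linear "experts" are illustrative assumptions, not Inspur's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TopKMoE:
    """Illustrative top-k Mixture-of-Experts layer (not Inspur's code):
    a router scores all experts per token, but only the k highest-scoring
    experts actually run, so per-token compute is k/n_experts of a dense
    layer with the same total parameter count."""

    def __init__(self, d_model, n_experts, k):
        self.k = k
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert is a plain linear map here, standing in for an FFN.
        self.experts = rng.standard_normal((n_experts, d_model, d_model)) * 0.02

    def __call__(self, x):
        # x: (tokens, d_model)
        scores = softmax(x @ self.router)                # (tokens, n_experts)
        topk = np.argsort(scores, axis=-1)[:, -self.k:]  # k best experts per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            gates = scores[t, topk[t]]
            gates = gates / gates.sum()                  # renormalize over chosen experts
            for g, e in zip(gates, topk[t]):
                out[t] += g * (x[t] @ self.experts[e])   # run only the selected experts
        return out

moe = TopKMoE(d_model=8, n_experts=16, k=2)
x = rng.standard_normal((4, 8))
y = moe(x)
print(y.shape)  # (4, 8): each token activated only 2 of 16 experts
```

With k=2 of 16 experts active, each token touches roughly one-eighth of the expert parameters per layer, which is the kind of sparsity that lets a trillion-parameter model claim large pre-training and inference efficiency gains over a dense model of the same size.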

Keywords: Inspur Information, Trillion Parameters, Multimodal Large Model, MoE Architecture, Computing Efficiency
