On March 9, 2026, Inspur Information released and open-sourced a trillion-parameter multimodal large model built on the Mixture of Experts (MoE) architecture. The model improves pre-training compute efficiency by 49% and inference speed by 35% while cutting costs by 40%. It performs strongly in enterprise scenarios such as document understanding, table analysis, and multimodal interaction, helping move domestic large models from the lab to large-scale industrial deployment.
Keywords: Inspur Information, Trillion Parameters, Multimodal Large Model, MoE Architecture, Computing Efficiency
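For context on the MoE architecture named above, the sketch below shows generic top-k expert routing in Python. It is a minimal illustration only: the hidden size, expert count, and routing details are assumptions for demonstration and do not describe Inspur Information's actual model. The point it makes is why MoE can raise compute efficiency: each token activates only a small subset of experts rather than the full trillion-parameter network.

```python
# Minimal, generic illustration of Mixture-of-Experts (MoE) routing.
# NOT Inspur Information's implementation; all sizes below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64      # hidden size (hypothetical)
N_EXPERTS = 8     # number of expert feed-forward blocks (hypothetical)
TOP_K = 2         # experts activated per token (hypothetical)

# Router: projects each token to one score per expert.
router_w = rng.normal(scale=0.02, size=(D_MODEL, N_EXPERTS))

# Each expert is reduced to a single weight matrix for brevity.
expert_w = rng.normal(scale=0.02, size=(N_EXPERTS, D_MODEL, D_MODEL))

def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = tokens @ router_w                        # (n_tokens, N_EXPERTS)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over experts
    chosen = np.argsort(-probs, axis=-1)[:, :TOP_K]   # top-k expert indices

    out = np.zeros_like(tokens)
    for i, token in enumerate(tokens):
        # Only TOP_K of N_EXPERTS experts run per token, so per-token
        # compute scales with TOP_K, not with total parameter count.
        weights = probs[i, chosen[i]]
        weights = weights / weights.sum()              # renormalize over chosen experts
        for w, e in zip(weights, chosen[i]):
            out[i] += w * (token @ expert_w[e])
    return out

tokens = rng.normal(size=(4, D_MODEL))                # 4 example tokens
print(moe_layer(tokens).shape)                        # (4, 64)
```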
