
Yesterday, Huawei China announced the official launch of Ant Group's Ling-1T, a trillion-parameter large model, on the Huawei Cloud Model-as-a-Service (MaaS) platform. As the first flagship "non-thinking" model in the Bailing (Ling) model series, Ling-1T not only combines trillion-parameter scale with full open-sourcing, but also supports deployment on dedicated resources, fully unleashing its potential.
Built on the advanced Ling 2.0 architecture, the model is pre-trained on a high-quality, high-reasoning-density corpus of more than 20 trillion tokens and supports a 128K context window. Its distinguishing feature is that only about 50 billion parameters are activated per token, making computation highly efficient. Furthermore, through evolutionary chain-of-thought techniques applied during mid-training and post-training, Ling-1T achieves top-tier results on multiple internationally recognized complex-reasoning benchmarks spanning code generation, software development, professional mathematics, and logical reasoning, striking a balance between inference efficiency and accuracy.
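The "only a fraction of parameters activated per token" behavior comes from Mixture-of-Experts routing. The toy sketch below is purely illustrative (it is not Ling-1T's actual code, and the sizes, router, and top-k value are invented for demonstration): a router scores a set of experts for each token and only the top-k experts run, so most of the layer's parameters stay idle for that token.

```python
import numpy as np

# Illustrative MoE sketch (not Ling-1T's real implementation): a router picks
# the top-k experts per token, so only top_k / n_experts of the expert
# parameters participate in each token's forward pass.
rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2  # toy sizes; real models are far larger
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route token x to its top-k experts and mix their outputs."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]                 # indices of top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                             # softmax over chosen experts
    y = sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))
    return y, chosen

x = rng.standard_normal(d_model)
y, chosen = moe_forward(x)
# Only 2 of 8 experts were touched for this token: a 25% active fraction.
print(f"active experts: {sorted(chosen.tolist())}, active fraction: {top_k / n_experts:.2f}")
```

Scaled up, the same principle lets a trillion-parameter model run with the per-token compute of a much smaller dense model.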
Notably, the Ling-1T model pairs particularly well with Huawei Cloud's CloudMatrix384 supernode. The MoE architecture the model uses imposes heavy communication demands when deployed across many devices. With its fully peer-to-peer interconnect architecture and high-speed networking, CloudMatrix384 effectively alleviates the communication bottlenecks of large-scale expert parallelism, providing highly reliable, low-latency compute for model inference.
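To see why expert parallelism is communication-heavy, consider that every MoE layer performs an all-to-all dispatch sending each token's hidden state to the devices hosting its selected experts, then a second all-to-all to gather the results. The back-of-the-envelope estimate below uses invented toy numbers (batch size, hidden size, layer count, and top-k are all assumptions, not Ling-1T's real configuration):

```python
# Toy estimate of per-forward-pass all-to-all traffic in expert parallelism.
# All figures below are assumptions for illustration, not Ling-1T's specs.
batch_tokens = 8192      # tokens in flight (assumed)
d_model = 8192           # hidden size (assumed)
top_k = 2                # experts selected per token (assumed)
bytes_per_elem = 2       # bf16 activations
moe_layers = 60          # number of MoE layers (assumed)

# Each token's activation travels to top_k experts and back (dispatch + combine).
per_layer_bytes = batch_tokens * top_k * d_model * bytes_per_elem * 2
total_gb = per_layer_bytes * moe_layers / 1e9
print(f"~{total_gb:.1f} GB of all-to-all traffic per forward pass (toy estimate)")
```

Tens of gigabytes of cross-device traffic per forward pass is why interconnect bandwidth and latency, rather than raw FLOPs, often limit distributed MoE inference.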
Currently, the Huawei Cloud MaaS platform comes preloaded with fully optimized versions of several mainstream open-source models, including DeepSeek, Qwen3, and Kimi. Users can therefore access these model services quickly through APIs, without managing complex hardware deployments, for a convenient cloud-based intelligent experience.
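API access to a hosted model typically looks like the sketch below, which builds a request in the widely used OpenAI-compatible chat format. The endpoint URL, API key, and model identifier are placeholders I have invented for illustration; consult the platform's documentation for the real values and request schema.

```python
import json

# Hypothetical sketch of calling a MaaS-hosted model via an OpenAI-compatible
# chat API. ENDPOINT, API_KEY, and the model name are placeholders, not real
# Huawei Cloud values.
ENDPOINT = "https://example-maas-endpoint/v1/chat/completions"  # placeholder
API_KEY = "your-api-key"                                        # placeholder

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "Ling-1T",  # model identifier as exposed by the platform (assumed)
    "messages": [
        {"role": "user", "content": "Summarize Mixture-of-Experts in one sentence."}
    ],
    "max_tokens": 256,
}

# In a real script you would send the request, e.g.:
#   import requests
#   resp = requests.post(ENDPOINT, headers=headers, data=json.dumps(payload))
#   print(resp.json()["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```

The point of the MaaS model is exactly this: the request above is all the user writes, while provisioning, parallelism, and hardware stay on the platform side.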