
DeepSeek-V3.5: 671B MoE Model Surpasses GPT-5.2 on Chinese & English Long-Context Benchmarks

2026-02-16

DeepSeek has open-sourced DeepSeek-V3.5, a 671B-parameter mixture-of-experts (MoE) model that sets a new state of the art on Chinese and English long-context benchmarks at 1M+ tokens. The release also adds native tool-calling and improved multilingual reasoning, making it one of the strongest open-weight models for enterprise long-document processing.
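Native tool-calling means the model can emit structured function calls rather than free text, which is what makes it usable inside enterprise document pipelines. As a rough illustration, here is a minimal sketch using an OpenAI-compatible chat-completions client, the interface style DeepSeek's hosted API has historically exposed; the model id "deepseek-v3.5", the base URL, and the get_contract_clause helper are assumptions for illustration, not details confirmed by this article.

```python
# Minimal sketch of tool-calling against an OpenAI-compatible endpoint.
# The base URL and model id below are assumptions for illustration;
# consult the official DeepSeek documentation for actual values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

# A hypothetical tool the model may choose to call when answering
# questions about a long document.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_contract_clause",  # hypothetical helper
            "description": "Fetch a clause from a long contract by section number.",
            "parameters": {
                "type": "object",
                "properties": {
                    "section": {
                        "type": "string",
                        "description": "Section number, e.g. '4.2'",
                    }
                },
                "required": ["section"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="deepseek-v3.5",  # assumed model id
    messages=[
        {
            "role": "user",
            "content": "Summarize the indemnification terms in section 4.2.",
        }
    ],
    tools=tools,
)

# If the model opts to invoke the tool, the structured call (name plus
# JSON arguments) appears here instead of a plain-text answer.
print(response.choices[0].message.tool_calls)
```

The practical benefit of native support is that the structured call comes back as parsed fields rather than text the caller must regex out, so a document-processing service can dispatch to its own retrieval code and feed the result back in a follow-up message.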
