Qwen3.5 represents a major advancement in open-weight multimodal AI models, engineered to function as a native vision-language agent system. Its flagship model, Qwen3.5-397B-A17B, uses a hybrid architecture that combines Gated DeltaNet linear attention with a high-sparsity mixture-of-experts design, so that only 17 billion parameters are active during inference, improving speed and cost efficiency. Despite this sparse activation, the full 397-billion-parameter model achieves competitive performance across reasoning, coding, multilingual benchmarks, and complex agent evaluations.

The hosted Qwen3.5-Plus version supports a one-million-token context window and includes built-in tool use for search, code interpretation, and adaptive reasoning. The model significantly expands multilingual coverage to 201 languages and dialects while improving encoding efficiency through a larger vocabulary, and native multimodal training yields strong performance in image understanding, video processing, document analysis, and spatial reasoning tasks.

Its infrastructure includes FP8 precision pipelines and heterogeneous parallelism to boost throughput and reduce memory consumption, and reinforcement learning at scale enhances multi-step planning and general agent behavior across text and multimodal environments. Overall, Qwen3.5 is positioned as a high-efficiency foundation for autonomous digital agents capable of reasoning, searching, coding, and interacting with complex environments.
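
To make the hybrid-attention description above more concrete, here is a minimal sketch of a gated delta-rule recurrence in the spirit of Gated DeltaNet, written in plain PyTorch. The shapes, the gate parameterization, and the token-by-token loop are illustrative assumptions chosen for exposition; they are not Qwen3.5's actual kernel or configuration.

```python
# Minimal sketch of a gated delta-rule (Gated DeltaNet-style) linear-attention
# recurrence. Shapes, gates, and the sequential loop are illustrative
# assumptions, not Qwen3.5's actual implementation.
import torch

def gated_delta_rule(q, k, v, alpha, beta):
    """q, k: (T, d_k); v: (T, d_v); alpha, beta: (T,) gates in (0, 1).

    The recurrent state S is a d_k x d_v matrix ("fast weights"), so the
    cost per token is O(d_k * d_v) and does not grow with context length.
    """
    T, d_k = k.shape
    d_v = v.shape[-1]
    S = torch.zeros(d_k, d_v)
    outputs = []
    for t in range(T):
        k_t = k[t] / (k[t].norm() + 1e-6)    # normalized key
        S = alpha[t] * S                     # gate: decay the running state
        v_old = S.T @ k_t                    # value the state predicts for k_t
        delta = beta[t] * (v[t] - v_old)     # delta rule: move it toward v[t]
        S = S + torch.outer(k_t, delta)      # write the correction back
        outputs.append(S.T @ q[t])           # read out with the query
    return torch.stack(outputs)

# Toy usage: 8 tokens, key dim 16, value dim 32.
T, d_k, d_v = 8, 16, 32
out = gated_delta_rule(
    torch.randn(T, d_k), torch.randn(T, d_k), torch.randn(T, d_v),
    alpha=torch.full((T,), 0.95),            # slow forgetting
    beta=torch.full((T,), 0.5),              # moderate write strength
)
print(out.shape)  # torch.Size([8, 32])
```

The key property this toy loop illustrates is that the state is a fixed-size matrix rather than a growing key-value cache, which is what lets linear-attention layers stay cheap at very long contexts; production implementations use chunked, hardware-efficient forms rather than an explicit per-token loop.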
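The sparse-activation figure above follows from top-k expert routing: each token is dispatched to only a few experts, so most expert weights stay idle on any given forward pass. The sketch below uses made-up sizes (64 experts, top-4 routing, small hidden dimensions) purely to illustrate the mechanism; Qwen3.5's actual expert count, routing rule, and any shared-expert layout are not assumed here.

```python
# Minimal sketch of top-k expert routing in a high-sparsity MoE layer.
# Expert count, top-k, and dimensions are made up for illustration; they are
# not Qwen3.5's published configuration.
import torch
import torch.nn.functional as F

class SparseMoE(torch.nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=64, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = torch.nn.Linear(d_model, n_experts, bias=False)
        self.experts = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Linear(d_model, d_ff),
                torch.nn.GELU(),
                torch.nn.Linear(d_ff, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (n_tokens, d_model)
        scores = self.router(x)                  # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why active parameters are a small fraction of the total.
        for slot in range(self.top_k):
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

moe = SparseMoE()
print(moe(torch.randn(10, 512)).shape)           # torch.Size([10, 512])
```

In this toy layer a token touches roughly 4/64 ≈ 6% of the expert weights; the same routing idea, at different ratios and combined with dense shared components, is how a 397-billion-parameter model can keep only about 17 billion parameters (roughly 4% of the total) active per forward pass.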