The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts design for higher inference efficiency. In overall performance it is second only to Qwen3.5-397B-A17B: its text capabilities significantly outperform Qwen3-235B-2507, and its visual capabilities surpass Qwen3-VL-235B.
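To make the "linear attention + sparse MoE" hybrid concrete, below is a minimal PyTorch sketch of one such block. Every choice in it (the elu(x)+1 feature map, eight experts with top-2 routing, the toy dimensions) is an assumed generic illustration, not Qwen's actual design; it only shows why an MoE model's activated parameters per token (the "A10B" in the name) stay far below its total parameter count.

```python
# Minimal sketch of a "linear attention + sparse MoE" hybrid block.
# All choices here (feature map, expert count, top-k routing, dimensions)
# are assumed for illustration -- not Qwen's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Kernelized attention, O(n) in sequence length (non-causal for brevity;
    a decoder would use a causal / cumulative-sum variant)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, seq, head_dim)
        q, k, v = (t.view(b, n, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        q, k = F.elu(q) + 1, F.elu(k) + 1                 # positive feature map
        kv = torch.einsum("bhnd,bhne->bhde", k, v)        # sum_n k_n v_n^T, O(n)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        return self.out(out.transpose(1, 2).reshape(b, n, d))


class SparseMoE(nn.Module):
    """FFN experts with a top-k router: each token activates only k experts,
    so activated parameters per token stay far below total parameters."""

    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        flat = x.reshape(-1, d)
        weights, idx = self.router(flat).softmax(-1).topk(self.k, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-k gates
        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            tok, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if tok.numel():
                out[tok] += weights[tok, slot, None] * expert(flat[tok])
        return out.reshape(b, n, d)


# Toy forward pass: one attention step followed by one MoE step.
x = torch.randn(2, 16, 64)
y = SparseMoE(64)(LinearAttention(64)(x))
print(y.shape)  # torch.Size([2, 16, 64])
```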
Input Price: $0.26 per 1M tokens (35% off)
Output Price: $2.08 per 1M tokens (35% off)
Context: 262K tokens
Weekly Tokens: 13.4B
Released: Feb 25, 2026
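At the listed rates, per-request cost is a simple linear function of token counts. The snippet below is a quick sanity check: the constants mirror the discounted prices above, while the example token counts are arbitrary.

```python
# Cost estimate at the listed discounted rates (USD per 1M tokens).
INPUT_PER_M = 0.26
OUTPUT_PER_M = 2.08

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request: linear in input and output token counts."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 50K-token prompt with a 2K-token completion:
print(f"${request_cost(50_000, 2_000):.4f}")  # -> $0.0172
```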
