What this page covers
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that combines a linear-attention mechanism with a sparse mixture-of-experts (MoE) design, achieving higher inference efficiency.
- Alibaba Cloud · Tongyi · qwen/qwen3.5-flash-02-23
- text+image+video->text · Chinese model track
- 1,000,000-token context · US$0.065 input
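To make the two architectural ingredients concrete, here is a minimal NumPy sketch of the general pattern: a kernelized linear-attention block (O(n) in sequence length instead of the O(n²) softmax attention) followed by a sparse top-k mixture-of-experts feed-forward. This is an illustrative toy, not the actual Qwen3.5 implementation; the feature map, gating scheme, expert shapes, and all variable names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_attention(q, k, v):
    # Kernelized linear attention: compute phi(Q) (phi(K)^T V), which costs
    # O(n * d^2) instead of the O(n^2 * d) of softmax attention.
    # phi(x) = elu(x) + 1 is a common positive feature map (an assumption here).
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    qp, kp = phi(q), phi(k)
    kv = kp.T @ v                      # (d, d_v) summary of all keys/values
    z = qp @ kp.sum(axis=0)            # per-query normalizer, shape (n,)
    return (qp @ kv) / z[:, None]

def moe_ffn(x, w_gate, experts, top_k=2):
    # Sparse mixture-of-experts: route each token to its top-k experts and
    # combine their outputs with renormalized gate weights, so only a small
    # fraction of parameters is active per token.
    logits = x @ w_gate                           # (n, n_experts)
    idx = np.argsort(logits, axis=1)[:, -top_k:]  # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        g = logits[t, idx[t]]
        g = np.exp(g - g.max()); g /= g.sum()     # softmax over chosen experts
        for w, e in zip(g, idx[t]):
            out[t] += w * np.tanh(x[t] @ experts[e])
    return out

# Tiny random example: 8 tokens, model width 16, 4 experts.
n, d, n_experts = 8, 16, 4
x = rng.normal(size=(n, d))
w_gate = rng.normal(size=(d, n_experts))
experts = rng.normal(size=(n_experts, d, d)) * 0.1

h = x + linear_attention(x, x, x)    # attention sub-block with residual
y = h + moe_ffn(h, w_gate, experts)  # sparse-MoE feed-forward with residual
print(y.shape)  # (8, 16)
```

The efficiency claim falls out of the structure: the attention cost grows linearly with context length (relevant for the 1M-token window), and the MoE layer activates only `top_k` of the experts per token.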