What this page covers
A sophisticated text-based Mixture-of-Experts (MoE) model with 21B total parameters and 3B activated per token, delivering strong text understanding and generation through the ERNIE 4.5 family's heterogeneous MoE structures and modality-isolated routing.
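The "21B total / 3B activated" split comes from sparse expert routing: a small router picks a few experts per token, so only a fraction of the parameters run for any given token. Below is a minimal, generic top-k MoE layer in PyTorch with made-up sizes; it illustrates the mechanism only and is not ERNIE 4.5's actual architecture (which additionally uses heterogeneous experts and modality-isolated routing).

```python
# Generic top-k MoE routing sketch; sizes are illustrative, not ERNIE's config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=64, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.gate(x)                    # (tokens, n_experts)
        topv, topi = scores.topk(self.k, dim=-1) # keep only k experts per token
        weights = F.softmax(topv, dim=-1)        # mixing weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = topi[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(8, 512)
print(moe(tokens).shape)  # torch.Size([8, 512]); only ~k/n_experts of expert params used per token
```

With k=2 of 64 experts, each token touches roughly 1/32 of the expert parameters, which is the same idea behind "21B total, 3B active."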
- Baidu · baidu/ernie-4.5-21b-a3b
- text->text · Chinese model track
- 120,000-token context · US$0.07 per 1M input tokens
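If the model is served behind an OpenAI-compatible chat endpoint, a request would pass the model ID listed above. The base URL and API key below are placeholders, not something this page specifies; a minimal sketch with the official `openai` Python client:

```python
# Hypothetical call through an OpenAI-compatible provider; base_url and api_key
# are placeholders -- substitute whatever service actually lists this model.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="baidu/ernie-4.5-21b-a3b",  # model ID from the card above
    messages=[{"role": "user", "content": "Summarize the ERNIE 4.5 MoE design in two sentences."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```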