What this page answers
A text-based Mixture-of-Experts (MoE) model with 21B total parameters and 3B activated per token, delivering strong understanding and generation through heterogeneous MoE structures.
- Baidu Wenxin · baidu/ernie-4.5-21b-a3b
- text->text · China model track
- 120,000-token context · US$0.07 input
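For reference, a minimal sketch of calling this model through an OpenAI-compatible chat completions endpoint. The base URL and API-key environment variables are placeholders, as the page does not name a serving endpoint; only the model id `baidu/ernie-4.5-21b-a3b` comes from the listing above.

```python
import os

from openai import OpenAI

# Placeholder endpoint and key (assumptions, not from this page):
# point them at whichever OpenAI-compatible gateway serves the model.
client = OpenAI(
    base_url=os.environ["PROVIDER_BASE_URL"],
    api_key=os.environ["PROVIDER_API_KEY"],
)

# The model id matches the listing; the MoE router activates only
# ~3B of the 21B total parameters for each generated token.
response = client.chat.completions.create(
    model="baidu/ernie-4.5-21b-a3b",
    messages=[
        {"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

Because only a 3B-parameter subset is active per token, per-request compute (and typically price) tracks a small dense model while total capacity stays at 21B, which is why the input rate above is low for a model of this size.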