What this page answers
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, which is what allows it to scale to very long input contexts.
- MiniMax · minimax/minimax-m1
- text->text · China model route
- 1,000,000 context · 0.40 US$ input
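Given the model slug above, a request body could be assembled as a minimal sketch below. This assumes the model is served through an OpenAI-compatible chat-completions endpoint, which is a common convention for such listings but is not confirmed by this page; the field names follow that API shape.

```python
def build_chat_request(prompt: str,
                       model: str = "minimax/minimax-m1",
                       max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completions payload for the listed model slug.

    The endpoint shape is an assumption (OpenAI-compatible API);
    consult the provider's documentation before relying on it.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize the MiniMax-M1 architecture.")
print(payload["model"])  # minimax/minimax-m1
```

The payload would then be POSTed as JSON to the provider's chat-completions URL with an API key in the `Authorization` header.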