What this page answers
Mistral NeMo is a 12B-parameter model with a 128k-token context window, built by Mistral AI in collaboration with NVIDIA. It is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, and more.
- Mistral AI · mistralai/mistral-nemo
- text → text · global model route
- 131,072-token context · $0.02 input
Before connecting
Do not stop at the model name. Before integrating, verify the base URL, protocol, visible models, supported parameters, and limits together. This route supports the following generation parameters (see the request sketch after the list):
- frequency_penalty
- max_tokens
- min_p
- presence_penalty
- repetition_penalty
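To make the parameter list concrete, here is a minimal request sketch, not a definitive client. It assumes an OpenAI-compatible chat-completions protocol; the base URL and the `ROUTER_API_KEY` environment variable are placeholders, and `min_p` and `repetition_penalty` are not part of the standard OpenAI schema, so whether they pass through depends on the route.

```python
# Sketch only: OpenAI-compatible chat-completions request with the
# parameters this route lists as supported. Base URL and env var name
# are assumptions, not taken from this page.
import os
import requests

BASE_URL = "https://api.example-router.com/v1"  # placeholder; use your provider's base URL
API_KEY = os.environ["ROUTER_API_KEY"]          # hypothetical env var name

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistralai/mistral-nemo",
        "messages": [{"role": "user", "content": "Summarize this page in one sentence."}],
        # Parameters listed above as supported by this route:
        "max_tokens": 256,
        "frequency_penalty": 0.1,
        "presence_penalty": 0.1,
        "min_p": 0.05,               # non-standard; many routers accept it as an extra field
        "repetition_penalty": 1.05,  # non-standard; same caveat
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```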
Next action
The goal is to capture search demand, then move users into model profiles, provider profiles, and key checking.
- Check whether the model fits the use case
- Then verify the key's permissions and which models it can call (a verification sketch follows)
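A sketch of that second step, under the same assumptions as above: an OpenAI-compatible API exposing `GET /models`, a placeholder base URL, and a hypothetical `ROUTER_API_KEY` environment variable. A 401 response is treated as a rejected key, which is a common convention rather than something this page confirms.

```python
# Verification sketch: check that the key is accepted and that the target
# model is visible (and therefore callable) with this key.
import os
import requests

BASE_URL = "https://api.example-router.com/v1"  # placeholder base URL
API_KEY = os.environ["ROUTER_API_KEY"]          # hypothetical env var name

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
if resp.status_code == 401:
    raise SystemExit("Key rejected: check permissions before going further.")
resp.raise_for_status()

# Collect the model ids visible to this key and confirm the target is among them.
visible = {m["id"] for m in resp.json().get("data", [])}
target = "mistralai/mistral-nemo"
print(f"{len(visible)} models visible to this key")
print("target callable" if target in visible else "target NOT visible to this key")
```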