What this page answers
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass. The key route details, used in the request sketch after this list, are:
- OpenAI · openai/gpt-oss-120b:free
- text → text · global model route
- 131,072 context · $0.00 input
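A minimal request sketch, assuming an OpenAI-compatible chat completions endpoint; the base URL and API key below are placeholders, not values taken from this page:

```python
# Minimal sketch: call the route named above through an OpenAI-compatible endpoint.
# Assumptions: the gateway speaks the OpenAI chat completions protocol, and the
# base URL / API key placeholders are replaced with real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example/v1",  # placeholder base URL
    api_key="YOUR_API_KEY",                      # placeholder key
)

resp = client.chat.completions.create(
    model="openai/gpt-oss-120b:free",  # slug listed above
    messages=[{"role": "user", "content": "Summarize what an MoE model is in one sentence."}],
)
print(resp.choices[0].message.content)
```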
Before connecting
Do not stop at the model name. Before integrating, verify the base URL, protocol, visible models, supported parameters, and rate limits together. The parameters this route reports are listed below, with a usage sketch after the list.
- supports include_reasoning
- supports max_tokens
- supports reasoning
- supports seed
- supports stop
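A sketch passing the reported parameters, assuming the same OpenAI-compatible endpoint as above; `reasoning` and `include_reasoning` are not standard chat-completions fields, so they are passed via `extra_body` here, which is an assumption about how this route accepts them:

```python
# Sketch: exercise the parameters the route reports (max_tokens, seed, stop,
# reasoning, include_reasoning). Base URL and key are placeholders as before.
from openai import OpenAI

client = OpenAI(base_url="https://your-gateway.example/v1", api_key="YOUR_API_KEY")

resp = client.chat.completions.create(
    model="openai/gpt-oss-120b:free",
    messages=[{"role": "user", "content": "List three uses of a seed parameter."}],
    max_tokens=256,   # cap the completion length
    seed=42,          # request reproducible sampling where supported
    stop=["\n\n"],    # stop sequence
    extra_body={
        # Route-specific fields; their exact shape is an assumption.
        "reasoning": {"effort": "medium"},
        "include_reasoning": True,
    },
)
print(resp.choices[0].message.content)
```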
Next action
This page is meant to catch search demand and then move readers into model profiles, provider profiles, and key checking.
- Check whether the model fits the use case
- Then verify key permissions and which models the key can call (see the sketch below)
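A key-check sketch, assuming the endpoint exposes an OpenAI-compatible GET /models listing; the base URL and key are placeholders as above:

```python
# Sketch: confirm the API key is accepted and that the route above is callable,
# assuming the gateway implements an OpenAI-compatible /models listing.
from openai import OpenAI

client = OpenAI(base_url="https://your-gateway.example/v1", api_key="YOUR_API_KEY")

visible = {m.id for m in client.models.list()}
target = "openai/gpt-oss-120b:free"

if target in visible:
    print(f"{target} is visible to this key")
else:
    print(f"{target} is NOT visible; check key permissions or the model slug")
```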