What this page answers
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B total parameters and 12B activated parameters, it achieves state-of-the-art results across a range of multimodal benchmarks.
- Zhipu AI (GLM) · z-ai/glm-4.5v
- text+image → text · China model route (minimal call sketch below)
- 65,536-token context · $0.60 input
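The route takes text plus images in and returns text. Below is a minimal call sketch, assuming an OpenAI-compatible /chat/completions endpoint; `BASE_URL`, `API_KEY`, and the image URL are placeholders, not values from this page.

```python
# Minimal sketch of a text+image -> text call against an assumed
# OpenAI-compatible endpoint. Confirm the real base URL and protocol
# on the provider profile before integrating.
import requests

BASE_URL = "https://example-provider.com/v1"  # placeholder, not the real route
API_KEY = "sk-..."                            # your key

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "z-ai/glm-4.5v",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/chart.png"}},
                ],
            }
        ],
        "max_tokens": 512,  # stays well inside the 65,536-token context
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```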
Before connecting
Do not stop at the model name. Before integrating, verify the base URL, protocol, visible models, supported parameters, and limits together. This route advertises the following parameters (a request sketch follows the list):
- supports frequency_penalty
- supports include_reasoning
- supports max_tokens
- supports presence_penalty
- supports reasoning
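These flags describe what the route accepts, not how every gateway names them. The sketch below shows one plausible request body exercising all five; the shapes of `reasoning` and `include_reasoning` in particular vary by gateway, so treat them as assumptions to check against the provider docs.

```python
# Sketch of a request body using the advertised parameters. Field
# names follow the common OpenAI-compatible convention; the reasoning
# fields are assumed, gateway-specific shapes.
payload = {
    "model": "z-ai/glm-4.5v",
    "messages": [{"role": "user", "content": "Summarize the attached chart."}],
    "max_tokens": 1024,          # hard cap on completion length
    "frequency_penalty": 0.2,    # discourage verbatim repetition
    "presence_penalty": 0.1,     # nudge toward new topics
    "reasoning": {"effort": "medium"},  # assumed shape; check provider docs
    "include_reasoning": True,   # assumed flag to return reasoning tokens
}
```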
Next action
This page is meant to catch the initial search, then route you into the model profile, the provider profile, and key checking:
- Check whether the model fits the use case
- Then verify key permission and callable models (see the sketch below)
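A quick way to verify key permission is to list the models the key can see and confirm z-ai/glm-4.5v appears. This sketch assumes an OpenAI-compatible GET /models endpoint; `BASE_URL` and `API_KEY` are placeholders.

```python
# Sketch of a key check: list models visible to this key and confirm
# the target route is callable. Assumes an OpenAI-compatible /models
# endpoint returning {"data": [{"id": ...}, ...]}.
import requests

BASE_URL = "https://example-provider.com/v1"  # placeholder
API_KEY = "sk-..."

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
visible = {m["id"] for m in resp.json().get("data", [])}
if "z-ai/glm-4.5v" in visible:
    print("Key can see z-ai/glm-4.5v")
else:
    print("Model not visible to this key; check permissions or the route")
```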