What this page covers
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B total parameters and 12B activated parameters, it achieves state-of-the-art results among open-source vision-language models of comparable scale on public multimodal benchmarks.
- Zhipu AI (GLM) · z-ai/glm-4.5v
- text+image → text · China model route
- 65,536-token context · $0.60 per 1M input tokens (see the request sketch below)
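For reference, below is a minimal sketch of sending a text+image request to this model through an OpenAI-compatible chat completions endpoint (OpenRouter is assumed as the provider here); the base URL, environment variable name, and image URL are illustrative assumptions, not details taken from this page.

```python
# Minimal sketch (not an official example): calling GLM-4.5V through an
# OpenAI-compatible chat completions endpoint such as OpenRouter.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",    # assumed OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],   # assumed env var holding your API key
)

# text+image -> text: one image plus a text prompt in a single user message
response = client.chat.completions.create(
    model="z-ai/glm-4.5v",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
            ],
        }
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

The request body follows the standard OpenAI vision message format (a `content` array mixing `text` and `image_url` parts), which is how most OpenAI-compatible gateways expose multimodal models.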