What this page answers
LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B-parameter footprint, with efficient edge inference and broad runtime support.
- Liquid · liquid/lfm-2.5-1.2b-instruct:free
- text->text · global model route
- 32,768 context · $0.00 input
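As a sketch of what a first call might look like, the snippet below builds a chat-completion request payload for the model slug listed above. The endpoint shape (an OpenAI-style `/chat/completions` body) is an assumption here; verify it against the provider's actual API reference before sending anything.

```python
import json

# Model slug taken from the listing above.
MODEL_ID = "liquid/lfm-2.5-1.2b-instruct:free"

def build_chat_request(user_message: str, max_tokens: int = 256) -> dict:
    """Build a minimal OpenAI-style chat-completions payload.

    The body shape is an assumption; check the provider's docs
    before relying on it.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize this page in one sentence.")
print(json.dumps(payload, indent=2))
```

Nothing is sent here; the payload is only constructed, so it can be inspected or logged before the first live request.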
Before connecting
Do not stop at the model name. Before integrating, verify the base URL, protocol, visible models, supported parameters, and rate limits together.
- supports frequency_penalty
- supports max_tokens
- supports min_p
- supports presence_penalty
- supports repetition_penalty
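Since only the parameters above are listed as supported, a defensive client might filter its sampling options down to that set before sending a request. This is a minimal sketch; the parameter spellings follow the common OpenAI-style convention, which is an assumption.

```python
# Sampling parameters listed as supported for this model route.
SUPPORTED_PARAMS = {
    "frequency_penalty",
    "max_tokens",
    "min_p",
    "presence_penalty",
    "repetition_penalty",
}

def filter_params(requested: dict) -> dict:
    """Drop any sampling option the route does not advertise."""
    return {k: v for k, v in requested.items() if k in SUPPORTED_PARAMS}

opts = filter_params({
    "max_tokens": 512,
    "min_p": 0.05,
    "temperature": 0.7,  # not in the supported list above, so dropped
})
print(opts)
```

Silently dropping unsupported keys avoids hard errors from strict endpoints; a stricter client could raise instead of filtering.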
Next action
The goal is to capture search demand, then route users into model profiles, provider profiles, and key checking.
- Check whether the model fits the use case
- Then verify key permissions and which models the key can call
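The second step above can be sketched as one helper: given the parsed JSON a models-listing endpoint returns for a key, confirm the key can see this model at all. The response shape (a top-level "data" list of objects with an "id" field) mirrors the common OpenAI-style listing and is an assumption.

```python
MODEL_ID = "liquid/lfm-2.5-1.2b-instruct:free"

def key_can_call(models_response: dict, model_id: str = MODEL_ID) -> bool:
    """Return True if the key's visible-model list includes the model.

    `models_response` is parsed JSON from a models endpoint; its
    shape here is an assumption ({"data": [{"id": ...}, ...]}).
    """
    visible = {m.get("id") for m in models_response.get("data", [])}
    return model_id in visible

# Example with a stubbed response instead of a live call.
stub = {"data": [{"id": "liquid/lfm-2.5-1.2b-instruct:free"},
                 {"id": "other/model"}]}
print(key_can_call(stub))  # True
```

Running this check against the real endpoint before the first chat request turns a confusing 404 or 403 at call time into an early, explicit failure.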