LFM2.5-1.2B-Thinking (free)

Other · chat · reasoning · tool_use · Free

API ID: liquid/lfm-2.5-1.2b-thinking-20260120

Input Price
Free
/1M tokens
Output Price
Free
/1M tokens

About LFM2.5-1.2B-Thinking (free)

LFM is Liquid AI's foundation model series, built on an efficient architecture for sequence processing. The models aim for strong performance at practical resource requirements, and LFM variants span a range of parameter sizes (this release is 1.2B) for different deployment scenarios. The architecture is designed for efficient handling of long sequences, making LFM an option for developers who want capable AI on modest hardware.

🏆
Price Ranking
#1 lowest price among 950 Chat models

Model Specifications

Context Length
33k
Max Output
—
Release Date
2026-01-20
Capabilities
chat reasoning tool_use
Input Modalities
text
Output Modalities
text

Best For

  • Complex reasoning, math problems, multi-step logic
  • Conversations, content writing, general assistance

Consider Alternatives For

  • Image understanding (needs vision capability)
  • Simple Q&A (cheaper models available)
🎉

This model is completely free!

No token costs - use it without worrying about API bills.

Estimate Token Usage
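Even though this model is free, the 33k context limit still applies, so a rough token estimate is useful. The sketch below uses the common ~4 characters/token rule of thumb for English text; it is a heuristic, not this model's actual tokenizer, and real counts vary by language and content.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 chars/token rule of thumb.

    This is a heuristic, not the model's real tokenizer; actual
    counts vary with language, code, and formatting.
    """
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("Hello, world!"))  # 13 characters -> about 3 tokens
```

For exact counts you would need the provider's tokenizer; this is only for ballpark budgeting against the 33k window.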

Other Model Lineup

Compare all models from Other to find the best fit

Model                                 | Input | Output | Context | Capabilities
LFM2.5-1.2B-Thinking (free) (current) | Free  | Free   | 33k     | chat, reasoning, tool_use
Riverflow V2 Max Preview              | Free  | Free   | 8k      | chat, vision, image_gen
Riverflow V2 Standard Preview         | Free  | Free   | 8k      | chat, vision, image_gen
Riverflow V2 Fast Preview             | Free  | Free   | 8k      | chat, vision, image_gen
AFM 4.5B                              | Free  | Free   | 66k     | chat

Similar Models from Other Providers

Cross-brand alternatives with similar capabilities

Google Gemma 3n 4B
Input: Free
Output: Free
Context: 33k
Meta Llama 3.2 3B Instruct
Input: Free
Output: Free
Context: 80k
Alibaba Qwen Qwen2.5-VL 7B Instruct
Input: Free
Output: Free
Context: 33k
ByteDance Seedream 4.5
Input: Free
Output: Free
Context: 4k

🚀 Quick Start

Get started with LFM2.5-1.2B-Thinking (free) API

OpenAI-compatible SDK
from openai import OpenAI

# Point the SDK at your OpenAI-compatible gateway (placeholder URL).
client = OpenAI(
    base_url="https://api.provider.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="liquid/lfm-2.5-1.2b-thinking-20260120",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
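The SDK snippet above covers plain chat. Since the model also lists tool_use, a request carrying a tool schema can be sketched with only the standard library. The endpoint URL, tool name, and schema below are illustrative placeholders in the widely used OpenAI function-calling format, not documented values for this provider.

```python
import json
import urllib.request

API_URL = "https://api.provider.com/v1/chat/completions"  # placeholder endpoint
MODEL_ID = "liquid/lfm-2.5-1.2b-thinking-20260120"

# Hypothetical tool definition in the OpenAI function-calling format.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def build_request(user_message: str, api_key: str) -> urllib.request.Request:
    """Assemble a chat-completion request that advertises the tool schema."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
        "tools": TOOLS,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send the request (needs a valid key and endpoint):
# urllib.request.urlopen(build_request("Weather in Oslo?", "YOUR_API_KEY"))
```

If the model decides to call the tool, the response's `tool_calls` field (per the OpenAI-compatible convention) would carry the function name and arguments for your code to execute.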