GLM 4.7 Flash

Other · chat · reasoning · tool_use · Long context

API ID: z-ai/glm-4.7-flash-20260119

Input Price
$0.06
/1M tokens
Output Price
$0.40
/1M tokens

About GLM 4.7 Flash

GLM 4.7 Flash is a budget-friendly general-purpose model from Other with ultra-long context (203k), suitable for conversations, content creation, and general AI tasks.

💰
Price Ranking
#671 lowest price among 950 Chat models

Model Specifications

Context Length
203k
Max Output
Not specified
Release Date
2026-01-19
Capabilities
chat reasoning tool_use
Input Modalities
text
Output Modalities
text

Best For

  • Complex reasoning, math problems, multi-step logic
  • Conversations, content writing, general assistance

Consider Alternatives For

  • Image understanding (needs vision capability)
  • Simple Q&A (cheaper models available)

💰 Real-World Cost Examples

Estimated monthly costs for common use cases

Personal AI Assistant
$0.15
/month
50 conversations/day, ~500 tokens each
Customer Service Bot
$4.50
/month
1000 tickets/day, ~800 tokens each
Data Analysis Pipeline
$6.15
/month
500 analyses/day, ~2k tokens each
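The estimates above follow from the listed per-token prices. A minimal sketch of the arithmetic, assuming a 50/50 split between input and output tokens (the page does not state the split it uses, so exact figures may differ slightly):

```python
# Listed prices for GLM 4.7 Flash (USD per 1M tokens).
INPUT_PRICE = 0.06
OUTPUT_PRICE = 0.40

def monthly_cost(requests_per_day, tokens_per_request,
                 output_fraction=0.5, days=30):
    """Rough monthly cost in USD; output_fraction is an assumption."""
    total_tokens = requests_per_day * tokens_per_request * days
    out_tokens = total_tokens * output_fraction
    in_tokens = total_tokens - out_tokens
    return (in_tokens * INPUT_PRICE + out_tokens * OUTPUT_PRICE) / 1_000_000

# "Personal AI Assistant" scenario: 50 conversations/day, ~500 tokens each.
print(round(monthly_cost(50, 500), 2))
```

With a 50/50 split this gives about $0.17/month for the assistant scenario; the page's $0.15 figure likely assumes a more input-heavy token mix.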

Other Model Lineup

Compare all models from Other to find the best fit

| Model                         | Input | Output | Context | Capabilities              |
|-------------------------------|-------|--------|---------|---------------------------|
| GLM 4.7 Flash (current)       | Free  | Free   | 203k    | chat, reasoning, tool_use |
| Riverflow V2 Max Preview      | Free  | Free   | 8k      | chat, vision, image_gen   |
| Riverflow V2 Standard Preview | Free  | Free   | 8k      | chat, vision, image_gen   |
| Riverflow V2 Fast Preview     | Free  | Free   | 8k      | chat, vision, image_gen   |
| AFM 4.5B                      | Free  | Free   | 66k     | chat                      |

Similar Models from Other Providers

Cross-brand alternatives with similar capabilities

Alibaba Qwen Qwen3 8B
Input: $0.05
Output: $0.40
Context: 41k
OpenAI GPT-5 Nano
Input: $0.05
Output: $0.40
Context: 400k
Alibaba Qwen Tongyi DeepResearch 30B A3B
Input: $0.09
Output: $0.40
Context: 131k
Nous Research Hermes 4 70B
Input: $0.11
Output: $0.38
Context: 131k

🚀 Quick Start

Get started with GLM 4.7 Flash API

OpenAI-compatible SDK
from openai import OpenAI

client = OpenAI(
    base_url="https://api.provider.com/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="z-ai/glm-4.7-flash-20260119",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
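Since the model lists tool_use among its capabilities, tools can be declared through the same OpenAI-compatible interface. A minimal sketch, assuming the provider follows the standard OpenAI tool-calling shape; the tool name, schema, and dispatcher below are illustrative, not part of the provider's documentation:

```python
import json

# An illustrative tool definition in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(name, arguments_json):
    """Run the named local function with the model-supplied JSON arguments."""
    args = json.loads(arguments_json)
    if name == "get_weather":
        # Placeholder implementation; a real app would query a weather API.
        return f"Sunny in {args['city']}"
    raise ValueError(f"unknown tool: {name}")

# Pass `tools=tools` to client.chat.completions.create(...); when the response
# contains `tool_calls`, run each one locally and send the result back to the
# model as a {"role": "tool"} message.
print(dispatch_tool_call("get_weather", '{"city": "Paris"}'))
```

The dispatcher is kept separate from the API call so the same tool logic works whether the model returns one call or several in a single turn.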