Llama 3 Soliloquy 7B v3 32K

Provider: Other · Category: Chat · Pricing: Free

API ID: lynn/soliloquy-v3

Input Price
Free
/1M tokens
Output Price
Free
/1M tokens

About Llama 3 Soliloquy 7B v3 32K

Soliloquy is Lynn's creative model series, optimized for narrative generation and expressive writing. It excels at storytelling, engaging dialogue, and stylistically distinctive creative content, making it a natural fit for developers building creative writing applications.

๐Ÿ†
Price Ranking
#1 lowest price among 950 Chat models (Top 20% cheapest!)

Model Specifications

Context Length
33k
Max Output
—
Release Date
2024-08-24
Capabilities
chat
Input Modalities
text
Output Modalities
text

Best For

  • Conversations, content writing, general assistance

Consider Alternatives For

  • Image understanding (needs vision capability)
🎉 This model is completely free!

No token costs: use it without worrying about API bills.

Estimate Token Usage
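
This page does not publish the model's tokenizer, so for budgeting purposes you can approximate token counts from character length; English text averages roughly 4 characters per token. A minimal sketch of that heuristic (the ratio is an assumption, not an exact figure for this model):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: English text averages ~4 characters per token."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Write a short story about a lighthouse keeper."
print(estimate_tokens(prompt))  # -> 12 by this heuristic
```

Since the 32K context window covers the prompt and the completion together, keep your estimated prompt size well under the limit to leave room for output.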

Other Model Lineup

Compare all models from Other to find the best fit

Model Input Output Context Capabilities
Llama 3 Soliloquy 7B v3 32K Current Free Free 33k chat
Riverflow V2 Max Preview Free Free 8k chat vision image_gen
Riverflow V2 Standard Preview Free Free 8k chat vision image_gen
Riverflow V2 Fast Preview Free Free 8k chat vision image_gen
AFM 4.5B Free Free 66k chat

Similar Models from Other Providers

Cross-brand alternatives with similar capabilities

Google Gemma 3n 4B
Input: Free
Output: Free
Context: 33k
Meta Llama 3.2 3B Instruct
Input: Free
Output: Free
Context: 80k
Alibaba Qwen Qwen2.5-VL 7B Instruct
Input: Free
Output: Free
Context: 33k
ByteDance Seedream 4.5
Input: Free
Output: Free
Context: 4k

🚀 Quick Start

Get started with Llama 3 Soliloquy 7B v3 32K API

OpenAI-compatible SDK
from openai import OpenAI

# Point the client at your provider's OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.provider.com/v1",  # replace with your provider's base URL
    api_key="YOUR_API_KEY",
)

# Request a chat completion from the model by its API ID.
response = client.chat.completions.create(
    model="lynn/soliloquy-v3",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
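
Because the 32K context window must hold the whole conversation plus the reply, long chats eventually need their history trimmed before each request. A minimal sketch using a rough 4-characters-per-token estimate; `trim_history` is a hypothetical helper, not part of any SDK:

```python
def trim_history(messages, max_tokens=30000, chars_per_token=4.0):
    """Keep the newest messages whose estimated token total fits the budget,
    always preserving system messages (hypothetical helper, heuristic sizing)."""
    def est(m):
        # Rough per-message cost: ~4 characters per token for English text.
        return max(1, round(len(m["content"]) / chars_per_token))

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    budget = max_tokens - sum(est(m) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first, keep while budget allows
        cost = est(m)
        if cost > budget:
            break
        kept.append(m)
        budget -= cost
    return system + list(reversed(kept))
```

Pass the trimmed list as `messages` in the `chat.completions.create` call above; the budget is set below 32K to leave headroom for the model's reply.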