Creon.AI · v0.3 · Waiting for approval

Local AI
that listens.

Cross-platform client for local and cloud LLMs. Voice and text. Projects, MCP tools, context compression, export. iOS · Android · macOS · Windows.

iOS Android macOS Windows Flutter · M3
creon.ai · qwen3.5-122b · q4 · 32k context
ctx
12.4k / 32k · 39%
● voice · 0:08
Show me how to structure a voice chat app with a local LLM.
creon.ai
Three layers: capture (Whisper / system STT), reasoning (LLM with streaming), and TTS. A single state machine connects them.
whisper 1.2s · llm 1.2s · 48.6 t/s · 112 tokens
Hold to talk
Voice Text
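The three-layer pipeline from the demo answer can be sketched as a single state machine. This is a minimal illustration only; the state and event names are assumptions, not Creon.AI's actual API:

```typescript
// States of the voice pipeline: capture → reason → speak.
type VoiceState = "idle" | "listening" | "transcribing" | "thinking" | "speaking";

interface Transition { from: VoiceState; event: string; to: VoiceState }

// One table connects STT, streaming LLM, and TTS.
const transitions: Transition[] = [
  { from: "idle",         event: "holdToTalk",   to: "listening" },
  { from: "listening",    event: "release",      to: "transcribing" },
  { from: "transcribing", event: "textReady",    to: "thinking" },
  { from: "thinking",     event: "firstToken",   to: "speaking" },
  { from: "speaking",     event: "playbackDone", to: "idle" },
];

function next(state: VoiceState, event: string): VoiceState {
  const tr = transitions.find((t) => t.from === state && t.event === event);
  if (!tr) throw new Error(`invalid event "${event}" in state "${state}"`);
  return tr.to;
}
```

Keeping every transition in one table is what makes "hold to talk, release to send" predictable: an event that arrives in the wrong state fails loudly instead of desynchronizing audio and text.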

Built for
real conversations.

Voice that doesn't lag. Text that doesn't hallucinate. Tools that actually run. A single client for every model — local first, cloud when you want it.

Voice & Text — one client

Hold to talk, release to send. Switch to text any time. Whisper STT, kokoro TTS, voice-activity detection with auto-stop on pause.

Local-first, cloud-ready

Run qwen3.5-122b at home. Or plug into OpenAI / Anthropic-compatible endpoints. Add models with explicit context windows.

Projects & threads

Group chats by project. Each project keeps its own context, files, and memory. Sidebar shows what's active and what's archived.

Context meter

Always-visible window indicator. At 80% it auto-compresses to summary or sliding window — your conversation never falls apart.

MCP tools

Exa Search · Filesystem · Memory — first-class. Add any MCP server (stdio / SSE / WebSocket). The model sees what you grant.

Network controls

Allowlist domains for the AI. Log every outbound request. First-time-ask dialog before a new domain. Privacy you can audit.
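The allowlist-plus-first-time-ask flow described above can be sketched as a small gate. Class and method names here are hypothetical, chosen for illustration:

```typescript
// Hypothetical outbound-request gate: allow known hosts,
// ask once for new ones, log every request for auditing.
type Decision = "allow" | "ask";

class NetworkGate {
  private log: string[] = [];
  constructor(private allowlist: Set<string>) {}

  // Check a URL against the allowlist; every lookup is logged.
  check(url: string): Decision {
    const host = new URL(url).hostname;
    this.log.push(host);
    return this.allowlist.has(host) ? "allow" : "ask";
  }

  // Called after the user approves a first-time-ask dialog.
  grant(host: string): void {
    this.allowlist.add(host);
  }

  requests(): readonly string[] {
    return this.log;
  }
}
```

The audit log is written before the decision, so even denied or pending requests leave a trace the user can inspect.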

MCP-native.

Model Context Protocol is built in. Three tools out of the box, more in one click. The agent sees only what your settings allow — and tells you when it does.

Exa Search

Web search for tool calls. api.exa.ai · API key managed in settings.

Filesystem

Read local files inside whitelisted paths. @modelcontextprotocol/server-filesystem

Memory

142 notes · 0.4 MB. Auto-save facts from conversations, mix in relevant memories on new threads.

Add any MCP server

stdio · SSE · WebSocket. Bring your own — the client speaks the protocol.
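A server entry supporting the three transports above could look like this. The field names and the SSE URL are illustrative assumptions, not Creon.AI's actual config schema; only the filesystem package name comes from the page:

```typescript
// Hypothetical shape of an MCP server entry.
// A discriminated union keeps each transport's fields separate.
type Transport =
  | { kind: "stdio"; command: string; args: string[] }
  | { kind: "sse"; url: string }
  | { kind: "websocket"; url: string };

interface McpServer {
  name: string;
  transport: Transport;
  enabled: boolean;
}

const servers: McpServer[] = [
  {
    name: "filesystem",
    transport: {
      kind: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem"],
    },
    enabled: true,
  },
  {
    // Example only: the real endpoint is configured in settings.
    name: "exa-search",
    transport: { kind: "sse", url: "https://example.com/mcp" },
    enabled: true,
  },
];
```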

tool call · exa.search
running query
"anthropic claude design labs warm typography"
✓ done · 1.4s · 8 sources
tool call · filesystem.read
~/Projects/spec.md
✓ done · 0.2s · 18.4 KB
tool call · memory.recall
project = "Creon.AI design"
✓ found 2 notes · 0.1s

Cowork mode.
Tasks that finish without you.

A separate mode where the AI runs autonomous jobs in the background. Multiple threads, parallel projects, schedule them or pin them — they keep working while the app is minimized.
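A background job record like the cards in this section might be modeled as follows; the types and the `summarize` helper are illustrative assumptions, not the app's internals:

```typescript
// Hypothetical cowork job record: status, progress, optional schedule.
type JobStatus = "queued" | "running" | "done";

interface CoworkJob {
  title: string;
  model: string;
  status: JobStatus;
  step?: { current: number; total: number }; // progress while running
  schedule?: string; // e.g. "weekly" for recurring jobs
}

// One-line summary for a job card.
function summarize(job: CoworkJob): string {
  if (job.status === "running" && job.step) {
    return `${job.title} · step ${job.step.current}/${job.step.total}`;
  }
  return `${job.title} · ${job.status}`;
}
```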

running · 4 min · qwen3.5-122b

ML jobs in UAE — search

Senior ML / Research Engineer roles in Dubai/Abu Dhabi with public salary. Collect into table.

exa.search · 12 · filesystem · memory · 3
step 4/8 · ~6 min remaining
running · 2 min · gpt-4o

SVG icon set for Creon.AI

24 line icons in Lucide style on the fire palette — radio, text, MCP, projects.

filesystem · memory
step 2/5 · ~4 min remaining
queued · in 12 min · claude-sonnet

Fintech competitor analysis

Compare 5 top fintech apps in UAE: features, UX, reviews. Runs every Monday at 9:00.

scheduled · weekly · est 18 min
done · 23 min ago · qwen3.5-122b

Press release — Creon.AI launch

Draft + 3 headline variants + key visual. Artifact attached.

filesystem · 4 files · memory · 8

Status, not silence.

You always know what the model is doing. Listening, transcribing, thinking, calling tools — every state has its own indicator with a live timer.

Listening
Transcribing speech whisper · 1.1s
Thinking 3s
Thinking deeply 27s
Searching the web exa.search · 1.4s
Reading files filesystem · 4 files
Recalling memory 142 notes
Replying 48.6 t/s
Speaking kokoro · ru-female
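The state list above maps naturally to a closed union with a label per state, so the UI can never show an indicator the model doesn't have. A minimal sketch; type and key names are assumptions:

```typescript
// Every model state gets exactly one visible label.
type AgentStatus =
  | "listening"
  | "transcribing"
  | "thinking"
  | "thinking-deeply"
  | "searching"
  | "reading-files"
  | "recalling-memory"
  | "replying"
  | "speaking";

// Record<AgentStatus, string> makes a missing label a compile error.
const labels: Record<AgentStatus, string> = {
  listening: "Listening",
  transcribing: "Transcribing speech",
  thinking: "Thinking",
  "thinking-deeply": "Thinking deeply",
  searching: "Searching the web",
  "reading-files": "Reading files",
  "recalling-memory": "Recalling memory",
  replying: "Replying",
  speaking: "Speaking",
};
```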

Every switch in one place.

Models, voice, tools, network, memory, context, interface, languages, hotkeys, privacy. All explained, all toggleable.

Models

Default LLM: qwen3.5-122b-a10b · q4 · 32k context · local
Local · Cloud
Available models: 3 local · 2 cloud · 5 total
Manage
Vision support: Send attached images to multimodal models

Context

Auto-compression: Summarize when context fills up
at 70% · at 80% · at 90%
Strategy: summary · sliding-window · hybrid
summary
Always show context %: Indicator in chat header
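The threshold-and-strategy settings above reduce to a simple policy check plus a fallback window. A sketch under assumed names; the sliding-window variant shown keeps the newest messages that fit a token budget:

```typescript
// Compression policy mirroring the settings: a fill threshold
// (0.7 / 0.8 / 0.9) and a strategy. Names are illustrative.
type Strategy = "summary" | "sliding-window" | "hybrid";

interface ContextPolicy {
  limit: number;     // context window in tokens, e.g. 32000
  threshold: number; // compress when used/limit reaches this
  strategy: Strategy;
}

function shouldCompress(usedTokens: number, p: ContextPolicy): boolean {
  return usedTokens / p.limit >= p.threshold;
}

// Sliding window: walk backwards, keep the newest messages
// that fit the budget, preserving their original order.
function slidingWindow<T extends { tokens: number }>(msgs: T[], budget: number): T[] {
  const kept: T[] = [];
  let used = 0;
  for (let i = msgs.length - 1; i >= 0; i--) {
    if (used + msgs[i].tokens > budget) break;
    kept.unshift(msgs[i]);
    used += msgs[i].tokens;
  }
  return kept;
}
```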

Multi-lingual

Speaks four languages.

Interface, voice, and tool calls. Switch any time — your projects don't lose context.

EN English · RU Русский · ZH 中文 · AR العربية

Status · v0.3

Waiting for
approval.

Creon.AI is in private testing. Want a build, a demo, or to integrate it into your stack? Reach out — we'll take it from there.

CEO@CREON.AE → Back to CREON