Cross-platform client for local and cloud LLMs. Voice and text. Projects, MCP tools, context compression, export. iOS · Android · macOS · Windows.
Voice that doesn't lag. Text that doesn't hallucinate. Tools that actually run. A single client for every model — local first, cloud when you want it.
Hold to talk, release to send. Switch to text any time. Whisper STT, Kokoro TTS, voice-activity detection with auto-stop on pause.
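Under the hood, auto-stop on pause is voice-activity detection over per-frame energy. A minimal sketch, illustrative only and not the shipped detector; the function name, threshold, and frame counts are assumptions:

```python
# Hypothetical sketch (not Creon.AI source): energy-threshold VAD
# that signals auto-stop once the speaker has paused long enough.

def should_stop(frames, threshold=0.02, pause_frames=30):
    """frames: per-frame energy values, newest last.
    Return True once the trailing `pause_frames` frames all fall
    below `threshold`, i.e. the speaker has paused."""
    if len(frames) < pause_frames:
        return False
    return all(f < threshold for f in frames[-pause_frames:])
```

A real detector would smooth the energy and adapt the threshold to ambient noise; the shape of the check stays the same.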
Run qwen3.5-122b at home. Or plug into OpenAI / Anthropic-compatible endpoints. Add models with explicit context windows.
Group chats by project. Each project keeps its own context, files, and memory. Sidebar shows what's active and what's archived.
Always-visible context-window indicator. At 80% fill it auto-compresses to a summary or a sliding window, so your conversation never falls apart.
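The sliding-window strategy can be sketched roughly as follows. This is an assumption-laden illustration, not Creon.AI's actual code: `count_tokens` stands in for the client's tokenizer, and the first message is assumed to be the system prompt:

```python
# Illustrative sliding-window compression: past 80% of the window,
# keep the system prompt plus the newest messages that still fit.

def compress(messages, window_tokens, count_tokens, threshold=0.8):
    used = sum(count_tokens(m) for m in messages)
    if used <= threshold * window_tokens:
        return messages                     # under the limit: keep everything
    system, rest = messages[:1], messages[1:]
    budget = threshold * window_tokens - count_tokens(system[0])
    kept = []
    for m in reversed(rest):                # walk newest-first
        cost = count_tokens(m)
        if cost > budget:
            break                           # older messages fall off
        kept.append(m)
        budget -= cost
    return system + list(reversed(kept))
```

The summary strategy would replace the dropped prefix with one generated summary message instead of discarding it.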
Exa Search · Filesystem · Memory — first-class. Add any MCP server (stdio / SSE / WebSocket). The model sees what you grant.
Allowlist domains for the AI. Log every outbound request. First-time-ask dialog before a new domain. Privacy you can audit.
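The allow-then-log flow might look like this in miniature. Names here are hypothetical, not the shipped API:

```python
# Hypothetical sketch of an audited network allowlist with a
# first-time-ask dialog, as described above.

class NetworkGate:
    def __init__(self, allowed, ask_user):
        self.allowed = set(allowed)    # domains the AI may reach
        self.ask_user = ask_user       # first-time-ask dialog callback
        self.log = []                  # every outbound request, auditable

    def request(self, domain):
        if domain not in self.allowed:
            if self.ask_user(domain):  # ask once for a new domain
                self.allowed.add(domain)
            else:
                self.log.append((domain, "blocked"))
                return False
        self.log.append((domain, "allowed"))
        return True
```

Approving a domain in the dialog adds it to the allowlist, so the question is asked only the first time.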
Model Context Protocol is built in. Three tools out of the box, more in one click. The agent sees only what your settings allow — and tells you when it does.
Web search for tool calls. api.exa.ai · API key managed in settings.
Read local files inside allowlisted paths. @modelcontextprotocol/server-filesystem
142 notes · 0.4 MB. Auto-save facts from conversations, mix in relevant memories on new threads.
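Mixing in relevant memories is, at its simplest, a retrieval step: score saved notes against the new thread's opening message and take the top matches. A toy word-overlap sketch, illustrative only; the real ranking is not specified here:

```python
# Illustrative memory retrieval: rank saved notes by word overlap
# with the new prompt and return the top matches.

def relevant_memories(notes, prompt, top_k=3):
    words = set(prompt.lower().split())
    scored = [(len(words & set(n.lower().split())), n) for n in notes]
    scored = [(s, n) for s, n in scored if s > 0]   # drop non-matches
    scored.sort(key=lambda x: -x[0])                # best overlap first
    return [n for _, n in scored[:top_k]]
```

A production version would use embeddings rather than word overlap, but the save-then-retrieve shape is the same.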
stdio · SSE · WebSocket. Bring your own — the client speaks the protocol.
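A server registry covering all three transports could look like this. Field names and URLs are illustrative placeholders, not Creon.AI's actual schema:

```python
# Plausible registry entries, one per transport the client speaks.
# Schema is an assumption; only the filesystem package name comes
# from the feature list above.

SERVERS = {
    "filesystem": {"transport": "stdio",
                   "command": ["npx", "@modelcontextprotocol/server-filesystem", "/allowed/path"]},
    "search":     {"transport": "sse",
                   "url": "https://example.com/mcp/sse"},
    "memory":     {"transport": "websocket",
                   "url": "wss://example.com/mcp"},
}

def validate(entry):
    """Reject entries whose transport and fields don't line up."""
    t = entry.get("transport")
    if t == "stdio":
        return isinstance(entry.get("command"), list) and bool(entry["command"])
    if t in ("sse", "websocket"):
        return isinstance(entry.get("url"), str) and "://" in entry["url"]
    return False
```

stdio servers are spawned as local processes; SSE and WebSocket servers are reached over the network, which is where the domain allowlist applies.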
A separate mode where the AI runs autonomous jobs in the background. Multiple threads, parallel projects, schedule them or pin them — they keep working while the app is minimized.
Senior ML / Research Engineer roles in Dubai/Abu Dhabi with public salary. Collect into table.
24 line icons in Lucide style on the fire palette — radio, text, MCP, projects.
Compare 5 top fintech apps in UAE: features, UX, reviews. Runs every Monday at 9:00.
Draft + 3 headline variants + key visual. Artifact attached.
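A weekly schedule like "every Monday at 9:00" reduces to a next-run computation. A minimal sketch, not the shipped scheduler:

```python
# Illustrative next-run calculation for a weekly job
# ("every Monday at 9:00" -> weekday=0, hour=9).
from datetime import datetime, timedelta

def next_run(now, weekday=0, hour=9):
    """Next datetime strictly after `now` on the given weekday/hour."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:                 # already passed this week
        candidate += timedelta(days=7)
    return candidate
```

The background runner then sleeps until `next_run(...)`, fires the job, and repeats, even while the app is minimized.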
You always know what the model is doing. Listening, transcribing, thinking, calling tools — every state has its own indicator with a live timer.
Models, voice, tools, network, memory, context, interface, languages, hotkeys, privacy. All explained, all toggleable.
Multilingual
Interface, voice, and tool calls. Switch any time — your projects don't lose context.
Status · v0.3
Creon.AI is in private testing. Want a build, a demo, or to integrate it into your stack? Reach out — we'll take it from there.