v2.3.8
released this
2026-03-05 13:21:18 +01:00 | 0 commits to main since this release
New features
- Anytype integration - oAI can now read and write to your local Anytype knowledge base. Enable in Settings → Anytype (new tab). Requires the Anytype desktop app running locally. Available tools:
- Search across all spaces or within a specific one
- Read the full content of any object
- Append content to an existing object without rewriting it - preserves internal Anytype links and mention blocks
- Create new notes, tasks, and pages
- Surgically toggle individual checkboxes by text match (`anytype_toggle_checkbox`)
- Set task done/undone via native relation (`anytype_set_done`)
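oAI toggles checkboxes through the Anytype API; as a rough illustration of the text-match idea, here is a minimal sketch over a Markdown-style task list (the function name and format are hypothetical, not oAI's actual implementation):

```python
import re

def toggle_checkbox(markdown: str, match_text: str) -> str:
    """Toggle the first '- [ ]' / '- [x]' checkbox whose label contains match_text."""
    lines = markdown.splitlines()
    for i, line in enumerate(lines):
        m = re.match(r"^(\s*[-*] \[)( |x)(\] )(.*)$", line)
        if m and match_text.lower() in m.group(4).lower():
            new_state = "x" if m.group(2) == " " else " "
            lines[i] = m.group(1) + new_state + m.group(3) + m.group(4)
            return "\n".join(lines)  # only the matched line changes
    return markdown
```

Only the matching checkbox line is rewritten; the rest of the document (including links and mention blocks in the real Anytype case) is left untouched.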
- Model favourites - Star any model to mark it as a favourite. Starred models:
- Float to the top of the Default sort order in the model picker
- Can be filtered to show exclusively with the ☆ button in the model picker toolbar
- Show a filled yellow ★ in the model row, the Model Info sheet header, and the main header bar - toggling in any one location updates all three
- Are persisted across sessions
- Default model picker - Settings → General → Model Settings now has a proper Choose… button that opens the full model selector. A Clear button removes the default. Switching models during a chat session no longer overwrites the saved default - the default only changes when you explicitly set it in Settings.
Bug fixes & improvements
- Settings tab overflow - Adding the Anytype tab (11 tabs total) previously caused the rightmost tab to clip. Tab buttons are now slightly narrower to fit the full row.
- Image generation (tools path) - Fixed a lingering error where image-generation models that also support tools (e.g. GPT-5 Image via OpenRouter) produced an error response after generating images. The tools loop now correctly captures generated images, saves them to temp files, and invites the model to save them to the requested destination.
- Goodbye phrase false positives - "thanks", "thank you", "thx", "ty", and "done" have been removed from the auto-save trigger list - they fired too often on routine task requests. The trigger now activates only on clear farewells.
- Header bar - Widened spacing between the provider badge and model name; added a ★ favourite star directly in the header.
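As a sketch of the tightened trigger (the remaining farewell set below is an assumption for illustration; only the removed phrases are taken from these notes):

```python
# Hypothetical farewell set; the removed phrases are the ones named in the notes.
FAREWELLS = {"bye", "goodbye", "see you", "farewell", "good night"}
REMOVED = {"thanks", "thank you", "thx", "ty", "done"}  # fired too often

def is_farewell(message: str) -> bool:
    """Trigger auto-save only on clear goodbye phrases, not routine acknowledgements."""
    text = message.lower().strip(" !.?")
    return text in FAREWELLS
```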
Downloads
released this
2026-03-04 12:39:57 +01:00 | 2 commits to main since this release
Sorry for the rapid-fire release. There was still a bug in the image-generation handling for image-gen models. Tested and working - should be fixed now 🫰
Downloads
New version -> v2.3.6 Stable
released this
2026-03-04 10:26:36 +01:00 | 4 commits to main since this release
v2.3.6
New features
- Reasoning / Thinking tokens - Models that support extended thinking (DeepSeek R1, Qwen thinking, Claude 3.7+, o1/o3, Grok via OpenRouter) now stream their reasoning process live into a collapsible block above the reply. The block auto-expands while the model is thinking and collapses automatically when the final answer arrives. Configure in Settings → General → Features: toggle on/off, pick effort level (High ~80% / Medium ~50% / Low ~20% / Minimal ~10%), and optionally hide reasoning content entirely.
- Model selector improvements - Three quality-of-life upgrades to the ⌘M model picker:
- ⓘ info button on every row opens the full model info sheet without selecting the model.
- Description search - the search field now matches against model descriptions, not just name and ID.
- Sort menu - sort by Default / Price: Low to High / Price: High to Low / Context: High to Low.
- Thinking filter - A 🧠 quick-filter button in the model picker shows only reasoning-capable models. The 🧠 badge also appears on model rows and in the model info sheet.
- Localization - The UI is being translated into Norwegian Bokmål (nb), Swedish (sv), Danish (da), and German (de). The app uses your macOS language preference automatically. There is still a fair amount of English left in the other languages, but the work has started :-)
Bug fixes & improvements
- Image generation display - Images generated by GPT-5 Image (and similar image-output models via OpenRouter) now render inline in the chat. Previously the response bubble was empty despite tokens and cost being reported correctly.
- MCP folder selector - Fixed a bug that prevented selecting folders in the MCP allowed-folders list.
- EmbeddingService - Fixed crash in semantic search embeddings.
- MCP toggle removed from General → Features - The toggle was redundant; MCP is configured in its own dedicated tab.
- Paperless tab - Marked as Beta while remaining issues are addressed.
Downloads
iCloud Backup and more Stable
released this
2026-02-27 14:20:57 +01:00 | 8 commits to main since this release
v2.3.5
New features
- iCloud Backup - Settings → Backup (new tab 9). Back Up Now exports all non-encrypted settings to `~/iCloud Drive/oAI/oai_backup.json` (falls back to Downloads if iCloud Drive is unavailable). Restore from File… imports from any `.json` backup. API keys and passwords are intentionally excluded and must be re-entered after restoring on a new machine. The format is versioned for a future encrypted-credentials option.
- Tool call inspection - Clicking a `🔧 Calling: …` system message now expands it inline to show each tool's input arguments and result as pretty-printed JSON. A spinner indicates pending tools; a green checkmark shows when each one completes.
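The on-disk schema isn't documented in these notes; a purely hypothetical sketch of what a versioned backup with credentials excluded might look like (every field name here is invented for illustration):

```json
{
  "backupVersion": 1,
  "createdAt": "2026-02-27T14:20:57+01:00",
  "settings": {
    "defaultModel": "claude-sonnet-4-6",
    "temperature": 0.7
  },
  "credentials": null
}
```

Versioning the top-level format is what lets a future release add an encrypted `credentials` payload without breaking older backups.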
Bug fixes & improvements
- ⌘S saves, ⌘⇧S opens Stats - The shortcuts were swapped. `⌘S` now saves the current conversation (previously `⌘⇧S`); `⌘⇧S` now opens the Stats panel.
- ⌘K clears chat - Was documented in the help modal but never actually wired up. Now works.
- Load conversation keeps name - After opening a saved conversation, pressing `⌘S` now re-saves it under its original name. Previously the name was lost on load, causing `⌘S` to prompt for a new name every time.
- Settings modal width - Increased minimum width (740 → 860 px) so all 10 tabs fit without clipping.
Downloads
New features and bug fixes Stable
released this
2026-02-25 08:32:42 +01:00 | 10 commits to main since this release
v2.3.4
New features
- Bash execution - The AI can now run shell commands via `/bin/zsh`. Opt-in (disabled by default). When "Require Approval" is on, a sheet appears before each command showing the command, working directory, and a warning. Choose Allow Once or Allow for Session (skips further prompts for the rest of the chat). Approvals reset on new chat, model switch, or conversation load. Configure in Settings → MCP → Bash Execution.
- Auto-retry on API overload - When Anthropic returns a 529 Overloaded error, oAI now automatically retries up to 3 times with exponential backoff (2 s → 4 s → 8 s), showing a status message before each attempt. The error only surfaces if all retries fail.
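The retry logic above can be sketched in a few lines (a minimal, language-agnostic illustration, not oAI's Swift implementation; `OverloadedError` stands in for the 529 response):

```python
import time

class OverloadedError(Exception):
    """Stand-in for an HTTP 529 'Overloaded' response."""

def request_with_retry(send, max_retries=3, base_delay=2.0):
    """Retry an overloaded request with exponential backoff: 2 s, 4 s, 8 s."""
    for attempt in range(max_retries + 1):
        try:
            return send()
        except OverloadedError:
            if attempt == max_retries:
                raise  # all retries exhausted: surface the error to the user
            time.sleep(base_delay * (2 ** attempt))  # doubles each attempt
```

The key property is that the error is only re-raised after the final retry, matching the "only surfaces the error if all retries fail" behaviour.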
Bug fixes
- Auto-continue - The "Please continue from where you left off" message is no longer shown in the chat. The continue prompt is sent silently to the model; a quiet `↩ Continuing…` system message is shown instead.
- Email formatting - AI responses sent as HTML email no longer arrive with stray ` or ```html artefacts. The email handler now strips outer code fences from the AI response before conversion, and the system prompt explicitly instructs the model not to wrap replies in code blocks.
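Stripping an outer fence amounts to checking the first and last lines of the reply (a minimal sketch of the idea, not oAI's actual code):

```python
FENCE = "`" * 3  # a literal triple-backtick marker

def strip_outer_fence(text: str) -> str:
    """Remove one outer code fence (e.g. a ```html … ``` wrapper) if present."""
    lines = text.strip().splitlines()
    if len(lines) >= 2 and lines[0].startswith(FENCE) and lines[-1].strip() == FENCE:
        return "\n".join(lines[1:-1]).strip()  # drop opening and closing fence
    return text.strip()
```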
Misc
- /help modal - The help modal now shows keyboard shortcuts for commands that have one, in addition to a complete shortcut overview at the bottom.
Downloads
Bug fixes Stable
released this
2026-02-23 08:06:25 +01:00 | 14 commits to main since this release
v2.3.3
Bug fixes
- Model switching - Switching models mid-chat now correctly updates the active provider. Previously, API calls continued going to the old provider until the app was restarted.
- Model identity - Models accessed via OpenRouter (e.g. Kimi K2.5, Gemma) no longer misidentify themselves as Claude. The model's actual name is now injected into the system prompt.
- Anthropic cost display - Cost is now shown correctly for all Claude models, including newer ones without date suffixes (e.g. `claude-sonnet-4-6`). Unknown future models fall back to the correct pricing tier via prefix matching.
- Web search as a tool - When online mode is enabled with MCP active, `web_search` is now a proper callable tool the model can invoke on demand, rather than blindly injecting DuckDuckGo results into the user message.
- Update check - The version check on startup no longer causes a beach ball; it runs entirely on a background thread.
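Prefix-matched pricing fallback can be sketched like this (the prices and tier names are illustrative, not oAI's real table):

```python
# Illustrative (input, output) prices per million tokens; NOT oAI's actual table.
PRICING_TIERS = {
    "claude-opus": (15.00, 75.00),
    "claude-sonnet": (3.00, 15.00),
    "claude-haiku": (0.80, 4.00),
}

def pricing_for(model_id: str):
    """Resolve a model's pricing tier by longest matching ID prefix."""
    best = None
    for prefix, prices in PRICING_TIERS.items():
        if model_id.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, prices)
    return best[1] if best else None
```

Matching on the longest prefix means an unknown future `claude-sonnet-*` release still lands in the Sonnet tier.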
Downloads
New settings layout Stable
released this
2026-02-20 14:24:22 +01:00 | 17 commits to main since this release
v2.3.2
Settings redesign
The Settings panel has been rebuilt from the ground up:
- Icon toolbar - Nine tabs now shown as labelled SF Symbol icons, split into Core (General, MCP, Appearance, Advanced) and Extras (Shortcuts, Skills, Sync, Email, Paperless). Much less cramped than the old segmented picker.
- Card layout - Settings rows are grouped into rounded-rect cards with a material background, adapting cleanly to light and dark mode.
- Close button - The Done button is gone. A standard ✕ button sits in the top-left corner where macOS users expect it.
- Paperless tab - New tab for configuring the Paperless-NGX document integration.
Version & release check
- Release check - On startup, oAI checks whether a new release is available and shows a notice in the footer if so.
- Version info - The version number is shown in the footer.
Downloads
First Public Release Stable
released this
2026-02-19 16:51:36 +01:00 | 20 commits to main since this release
v2.3.1 — First Public Release
This is the first publicly available version of oAI.
oAI is a native macOS AI chat application built with SwiftUI. It connects to multiple AI providers, supports persistent conversations, and includes a broad set of power-user features — all packaged in a clean, keyboard-driven interface.
Providers & Models
oAI supports five AI providers out of the box:
- OpenRouter — Access hundreds of models from a single API key, including GPT-4o, Gemini, Llama, Mistral, and more
- Anthropic — Direct access to Claude (Opus, Sonnet, Haiku)
- OpenAI — GPT-4o, GPT-4 Turbo, o1, and others
- Google — Gemini Pro and Gemini Flash
- Ollama — Run models locally with no API key required
Switch providers and models at any time from the toolbar or with `⌘M`. Model capabilities (vision, tools, web search) are shown as badges in the header.
Chat Interface
- Streaming responses — Text appears word by word as the model generates it
- Markdown rendering — Full markdown support in assistant messages: headings, bold, italic, code blocks with syntax highlighting, tables, and lists
- Multi-line input — `Shift+Return` for newlines, `Return` (or `⌘Return`) to send
- Auto-continue — Long responses that hit token limits are automatically continued
- Copy button — Hover over any assistant message to copy it
- Star button — Mark important messages to prioritise them in long conversations
- Processing indicator — Animated indicator while the model is thinking
Slash Commands
Type `/` in the input bar to see available commands with autocomplete:

| Command | Description |
| --- | --- |
| `/new` | Start a new conversation |
| `/save <name>` | Save the current conversation |
| `/load` | Browse saved conversations |
| `/model` | Open model selector |
| `/clear` | Clear the current chat |
| `/retry` | Regenerate the last response |
| `/history` | Browse command history |
| `/online on\|off` | Toggle web search |
| `/memory on\|off` | Toggle conversation memory |
| `/mcp on\|off\|list\|add` | Manage file access |
| `/shortcuts` | Manage prompt shortcuts |
| `/skills` | Manage agent skills |
| `/stats` | Session statistics |
| `/export md` | Export conversation as Markdown |
| `/help` | Show all commands |
Shortcuts (Prompt Macros)
Define your own slash commands that expand to prompt templates. For example, `/summarize` could expand to "Please summarise the following in three bullet points:". Templates support an `{{input}}` placeholder so you can type additional context inline before sending.

Shortcuts are managed via `/shortcuts` or Settings → Shortcuts. Import and export as JSON.
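The expansion rule can be sketched in a few lines (an illustrative sketch of the placeholder behaviour, assuming input is appended when no placeholder exists; not oAI's actual code):

```python
def expand_shortcut(template: str, user_input: str) -> str:
    """Expand a prompt-macro template, substituting the {{input}} placeholder.
    If the template has no placeholder, the extra input is appended at the end."""
    if "{{input}}" in template:
        return template.replace("{{input}}", user_input)
    return (template + " " + user_input).strip()
```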
Agent Skills (SKILL.md)
Attach behavioural instruction files to the AI — compatible with the open SKILL.md standard. Active skills are injected into the system prompt automatically, giving the model persistent expertise or a specific persona. Import from `.md` files, toggle per-skill, and export back to `.md`.

Managed via `/skills` or Settings → Skills.
Conversation Management
- Persistent storage — All conversations saved to a local SQLite database
- Save indicator — Footer shows whether the current chat is saved, modified, or unsaved
- Quick save — `⌘⇧S` re-saves without typing a name again
- Search — Find saved conversations by keyword or by meaning (semantic search)
- Multi-select delete — Bulk-delete conversations from the list
- Export — Save any conversation as a Markdown file
Enhanced Memory & Context
Three optional systems to keep long conversations accurate and efficient:
Smart Context Selection
Intelligently picks which messages to include rather than sending the entire history. Always includes recent messages and any starred messages. Configurable token budget. Reduces API costs by up to 80% for long conversations.

Semantic Search
Generates embeddings for all messages and conversations, enabling meaning-based search ("find where we discussed authentication") rather than exact keyword matching. Supports OpenAI, OpenRouter, and Google embedding models.

Progressive Summarisation
When a conversation grows past a configurable threshold, older messages are automatically summarised into a compact 2–3 paragraph digest that is included in the system prompt. Keeps context window usage low without losing important history.

All three features are off by default and can be enabled independently in Settings → Advanced.
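Smart context selection boils down to a budgeted pick: recent and starred messages always go in, then older ones (newest first) while the token budget lasts. A minimal sketch, assuming per-message token counts are known; the field names and recency heuristic are illustrative, not oAI's actual algorithm:

```python
def select_context(messages, budget_tokens, recent_count=4):
    """Choose which messages to send: recent and starred are always included,
    then older messages newest-first while the token budget allows.
    Each message: {"tokens": int, "starred": bool, ...}."""
    n = len(messages)
    keep = set(range(max(0, n - recent_count), n))                # recent
    keep |= {i for i, m in enumerate(messages) if m["starred"]}   # starred
    spent = sum(messages[i]["tokens"] for i in keep)
    for i in reversed(range(max(0, n - recent_count))):           # older, newest first
        if i not in keep and spent + messages[i]["tokens"] <= budget_tokens:
            keep.add(i)
            spent += messages[i]["tokens"]
    return [messages[i] for i in sorted(keep)]                    # chronological order
```

Because only a budgeted subset of history is sent, long conversations stop growing the per-request token cost linearly, which is where the claimed savings come from.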
MCP — File Access
The Model Context Protocol gives the AI controlled access to files on your Mac. You define which folders are allowed, and the model can read, search, and (optionally) write files within them. Gitignore rules are respected when listing directories.
Configure allowed folders and permissions in Settings → MCP, or via `/mcp` commands.
Web Search (Online Mode)
Toggle online mode with `/online on` or the `⌘O` header indicator. When active, the AI can search the web via DuckDuckGo or Google to answer questions about current events, documentation, or anything beyond its training data.
Git Sync
Optionally back up and sync your conversation history to any Git repository. oAI exports conversations as human-readable Markdown files, commits them, and pushes automatically. On launch it pulls the latest changes and imports anything new.
Sync triggers: app start, model switch, idle period, and app quit. Configure in Settings → Sync.
Email Handler
An optional AI-powered email responder. oAI monitors an IMAP inbox and automatically replies to emails matching a configurable subject identifier (e.g. `[Jarvis]`). Replies are generated by the configured AI provider and sent via SMTP. All processed emails are logged.

Configure IMAP/SMTP credentials, the subject filter, AI model, and rate limits in Settings → Email.
Command History
Every message you send is stored in a searchable command history (up to 5,000 entries). Open it with `⌘H` or `/history`, then click any entry to load it back into the input bar.
macOS Integration
- Standard app menus — File menu with New Chat (`⌘N`), Open Chat (`⌘O`), Save Chat (`⌘⇧S`), and Export
- Toolbar — Quick access to conversations, history, model selector, stats, settings, and help
- Keyboard shortcuts — Most actions are reachable without touching the mouse
- Dark mode — Native dark appearance throughout
- Hidden title bar — Clean, distraction-free window chrome
Settings Overview
| Tab | Contents |
| --- | --- |
| Providers | API keys, default model, temperature, max tokens, streaming |
| MCP | File access folders and permissions |
| Sync | Git repository URL and credentials |
| Email | IMAP/SMTP configuration and AI model |
| Appearance | Text sizes, toolbar icon size and labels |
| Shortcuts | Prompt macro management |
| Skills | Agent skill management |
System Requirements
- macOS 14 (Sonoma) or later
- Apple Silicon or Intel Mac
- At least one configured AI provider API key (or Ollama running locally)
Known Limitations
- IMAP IDLE is not supported — the email handler polls every 30 seconds
- Notarization is not included in this release; Gatekeeper may require a one-time right-click → Open on first launch
- Ollama must be running locally before launching oAI for local models to appear
oAI is free software licensed under the GNU Affero General Public License v3.0 or later.
Downloads