A Spotlight-style launcher for quick LLM actions. Cross-platform (macOS/Windows/Linux), built with Electron + HTML/JS and styled with Tailwind.
- Pops up with a global hotkey (default Alt+Space)
- Auto-reads text/image from the clipboard on open
- Actions via hotkeys: Proofread, Translate → English, Translate → …, Summarize, Rewrite in style
- Writes the LLM response back to the clipboard automatically
- Works with OpenAI, OpenAI-compatible endpoints, or Ollama (local)
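The clipboard round-trip described above maps directly onto Electron's `clipboard` module. A minimal sketch of the idea (the function names are illustrative, not the repo's actual code):

```js
const { clipboard } = require('electron');

// On window show: prefer an image if one is on the clipboard, else take text.
function readClipboardPayload() {
  const img = clipboard.readImage(); // NativeImage; empty when no image is present
  if (!img.isEmpty()) return { kind: 'image', dataUrl: img.toDataURL() };
  return { kind: 'text', text: clipboard.readText() };
}

// After the LLM answers, write the result back so it can be pasted anywhere.
const writeResult = (text) => clipboard.writeText(text);
```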
Prerequisites:

- Node.js 18+ and npm
- An LLM provider (choose one):
  - OpenAI API key
  - OpenAI-compatible server (base URL + key)
  - Ollama running locally (e.g., `ollama serve` with a model like `llama3.1:8b`)
Install:

```bash
git clone <this-repo> echo
cd echo
npm i
```
Create your config and env:
```bash
cp config.example.json config.json
# .env is NOT checked in
printf "OPENAI_API_KEY=\nOPENAI_COMPAT_KEY=\n" > .env
```
Edit `config.json` to select a provider and default model.
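For orientation, here is a plausible shape for `config.json`. Only `provider`, `openaiCompatible.apiBase`, and `ollama.host` are documented in this README; the other key names are assumptions, so check `config.example.json` for the real schema:

```json
{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "hotkey": "Alt+Space",
  "targetLanguage": "English",
  "openaiCompatible": { "apiBase": "https://my-gateway.example/v1" },
  "ollama": { "host": "http://localhost:11434" }
}
```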
Tailwind builds CSS → Electron serves the app.
Terminal A (Tailwind watch):

```bash
npm run tw:dev
```

Terminal B (Electron):

```bash
npm run dev
```
Open the app with the global hotkey (Alt+Space by default). Use the ⚙︎ Settings button to change hotkey, provider, model, API base, and a default target language.
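Under the hood this is Electron's `globalShortcut` API. A rough sketch of the registration (simplified, not the repo's exact code):

```js
const { app, globalShortcut, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  // The accelerator string would come from config.json; 'Alt+Space' is the default.
  const ok = globalShortcut.register('Alt+Space', () => {
    const win = BrowserWindow.getAllWindows()[0];
    win.isVisible() ? win.hide() : win.show();
  });
  if (!ok) console.warn('Hotkey is already taken by another app');
});

app.on('will-quit', () => globalShortcut.unregisterAll());
```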
In-app hotkeys ("Mod" is ⌘ on macOS, Ctrl elsewhere; see the sketch after this list):

- Mod+1 – Ask ("regular mode")
- Mod+2 – Proofread
- Mod+3 – Translate → English
- Mod+4 – Translate → (asks for language; uses default if set)
- Mod+5 – Summarize
- Mod+6 – Rewrite in style (asks for style)
- Ctrl/⌘ + Enter – Run the current action
- Esc – Close the window
- Clicking outside the window (blur) also hides it.
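A sketch of how the renderer could resolve Mod per platform. The bodies of the handlers are placeholders, and the `window.api.hide` bridge is an assumption about what the preload exposes:

```js
// "Mod" resolves to ⌘ (metaKey) on macOS and Ctrl elsewhere.
const isMac = navigator.platform.startsWith('Mac');

document.addEventListener('keydown', (e) => {
  const mod = isMac ? e.metaKey : e.ctrlKey;
  if (mod && e.key >= '1' && e.key <= '6') {
    e.preventDefault();
    console.log(`switch to action ${e.key}`); // real code would select the action
  } else if (mod && e.key === 'Enter') {
    console.log('run current action');        // real code would call the provider
  } else if (e.key === 'Escape') {
    window.api?.hide?.();                     // assumed preload bridge method
  }
});
```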
Provider setup:

OpenAI:

- Set `"provider": "openai"` in `config.json`
- Put your key in `.env` → `OPENAI_API_KEY=...`
- Configure model (e.g., `gpt-4o-mini`) and base (`https://api.openai.com/v1`)

OpenAI-compatible:

- Set `"provider": "openaiCompatible"`
- Configure `openaiCompatible.apiBase` (e.g., a self-hosted gateway)
- Put your key in `.env` → `OPENAI_COMPAT_KEY=...`

Ollama:

- Set `"provider": "ollama"`
- Ensure `ollama` is running (`ollama serve`)
- Choose a local model (e.g., `llama3.1:8b`) and set `ollama.host` (usually `http://localhost:11434`)
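All three providers funnel through one call site. A rough sketch of what `providers/providerManager.js` might do (the endpoint paths are the providers' documented defaults; the surrounding code is an assumption, not the repo's actual implementation):

```js
// Sketch of a provider dispatch. OpenAI and OpenAI-compatible servers share
// the same /chat/completions wire format; Ollama has its own /api/generate.
export async function complete(cfg, prompt) {
  if (cfg.provider === 'ollama') {
    const res = await fetch(`${cfg.ollama.host}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: cfg.model, prompt, stream: false }),
    });
    return (await res.json()).response;
  }

  const base = cfg.provider === 'openai'
    ? 'https://api.openai.com/v1'
    : cfg.openaiCompatible.apiBase;
  const key = cfg.provider === 'openai'
    ? process.env.OPENAI_API_KEY
    : process.env.OPENAI_COMPAT_KEY;

  const res = await fetch(`${base}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${key}` },
    body: JSON.stringify({ model: cfg.model, messages: [{ role: 'user', content: prompt }] }),
  });
  return (await res.json()).choices[0].message.content;
}
```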
Build Tailwind once, then package with electron-builder:
```bash
npm run tw:build
npm run dist
```
Artifacts for your platform will appear in `dist/`.
Project structure:

```
.
├─ main.js              # Electron main process (ESM)
├─ preload.cjs          # Preload (CommonJS) exposing window.api
├─ config.json          # Runtime settings (copy from config.example.json)
├─ src/
│  ├─ renderer.html     # UI shell
│  ├─ renderer.js       # UI logic & actions
│  ├─ tw.css            # Tailwind entry (source)
│  └─ styles.css        # Generated by Tailwind (do not edit)
└─ providers/
   ├─ providerManager.js
   ├─ openai.js
   └─ ollama.js
```
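Since `window.api` comes from the preload, it helps to know roughly what that file looks like. A minimal `contextBridge` sketch (the channel and method names are assumptions, not the repo's actual code):

```js
// preload.cjs — must stay CommonJS (see Troubleshooting below).
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('api', {
  run: (action, payload) => ipcRenderer.invoke('llm:run', { action, payload }), // assumed channel
  hide: () => ipcRenderer.send('window:hide'),                                  // assumed channel
});
```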
Troubleshooting:

- Window doesn’t react / buttons “do nothing”
  - Ensure `preload.cjs` is used in `webPreferences.preload` (path built with `__dirname`); see the sketch at the end of this section.
  - Open DevTools in dev: `win.webContents.openDevTools({ mode: 'detach' })`
  - Check the Console for errors.
- “Unable to load preload script”
  - Confirm the file exists and that the preload path uses `__dirname` (not `process.cwd()`).
  - Keep preload as CommonJS (`preload.cjs`).
- `window.api` is undefined in renderer
  - Preload didn’t load. Fix the preload path or syntax.
- Global hotkey doesn’t work
  - Change it in Settings (⚙︎) or edit `config.json` and restart.
- Ollama not responding
  - Verify `ollama serve` is running and the model is pulled: `ollama run llama3.1:8b`
Security notes:

- `contextIsolation: true` in the BrowserWindow
- Minimal CSP in `renderer.html` (`script-src 'self'`)
- API keys live in `.env` (never committed)
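The preload-path and `contextIsolation` points above come together in the `BrowserWindow` setup. A sketch of the relevant part of `main.js` (simplified, not the repo's exact code; since `main.js` is ESM, `__dirname` has to be derived):

```js
import { app, BrowserWindow } from 'electron';
import path from 'node:path';
import { fileURLToPath } from 'node:url';

// ESM has no __dirname; rebuild it from import.meta.url so the preload
// path resolves relative to this file, not the working directory.
const __dirname = path.dirname(fileURLToPath(import.meta.url));

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      preload: path.join(__dirname, 'preload.cjs'),
      contextIsolation: true,
      nodeIntegration: false,
    },
  });
  win.on('blur', () => win.hide()); // matches the hide-on-blur behavior above
  win.loadFile('src/renderer.html');
});
```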