This guide walks you through setting up the Ollama backend so it works seamlessly with the Ollama Client Chrome Extension. We'll cover installing Ollama, running a model (for example gemma3:1b), configuring OLLAMA_ORIGINS for CORS, and verifying your setup.

Visit the official website to download Ollama:
📥 Download Ollama: https://ollama.com/download
Once installed, start the Ollama server:
ollama serve
This launches the Ollama API at:
http://localhost:11434
Keep this running while using the extension.
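To confirm the server is up before going further, you can hit the root endpoint; it should reply with a short plain-text status such as "Ollama is running":

curl http://localhost:11434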
After installation, run a model of your choice:
ollama run gemma3:1b
Once downloaded, you're ready to chat!
💡 Replace gemma3:1b with other models like llama3, mistral, etc.
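To see which models you already have, or to download one ahead of time without starting a chat, the standard Ollama CLI commands are:

ollama list          # show models already downloaded
ollama pull llama3   # fetch a model without opening a chat session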
If you're seeing CORS errors or using Firefox, follow these platform-specific instructions to set OLLAMA_ORIGINS:
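If you start the server manually from a terminal (macOS or Linux), you can also set the variable inline for a quick test before making the change permanent:

OLLAMA_ORIGINS="chrome-extension://*,moz-extension://*" ollama serve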
🖥️ macOS (Launch Agent)
Edit the plist file:
nano ~/Library/LaunchAgents/com.ollama.server.plist
Add this inside <key>EnvironmentVariables</key>:
<key>OLLAMA_ORIGINS</key>
<string>chrome-extension://*,moz-extension://*</string>
Reload the agent:
launchctl unload ~/Library/LaunchAgents/com.ollama.server.plist
launchctl load -w ~/Library/LaunchAgents/com.ollama.server.plist
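For reference, once the entry is added, that part of the plist should look roughly like this (a sketch; your file will also contain other keys such as Label and ProgramArguments):

<key>EnvironmentVariables</key>
<dict>
    <key>OLLAMA_ORIGINS</key>
    <string>chrome-extension://*,moz-extension://*</string>
</dict>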
🐧 Linux (systemd)
Edit the service file:
sudo systemctl edit --full ollama.service
Add under [Service]:
Environment="OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*"
Reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart ollama
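To confirm systemd picked up the variable (assuming the service is named ollama.service, as in the standard install):

systemctl show ollama.service --property=Environment
# should print a line containing OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*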
🪟 Windows
Press Win + R, type sysdm.cpl, and press Enter.
Go to Advanced → Environment Variables.
Add a new User variable:
Name: OLLAMA_ORIGINS
Value: chrome-extension://*,moz-extension://*
Restart Ollama.
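Alternatively, the same user variable can be set from a Command Prompt with setx (setx only affects new processes, so restart Ollama and any open terminals afterwards):

setx OLLAMA_ORIGINS "chrome-extension://*,moz-extension://*"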
You can also allow local web apps like this:
OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*,http://localhost:3000
To verify the setup, open this in your browser:
http://localhost:11434
Or use curl:
curl http://localhost:11434/api/tags
You should see a JSON response with available models.
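The response is a JSON object whose models array lists each locally available model. If you have jq installed, you can print just the model names:

curl -s http://localhost:11434/api/tags | jq '.models[].name'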
You can configure:
- Base URL (default: http://localhost:11434)
- Model (gemma3:1b, llama3, etc.)

If something isn't working, check that:
- ollama serve is running
- The model has been pulled: ollama pull <model>
- The base URL points to localhost
- OLLAMA_ORIGINS is correctly set
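Putting it together, here is a quick sanity check you can run from a terminal (it assumes the default port and a pulled gemma3:1b; adjust the model name to whatever you use):

# Succeeds only if the server answers and the model is in the local list
curl -s http://localhost:11434/api/tags | grep -q 'gemma3:1b' \
  && echo "Ollama is reachable and gemma3:1b is available" \
  || echo "Check that 'ollama serve' is running and the model has been pulled"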