Trying out the Continue AI code assistant in VS Code
Just found out about the Continue extension for VS Code (and JetBrains). What’s cool about it is that you can set it up with a number of AI providers, including Ollama, which runs models locally on your computer.
To get started with Ollama, first install the app on your Mac from the official website, then pull and run a couple of models from your terminal:
```bash
ollama pull llama3.1:8b
ollama run llama3.1:8b
```
and also
```bash
ollama pull qwen2.5-coder:1.5b
ollama run qwen2.5-coder:1.5b
```
and
```bash
ollama pull nomic-embed-text
```
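Before moving on, it’s worth checking that all three models actually downloaded (`ollama list` is part of the standard Ollama CLI):

```bash
# List locally available models and their sizes
ollama list
```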
Then open this file
```
/Users/YOUR-MAC-USER-HERE/.continue/config.json
```
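If you prefer the terminal, and you have VS Code’s `code` command installed in your PATH (an assumption; it’s an optional install offered by VS Code itself), you can open the file directly:

```bash
# Open the Continue config in VS Code
code ~/.continue/config.json
```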
In this file you can set the models you would like to use. If you have API keys for hosted AI providers, you can configure them here as well.
Here is my config.json, where I have set up OpenAI, Claude, Mistral, Codestral, and Llama for the chatbot, and Qwen via Ollama for autocomplete:
```json
{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "sk-xxx",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses."
    },
    {
      "title": "Llama3.1 8B",
      "provider": "ollama",
      "model": "llama3.1:8b"
    },
    {
      "title": "Codestral",
      "provider": "mistral",
      "model": "codestral-latest",
      "apiKey": "xxx",
      "apiBase": "https://codestral.mistral.ai/v1",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses."
    },
    {
      "model": "claude-3-5-sonnet-latest",
      "provider": "anthropic",
      "apiKey": "sk-xxx",
      "title": "Claude 3.5 Sonnet"
    },
    {
      "title": "Mistral Large",
      "provider": "mistral",
      "model": "mistral-large-latest",
      "apiKey": "xxx"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder 1.5B",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b-base"
  },
  "contextProviders": [
    { "name": "code", "params": {} },
    { "name": "docs", "params": {} },
    { "name": "diff", "params": {} },
    { "name": "terminal", "params": {} },
    { "name": "problems", "params": {} },
    { "name": "folder", "params": {} },
    { "name": "codebase", "params": {} }
  ],
  "slashCommands": [
    {
      "name": "share",
      "description": "Export the current chat session to markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    },
    {
      "name": "commit",
      "description": "Generate a git commit message"
    }
  ],
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  },
  "reranker": {
    "name": "free-trial"
  }
}
```
If you have an API key for Codestral, Continue recommends using that model for code completion. In that case you can also use Mistral for embeddings, and the embeddingsProvider would look like this:
```json
"embeddingsProvider": {
  "provider": "mistral",
  "model": "mistral-embed",
  "apiBase": "https://api.mistral.ai/v1",
  "apiKey": "your-key-here"
},
```
That should be it. Once you save the file, the Continue extension should let you chat using the providers you set up.
It’s also possible to add new models from the “Add chat model” button.
Autocomplete should work too, with Ollama running locally on your Mac.
I’ll give it a try and see how it compares to other AI providers.