Local LLM for VS code copilot
From Master of Neuroscience Wiki
Revision as of 16:33, 9 December 2025
In [[Config VS Code (Insiders)]] we see how to add a custom LLM via the "OpenAI compatible API".
Adding a custom model to Copilot in VS Code Insiders is easy. Getting it to do agent tasks is much harder.
Here is an example: Qwen3 Coder
VS Code JSON:<syntaxhighlight lang="json">
"github.copilot.chat.customOAIModels": {
    "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8": {
        "name": "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8",
        "url": "http://gate0.neuro.uni-bremen.de:8000/v1",
        "toolCalling": true,
        "vision": false,
        "thinking": true,
        "maxInputTokens": 256000,
        "maxOutputTokens": 8192,
        "requiresAPIKey": false
    }
}
</syntaxhighlight>
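This fragment belongs inside the outer <code>{ ... }</code> of your <code>settings.json</code>. As a quick offline sanity check, a short Python sketch (not part of the Copilot setup, just an illustration) can parse the fragment and derive the chat-completions URL an OpenAI-compatible client would call — the hostname and model ID are simply the values from this example:<syntaxhighlight lang="python">
import json

# The settings fragment from above, wrapped in braces so it is a
# standalone valid JSON document.
fragment = """
{
  "github.copilot.chat.customOAIModels": {
    "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8": {
      "name": "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8",
      "url": "http://gate0.neuro.uni-bremen.de:8000/v1",
      "toolCalling": true,
      "vision": false,
      "thinking": true,
      "maxInputTokens": 256000,
      "maxOutputTokens": 8192,
      "requiresAPIKey": false
    }
  }
}
"""

settings = json.loads(fragment)
models = settings["github.copilot.chat.customOAIModels"]
model_id, cfg = next(iter(models.items()))

# Endpoint an OpenAI-compatible client would POST chat requests to:
# the configured base URL plus the standard /chat/completions path.
chat_url = cfg["url"].rstrip("/") + "/chat/completions"
print(model_id)
print(chat_url)
</syntaxhighlight>
If the server is reachable, pointing <code>curl</code> or any OpenAI client at <code>chat_url</code> with <code>"model"</code> set to <code>model_id</code> should return a completion; a parse error here usually means a missing brace or trailing comma in <code>settings.json</code>.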