forked from cline/cline
Closed
Labels
Issue - In Progress (someone is actively working on this; should link to a PR soon), bug (something isn't working)
Description
App Version
3.11.12
API Provider
Ollama
Model Used
qwq:32b, qwen2.5-coder:32b, and many more.
Actual vs. Expected Behavior
I set the context size to match the VRAM capacity of my GPUs; for a 32b model, that means around 24k tokens. However, Roo Code's context-length display says my model has a 128k context length.
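To illustrate why a 24k window is roughly what fits alongside a 32b model, here is a back-of-the-envelope KV-cache size estimate. The layer/head numbers below are assumptions for a Qwen2.5-32B-class architecture (GQA), not values read from this setup's actual GGUF metadata:

```python
# Rough KV-cache VRAM estimate for a given context window (num_ctx).
# Architecture numbers are assumed, not taken from the real model file.
def kv_cache_bytes(n_ctx, n_layers=64, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # Two tensors (K and V) per layer; one head_dim vector per token per KV head;
    # fp16 -> 2 bytes per element.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_ctx

for n_ctx in (24_576, 131_072):
    gib = kv_cache_bytes(n_ctx) / 2**30
    print(f"num_ctx={n_ctx}: ~{gib:.1f} GiB of KV cache")
```

Under these assumptions a 24k window needs about 6 GiB of KV cache on top of the model weights, while the 128k window Roo Code displays would need roughly 32 GiB, which is why the displayed context length matters.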
The documentation lists an optional step:
- (Optional) Configure Model context size in Advanced settings, so Roo Code knows how to manage its sliding window.
I could not find this setting anywhere.
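As a workaround on the Ollama side, the context window can be capped in Ollama itself rather than in Roo Code, using a Modelfile (`PARAMETER num_ctx` is standard Ollama Modelfile syntax; the model tag and the 24k value here are just taken from this report):

```
FROM qwen2.5-coder:32b
PARAMETER num_ctx 24576
```

Then `ollama create qwen2.5-coder-24k -f Modelfile` produces a tag with the smaller window. Note this only changes the server-side context; it would not fix what Roo Code's context-length display reports.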
Detailed Steps to Reproduce
Follow the setup documentation for an Ollama connection.
Relevant API Request Output
Additional Context
I want to change the context length to see whether it affects a separate bug I am hitting involving a file_count problem.