Replies: 1 comment
-
Yeah, the beauty of this is that, once you've got it working, you can use VectorCode + codecompanion to refine and optimise your setup lol (at least that's the case when I'm working on the project). I'm also preparing a guide on querying the neovim docs/lua API. I've been using this trick and it's been working nicely, so I wanted to share it with people.
-
Yesterday I think I got it to work seamlessly, and maybe better than my expectations. When I first read about codecompanion and vectorcode, I was trying out some of the best-known plugins for nvim: chatgpt.nvim, copilot, codeium, cody, and so on. Every one of them was good for general things and knowledge. And my experience with coding is, well, a little under average. I'm a lot better musician ;)

Ok, let's make it short. A few weeks ago I decided to take codecompanion and vectorcode out for a spin, and yesterday I kind of found a sweet spot for me. This is incredible. "I" wrote my own ollama chat plugin in Python, with all the basic features. I did this earlier with another AI plugin (no names), but it was difficult, since it did not have the knowledge about my local repository that codecompanion now has with vectorcode. And oh my god, such accurate, detailed, and efficient responses. This took me up to another level. Suddenly I felt strong. Today I fixed a complex problem with my window manager Qtile and wayland, and reported a bug to Debian. Man, that was fun.

I don't think the config I run now is special or complicated. I know there is more, and maybe some of you also want to share experiences with your configs. Anyway, here is my last chat with codecompanion using vectorcode. This is how I use it.
```
## Me

@vectorcode do the ls command

## CodeCompanion (Gemini)

## Me

use the nvim project_root

## CodeCompanion (Gemini)

## Me

explain these configurations and how they are related. what do the different settings do?

## CodeCompanion (Gemini)

From the retrieved files, vectorcode.lua and codecompanion.lua configure the VectorCode tool and its integration with the codecompanion plugin. Let's break down the configurations:

vectorcode.lua:

- IS_DEV and dev_path (Lines 1-2): These variables conditionally set the plugin's directory. If IS_DEV is true, it uses the development path (~/dev/VectorCode); otherwise, it uses the standard plugin installation path.
- dir (Line 5): Sets the plugin directory based on IS_DEV and dev_path.
- "VectorCode" (Line 6): Specifies the plugin's name.
- version = "*" (Line 7): Indicates that the latest version of the plugin should be used.
- build = "pipx upgrade vectorcode" (Line 8): Specifies the command to build or upgrade the plugin using pipx.
- cmd = "VectorCode" (Line 10): Registers the VectorCode command.
- opts function (Lines 11-27): Defines the plugin's options:
  - async_backend = "lsp" (Line 13): Sets the asynchronous backend to LSP (Language Server Protocol).
  - notify = true (Line 14): Enables notifications.
  - on_setup = { lsp = true } (Line 15): Configures the plugin to set up the LSP server.
  - n_query = 10 (Line 16): Sets the default number of query documents to retrieve.
  - async_opts (Lines 17-25): Configures asynchronous options:
    - timeout_ms = 5000: Sets the timeout for asynchronous operations to 5000 milliseconds.
    - events = { "BufWritePost" }: Specifies the events that trigger asynchronous queries (in this case, after a buffer is written).
    - single_job = true: Ensures that only one asynchronous job runs at a time.
    - query_cb = require("vectorcode.utils").make_surrounding_lines_cb(-1): Defines a callback function to retrieve surrounding lines for the query.
    - debounce = -1: Disables debouncing (delaying) of the query.
    - n_query = 30: Sets a different n_query for the asynchronous queries, overriding the one set at line 16.
- config function (Lines 28-41): Configures the vectorcode_server LSP with the HTTP_PROXY and HTTPS_PROXY environment variables, then calls require("vectorcode").setup(opts) to set up the plugin with the defined options.
- dependencies = { "nvim-lua/plenary.nvim" } (Line 42): Specifies the plugin's dependencies.

codecompanion.lua:

- IS_DEV and dev_path (Lines 1-2): Similar to vectorcode.lua, these variables conditionally set the plugin's directory for development purposes.
- dir (Line 5): Sets the plugin directory based on IS_DEV and dev_path.
- "codecompanion" (Line 6): Specifies the plugin's name.
- dependencies (Lines 7-12): Specifies the plugin's dependencies, including VectorCode.
- config = true (Line 13): Enables the plugin's configuration.
- cmd (Lines 14-19): Registers the CodeCompanion, CodeCompanionCmd, CodeCompanionChat, and CodeCompanionActions commands.
- opts function (Lines 20-59): Defines the plugin's options:
  - adapters (Lines 22-30): Configures the language model adapters. In this case, it configures a Gemini adapter.
  - strategies (Lines 32-54): Defines the chat and inline strategies for using the language model:
    - chat (Lines 33-50): Configures the chat strategy:
      - adapter = "Gemini": Sets the adapter to Gemini.
      - slash_commands: Configures slash commands, including a codebase command that uses vectorcode.integrations.
      - tools: Configures tools, including VectorCode:
        - description: Describes the tool.
        - callback: Defines a callback function to run VectorCode.
        - default_num: Sets the default number of documents to retrieve with VectorCode to 15.
        - use_lsp: Enables LSP usage.
        - auto_submit: Automatically submits ls and query commands.
        - ls_on_start: Disables the ls command on start.
    - inline: Configures the inline strategy, setting the adapter to Gemini.
  - log_level = "DEBUG" (Line 56): Sets the log level to debug.

Relationship and Interaction:

- codecompanion.lua depends on VectorCode (as specified in the dependencies section).
- codecompanion.lua integrates VectorCode as a tool within its chat strategy. This allows users to run VectorCode from within CodeCompanion to retrieve project context.
- The vectorcode tool definition in codecompanion.lua customizes how VectorCode is used within CodeCompanion, setting a default number of documents to retrieve (default_num) and enabling automatic submission of ls and query commands (auto_submit).
- codecompanion.lua calls functions from vectorcode.integrations to create a slash command and a tool that are used inside the CodeCompanion chat interface.

In essence, vectorcode.lua sets up the core VectorCode functionality, while codecompanion.lua provides a way to use VectorCode within a chat-based interface, making it easier to access and utilize project context for code generation and assistance.

## CodeCompanion (Gemini)

The codecompanion plugin leverages the VectorCode plugin to get project information in order to provide context-aware code suggestions.

## Me

use the @vectorcode use the vectorcode project_root

## CodeCompanion (Gemini)

I need a query for the vectorcode tool. What information are you seeking from the vectorcode project root?

## Me

@vectorcode do the ls command

## CodeCompanion (Gemini)

## Me

use the vectorcode project_root

## CodeCompanion (Gemini)

## Me

what project_root do you activate?

## CodeCompanion (Gemini)

Based on the previous interaction, the current project_root being used is /home/cjoke/dev/VectorCode. This was explicitly set when you asked to "use the vectorcode project_root" and I followed with the <options><project_root> tag.

## Me

ok, so based on the information, can you go over the configuration and see if there are some possible improvements to make?

## CodeCompanion (Gemini)

Okay, let's review the configuration based on the files retrieved from the /home/cjoke/dev/VectorCode project root and identify potential improvements. I'll focus on the docs/cli.md file, which outlines the configuration options, and combine it with the other files.

Current Configuration (inferred from docs/cli.md and file structure):

- Embedding function: SentenceTransformerEmbeddingFunction if no config.json is present or if the embedding_function is not explicitly defined in it.
- Embedding params: {} if no config.json is present or if embedding_params is not explicitly defined in it.
- Host/port: 127.0.0.1:8000 for the ChromaDB server.
- Database path: ~/.local/share/vectorcode/chromadb/ for the local persistent database.
- Log path: ~/.local/share/vectorcode/ for the ChromaDB server logs.
- Chunk size: 2500 characters per chunk.
- Overlap ratio: 0.2. Note that if treesitter is working, overlap_ratio has no effect.
- Query multiplier: -1 (meaning all chunks are considered).
- Reranker: CrossEncoderReranker, with sentence-transformers/all-MiniLM-L6-v2 or a similar model.
- vectorcode.include and vectorcode.exclude: It's not clear whether these files exist, so I cannot comment on their current state.

Potential Improvements:

1. Create a config.json: The first and most important improvement is to create a config.json file in /home/cjoke/dev/VectorCode/.vectorcode to explicitly define the configuration. This allows for customization and avoids relying on defaults. This file would override the global configuration, so be sure to transfer the global settings to it.

2. Embedding function: Consider using a different embedding function based on your needs. The documentation mentions that ChromaDB supports other embedding functions, including those for Ollama and OpenAI. If you're running Ollama locally, using chromadb.utils.embedding_functions.OllamaEmbeddingFunction could be beneficial, especially if you want to avoid relying on external APIs. If using a self-hosted embedding model, be sure to pipx inject the required dependencies into the virtual env used by VectorCode. Example config.json snippet (for Ollama):

3. Hardware acceleration: Ensure hardware acceleration is properly configured. If using Nvidia, ensure CUDA is set up correctly. If using Intel, consider the OpenVINO backend as described in the documentation. Set the backend and device in embedding_params accordingly.

4. Chunk size and overlap ratio: While 2500 and 0.2 might work for some codebases, a smaller chunk size might be beneficial for finer-grained context. Keep the overlap_ratio low if treesitter is working correctly, to prevent useless tokens being vectorised. Chunking can be disabled (by setting chunk_size to a negative number), but large file sizes may cause issues with the embedding models.

5. Reranker: currently the CrossEncoderReranker. The documentation mentions the NaiveReranker as an alternative. The default CrossEncoderReranker uses the sentence-transformers/all-MiniLM-L6-v2 model. Consider experimenting with other cross-encoder models (e.g., larger, more accurate models) by specifying model_name_or_path in reranker_params.

6. HNSW settings: If you experience performance issues with queries, consider tuning the HNSW (Hierarchical Navigable Small World) index settings. The documentation provides a link to ChromaDB's HNSW configuration. Remember to re-vectorise after modifying these options.

7. Chunk filters: If there are specific code patterns that aren't necessary for semantic search (e.g. license headers, automatically generated code, empty lines), create chunk_filters to exclude them from vectorisation. This can reduce noise and improve search accuracy.

8. File specs (vectorcode.include and vectorcode.exclude): Create vectorcode.include and vectorcode.exclude files in /home/cjoke/dev/VectorCode/.vectorcode to manage which files are vectorised. Use vectorcode.include to specify which files should be vectorised when running vectorcode vectorise without specifying files, and vectorcode.exclude to exclude certain files or directories (e.g., test files, generated code).

9. LSP mode: Given the description of LSP mode, it's worth considering. However, note that it requires a separate ChromaDB server and currently only supports the query subcommand.

Example config.json (putting it all together - a more complete example):

Next Steps:

1. Create the .vectorcode directory in your project root if it doesn't exist.
2. Create a config.json file in the .vectorcode directory.
3. Populate the config.json file with the desired configurations.

By explicitly defining these configurations and experimenting with different values, you can optimize VectorCode for your specific codebase and needs. The most important aspect is to iteratively test and refine the settings based on your experiences.
```
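The example config.json snippets from the chat didn't survive the paste above, so here is my own rough sketch of what such a file could look like. The key names follow what the chat describes from docs/cli.md, but the Ollama URL and the `nomic-embed-text` model name are placeholders I picked; check them against the VectorCode CLI docs before using this:

```json
{
  "embedding_function": "OllamaEmbeddingFunction",
  "embedding_params": {
    "url": "http://127.0.0.1:11434/api/embeddings",
    "model_name": "nomic-embed-text"
  },
  "host": "127.0.0.1",
  "port": 8000,
  "chunk_size": 2500,
  "overlap_ratio": 0.2,
  "reranker": "CrossEncoderReranker",
  "reranker_params": {
    "model_name_or_path": "sentence-transformers/all-MiniLM-L6-v2"
  }
}
```

This goes in `.vectorcode/config.json` in the project root and overrides the global config, so copy over any global settings you want to keep.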
and here is my codecompanion.lua.
```lua
local IS_DEV = true
local dev_path = "~/dev/codecompanion.nvim/"

return {
  dir = IS_DEV and dev_path or nil,
  "codecompanion",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "VectorCode",
    -- "ibhagwan/fzf-lua", -- Added fzf-lua from commented config
  },
  config = true, -- Enable config
  cmd = {
    "CodeCompanion",
    "CodeCompanionCmd",
    "CodeCompanionChat",
    "CodeCompanionActions",
  },
  opts = function(_, opts)
    opts = opts or {}
    opts.adapters = {
      ["Gemini"] = function()
        return require("codecompanion.adapters").extend("gemini", {
          name = "Gemini",
          schema = { model = { default = "gemini-2.0-flash" } },
        })
      end,
    }
  end,
}
```
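The chat above also talks about a `strategies` section with the vectorcode tool, a `codebase` slash command, and `log_level`, which the snippet here doesn't show. A hedged sketch of how that part of the `opts` function might look; the `make_slash_command`/`make_tool` helper names are my assumptions about the `vectorcode.integrations` module mentioned in the chat, so verify them against the VectorCode README:

```lua
-- Sketch only: extends the opts function above with the strategies section
-- described in the chat transcript. Helper names (make_slash_command,
-- make_tool) are assumptions; check the VectorCode README before use.
opts.strategies = {
  chat = {
    adapter = "Gemini",
    slash_commands = {
      codebase = require("vectorcode.integrations").codecompanion.chat.make_slash_command(),
    },
    tools = {
      vectorcode = {
        description = "Run VectorCode to retrieve the project context.",
        callback = require("vectorcode.integrations").codecompanion.chat.make_tool({
          default_num = 15,                       -- documents to retrieve
          use_lsp = true,                         -- reuse the LSP backend
          auto_submit = { ls = true, query = true },
          ls_on_start = false,
        }),
      },
    },
  },
  inline = { adapter = "Gemini" },
}
opts.log_level = "DEBUG"
```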
and vectorcode.lua
```lua
local IS_DEV = true
local dev_path = "~/dev/VectorCode"

return {
  dir = IS_DEV and dev_path or nil,
  "VectorCode",
  version = "*",
  build = "pipx upgrade vectorcode",
  cmd = "VectorCode",
  opts = function()
    return {
      async_backend = "lsp",
      notify = true,
      on_setup = { lsp = true },
      n_query = 10,
      async_opts = {
        timeout_ms = 5000,
        -- events = { "BufWritePost", "InsertEnter", "BufReadPost" },
        events = { "BufWritePost" },
        single_job = true,
        query_cb = require("vectorcode.utils").make_surrounding_lines_cb(-1),
        debounce = -1,
        n_query = 30,
      },
    }
  end,
  config = function(_, opts)
    vim.lsp.config("vectorcode_server", {
      cmd_env = {
        HTTP_PROXY = os.getenv("HTTP_PROXY"),
        HTTPS_PROXY = os.getenv("HTTPS_PROXY"),
      },
    })
    require("vectorcode").setup(opts)
    -- vim.api.nvim_create_autocmd("LspAttach", {
    --   callback = function()
    --     require("vectorcode.config").get_cacher_backend().register_buffer(0)
    --   end,
    -- })
  end,
  dependencies = { "nvim-lua/plenary.nvim" },
}
```
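Besides the chat integration, you can also query the index straight from Lua. This is a hypothetical usage sketch: the `query()` signature and the `path` field on results are my assumptions from reading the VectorCode README, so verify before relying on it:

```lua
-- Hypothetical: synchronous query against the current project's collection.
-- query() signature and result fields are assumptions; check the README.
local vectorcode = require("vectorcode")
local results = vectorcode.query("gemini adapter setup", { n_query = 5 })
for _, item in ipairs(results or {}) do
  print(item.path) -- path of a retrieved document in the project
end
```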
I run chromadb as a standalone server, and the speed of this is also incredible.
@Davidyz @olimorris
Thank you for an awesome experience and plugins!
Please feel free to correct my configurations and/or share yours.
Love from Norway <3 :)