A Model Context Protocol (MCP) server that provides a RAG (Retrieval Augmented Generation) tool with hybrid search capabilities combining vector similarity and keyword metadata search for code retrieval. Built on the CocoIndex data transformation framework with specialized support for multiple programming languages.
This RAG MCP server enables AI tools (LLMs) to retrieve relevant code snippets from large codebases efficiently and in real-time, leveraging CocoIndex's incremental indexing, tree-sitter based chunking, and smart language-specific embeddings. It enhances the performance of code generation, code completion, and code understanding by virtually enlarging the context window available to the AI models.
Currently uses PostgreSQL + pgvector as the vector database backend, but can be adapted to other backends supported by CocoIndex.
- Quickstart
- Command Line Arguments
- Features
- Supported Languages
- Smart Embedding
- Development
- Contributing
## Quickstart

```shell
git clone --recursive https://github.com/aanno/cocoindex-code-mcp-server.git
cd cocoindex-code-mcp-server
```

Checking out the sources is not strictly necessary if you just want to use the MCP server, as it can be installed from PyPI. However, some scripts, e.g. for starting the pgvector database, are missing from the PyPI package.
Build from source using maturin:
```shell
# Install dependencies from PyPI
uv sync
uv sync --all-extras

# And build from source
maturin develop
```

Or simply install from PyPI:

```shell
pip install cocoindex-code-mcp-server
```

I provide native wheels for many systems (including Linux, Windows, and macOS) on PyPI, so no build should be necessary in most cases. cocoindex-code-mcp-server needs Python 3.11+ (and I prefer to build abi3 wheels for better compatibility).
In one terminal on your local machine, start the pgvector database:
```shell
cd cocoindex-code-mcp-server
./scripts/cocoindex-postgresql.sh

# You may need to install the pgvector extension once
./scripts/install-pgvector.py
```

Using the scripts is optional; however, you need a running PostgreSQL + pgvector database for the MCP server to work.
cocoindex_code_mcp_server uses the COCOINDEX_DATABASE_URL environment variable to connect to the database.
It reads the .env file in the current directory if present. You can copy the provided .env.template to .env and
adjust the connection string if needed.
The current directory does not need to be the directory that you want to scan (see section 'Command Line Arguments' below for details).
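For illustration, a minimal .env could contain a single line like the following. The user, password, host, port, and database name are placeholders for your own setup; the actual defaults are in .env.template:

```
COCOINDEX_DATABASE_URL=postgresql://user:password@localhost:5432/cocoindex
```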
```shell
cp .env.template .env
```

In another terminal, start the cocoindex_code_mcp_server:
```shell
cd cocoindex-code-mcp-server
python -m cocoindex_code_mcp_server.main_mcp_server --rescan --port 3033 <path_to_code_directory>
```

The server will index the code in the specified directory and start serving requests. This will take some time. It is ready when you see something like:
```
CodeEmbedding.files (batch update): 1505 source rows NO CHANGE
```
The PyPI package also provides a console script for starting the server: cocoindex-code-mcp-server <options> <root-source-dir>. Remember that you need a running PostgreSQL + pgvector database for this to work.
You can now use the RAG server running at http://localhost:3033 as a streaming HTTP MCP server. For example, with Claude Code, use the following snippet within "mcpServers" in your .mcp.json file:
```json
{
  "cocoindex-rag": {
    "command": "pnpm",
    "args": [
      "dlx",
      "mcp-remote@next",
      "http://localhost:3033/mcp"
    ]
  }
}
```

## Command Line Arguments

| Argument | Type | Default | Description |
|---|---|---|---|
| paths | positional | - | Path(s) to code directory/directories to index (can specify multiple) |
| --paths | option | - | Alternative way to specify paths (can be used multiple times) |
| --no-live | flag | false | Disable live update mode |
| --poll | int | 60 | Polling interval in seconds for live updates |
| --default-embedding | flag | false | Use default CocoIndex embedding instead of smart embedding |
| --default-chunking | flag | false | Use default CocoIndex chunking instead of tree-sitter/AST chunking |
| --default-language-handler | flag | false | Use default CocoIndex language handling |
| --chunk-factor-percent | int | 100 | Chunk size scaling factor as percentage (100=default, <100=smaller, >100=larger) |
| --port | int | 3000 | Port to listen on for HTTP |
| --log-level | string | INFO | Logging level (DEBUG, INFO, WARNING, ERROR) |
| --json-response | flag | false | Enable JSON responses instead of SSE streams |
| --rescan | flag | false | Clear database and tracking tables before starting to force re-indexing |
```shell
# Index a single directory with live updates
python -m cocoindex_code_mcp_server.main_mcp_server /path/to/code

# Index multiple directories
python -m cocoindex_code_mcp_server.main_mcp_server /path/to/code1 /path/to/code2

# Force re-indexing with custom port
python -m cocoindex_code_mcp_server.main_mcp_server --rescan --port 3033 /path/to/code

# Disable live updates (one-time indexing)
python -m cocoindex_code_mcp_server.main_mcp_server --no-live /path/to/code

# Custom chunk size (50% smaller chunks)
python -m cocoindex_code_mcp_server.main_mcp_server --chunk-factor-percent 50 /path/to/code
```

## Features

- CocoIndex Backend: Uses CocoIndex as the embedding and vector database backend with PostgreSQL + pgvector
- Multiple Language Support: Specialized support for 20+ programming languages with language-specific parsers and embeddings
- Streaming HTTP MCP Server: Real-time code retrieval via Model Context Protocol over HTTP
- Code Change Detection: Incremental indexing with automatic detection of file changes
- Tree-sitter Chunking: Advanced code parsing and chunking using tree-sitter AST for better code understanding
- Smart Embedding: Multiple embedding models automatically selected based on programming language (see Smart Embedding)
- Hybrid Search: Combines vector similarity search with keyword/metadata filtering for precise results
  - Vector Search: Semantic similarity using language-specific code embeddings
  - Keyword Search: Exact matching on metadata fields (functions, classes, imports, etc.)
  - Hybrid Search: Weighted combination of both approaches with configurable weights
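The weighting logic can be sketched roughly as follows. The function name, signature, and default weights are illustrative assumptions, not the server's actual API:

```python
# Hypothetical sketch of a weighted hybrid score combining vector
# similarity with keyword/metadata matching. Names and default
# weights are illustrative, not the server's actual implementation.

def hybrid_score(vector_similarity: float,
                 keyword_matches: int,
                 total_keywords: int,
                 vector_weight: float = 0.7,
                 keyword_weight: float = 0.3) -> float:
    """Blend semantic similarity with the fraction of matched keywords."""
    keyword_score = keyword_matches / total_keywords if total_keywords else 0.0
    return vector_weight * vector_similarity + keyword_weight * keyword_score

# A chunk with high semantic similarity but no keyword hits still ranks,
# just lower than one matching both signals.
print(hybrid_score(0.9, 0, 3))  # vector signal only
print(hybrid_score(0.9, 3, 3))  # both signals
```

The point of the weighted blend is that neither signal vetoes the other: purely semantic matches still surface, while exact metadata hits (function names, imports) boost precision.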
## Supported Languages

The server supports multiple programming languages with varying levels of integration:
| Language | Extensions | Embedding Model | AST Chunking | Tree-sitter | Remarks |
|---|---|---|---|---|---|
| Python | .py | GraphCodeBERT | ✅ astchunk | ✅ python | Custom (not using visitor), metadata extraction: language_handlers/python_handler.py, analyser: lang/python/tree_sitter_python_analyzer.py (fallback: lang/python/python_code_analyzer.py); TODO: unify this with the visitor approach |
| Rust | .rs | UniXcoder | ? | ✅ rust | Full metadata support with specialized visitor: language_handlers/rust_visitor.py |
| JavaScript | .js, .mjs, .cjs | GraphCodeBERT | ?astchunk? | ✅ javascript | Full metadata support with specialized visitor: language_handlers/javascript_visitor.py |
| TypeScript | .ts | UniXcoder | ✅ astchunk | ✅ typescript | Extends JavaScript visitor: language_handlers/typescript_visitor.py |
| TSX | .tsx | UniXcoder | ✅ astchunk | ?typescript? | ?see typescript? |
| Java | .java | GraphCodeBERT | ✅ astchunk | ✅ java | Full metadata support with specialized visitor: language_handlers/java_visitor.py |
| Kotlin | .kt, .kts | UniXcoder | ? | ✅ kotlin | Full metadata support with specialized visitor: language_handlers/kotlin_visitor.py |
| C | .c, .h | GraphCodeBERT | ? | ✅ c | Full metadata support with specialized visitor: language_handlers/c_visitor.py |
| C++ | .cpp, .cc, .cxx, .hpp | GraphCodeBERT | ? | ✅ cpp | Extends C visitor: language_handlers/cpp_visitor.py |
| C# | .cs | UniXcoder | ✅ astchunk | ❌ | Tree-sitter parsing/chunking only |
| Haskell | .hs, .lhs | all-mpnet-base-v2 | ✅ | ✅ | Custom maturin extension with specialized visitor, chunker: lang/haskell/haskell_ast_chunker.py, metadata extraction: language_handlers/haskell_handler.py |
| Other languages | see mappers.py | all-mpnet-base-v2 | ❌ | ❌ ?regex? | CocoIndex defaults (baseline) |
- Embedding Model: The embedding model automatically selected for the language
- AST Chunking: Advanced chunking using ASTChunk or custom implementations (based on ideas from ASTChunk and using tree-sitter for the language)
- Tree-sitter: Language has a tree-sitter parser configured for AST analysis (Python tree-sitter bindings, except for Haskell, which uses a Maturin/Rust extension based on the Rust crates tree-sitter and tree-sitter-haskell)
- Remarks: Additional notes about the support level
- Other languages: Files are recognized, but only basic text embedding and chunking are applied (CocoIndex defaults). This includes: Go, PHP, Ruby, Swift, Scala, Dart, CSS, HTML, JSON, Markdown, YAML, TOML, SQL, R, Fortran, Pascal, XML
## Smart Embedding

The server uses language-aware code embeddings that automatically select the optimal embedding model based on the programming language. This approach provides better semantic understanding of code compared to generic text embeddings.
The smart embedding system uses different specialized models optimized for different programming languages:
- GraphCodeBERT (microsoft/graphcodebert-base)
  - Optimized for: Python, Java, JavaScript, PHP, Ruby, Go, C, C++
  - Pre-trained on code from these languages with graph-based code understanding
  - Best for languages with explicit structure and common patterns
- UniXcoder (microsoft/unixcoder-base)
  - Optimized for: Rust, TypeScript, C#, Kotlin, Scala, Swift, Dart
  - Unified cross-lingual model for multiple languages
  - Best for modern statically-typed languages
- Fallback model (sentence-transformers/all-mpnet-base-v2)
  - Used for: languages not specifically supported by code models
  - General-purpose text embedding for broader language support
  - 768-dimensional embeddings matching the code-specific models
The embedding model is automatically selected based on file extension:
```
# Example: Python file automatically uses GraphCodeBERT
file: main.py → language: python → model: microsoft/graphcodebert-base

# Example: Rust file automatically uses UniXcoder
file: lib.rs → language: rust → model: microsoft/unixcoder-base

# Example: Haskell file uses the fallback model
file: Main.hs → language: haskell → model: sentence-transformers/all-mpnet-base-v2
```

- Better Code Understanding: Code-specific models understand programming constructs better than generic text models
- Language-Specific Optimization: Each language gets embeddings from models trained on that language
- Consistent Search Quality: Similar code snippets in the same language produce similar embeddings
- Zero Configuration: Automatic model selection requires no manual configuration
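The selection behaves like the following sketch. The select_model function and the partial extension map are hypothetical illustrations; the real mapping lives in mappers.py and smart_code_embedding.py:

```python
# Illustrative sketch of extension-based model selection as described
# above. The model/language pairs mirror the table in this README;
# the function name is hypothetical, not the server's actual API.
import os

GRAPHCODEBERT = "microsoft/graphcodebert-base"
UNIXCODER = "microsoft/unixcoder-base"
FALLBACK = "sentence-transformers/all-mpnet-base-v2"

# Abridged: the real mapping covers many more extensions.
EXTENSION_TO_MODEL = {
    ".py": GRAPHCODEBERT, ".java": GRAPHCODEBERT, ".js": GRAPHCODEBERT,
    ".c": GRAPHCODEBERT, ".cpp": GRAPHCODEBERT,
    ".rs": UNIXCODER, ".ts": UNIXCODER, ".cs": UNIXCODER, ".kt": UNIXCODER,
}

def select_model(filename: str) -> str:
    """Pick an embedding model by file extension, else fall back."""
    _, ext = os.path.splitext(filename)
    return EXTENSION_TO_MODEL.get(ext, FALLBACK)

print(select_model("main.py"))   # GraphCodeBERT
print(select_model("lib.rs"))    # UniXcoder
print(select_model("Main.hs"))   # fallback model
```

Because the lookup keys off the extension alone, every file gets a deterministic model with no per-project configuration.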
The smart embedding system is implemented as an external wrapper around CocoIndex's SentenceTransformerEmbed function, located in python/cocoindex_code_mcp_server/smart_code_embedding.py. This approach:
- Does not modify CocoIndex source code
- Uses CocoIndex as a pure dependency
- Provides drop-in compatibility with existing workflows
- Can be easily updated independently
For more technical details, see the documentation under docs/.
## Development

### Prerequisites

- Rust (latest stable version)
- Python 3.11+
- Maturin (build tool for Python extensions in Rust)
- PostgreSQL with pgvector extension
- Tree-sitter language parsers (automatically installed via pyproject.toml)
```shell
# Run tests to verify installation
pytest -c pytest.ini tests/
```

The project uses mypy for type checking. Use the provided scripts:
```shell
# Type check main source code
./scripts/mypy-check.sh

# Type check tests
./scripts/mypy-check-tests.sh
```

### Project Structure

- python/cocoindex_code_mcp_server/: Main MCP server implementation
  - main_mcp_server.py: MCP server entry point
  - cocoindex_config.py: CocoIndex flow configuration
  - smart_code_embedding.py: Language-aware embedding selection
  - mappers.py: Language and field mappings
  - tree_sitter_parser.py: Tree-sitter parsing utilities
  - db/: Database abstraction layer
    - pgvector/: PostgreSQL + pgvector backend
  - lang/: Language-specific handlers
    - python/: Python code analyzer
    - haskell/: Haskell support (via Rust extension)
- tests/: Pytest test suite
- docs/: Documentation
  - claude/: Development notes and architecture docs
  - cocoindex/: CocoIndex-specific documentation
  - instructions/: Task instructions and guides
- rust/: Rust components
  - src/lib.rs: Haskell tree-sitter Rust extension
- astchunk/: ASTChunk submodule for advanced code chunking
### Testing

```shell
# Run all tests
pytest -c pytest.ini tests/

# Run a specific test file
pytest -c pytest.ini tests/test_hybrid_search_integration.py

# Run with coverage
pytest -c pytest.ini tests/ --cov=python/cocoindex_code_mcp_server --cov-report=html
```

## Contributing

Contributions are welcome! Please open issues and pull requests on the GitHub repository.
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Run type checking: ./scripts/mypy-check.sh
- Run tests: pytest tests/
- Submit a pull request

Areas where contributions are especially welcome:
- Additional language support (parsers, embeddings, chunking)
- Enhanced metadata extraction for existing languages
- Performance optimizations
- Documentation improvements
- Bug fixes and issue resolution
## License

AGPL-3.0 or later
- CocoIndex Framework: https://cocoindex.io
- GitHub Repository: https://github.com/aanno/cocoindex-code-mcp-server
- Model Context Protocol: https://modelcontextprotocol.io
- ASTChunk: https://github.com/codelion/astchunk
Built on top of the excellent CocoIndex framework for incremental data transformation and the Model Context Protocol for AI tool integration.