
mcode

πŸ€– Terminal agent system with support for various LLM providers

A powerful CLI chat interface that connects to multiple LLM providers through a unified terminal experience. It features real-time chat, session management, and an extensible provider architecture.

Language versions: Русский | English

πŸš€ Installation

Global Installation (Recommended)

# Clone repository
git clone https://github.com/bogdan01m/mcode.git
cd mcode

# Install globally with uv
uv tool install .

# Initialize global configuration (creates ~/.mcode/.env)
mcode config init

# Edit global config and add your API keys
# The config file will be created at ~/.mcode/.env

That's it! Now you can use mcode from anywhere:

mcode chat                    # Interactive chat with auto-provider selection
mcode chat -p ollama "Hello" # Quick question with specific provider

Development Installation

# Clone repository
git clone https://github.com/bogdan01m/mcode.git
cd mcode

# Install dependencies
uv sync

# Use with uv run (for development)
uv run mcode --help

πŸ”§ Configuration

Global Configuration (Recommended)

After installation, run mcode config init to create a global configuration template at ~/.mcode/.env. This config will be used from any directory.

mcode config init    # Creates ~/.mcode/.env with template
mcode config list    # Shows config file locations

Edit ~/.mcode/.env and uncomment/configure your API keys:

# OpenAI
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4
OPENAI_SYSTEM_PROMPT="You are a helpful assistant. You are able to use tools"

# Google Gemini
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.0-flash-exp
GEMINI_SYSTEM_PROMPT="You are a helpful assistant. You are able to use tools"

# Mistral AI
MISTRAL_API_KEY=your_mistral_api_key_here
MISTRAL_MODEL=mistral-large-latest
MISTRAL_SYSTEM_PROMPT="You are a helpful assistant. You are able to use tools"

# Ollama (local server)
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=qwen3:8b
OLLAMA_SYSTEM_PROMPT="You are a helpful assistant. You are able to use tools"

# OpenRouter
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENROUTER_MODEL=mistralai/devstral-small:free
OPENROUTER_SYSTEM_PROMPT="You are a helpful assistant. You are able to use tools"

# Custom OpenAI-compatible API
CUSTOM_OPENAI_API_KEY=your_custom_api_key_here
CUSTOM_OPENAI_BASE_URL=https://your-custom-endpoint.com/v1
CUSTOM_OPENAI_MODEL=your-model-name
CUSTOM_OPENAI_SYSTEM_PROMPT="You are a helpful assistant. You are able to use tools"

Local Project Configuration (Optional)

You can override global settings for specific projects by creating a local .env file in your project directory. Local configuration takes precedence over global configuration.

# In your project directory
echo 'OPENAI_MODEL=gpt-4-turbo' > .env
# This will override the global OPENAI_MODEL setting for this project only
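
Under the hood, this kind of precedence usually comes from loading the local file before the global one. Here is a minimal sketch using python-dotenv, assuming mcode follows the common pattern (the real resolution logic lives in src/llms/llm_call/env_config.py, so treat these names as illustrative):

import os
from pathlib import Path

from dotenv import load_dotenv  # pip install python-dotenv

def load_config() -> None:
    # load_dotenv(override=False) never overwrites variables that are
    # already set, so whichever file loads first wins.
    load_dotenv(Path.cwd() / ".env", override=False)              # local project config
    load_dotenv(Path.home() / ".mcode" / ".env", override=False)  # global fallback

load_config()
print(os.getenv("OPENAI_MODEL"))  # gpt-4-turbo from the local override, not the global value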

πŸ“‹ Usage

Chat Commands

# Interactive chat with auto-provider selection
mcode chat

# Single query with specific provider
mcode chat -p ollama "Hello, how are you?"

# Chat with custom model and system prompt
mcode chat -p openai -m gpt-4 -s "You are a coding assistant" "Write a Python function"

# Chat without saving history
mcode chat -p ollama --no-history "Quick question"

Provider Management

# List all available providers
mcode providers list

# Test provider connection
mcode providers test ollama

# Get provider information
mcode providers info openai

Session Management

# List chat sessions
mcode session list

# Resume a previous session
mcode session resume <session-id>

# Export session to file
mcode session export <session-id> --format markdown

Configuration Management

# Initialize global configuration
mcode config init

# Show configuration file paths
mcode config list

# Validate provider settings
mcode config validate

πŸ€– Supported Providers

Provider       | Models                         | Status       | Notes
OpenAI         | All available provider models  | ✅ Chat-only | Full tool-calling support
Google Gemini  | All available provider models  | ✅ Chat-only | Full tool-calling support
Mistral AI     | All available provider models  | ✅ Chat-only | Full tool-calling support
Ollama         | All locally installed models   | ✅ Chat-only | Tool-calling depends on model
OpenRouter     | 100+ models via unified API    | ✅ Chat-only | Tool-calling depends on model
Custom OpenAI  | Any OpenAI-compatible endpoint | ✅ Chat-only | Tool-calling depends on model

MCP Note: With planned MCP protocol integration, there may be limitations with models that don't support tool-calling. Modern models (GPT-4, Gemini 2.5, Mistral Large) have full support.
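
"Tool-calling" here means the OpenAI-style function-calling interface. As a rough illustration of what a provider has to support (shown with the plain openai Python SDK rather than mcode's internals; the read_file tool is hypothetical), the request carries a tool schema and a capable model may answer with a structured tool call instead of plain text:

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",  # hypothetical tool, for illustration only
        "description": "Read a file from disk",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Show me pyproject.toml"}],
    tools=tools,
)

# Models without tool-calling ignore the schema or reject the request;
# capable models return tool_calls that the agent then executes.
print(response.choices[0].message.tool_calls)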

✨ Features

  • πŸ€– Real LLM Integration - Direct API calls to all major providers
  • ⚑ Auto Provider Selection - Interactive provider chooser with status indicators
  • 🎨 Rich Terminal UI - Beautiful formatting with markdown and syntax highlighting
  • πŸ” Global Configuration - Secure environment-based setup with global and local configs
  • πŸ“ Session Management - Persistent chat history with resume and export capabilities
  • πŸš€ Flexible Usage - Single queries, interactive sessions, or custom model parameters
  • πŸ”§ Extensible Architecture - Factory pattern for easy provider addition
  • πŸ“¦ Global Installation - Install once, use anywhere on your system

πŸ“ Project Structure

src/
β”œβ”€β”€ cli/                     # πŸ–₯️  CLI interface
β”‚   β”œβ”€β”€ commands/           # Chat, providers, session commands
β”‚   β”œβ”€β”€ ui/                 # Rich terminal UI components
β”‚   β”‚   └── chat_engine.py  # Core chat functionality
β”‚   └── session/           # Session management (framework)
β”œβ”€β”€ llms/                   # 🧠 LLM providers
β”‚   └── llm_call/          # Factory pattern for providers
β”‚       β”œβ”€β”€ base_provider.py      # Base provider interface
β”‚       β”œβ”€β”€ provider_factory.py   # Provider factory & registry
β”‚       β”œβ”€β”€ env_config.py         # Environment configuration
β”‚       └── llm_providers/        # Individual provider implementations
└── mcp/                   # πŸ”§ MCP integration (planned)
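
The llm_call/ package is where the extensibility comes from: base_provider.py defines the shared interface and provider_factory.py maps provider names to implementations. A minimal sketch of that factory pattern, with illustrative names rather than mcode's actual API:

from abc import ABC, abstractmethod

class BaseProvider(ABC):
    """Common interface every LLM provider implements."""

    @abstractmethod
    def chat(self, prompt: str) -> str: ...

_REGISTRY: dict[str, type[BaseProvider]] = {}

def register(name: str):
    """Class decorator that adds a provider to the factory registry."""
    def wrap(cls: type[BaseProvider]) -> type[BaseProvider]:
        _REGISTRY[name] = cls
        return cls
    return wrap

def create_provider(name: str) -> BaseProvider:
    """Factory entry point: look up a provider by name and instantiate it."""
    try:
        return _REGISTRY[name]()
    except KeyError:
        raise ValueError(f"Unknown provider: {name!r}") from None

@register("echo")
class EchoProvider(BaseProvider):
    """Toy provider; a real one would call an LLM API in chat()."""
    def chat(self, prompt: str) -> str:
        return f"echo: {prompt}"

print(create_provider("echo").chat("hello"))  # echo: hello

Adding a new provider then means writing one subclass and registering it; no factory or CLI code needs to change.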

πŸ› οΈ Development

# Code formatting
uvx black .

# Linting
uvx ruff check .

# Run factory pattern demo (after global install)
mcode-demo

# Test chat functionality
mcode chat -p ollama "Hello from development!"

# Or use development mode
uv run mcode chat -p ollama "Hello from development!"

πŸš€ Examples

Quick Start

# Install and initialize
uv tool install .
mcode config init

# Set up Ollama (local) - no API key required
echo 'OLLAMA_MODEL=qwen3:8b' >> ~/.mcode/.env
mcode chat -p ollama "Explain quantum computing"

Advanced Usage

# Multi-turn conversation with custom settings
mcode chat -p openai -m gpt-4 -s "You are a Python expert"
# Then type multiple questions interactively

Provider Comparison

# Test same question across providers
mcode chat -p ollama "Write a Python function"
mcode chat -p openai "Write a Python function"
mcode chat -p gemini "Write a Python function"
