Loom weaves feature requests into comprehensive coding prompts through LLM-driven sequential analysis. Named after the art of weaving disparate threads into cohesive fabric, Loom transforms raw ideas into structured, actionable development plans.
Loom is designed for developers who want to automate the software development lifecycle while retaining complete control over the AI models used at every stage. The core philosophy is "Meet Developers Where They Are": Loom is model-agnostic, allowing you to use any AI model or tool at any step of the process.
Loom orchestrates a sequence of specialized AI agents to perform a series of tasks, starting with a simple feature request.
Example Workflow:
- You provide a task:

  ```bash
  python loom.py "implement JWT user authentication"
  ```

- Project Analysis Agent: Scans your codebase to understand existing patterns, tech stack, and conventions.
- Feature Research Agent: Uses the project context to research the best technical approaches and implementation strategies for the feature.
- Prompt Assembly Agent: Synthesizes all the gathered information into a detailed, context-aware coding prompt, ready for an implementation LLM.
The final output is a high-quality, actionable prompt that you can feed into your coding LLM of choice to get consistent and contextually-aware code.
```bash
# Clone the repository
git clone [repository-url] ~/Loom
cd ~/Loom

# Ensure Python 3.7+ is installed
python --version

# Install dependencies
pip install -r requirements.txt
```

🔑 Get your free Gemini API key:
- Visit Google AI Studio
- Click "Create API key"
- Copy your API key
🔧 Set up your environment:
```bash
# Linux/Mac - add to your shell profile for persistence
export GEMINI_API_KEY="your-api-key-here"
echo 'export GEMINI_API_KEY="your-api-key-here"' >> ~/.bashrc
```

```powershell
# Windows Command Prompt
setx GEMINI_API_KEY "your-api-key-here"

# Windows PowerShell
$env:GEMINI_API_KEY="your-api-key-here"
[Environment]::SetEnvironmentVariable("GEMINI_API_KEY", "your-api-key-here", "User")
```
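At runtime, Loom resolves the key from the environment. Conceptually it looks something like this illustrative sketch (not the actual source):

```python
import os

# A missing key is what surfaces later as the
# "GEMINI_API_KEY environment variable not set" error in Troubleshooting.
api_key = os.environ.get("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY environment variable not set")
```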
```bash
# Navigate to your project directory
cd /path/to/your/project

# Copy and configure the project settings
cp ~/Loom/dev-automation.config.json .
# Edit dev-automation.config.json with your project details

# Copy and configure the default template
cp ~/Loom/meta-prompt-template.md .
# Edit the meta template with your project details; it is the default template
# used when step-specific templates are not provided.

# Verify your API key is set
echo $GEMINI_API_KEY  # Should show your API key
```
python ~/Loom/loom.py "implement user authentication system"
# Validate your configuration and API key
python ~/Loom/loom.py --validate-configWindows:
Create `loom.bat` in a directory in your PATH:

```bat
@echo off
python "C:\path\to\Loom\loom.py" %*
```

Mac/Linux: Create a symlink or alias:
```bash
# Symlink approach
ln -s ~/Loom/loom.py /usr/local/bin/loom

# Or add alias to your shell profile
echo 'alias loom="python ~/Loom/loom.py"' >> ~/.bashrc
```

```text
~/Loom/
├── loom.py                      # Main entry point
├── agents/                      # Agent directory
│   ├── orchestrator.py          # Agent orchestration engine
│   ├── base_agent.py            # Base class for all agents
│   ├── project_analysis_agent/  # Analyzes codebase structure
│   ├── feature_research_agent/  # Researches implementation approaches
│   ├── prompt_assembly_agent/   # Assembles final coding prompts
│   └── issue_generator/         # Legacy issue generation
├── core/                        # Core system components
│   ├── config_manager.py        # Configuration management
│   ├── llm_manager.py           # LLM provider abstraction
│   └── context_manager.py       # Cross-agent context sharing
├── requirements.txt             # Python dependencies
└── README.md                    # This file
```

```text
# Per-project files (created in your project directory):
your-project/
├── dev-automation.config.json   # Project configuration
└── generated-issues/            # Output directory
    ├── 20240714_123456_feature.md  # Generated specifications
    └── ...
```
Each project gets its own dev-automation.config.json file that configures the agent execution sequence and LLM providers.
```json
{
  "project": {
    "name": "Loom",
    "context": "A flexible system that automates the software development lifecycle",
    "tech_stack": "Python, markdown",
    "architecture": "Open orchestration",
    "target_users": "Developers",
    "constraints": "Model API Differences, Context Management, Output Consistency"
  },
  "agent_execution_order": [
    "project-analysis-agent",
    "feature-research-agent",
    "prompt-assembly-agent"
  ]
}
```

LLM settings:

```json
{
  "llm_settings": {
    "default_provider": "gemini",
    "model": "gemini-2.0-flash-exp",
    "temperature": 0.6,
    "max_tokens": 8192,
    "output_format": "structured",
    "research_depth": "standard"
  }
}
```

GitHub integration (`auto_create_issues` enables automatic GitHub issue creation):

```json
{
  "github": {
    "repo_owner": "your-username",
    "repo_name": "your-repo",
    "default_project": "Your-Project-Board-Name",
    "default_labels": ["auto-generated", "needs-review", "enhancement"]
  },
  "automation": {
    "auto_create_issues": true,
    "auto_assign": false
  }
}
```

Feature templates:

```json
{
  "templates": {
    "ui_feature": "Focus on user experience, responsive design...",
    "api_feature": "Focus on performance, security, scalability...",
    "data_feature": "Focus on data processing, ETL, validation...",
    "perf_feature": "Focus on optimization, caching, performance..."
  }
}
```
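For illustration, a minimal sketch of how a loader in the spirit of core/config_manager.py might read this file; the actual implementation may differ:

```python
import json
from pathlib import Path

# Hypothetical defaults mirroring the example config above.
DEFAULTS = {
    "llm_settings": {
        "default_provider": "gemini",
        "model": "gemini-2.0-flash-exp",
        "temperature": 0.6,
        "max_tokens": 8192,
    },
    "agent_execution_order": [
        "project-analysis-agent",
        "feature-research-agent",
        "prompt-assembly-agent",
    ],
}

def load_config(project_dir: str = ".") -> dict:
    """Merge the per-project config over defaults (shallow merge for brevity)."""
    path = Path(project_dir) / "dev-automation.config.json"
    config = dict(DEFAULTS)
    if path.exists():
        config.update(json.loads(path.read_text()))
    return config
```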
python loom.py "implement OAuth2 authentication"
python loom.py "add real-time notifications"
python loom.py "optimize database query performance"# Validate your LLM providers and configuration
python loom.py --validate-configLoom's power comes from its orchestrated multi-agent architecture. Each agent specializes in a specific aspect of the development workflow:
Project Analysis Agent: Scans your codebase to understand:
- Existing patterns and conventions
- Technology stack and dependencies
- Architecture and file structure
- Coding standards and practices
Feature Research Agent: Conducts comprehensive research on:
- Best practices for the requested feature
- Implementation approaches and alternatives
- Integration considerations with existing codebase
- Potential risks and mitigation strategies
Prompt Assembly Agent: Synthesizes information to create:
- Context-aware coding prompts
- Detailed implementation specifications
- Code examples following project conventions
- Ready-to-use prompts for any LLM
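As a rough sketch, agents/base_agent.py plausibly defines a common interface along these lines (the class shape and method names here are assumptions, not the actual source):

```python
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    """Hypothetical common interface for Loom agents."""

    def __init__(self, llm_manager, context_manager):
        self.llm = llm_manager          # provider abstraction (LLMManager)
        self.context = context_manager  # shared cross-agent state (ContextManager)

    @abstractmethod
    def run(self, feature_request: str) -> dict:
        """Perform this agent's step and return results for the shared context."""
```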
The AgentOrchestrator manages the execution sequence:
- Loads agents dynamically from the `agents/` directory
- Executes them in the order specified in `agent_execution_order`
- Manages context sharing between agents via `ContextManager`
- Handles LLM provider abstraction through `LLMManager`
Agents communicate through a shared context, allowing later agents to build upon the work of earlier ones.
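A minimal sketch of that sequencing, assuming a run() interface like the one above (the real orchestrator.py also handles dynamic loading and error cases):

```python
def run_sequence(agents, feature_request):
    """Run agents in configured order, accumulating shared context (illustrative)."""
    context = {}  # stands in for ContextManager
    for agent in agents:  # ordered per agent_execution_order
        agent.context = context  # each agent sees earlier agents' output
        context[agent.name] = agent.run(feature_request)  # "name" is assumed
    return context
```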
When `auto_create_issues` is enabled in your config:
```bash
# This will automatically create a GitHub issue with the full specification
python loom.py "implement user roles and permissions"
```

To create issues manually instead:

```bash
# Generate the specification file
python loom.py "implement user roles and permissions"

# Use the generated file with GitHub CLI
gh issue create --body-file generated-issues/YYYY-MM-DD-HHMMSS-feature-slug.md --label "enhancement"
```

One-time GitHub CLI setup:

```bash
# Install GitHub CLI and authenticate
gh auth login

# Ensure you have project scope for automatic issue creation
gh auth refresh --scopes repo,project
```
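When auto-creation is enabled, one plausible mechanism is shelling out to `gh`; a hedged sketch under that assumption (the function and its behavior are not taken from the source):

```python
import subprocess
from typing import List

def create_issue(title: str, spec_path: str, labels: List[str]) -> None:
    """Create a GitHub issue from a generated spec file via the gh CLI."""
    cmd = ["gh", "issue", "create", "--title", title, "--body-file", spec_path]
    for label in labels:
        cmd += ["--label", label]
    subprocess.run(cmd, check=True)

# e.g. create_issue("Implement user roles",
#                   "generated-issues/20240714_123456_feature.md",
#                   ["auto-generated", "needs-review"])
```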
The beauty of this system is that you can use one installation across multiple projects:

```bash
# Project A
cd ~/projects/my-web-app
cp ~/Loom/dev-automation.config.json .  # Copy and customize config
python ~/Loom/loom.py "add user authentication"

# Project B
cd ~/projects/my-mobile-app
cp ~/Loom/dev-automation.config.json .  # Copy and customize config
python ~/Loom/loom.py "implement offline sync"

# Each project gets its own config and generated-issues folder
```

Each generated issue includes:
- Executive Summary: What the feature does and why it matters
- Codebase Analysis: Integration points and architectural impact
- Domain Research: User workflows and industry best practices
- Technical Approach: Implementation strategy with alternatives
- Implementation Specification: Detailed technical requirements
- Risk Assessment: Technical and business risks with mitigation
- Project Details: Effort estimates, dependencies, acceptance criteria
- GitHub Issue Template: Ready-to-use issue content
- Python 3.7+: Core runtime
- google-generativeai: Python package for Gemini API access
- Gemini API Key: Free API key from Google AI Studio
- GitHub CLI (`gh`): For automated issue creation
- Git: For repository context (auto-detected)
- Get your Gemini API key: Visit Google AI Studio
- Set environment variable:

  ```bash
  # Linux/Mac
  export GEMINI_API_KEY="your-api-key-here"
  echo 'export GEMINI_API_KEY="your-api-key-here"' >> ~/.bashrc

  # Windows
  setx GEMINI_API_KEY "your-api-key-here"
  ```

- Verify setup:

  ```bash
  python loom.py --validate-config
  ```
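To show how these pieces fit together, here is a minimal google-generativeai call of the kind core/llm_manager.py presumably wraps (the model name and generation settings are taken from the example config above; the prompt is a placeholder):

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content(
    "Summarize the conventions used in this codebase...",  # placeholder prompt
    generation_config=genai.types.GenerationConfig(
        temperature=0.6,         # matches llm_settings.temperature
        max_output_tokens=8192,  # matches llm_settings.max_tokens
    ),
)
print(response.text)
```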
"GEMINI_API_KEY environment variable not set"
# Verify your API key is set
echo $GEMINI_API_KEY
# If empty, set it:
export GEMINI_API_KEY="your-api-key-here""google-generativeai package not installed"
pip install google-generativeai"Gemini API call failed"
- Check your API key is valid at Google AI Studio
- Verify you haven't exceeded API quotas
- Ensure you have internet connectivity
Configuration validation fails
# Run validation to see specific issues
python loom.py --validate-config- Multiple LLM Support - Add OpenAI, Claude, and local model providers
- Template Marketplace - Share and discover project-specific templates
- Progress Tracking - Monitor implementation progress and outcomes
- Team Collaboration - Shared configurations and team workflows
- IDE Integration - VSCode extension for in-editor issue generation
- CI/CD Integration - Trigger issue generation from repository events
This is a universal system designed to work across any project type. Contributions welcome for:
- New template categories
- LLM provider integrations
- Output format improvements
- Cross-platform compatibility
- Documentation and examples
MIT License - Use this system for any project, commercial or personal.
🎯 The Goal: Weave feature ideas into well-researched, context-aware coding prompts through intelligent agent orchestration. Transform raw concepts into actionable development plans that understand your codebase, follow your patterns, and integrate seamlessly with your workflow.