This project implements a modular program synthesis pipeline designed to work alongside any large language model (LLM). The pipeline boosts an LLM's capabilities by orchestrating structured prompting, program generation, verification, and refinement loops.
```
src
+-- synthesis
    +-- llm_interface.py    # LLM adapter interfaces and mock implementations
    +-- openai_client.py    # OpenAI GPT adapters implementing LLMClient
    +-- prompts.py          # Prompt templates and builders for pipeline stages
    +-- tasks.py            # Task specifications and result tracking
    +-- workspace.py        # Isolated execution sandbox for candidate programs
    +-- evaluation.py       # Automated test execution and scoring
    +-- synthesis_loop.py   # Core synthesis controller orchestrating the stages
    +-- utils.py            # Shared helpers (logging, retry, timing)
    +-- plugins
        +-- python_executor.py  # Plugin for executing Python candidate programs
```
Key ideas:
- LLM-agnostic adapters: implement `LLMClient` interfaces that can wrap any model. The mock client enables local development without network calls.
- Task specifications: define tasks with natural-language problem statements, IO contracts, and reference tests.
- Iterative refinement: run a generate -> execute -> evaluate -> reflect loop to improve synthesized programs.
- Plugin-based execution: execute synthesized programs in isolated workspaces. Python support is included out of the box; other languages can be plugged in.
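The adapter and refinement ideas above can be sketched together. Note that this is an illustrative assumption, not the project's actual API: the `complete()` method name, the `refine()` helper, and the `run_tests` callback are hypothetical; only the `LLMClient` name and the generate -> execute -> evaluate -> reflect flow come from this README.

```python
from typing import Protocol


class LLMClient(Protocol):
    """Minimal adapter interface; any model wrapper need only provide complete()."""

    def complete(self, prompt: str) -> str: ...


class MockLLMClient:
    """Deterministic stand-in for local development (no network calls)."""

    def __init__(self, canned_responses: list[str]) -> None:
        self._responses = iter(canned_responses)

    def complete(self, prompt: str) -> str:
        return next(self._responses)


def refine(llm: LLMClient, task_prompt: str, run_tests, max_iterations: int = 3) -> str:
    """Simplified generate -> execute -> evaluate -> reflect loop."""
    prompt = task_prompt
    program = ""
    for _ in range(max_iterations):
        program = llm.complete(prompt)         # generate a candidate program
        passed, feedback = run_tests(program)  # execute and evaluate it
        if passed:
            break
        # Reflect: feed the failing attempt and feedback back into the prompt.
        prompt = f"{task_prompt}\n\nPrevious attempt:\n{program}\nFeedback: {feedback}"
    return program


# Example run with a fake evaluator that accepts the second candidate.
def fake_tests(program: str):
    return ("fixed" in program, "output mismatch on test 1")

llm = MockLLMClient(["def f(x): return x", "def f(x): return x  # fixed"])
best = refine(llm, "Write f(x) that returns x.", fake_tests)
```

The mock client makes the loop fully testable offline, which is the point of keeping the adapter interface this narrow.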
- Install dependencies in editable mode: `pip install -e .`
- Run the demonstration script: `PYTHONPATH=src python examples/run_pipeline.py`
- Execute the unit tests: `PYTHONPATH=src python -m unittest discover -s tests`
- Extend the pipeline by implementing new `LLMClient` adapters or adding execution plugins for other languages.
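As a sketch of that extension point: a new adapter only has to present the model behind the `LLMClient` shape. The class, the `complete()` method name, and the callable-based design below are hypothetical illustrations, not the project's real signatures.

```python
from typing import Callable


class CallableModelClient:
    """Hypothetical adapter: exposes any prompt -> text callable as an LLM client.

    The complete() method mirrors the adapter interface described in this
    README; the real project's signature may differ.
    """

    def __init__(self, model_fn: Callable[[str], str]) -> None:
        self._model_fn = model_fn

    def complete(self, prompt: str) -> str:
        return self._model_fn(prompt)


# During development, any deterministic callable can stand in for a model.
client = CallableModelClient(lambda prompt: f"def solve():\n    return 42  # task: {prompt}")
candidate = client.complete("Return the answer")
```

Wrapping the model in a plain callable keeps the adapter trivial to unit-test before wiring in a real API client.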
Detailed module documentation is provided inline with the source.
To call a hosted OpenAI model such as GPT-5, install the optional dependency and configure credentials:
- Install the OpenAI client if you skipped `pip install -e .`: `pip install "openai>=1.30.0"`.
- Export `OPENAI_API_KEY` (and optionally `OPENAI_ORGANIZATION` / `OPENAI_PROJECT`).
- Instantiate `synthesis.OpenAIClient` and pass it to the pipeline:

  ```python
  from synthesis import OpenAIClient, SynthesisPipeline, SynthesisConfig

  llm = OpenAIClient()  # defaults to GPT-5
  pipeline = SynthesisPipeline(llm, config=SynthesisConfig(max_iterations=3))
  ```

- Run the pipeline as usual; prompts and code will be generated by the real model.
See examples/show_arc_run.py for a script you can adapt to use OpenAIClient.