This project aims to improve the capabilities of LLMs by applying program synthesis techniques with test-time adaptation.

Arcify/program-synthesis


Program Synthesis Pipeline

This project implements a modular program synthesis pipeline designed to work alongside any large language model (LLM). The pipeline boosts an LLM's capabilities by orchestrating structured prompting, program generation, verification, and refinement loops.

Architecture Overview

+-- src
    +-- synthesis
        +-- llm_interface.py       # LLM adapter interfaces and mock implementations
        +-- openai_client.py       # OpenAI GPT adapters implementing LLMClient
        +-- prompts.py             # Prompt templates and builders for pipeline stages
        +-- tasks.py               # Task specifications and result tracking
        +-- workspace.py           # Isolated execution sandbox for candidate programs
        +-- evaluation.py          # Automated test execution and scoring
        +-- synthesis_loop.py      # Core synthesis controller orchestrating the stages
        +-- utils.py               # Shared helpers (logging, retry, timing)
        +-- plugins
            +-- python_executor.py # Plugin for executing Python candidate programs
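The split between llm_interface.py and concrete clients can be sketched as below; the class and method names here are illustrative assumptions, not the module's actual API:

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Minimal adapter interface; any hosted or local model can be wrapped behind it."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a single prompt."""

class MockLLMClient(LLMClient):
    """Canned-response client for local development without network calls."""

    def __init__(self, responses):
        self._responses = list(responses)

    def complete(self, prompt: str) -> str:
        # Serve responses in order, repeating the last one once exhausted.
        if len(self._responses) > 1:
            return self._responses.pop(0)
        return self._responses[0]
```

A real adapter (for example one backed by an HTTP API) only needs to implement complete, so the rest of the pipeline stays model-agnostic.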

Key ideas:

  • LLM-agnostic adapters - Implement LLMClient interfaces that can wrap any model. The mock client enables local development without network calls.
  • Task specifications - Define tasks with natural language problem statements, IO contracts, and reference tests.
  • Iterative refinement - Run a generate -> execute -> evaluate -> reflect loop to improve synthesized programs.
  • Plugin-based execution - Execute synthesized programs in isolated workspaces. Python support is included out of the box; other languages can be plugged in.
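The refinement loop above can be sketched roughly as follows; refine and its execute/evaluate callables are illustrative placeholders, not the project's actual API:

```python
def refine(llm, task, execute, evaluate, max_iterations=3):
    """Generate -> execute -> evaluate -> reflect until tests pass or budget runs out.

    llm:      callable mapping a prompt string to generated source code
    execute:  callable running a candidate in a sandbox, returning its output
    evaluate: callable scoring output against reference tests (1.0 = all pass)
    """
    prompt = task
    best_score, best_program = -1.0, None
    for _ in range(max_iterations):
        program = llm(prompt)      # generate a candidate program
        output = execute(program)  # run it in an isolated workspace
        score = evaluate(output)   # fraction of reference tests passed
        if score > best_score:
            best_score, best_program = score, program
        if score == 1.0:           # every reference test passed: stop early
            break
        # Reflect: fold the failing attempt back into the next prompt.
        prompt = f"{task}\nPrevious attempt:\n{program}\nObserved output:\n{output}"
    return best_program, best_score
```

Because the best-scoring candidate is retained on every pass, a regression in a later iteration never discards an earlier, better program.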

Getting Started

  1. Install dependencies in editable mode: pip install -e .
  2. Run the demonstration script: PYTHONPATH=src python examples/run_pipeline.py
  3. Execute the unit tests: PYTHONPATH=src python -m unittest discover -s tests
  4. Extend the pipeline by implementing new LLMClient adapters or adding execution plugins for other languages.
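As a sketch of step 4, an execution plugin for another language might look like this; the plugin shape shown (a language tag plus an execute method) is an assumption about the project's plugin contract, not its actual interface:

```python
import os
import subprocess
import tempfile

class ShellExecutorPlugin:
    """Hypothetical plugin that runs a candidate program as a POSIX shell script."""

    language = "shell"

    def execute(self, source: str, timeout: float = 5.0) -> str:
        # Write the candidate to an isolated temporary file, then run it.
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(
                ["sh", path], capture_output=True, text=True, timeout=timeout
            )
            return result.stdout
        finally:
            os.unlink(path)
```

A timeout and temp-file isolation are the minimum needed to run untrusted generated code; the bundled python_executor.py presumably makes analogous choices for Python.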

Detailed module documentation is provided inline with the source.

Using OpenAI GPT Models

To call a hosted OpenAI model such as GPT-5, install the optional dependency and configure credentials:

  1. If you skipped pip install -e ., install the OpenAI client directly: pip install "openai>=1.30.0".

  2. Export OPENAI_API_KEY (and optionally OPENAI_ORGANIZATION / OPENAI_PROJECT).

  3. Instantiate synthesis.OpenAIClient and pass it to the pipeline:

    from synthesis import OpenAIClient, SynthesisPipeline, SynthesisConfig

    llm = OpenAIClient()  # defaults to GPT-5
    pipeline = SynthesisPipeline(llm, config=SynthesisConfig(max_iterations=3))

  4. Run the pipeline as usual; prompts and code will be generated by the real model.

See examples/show_arc_run.py for a script you can adapt to use OpenAIClient.
