ModelAuditor Agent

AI-powered model auditing agent with multi-agent debate for robust evaluation of machine learning models.

Setup

This repository has been tested extensively with Python 3.10.15. Typical install time via uv is less than a minute.

Using uv (recommended)

uv sync
uv run python main.py --model resnet50 --dataset CIFAR10 --weights path/to/weights.pth

Using pip

pip install -e .
python main.py --model resnet50 --dataset CIFAR10 --weights path/to/weights.pth

Medical AI dependencies (optional)

uv sync --extra medical  # or pip install -e ".[medical]"

Usage

General Usage

python main.py --model resnet50 --dataset CIFAR10 --weights models/model.pth

Medical Models

# ISIC skin lesion classification
python main.py --model siim-isic --dataset isic --weights models/isic/model.pth

# HAM10000 dataset
python main.py --model deepderm --dataset ham10000 --weights models/ham10000.pth

Toy Example

We provide a small toy model trained on CIFAR10 so the auditor can be tested end to end. All that is needed is a valid Anthropic API key (see the 'Environment Variables' section below).

python main.py --model resnet18 --dataset CIFAR10 --weights examples/cifar10/cifar10.pth

Expected runtime varies with user response speed and subset size, but a full run should take less than 10 minutes in total.

Options

  • --subset N: Use N samples for faster evaluation
  • --no-debate: Disable multi-agent debate
  • --single-agent: Use single agent instead of multi-agent debate
  • --device: Specify device (cpu, cuda, mps)
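The flags above can be combined. As an illustrative sketch, a fast smoke-test audit of the toy model might look like this (the subset size of 500 is an arbitrary choice, not a recommended value):

```shell
# Fast smoke-test audit: 500 samples, no multi-agent debate, forced to CPU
python main.py --model resnet18 --dataset CIFAR10 \
    --weights examples/cifar10/cifar10.pth \
    --subset 500 --no-debate --device cpu
```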

Environment Variables

Set your API keys:

export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"  # if using non-Anthropic models
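Before launching a long audit, it can be worth verifying the keys are actually visible to Python. Below is a minimal stdlib-only check; `require_key` is an illustrative helper, not part of the repository's code:

```python
import os

def require_key(name: str) -> str:
    """Return the named API key from the environment, or fail with a clear message.

    Illustrative helper only; main.py does its own key handling.
    """
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"{name} is not set; run `export {name}=...` first")
    return value

# ANTHROPIC_API_KEY is always required; OPENAI_API_KEY only for non-Anthropic models.
```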

Project Structure

  • main.py - Interactive model auditor with multi-agent debate
  • testbench.py - Automated evaluation script
  • utils/agent.py - Multi-agent conversation system
  • architectures/ - Custom model architectures
  • prompts/ - System prompts for different evaluation phases
  • models/ - Pre-trained model weights
  • results/ - Evaluation results and conversation logs
