A reinforcement learning codebase focusing on the emergence of cooperation and alignment in multi-agent AI systems.
- Discord: https://discord.gg/mQzrgwqmwy
- Short (5m) Talk: https://www.youtube.com/watch?v=bt6hV73VA8I
- Talk: https://foresight.org/summary/david-bloomin-metta-learning-love-is-all-you-need/
Metta AI is an open-source research project investigating the emergence of cooperation and alignment in multi-agent AI systems. By creating a model organism for complex multi-agent gridworld environments, the project aims to study the impact of social dynamics, such as kinship and mate selection, on learning and cooperative behaviors of AI agents.
Metta AI explores the hypothesis that social dynamics, akin to love in biological systems, play a crucial role in the development of cooperative AGI and AI alignment. The project introduces a novel reward-sharing mechanism mimicking familial bonds and mate selection, allowing researchers to observe the evolution of complex social behaviors and cooperation among AI agents. By investigating this concept in a controlled multi-agent setting, the project seeks to contribute to the broader discussion on the path towards safe and beneficial AGI.
Metta is a simulation environment (game) designed to train AI agents capable of meta-learning general intelligence. The core idea is to create an environment where incremental intelligence is rewarded, fostering the development of generally intelligent agents.
- Agents and Environment: Agents are shaped by their environment, learning policies that enhance their fitness. To develop general intelligence, agents need an environment where increasing intelligence is continually rewarded.
- Competitive and Cooperative Dynamics: A game with multiple agents and some competition creates an evolving environment where challenges increase with agent intelligence. Purely competitive games often reach a Nash equilibrium, where locally optimal strategies are hard to deviate from. Adding cooperative dynamics introduces more behavioral possibilities and smooths the behavioral space.
- Kinship Structures: The game features a flexible kinship structure, simulating a range of relationships from close kin to strangers. Agents must learn to coordinate with close kin, negotiate with more distant kin, and compete with strangers. This diverse social environment encourages continuous learning and intelligence growth.
The game is designed to evolve with the agents, providing unlimited learning opportunities despite its simple rules.
The current version of the game lives in this repository. It's a grid world with the following dynamics:
- Agents and Vision: Agents can see a limited number of squares around them.
- Resources: Agents harvest diamonds, convert them to energy at charger stations, and use energy to power the "heart altar" for rewards.
- Energy Management: All actions cost energy, so agents learn to manage their energy budgets efficiently (see the sketch after this list).
- Combat: Agents can attack others, temporarily freezing the target and stealing resources.
- Defense: Agents can toggle shields, which drain energy but absorb attacks.
- Cooperation: Agents can share energy or resources and use markers to communicate.
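To make the energy accounting above concrete, here is a minimal, repo-independent sketch; the action names and costs are hypothetical, not the project's actual values:

```python
# Hypothetical sketch of the energy-budget rule: every action has a cost,
# and unaffordable actions simply fail. Names and numbers are illustrative.
ACTION_COST = {"move": 1, "shield_toggle": 2, "attack": 10, "use_altar": 5}

def apply_action(energy: int, action: str) -> tuple[int, bool]:
    """Return (new_energy, succeeded); failed actions cost nothing."""
    cost = ACTION_COST[action]
    if energy < cost:
        return energy, False
    return energy - cost, True

energy = 12
for action in ["move", "attack", "attack"]:
    energy, ok = apply_action(energy, action)
    print(f"{action}: {'ok' if ok else 'failed'} (energy={energy})")
```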
The game offers numerous possibilities for exploration, including:
- Diverse Energy Profiles: Assigning different energy profiles to agents, essentially giving them different bodies and policies.
- Dynamic Energy Profiles: Allowing agents to change their energy profiles, reflecting different postures or emotions.
- Resource Types and Conversions: Introducing different resource types and conversion mechanisms.
- Environment Modification: Enabling agents to modify the game board by creating, destroying, or altering objects.
The game explores various kinship structures:
- Random Kinship Scores: Each pair of agents has a kinship score sampled from a distribution.
- Teams: Agents belong to teams with symmetric kinship among team members.
- Hives/Clans/Families: Structuring agents into larger kinship groups.
Future plans include incorporating mate-selection dynamics, where agents share future rewards at a cost, potentially leading to intelligence gains through a signaling arms race.
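To make the reward-sharing mechanism concrete, here is a minimal sketch of kinship-weighted rewards; it is repo-independent and all names are hypothetical:

```python
# Hypothetical sketch: each agent's effective reward is a kinship-weighted
# average of everyone's raw reward. kinship[i, j] >= 0 measures how much
# agent i values agent j's outcomes; rows are normalized to sum to 1.
import numpy as np

def share_rewards(raw: np.ndarray, kinship: np.ndarray) -> np.ndarray:
    weights = kinship / kinship.sum(axis=1, keepdims=True)
    return weights @ raw

raw = np.array([1.0, 0.0, 0.0, 2.0])  # raw per-agent rewards
kin = np.eye(4)                       # start from pure self-interest
kin[0, 1] = kin[1, 0] = 0.8           # agents 0 and 1 are close kin
print(share_rewards(raw, kin))        # agent 0's reward now leaks to agent 1
```

In these terms, teams and hives correspond to block-structured kinship matrices, while random kinship scores correspond to sampling each off-diagonal entry from a distribution.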
Metta aims to create a rich, evolving environment where AI agents can develop general intelligence through continuous learning and adaptation.
The project's modular design and open-source nature make it easy for researchers to adapt and extend the platform to investigate their own hypotheses in this domain. The highly performant, open-ended game rules provide a rich environment for studying these behaviors and their potential implications for AI alignment.
Some areas of research interest:
- Develop rich and diverse gridworld environments with complex dynamics, such as resource systems, agent diversity, procedural terrain generation, support for various environment types, population dynamics, and kinship schemes.
- Incorporate techniques like dense learning signals, surprise minimization, exploration strategies, and blending reinforcement and imitation learning.
- Investigate scalable training approaches, including distributed reinforcement learning, student-teacher architectures, and blending reinforcement learning with imitation learning, to enable efficient training of large-scale multi-agent systems.
- Design and implement a comprehensive suite of intelligence evaluations for gridworld agents, covering navigation tasks, maze solving, in-context learning, cooperation, and competition scenarios.
- Develop tools and infrastructure for efficient management, tracking, and deployment of experiments, such as cloud cluster management, experiment tracking and visualization, and continuous integration and deployment pipelines.
This README provides only a brief overview of research explorations. Visit the research roadmap for more details.
Clone the repository and run the setup:
```
git clone https://github.com/Metta-AI/metta.git
cd metta
./install.sh  # Interactive setup - installs uv, configures metta, and installs components
```
After installation, you can use metta commands directly:
```
metta status     # Check component status
metta install    # Install additional components
metta configure  # Reconfigure for a different profile
```
The installer also supports setup profiles:
```
./install.sh --profile=softmax   # For Softmax employees
./install.sh --profile=external  # For external collaborators
./install.sh --help              # Show all available options
```
The repository contains command-line tools in the `tools/` directory. `run.py` is a script that kicks off tasks like training, evaluation, and visualization. The runner looks up the task, builds its configuration, and runs it. The currently available tasks are:
- `experiments.recipes.arena.train`: Train on the arena curriculum

  ```
  ./tools/run.py experiments.recipes.arena.train --args run=my_experiment
  ```

- `experiments.recipes.navigation.train`: Train on the navigation curriculum

  ```
  ./tools/run.py experiments.recipes.navigation.train --args run=my_experiment
  ```

- `experiments.recipes.arena.play`: Play in the browser

  ```
  ./tools/run.py experiments.recipes.arena.play
  ```

- `experiments.recipes.arena.replay`: Replay a single episode from a saved policy

  ```
  ./tools/run.py experiments.recipes.arena.replay --overrides policy_uri=wandb://run/local.alice.1
  ```

- `experiments.recipes.arena.evaluate`: Evaluate a policy on the arena eval suite

  ```
  ./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1
  ```

- Dry run: print the resolved config without executing it

  ```
  ./tools/run.py experiments.recipes.arena.train --args run=my_experiment --dry-run
  ```
Use the runner like this:
```
./tools/run.py <task_name> [--args key=value ...] [--overrides path.to.field=value ...] [--dry-run]
```
- `task_name`: a Python-style path to a task (for example, `experiments.recipes.arena.train`).
- `--args`: name=value pairs passed to the task function (these become constructor args of the Tool it returns).
  - Types: integers (`42`), floats (`0.1`), booleans (`true`/`false`), and strings.
  - Multiple args: add more pairs separated by spaces.
  - Example: `--args run=local.alice.1`
- `--overrides`: update fields inside the returned Tool configuration using dot paths.
  - Common fields: `system.device=cpu`, `wandb.enabled=false`, `trainer.total_timesteps=100000`, `trainer.rollout_workers=4`, `policy_uri=wandb://run/<name>` (for replay/eval).
  - Multiple overrides: add more pairs separated by spaces.
  - Example: `--overrides system.device=cpu wandb.enabled=false`
- `--dry-run`: print the fully resolved configuration as JSON and exit without running.
Quick examples:
```
# Faster local run on CPU, less logging
./tools/run.py experiments.recipes.arena.train \
  --args run=local.alice.1 \
  --overrides system.device=cpu wandb.enabled=false trainer.total_timesteps=100000

# Evaluate a specific policy URI on the arena suite
./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1
```
Tips:
- Strings with spaces: quote the value, for example `notes="my local run"`.
- Booleans are lowercase: `true` and `false`.
- If a value looks numeric but should be a string, wrap it in quotes (for example, `run="001"`).
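A toy sketch of these coercion rules (hypothetical; the runner's actual parser may behave differently):

```python
# Hypothetical sketch of how name=value strings could be coerced:
# lowercase booleans first, then int, then float, else keep the string.
def coerce(value: str):
    if value in ("true", "false"):
        return value == "true"
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        return value

assert coerce("42") == 42
assert coerce("0.1") == 0.1
assert coerce("false") is False
assert coerce("local.alice.1") == "local.alice.1"
assert coerce("001") == 1  # why a value like 001 needs quoting to stay a string
```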
A “task” is just a Python function (or class) that returns a Tool configuration. The runner loads it by name and runs its `invoke()` method.
What you write:
- A function that returns a Tool, for example `TrainTool`, `SimTool`, `PlayTool`, or `ReplayTool`.
- Place it anywhere importable (for personal use, `experiments/user/<your_file>.py` is convenient).
- The function name becomes part of the task name you run.
Minimal example:
```python
# experiments/user/my_tasks.py
from metta.mettagrid.config.envs import make_arena
from metta.rl.trainer_config import EvaluationConfig, TrainerConfig
from metta.sim.simulation_config import SimulationConfig
from metta.tools.train import TrainTool


def my_train(run: str = "local.me.1") -> TrainTool:
    # Periodically evaluate on a small 4-agent arena during training.
    trainer = TrainerConfig(
        evaluation=EvaluationConfig(
            simulations=[SimulationConfig(name="arena/basic", env=make_arena(num_agents=4))]
        )
    )
    return TrainTool(trainer=trainer, run=run)
```
Run your task:
```
./tools/run.py experiments.user.my_tasks.my_train --args run=local.me.2 \
  --overrides system.device=cpu wandb.enabled=false
```
Notes:
- Tasks can also be Tool classes (subclasses of `metta.common.config.tool.Tool`). The runner will construct them with `--args` and then apply `--overrides` (see the sketch after this list).
- Use `--dry-run` while developing to see the exact configuration your task produces.
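For the class-based form, a hedged sketch (field handling and the `invoke()` signature are assumptions here; check `metta.common.config.tool.Tool` for the real contract):

```python
# experiments/user/my_tool.py -- hypothetical sketch. Assumes Tool is a
# config-style base class whose fields are filled from --args/--overrides
# and whose invoke() the runner calls.
from metta.common.config.tool import Tool

class GreetTool(Tool):
    name: str = "world"  # settable via --args name=...

    def invoke(self) -> None:
        print(f"hello, {self.name}")

# Run with: ./tools/run.py experiments.user.my_tool.GreetTool --args name=metta
```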
To use WandB with your personal account:
- Get your WandB API key from wandb.ai (click your profile → API keys)
- Add it to your `~/.netrc` file:

  ```
  machine api.wandb.ai login user password YOUR_API_KEY_HERE
  ```

- Edit `configs/wandb/external_user.yaml` and replace `???` with your WandB username:

  ```
  entity: ??? # Replace with your WandB username
  ```
Now you can run training with your personal WandB config:
```
./tools/run.py experiments.recipes.arena.train --args run=local.yourname.123 --overrides wandb.enabled=true wandb.entity=<your_user>
```
Mettascope allows you to run and view episodes in the environment you specify. It goes beyond spectator mode, letting you take over an agent and control it manually.
For more information, see `./mettascope/README.md`.
```
./tools/run.py experiments.recipes.arena.play
```
Optional overrides:
- `policy_uri=<path>`: Use a specific policy for NPC agents.
  - Local checkpoints: `file://./train_dir/<run>/checkpoints`
  - WandB artifacts: `wandb://run/<run_name>`
```
./tools/run.py experiments.recipes.arena.replay --overrides policy_uri=wandb://run/local.alice.1
```
If you train with WandB enabled, results for the eval suites appear on your WandB run page. (This does not apply to anything trained before April 8th.)
To compare policies after training, run the evaluation suites directly.
Evaluate a policy against the arena eval suite:
```
./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1
```
Evaluate on the navigation eval suite (provide the policy URI):
```
./tools/run.py experiments.recipes.navigation.eval --overrides policy_uris=wandb://run/local.alice.1
```
This repo implements a `MettaAgent` policy class. The underlying network is parameterized by config files in `configs/agent` (with `configs/agent/fast.yaml` used by default). See `configs/agent/reference_design.yaml` for an explanation of the config structure, and this wiki section for further documentation.
To use `MettaAgent` with a non-default architecture config:
- (Optional) Create your own configuration file, e.g. `configs/agent/my_agent.yaml`.
- Run with the configuration file of your choice:

  ```
  ./tools/run.py experiments.recipes.arena.train --overrides policy_architecture.agent_config=my_agent
  ```
We also support agent architectures that don't use the MettaAgent system:
- Implement your agent class under `metta/agent/src/metta/agent/pytorch/my_agent.py`. See `metta/agent/src/metta/agent/pytorch/fast.py` for an example, and the hedged sketch after this list.
- Register it in `metta/agent/src/metta/agent/pytorch/agent_mapper.py` by adding an entry to `agent_classes` with a key name (e.g., `"my_agent"`).
- Select it at runtime using the runner and an override on the agent config name:

  ```
  ./tools/run.py experiments.recipes.arena.train --overrides policy_architecture.name=pytorch/my_agent
  ```
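As a starting point for the first step, a hedged sketch of a minimal policy network; the observation encoding, action space, and required method names are assumptions here, so mirror `metta/agent/src/metta/agent/pytorch/fast.py` for the actual interface:

```python
# metta/agent/src/metta/agent/pytorch/my_agent.py -- hypothetical sketch.
import torch
import torch.nn as nn

class MyAgent(nn.Module):
    def __init__(self, obs_dim: int = 64, num_actions: int = 9):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, num_actions)  # action logits
        self.value_head = nn.Linear(128, 1)             # state-value estimate

    def forward(self, obs: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h)
```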
Further updates to support bringing your own agent are coming soon.
To run the style checks and tests locally:
```
ruff format
ruff check
pyright metta  # optional, some stubs are missing
pytest
```
| Task | Command |
| --- | --- |
| Train (arena) | `./tools/run.py experiments.recipes.arena.train --args run=my_experiment` |
| Train (navigation) | `./tools/run.py experiments.recipes.navigation.train --args run=my_experiment` |
| Play (browser) | `./tools/run.py experiments.recipes.arena.play` |
| Replay (policy) | `./tools/run.py experiments.recipes.arena.replay --overrides policy_uri=wandb://run/local.alice.1` |
| Evaluate (arena) | `./tools/run.py experiments.recipes.arena.evaluate --args policy_uri=wandb://run/local.alice.1` |
| Evaluate (navigation suite) | `./tools/run.py experiments.recipes.navigation.eval --overrides policy_uris=wandb://run/local.alice.1` |
| Dry-run (print config) | `./tools/run.py experiments.recipes.arena.train --args run=my_experiment --dry-run` |
Running these commands mirrors our CI configuration and helps keep the codebase consistent.