🧠 Active InferAnts

A Multi-Language Active Inference Framework for Advanced AI Research and Applications


Welcome to Active InferAnts - a comprehensive, multi-language framework that implements Active Inference algorithms across 32+ programming languages. This project serves as both a research platform for studying active inference mechanisms and a practical toolkit for building sophisticated AI applications that can learn, adapt, and make decisions in complex environments.


🎯 Overview

Active InferAnts represents a groundbreaking approach to implementing Active Inference algorithms - a mathematical framework for understanding perception, learning, and decision-making in biological and artificial agents. Our system uniquely combines:

  • Multi-Language Implementation: Core Active Inference algorithms implemented in 32+ programming languages
  • Ant Colony Optimization: Nature-inspired optimization strategies integrated with Active Inference principles
  • Modular Architecture: Clean separation of concerns across 6 operational phases
  • Research-to-Production Pipeline: From theoretical models to deployable applications

What is Active Inference?

Active Inference is a mathematical framework that explains how biological agents (including humans) perceive, learn, and act in uncertain environments. It proposes that agents minimize "surprise" by constantly updating their beliefs about the world and taking actions to confirm those beliefs.
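As a toy illustration of this idea (not the framework's own API), the core loop can be reduced to a discrete Bayesian belief update, where "surprise" is the negative log-evidence of an observation under the agent's current beliefs:

```python
import math

def update_belief(prior: dict, likelihood: dict, observation: str) -> dict:
    """Bayesian belief update over discrete hidden states."""
    posterior = {s: prior[s] * likelihood[s][observation] for s in prior}
    evidence = sum(posterior.values())  # p(observation)
    return {s: p / evidence for s, p in posterior.items()}

# Two hidden states: food is to the "left" or "right" of the agent
prior = {"left": 0.5, "right": 0.5}
# p(observation | state): scent is usually detected on the correct side
likelihood = {
    "left":  {"scent_left": 0.8, "scent_right": 0.2},
    "right": {"scent_left": 0.2, "scent_right": 0.8},
}

# Surprise is evaluated under the current beliefs, before the update
surprise = -math.log(sum(prior[s] * likelihood[s]["scent_left"] for s in prior))
posterior = update_belief(prior, likelihood, "scent_left")
# posterior shifts toward "left": {'left': 0.8, 'right': 0.2}
# surprise = -log(0.5) ≈ 0.693
```

Acting to confirm beliefs then amounts to choosing the action whose expected observations minimize this surprise term.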

Why Active InferAnts?

Traditional AI approaches often separate perception, learning, and action. Active Inference unifies these processes under a common mathematical framework, enabling more robust, adaptable, and biologically-plausible AI systems.

Key Innovation: Multi-Language Approach

By implementing the same algorithms in multiple programming languages, we ensure:

  • Algorithm Correctness: Cross-validation across language implementations
  • Performance Benchmarking: Direct comparison of language capabilities
  • Accessibility: Choose the best language for your specific use case
  • Educational Value: Learn Active Inference concepts across different programming paradigms
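Cross-validation between ports can be sketched as a simple tolerance comparison, assuming each implementation can emit its free-energy trace as a list of numbers (the helper and the traces below are illustrative, not part of the actual test suite):

```python
import math

def traces_agree(reference, candidate, tol=1e-6):
    """True when two free-energy traces match elementwise within tolerance."""
    return len(reference) == len(candidate) and all(
        math.isclose(a, b, abs_tol=tol) for a, b in zip(reference, candidate)
    )

# Hypothetical traces produced by two language ports of the same algorithm
python_trace = [1.000000, 0.613705, 0.421132]
rust_trace   = [1.000000, 0.613705, 0.421133]  # last digit differs slightly

assert traces_agree(python_trace, rust_trace, tol=1e-5)       # close enough
assert not traces_agree(python_trace, rust_trace, tol=1e-7)   # too strict
```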

✨ Key Features

🤖 Core AI Capabilities

  • Advanced Active Inference: State-of-the-art implementations of variational message passing and belief propagation
  • Ant Colony Optimization: Nature-inspired algorithms for complex optimization problems
  • Multi-Agent Systems: Distributed inference across multiple agents with pheromone-based communication
  • Adaptive Learning: Real-time belief updating and policy optimization
  • Uncertainty Quantification: Robust handling of environmental uncertainty and sensor noise

πŸ—οΈ Architecture & Design

  • Modular Pipeline: 6-phase operational framework (Prepare → Operate → Measure → Report → Follow-up → API)
  • Multi-Language Support: 32+ programming languages with consistent APIs
  • Plugin Architecture: Extensible design for custom algorithms and integrations
  • Configuration Management: Flexible parameter system with JSON-based configuration
  • Security-First: Built-in encryption, hashing, and secure communication utilities

🔧 Technical Features

  • High Performance: Parallel processing and GPU acceleration support
  • Cross-Platform: Runs on Linux, macOS, Windows, and cloud environments
  • Real-Time Processing: Low-latency inference for time-critical applications
  • Scalable Architecture: From edge devices to distributed cloud deployments
  • Memory Efficient: Optimized data structures for large-scale problems

📊 Analysis & Visualization

  • Rich Visualizations: Interactive plots for belief states, free energy landscapes, and agent behavior
  • Performance Monitoring: Real-time metrics and comprehensive benchmarking tools
  • Debugging Support: Detailed logging and state inspection capabilities
  • Report Generation: Automated generation of analysis reports and performance summaries

🔌 Integration Ecosystem

  • REST APIs: FastAPI-based knowledge management and inference services
  • Database Support: Multi-database architecture (PostgreSQL, MongoDB, Redis, Neo4j, Elasticsearch)
  • Third-Party Integrations: BPMN, Coda, ActivityPub, Nostr, Kafka, and more
  • Container Ready: Docker support for easy deployment and scaling

🚀 Quick Start

Get up and running with Active InferAnts in under 5 minutes:

Option 1: Run All Language Implementations

# Clone the repository
git clone https://github.com/ActiveInferenceInstitute/ActiveInferAnts.git
cd ActiveInferAnts

# Set up environment and run all implementations
python3 0_CONTEXT/Computer_Languages/master_controller.py setup
python3 0_CONTEXT/Computer_Languages/master_controller.py run

Option 2: Run Python Implementation Only

# Basic Active Inference example
from active_infer_ants import InferenceModel

# Initialize with default configuration
model = InferenceModel()

# Run inference for 1000 iterations
results = model.run(max_iterations=1000)

# Visualize results
model.visualize(results)

Option 3: Start the Knowledge API

# Start the FastAPI knowledge management service
cd 6_API && python3 Knowledge_API.py

# API will be available at http://localhost:8000
# Interactive docs at http://localhost:8000/api/docs

Option 4: Run Benchmarks

# Run comprehensive performance benchmarks
python3 0_CONTEXT/Computer_Languages/master_controller.py benchmark

# View status dashboard
python3 0_CONTEXT/Computer_Languages/master_controller.py status

📦 Installation

Prerequisites

  • Python 3.8+ (for core functionality and master controller)
  • Git (for cloning and version control)
  • Language Toolchains (optional: compilers and interpreters for the 32+ multi-language implementations)

Core Installation

# Clone the repository
git clone https://github.com/ActiveInferenceInstitute/ActiveInferAnts.git
cd ActiveInferAnts

# Install Python dependencies
pip install -r requirements.txt

# Optional: Install development dependencies
pip install -r requirements-dev.txt

Multi-Language Setup

For full multi-language support, install the required compilers and interpreters:

# Use the automated setup script
python3 0_CONTEXT/Computer_Languages/master_controller.py setup

# Or manually install language-specific dependencies
python3 0_CONTEXT/Computer_Languages/config_manager.py --all

Docker Installation

# Build the Docker image
docker build -t active-inferants .

# Run the container
docker run -p 8000:8000 active-inferants

Development Installation

# Install in development mode
pip install -e .

# Install pre-commit hooks
pre-commit install

# Set up all language environments
./0_CONTEXT/Computer_Languages/run_all.sh --setup

System Requirements

  • Minimum: 4GB RAM, 2GB disk space
  • Recommended: 16GB RAM, 10GB disk space for full multi-language setup
  • GPU: Optional, CUDA-compatible GPU for accelerated computations

Dependencies Overview

  • Core: NumPy, SciPy, PyTorch
  • APIs: FastAPI, uvicorn, SQLAlchemy
  • Databases: PostgreSQL, MongoDB, Redis, Neo4j, Elasticsearch
  • Visualization: Matplotlib, Plotly, Seaborn
  • Security: cryptography, bcrypt, PyJWT

πŸ› οΈ Usage

Basic Active Inference

from active_infer_ants import ActiveInferenceAgent, Environment

# Create an environment
env = Environment(config={"complexity": 3, "uncertainty": 0.2})

# Initialize an Active Inference agent
agent = ActiveInferenceAgent(
    sensory_precision=5,
    prior_precision=2,
    learning_rate=0.1
)

# Run inference loop
for iteration in range(1000):
    # Sense the environment
    observation = env.observe()

    # Update beliefs and plan actions
    action = agent.infer(observation)

    # Execute action and get reward
    reward = env.step(action)

    # Learn from the experience
    agent.learn(reward)

# Visualize final beliefs
agent.visualize_beliefs()

Multi-Agent Simulation

from active_infer_ants import AntColony, PheromoneNetwork

# Create a colony of 50 agents
colony = AntColony(n_agents=50)

# Initialize pheromone communication network
pheromones = PheromoneNetwork(colony.agents)

# Run distributed optimization
for iteration in range(100):
    # Each agent performs active inference
    actions = colony.parallel_inference()

    # Update pheromone trails
    pheromones.update_trails(actions)

    # Agents learn from collective experience
    colony.learn_from_colony(pheromones.get_pheromone_map())

# Analyze emergent behavior
colony.analyze_emergent_behavior()

Using the Master Controller

# Run all language implementations
python3 0_CONTEXT/Computer_Languages/master_controller.py run

# Run specific language implementation
python3 0_CONTEXT/Computer_Languages/master_controller.py run python

# Run comprehensive benchmarks
python3 0_CONTEXT/Computer_Languages/master_controller.py benchmark

# Generate detailed reports
python3 0_CONTEXT/Computer_Languages/master_controller.py report

# View interactive status dashboard
python3 0_CONTEXT/Computer_Languages/master_controller.py status

API Usage

import requests

# Store knowledge
response = requests.post(
    "http://localhost:8000/api/knowledge/",
    json={
        "source": "experiment_001",
        "content": {"accuracy": 0.95, "parameters": {"lr": 0.01}}
    },
    headers={"X-API-Key": "your-secret-key"}
)

# Retrieve knowledge
knowledge = requests.get(
    "http://localhost:8000/api/knowledge/experiment_001",
    headers={"X-API-Key": "your-secret-key"}
).json()

Configuration

import json

from active_infer_ants import ActiveInferenceAgent

# Define configuration inline
config = {
    "max_iterations": 1000,
    "learning_rate": 0.1,
    "exploration_factor": 0.3,
    "visualization_enabled": True,
    "output_directory": "./results"
}

# Or load it from a JSON file
with open('config.json', 'r') as f:
    config = json.load(f)

# Initialize with custom config
agent = ActiveInferenceAgent.from_config(config)

Advanced Examples

For more comprehensive examples, explore the per-language implementations under 0_CONTEXT/Computer_Languages and the usage snippets above.

πŸ—οΈ Project Architecture

Core Operational Pipeline

Active InferAnts follows a 6-phase operational pipeline that transforms theoretical Active Inference models into deployable applications:

graph LR
    A[0_CONTEXT] --> B[1_PREPARE]
    B --> C[2_OPERATE]
    C --> D[3_MEASURE]
    D --> E[4_REPORT]
    E --> F[5_FOLLOWUP]
    F --> G[6_API]

Core Directory Structure

Multi-Language Support

Active InferAnts implements Active Inference algorithms in 32+ programming languages, ensuring:

  • Algorithm Validation: Cross-language verification of mathematical correctness
  • Performance Benchmarking: Direct comparison across language implementations
  • Accessibility: Choose the optimal language for your specific requirements
  • Educational Value: Learn Active Inference across different programming paradigms

Supported Languages:

  • Systems Languages: Rust, C++, C, Zig, Go, Nim, Odin
  • Scientific Computing: Python, R, Julia, MATLAB
  • Functional Languages: Haskell, OCaml, F#, Elixir, Erlang, Clojure
  • Scripting Languages: JavaScript, TypeScript, Ruby, Perl, PHP, Lua
  • Enterprise Languages: Java, C#, Scala, Kotlin
  • Specialized: Assembly, Brainfuck, Jock, V, Prolog, Fortran, Pascal, SQL


🔌 APIs

Knowledge Management API

A comprehensive REST API for managing knowledge across multiple databases with automatic synchronization:

# FastAPI-based service running on port 8000
# Features: Multi-database support, caching, async operations
# Endpoints: CRUD operations, search, analytics

Key Features:

  • Multi-Database Architecture: PostgreSQL, MongoDB, Redis, Neo4j, Elasticsearch
  • Asynchronous Operations: High-performance async/await patterns
  • Auto-Synchronization: Real-time data consistency across databases
  • API Key Authentication: Secure access control
  • Interactive Documentation: Auto-generated OpenAPI/Swagger docs
  • Caching Layer: Redis-based caching for improved performance

Endpoints:

  • POST /api/knowledge/ - Create knowledge entry
  • GET /api/knowledge/{source} - Retrieve knowledge
  • PUT /api/knowledge/{source} - Update knowledge
  • DELETE /api/knowledge/{source} - Delete knowledge
  • GET /api/knowledge/ - List all knowledge entries
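A standard-library sketch of building authenticated requests against these endpoints. The base URL and the X-API-Key header follow the earlier examples; adjust both to your deployment:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"   # default address from the Quick Start
API_KEY = "your-secret-key"          # placeholder; use your real key

def knowledge_request(method, source=None, payload=None):
    """Build an authenticated request for the knowledge endpoints."""
    path = f"/api/knowledge/{source}" if source else "/api/knowledge/"
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        method=method,
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    )

# PUT /api/knowledge/{source} - update an existing entry
req = knowledge_request("PUT", "experiment_001",
                        {"content": {"accuracy": 0.97}})
# With the service running: urllib.request.urlopen(req)
```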

Meta-Information API

Advanced API for managing meta-information about Active Inference processes and agents:

Key Features:

  • Process Tracking: Monitor inference processes in real-time
  • Agent Management: Control and monitor multiple agents
  • Performance Metrics: Real-time performance monitoring
  • Configuration Management: Dynamic parameter adjustment
  • Health Checks: System health and status monitoring

🧪 Testing & Quality Assurance

Comprehensive Test Suite

Active InferAnts includes a sophisticated testing framework that ensures reliability across all implementations:

Key Components:

  • Multi-Language Testing: Automated testing across 32+ programming languages
  • Performance Benchmarking: Cross-language performance comparisons
  • Algorithm Validation: Mathematical correctness verification
  • Integration Testing: End-to-end system validation
  • Continuous Integration: Automated testing pipelines

Test Categories:

  • Unit Tests: Individual algorithm and function testing
  • Integration Tests: Component interaction validation
  • Performance Tests: Benchmarking and profiling
  • Cross-Language Tests: Consistency validation across implementations
  • Regression Tests: Preventing functionality degradation
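For flavour, a unit test in such a suite might check that belief updates always leave a valid probability distribution (the normalization helper here is illustrative, not the framework's actual code):

```python
import math

def normalize(scores):
    """Turn non-negative belief scores into a probability distribution."""
    total = sum(scores)
    if total <= 0:
        raise ValueError("belief mass must be positive")
    return [s / total for s in scores]

def test_beliefs_form_a_distribution():
    beliefs = normalize([0.2, 0.5, 0.8])
    assert math.isclose(sum(beliefs), 1.0, rel_tol=1e-9)  # sums to one
    assert all(b >= 0 for b in beliefs)                   # no negative mass

test_beliefs_form_a_distribution()
```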

Running Tests

# Run all language implementations with testing
python3 0_CONTEXT/Computer_Languages/master_controller.py test

# Run specific language tests
python3 0_CONTEXT/Computer_Languages/master_controller.py test python

# Run the full test suite directly
python3 0_CONTEXT/Computer_Languages/test_suite.py

# View test results and coverage
python3 0_CONTEXT/Computer_Languages/test_suite.py --report

Quality Metrics

  • Code Coverage: >90% across all implementations
  • Performance Consistency: <5% variance across language implementations
  • Algorithm Accuracy: Verified against reference implementations
  • Documentation Coverage: 100% API documentation
  • Security Compliance: Regular security audits and updates

📊 Performance & Benchmarking

Benchmarking Framework

Comprehensive performance analysis across all language implementations:

# Run performance benchmarks
python3 0_CONTEXT/Computer_Languages/master_controller.py benchmark

# Generate performance reports
python3 0_CONTEXT/Computer_Languages/master_controller.py report

# View interactive performance dashboard
bash 0_CONTEXT/Computer_Languages/status_dashboard.sh

Performance Metrics

  • Execution Time: Comparative analysis across languages
  • Memory Usage: Peak and average memory consumption
  • Scalability: Performance scaling with problem size
  • Accuracy: Algorithm correctness and convergence rates
  • Resource Efficiency: CPU and GPU utilization patterns
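A minimal sketch of gathering two of these metrics for a single implementation using only the standard library (the toy workload is illustrative; the project's own harness lives under 0_CONTEXT/Computer_Languages/):

```python
import time
import tracemalloc

def benchmark(fn, *args, repeats=5):
    """Best-of-N wall time plus peak traced memory for one workload."""
    tracemalloc.start()
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"best_seconds": min(times), "peak_bytes": peak}

def toy_inference(n):
    # Stand-in workload: one normalization pass over n belief states
    beliefs = [1.0 / n] * n
    return sum(b * b for b in beliefs)

stats = benchmark(toy_inference, 10_000)
```

Taking the best of several repeats reduces noise from other processes; tracemalloc reports Python-level allocations only, so native-extension memory is not counted.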

Optimization Features

  • Parallel Processing: Multi-core and distributed execution
  • GPU Acceleration: CUDA and OpenCL support where applicable
  • Memory Optimization: Efficient data structures and caching
  • Algorithm Tuning: Automatic parameter optimization
  • Resource Monitoring: Real-time performance tracking

🔧 Development

Development Workflow

# Set up development environment
python3 0_CONTEXT/Computer_Languages/master_controller.py setup

# Install development dependencies
pip install -r requirements-dev.txt

# Run linting and code quality checks
pre-commit run --all-files

# Run tests in watch mode (requires the pytest-watch package)
ptw

# Build documentation
mkdocs build

# Run development server
python3 6_API/Knowledge_API.py

Code Quality Tools

  • Linting: Black, Flake8, MyPy for Python code
  • Security: Bandit for security vulnerability scanning
  • Documentation: Sphinx for API documentation
  • Testing: pytest with coverage reporting
  • CI/CD: GitHub Actions for automated testing and deployment

Contributing Guidelines

  1. Fork and Clone: Fork the repository and create a feature branch
  2. Code Standards: Follow PEP 8 and project-specific guidelines
  3. Testing: Add comprehensive tests for new features
  4. Documentation: Update documentation for any new functionality
  5. Cross-Language Consistency: Ensure implementations work across all supported languages
  6. Performance: Include performance benchmarks for significant changes

Development Commands

# Clean all outputs and caches
python3 0_CONTEXT/Computer_Languages/master_controller.py clean

# Check dependencies
python3 0_CONTEXT/Computer_Languages/config_manager.py --check

# Update all dependencies
python3 0_CONTEXT/Computer_Languages/config_manager.py --update

# Generate comprehensive reports
python3 0_CONTEXT/Computer_Languages/master_controller.py report

🚨 Troubleshooting

Common Issues and Solutions

Installation Issues

Problem: Missing dependencies after installation

# Solution: Run dependency check and installation
python3 0_CONTEXT/Computer_Languages/config_manager.py --all
python3 0_CONTEXT/Computer_Languages/master_controller.py setup

Problem: Permission denied when running scripts

# Solution: Make scripts executable
chmod +x 0_CONTEXT/Computer_Languages/run_all.sh
chmod +x 0_CONTEXT/Computer_Languages/status_dashboard.sh

Runtime Issues

Problem: API server fails to start

# Check database connections (importing alone only verifies the client library)
python3 -c "import redis; redis.Redis().ping(); print('Redis OK')"
python3 -c "import pymongo; pymongo.MongoClient().admin.command('ping'); print('MongoDB OK')"

# Check configuration
cat config.json

Problem: Memory errors during large simulations

# Reduce simulation parameters in your config file (note that JSON itself
# does not allow comments): halve max_iterations and lower parallelism
{
    "max_iterations": 500,
    "memory_limit": "4GB",
    "parallel_processes": 2
}

Multi-Language Issues

Problem: Specific language implementation fails

# Run individual language test
python3 0_CONTEXT/Computer_Languages/master_controller.py run <language>

# Check language-specific dependencies
python3 0_CONTEXT/Computer_Languages/config_manager.py --install <language>

Problem: Performance inconsistency across languages

# Run benchmark comparison
python3 0_CONTEXT/Computer_Languages/master_controller.py benchmark

# Check system resources
bash 0_CONTEXT/Computer_Languages/status_dashboard.sh

Debug Mode

Enable detailed logging for troubleshooting:

# Set debug logging
export LOG_LEVEL=DEBUG
python3 0_CONTEXT/Computer_Languages/master_controller.py run

# View detailed logs
tail -f 0_CONTEXT/Computer_Languages/test_results/test_suite.log
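Inside a Python entry point, the LOG_LEVEL variable set above can be honoured with a few lines of standard logging setup (a sketch; the framework's actual logging configuration may differ):

```python
import logging
import os

# Map the LOG_LEVEL environment variable onto Python's logging levels,
# defaulting to INFO when it is unset or unrecognised.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)

logging.basicConfig(
    level=level,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("active_inferants")
log.debug("belief update step starting")  # visible only at LOG_LEVEL=DEBUG
```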

Getting Help

  1. Check Existing Issues: Search GitHub Issues
  2. Run Diagnostics: Use the built-in status dashboard
  3. Review Documentation: Check detailed docs
  4. Community Support: Join our Discord community

🤝 Contributing

We welcome contributions from researchers, developers, and enthusiasts! Here's how to get involved:

Ways to Contribute

  • πŸ› Bug Reports: Found a bug? Open an issue
  • πŸ’‘ Feature Requests: Have an idea? Submit a feature request
  • πŸ”§ Code Contributions: Ready to code? See our development workflow below
  • πŸ“š Documentation: Help improve documentation and tutorials
  • πŸ§ͺ Testing: Add test cases or improve test coverage
  • 🌐 Language Ports: Implement Active Inference in a new programming language

Development Workflow

  1. Fork and Clone

    git clone https://github.com/your-username/ActiveInferAnts.git
    cd ActiveInferAnts
    git checkout -b feature/your-amazing-feature
  2. Set Up Development Environment

    python3 0_CONTEXT/Computer_Languages/master_controller.py setup
    pip install -r requirements-dev.txt
    pre-commit install
  3. Make Your Changes

    • Follow our coding standards
    • Add comprehensive tests
    • Update documentation
    • Ensure cross-language consistency
  4. Test Your Changes

    # Run tests
    python3 0_CONTEXT/Computer_Languages/master_controller.py test
    
    # Run benchmarks to ensure no performance regression
    python3 0_CONTEXT/Computer_Languages/master_controller.py benchmark
    
    # Check code quality
    pre-commit run --all-files
  5. Submit Your Contribution

    git add .
    git commit -m "feat: add amazing new feature"
    git push origin feature/your-amazing-feature

    Then create a pull request

Contribution Guidelines

  • Code Standards: Follow PEP 8 for Python, and equivalent standards for other languages
  • Testing: Maintain >90% test coverage for new code
  • Documentation: Update relevant documentation for any new functionality
  • Performance: Include benchmarks for performance-critical changes
  • Cross-Language: Ensure new features work across supported languages
  • Security: Follow security best practices and run security checks

Recognition

Contributors are recognized through:

  • Author credits in release notes
  • Contributor spotlight in our newsletter
  • Exclusive contributor swag
  • Speaking opportunities at conferences


📄 License

Active InferAnts is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. This license allows you to:

  • βœ… Share: Copy and redistribute the material in any medium or format for non-commercial purposes only
  • βœ… Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made

Restrictions

  • ❌ No Commercial Use: You may not use the material for commercial purposes
  • ❌ No Derivatives: You may not remix, transform, or build upon the material
  • ❌ No Additional Restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits

Attribution Requirements

When using or sharing this work, you must:

  1. Credit: Provide attribution to "Active InferAnts" and link to the original repository
  2. License Notice: Include the license text or a link to the license
  3. Indicate Changes: Clearly indicate if you made any changes to the material
  4. Link: Include a URI or hyperlink to the material to the extent reasonably practicable

Example Attribution

"Active InferAnts" by Active Inference Institute (@docxology unless otherwise specified) is licensed under CC BY-NC-ND 4.0

Third-Party Licenses

This project includes components with the following licenses:

  • Python Dependencies: Various open-source licenses (see requirements.txt)
  • Multi-Language Runtimes: Respective language runtime licenses
  • Database Components: PostgreSQL (PostgreSQL License), MongoDB (SSPL), etc.

Important Note: The CC BY-NC-ND 4.0 license applies to the Active InferAnts framework and documentation. Third-party components may have different licenses that allow more permissive use. Always check individual component licenses for redistribution rights.

For detailed license information, see LICENSE and Third-Party Licenses.

πŸ™ Acknowledgments

Research Foundations

Active InferAnts builds upon groundbreaking research in Active Inference and swarm intelligence:

  • Active Inference Theory: Karl Friston, Rafal Bogacz, and the broader Active Inference research community
  • Ant Colony Optimization: Marco Dorigo, Thomas Stützle, and swarm intelligence researchers
  • Free Energy Principle: Foundational work on predictive coding and active inference
  • Multi-Language Research: Cross-language algorithm validation and performance analysis

Technical Contributors

Special thanks to our core development team and contributors who have made this project possible through their expertise in:

  • Machine Learning & AI: Advanced algorithm implementation and optimization
  • Multi-Language Development: Cross-platform implementation and maintenance
  • Systems Architecture: Scalable system design and performance engineering
  • Research Software Engineering: Best practices in scientific software development

Community & Support

We gratefully acknowledge:

  • Beta Testers: Early adopters who provided valuable feedback
  • Code Contributors: Developers who contributed implementations and improvements
  • Research Collaborators: Academic partners who validated our approaches
  • Open Source Community: The broader community enabling this work

Funding & Support

This project has been supported by:

  • Active Inference Institute: Research funding and infrastructure
  • Open Source Grants: Community contributions and sponsorships
  • Academic Partnerships: Collaborative research initiatives

📞 Contact & Community


Support Channels

| Channel | Purpose | Response Time |
|---------|---------|---------------|
| 🐛 GitHub Issues | Bug reports & technical issues | 24-48 hours |
| 💬 GitHub Discussions | Questions & community support | 12-24 hours |
| 💻 Discord | Real-time chat & community | Immediate |
| 📧 Email | Business & partnership inquiries | 1-2 business days |

Community Guidelines

  • Be Respectful: Maintain a welcoming environment for all participants
  • Stay On Topic: Keep discussions relevant to Active Inference and related topics
  • Share Knowledge: Help others learn and contribute to the community
  • Follow Code of Conduct: Adhere to our Community Code of Conduct

Research Collaboration

We're always interested in collaborating with:

  • Research Institutions: Joint research projects and publications
  • Industry Partners: Real-world applications and deployments
  • Educational Organizations: Curriculum development and teaching resources
  • Open Source Projects: Integration and cross-project collaboration

🧠 Active InferAnts - Bridging the gap between theoretical Active Inference and practical AI applications through multi-language implementation and rigorous validation.

Built with ❤️ by the Active Inference research community
