
Novel blockchain consensus mechanism replacing energy-intensive mining with productive federated learning. Miners collaborate to train AI models, with winners selected through democratic voting.


amirrezaskh/Proof-of-Collaborative-Learning


Proof of Collaborative Learning: A Multi-winner Federated Learning Consensus Mechanism


Overview

This repository implements PoCL (Proof of Collaborative Learning), a novel blockchain consensus mechanism that replaces energy-intensive mining with federated learning. Instead of solving cryptographic puzzles, miners collaboratively train a global deep learning model, with winners selected based on model performance through a democratic voting system.

Key Innovation

PoCL transforms blockchain mining from a wasteful competition into a productive collaboration where:

  • 🧠 Miners train ML models instead of computing meaningless hashes
  • 🗳️ Democratic voting determines winners based on model quality
  • 🎁 Performance-based rewards incentivize honest participation
  • 📊 Global model improvement benefits all participants

Design

πŸ—οΈ System Architecture

Core Components

| Component | Purpose | Technology |
| --- | --- | --- |
| Miners | Train models on local data, participate in consensus | Python + TensorFlow |
| Blockchain Network | Immutable ledger for transactions and consensus | Hyperledger Fabric |
| Express Applications | API gateway and process coordination | Node.js + Express |
| Aggregator | Combine winning models using FedAvg | Python + Flask |
| Chaincodes | Smart contracts for different system functions | JavaScript |

Workflow

```mermaid
graph TD
    A[Transaction Assignment] --> B[Local Model Training]
    B --> C[Model Proposal + Test Data]
    C --> D[Cross-Prediction Phase]
    D --> E[Voting on Performance]
    E --> F[Winner Selection]
    F --> G[Model Aggregation]
    G --> H[Reward Distribution]
    H --> A
```
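Each phase in this loop runs against a fixed deadline (listed under System Parameters below). As a rough sketch, a per-phase driver with deadline checking might look like the following; `run_phase` and `DEADLINES` are illustrative names, and the real coordination lives in app1.js and the Fabric chaincodes:

```python
import time

# Illustrative phase deadlines in seconds (mirroring System Parameters);
# aggregation has no fixed deadline, so it falls through to infinity.
DEADLINES = {"training": 180, "prediction": 15, "voting": 15}

def run_phase(name, work):
    """Run one phase callable and report whether it met its deadline."""
    start = time.monotonic()
    result = work()
    elapsed = time.monotonic() - start
    on_time = elapsed <= DEADLINES.get(name, float("inf"))
    return result, on_time
```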

🚀 Quick Start

Prerequisites

  • Docker & Docker Compose: For Hyperledger Fabric network
  • Node.js 16+: For Express applications
  • Python 3.8+: For miners and aggregator
  • TensorFlow 2.x: For deep learning models

Installation

  1. Install Hyperledger Fabric

    curl -sSL https://bit.ly/2ysbOFE | bash -s
  2. Install Python Dependencies

    pip install tensorflow flask requests numpy scikit-learn matplotlib
  3. Install Node.js Dependencies

    cd express-application
    npm install

Running the System

  1. Start the Complete System

    python3 run.py

    This automatically:

    • Deploys the Hyperledger Fabric network
    • Starts all Express applications
    • Launches 10 miners
    • Initializes the aggregator and submitter
    • Begins federated learning rounds
  2. Monitor Progress

    # Real-time monitoring
    tail -f logs/*.txt
    
    # Check specific components
    tail -f logs/app1.txt      # Admin coordination
    tail -f logs/miner1.txt    # Individual miner
    tail -f logs/aggregator.txt # Model aggregation
  3. Stop the System

    python3 stop.py

📊 System Parameters

| Parameter | Value | Description |
| --- | --- | --- |
| Miners | 10 | Number of federated learning participants |
| Winners per Round | 5 | Top performers selected for rewards |
| Total Rounds | 20 | Complete federated learning experiment |
| Training Time | 3 minutes | Maximum time for local model training |
| Prediction Time | 15 seconds | Time to predict on others' test data |
| Voting Time | 15 seconds | Time to submit performance votes |
| Dataset | CIFAR-10 | Image classification benchmark |

🔄 Consensus Process

Phase 1: Training (180 seconds)

  • Miners receive demo transactions to process
  • Train global CNN model on local CIFAR-10 data partitions
  • Submit trained model hash and test data samples

Phase 2: Prediction (15 seconds)

  • Each miner receives test data from all other miners
  • Makes predictions using its own trained model
  • Submits the predictions to the blockchain

Phase 3: Voting (15 seconds)

  • Each miner evaluates the received predictions against its own test data
  • Ranks the other miners by accuracy and speed
  • Submits its votes to the blockchain
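One way to turn these per-miner rankings into a winner set is Borda-style tallying, where every voter's ranking carries equal weight. The sketch below is a minimal illustration under that assumption; `select_winners` is a hypothetical helper, and the paper's exact scoring rule may differ:

```python
from collections import defaultdict

def select_winners(votes, n_winners=5):
    """Tally per-miner rankings and pick the top performers.

    votes: dict mapping voter id -> list of miner ids, best first.
    Each position contributes Borda-style points (best rank earns the
    most), so every voter has equal influence on the outcome.
    """
    scores = defaultdict(float)
    for voter, ranking in votes.items():
        for position, miner in enumerate(ranking):
            scores[miner] += len(ranking) - position
    # Sort by score descending; break ties deterministically by id.
    ranked = sorted(scores, key=lambda m: (-scores[m], m))
    return ranked[:n_winners]
```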

Phase 4: Aggregation

  • Select top 5 miners based on vote aggregation
  • Combine winning models using FedAvg algorithm
  • Distribute rewards proportional to contribution
  • Update global model for next round
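The FedAvg step itself reduces to an (optionally data-size-weighted) average of the winners' layer weights. A minimal NumPy sketch, assuming each model is handed over as a list of per-layer weight arrays:

```python
import numpy as np

def fedavg(weight_sets, data_sizes=None):
    """Average corresponding weight tensors from the winning models.

    weight_sets: list of models, each a list of NumPy arrays (one per
    layer). data_sizes: optional per-model sample counts for weighted
    averaging; plain unweighted FedAvg when omitted.
    """
    if data_sizes is None:
        coeffs = np.full(len(weight_sets), 1.0 / len(weight_sets))
    else:
        coeffs = np.asarray(data_sizes, dtype=float)
        coeffs /= coeffs.sum()
    # Combine layer i across all models with the chosen coefficients.
    return [
        sum(c * layers[i] for c, layers in zip(coeffs, weight_sets))
        for i in range(len(weight_sets[0]))
    ]
```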

πŸ›οΈ Project Structure

β”œβ”€β”€ πŸ“ clients/                    # Federated learning participants
β”‚   β”œβ”€β”€ πŸ“ miner/                 # 10 individual miners + analysis tools
β”‚   β”œβ”€β”€ πŸ“ aggregator/            # FedAvg model combination service  
β”‚   β”œβ”€β”€ πŸ“ global model/          # Shared CNN model architecture
β”‚   └── πŸ“ submitter/             # Transaction generation service
β”œβ”€β”€ πŸ“ *-coin-transfer/           # Blockchain transaction chaincodes
β”‚   β”œβ”€β”€ πŸ“ demo-coin-transfer/    # Demo transactions for mining
β”‚   └── πŸ“ main-coin-transfer/    # Main cryptocurrency operations
β”œβ”€β”€ πŸ“ *-propose/                 # Consensus mechanism chaincodes
β”‚   β”œβ”€β”€ πŸ“ model-propose/         # Model submission handling
β”‚   β”œβ”€β”€ πŸ“ prediction-propose/    # Cross-prediction management
β”‚   └── πŸ“ vote-assign/           # Democratic voting system
β”œβ”€β”€ πŸ“ express-application/       # API gateways and coordination
β”œβ”€β”€ πŸ“ test-network/              # Hyperledger Fabric blockchain
β”œβ”€β”€ πŸ“ logs/                      # Real-time system monitoring
β”œβ”€β”€ πŸ“ results/                   # Experimental results and analysis
β”œβ”€β”€ πŸ“ figures/                   # Architecture diagrams
β”œβ”€β”€ 🐍 run.py                     # Complete system startup
└── 🐍 stop.py                    # Graceful system shutdown

🔒 Security Features

Consensus Security

  • Byzantine Tolerance: Tolerates up to 1/3 malicious miners
  • Democratic Voting: Equal voting weight prevents centralization
  • Model Integrity: Cryptographic hash verification
  • Transparent Auditing: All decisions recorded on immutable blockchain

Attack Resistance

  • KNN Attack Detection: Identifies miners using simple algorithms instead of deep learning
  • Vote Validation: Prevents invalid or duplicate votes
  • Deadline Enforcement: Prevents unlimited computation time
  • Performance Verification: Cross-validation ensures honest reporting

📈 Performance Results

Training Performance

  • Convergence: Models converge within 10-20 epochs per round
  • Accuracy: Validation accuracy reaches 70-80% on CIFAR-10
  • Efficiency: Complete consensus round in ~4 minutes
  • Scalability: Successfully tested with 10 miners, extensible to more

Consensus Quality

  • Participation: 90-100% miners participate in each round
  • Fairness: Rewards distributed based on actual contribution
  • Stability: Consistent winner selection across rounds
  • Attack Resilience: 100% detection rate for adversarial miners

Generate Results

cd clients/miner
python3 training_results.py    # Training performance plots
python3 datasize_winners.py    # Data size vs winning analysis

🧪 Experimental Features

Attack Simulation

Test system robustness against adversarial miners:

# In miner1.py and miner6.py, uncomment:
# self.model = KNNClassifier()  # Instead of CNN training

Data Distribution Studies

  • Heterogeneous Data: Miners have different amounts of training data
  • Two Strategies: Decreasing vs grouped data distribution
  • Impact Analysis: Correlation between data size and winning frequency

Consensus Variations

  • Winner Count: Adjustable from 1 to 9 miners
  • Voting Algorithms: Performance + speed vs accuracy-only
  • Aggregation Methods: FedAvg vs other federated learning algorithms

🔧 Configuration

System Scaling

To add more miners:

  1. Copy miner1.py to miner11.py (or higher)
  2. Add port 8010 to app1.js miners list
  3. Update run.py to start the new miner
  4. Adjust total_miners parameter in miner configuration

Network Customization

  • Modify app1.js: Adjust timing, winner count, round number
  • Update miner.py: Change data distribution or model architecture
  • Configure aggregator.py: Implement different aggregation algorithms

📚 Research Applications

Academic Research

  • Federated Learning: Novel consensus mechanism research
  • Blockchain: Energy-efficient mining alternatives
  • Machine Learning: Collaborative training in adversarial environments
  • Distributed Systems: Byzantine fault tolerance in ML systems

Industry Applications

  • Healthcare: Collaborative medical AI without data sharing
  • Finance: Fraud detection across institutions
  • IoT: Edge device collaboration for smart cities
  • Privacy: Machine learning with preserved data locality

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details on:

  • Setting up development environment
  • Code style and testing requirements
  • Submitting pull requests
  • Reporting issues and bugs

Development Setup

# Clone repository
git clone https://github.com/your-org/FL-Validated-Learning.git
cd FL-Validated-Learning

# Install development dependencies
pip install -r requirements-dev.txt
npm install    # devDependencies are installed by default

# Run tests
python -m pytest tests/
npm test

📄 Citation

If you use this work in your research, please cite our paper:

@inproceedings{sokhankhosh2024proof,
  title={Proof-of-Collaborative-Learning: A Multi-winner Federated Learning Consensus Algorithm},
  author={Sokhankhosh, Amirreza and Rouhani, Sara},
  booktitle={2024 IEEE International Conference on Blockchain (Blockchain)},
  pages={370--377},
  year={2024},
  organization={IEEE}
}

📞 Support and Contact

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

⭐ Star this repository if you find it useful for your research or projects!
