This repository contains a flexible simulation framework for Electric Vehicle (EV) Charging Clusters (EVCCs). EVCCs are large-scale EV-charging-enabled parking lots. Examples include workplace charging facilities, destination parking lots (e.g., mall, supermarket or gym parking garages) or fleet depots.
EVCCs are expected to become a core component of the future charging portfolio, by some estimates outweighing home charging in importance. Planning (sizing) and operating such EVCCs is a non-trivial task with three-way interdependencies between (1) user preferences, (2) infrastructure decisions and (3) operations management.
This simulation is intended to explore these interdependencies through extensive sensitivity analysis and by evaluating new algorithms and models for sizing and operating EVCCs. The module structure is as follows:
The EVCC simulation framework is built with a modular, decoupled architecture that separates concerns and enables easy integration with different RL algorithms and libraries.
┌─────────────────────────────────────────────────────────────────┐
│ EVCC Simulation │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────┐ │
│ │ Preferences │ │Infrastructure│ │ Operations │ │ Results │ │
│ │ Module │ │ Module │ │ Module │ │ Module │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ RL Agent Integration │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐ ┌──────────────────┐ ┌─────────────┐ │
│ │ RL Library │ │ Gym Adapter │ │ EVCH Gym │ │
│ │ (Stable-Baselines3, │───▶│ (Standard │───▶│ Environment │ │
│ │ RLlib, etc.) │ │ Interface) │ │ (Wrapper) │ │
│ └─────────────────┘ └──────────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
The following modules are included:

- Preferences Module: Initializes vehicle objects with their respective charging and parking preferences (i.e., requests) based on empirical data
- Infrastructure Module: Initializes infrastructure objects (EV supply equipment (EVSE), connectors per EVSE, grid connection capacity, on-site storage and on-site generation (PV))
- Operations Module: Contains algorithms for assigning physical space (vehicle routing) and electrical capacity (vehicle charging) to individual vehicle objects based on a pre-defined charging policy
- Results Module: Monitors EVCC activity at pre-defined intervals and accounts for costs. Includes plotting routines.
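To make the Preferences Module's role concrete, a vehicle request object might look like the sketch below. This is purely illustrative: the class and field names are assumptions for this example, not the framework's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class VehicleRequest:
    """Illustrative charging/parking request as the Preferences Module might produce it."""
    vehicle_id: str
    arrival: datetime
    departure: datetime
    energy_requested_kwh: float   # energy the driver wants by departure
    max_charging_power_kw: float  # vehicle-side charging limit

    @property
    def parking_duration(self) -> timedelta:
        return self.departure - self.arrival


request = VehicleRequest(
    vehicle_id="EV-001",
    arrival=datetime(2024, 1, 15, 8, 30),
    departure=datetime(2024, 1, 15, 17, 0),
    energy_requested_kwh=25.0,
    max_charging_power_kw=11.0,
)
print(request.parking_duration)  # 8:30:00
```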
The framework now includes a unified agent decision system that ensures ALL decisions in the EV charging operations are made by agents (RL agents, rule-based agents, algorithm agents, etc.) rather than being hardcoded in business logic.
┌─────────────────────────────────────────────────────────────────┐
│ Agent Decision System │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Pricing │ │ Charging │ │ Storage │ │
│ │ Service │ │ Service │ │ Service │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Agent │ │ Agent │ │ Agent │ │
│ │ Decision │ │ Decision │ │ Decision │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ └─────────────────┼────────────────┘ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Agent Decision System │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ RL SAC │ │ Rule-Based │ │ Algorithm │ │ │
│ │ │ Agent │ │ Agent │ │ Agent │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
- PRICING: Energy prices, parking fees, dynamic pricing strategies
- CHARGING: Charging power allocation, schedules, priority assignment
- STORAGE: Energy storage operations, peak shaving, arbitrage
- ROUTING: Vehicle routing, parking allocation, queue management
- VEHICLE_ASSIGNMENT: Charging station assignment, connector allocation
- PARKING_ALLOCATION: Parking space allocation, duration optimization
- GRID_MANAGEMENT: Grid capacity management, load balancing
- DEMAND_FORECASTING: Energy demand prediction, load forecasting
- RL_SAC: Soft Actor-Critic reinforcement learning agent
- RL_DQN: Deep Q-Network reinforcement learning agent
- RL_DDPG: Deep Deterministic Policy Gradient agent
- RULE_BASED: Rule-based agents with predefined strategies
- HEURISTIC: Algorithm agents that wrap existing algorithms
- OPTIMIZATION: Mathematical optimization algorithms
- ML_MODEL: Machine learning models (neural networks, etc.)
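The pairing of decision types and agent types can be pictured as a simple registry that routes each decision to the agent registered for it. The sketch below illustrates the idea only; the class and function names (and the flat-rate pricing agent) are assumptions, not the framework's API.

```python
from enum import Enum, auto


class DecisionType(Enum):
    PRICING = auto()
    CHARGING = auto()
    STORAGE = auto()
    ROUTING = auto()


class Agent:
    """Minimal agent interface: every decision goes through select_action."""

    def select_action(self, observation, context):
        raise NotImplementedError


class FlatRatePricingAgent(Agent):
    """A trivial rule-based agent returning a fixed tariff (illustrative)."""

    def select_action(self, observation, context):
        return {"price_per_kwh": 0.30}


registry: dict[DecisionType, Agent] = {}


def register(decision_type: DecisionType, agent: Agent) -> None:
    registry[decision_type] = agent


def decide(decision_type: DecisionType, observation, context=None):
    # Route every decision through the registered agent -- no hardcoded logic.
    return registry[decision_type].select_action(observation, context)


register(DecisionType.PRICING, FlatRatePricingAgent())
print(decide(DecisionType.PRICING, observation=None))  # {'price_per_kwh': 0.3}
```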
The system includes algorithm agents that wrap all existing charging, routing, and storage algorithms:
- Charging algorithms: `uncontrolled`, `first_come_first_served`, `earliest_deadline_first`, `least_laxity_first`, `equal_sharing`, `online_myopic`, `online_multi_period`, `integrated_storage`, `perfect_info`, `perfect_info_with_storage`
- Routing algorithms: `random`, `lowest_occupancy_first`, `fill_one_after_other`, `lowest_utilization_first`, `matching_supply_demand`, `minimum_power_requirement`
- Storage algorithms: `uncontrolled`, `temporal_arbitrage`, `peak_shaving`
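To give a flavor of what these wrapped algorithms do, an `equal_sharing` charging policy can be sketched in a few lines. This is a simplified stand-in, not the framework's implementation: capacity is split equally among connected vehicles, and capacity freed by vehicles that saturate their own limit is redistributed to the rest.

```python
def equal_sharing(vehicle_limits_kw, available_capacity_kw):
    """Split available capacity equally, capped at each vehicle's own limit.

    Capacity left over by vehicles below the equal share is redistributed
    iteratively to the remaining vehicles.
    """
    allocation = {vid: 0.0 for vid in vehicle_limits_kw}
    remaining = dict(vehicle_limits_kw)
    capacity = available_capacity_kw
    while remaining and capacity > 1e-9:
        share = capacity / len(remaining)
        saturated = {vid: lim for vid, lim in remaining.items() if lim <= share}
        if not saturated:
            # Everyone can absorb the equal share.
            for vid in remaining:
                allocation[vid] += share
            break
        # Give saturated vehicles their maximum, then redistribute the rest.
        for vid, lim in saturated.items():
            allocation[vid] += lim
            capacity -= lim
            del remaining[vid]
    return allocation


# Three vehicles share 20 kW; the 4 kW vehicle caps out, the rest split 16 kW.
print(equal_sharing({"a": 11, "b": 11, "c": 4}, 20))
```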
Pre-built rule-based agents for common strategies:
- Time-of-Use: Peak/off-peak pricing based on time
- Demand-Based: Dynamic pricing based on current demand
- Cost-Plus: Fixed markup over base electricity cost
- First-Come-First-Served: Serve vehicles in arrival order
- Priority-Based: Prioritize vehicles by energy deficit and departure time
- Load Balancing: Distribute power evenly among vehicles
- Peak Shaving: Discharge during high load, charge during low load
- Arbitrage: Charge during low-price hours, discharge during high-price hours
- Grid Support: Support grid frequency stability
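A Time-of-Use pricing rule, for instance, reduces to a simple time check. The sketch below shows the idea; the peak window and the rates are illustrative assumptions, not values from the framework.

```python
def time_of_use_price(hour: int,
                      peak_hours=range(8, 20),
                      peak_rate=0.40,
                      off_peak_rate=0.20) -> float:
    """Return the energy price per kWh for the given hour of day.

    Peak window (08:00-20:00) and both rates are illustrative assumptions.
    """
    return peak_rate if hour in peak_hours else off_peak_rate


print(time_of_use_price(12))  # peak hour -> 0.4
print(time_of_use_price(23))  # off-peak hour -> 0.2
```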
The simulation supports complete decoupling of RL agents through a standardized gym-like interface:
┌─────────────────────────────────────────────────────────────────┐
│ RL Agent Services │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Pricing │ │ Charging │ │ Storage │ │
│ │ Service │ │ Service │ │ Service │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Pricing │ │ Charging │ │ Storage │ │
│ │ Agent │ │ Agent │ │ Agent │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
The framework supports integration with any gym-compatible RL library:
- Stable Baselines3: SAC, PPO, DQN, A2C, TD3
- RLlib: Distributed training, hyperparameter tuning
- Custom Agents: Any agent implementing the gym interface
- Vectorized Environments: Support for parallel training
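The gym-style contract behind this decoupling is essentially `reset()`/`step()`. The toy environment below shows the shape of that interface; the observation contents, action semantics, and reward are placeholders, not the real EVCH environment.

```python
import random


class ToyEVCHEnv:
    """Toy gym-style environment illustrating the reset()/step() contract."""

    def __init__(self, horizon: int = 96, seed: int = 0):
        self.horizon = horizon  # e.g. one day in 15-minute steps
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return self._observation(), {}

    def step(self, action: float):
        # action: a charging power setpoint in [0, 1]; reward penalizes usage.
        reward = -abs(action) * self.rng.uniform(0.8, 1.2)
        self.t += 1
        terminated = self.t >= self.horizon
        return self._observation(), reward, terminated, False, {}

    def _observation(self):
        return {"time_step": self.t, "free_capacity_kw": 300.0}


env = ToyEVCHEnv(horizon=4)
obs, info = env.reset()
done = False
while not done:
    obs, reward, done, truncated, info = env.step(action=0.5)
print(obs["time_step"])  # 4
```

Because any agent only sees `reset()` and `step()`, the same environment can be driven by Stable-Baselines3, RLlib, or a hand-written policy without touching simulation code.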
- Separation of Concerns: RL logic is completely separated from simulation logic
- Standardized Interfaces: All agents conform to gym-compatible interfaces
- Modularity: Each service (pricing, charging, storage) is independent
- Extensibility: Easy to add new RL algorithms or modify existing ones
- Scalability: Support for distributed training and vectorized environments
- Agent-First Design: All decisions go through agents, no hardcoded logic
- Backward Compatibility: Existing algorithms are preserved and wrapped as agents
- Comprehensive Tracking: Every decision is logged and can be monitored
This project uses `uv`, a modern and ultra-fast Python package manager compatible with pip. If you don't have `uv` installed, run:
# Step 1: Install uv
pip install uv
# Step 2: Create a virtual environment
python -m venv .venv
# Step 3: Activate the environment
source .venv/bin/activate # On macOS/Linux
# or
.venv\Scripts\activate # On Windows
# Step 4: Install dependencies
uv pip install -r requirements.uv.txt
The main entry point for the simulation is `main.py`. You can run it with different configuration files:
# Run with default configuration
python main.py resources/configuration/ini_files/app-remote.ini
# Run with custom configuration
python main.py path/to/your/config.ini
The simulation uses INI configuration files to set parameters for:
- Environment settings (seasons, duration, facility size)
- Infrastructure (chargers, grid capacity, storage, PV)
- Agent types and strategies
- Logging and monitoring options
[AGENT_DECISION_SYSTEM]
enabled = True
pricing_agent_type = RULE_BASED
charging_agent_type = HEURISTIC
enable_hyperparameter_tuning = False
[SETTINGS]
log_level = INFO
facility_size = 200
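These INI files can be read with Python's standard `configparser`. The snippet below parses the example section shown above; the section and option names come from that example, everything else is standard library.

```python
import configparser

config_text = """
[AGENT_DECISION_SYSTEM]
enabled = True
pricing_agent_type = RULE_BASED
charging_agent_type = HEURISTIC
enable_hyperparameter_tuning = False

[SETTINGS]
log_level = INFO
facility_size = 200
"""

config = configparser.ConfigParser()
config.read_string(config_text)

# getboolean/getint convert the raw strings to typed values.
enabled = config.getboolean("AGENT_DECISION_SYSTEM", "enabled")
facility_size = config.getint("SETTINGS", "facility_size")
print(enabled, facility_size)  # True 200
```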
- Agent Decision System Guide: Comprehensive guide to the new agent system
- Decision Request System: Details about the underlying decision tracking system
- Algorithm Agents: Documentation of algorithm agents
- Rule-Based Agents: Documentation of rule-based agents
- Agent Decision System Example: Basic usage examples
- Algorithm Agents Example: Using algorithm agents
- Decision Request Example: Decision tracking examples
- Consistency: All decisions follow the same pattern, with standardized interfaces, data structures, error handling and logging
- Flexibility: Easy to add new agent types, simple to switch between strategies, clear separation of concerns
- Testability: Agents can be tested independently, with mock agents for unit testing and easy comparison of strategies
- Observability: Every decision is tracked and logged, with performance metrics for all agents and decision history for analysis
- Extensibility: Support for multiple agent types, easy to implement new strategies, ability to mix different agent types
- Clarity: Clear agent interfaces and well-documented decision types, easy to understand and modify
- Backward Compatibility: Existing algorithms are preserved as agents, no need to rewrite existing code, gradual migration path
To migrate existing code to use the agent decision system:
1. Identify Decision Points: Find all places where decisions are made
2. Create Agents: Implement agents for each decision type
3. Register Agents: Register agents with the system
4. Replace Decision Logic: Replace hardcoded logic with agent calls
5. Test and Monitor: Verify behavior and monitor performance
# Before: Direct algorithm call
first_come_first_served(
    env=env,
    connected_vehicles=vehicles,
    charging_stations=charging_stations,
    charging_capacity=500,
    free_grid_capacity=300,
    planning_period_length=15,
)

# After: Using algorithm agent
charging_agent = AlgorithmChargingAgent(algorithm="first_come_first_served")
context = {
    "env": env,
    "charging_stations": charging_stations,
    "charging_capacity": 500,
    "free_grid_capacity": 300,
    "planning_period_length": 15,
}
decision = charging_agent.select_action(vehicles, context)
- Multi-Agent Coordination: Agents that can coordinate with each other
- Adaptive Agents: Agents that can switch strategies based on performance
- Distributed Agents: Support for distributed agent deployment
- Advanced Analytics: More sophisticated performance analysis
- Agent Marketplace: Repository of pre-built agents for common use cases
- Algorithm Performance Comparison: Tools to compare different algorithms
- Hybrid Agents: Agents that combine multiple strategies
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.