This repository features a machine learning-powered Snake game, seamlessly integrated with a Next.js frontend using Socket.IO for real-time communication. Participants will implement AI agents using Deep Q-Networks (DQN) to train snakes that learn to play autonomously.
- What You'll Build
- Project Architecture
- Why We Need the Backend - The AI Game Engine
- Prerequisites
- Getting Started
- Key Implementation Concepts
- File-by-File Implementation Guide
- Optional Challenges for Advanced Participants
By the end of this bootcamp, you'll have created:
- A real-time multiplayer Snake game with WebSocket communication
- An AI agent (`apps/backend/src/agent.py`) that learns through reinforcement learning
- A neural network (`apps/backend/src/model.py`) trained from scratch using PyTorch
- A responsive web interface (`apps/frontend/app/page.tsx`) for visualizing AI gameplay
- An understanding of modern AI/ML concepts and full-stack development
This project consists of two main components that work together:
- `src/app.py` - WebSocket server with event handlers (`connect`, `start_game`, `update_game`)
- `src/agent.py` - DQN agent class with methods like `get_state()`, `get_action()`, `train_long_memory()`
- `src/model.py` - PyTorch neural network (`LinearQNet`) and training logic (`QTrainer`)
- `src/game.py` - Game controller with `step()`, `reset()`, and state management (working)
- `src/snake.py` - Snake entity with movement and collision detection (working)
- `src/food.py` - Food generation and collision checking (working)
- `app/page.tsx` - Main game canvas with Socket.IO client and drawing functions
- `components/` - Reusable UI components for theming and layout
The backend serves as the intelligent game engine that powers the AI-driven Snake experience. Here's why it's essential:
The backend runs the DQN (Deep Q-Network) agent that makes split-second decisions about where the snake should move. Unlike traditional games where humans control the snake, our AI agent:
- Analyzes game state through the `get_state()` function in `agent.py`
- Chooses optimal actions using neural network predictions (the `get_action()` method)
- Learns from experience through reinforcement learning (`train_long_memory()`)
The backend maintains the single source of truth for the game state:
- Game physics are computed server-side in `game.py` (`step()`, collision detection)
- Score tracking and game progression are managed centrally
- Multiple clients can connect and watch the same AI agent play
- Prevents cheating since game logic isn't exposed to frontend
The backend implements a complete AI training system:
- Experience replay memory system stores past game states for learning
- Neural network training happens in real-time as the snake plays
- Epsilon-greedy exploration balances trying new moves vs. using learned knowledge
- Reward calculation teaches the AI what constitutes good/bad gameplay
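The replay memory and epsilon-greedy ideas above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual agent code: the `memory`, `MAX_MEMORY`, `BATCH_SIZE`, and `max_exploration_games` names and values are assumptions to tune.

```python
import random
from collections import deque

MAX_MEMORY = 100_000   # cap on stored experiences
BATCH_SIZE = 1_000     # how many experiences to replay per training pass

memory = deque(maxlen=MAX_MEMORY)  # oldest experiences fall off automatically

def remember(state, action, reward, next_state, done):
    """Store one transition for later replay."""
    memory.append((state, action, reward, next_state, done))

def sample_batch():
    """Draw a random mini-batch; random sampling breaks the correlation
    between consecutive frames, which stabilizes training."""
    if len(memory) < BATCH_SIZE:
        return list(memory)
    return random.sample(memory, BATCH_SIZE)

def epsilon_greedy(q_values, n_games, max_exploration_games=80):
    """Early on, mostly explore; as n_games grows, mostly exploit."""
    epsilon = max(0, max_exploration_games - n_games)  # decays to 0
    if random.randint(0, 200) < epsilon:
        return random.randint(0, 2)        # random move: straight/right/left
    return q_values.index(max(q_values))   # greedy move
```

The linear epsilon decay here is just one option; an exponential decay works too, as long as exploration fades once the agent has seen enough games.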
The backend broadcasts real-time updates to connected frontends:
- Game state streaming via the `update_game()` function in `app.py`
- Multiple viewers can watch the AI learn simultaneously
- Low-latency updates for smooth gameplay visualization
- Event-driven architecture with handlers for connection, game start, etc.
Without the backend: You'd have just a static frontend with no AI, no learning, and no real-time gameplay. The backend is where the magic happens!
Before you begin, ensure you have the following installed:
- Node.js (v18 or higher): Download Node.js
- npm (comes with Node.js)
- Python (version 3.9): Download Python
You can verify installations with:

```bash
node -v
npm -v
python3 --version
```

Follow these steps to run the project locally on macOS, Windows, or Linux.
```bash
cd apps/frontend
npm install
npm run dev
```

- macOS/Linux:

  ```bash
  cd apps/backend
  python3 -m venv .venv
  source .venv/bin/activate
  ```

- Windows (Command Prompt):

  ```bat
  cd apps\backend
  python -m venv .venv
  .venv\Scripts\activate
  ```

- Windows (PowerShell):

  ```powershell
  cd apps\backend
  python -m venv .venv
  .venv\Scripts\Activate.ps1
  ```
```bash
pip install -r requirements.txt
python src/app.py
```

Once both servers are running, open http://localhost:3000 in your browser.
Here are the essential concepts and minimal starter code to guide your implementation:
```python
# Essential imports you'll need
import asyncio

import socketio
from aiohttp import web

from game import Game
from agent import DQN

# Basic server setup
sio = socketio.AsyncServer(cors_allowed_origins="*")
app = web.Application()
sio.attach(app)

@sio.event
async def connect(sid, environ):
    print(f"Client {sid} connected")
    # TODO: Initialize game and agent for this client

@sio.event
async def start_game(sid, data):
    # TODO: Create Game() and DQN() instances
    # TODO: Save to session and start game loop
    pass
```

Your AI needs to "see" the game world as numbers. Design a `get_state()` function that converts the visual game into numerical features. Consider including:
- Danger detection: Is there danger in different directions relative to the snake's current heading?
- Food direction: Where is the food located relative to the snake head?
- Distance information: How far is the food from the snake?
- Current direction: Which way is the snake currently moving?
- Snake body information: What about the snake's own body positioning?
Your challenge: Decide how many features you need and how to extract them from the game object!
Build a PyTorch neural network that:
- Takes your chosen number of input features (your state representation)
- Has one or more hidden layers (experiment with different sizes)
- Outputs 3 Q-values (for straight, right, left actions)
- Uses appropriate activation functions and loss functions
Key concepts to research: Q-learning, neural network forward pass, PyTorch basics
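A minimal PyTorch sketch of the kind of network described above. The layer sizes are assumptions to experiment with, and the input size must match your state vector (adjust 11 to whatever feature count you chose):

```python
import torch
import torch.nn as nn

class LinearQNet(nn.Module):
    """Small feed-forward net: state features in, one Q-value per action out."""

    def __init__(self, input_size=11, hidden_size=256, output_size=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),                            # non-linearity between layers
            nn.Linear(hidden_size, output_size),  # raw Q-values, no activation
        )

    def forward(self, x):
        return self.net(x)

# Example inference: pick the action with the highest predicted Q-value
model = LinearQNet()
state = torch.zeros(11)              # a dummy all-zero state vector
q_values = model(state)              # shape (3,): straight, right, left
action = torch.argmax(q_values).item()
```

The output layer has no activation because Q-values are unbounded estimates of future reward, not probabilities; a mean-squared-error loss against the Bellman target is the usual pairing.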
Your React component should:
- Connect to the WebSocket server at `localhost:8765`
- Listen for game state updates and render them on an HTML5 canvas
- Handle connection events and game initialization
- Draw the snake, food, and game grid in real-time
Your challenge: Figure out the Socket.IO client setup and canvas drawing logic!
Understanding what each file does and what you need to implement:
What it does: Main server that handles client connections and runs the game loop
Your tasks:
- Complete the `connect()` event handler to initialize client sessions
- Implement `start_game()` to create Game and DQN agent instances
- Build the `update_game()` loop for real-time AI gameplay
- Add `disconnect()` cleanup for when clients leave
Learning focus: Real-time communication, session management, async programming
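The shape of the `update_game()` loop can be sketched like this. The stub `Game`/`Agent` classes and the injected `emit` callable are stand-ins so the sketch runs on its own; in your server you'd use the real classes and `sio.emit`:

```python
import asyncio

TICK_SECONDS = 0.1  # ~10 frames per second; tune for your game speed

async def update_game(sid, game, agent, emit):
    """Drive one client's game: sense -> act -> step -> learn -> broadcast."""
    while not game.done:
        state_old = agent.get_state(game)
        action = agent.get_action(state_old)
        game.step(action)                        # advance the simulation
        state_new = agent.get_state(game)
        reward = agent.calculate_reward(game)
        agent.remember(state_old, action, reward, state_new, game.done)
        await emit("game_state", game.send(), to=sid)  # push frame to client
        await asyncio.sleep(TICK_SECONDS)        # yield so other clients run

# --- minimal stand-ins so the sketch runs without the real classes ---
class StubGame:
    def __init__(self): self.frames, self.done = 0, False
    def step(self, action):
        self.frames += 1
        self.done = self.frames >= 3             # end after three frames
    def send(self): return {"frame": self.frames}

class StubAgent:
    def get_state(self, game): return [game.frames]
    def get_action(self, state): return 0
    def calculate_reward(self, game): return 0
    def remember(self, *transition): pass

sent = []
async def fake_emit(event, data, to=None): sent.append(data)

asyncio.run(update_game("sid-1", StubGame(), StubAgent(), fake_emit))
```

The `await asyncio.sleep(...)` is essential: without it the loop never yields control, and the server can't serve other connected clients.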
What it does: The DQN agent that learns to play Snake through reinforcement learning
Your tasks:
- Design `get_state()` to convert game data into 13 neural network features
- Implement `get_action()` with an epsilon-greedy exploration strategy
- Build `calculate_reward()` to teach the AI good vs. bad moves
- Add `remember()` and training functions for experience replay
Learning focus: State representation, reward engineering, reinforcement learning concepts
What it does: PyTorch neural network that predicts Q-values for each action
Your tasks:
- Build the `LinearQNet` class with a proper layer architecture
- Implement the `forward()` method for neural network inference
- Complete `QTrainer` with loss calculation and backpropagation
- Add model saving/loading for persistent learning
Learning focus: Neural network architecture, PyTorch basics, gradient descent
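Before wiring up the PyTorch trainer, it helps to see the Q-learning target your `QTrainer` will regress toward, here in plain Python for a single transition: `Q_new = r + gamma * max(Q(s'))`, with just `r` for terminal states. The `GAMMA` value is a typical choice, not a prescription:

```python
GAMMA = 0.9  # discount factor: how much future reward matters

def q_target(reward, next_q_values, done):
    """Bellman target for one transition; QTrainer's loss is the
    mean-squared error between this target and the network's prediction."""
    if done:
        return reward                        # no future from a terminal state
    return reward + GAMMA * max(next_q_values)

# Example: snake ate food (r=10) and the best next-state Q-value is 5.0
target = q_target(10, [1.0, 5.0, -2.0], done=False)  # 10 + 0.9 * 5.0 = 14.5
```

In the trainer, you compute this target for every transition in the batch, overwrite the predicted Q-value of the action actually taken, and backpropagate the loss between the two tensors.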
What it does: React component that displays the AI playing Snake in real-time
Your tasks:
- Set up Socket.IO connection to backend server
- Implement canvas drawing functions for snake, food, and grid
- Add state management for game data (snake position, score, etc.)
- Create responsive design for different screen sizes
Learning focus: WebSocket clients, HTML5 canvas, React state management
What it does: Manages core game mechanics and state
Key functions you can use:
- `game.step()` - Advance the game by one frame
- `game.reset()` - Start a new game round
- `game.send()` - Get the current state for broadcasting
- `game.queue_change()` - Handle direction changes
What it does: Handles snake movement, growth, and collision detection
Key properties you can access:
- `snake.head` - Current head position
- `snake.body` - List of all body segments
- `snake.direction` - Current movement direction
- `snake.grow` - Whether the snake is growing this frame
What it does: Spawns food in random locations and detects when eaten
Key properties you can access:
- `food.position` - Current food coordinates
- `food.check_eaten()` - Test whether the snake ate food this frame
Once you've completed the basic implementation, try these advanced challenges to level up your skills:
Goal: Experiment with different AI training strategies to find the optimal snake
What to implement:
- Create multiple DQN agents with different hyperparameters
- Compare performance across different configurations
- Track training metrics and analyze results
Learning outcomes: Hyperparameter tuning, experimental design, data analysis
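A tiny way to start tracking metrics per configuration, assuming you log one score per finished game; the class and config-name format are illustrative, not part of the repo:

```python
from collections import defaultdict

class TrainingLog:
    """Record per-game scores for each hyperparameter configuration."""

    def __init__(self):
        self.scores = defaultdict(list)

    def record(self, config_name, score):
        self.scores[config_name].append(score)

    def moving_average(self, config_name, window=100):
        """Mean of the most recent `window` scores (0.0 if none yet)."""
        recent = self.scores[config_name][-window:]
        return sum(recent) / len(recent) if recent else 0.0

log = TrainingLog()
for score in [1, 2, 6]:
    log.record("lr=0.001,hidden=256", score)
print(log.moving_average("lr=0.001,hidden=256"))  # prints 3.0
```

A moving average is the standard way to compare noisy RL runs; single-game scores vary too much to rank configurations directly.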
Goal: Create multiple AI snakes competing in the same environment
What to implement:
- Modify game logic to support multiple snakes
- Design competitive reward systems
- Implement tournament-style training
Learning outcomes: Multi-agent systems, competitive AI, game theory
Goal: Deploy your AI Snake game to the cloud for others to watch and interact with
What to implement:
- Containerize with Docker and deploy to cloud platforms
- Add authentication and leaderboards
- Implement production monitoring and logging
Learning outcomes: DevOps, cloud deployment, production systems, scalability
For the truly ambitious:
- Advanced AI: Implement Double DQN, Dueling DQN, or Rainbow DQN
- Computer Vision: Train an AI that plays by analyzing screen pixels
- Evolutionary Algorithms: Breed the best AI snakes using genetic algorithms
- Real-time Analytics: Create ML dashboard showing training metrics
- Documentation: Each challenge includes starter guidance and architecture suggestions
- Club Support: Advanced challenges are perfect for pair programming sessions
- Showcase: Present your completed challenges to the CSAI community
- Open Source: Contribute your solutions back to help future participants
Remember: These challenges are designed to be portfolio-worthy projects that demonstrate advanced AI/ML and full-stack development skills to potential employers!