Nadeerpk/zephyr

zephyr πŸš€

A high-performance, Redis-compatible distributed cache system built in Go. zephyr provides horizontal scalability, high availability, and advanced caching features for modern applications.


✨ Features

Core Functionality

  • Redis-Compatible Protocol: Drop-in replacement for Redis clients
  • High Performance: Sub-millisecond response times with concurrent request handling
  • Memory Efficient: Advanced eviction policies (LRU, LFU, TTL-based)
  • Thread-Safe: Optimized concurrent access with minimal lock contention
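
The LRU policy listed above can be illustrated with a minimal sketch built on Go's container/list. This is a toy for intuition only, not zephyr's actual cache engine (which also implements LFU and TTL-based eviction and concurrent access); the type and function names here are invented for the example.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache is a toy LRU cache: a map for O(1) lookup plus a doubly
// linked list ordering entries from most- to least-recently used.
type lruCache struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> list element
}

type entry struct {
	key, val string
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // touch: mark as most recently used
	return el.Value.(*entry).val, true
}

func (c *lruCache) Put(key, val string) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry{key: key, val: val})
	if c.order.Len() > c.cap {
		oldest := c.order.Back() // evict the least recently used entry
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	c := newLRU(2)
	c.Put("a", "1")
	c.Put("b", "2")
	c.Get("a")      // touch "a" so "b" becomes the eviction candidate
	c.Put("c", "3") // capacity exceeded: evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false: "b" was evicted
}
```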

Distributed Systems

  • Consistent Hashing: Automatic data distribution across cluster nodes
  • High Availability: Master-slave replication with automatic failover
  • Horizontal Scaling: Add/remove nodes without downtime
  • Data Partitioning: Intelligent key distribution for optimal performance

Advanced Features

  • Pub/Sub Messaging: Real-time message broadcasting
  • Transactions: ACID transaction support with MULTI/EXEC
  • Lua Scripting: Server-side script execution
  • Persistence: WAL and snapshot-based durability
  • Monitoring: Built-in metrics and health checks

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   zephyr Node   β”‚    β”‚   zephyr Node   β”‚    β”‚   zephyr Node   β”‚
β”‚                 β”‚    β”‚                 β”‚    β”‚                 β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚    β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚    β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚   Cache   β”‚  │◄──►│  β”‚   Cache   β”‚  │◄──►│  β”‚   Cache   β”‚  β”‚
β”‚  β”‚  Engine   β”‚  β”‚    β”‚  β”‚  Engine   β”‚  β”‚    β”‚  β”‚  Engine   β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚    β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚    β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚                 β”‚    β”‚                 β”‚    β”‚                 β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚    β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚    β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚    WAL    β”‚  β”‚    β”‚  β”‚    WAL    β”‚  β”‚    β”‚  β”‚    WAL    β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚    β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚    β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚                       β”‚                       β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                 β”‚
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚  Consistent     β”‚
                    β”‚  Hash Ring      β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
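
The consistent hash ring at the center of the diagram can be sketched as follows. Each node is hashed onto the ring at several "virtual node" points so keys spread evenly, and a key is owned by the first point clockwise from its hash; this is a simplified illustration with invented names, not zephyr's internal/consistent package.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// ring is a toy consistent hash ring with virtual nodes. Removing a
// node only remaps the keys that hashed to that node's points, which
// is what allows adding/removing nodes without reshuffling all data.
type ring struct {
	vnodes int
	points []uint32          // sorted hash points on the ring
	owner  map[uint32]string // hash point -> node name
}

func newRing(vnodes int) *ring {
	return &ring{vnodes: vnodes, owner: map[uint32]string{}}
}

func (r *ring) Add(node string) {
	for i := 0; i < r.vnodes; i++ {
		h := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", node, i)))
		r.owner[h] = node
		r.points = append(r.points, h)
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
}

// Lookup walks clockwise to the first point at or after the key's hash.
func (r *ring) Lookup(key string) string {
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.points[i]]
}

func main() {
	r := newRing(100)
	r.Add("node1")
	r.Add("node2")
	r.Add("node3")
	// The same key always maps to the same node.
	fmt.Println(r.Lookup("mykey") == r.Lookup("mykey")) // true
}
```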

πŸš€ Quick Start

Installation

# Clone the repository
git clone https://github.com/yourname/zephyr.git
cd zephyr

# Build the binary
go build -o zephyr cmd/server/main.go

# Or install directly
go install github.com/yourname/zephyr/cmd/server@latest

Docker

# Run single node
docker run -p 6379:6379 zephyr/zephyr:latest

# Run with custom configuration
docker run -p 6379:6379 -v $(pwd)/config.yaml:/etc/zephyr/config.yaml zephyr/zephyr:latest

Basic Usage

# Start a single node
./zephyr --port 6379

# Start a cluster node
./zephyr --port 6379 --cluster --node-id node1 --peers node2:6380,node3:6381

Client Examples

Go Client:

import (
    "context"
    "time"

    "github.com/go-redis/redis/v8"
)

ctx := context.Background()

client := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

// Set a key with a one-minute TTL
client.Set(ctx, "mykey", "myvalue", time.Minute)

// Get the key back
val, err := client.Get(ctx, "mykey").Result()

Python Client:

import redis

r = redis.Redis(host='localhost', port=6379)
r.set('mykey', 'myvalue', ex=60)  # 60 second TTL
value = r.get('mykey')

Node.js Client:

const redis = require('redis');

const client = redis.createClient({ url: 'redis://localhost:6379' });
await client.connect();

await client.setEx('mykey', 60, 'myvalue');  // 60 second TTL
const value = await client.get('mykey');

βš™οΈ Configuration

config.yaml

server:
  port: 6379
  bind: "0.0.0.0"
  max_connections: 10000
  tcp_keepalive: true
  timeout: 30s

cluster:
  enabled: true
  node_id: "node1"
  peers:
    - "node2:6380"
    - "node3:6381"
  replication_factor: 2

cache:
  max_memory: "2GB"
  eviction_policy: "allkeys-lru"  # noeviction, allkeys-lru, allkeys-lfu, volatile-lru, volatile-lfu, volatile-ttl
  default_ttl: "1h"

persistence:
  enable_wal: true
  wal_dir: "./data/wal"
  snapshot_enabled: true
  snapshot_interval: "5m"
  snapshot_dir: "./data/snapshots"

monitoring:
  enable_metrics: true
  metrics_port: 8080
  health_check_interval: "10s"
  
logging:
  level: "info"  # debug, info, warn, error
  format: "json"
  file: "./logs/zephyr.log"

Environment Variables

zephyr_PORT=6379
zephyr_MAX_MEMORY=2GB
zephyr_CLUSTER_ENABLED=true
zephyr_NODE_ID=node1
zephyr_PEERS=node2:6380,node3:6381

πŸ› οΈ Supported Commands

String Operations

  • GET key - Get value
  • SET key value [EX seconds] - Set value with optional TTL
  • DEL key [key ...] - Delete keys
  • EXISTS key [key ...] - Check if keys exist
  • EXPIRE key seconds - Set TTL
  • TTL key - Get TTL
  • INCR key - Increment integer value
  • DECR key - Decrement integer value

Hash Operations

  • HSET key field value - Set hash field
  • HGET key field - Get hash field
  • HDEL key field [field ...] - Delete hash fields
  • HGETALL key - Get all hash fields

List Operations

  • LPUSH key value [value ...] - Push to head
  • RPUSH key value [value ...] - Push to tail
  • LPOP key - Pop from head
  • RPOP key - Pop from tail
  • LLEN key - Get list length

Set Operations

  • SADD key member [member ...] - Add to set
  • SREM key member [member ...] - Remove from set
  • SMEMBERS key - Get all members
  • SISMEMBER key member - Check membership

Pub/Sub

  • PUBLISH channel message - Publish message
  • SUBSCRIBE channel [channel ...] - Subscribe to channels
  • UNSUBSCRIBE [channel ...] - Unsubscribe
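
The semantics behind these commands are fan-out delivery: PUBLISH sends a message to every current subscriber of a channel and returns the number of receivers. The in-process sketch below illustrates that pattern with Go channels; it is a teaching toy with invented names, not zephyr's pub/sub implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// broker fans messages published on a channel name out to every subscriber.
type broker struct {
	mu   sync.Mutex
	subs map[string][]chan string // channel name -> subscriber queues
}

func newBroker() *broker {
	return &broker{subs: map[string][]chan string{}}
}

// Subscribe registers a new subscriber and returns its message stream.
func (b *broker) Subscribe(channel string) <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 16) // buffered so slow readers don't block Publish
	b.subs[channel] = append(b.subs[channel], ch)
	return ch
}

// Publish delivers msg to every current subscriber and returns the
// receiver count, mirroring PUBLISH's integer reply.
func (b *broker) Publish(channel, msg string) int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[channel] {
		ch <- msg
	}
	return len(b.subs[channel])
}

func main() {
	b := newBroker()
	sub := b.Subscribe("news")
	n := b.Publish("news", "hello")
	fmt.Println(n, <-sub) // 1 hello
}
```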

Transactions

  • MULTI - Start transaction
  • EXEC - Execute transaction
  • DISCARD - Discard transaction
  • WATCH key [key ...] - Watch keys for changes
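
Conceptually, MULTI/EXEC works by queuing commands instead of running them, then applying the whole queue in one atomic step. The server-side sketch below shows that queue-then-apply shape with a single mutex; it is a simplified illustration with invented names, and omits WATCH's optimistic-locking check and error handling.

```go
package main

import (
	"fmt"
	"sync"
)

// store is a tiny keyspace guarded by one mutex.
type store struct {
	mu   sync.Mutex
	data map[string]string
}

type command struct {
	name, key, val string
}

// tx queues commands between MULTI and EXEC instead of running them.
type tx struct {
	queue []command
}

func (t *tx) Queue(name, key, val string) {
	t.queue = append(t.queue, command{name, key, val})
}

// Exec applies every queued command under one lock acquisition, so no
// other client observes a partially applied transaction.
func (t *tx) Exec(s *store) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, c := range t.queue {
		switch c.name {
		case "SET":
			s.data[c.key] = c.val
		case "DEL":
			delete(s.data, c.key)
		}
	}
	t.queue = nil // the queue is consumed, as after EXEC or DISCARD
}

func main() {
	s := &store{data: map[string]string{}}
	txn := &tx{}               // MULTI
	txn.Queue("SET", "a", "1") // queued, not yet applied
	txn.Queue("SET", "b", "2")
	txn.Exec(s) // EXEC applies both atomically
	fmt.Println(s.data["a"], s.data["b"]) // 1 2
}
```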

Server Commands

  • PING [message] - Ping server
  • INFO [section] - Get server info
  • FLUSHDB - Clear current database
  • FLUSHALL - Clear all databases

πŸ“Š Performance

Benchmarks

Single Node Performance (Intel i7-10700K, 32GB RAM):

SET: 180,000 ops/sec
GET: 220,000 ops/sec
Mixed (50/50): 200,000 ops/sec

Cluster Performance (3 nodes):

SET: 450,000 ops/sec
GET: 580,000 ops/sec
Mixed (50/50): 520,000 ops/sec

Memory Usage:

  • ~40 bytes overhead per key-value pair
  • Efficient memory pooling reduces GC pressure
  • Configurable eviction policies prevent OOM

Scaling Characteristics

Nodes   Throughput    Latency (p99)
1       200K ops/s    0.8ms
3       520K ops/s    1.2ms
5       780K ops/s    1.8ms
10      1.2M ops/s    2.5ms

πŸ§ͺ Testing

Running Tests

# Unit tests
go test ./...

# Integration tests
go test -tags=integration ./...

# Benchmark tests
go test -bench=. ./...

# Coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out

Load Testing

# Install redis-benchmark
apt-get install redis-tools

# Basic load test
redis-benchmark -h localhost -p 6379 -c 100 -n 100000

# SET/GET workload test (redis-benchmark runs each selected test in turn)
redis-benchmark -h localhost -p 6379 -c 100 -n 100000 -t set,get

πŸ”§ Development

Project Structure

.
β”œβ”€β”€ cmd/
β”‚   └── server/          # Server entry point
β”œβ”€β”€ internal/
β”‚   β”œβ”€β”€ cache/           # Core cache engine
β”‚   β”œβ”€β”€ cluster/         # Clustering logic
β”‚   β”œβ”€β”€ config/          # Configuration management
β”‚   β”œβ”€β”€ consistent/      # Consistent hashing
β”‚   β”œβ”€β”€ persistence/     # WAL and snapshots
β”‚   β”œβ”€β”€ protocol/        # RESP protocol
β”‚   β”œβ”€β”€ pubsub/          # Pub/Sub system
β”‚   β”œβ”€β”€ replication/     # Replication logic
β”‚   └── server/          # HTTP/TCP servers
β”œβ”€β”€ pkg/
β”‚   └── client/          # Go client library
β”œβ”€β”€ deployments/
β”‚   β”œβ”€β”€ docker/          # Docker files
β”‚   └── k8s/             # Kubernetes manifests
β”œβ”€β”€ docs/                # Documentation
β”œβ”€β”€ scripts/             # Build and deployment scripts
└── tests/               # Integration tests

Building from Source

# Development build
make build

# Production build with optimizations
make build-prod

# Cross-compile for multiple platforms
make build-all

# Run linting
make lint

# Run all tests
make test

# Generate documentation
make docs

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Add tests for new functionality
  5. Run the test suite (make test)
  6. Commit your changes (git commit -m 'Add amazing feature')
  7. Push to the branch (git push origin feature/amazing-feature)
  8. Open a Pull Request

Please read CONTRIBUTING.md for detailed guidelines.

πŸ“ˆ Monitoring

Metrics Endpoint

zephyr exposes Prometheus-compatible metrics at /metrics:

curl http://localhost:8080/metrics

Key Metrics:

  • zephyr_ops_total - Total operations by type
  • zephyr_ops_duration_seconds - Operation latency histograms
  • zephyr_memory_usage_bytes - Memory usage by component
  • zephyr_connections_active - Active client connections
  • zephyr_cluster_nodes_up - Number of healthy cluster nodes
  • zephyr_replication_lag_seconds - Replication lag
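
To collect these metrics, point a Prometheus server at each node's metrics port (8080 in the configuration above). The fragment below is a minimal, assumed scrape config; the job name and target list are placeholders to adapt to your deployment.

```yaml
# prometheus.yml fragment: scrape each node's /metrics endpoint.
scrape_configs:
  - job_name: "zephyr"        # job name is arbitrary
    scrape_interval: 15s
    static_configs:
      - targets:              # one entry per node's metrics_port
          - "localhost:8080"
```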

Health Checks

# Basic health check
curl http://localhost:8080/health

# Detailed status
curl http://localhost:8080/status

Grafana Dashboard

Import the provided Grafana dashboard for comprehensive monitoring.

🐳 Deployment

Docker Compose

version: '3.8'
services:
  zephyr-1:
    image: zephyr/zephyr:latest
    ports:
      - "6379:6379"
      - "8080:8080"
    environment:
      - zephyr_NODE_ID=node1
      - zephyr_CLUSTER_ENABLED=true
      - zephyr_PEERS=zephyr-2:6379,zephyr-3:6379
    volumes:
      - ./data/node1:/data

  zephyr-2:
    image: zephyr/zephyr:latest
    ports:
      - "6380:6379"
      - "8081:8080"
    environment:
      - zephyr_NODE_ID=node2
      - zephyr_CLUSTER_ENABLED=true
      - zephyr_PEERS=zephyr-1:6379,zephyr-3:6379
    volumes:
      - ./data/node2:/data

  zephyr-3:
    image: zephyr/zephyr:latest
    ports:
      - "6381:6379"
      - "8082:8080"
    environment:
      - zephyr_NODE_ID=node3
      - zephyr_CLUSTER_ENABLED=true
      - zephyr_PEERS=zephyr-1:6379,zephyr-2:6379
    volumes:
      - ./data/node3:/data

Kubernetes

# Deploy to Kubernetes
kubectl apply -f deployments/k8s/

# Scale the cluster
kubectl scale statefulset zephyr --replicas=5

# Check status
kubectl get pods -l app=zephyr

Helm Chart

# Add repository
helm repo add zephyr https://charts.zephyr.io

# Install
helm install my-cache zephyr/zephyr \
  --set cluster.enabled=true \
  --set cluster.replicas=3 \
  --set resources.memory=2Gi

πŸ”’ Security

  • TLS Encryption: Enable TLS for client connections and inter-node communication
  • Authentication: Support for password-based and certificate-based auth
  • Network Security: Configurable bind addresses and firewall rules
  • Data Encryption: Optional encryption at rest for persistent data
Example configuration:

security:
  tls:
    enabled: true
    cert_file: "/etc/ssl/certs/zephyr.crt"
    key_file: "/etc/ssl/private/zephyr.key"
  auth:
    enabled: true
    password: "your-secure-password"
  encryption:
    at_rest: true
    algorithm: "AES-256-GCM"

πŸ› Troubleshooting

Common Issues

High Memory Usage:

  • Check eviction policy configuration
  • Monitor key distribution across nodes
  • Verify TTL settings are appropriate

Connection Issues:

  • Verify firewall rules allow traffic on configured ports
  • Check cluster peer connectivity
  • Review network latency between nodes

Performance Degradation:

  • Monitor GC pressure with GOGC tuning
  • Check disk I/O for persistence operations
  • Review client connection pooling

Debug Mode

# Enable debug logging
./zephyr --log-level debug

# Enable profiling
./zephyr --pprof --pprof-port 6060

# View profiles
go tool pprof http://localhost:6060/debug/pprof/profile

πŸ“š Documentation

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

🀝 Community

πŸ™ Acknowledgments

  • Redis team for the excellent protocol design
  • Go community for amazing concurrent programming primitives
  • All contributors who helped make this project better

zephyr - Built with ❀️ in Go
