A high-performance, Redis-compatible distributed cache system built in Go. zephyr provides horizontal scalability, high availability, and advanced caching features for modern applications.
- Redis-Compatible Protocol: Drop-in replacement for Redis clients
- High Performance: Sub-millisecond response times with concurrent request handling
- Memory Efficient: Advanced eviction policies (LRU, LFU, TTL-based)
- Thread-Safe: Optimized concurrent access with minimal lock contention
- Consistent Hashing: Automatic data distribution across cluster nodes
- High Availability: Master-slave replication with automatic failover
- Horizontal Scaling: Add/remove nodes without downtime
- Data Partitioning: Intelligent key distribution for optimal performance
- Pub/Sub Messaging: Real-time message broadcasting
- Transactions: ACID transaction support with MULTI/EXEC
- Lua Scripting: Server-side script execution
- Persistence: WAL and snapshot-based durability
- Monitoring: Built-in metrics and health checks
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   zephyr Node   │    │   zephyr Node   │    │   zephyr Node   │
│                 │    │                 │    │                 │
│  ┌───────────┐  │    │  ┌───────────┐  │    │  ┌───────────┐  │
│  │   Cache   │  │◄──►│  │   Cache   │  │◄──►│  │   Cache   │  │
│  │  Engine   │  │    │  │  Engine   │  │    │  │  Engine   │  │
│  └───────────┘  │    │  └───────────┘  │    │  └───────────┘  │
│                 │    │                 │    │                 │
│  ┌───────────┐  │    │  ┌───────────┐  │    │  ┌───────────┐  │
│  │    WAL    │  │    │  │    WAL    │  │    │  │    WAL    │  │
│  └───────────┘  │    │  └───────────┘  │    │  └───────────┘  │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
                       ┌─────────────────┐
                       │   Consistent    │
                       │   Hash Ring     │
                       └─────────────────┘
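To make the hash ring concrete, here is a minimal sketch of consistent hashing in Go. The node names, the FNV-1a hash, and the single virtual point per node are illustrative choices for this sketch, not zephyr's actual implementation (which would use many virtual replicas per node for smoother distribution):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring maps keys to nodes by placing both on a circular hash space.
type Ring struct {
	hashes []uint32          // sorted positions on the ring
	nodes  map[uint32]string // position -> node name
}

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(nodes ...string) *Ring {
	r := &Ring{nodes: make(map[uint32]string)}
	for _, n := range nodes {
		// One point per node here; production rings add many
		// virtual replicas per node to even out key distribution.
		h := hashKey(n)
		r.hashes = append(r.hashes, h)
		r.nodes[h] = n
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Get returns the node owning a key: the first ring position at or
// after the key's hash, wrapping around to the start of the ring.
func (r *Ring) Get(key string) string {
	h := hashKey(key)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	ring := NewRing("node1:6379", "node2:6380", "node3:6381")
	fmt.Println("mykey ->", ring.Get("mykey"))
}
```

The payoff of this scheme is that adding or removing a node only remaps the keys between that node and its ring neighbor, which is what makes zero-downtime scaling possible.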
# Clone the repository
git clone https://github.com/yourname/zephyr.git
cd zephyr
# Build the binary
go build -o zephyr cmd/server/main.go
# Or install directly
go install github.com/yourname/zephyr/cmd/server@latest
# Run single node
docker run -p 6379:6379 zephyr/zephyr:latest
# Run with custom configuration
docker run -p 6379:6379 -v $(pwd)/config.yaml:/etc/zephyr/config.yaml zephyr/zephyr:latest
# Start a single node
./zephyr --port 6379
# Start a cluster node
./zephyr --port 6379 --cluster --node-id node1 --peers node2:6380,node3:6381
Go Client:
import (
    "context"
    "time"

    "github.com/go-redis/redis/v8"
)

ctx := context.Background()

client := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

// Set a key with a one-minute TTL
client.Set(ctx, "mykey", "myvalue", time.Minute)

// Get a key
val, err := client.Get(ctx, "mykey").Result()
Python Client:
import redis
r = redis.Redis(host='localhost', port=6379)
r.set('mykey', 'myvalue', ex=60) # 60 second TTL
value = r.get('mykey')
Node.js Client:
const redis = require('redis');

const client = redis.createClient({ url: 'redis://localhost:6379' });
await client.connect();

await client.setEx('mykey', 60, 'myvalue'); // 60 second TTL
const value = await client.get('mykey');
server:
port: 6379
bind: "0.0.0.0"
max_connections: 10000
tcp_keepalive: true
timeout: 30s
cluster:
enabled: true
node_id: "node1"
peers:
- "node2:6380"
- "node3:6381"
replication_factor: 2
cache:
max_memory: "2GB"
eviction_policy: "allkeys-lru" # noeviction, allkeys-lru, allkeys-lfu, volatile-lru, volatile-lfu, volatile-ttl
default_ttl: "1h"
persistence:
enable_wal: true
wal_dir: "./data/wal"
snapshot_enabled: true
snapshot_interval: "5m"
snapshot_dir: "./data/snapshots"
monitoring:
enable_metrics: true
metrics_port: 8080
health_check_interval: "10s"
logging:
level: "info" # debug, info, warn, error
format: "json"
file: "./logs/zephyr.log"
zephyr_PORT=6379
zephyr_MAX_MEMORY=2GB
zephyr_CLUSTER_ENABLED=true
zephyr_NODE_ID=node1
zephyr_PEERS=node2:6380,node3:6381
GET key - Get value
SET key value [EX seconds] - Set value with optional TTL
DEL key [key ...] - Delete keys
EXISTS key [key ...] - Check if keys exist
EXPIRE key seconds - Set TTL
TTL key - Get TTL
INCR key - Increment integer value
DECR key - Decrement integer value

HSET key field value - Set hash field
HGET key field - Get hash field
HDEL key field [field ...] - Delete hash fields
HGETALL key - Get all hash fields

LPUSH key value [value ...] - Push to head
RPUSH key value [value ...] - Push to tail
LPOP key - Pop from head
RPOP key - Pop from tail
LLEN key - Get list length

SADD key member [member ...] - Add to set
SREM key member [member ...] - Remove from set
SMEMBERS key - Get all members
SISMEMBER key member - Check membership

PUBLISH channel message - Publish message
SUBSCRIBE channel [channel ...] - Subscribe to channels
UNSUBSCRIBE [channel ...] - Unsubscribe

MULTI - Start transaction
EXEC - Execute transaction
DISCARD - Discard transaction
WATCH key [key ...] - Watch keys for changes

PING [message] - Ping server
INFO [section] - Get server info
FLUSHDB - Clear current database
FLUSHALL - Clear all databases
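On the wire, every command above is framed in RESP (the Redis serialization protocol): an array header `*<argc>\r\n` followed by one bulk string `$<len>\r\n<arg>\r\n` per argument. A small encoder sketch showing the framing any Redis-compatible client emits:

```go
package main

import (
	"fmt"
	"strings"
)

// encodeRESP frames a command and its arguments as a RESP array of
// bulk strings, exactly as a client sends it over the TCP connection.
func encodeRESP(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// SET mykey myvalue -> *3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$7\r\nmyvalue\r\n
	fmt.Printf("%q\n", encodeRESP("SET", "mykey", "myvalue"))
}
```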
Single Node Performance (Intel i7-10700K, 32GB RAM):
SET: 180,000 ops/sec
GET: 220,000 ops/sec
Mixed (50/50): 200,000 ops/sec
Cluster Performance (3 nodes):
SET: 450,000 ops/sec
GET: 580,000 ops/sec
Mixed (50/50): 520,000 ops/sec
Memory Usage:
- ~40 bytes overhead per key-value pair
- Efficient memory pooling reduces GC pressure
- Configurable eviction policies prevent OOM
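For intuition on the `allkeys-lru` policy, here is a minimal LRU cache sketch in Go: a map for O(1) lookup plus a doubly linked list ordering entries by recency. This is a hypothetical illustration, not zephyr's engine (which additionally tracks TTLs and memory in bytes rather than entry counts):

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key, value string
}

// LRU evicts the least recently used entry once capacity is reached.
type LRU struct {
	cap   int
	order *list.List // front = most recently used
	items map[string]*list.Element
}

func NewLRU(cap int) *LRU {
	return &LRU{cap: cap, order: list.New(), items: make(map[string]*list.Element)}
}

// Get returns a value and marks the entry as recently used.
func (c *LRU) Get(key string) (string, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).value, true
	}
	return "", false
}

// Set inserts or updates a key, evicting the LRU entry when full.
func (c *LRU) Set(key, value string) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.cap {
		lru := c.order.Back() // least recently used lives at the back
		c.order.Remove(lru)
		delete(c.items, lru.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}

func main() {
	c := NewLRU(2)
	c.Set("a", "1")
	c.Set("b", "2")
	c.Get("a")      // touch "a", so "b" becomes least recently used
	c.Set("c", "3") // evicts "b"
	_, ok := c.Get("b")
	fmt.Println("b present:", ok) // b present: false
}
```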
| Nodes | Throughput | Latency (p99) |
|---|---|---|
| 1 | 200K ops/s | 0.8ms |
| 3 | 520K ops/s | 1.2ms |
| 5 | 780K ops/s | 1.8ms |
| 10 | 1.2M ops/s | 2.5ms |
# Unit tests
go test ./...
# Integration tests
go test -tags=integration ./...
# Benchmark tests
go test -bench=. ./...
# Coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
# Install redis-benchmark
apt-get install redis-tools
# Basic load test
redis-benchmark -h localhost -p 6379 -c 100 -n 100000
# Mixed workload test
redis-benchmark -h localhost -p 6379 -c 100 -n 100000 -t set,get --ratio 1:3
.
├── cmd/
│   └── server/          # Server entry point
├── internal/
│   ├── cache/           # Core cache engine
│   ├── cluster/         # Clustering logic
│   ├── config/          # Configuration management
│   ├── consistent/      # Consistent hashing
│   ├── persistence/     # WAL and snapshots
│   ├── protocol/        # RESP protocol
│   ├── pubsub/          # Pub/Sub system
│   ├── replication/     # Replication logic
│   └── server/          # HTTP/TCP servers
├── pkg/
│   └── client/          # Go client library
├── deployments/
│   ├── docker/          # Docker files
│   └── k8s/             # Kubernetes manifests
├── docs/                # Documentation
├── scripts/             # Build and deployment scripts
└── tests/               # Integration tests
# Development build
make build
# Production build with optimizations
make build-prod
# Cross-compile for multiple platforms
make build-all
# Run linting
make lint
# Run all tests
make test
# Generate documentation
make docs
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests for new functionality
- Run the test suite (`make test`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please read CONTRIBUTING.md for detailed guidelines.
zephyr exposes Prometheus-compatible metrics at /metrics:
curl http://localhost:8080/metrics
Key Metrics:
zephyr_ops_total - Total operations by type
zephyr_ops_duration_seconds - Operation latency histograms
zephyr_memory_usage_bytes - Memory usage by component
zephyr_connections_active - Active client connections
zephyr_cluster_nodes_up - Number of healthy cluster nodes
zephyr_replication_lag_seconds - Replication lag
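For reference, the /metrics endpoint serves counters like these in the Prometheus text exposition format. A small sketch of that format (the `renderMetrics` helper and the counter values are illustrative, not zephyr's actual handler):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderMetrics formats per-operation counters in the Prometheus text
// exposition format: a # TYPE line, then one sample line per label set.
func renderMetrics(ops map[string]int64) string {
	var b strings.Builder
	b.WriteString("# TYPE zephyr_ops_total counter\n")
	names := make([]string, 0, len(ops))
	for n := range ops {
		names = append(names, n)
	}
	sort.Strings(names) // deterministic output order
	for _, n := range names {
		fmt.Fprintf(&b, "zephyr_ops_total{op=%q} %d\n", n, ops[n])
	}
	return b.String()
}

func main() {
	fmt.Print(renderMetrics(map[string]int64{"get": 42, "set": 17}))
}
```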
# Basic health check
curl http://localhost:8080/health
# Detailed status
curl http://localhost:8080/status
Import the provided Grafana dashboard for comprehensive monitoring.
version: '3.8'
services:
zephyr-1:
image: zephyr/zephyr:latest
ports:
- "6379:6379"
- "8080:8080"
environment:
- zephyr_NODE_ID=node1
- zephyr_CLUSTER_ENABLED=true
- zephyr_PEERS=zephyr-2:6379,zephyr-3:6379
volumes:
- ./data/node1:/data
zephyr-2:
image: zephyr/zephyr:latest
ports:
- "6380:6379"
- "8081:8080"
environment:
- zephyr_NODE_ID=node2
- zephyr_CLUSTER_ENABLED=true
- zephyr_PEERS=zephyr-1:6379,zephyr-3:6379
volumes:
- ./data/node2:/data
zephyr-3:
image: zephyr/zephyr:latest
ports:
- "6381:6379"
- "8082:8080"
environment:
- zephyr_NODE_ID=node3
- zephyr_CLUSTER_ENABLED=true
- zephyr_PEERS=zephyr-1:6379,zephyr-2:6379
volumes:
- ./data/node3:/data
# Deploy to Kubernetes
kubectl apply -f deployments/k8s/
# Scale the cluster
kubectl scale statefulset zephyr --replicas=5
# Check status
kubectl get pods -l app=zephyr
# Add repository
helm repo add zephyr https://charts.zephyr.io
# Install
helm install my-cache zephyr/zephyr \
--set cluster.enabled=true \
--set cluster.replicas=3 \
--set resources.memory=2Gi
- TLS Encryption: Enable TLS for client connections and inter-node communication
- Authentication: Support for password-based and certificate-based auth
- Network Security: Configurable bind addresses and firewall rules
- Data Encryption: Optional encryption at rest for persistent data
security:
tls:
enabled: true
cert_file: "/etc/ssl/certs/zephyr.crt"
key_file: "/etc/ssl/private/zephyr.key"
auth:
enabled: true
password: "your-secure-password"
encryption:
at_rest: true
algorithm: "AES-256-GCM"
High Memory Usage:
- Check eviction policy configuration
- Monitor key distribution across nodes
- Verify TTL settings are appropriate
Connection Issues:
- Verify firewall rules allow traffic on configured ports
- Check cluster peer connectivity
- Review network latency between nodes
Performance Degradation:
- Monitor GC pressure with GOGC tuning
- Check disk I/O for persistence operations
- Review client connection pooling
# Enable debug logging
./zephyr --log-level debug
# Enable profiling
./zephyr --pprof --pprof-port 6060
# View profiles
go tool pprof http://localhost:6060/debug/pprof/profile
- Architecture Guide
- Configuration Reference
- API Documentation
- Cluster Setup Guide
- Performance Tuning
- Migration Guide
This project is licensed under the MIT License - see the LICENSE file for details.
- Discord: Join our Discord server
- GitHub Discussions: Community discussions
- Stack Overflow: Tag your questions with zephyr
- Twitter: @zephyr_db
- Redis team for the excellent protocol design
- Go community for amazing concurrent programming primitives
- All contributors who helped make this project better
zephyr - Built with ❤️ in Go