FlinkDotNet is a .NET framework for building and submitting streaming jobs to Apache Flink 2.1 clusters through a fluent C# API. It integrates three core technologies - Apache Flink (real-time stream processing), Apache Kafka (message streaming broker), and Temporal.io (workflow orchestration platform) - so .NET developers can tackle large-scale data processing challenges in multi-tiered, distributed real-time stream processing without leaving C#.
| Stage | What Works | Problems You'll Hit | Why Simple Solutions Break |
|---|---|---|---|
| Starting Out | Single server + database | Everything runs on one machine | Works great for small apps! |
| Growing Fast | Need to handle more users | Server crashes under heavy load | One machine can't handle thousands of users at once |
| | | Data gets lost when server restarts | No backup - if server dies, everything is gone |
| | | Slow response times | Processing requests one by one is too slow |
| Going Big | Need multiple servers | How do servers talk to each other? | Direct connections become a tangled mess |
| | | Messages get lost between servers | Network failures mean data disappears |
| | | Can't track long-running processes | If a process takes hours, how do you monitor it? |
| | | Need to process data in real-time | Batch processing is too slow for live data |
| Enterprise Scale | Millions of users globally | Coordinating across data centers | Hundreds of servers need to work together seamlessly |
| | | Handling 1 million+ connections per second | Need smart routing and load balancing |
| | | Data must survive server failures | Redundancy and durability become critical |
| | | Complex retry logic needed | Failures happen - need automatic recovery |
FlinkDotNet brings billion-scale architecture to .NET developers - combining these three technologies to handle routing billions of messages per second across distributed systems, processing them in real-time with global context awareness, and coordinating millions of complex workflows simultaneously.
FlinkDotNet lets you write Apache Flink 2.1 streaming jobs in C# and submit them to production Flink clusters. No Java required for job development - just write fluent C# code.
// Write this in C#...
var env = Flink.GetExecutionEnvironment();
var stream = env.FromKafka("orders")
.Filter(order => order.Amount > 100)
    .Map(order => order.ToUpperInvariant())
.SinkToKafka("processed-orders");
await env.ExecuteAsync("order-processor");
// ...and it runs on Apache Flink clusters processing millions of events/sec
What makes it different?
- ✅ Native .NET API - Full Apache Flink 2.1 DataStream API in C#
- ✅ Production-Ready - 10 passing integration tests, validated end-to-end pipeline
- ✅ Enterprise Scale - Supports Kafka, event-time windowing, exactly-once semantics
- ✅ Zero Java Code - Write everything in C#, runs on Flink clusters via IR translation
- ✅ Local Development - .NET Aspire integration for one-command cluster startup
Prerequisites: .NET 9.0 SDK, Docker Desktop (or Podman)
# 1. Clone and build
git clone https://github.com/devstress/FlinkDotnet.git
cd FlinkDotnet
dotnet build FlinkDotNet/FlinkDotNet.sln --configuration Release
# 2. Run integration tests (validates complete pipeline)
cd LocalTesting
dotnet test LocalTesting.IntegrationTests --configuration Release
# Expected: ✅ 10 tests pass - Kafka → Flink → Processing → Output validated
Your first Flink job:
using FlinkDotNet.DataStream;
var env = Flink.GetExecutionEnvironment();
// Read from Kafka
var orders = env.FromKafka("orders", "kafka:9093", "my-group");
// Transform with Flink operators
var processed = orders
.Filter(o => o.Amount > 1000)
.Map(o => o.ToUpperInvariant())
.KeyBy(o => o.CustomerId);
// Write back to Kafka
processed.SinkToKafka("high-value-orders", "kafka:9093");
await env.ExecuteAsync("fraud-detection");
Add the FlinkDotNet package to your .NET project:
dotnet add package FlinkDotNet
Use the fluent API to build and submit Flink jobs from your .NET application.
Run FlinkJobGateway as a container:
docker pull flinkdotnet/jobgateway:latest
docker run -p 5000:5000 \
-e FLINK_CLUSTER_HOST=your-flink-host \
-e FLINK_CLUSTER_PORT=8081 \
  flinkdotnet/jobgateway:latest
Access the API at http://localhost:5000.
For complete setup and validation instructions, see ReleasePackagesTesting - includes post-release validation examples and integration tests.
Download standalone executables from GitHub Releases:
- Windows: `jobgateway-win-x64-VERSION.zip` - extract, edit `start-gateway.bat`, run
- Linux: `jobgateway-linux-x64-VERSION.tar.gz` - extract, edit `start-gateway.sh`, run
See the included README.md in each package for detailed setup instructions.
For local development and contributions:
- LocalTesting: Complete local dev environment with .NET Aspire orchestration
- LearningCourse: 15-day hands-on exercises and integration tests
See LocalTesting and LearningCourse for details.
As your .NET application scales, you need:
- Real-time processing of millions of events/second
- Exactly-once guarantees across distributed systems
- Event-time windowing for out-of-order data
- Multi-cluster orchestration for enterprise deployments
Traditional solutions require Java expertise or vendor lock-in. FlinkDotNet brings Apache Flink's proven stream processing to .NET developers.
┌─────────────────┐ C# Job ┌──────────────────┐ JSON IR ┌─────────────────┐
│ Your .NET │────────────────>│ FlinkDotNet │───────────────>│ Apache Flink │
│ Application │ │ SDK │ │ Cluster (2.1) │
│ (C# Fluent API)│<────────────────│ (Translates) │<───────────────│ (Executes) │
└─────────────────┘ Results └──────────────────┘ Job Status └─────────────────┘
How it works:
- Write stream processing jobs using C# fluent API (just like Java Flink)
- FlinkDotNet translates to portable JSON IR (Intermediate Representation)
- Submit to Flink cluster via Gateway - prebuilt Java runner interprets IR
- Jobs run at full Flink performance on production clusters
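As an illustration of step 2, the IR for the quick-start job above might look roughly like the sketch below. This is only a conceptual example - the actual IR schema exchanged between the SDK, the Gateway, and the Java runner is internal and may differ in field names and structure:

```json
{
  "jobName": "order-processor",
  "operators": [
    { "type": "kafkaSource", "topic": "orders", "bootstrapServers": "kafka:9093" },
    { "type": "filter", "expression": "order.Amount > 100" },
    { "type": "map", "expression": "order.ToUpperInvariant()" },
    { "type": "kafkaSink", "topic": "processed-orders" }
  ]
}
```

The key design point is that the IR is declarative: the Java runner on the cluster interprets it with native Flink operators, so the C# process never needs to ship compiled user code to the JVM.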
| Feature | Description |
|---|---|
| DataStream API | Complete Apache Flink 2.1 API: map, filter, flatMap, window, aggregate, join |
| Kafka Integration | First-class support for Kafka sources and sinks |
| Event-Time Processing | Watermarks, late data handling, time windows (tumbling/sliding/session) |
| Exactly-Once | Checkpointing and savepoints for fault tolerance |
| Dynamic Scaling | Flink 2.1 adaptive scheduler, reactive mode, savepoint-based scaling |
| Workflow Integration | Temporal.io platform integration for complex orchestration |
| Local Development | .NET Aspire integration - start full stack with one command |
| Enterprise Observability | Full PGL stack (Prometheus, Grafana, Loki) + OpenTelemetry |
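To sketch how several of these features might combine in one job, here is a hypothetical example extrapolated from the fluent API shown above. Note that `EnableCheckpointing`, `Window`, `TumblingEventTimeWindows.Of`, `Aggregate`, and `ClickCountAggregate` are assumptions modeled on Flink's Java DataStream API, not confirmed FlinkDotNet signatures - consult the API Reference for the actual methods:

```csharp
using FlinkDotNet.DataStream;

var env = Flink.GetExecutionEnvironment();

// Assumed checkpointing API (mirrors Flink's Java DataStream API)
// for exactly-once fault tolerance
env.EnableCheckpointing(intervalMs: 60_000);

var clicks = env.FromKafka("clicks", "kafka:9093", "analytics-group");

// Hypothetical tumbling event-time window:
// count clicks per user per one-minute window
var counts = clicks
    .KeyBy(c => c.UserId)
    .Window(TumblingEventTimeWindows.Of(TimeSpan.FromMinutes(1)))
    .Aggregate(new ClickCountAggregate()); // user-defined aggregate, not shown

counts.SinkToKafka("click-counts", "kafka:9093");
await env.ExecuteAsync("click-analytics");
```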
✅ 10 Integration Tests Passing - Complete pipeline validated on every commit
What's validated:
- ✅ Kafka producer/consumer with Flink processing
- ✅ Basic transformations (map, filter, flatMap)
- ✅ Stateful processing (timers, event-time windows)
- ✅ Flink SQL via TableEnvironment and SQL Gateway
- ✅ Complex multi-step pipelines
- ✅ Aspire orchestration and service discovery
- ✅ Temporal workflow integration
Financial Services - Real-time fraud detection, risk calculation, regulatory reporting
E-commerce - Order processing, inventory management, personalization
IoT/Manufacturing - Sensor data processing, predictive maintenance, quality control
Healthcare - Patient monitoring, care coordination, compliance tracking
See Architecture & Use Cases for detailed implementations.
FlinkDotNet/
├── FlinkDotNet.DataStream/ # Core FlinkDotNet unified package (DataStream API, Common, JobBuilder)
├── FlinkDotNet.JobGateway/ # Job submission service
└── Test Projects/ # Unit and integration tests
LocalTesting/ # Complete local dev environment
├── LocalTesting.FlinkSqlAppHost/ # .NET Aspire orchestration
└── LocalTesting.IntegrationTests/ # End-to-end validation tests
LearningCourse/ # 15-day learning path
├── IntegrationTests.sln # Dedicated solution for course tests
└── Day01-Day15/ # 15 days of hands-on exercises
| Guide | Description |
|---|---|
| Getting Started | Complete setup and first job |
| Architecture & Use Cases | System design, scaling strategies, real-world examples |
| API Reference | Complete DataStream API documentation |
| Flink vs Kafka Streams vs Temporal | When to use each technology |
| Learning Course | 15-day hands-on exercises |
| Contributing | Development guidelines |
- 📖 Quickstart Guide
- 🔧 Local Development Setup
- 📊 Observability & Monitoring
- 🚨 Troubleshooting
- 🔄 CI/CD Integration
New to FlinkDotNet? Follow our 15-Day Learning Course:
- Days 1-2: Kafka + Flink fundamentals, stream processing basics
- Days 3-4: Event-time windowing, backpressure handling
- Days 5-6: Temporal workflows, enterprise observability
- Days 7-8: Stress testing, exactly-once semantics
- Days 9-10: Performance tuning, security patterns
- Days 11-14: Disaster recovery, chaos engineering
- Day 15: Capstone project
Each day includes working code examples and integration tests.
FlinkDotNet supports an extensive set of Apache Flink 2.1 features:
- Adaptive Scheduler - Automatic parallelism optimization
- Reactive Mode - Elastic scaling based on cluster resources
- Dynamic Scaling - Change parallelism without job restart
- Advanced Partitioning - Rebalance, rescale, forward, shuffle, broadcast, custom
- Savepoint Operations - Create, restore, scale from savepoints
- Fine-grained Resource Management - Slot sharing groups, resource profiles
See Apache Flink 2.1 Features for complete API mapping.
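For context, the savepoint operations above correspond to the standard Apache Flink CLI workflow on the cluster side. The commands below use the stock Flink tooling (job IDs, savepoint paths, and the jar name are placeholders):

```shell
# Trigger a savepoint for a running job
flink savepoint <jobId> hdfs:///flink/savepoints

# Stop the job gracefully, taking a final savepoint in one step
flink stop --savepointPath hdfs:///flink/savepoints <jobId>

# Resume from the savepoint, optionally at a new parallelism (-p)
flink run -s hdfs:///flink/savepoints/savepoint-abc123 -p 8 job.jar
```

Restoring at a different `-p` value is the savepoint-based scaling path listed above: state is redistributed across the new parallelism on restore.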
Validated throughput (LocalTesting environment):
- 📈 800K+ messages/sec through complete Kafka → Flink → Output pipeline
- 📈 80K+ msg/sec per Kafka partition (20 partitions tested)
- 📈 10% of traffic routed through Temporal workflows (80K workflows/sec) with full orchestration
- 📈 3 TaskManagers, 8 slots each = 24 parallel task capacity
See Performance Benchmarks for detailed metrics.
- 💬 GitHub Issues - Bug reports and feature requests
- 📧 Discussions - Architecture questions and best practices
- 🌟 Star the repo - Stay updated on releases
- 🤝 Contribute - See CONTRIBUTING.md
| Feature | FlinkDotNet | Kafka Streams | AWS Kinesis | Azure Stream Analytics |
|---|---|---|---|---|
| Language | C# native | Java/Scala | Multiple | SQL/JavaScript |
| Scale | Millions/sec | < 100K/sec | Thousands/sec | Cloud-dependent |
| Exactly-Once | ✅ External systems | ✅ Kafka only | ❌ | ❌ |
| Complex CEP | ✅ | ❌ | ❌ | Limited |
| Multi-Cloud | ✅ | ✅ | AWS only | Azure only |
| Local Dev | ✅ Aspire | ✅ | ❌ | ❌ |
| Cost | Infrastructure | Infrastructure | Per shard | Per job |
See Technology Decision Guide for detailed comparison.
- .NET 9.0 SDK - Required for all development
- Docker Desktop or Podman - For local testing with Aspire
- Apache Flink 2.1 cluster - Production deployments
- Apache Kafka - For stream sources/sinks (optional)
MIT License - see LICENSE for details.
Built on top of:
- Apache Flink - Stream processing framework
- Apache Kafka - Distributed streaming platform
- Temporal.io - Durable workflow orchestration
- .NET Aspire - Local development orchestration
Ready to process billions of events? Start with the Quick Start or explore the Learning Course.
🌟 Star this repo to stay updated on new features and releases.