
Architecture Overview


This is an overview of the project architecture and how it was envisioned. The intention is to explain how it was built and why, so people who want to hack on it can understand the main idea.

Goals

My goal with Even is to have a framework that I can use to write regular enterprise/commercial applications and web sites using event sourcing and a scalable architecture.

The idea is to have something simple enough that you can add it to a new project, much as you would use something like Entity Framework. The only difference is that you'd see problems from an event sourcing perspective, and the solutions would map to events and aggregates instead of tables.

The framework should abstract Akka's messaging paradigm from regular .NET code. That does not mean hiding actors or preventing you from calling them directly. But it means that if you want to interact with Even from an ASP.NET action, for example, you shouldn't have to deal with messages.

You should be able to call a regular method from outside and expect it to either work or throw an exception. Tasks are used wherever needed to leverage async/await patterns.
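
For illustration, here is a minimal sketch of that calling style from an ASP.NET MVC controller. The `EvenGateway` type, the `SendAggregateCommand` method and the command/aggregate classes are assumptions made for this example, not necessarily Even's exact API:

```csharp
using System.Threading.Tasks;
using System.Web.Mvc;

// OrdersController, PlaceOrder, Order and EvenGateway are hypothetical names
// used only to illustrate the calling style described above.
public class OrdersController : Controller
{
    private readonly EvenGateway _even;

    public OrdersController(EvenGateway even)
    {
        _even = even;
    }

    [HttpPost]
    public async Task<ActionResult> Place(PlaceOrder command)
    {
        // A regular awaitable call: it either completes or throws, and the
        // underlying actor messaging stays hidden from the controller.
        await _even.SendAggregateCommand<Order>(command.OrderId, command);
        return RedirectToAction("Details", new { id = command.OrderId });
    }
}
```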

It's designed to run either as a dedicated service that nodes connect to through clustering, or inside a single app/website such as an ASP.NET MVC application. Running on .NET Core and Linux is a goal, but depends on Akka.NET supporting it.

Note on clustering / multi-server setups

Even is mainly intended as a framework for building single-master service architectures. That is, the server that writes to the database and takes care of things like publishing to projections is viewed as a single server that can have downtime.

The reason for this is that, to keep event sourcing simple, one of the requirements is a simple and consistent event ordering. Even currently depends on the database being able to assign incremental numbers to events that won't change. And while it's possible to order events in a multi-master setup, it requires more complex solutions that I don't want to tackle for now.

That being said, Even is built to support a reliable query side by projecting to distributed databases. A reliable write side could possibly be achieved by means of additional Akka actors; it's just not something I'm thinking about right now.

Core Components

Even is built on top of Akka actors. The core idea is that you create a regular actor system and start Even on top of it.

### Master

When you start Even, a master actor is started, and this actor spawns many other actors to handle the subsystems.

var actorSystem = ActorSystem.Create("App");
await actorSystem.SetupEven().Start();

The main actors inside the master are the reader, the writer and the dispatcher:

The reader is responsible for reading anything from the store. Other components send requests to the reader, which starts workers to do the actual reading. Each worker is dedicated to a specific task, like reading events from stream X or reading events for projection Y. Once a worker is started, it sends responses directly to the requester.
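
Here is a rough sketch of that request/worker pattern. The message and actor names below are invented for illustration and are not Even's own types:

```csharp
using Akka.Actor;

// Invented request type for this sketch.
public class ReadStreamRequest
{
    public ReadStreamRequest(string stream, int start, int count)
    {
        Stream = stream; Start = start; Count = count;
    }
    public string Stream { get; }
    public int Start { get; }
    public int Count { get; }
}

public class Reader : ReceiveActor
{
    public Reader()
    {
        Receive<ReadStreamRequest>(req =>
        {
            // Spawn a worker dedicated to this specific read. Forward keeps
            // the original sender, so the worker replies directly to the
            // requester instead of routing responses back through the reader.
            var worker = Context.ActorOf(Props.Create(() => new ReadStreamWorker()));
            worker.Forward(req);
        });
    }
}

public class ReadStreamWorker : ReceiveActor
{
    public ReadStreamWorker()
    {
        Receive<ReadStreamRequest>(req =>
        {
            // A real worker would query the store here and send the events
            // back to Sender, then stop itself once the read is complete.
            Sender.Tell(new object[0]);
            Context.Stop(Self);
        });
    }
}
```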

Every time any component needs to write anything, it sends a request to the writer. The writer directs requests to one of two specialized writers:

  • Buffered Event Writer

    Log-style events, where you simply want to record that something happened (e.g. the user clicked a link, a confirmation mail was sent), don't have any ordering requirements. It turns out a lot of the events in an event sourcing system are of this nature. The buffered event writer combines many write requests into a single batch to improve write performance. This allows you to simply write an event every time a product is viewed and not worry about concurrent writes; most events will be written in a single transaction. Because the structure of the event table is quite simple, this can easily reach tens of thousands of writes per second with very little effort. A rough sketch of this buffering idea follows the list.

  • Serial Writer

    The serial writer handles one request at a time. It's used mostly by aggregates. If you start a new aggregate, you expect its event to be the first one in that stream; if you're processing the next command, you expect the second event in the stream. This requires each write to be done independently of the others (a write request may contain multiple events from the same command) to make sure all checks match.
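
To make the buffering idea concrete, here is a rough sketch of how such a batching writer could look. The `WriteEvent` message and `IEventStoreWriter` interface are placeholders invented for this example; Even's actual writer is more elaborate:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Akka.Actor;

// Placeholder types invented for this sketch.
public class WriteEvent
{
    public WriteEvent(object @event) { Event = @event; }
    public object Event { get; }
}

public interface IEventStoreWriter
{
    Task WriteAsync(IReadOnlyCollection<object> events);
}

public class BufferedEventWriter : ReceiveActor
{
    private sealed class Flush { }

    private readonly IEventStoreWriter _store;
    private readonly List<object> _buffer = new List<object>();

    public BufferedEventWriter(IEventStoreWriter store)
    {
        _store = store;

        Receive<WriteEvent>(req =>
        {
            // Accumulate the event; schedule a flush when a new batch starts.
            if (_buffer.Count == 0)
                Context.System.Scheduler.ScheduleTellOnce(
                    TimeSpan.FromMilliseconds(10), Self, new Flush(), Self);

            _buffer.Add(req.Event);
        });

        ReceiveAsync<Flush>(async _ =>
        {
            // Everything accumulated so far is written as a single batch.
            // While this await is pending the actor processes no new messages,
            // so incoming events queue up in the mailbox and form the next batch.
            await _store.WriteAsync(_buffer.ToArray());
            _buffer.Clear();
        });
    }
}
```

The serial writer, by contrast, would process each write request to completion before picking up the next, so the expected-sequence checks for a stream always hold.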

Once the events are written, they are sent to the dispatcher.

The dispatcher receives events from the writers and publishes them in order to the EventStream. It achieves ordering by tracking the global sequence number for each received event.

If a gap is detected, it waits a little, and if the missing event is not received within a few milliseconds, it triggers a read to the store to recover that event or confirm it was never written. Most databases can skip auto-generated numbers, for example when a transaction is rolled back. This process ensures nothing is missed, but also opens the door to things like multiple servers writing to the same database.
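
A simplified sketch of that ordering and gap-detection logic follows, with invented message names; Even's actual dispatcher handles more cases:

```csharp
using System;
using System.Collections.Generic;
using Akka.Actor;

// Placeholder event envelope invented for this sketch.
public class PersistedEvent
{
    public PersistedEvent(long globalSequence, object domainEvent)
    {
        GlobalSequence = globalSequence; DomainEvent = domainEvent;
    }
    public long GlobalSequence { get; }
    public object DomainEvent { get; }
}

public class Dispatcher : ReceiveActor
{
    private sealed class GapTimeout
    {
        public GapTimeout(long expected) { Expected = expected; }
        public long Expected { get; }
    }

    private readonly SortedDictionary<long, PersistedEvent> _pending =
        new SortedDictionary<long, PersistedEvent>();
    private long _nextSequence = 1;

    public Dispatcher()
    {
        Receive<PersistedEvent>(e =>
        {
            _pending[e.GlobalSequence] = e;
            PublishInOrder();

            if (_pending.Count > 0)
            {
                // A gap was detected: give the missing event a few milliseconds
                // to arrive before falling back to a read from the store.
                Context.System.Scheduler.ScheduleTellOnce(
                    TimeSpan.FromMilliseconds(50), Self, new GapTimeout(_nextSequence), Self);
            }
        });

        Receive<GapTimeout>(t =>
        {
            if (_nextSequence == t.Expected && _pending.Count > 0)
            {
                // Still missing: ask the reader to re-read from the store to
                // either recover the event or confirm the sequence number was
                // skipped (e.g. by a rolled-back transaction).
            }
        });
    }

    private void PublishInOrder()
    {
        // Publish every consecutive event starting at the expected sequence.
        while (_pending.TryGetValue(_nextSequence, out var e))
        {
            Context.System.EventStream.Publish(e);
            _pending.Remove(_nextSequence);
            _nextSequence++;
        }
    }
}
```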
