Hacker News
DDD, Event Sourcing, and CQRS Tutorial (cqrs.nu)
39 points by lisa_henderson on Sept 9, 2015 | 11 comments



Some quick points on DDD/CQRS/event sourcing from my perspective. I've been involved in a project that used them and have a number of concerns:

Firstly, the main drivers of this, i.e. Greg Young/Udi Dahan/Particular, are trying to sell a product and consultancy. While there is nothing wrong with that, there is a significant cost, and the motivation to use this architecture is marketing-driven rather than based on logical gains. Make sure it is appropriate for your organisation; for 95% of organisations it probably isn't. It wasn't for ours, but overzealous architecture proponents bought the marketing.

The cognitive load is immense compared to a smaller system architecture. Every change results in masses of friction, cost, and time. Your agility will be destroyed pretty quickly. Also, due to the cross-cutting concerns, there is no escape from it other than porting everything away.

The products to support it are generally immature, low-quality, and opinionated to the point that it's impossible to integrate them cleanly on anything but a greenfield project. Expect friction.

There are better gains to be had from changing your architecture to a purely service model (I hate the term, but microservices) and skipping the whole concept.

However, if there's one thing I have learned, it's that Command-Query Separation is a good model for isolation. The Event Sourcing bit, not so much.
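
For anyone unfamiliar with the distinction, a minimal sketch of the command/query split (Python, with all names invented purely for illustration):

    # Commands mutate state and return nothing; queries return
    # data and never mutate. That split is what gives the isolation.
    class AccountService:
        def __init__(self):
            self._balances = {}  # account_id -> balance

        # Command: changes state, returns nothing.
        def deposit(self, account_id, amount):
            self._balances[account_id] = self._balances.get(account_id, 0) + amount

        # Query: reads state, never changes it.
        def balance_of(self, account_id):
            return self._balances.get(account_id, 0)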


Hi, I'm really interested in your experience. Could you tell me a little bit more about the cognitive load and cross-cutting concerns?


I am not a buffoon, but I have used CQRS with event sourcing for a reasonably large system written and maintained by around 10 developers. Additionally, I have used event sourcing without CQRS on two other systems.

I found the cognitive load to be less than with other service-based architectures I have worked on. I could jump into any area of the code base and, because of the naming conventions for commands and events, see what was going on; this has never been my experience with a service-oriented architecture.

The main change in developer thinking is that you don't record state; you record state transitions. Once that is internalised, development is easy: a new feature requires new commands and handlers, new events, new queries, and updates to aggregates.
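
To make that concrete, a rough sketch of what "recording state transitions" looks like (Python, invented names, not any particular framework's API):

    import uuid

    # Events are immutable facts about what happened.
    class AccountOpened:
        def __init__(self, account_id):
            self.account_id = account_id

    class MoneyDeposited:
        def __init__(self, account_id, amount):
            self.account_id = account_id
            self.amount = amount

    class Account:
        def __init__(self):
            self.id = None
            self.balance = 0
            self.pending_events = []  # new facts awaiting persistence

        # Command methods validate, then emit events; they never
        # write state directly.
        def open(self):
            self._emit(AccountOpened(str(uuid.uuid4())))

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._emit(MoneyDeposited(self.id, amount))

        def _emit(self, event):
            self.apply(event)
            self.pending_events.append(event)

        # State is a left fold over the event stream: replaying the
        # same events always rebuilds the same state.
        def apply(self, event):
            if isinstance(event, AccountOpened):
                self.id = event.account_id
            elif isinstance(event, MoneyDeposited):
                self.balance += event.amount

    def load(history):
        # Rehydrate an aggregate from its stored events rather than
        # reading a current-state row.
        account = Account()
        for event in history:
            account.apply(event)
        return account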

I had absolutely no problem with cross-cutting concerns; security, for example, was handled in command handlers where needed. Other components of our system that didn't use CQRS (for example reporting, the general ledger, and third-party integrations) received data from the event stream and injected commands back into the system. So in no way was our organisation infected by CQRS/ES.


Yes, it's definitely a case of YMMV. Your system may have been better suited to the architecture than ours.


With respect to cross-cutting concerns, your domain is fractured into something with temporal concerns, i.e. events and a timeline. That doesn't reduce complexity; it merely moves it elsewhere and changes the context to something unfamiliar. Then you have to remodel parts to enforce idempotency and handle command failures; commands do fail, but the caller isn't made immediately aware of this. Then there are the distribution and reliability concerns of the system, which have to be absolute. If a complex event has numerous subscribers that all must complete, it is very difficult to provide guarantees of consistency, even if it is eventual. So your logical isolation ends up with massive cross-cutting concerns. For example, your UI needs to be aware of the underlying model if the user requires confirmation that something has occurred. Also, things like validation become terribly painful.
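
To illustrate the idempotency point: under at-least-once delivery, every subscriber ends up carrying dedupe machinery of roughly this shape (a sketch, Python, invented names):

    # Events must carry a unique ID, and each handler has to record
    # the IDs it has processed so redelivery has no extra effect.
    class EmailOnSignup:
        def __init__(self):
            self._seen = set()  # would need to be a durable store

        def handle(self, event_id, payload):
            if event_id in self._seen:
                return  # duplicate delivery, ignore
            self._seen.add(event_id)
            self._send_welcome_email(payload["email"])

        def _send_welcome_email(self, address):
            print("sending welcome mail to", address)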

The cognitive load is the same as for any distributed system: it adds massive complexity. If the surface of your application is a SaaS system, it is far simpler to deploy hundreds of small, tightly coupled instances than to scale up to one massive instance architecturally.


I think a lot of what you are mentioning comes down to the implementation of CQRS/ES being used, not fundamental flaws in the approach.

CQRS doesn't force you to build a distributed system. It doesn't make you use eventual consistency or asynchrony but provides them as options.

For example, the system I worked on had synchronous command handling with a mixture of synchronous and asynchronous subscribers. This allowed us to use eventual consistency where it was appropriate and for all clients to be immediately aware of problems.
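
Roughly, that arrangement looks like this (a sketch with invented names, not our actual code): the handler and synchronous subscribers run in the caller's thread, so failures surface immediately, while the rest are queued for eventual consistency.

    import queue

    class Dispatcher:
        def __init__(self):
            self.sync_subscribers = []        # run in-line, can fail the call
            self.async_queue = queue.Queue()  # drained by a worker elsewhere

        def handle(self, command, handler):
            # Exceptions from the handler or any synchronous
            # subscriber propagate straight back to the client.
            events = handler(command)
            for event in events:
                for subscriber in self.sync_subscribers:
                    subscriber(event)          # immediately consistent
                self.async_queue.put(event)    # eventually consistent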


I really like Greg Young's talks on CQRS and Event Sourcing.

* https://www.youtube.com/watch?v=JHGkaShoyNs

* https://www.youtube.com/watch?v=KXqrBySgX-s


I am a big user of both CQRS and Event Sourcing, but not in the very narrow sense implied by DDD/CQRS/ES advocates. In particular, there are two main tenets that I feel are not sufficiently justified.

One is the idea that commands are write-only (they are not allowed to return information). The reason for this restriction is fairly unclear, and there are several pieces of information that I believe can be legitimately returned (a sketch follows this list):

- Has the command finished executing?

- Was it successful? If not, what was the exception thrown?

- If the command created something, what is that thing's identifier?

- For distributed systems, what is the vector clock after the command's execution?
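
A sketch of a handler returning that kind of acknowledgement (Python, names invented for illustration; the vector clock field only matters in distributed deployments):

    import uuid
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CommandResult:
        succeeded: bool
        error: Optional[Exception] = None
        created_id: Optional[str] = None
        vector_clock: Optional[dict] = None  # distributed systems only

    def handle_register_user(email, users):
        try:
            user_id = str(uuid.uuid4())
            users[user_id] = email  # the actual write
            return CommandResult(succeeded=True, created_id=user_id)
        except Exception as e:
            return CommandResult(succeeded=False, error=e)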

Another idea is aggregates, especially the fact that each event is bound to a single aggregate and that two aggregates process their associated events independently. I can see how this can be useful, in a NoSQL-ish kind of way: by sacrificing the ability to enforce invariants across multiple aggregates, it becomes trivial to distribute event streams, to quickly compute the state of a single aggregate among millions, and to write to several aggregates in parallel without a global lock.

Still, sacrificing those invariants is costly. Many domains will have constraints on uniqueness (e.g. each e-mail address may belong to only one user) or relationship arity (e.g. each student may have at most one internship, each internship may have at most one student), and in my experience, expressing those constraints in an aggregate-based system usually ends up with one of:

- creating a "lock" aggregate with "Locked" and "Unlocked" events (sketched after this list), or

- ignoring constraints and providing a user interface to manually resolve violations.
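
The first workaround looks roughly like this (a sketch, invented names): a tiny reservation aggregate per e-mail address whose only job is to serialise claims on that address, so the uniqueness invariant fits inside a single aggregate boundary.

    class EmailReservation:
        def __init__(self, email):
            self.email = email
            self.owner = None  # user_id currently holding the address

        def claim(self, user_id):
            if self.owner is not None:
                raise ValueError("e-mail address already taken")
            self.owner = user_id
            return ("Locked", self.email, user_id)

        def release(self, user_id):
            if self.owner != user_id:
                raise ValueError("not the current owner")
            self.owner = None
            return ("Unlocked", self.email, user_id)

Compare all of that machinery with a one-line unique constraint in a relational store.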

I would rather express simple constraints with simple code and have to work hard on optimizations later than write complex code up front just to solve an unconfirmed performance issue.


A nice resource for those exploring a completely event-sourced, domain-driven, responsibility-segregated application.

I really do loathe the acronyms.


> I really do loathe the acronyms.

I like to pronounce CQRS as "suckers". Though I find utility in the concept itself :)


An apt analogy. Many architectures are marketing-driven. It scares me how often this is bought into by short-sighted people who rank highly and want a week off for training.

I'm currently dealing with a proverbial balls-up which is the outcome of an inappropriate application of CQRS and then the microservices architecture fad. They have Packer, Vagrant, Ansible, AWS, and all sorts of random things half-plugged into a .NET solution. That makes little sense, as this is a Microsoft shop that owns its own paid-up datacentre estate, and it's a monolithic project that doesn't suit this model nor justify the cost of conversion.

Let me clarify "fad" here: these architectures work well for specific cases like Netflix, Amazon, and other flag-waving success stories, but not for general cases like standard LOB stuff.



