The Problem.

Choosing the target architecture for a future product can be daunting, especially when the decision lies between two seemingly opposite approaches - the Monolith and microservices. Both architectures have their own advantages and disadvantages, and the choice between them can significantly impact the product's scalability, maintainability, deployment, and overall success.

Today's overhyped buzzwords like Microservices and Event-Driven Architecture (EDA) often confuse and mislead their potential adopters. Separate deployment units and reactions to events sound great, don't they?

In this article, I'd like to figure out which approach is better and why the upcoming challenges must be considered from the very first steps.

Microservices vs. Monolith.

Microservices are a software architecture pattern where applications are broken down into small, independently deployable services that communicate with each other via APIs. Each microservice can be developed, tested, and deployed independently, providing agility, scalability, and fault tolerance.

Some of the main benefits of microservices include:

✅ Scalability: Each microservice can be deployed and scaled independently, allowing for better scalability of the entire application.
✅ Resilience: If one microservice fails, the rest of the application can continue functioning as expected.
✅ Flexibility: As each microservice is developed and deployed independently, it allows for greater flexibility regarding technology choices and development approaches.

However, there are also some disadvantages of microservices, such as:

❌ Complexity: As the application is broken down into multiple services, it can become more complex to manage.
❌ Inter-service communication: As the services need to communicate with each other, it can create additional latency and complexity.
❌ Overhead: Additional overhead is involved in developing and maintaining multiple services, which can increase costs and time-to-market.

As you probably guessed, we will talk about the disadvantages of the microservice approach.

The first one is complexity.

Complexity is something you need to recognize at the very beginning, because on paper it all looks straightforward: split the application into small services and let them communicate.

Sounds easy, right?

Now let's move on to more practical examples of when complexity becomes an issue.

Communication between services - the invisible bottleneck

Imagine that you and your team are working on a new product (it can be a promising startup or an internal enterprise project), and the decision has been made to use small deployment units, with every small domain getting its own microservice. Therefore, the API is the only way to trigger an action or to update and receive data (according to best practices). And so we slowly arrive at the inter-service communication issue.

(Image: Service Mesh, inter-service communication; source: https://successive.cloud/what-is-service-mesh/)

That means every time you need some information, your service requests it from another one (via the service mesh); therefore, whenever you need richer data, or data in a slightly different form, you have to ask the owner of the service you depend on to update its API to fulfill your requirements, which means additional meetings, arguments, chats, and so on. And that's still not a real problem, because we are all human beings and will resolve those incompatibility issues eventually. It becomes a problem when delivery time is crucial: when there is a deadline, or when the ready-for-market date has already been defined (because your competitors will respond immediately).

You might raise your hand and ask: why not define the API upfront, check that it fits everyone, and only then start building the services? It would be a fair question, because every idea or product needs to be proven, tested, and reviewed. But keep in mind what API (Application Programming Interface) really covers: not only RESTful APIs, which are synchronous and HTTP-based, but also asynchronous ones such as messaging systems (like Kafka, RabbitMQ) and Pub/Sub systems (like Redis), where the event definitions act as the API contracts. So whenever the API for the messaging system changes, it can break the consumers whose contracts are out of date.

The solution here is to introduce versioning of the messages, but that also requires updating every consumer that wants to receive the new message format. For the producer, it means emitting two messages in different shapes (new and old), which leads to even more uncertainty and potential issues (remember the human factor). The complexity of maintaining several API versions applies equally to synchronous APIs (REST, RPC), where you have to support different versions of endpoints while keeping the OpenAPI documentation (e.g., Swagger) up to date.
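To make this less abstract, here is a minimal TypeScript sketch of what versioned message contracts could look like; the event name, fields, and version handling below are hypothetical, not taken from any specific system.

```typescript
// Sketch of versioned event contracts (names and fields are hypothetical).
// The producer emits both shapes during the transition period; consumers
// discriminate on the `version` field.

interface OrderPlacedV1 {
  type: "OrderPlaced";
  version: 1;
  orderId: string;
  total: number;
}

interface OrderPlacedV2 {
  type: "OrderPlaced";
  version: 2;
  orderId: string;
  total: number;
  currency: string; // the new requirement that motivated the version bump
}

type OrderPlaced = OrderPlacedV1 | OrderPlacedV2;

// An up-to-date consumer handles both versions explicitly.
function handleOrderPlaced(event: OrderPlaced): void {
  switch (event.version) {
    case 1:
      console.log(`Order ${event.orderId}: ${event.total} (currency unknown)`);
      break;
    case 2:
      console.log(`Order ${event.orderId}: ${event.total} ${event.currency}`);
      break;
  }
}

handleOrderPlaced({ type: "OrderPlaced", version: 1, orderId: "o-1", total: 99 });
```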

GraphQL handles this more gracefully: you can add new fields or introduce new resolvers without editing the documentation, but it doesn't save you from updating all dependent consumers, holding additional meetings, and so on…
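As a hedged illustration of that additive evolution, here is a small sketch using the graphql-js package; the Order type and its fields are made up for the example.

```typescript
// Adding a nullable field to a GraphQL type is an additive, non-breaking
// change: queries written before the field existed keep working as-is.
import { buildSchema, graphql } from "graphql";

const schema = buildSchema(`
  type Order {
    id: ID!
    total: Float!
    # New optional field added later; old clients simply don't request it.
    discount: Float
  }
  type Query {
    order(id: ID!): Order
  }
`);

const rootValue = {
  order: ({ id }: { id: string }) => ({ id, total: 42.0, discount: 5.0 }),
};

// An "old" client query that predates the discount field still succeeds.
graphql({ schema, source: '{ order(id: "1") { id total } }', rootValue })
  .then((result) => console.log(result.data));
```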

One possible solution is to develop everything within one team in a single monorepo, so that every change keeps the API provider and the client services aligned. Still, it requires thoroughly checking all consumers to avoid introducing a breaking change, which is risky. That's why you should introduce automated E2E tests that verify the contract compatibility of every service, which in turn increases development time.
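One way such a contract check could look (a sketch, not a prescription) is a consumer-owned schema that the producer's pipeline validates payloads against; below I use zod purely for illustration, and the event shape is hypothetical.

```typescript
// Hedged sketch of an automated contract check: the consumer publishes the
// schema it expects, and the producer's CI validates real payloads against it.
import { z } from "zod";

// Contract owned by the consumer team.
const OrderPlacedContract = z.object({
  orderId: z.string(),
  total: z.number(),
  currency: z.string().optional(), // optional so older producers still pass
});

// Payload produced by the producer service (e.g., captured in an E2E test).
const producedPayload = { orderId: "o-1", total: 99, currency: "EUR" };

const result = OrderPlacedContract.safeParse(producedPayload);
if (!result.success) {
  // In CI this would fail the build before a breaking change ships.
  throw new Error(`Contract broken: ${result.error.message}`);
}
console.log("Producer payload is compatible with the consumer contract");
```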

But still, are you sure this overhead is justified by the purpose and the goal you're pursuing?

Monolithic architecture - basics and challenges

Monolithic architecture is a style of software design that structures an application as a single, tightly coupled unit (below, I'll explain how to treat it so it becomes less coupled). The entire application is delivered and deployed as a single package, and all components within the application share the same codebase and data store. This style is often used for smaller, more straightforward applications that don't require much scalability or flexibility.

There are several advantages to using a monolithic architecture for web and backend development:

✅ Simplicity: Monolithic architectures are often easier to develop and maintain than microservices architectures, as there are fewer components to worry about.
✅ Performance: Monolithic architectures often provide better performance, as less overhead is associated with communication between components.
✅ Ease of testing: Monolithic architectures are often easier to test, as there is a single codebase.

It doesn't go without disadvantages as well:

❌ Scalability: Monolithic architectures can be challenging to scale horizontally, as adding more servers will not necessarily improve performance.
❌ Maintainability: Maintaining a monolithic architecture can become increasingly difficult as the application grows in complexity, especially if developers have to follow code-structure agreements and link different features together.
❌ Flexibility: Monolithic architectures can be challenging to adapt to changes in requirements or technology (this goes hand in hand with the previous issue).

👉 Monolithic architecture is a good choice for simple applications requiring little scalability or flexibility. However, microservices architectures are typically better for larger, more complex applications.

But how do you cure your Monolith so it's suitable for large enterprises, especially if it's already in place?

As you may have noticed, the main disadvantages are Scalability, Maintainability, and Flexibility, but what makes the Monolith this way? The answer is Coupling and Cohesion.

Coupling and Cohesion are two critical concepts in software engineering used to measure the quality of a software system's design.

🔗 Coupling is the degree to which components of an application are interconnected. Coupling can be categorized as tight/high or loose/low.

📍 Cohesion refers to the degree to which elements within a module work together to fulfill a single, well-defined purpose. High Cohesion means that elements (units) are closely related and focused on a single purpose, while Low Cohesion means that components are loosely related and serve multiple purposes.

Both Coupling and Cohesion are essential factors in determining a software system's maintainability, scalability, and reliability. High Coupling and Low Cohesion can make a system difficult to change and test, while Low Coupling and High Cohesion make a system easier to maintain and improve.

From these definitions, we can agree that high Cohesion and loose coupling are your friends. They allow us to improve inter-module communication, making the system more flexible and maintainable.

The problem of the ‘distributed monolith’

A Distributed Monolith is not something you're looking for or something you need. Initially, you build new microservices out of a monolith app because that is the target architecture, but this will only bring you the pain of aligning communication between your tightly coupled, highly dependent, narrow-purpose services. In general, the Distributed Monolith is an architectural anti-pattern you should avoid when splitting your Monolith. That's why you must first get rid of the coupling, and only then think about what needs to be moved into separate services.

Build it loosely coupled.

Let me introduce the outcome of these improvements: Loosely-Coupled Monolith.

It inherits the benefits of both the Monolith and Microservices (not directly, of course) and allows us to quickly adapt contracts between modules (whether it is a direct call or indirect event-driven triggering). At the same time, you keep your domain modules separated, which is very beneficial when your team decides to move the system to a microservices approach by simply splitting the Monolith by domains into separate services.

Later on, extracting the domain logic into a service should be manageable thanks to loose dependencies based on abstract contracts/interfaces/events.

Ok, how can we achieve this sort of well-prepared Monolith and not make it a Big Ball of Mud (according to DDD)?

There are a few main requirements you need to follow:

📌 SOLID principles - general practices to keep your code flexible, more independent, and well-maintained.
📌 Domain-Driven Design (DDD) - will give an understanding of the domain and why the business logic has to be separated from infrastructure details, and much more.
📌 Hexagonal Architecture (Layered Architecture, Ports & Adapters) - will help organize your in-app infrastructure to be more agile, allowing you to connect the same business logic to different adapters (external connectors); see the sketch after this list.
📌 Event-Driven Architecture (EDA) - something you can consider optional, but this is what helps organize the event flow within the app and later move it to a messaging system. (Of course, it makes sense only if you consider async interconnections among systems.)
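Here is the sketch referenced above: a minimal Ports & Adapters example in TypeScript. The port, use case, and repository names are invented for illustration; the point is only that the business logic depends on an interface, while infrastructure supplies interchangeable adapters.

```typescript
// Port: defined next to the business logic, knows nothing about infrastructure.
interface OrderRepository {
  save(order: { id: string; total: number }): Promise<void>;
}

// Business logic depends only on the port.
class PlaceOrder {
  constructor(private readonly repo: OrderRepository) {}

  async execute(id: string, total: number): Promise<void> {
    if (total <= 0) throw new Error("Order total must be positive"); // domain rule
    await this.repo.save({ id, total });
  }
}

// Adapter #1: in-memory implementation for tests or early prototypes.
class InMemoryOrderRepository implements OrderRepository {
  orders = new Map<string, number>();
  async save(order: { id: string; total: number }) {
    this.orders.set(order.id, order.total);
  }
}

// Adapter #2 could be Postgres, DynamoDB, or an HTTP client -- the use case doesn't change.
new PlaceOrder(new InMemoryOrderRepository()).execute("o-1", 99);
```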

(Graphic created by the author Denys Dudarev)

The picture above shows how the application is split into layers and domain modules, organized by responsibility; in the future, these modules can become dedicated services with well-aligned interfaces in between. The applications inside every domain module represent separate use cases that can be applied within the scope of the domain. The main rule is not to call an application service from another application, because use cases have to be independent and must not directly invoke one another.

Every domain module has to own its database, or at least the specific tables related to the domain. The main goal is not to cross domain boundaries (e.g., not to query a table belonging to another domain). Communication has to be built on external interfaces such as module facades and event handlers, as sketched below.
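A hedged sketch of what that could look like, with hypothetical module names: the shipping module never touches the orders tables and talks to the orders module only through its public facade.

```typescript
// Public facade exported by the orders module.
interface OrdersFacade {
  getShippingAddress(orderId: string): Promise<string>;
}

// The shipping module depends on the facade, not on the orders database schema.
class ShippingService {
  constructor(private readonly orders: OrdersFacade) {}

  async createShipment(orderId: string): Promise<string> {
    const address = await this.orders.getShippingAddress(orderId);
    return `Shipment for ${orderId} to ${address}`;
  }
}

// In the monolith the facade is an in-process object; after extraction it can
// become an HTTP/gRPC client without changing ShippingService.
const ordersFacade: OrdersFacade = { getShippingAddress: async () => "Main St 1" };
new ShippingService(ordersFacade).createShipment("o-1").then(console.log);
```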

Slice it.

Layered/sliced architecture (Hexagonal architecture is the most separated one) is an excellent way to organize the separation of concerns between the business logic and infrastructure. Infrastructure (the framework, adapters, database, cache, external APIs, protocols, etc.) should be replaceable while the business logic remains unchanged.

(Graphic: Slice it)

Within business logic, I'd distinguish Domain Logic and Application Logic. There's a subtle difference between them, but it's vital.
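To illustrate the distinction (the entity and use case below are hypothetical): domain logic enforces business rules on the entity itself, while application logic orchestrates the use case around it.

```typescript
// Domain logic: lives in the entity, independent of any particular use case.
class Cart {
  private items: { sku: string; price: number }[] = [];

  add(sku: string, price: number): void {
    if (price < 0) throw new Error("Price cannot be negative"); // business rule
    this.items.push({ sku, price });
  }

  get total(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

// Application logic: orchestrates the use case (load, act, persist, notify).
async function addItemToCart(cartId: string, sku: string, price: number) {
  const cart = new Cart();          // in reality: load the cart from a repository
  cart.add(sku, price);             // delegate the business rule to the domain
  console.log(`Cart ${cartId} total is now ${cart.total}`);
  // in reality: save via a repository, publish an ItemAdded event, etc.
}

addItemToCart("c-1", "book", 12);
```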

Events are your salvation in the future.

EDA inside the Monolith is typical when events result from actions within the domain and subscribers should react to them (e.g., an order has been placed into the cart, so an email must be sent). According to DDD, events are part of a domain and should be used to track behavior and actions within entities. Event-based communication doesn't trigger an external module directly; instead, it relies on an abstract event contract.

Following EDA gives you an easy way to transform in-app communication into external message/event exchange (Message Queue, PubSub,…)
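As a rough sketch of how that transformation stays cheap, the publisher can depend on an abstract event bus, so the in-process implementation can later be swapped for a Kafka or RabbitMQ adapter without touching domain code; all names below are illustrative.

```typescript
interface DomainEvent {
  type: string;
  payload: unknown;
}

interface EventBus {
  publish(event: DomainEvent): void;
  subscribe(type: string, handler: (event: DomainEvent) => void): void;
}

// In-process implementation; a message-queue adapter would implement the same interface.
class InProcessEventBus implements EventBus {
  private handlers = new Map<string, ((event: DomainEvent) => void)[]>();

  subscribe(type: string, handler: (event: DomainEvent) => void): void {
    this.handlers.set(type, [...(this.handlers.get(type) ?? []), handler]);
  }

  publish(event: DomainEvent): void {
    (this.handlers.get(event.type) ?? []).forEach((handle) => handle(event));
  }
}

const bus: EventBus = new InProcessEventBus();
bus.subscribe("OrderPlacedInCart", (e) => console.log("Send email for", e.payload));
bus.publish({ type: "OrderPlacedInCart", payload: { orderId: "o-1" } });
```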

In the End.

In conclusion, I'd like you to consider or reconsider your opinion regarding the Monolithic approach as well as evaluate the moment when the Microservices architecture should be applied.

Overall, the Monolith is a good choice whenever you are still evaluating ideas and the contracts are not yet well established. At the same time, the microservice pattern is better when you want to improve the maintainability of your application by allowing separate teams to take care of their services. Usually, this happens in enterprises where the communication between domains is well established and the Monolith can be segregated into smaller components.

To achieve this, you should strive for a Loosely Coupled Monolith, making your application more decoupled and flexible, and then decide wisely whether you need to migrate it to microservices or keep it as a decoupled monolith.

Also, event-driven architecture can easily live within a monolith, handling in-app system events that can later be turned into external messages.

In this article, I've highlighted that the Monolith is generally treated as tightly coupled by default, but by following some techniques, it's possible to make it less coupled and well organized.

-----------

FAQ: Monolith vs. Microservices – What’s Best for Your Business?

1. How do I decide between Monolith and Microservices for my business?

The decision depends on scalability needs, team structure, and long-term goals. A Monolith is easier to develop and maintain for startups or smaller projects, while Microservices suit larger businesses needing flexibility and independent teams.

2. Is migrating to Microservices always the right move?

Not necessarily. While Microservices are popular, they come with higher complexity and costs. If your current Monolith meets business needs without performance issues, a migration may not be necessary.

3. What are the business risks of choosing the wrong architecture?

Choosing Microservices too early can lead to higher development overhead, while staying with a Monolith too long may limit scalability and agility. The key is understanding your growth trajectory before making a decision.

4. How can I future-proof my architecture choice?

A Loosely Coupled Monolith can provide the best of both worlds, allowing for a gradual transition to Microservices if needed. Aligning architecture with business goals ensures long-term flexibility.

5. When should I consider breaking up my Monolith?

If your development teams struggle with scaling, if deployments become bottlenecks, or if different parts of your business evolve independently, transitioning to Microservices might be the next logical step. But breaking the monolith straight into microservices can sometimes lead to unnecessary complexity and increased development time. Therefore, I'd suggest starting with refactoring by splitting it into a loosely coupled monolith with established boundaries and moving toward EDA within the monolith. This approach makes it easier to split it into separate services later.

👉 Still unsure? Let's have a talk and find the best strategy for your business! 🚀