
Chapter 2: Modules Separation: Focus On Maintainability


Case

Overview

After the initial steps of building the application's MVP, the codebase starts to grow bigger and more complicated.

Imagine that you started developing your application with an approach similar to the first chapter's - with only 1 production code project. It was successful, gained more and more traction in the market, and therefore a lot of customers. Together with the development team, you keep introducing new features, and as a result:

  1. Each module grows.

  2. Some modules change more often than others.

  3. New teams are created. Now there are 3 different teams working on features within one project. It becomes harder to maintain - teams touch the same areas and run into a lot of conflicts.

  4. You notice that some modules are typical CRUD modules and some are very complex.

It now makes sense to start splitting the production project into a set of projects per domain.

Note
This step makes the codebase much larger than before. Think twice before deciding to do this.
Important
To keep the code simple and understandable when compared to the first chapter, we have not added any new features (business processes). Thanks to this, you can see how much more complex the code structure is compared to the previous one.

Requirements

As mentioned in the overview, the requirements remain unchanged to keep the codebase comparable to the previous step. We will continue to do this throughout the chapters.

Main assumptions

Due to changing requirements and the current market situation (here you have to imagine that this is the case, although we do not assume any new requirements), we have to adjust our assumptions:

  1. Our application is now being used by 5000 people, which is the maximum number of users based on our initial MVP assumptions. It still makes no sense to extract parts of the application into microservices, so we keep our modular monolith and scale it as a single deployment unit.

  2. A lot of new features will be added to our solution. There are many requests from our customers and after considering them, we usually decide to implement them. We also decide to add some features on our own based on analytics observations.

  3. Parts of the application change extremely quickly - several times a day.

  4. The Contracts module is becoming increasingly complex due to new business logic.

  5. We have several development teams, and they start complaining about conflicts when working on the same areas of the code.

Note
We can utilize a load balancer and feature flags to independently scale modules in our monolith.

Solution

Overview

The step we decided to take was crucial to our application. We started in a very simple way:

  1. Create 3 projects, where 1 is production code and 2 are test-related.

  2. Separate the modules by namespace.

  3. Communicate using an in-memory queue.

Over time, however, we have found that it makes sense to divide it up more granularly (see the Main assumptions section).

Before we look at the technical solution, a few explanations are in order. First, we discussed the complexity of each module and tried to rethink the subdomain types we decided on in the first chapter. After further investigation, we agreed to stick with what we had originally thought:

  • Contracts is a core type.

  • The rest of our subdomains are still supporting.

Figure: subdomain types

The next step is to focus on the patterns that can be used in our modules - we were not able to make this decision in the first chapter due to a lack of knowledge about our business domain and how it would be used by our customers (see the Project Paradox we described earlier). Thanks to the knowledge we gained, we decided on the following patterns for our modules:

  • In Contracts we will use the Domain Model.

  • For Passes and Offers we have chosen the Active Record pattern.

  • In Reports we will use the Transaction Script.

Figure: subdomain architectural patterns

Let’s translate all of the above into code.

Note
Domain Model and Transaction Script are domain logic patterns and Active Record is a data source architecture pattern. That’s why we’re grouping them all together under the term Patterns. You can see how they are implemented in our solution by looking at the codebase.
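
To give a feel for the difference between these patterns, here is a minimal, hypothetical sketch - the type and member names below are illustrative and do not come from the actual codebase:

```csharp
using System;
using System.Threading.Tasks;

// Domain Model (Contracts): behavior and business rules live inside the entity.
public sealed class Contract
{
    public Guid Id { get; } = Guid.NewGuid();
    public DateTimeOffset? SignedAt { get; private set; }

    public void Sign(DateTimeOffset now)
    {
        // The entity protects its own invariants.
        if (SignedAt is not null)
        {
            throw new InvalidOperationException("Contract is already signed.");
        }

        SignedAt = now;
    }
}

// Active Record (Offers, Passes): a record that knows how to persist itself.
public sealed class Offer
{
    public Guid Id { get; set; }
    public decimal Price { get; set; }

    public Task SaveAsync()
    {
        // INSERT or UPDATE the database row this object represents.
        return Task.CompletedTask;
    }
}

// Transaction Script (Reports): one procedure per use case, no domain objects.
public static class GenerateReport
{
    public static Task<string> ExecuteAsync()
    {
        // Query the data, build the report, return it - all in a single script.
        return Task.FromResult("report");
    }
}
```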

Solution structure

In total there are now over 20 projects. Each module is built using a different set of projects (depending on the selected pattern). This way:

  1. Contracts is built with Api, Application, Core and Infrastructure projects. Additionally, there are unit and integration tests and a project responsible for keeping integration events. You can read more about this decision in the Architecture Decision Log for this chapter.

  2. Offers and Passes are built with Api and DataAccess projects. Additionally, there are integration tests and integration events projects. As these modules have simple or no business logic, we do not need to overcomplicate their structure.

  3. Reports contains only one production code project and integration tests. As this module is a transaction script, we do not need to overcomplicate the structure of the code.

  4. We had to extract common logic that is used by different modules - it made no sense to write everything twice (WET). Examples of the shared logic are the exception middleware, the business rule validation mechanism, and the event bus.

  5. Additionally, we have one project, Fitnet, that is responsible for registering all modules and starting our application (see the sketch below).
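
For illustration, the composition root in Fitnet could look roughly like this - the module registration extension methods are hypothetical, so check the codebase for the actual names:

```csharp
// Program.cs in the Fitnet host project (hypothetical sketch).
var builder = WebApplication.CreateBuilder(args);

// Each module exposes its own registration extension method.
builder.Services.AddContractsModule(builder.Configuration);
builder.Services.AddOffersModule(builder.Configuration);
builder.Services.AddPassesModule(builder.Configuration);
builder.Services.AddReportsModule(builder.Configuration);

var app = builder.Build();

// Each module maps its own endpoints.
app.UseContractsModule();
app.UseOffersModule();
app.UsePassesModule();
app.UseReportsModule();

app.Run();
```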

Note
You can now see how complex the code gets. As an exercise, compare it with the simple structure from the first chapter. In the end, you can ask yourself whether it is worth focusing on such a division for your application's MVP, especially as the requirements will change a lot in the initial phases of development.

Communication

We decided to keep the in-memory queue communication (because we plan to replace it in the third chapter), but it makes sense to think about a more reliable component.

You have several options - e.g. implement an Outbox pattern or integrate a 3rd party component like RabbitMQ. That way, if something goes wrong in the process, you can retry and send the information again.
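
To make the Outbox idea concrete, here is a rough sketch (assuming EF Core; the types below are hypothetical): the integration event is stored in the same database transaction as the business change, and a background worker publishes it afterwards, retrying until it succeeds.

```csharp
using System;

// Hypothetical outbox row, saved atomically with the business data.
public sealed class OutboxMessage
{
    public Guid Id { get; init; } = Guid.NewGuid();
    public string Type { get; init; } = string.Empty;
    public string Payload { get; init; } = string.Empty;
    public DateTimeOffset? ProcessedAt { get; set; }
}

// Inside a use case, a single transaction covers both writes:
//   dbContext.Contracts.Add(contract);
//   dbContext.OutboxMessages.Add(message);
//   await dbContext.SaveChangesAsync();
// A background worker then reads unprocessed rows, publishes them
// to the queue, and sets ProcessedAt - retrying on failure.
```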

The change we have already made in this chapter is to create a separate project for each module that sends an integration event. This way, consuming modules only have to reference this project to use the integration events from the sending module. The disadvantage of this solution is that we are tightly coupling, for example, the Passes module to a project from the Contracts module.
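
For example, the integration events project of the sending module may contain nothing more than the event contract itself - the shape below is illustrative:

```csharp
using System;

// Contracts.IntegrationEvents project: a plain, serializable contract.
// A consuming module (e.g. Passes) references only this project,
// never the internals of the Contracts module.
public sealed record ContractSignedEvent(Guid ContractId, DateTimeOffset SignedAt);
```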

Important
The above problem of an additional project for the module’s integration events can be solved either by extending the in-memory implementation of our queue or by using a third party component, which we will show in the third chapter.

Tests

Compared to the previous chapter, where we only had 2 projects with tests: Fitnet.UnitTests and Fitnet.IntegrationTests, we decided to split them into separate projects for each module. This way, each module has the following structure:

  • SelectedModule.UnitTests

  • SelectedModule.IntegrationTests

Tests are located in solution folders: SelectedModule | Tests | SelectedModule.*Tests.

In addition, tests for common code (such as ExceptionMiddleware) are located in the Common namespace.

Miscellaneous

We have introduced the concept of feature triggers, which can enable and disable any module. This is important because:

  1. We can set the visibility of each module in the production code. This allows us to turn them on or off based on business needs (or subscription levels).

  2. We can ensure that testing a particular module does not require running the entire application. Instead, we can configure it so that, for example, Passes.IntegrationTests sets up an environment that runs only Passes (or 2 modules, if the test requires integration between several).
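
A minimal sketch of how such a trigger could be wired up - the configuration key and the module registration method are hypothetical:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Hypothetical feature trigger, e.g. "Modules:Passes:Enabled": true in appsettings.json.
// The module is registered (and its endpoints mapped later) only when the trigger is on.
if (builder.Configuration.GetValue<bool>("Modules:Passes:Enabled"))
{
    builder.Services.AddPassesModule(builder.Configuration);
}
```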

Note
This step is not required in your application, but is highly recommended - it will help make it as flexible as possible and can reduce the cost of resources needed to run the entire application.

How to run?

Requirements

  • .NET SDK

  • Docker

How to get .NET SDK?

To run the Fitnet application, you will need to have a recent .NET SDK installed on your computer.

Click here to download it from the official Microsoft website.

Run locally

The Fitnet application requires Docker to run properly.

There are only 3 steps you need to follow to start the application:

  1. Make sure that you are in the /Src directory.

  2. Run docker-compose build to build the image of the application.

  3. Run docker-compose up to start the application. It will also start Postgres inside a container.

The application runs on port :8080. Please navigate to http://localhost:8080 in your browser or http://localhost:8080/swagger/index.html to explore the API.

That’s it! You should now be able to explore the application using either of the above URLs. 👍

How to run Integration Tests?

To run the integration tests, go to a module's integration tests project (SelectedModule.IntegrationTests) and run them using either the command:

dotnet test

or the IDE's Test Explorer.

These tests are written using xUnit and require Docker to be running, as they use the Testcontainers package to run PostgreSQL in a Docker container during testing. Therefore, make sure Docker is running before executing the integration tests.
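
For reference, a typical xUnit fixture based on the Testcontainers package looks roughly like the sketch below; the actual fixtures in the codebase may differ:

```csharp
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

// Spins up a throwaway PostgreSQL container for the lifetime of the tests that use it.
public sealed class PostgresFixture : IAsyncLifetime
{
    private readonly PostgreSqlContainer container = new PostgreSqlBuilder().Build();

    public string ConnectionString => container.GetConnectionString();

    public Task InitializeAsync() => container.StartAsync();

    public Task DisposeAsync() => container.DisposeAsync().AsTask();
}
```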