In this article, we will discuss some patterns for migrating from a monolith to microservices, looking at different approaches and the guidelines behind them.
The use of microservices in large architectures is becoming more widespread, thanks in part to:
- the ability to isolate responsibilities,
- ease of cloud deployment,
- single, well-defined responsibilities,
- fault tolerance,
- ease of scaling, etc.
However, when we have a monolithic application and want to approach its possible migration, we typically face the question of where to start: functionalities, services, databases, etc.
Our aim is to explore and explain some of the most common and widely used patterns for migrating an architecture to microservices.
A typical monolithic architecture features tightly coupled code that manages all objects and database access, storing all information in a single database, with all business logic located in the client and server applications. One of the steps to consider when migrating a monolithic architecture to microservices is migrating the database, and it needs to be done in a way that preserves the same functionality while moving the data to different schemas or databases.
We will look at the patterns themselves, and also at some approaches or ways of working that can cause problems.
Big Bang Rewrite
Although doing a Big Bang sounds very appealing, as the name suggests, a Big Bang leads to a big explosion. When we start migrating to a microservices architecture, we often think about refactoring and redoing everything, but this complicates the migration because we neither reuse code nor apply migration patterns. Rewriting means investing a great deal of time redeveloping the code that already exists in the monolith.
If we do a Big Bang, all the development, along with its logic and functionality, has to be built again in our microservices. This means longer development time (and therefore more money), and every new change in the monolith must either be replicated in the microservices or new development must be put on hold, something that will rarely be agreed upon.
Strangler Fig pattern to migrate from Monolith to Microservices
When we talk about patterns to migrate from a monolith to microservices, Strangler Fig is one of the best known and most commonly applied. In my experience, it is also the easiest to use and apply in this kind of migration.
The pattern takes its name from the behavior of a real-life strangler fig. These figs begin life on a branch of a host tree, send their roots down to the ground, and keep growing until they “strangle” the host. Applied to the migration from a monolith to a microservices architecture, the idea is to keep creating microservices until the original software is eliminated.
To make this pattern work, we create new services at the periphery of our monolith. That is, we migrate the outer services with self-contained functionalities to microservices. Let's look at an example:
In the example above, we have a monolith that contains the four functionalities shown in the drawing. To obtain the global position, it is necessary to call Banking Movements, Cards, and Loans. We can consider these last three functionalities the edges or perimeter of our monolith. Therefore, following the Strangler Fig pattern, we could start the migration with the most peripheral services, Banking Movements, Cards, and Loans, and create a microservice for each of them.
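One common way to apply the pattern is to put a facade or gateway in front of the monolith that routes already extracted functionalities to their microservices and everything else to the monolith. Below is a minimal sketch using Spring Cloud Gateway; the route names and service URIs are hypothetical and would need to be adapted to your environment:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StranglerFacadeRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Perimeter functionalities already extracted as microservices
                .route("movements", r -> r.path("/movements/**").uri("http://movements-service:8080"))
                .route("cards", r -> r.path("/cards/**").uri("http://cards-service:8080"))
                .route("loans", r -> r.path("/loans/**").uri("http://loans-service:8080"))
                // Everything not yet migrated keeps going to the monolith
                .route("monolith", r -> r.path("/**").uri("http://legacy-monolith:8080"))
                .build();
    }
}
```

As each functionality is strangled out of the monolith, we add a route for it; when nothing is left, the catch-all route to the monolith can simply be removed.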
Parallel Run pattern to migrate from Monolith to Microservices
Any software we create can fail, and our goal must be to detect and minimize those errors and failures. When we migrate software that is already in production, we have to be even more careful. To avoid problems and test our software as we develop it, we can use Parallel Run as one of the patterns to migrate from a monolith to microservices.
To minimize risks in our migration, we run the new microservice and the monolith functionality in parallel. The monolith remains the source of truth and handles and resolves the requests, but each request is also sent to the microservice, and its response is compared with the monolith's. One way to verify that everything is correct is to store both responses and compare them with a batch process. This way, we can detect and fix errors in our microservice.
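As an illustration, here is a minimal sketch of a Parallel Run in Java; the MovementsClient and ComparisonStore interfaces are hypothetical stand-ins for the monolith call, the microservice call, and the storage that a batch process would read later:

```java
import java.util.List;

// Hypothetical client interface for the "Banking Movements" functionality.
interface MovementsClient {
    List<String> getMovements(String accountId);
}

// Hypothetical sink where both responses are stored for a later batch comparison.
interface ComparisonStore {
    void save(String accountId, List<String> trusted, List<String> candidate);
    void saveFailure(String accountId, Exception error);
}

// Parallel Run: the monolith remains the source of truth; the microservice
// runs in the shadow and its answer is only recorded, never returned.
public class ParallelRunMovements {

    private final MovementsClient monolith;
    private final MovementsClient microservice;
    private final ComparisonStore store;

    public ParallelRunMovements(MovementsClient monolith,
                                MovementsClient microservice,
                                ComparisonStore store) {
        this.monolith = monolith;
        this.microservice = microservice;
        this.store = store;
    }

    public List<String> getMovements(String accountId) {
        List<String> trusted = monolith.getMovements(accountId);
        try {
            List<String> candidate = microservice.getMovements(accountId);
            store.save(accountId, trusted, candidate); // contrasted later by a batch process
        } catch (Exception e) {
            store.saveFailure(accountId, e); // a failing shadow call must not affect clients
        }
        return trusted; // always answer from the monolith, the source of truth
    }
}
```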
Branch by Abstraction pattern
The Branch by Abstraction pattern allows us to extract functionality while the system is in use. This pattern is perfect for cases where that functionality depends on other functionalities or modules.
The idea of this pattern is to allow the existing functionality and the new implementation of that functionality to coexist within the same running process at the same time. To achieve this, we create an abstraction layer over the functionality we want to replace.
This process can be divided into several stages:
- Create the abstraction layer: We create an abstraction layer over the module we are going to call.
- Refactoring: We need to modify the code to call the new layer instead of the old functionality directly.
- Create the microservice: Once we have identified the functionality to replace and created the abstraction layer, the next step is to develop a microservice that provides that same functionality.
- Migration: After creating and testing the new microservice, we gradually switch it in until it handles all requests.
Let’s see these steps starting from the previous drawing in which we will migrate Banking Movements:
Create the microservice:
Connect with the microservice:
Remove old Functionality:
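To complement the drawings, here is a minimal Java sketch of the stages, assuming a hypothetical BankingMovements functionality: the abstraction layer, the legacy implementation, the microservice-backed implementation, and a toggle to switch between them:

```java
import java.util.List;

// Step 1: the abstraction layer over the functionality we want to extract.
interface BankingMovements {
    List<String> movementsFor(String accountId);
}

// Existing implementation, backed by the monolith's code.
class LegacyBankingMovements implements BankingMovements {
    @Override
    public List<String> movementsFor(String accountId) {
        // ... existing monolith logic ...
        return List.of();
    }
}

// Step 3: new implementation that delegates to the Banking Movements microservice.
class MicroserviceBankingMovements implements BankingMovements {
    @Override
    public List<String> movementsFor(String accountId) {
        // ... HTTP call to the new microservice ...
        return List.of();
    }
}

// Step 4: a toggle decides which implementation handles the traffic,
// so we can switch gradually and roll back instantly if needed.
class BankingMovementsFactory {
    static BankingMovements create(boolean useMicroservice) {
        return useMicroservice ? new MicroserviceBankingMovements()
                               : new LegacyBankingMovements();
    }
}
```

Step 2, the refactoring, is simply making all callers depend on the BankingMovements interface instead of the old concrete code; once all traffic goes to the microservice implementation, the legacy class can be deleted.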
A monolith usually has a single database where the data for all functionalities is stored, but in a microservices architecture it is common to have a schema or database per microservice. Therefore, let's look at some database migration patterns for moving from a monolith to microservices.
Patterns for shared databases
Database View pattern
When we come from an architecture in which the database is shared by all services, and we want to keep a shared database in our microservices architecture, creating views is the most appropriate approach. With a view, a service can access the database as if it were its own schema, seeing only the information that belongs to that service. This can be a good approach when migrating from a monolith to microservices, since it allows us to isolate our database and gradually decouple our monolith.
By using views, we can control access to data for our microservice, so we can hide the rest of the data or information that is not required.
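As a simple illustration, the view below exposes only the loan columns that a Loans service needs; the connection URL, credentials, table, and column names are all hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Creates a view that exposes only the data the Loans service may see.
public class CreateLoansView {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/bank", "user", "password");
             Statement statement = conn.createStatement()) {
            statement.execute("""
                    CREATE VIEW loans_service_view AS
                    SELECT loan_id, customer_id, principal, interest_rate
                    FROM loans
                    """);
        }
    }
}
```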
Although this may be a measure to take into account when we migrate to microservices, we have to keep in mind that it has limitations:
- A view is read-only, which already limits its usefulness.
- Many NoSQL databases do not support views.
- Coupling due to the use of the same database.
- We have to scale the entire database.
- Single point of failure.
Database Wrapping Service pattern
When in some of our migrations we find it difficult to create views or work with certain database information, or when the DBA or the data team does not allow migrations or modifications to the database, we can use a Database Wrapping Service as one of the patterns to migrate from a monolith to microservices. This pattern consists of creating a service that encapsulates our database and provides the necessary output.
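A hedged sketch of such a wrapping service with Spring Boot and JdbcTemplate follows; the endpoints, table, and columns are hypothetical. The point is that consumers call the API instead of opening a connection to the database:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Database Wrapping Service: the database is hidden behind this API.
@RestController
@RequestMapping("/movements")
public class MovementsWrappingController {

    private final JdbcTemplate jdbc;

    public MovementsWrappingController(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Read access goes through the wrapper instead of a direct DB connection.
    @GetMapping("/{accountId}")
    public List<Map<String, Object>> movements(@PathVariable String accountId) {
        return jdbc.queryForList(
                "SELECT movement_id, amount, created_at FROM movements WHERE account_id = ?",
                accountId);
    }

    // Unlike a read-only view, the wrapping service can also accept writes.
    @PostMapping("/{accountId}")
    public void addMovement(@PathVariable String accountId, @RequestBody BigDecimal amount) {
        jdbc.update("INSERT INTO movements(account_id, amount) VALUES (?, ?)", accountId, amount);
    }
}
```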
The Database Wrapping Service pattern offers some advantages, such as:
- A single entry point to the database that allows both reads and writes.
- Use of an intermediate service that will allow us to return the data we need.
- Allows the creation of a database consumption API.
- Provides time to readjust or split the database into schemas or multiple databases.
Database as a Service Interface pattern
There are occasions in which what our clients need is simply a database they can query. In those cases, it makes sense to expose the data managed by the service for reading. The key point is to keep two separate databases: the one we expose to the outside and the one that lives within the boundaries of our service.
To implement the Database as a Service Interface pattern, we create a dedicated read-only database that can be accessed from outside our service's boundaries through an exposed API, alongside the internal database that remains within our service, and we update the read-only database whenever changes occur in the internal one. To keep the read-only database up to date, we use an engine that populates it.
Let’s see it with a drawing:
In the diagram above, we see two databases: the internal one in our service, which allows reads and writes, and the external one, which allows read-only access. We also see a key piece of this pattern: the update or mapping engine, which functions as an abstraction layer and updates the exposed database whenever changes occur, maintaining the data consistency of our system.
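As a rough sketch of that mapping engine, the scheduled job below copies recently changed rows from the internal database to the exposed read-only one. Table names are hypothetical, a production system would more likely rely on change data capture than on this simple time-window query, and the two JdbcTemplate beans would need qualified data sources:

```java
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Simplistic mapping engine between the internal and the exposed database.
// Requires @EnableScheduling in the application configuration.
@Component
public class ReadOnlyReplicaUpdater {

    private final JdbcTemplate internalDb;
    private final JdbcTemplate exposedDb;

    public ReadOnlyReplicaUpdater(JdbcTemplate internalDb, JdbcTemplate exposedDb) {
        this.internalDb = internalDb;
        this.exposedDb = exposedDb;
    }

    @Scheduled(fixedDelay = 60_000) // refresh the exposed copy every minute
    public void propagateChanges() {
        List<Map<String, Object>> changed = internalDb.queryForList(
                "SELECT loan_id, status FROM loans WHERE updated_at > now() - interval '2 minutes'");
        for (Map<String, Object> row : changed) {
            exposedDb.update(
                    "UPDATE loans_readonly SET status = ? WHERE loan_id = ?",
                    row.get("status"), row.get("loan_id"));
        }
    }
}
```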
Advantages and disadvantages of the Database as a Service Interface:
- It presents greater complexity than other shared database patterns.
- It offers greater flexibility than using views.
- We isolate reads from writes.
- It is ideal for clients that only need read access.
- It requires greater effort and therefore a greater investment of time than other patterns.
Up to this point, we have seen how to mitigate or reduce effort through guidelines or patterns to migrate from a monolith to microservices when sharing databases. Let's now see how we can divide our database among our microservices.
Data Synchronization with Multiple Databases for Migrating from Monolith to Microservices
When we are migrating our architecture to microservices, and we use, for example, Strangler Fig, we will have both microservices and our monolith coexisting. In this case, the shared database and the databases of our microservices will also coexist.
During the coexistence of both architectures, we will need to synchronize all databases.
To tackle this problem and find the best possible solution, we have to analyze what we need: if we want a shared database, we could create views; if we are facing a Big Bang, we could use a batch process or a data stream, for example with Kafka and KSQL, to perform the migration.
Tracer Write pattern
This pattern allows us to gradually migrate the monolith database, which is our source of truth, to the new database of our microservice. During that time, two databases will exist simultaneously, so we need to handle synchronization to prevent data inconsistencies. New functionality should be developed in the new microservice and use the new database as its source of truth.
The idea of the tracer write pattern is to make incremental releases rather than a single switch covering all functionalities, which could generate many problems and inconsistencies. With each release we migrate more functionality, so we can progressively disconnect clients from the old source of truth and connect them to the new one.
Once we have finished with all the functionalities, it is no longer necessary to keep two data sources, and the old source of truth can be disconnected as soon as all consumers are connected to the new database.
It is necessary to keep in mind that we are going to have two sources of truth, so, on the one hand, we will have to avoid inconsistencies, and on the other hand, we will have to keep the data synchronized even if that means having duplicates.
Another factor we have to anticipate with this pattern is the eventual consistency of the data. When we perform synchronizations, for example with a Change Data Capture (CDC) system, there may be windows of time during which our data is inconsistent, so we will have to evaluate and account for those windows.
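For illustration, a minimal synchronizer could consume the change events that a CDC tool (for example, Debezium) publishes to Kafka and apply them to the microservice's database; the topic name and event handling below are hypothetical:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Consumes CDC change events from Kafka and applies them to the new database.
public class MovementsChangeConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "movements-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("monolith.public.movements")); // hypothetical CDC topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Parse the change event and upsert it into the microservice DB.
                    applyToMicroserviceDatabase(record.value());
                }
            }
        }
    }

    static void applyToMicroserviceDatabase(String changeEventJson) {
        // ... map the event and write it to the new database ...
        System.out.println("Applying change: " + changeEventJson);
    }
}
```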
Dividing the Database to migrate from Monolith to Microservices
When we face the migration from a monolithic architecture to microservices, we may want to tackle breaking up the monolith's database so that each microservice has its own. Since that breakup can get complicated, we can follow some patterns, steps, and suggestions to simplify the work, although dividing a database will always be fairly complex.
To divide the database, the first thing we should consider is whether we want to start by dividing the code and creating our microservices, or start by dividing the database and then move towards code division.
Dividing the code first
The advantage of starting with the code is that we get a clearer idea of the data our microservice will need, so we can then address the data separation better informed. To tackle this task, we can use the Monolith as a Data Access Layer pattern, which is based on creating an API in front of the monolith to obtain data. This way, we can continue developing our microservice and “break” the database later. Alternatively, we can use the Multischema Storage approach, adding new tables or functionalities to new schemas.
Dividing the database first
When we keep our code, that is, our monolith, and decide to divide the database first, we will have to modify our code, since we will now have to run queries against several databases or schemas. So, where we previously used a single query, we will now have to perform several queries and join the results in memory, as the sketch below shows. Despite this inconvenience, we will be able to verify the changes we are making without affecting connected clients, so in case of any errors, we can easily roll back.
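The customers and loans schemas in this sketch are hypothetical; the point is that a single SQL join becomes two queries against two databases plus an in-memory join:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;

// What used to be "SELECT ... FROM customers JOIN loans ..." in one database.
public class InMemoryJoinExample {

    public static void printLoansWithCustomers(JdbcTemplate customersDb, JdbcTemplate loansDb) {
        List<Map<String, Object>> customers =
                customersDb.queryForList("SELECT customer_id, name FROM customers");
        List<Map<String, Object>> loans =
                loansDb.queryForList("SELECT customer_id, principal FROM loans");

        // Index customers by id, then "join" each loan to its customer in memory.
        Map<Object, Object> nameById = new HashMap<>();
        for (Map<String, Object> customer : customers) {
            nameById.put(customer.get("customer_id"), customer.get("name"));
        }
        for (Map<String, Object> loan : loans) {
            System.out.println(nameById.get(loan.get("customer_id"))
                    + " -> " + loan.get("principal"));
        }
    }
}
```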
In addition, we will face issues such as transactionality, since what used to be a single transaction now spans several. To address this, we can use the Repository per Bounded Context pattern, which suggests adding a repository layer, with a framework like Hibernate, to map database objects more easily, as illustrated below.
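As a brief illustration with Spring Data JPA (which uses Hibernate underneath), each bounded context gets its own entities and repository, so the schema behind each repository can later be moved to its own database; entity and repository names are illustrative:

```java
import java.math.BigDecimal;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// Loans bounded context: its own entity and repository.
@Entity
class Loan {
    @Id Long id;
    BigDecimal principal;
}

interface LoanRepository extends JpaRepository<Loan, Long> { }

// Cards bounded context: a separate entity and repository, never queried
// together with Loan, so its tables can be split out later.
@Entity
class Card {
    @Id Long id;
    String maskedPan;
}

interface CardRepository extends JpaRepository<Card, Long> { }
```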
Dividing both code and database simultaneously is obviously possible, but this approach requires much more effort and time, and rolling back in case of problems or errors will also be much more costly.
Conclusion
These have been some suggestions and patterns to migrate from a monolith to microservices, although there are undoubtedly many more, including hybrids of different patterns.
Inevitably, some issues will arise during migration, such as handling transactionality in the database, which will require microservice patterns such as Sagas, or dividing a table across multiple services. Either way, migrating to a microservices architecture can be long and complex, so before starting, we must analyze carefully how we will approach it.
If you need more information, you can leave us a comment or send an email to refactorizando.web@gmail.com. You can also contact us through our social media channels on Facebook or Twitter, and we will be happy to help!