Guest Journey breaks free from the monolith on a budget

Senior Backend Engineer from Guest Team, Gamer, and Sport Addict.

Just like the User team, we in Guest Experience started our journey towards microservice architecture last year. It took us almost six months of research, experimentation, and innovation until we reached our first milestone — running a single page of our Booking Engine inside a container with its own runtime, independent of the rest of the system, in production. This was a significant moment for our team and the entire company because it was the first production service running on our Atlas platform. 

After a brief celebration, we got back to work. The next step was to do it again but on a larger scale — getting the entire Guest Portal app out of the monolith. This is still a work in progress, but at the time of writing this article, we’ve completed most of the heavy lifting and made key architectural decisions. In the following paragraphs, I will walk you through our journey. 

A bit of context 

This migration is just a part of a bigger initiative. The idea is to not only move towards a more flexible architecture but also to reimagine the entire application from the product POV. We want to showcase the new product at a conference, so we also have a date by which everything needs to be ready. 

This puts an interesting spin on a well-known problem. We are not just building a system using microservice architecture but doing so in a limited time frame. This, of course, influences the decisions we are making. We are no longer asking “How will we build this?” but “How will we build this so we can easily improve it later?” and “How are we going to do this in three days?”. 

First steps 

We couldn’t just rewrite everything from scratch. This initiative was just one aspect of our roadmap. We still needed to deliver new features and support the existing functionality. Because of this, we needed to take a more pragmatic approach.  

The first thing we wanted to do was ensure we could introduce a new layer between our frontend client and the monolith API. By focusing purely on the technical details of communication between individual parts of the system, we were able to resolve most web server configuration issues early on. This included things like cookie configuration, HTTP header configuration, and cross-origin requests. We didn’t want to focus on the business logic and architecture of our new service at this stage, so the only functionality, for now, was forwarding client calls to the monolith and sending the response back to the client. This proxy was enough to verify that all parts of the system could communicate properly. 

Using new service as a proxy
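
The proxy stage described above can be sketched as a small pure function. This is an illustrative sketch, not Mews's actual code: the upstream address, header set, and `ProxiedRequest` type are all assumptions.

```python
from dataclasses import dataclass

MONOLITH_BASE_URL = "https://monolith.internal"  # assumed upstream address

# Hop-by-hop headers must not be forwarded to the upstream (per RFC 9110);
# everything else, including cookies and CORS-related headers, passes through.
HOP_BY_HOP = {"connection", "keep-alive", "transfer-encoding", "upgrade"}

@dataclass
class ProxiedRequest:
    method: str
    url: str
    headers: dict
    body: bytes

def build_upstream_request(method: str, path: str, headers: dict, body: bytes) -> ProxiedRequest:
    """Forward the client call to the monolith unchanged, dropping only
    hop-by-hop headers."""
    forwarded = {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}
    return ProxiedRequest(method, MONOLITH_BASE_URL + path, forwarded, body)
```

Because the function does nothing but forward, it is an ideal vehicle for flushing out cookie, header, and cross-origin configuration issues before any business logic exists.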

Working with data 

The most significant challenge we are facing is our monolith database. It serves as the sole source of data. Due to time constraints, we couldn’t implement a service-specific database that could be synchronized with the main one. While eventual consistency and asynchronous communication are the preferred approaches for reducing the risk of building a distributed monolith and enhancing architectural flexibility, at the time we began developing our service, there was only a single database. There were no events to listen to and update our copy of the data. Everything operated entirely synchronously. 

With limited time at our disposal, we couldn’t afford to spend several months developing internal infrastructure to support this approach. Instead, we opted to create an internal API that functions as a wrapper for the monolith database. 

We recognized from the outset that this API is merely a stepping stone enabling quick actions, and we made efforts to hide this method of communication from the rest of our service. 

Reading data 

Our new service calls the internal API whenever it needs data. The API endpoints are highly granular, each fetching as little data as possible in a single call. This allows us to make them as performant and reusable as possible. For instance, to retrieve a complete reservation, you would need to fetch the reservation itself, the associated products, and the reservation members separately with three API calls. 
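
The reservation example might look like the sketch below. The endpoint paths and field names are hypothetical; the point is that the service, not the API, composes the granular reads into a full picture.

```python
def get_reservation_detail(api, reservation_id):
    """Compose a complete reservation from three granular internal-API reads."""
    reservation = api.get(f"/reservations/{reservation_id}")
    products = api.get(f"/reservations/{reservation_id}/products")
    members = api.get(f"/reservations/{reservation_id}/members")
    return {**reservation, "products": products, "members": members}
```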

This approach enables us to shift all the business logic related to the Guest Experience part of the system to our new service; logic that does not belong to our domain still stays in the monolith. In this case, the internal API simulates a service instead of a database. 

Writing data 

The granular API endpoint approach works well for retrieving data but isn’t suitable for write operations, primarily due to transactions. Executing complex business operations step-by-step and committing changes in the monolith database after each internal API call is inherently flawed. If a single API call in this sequence fails, reverting previous changes becomes impossible, leaving the database in an inconsistent state. Naturally, this problem has several solutions commonly used in microservices architecture, such as Saga or Choreography patterns. However, as you might have guessed, we didn’t have the time to implement any of these. 

In the context of our system, it would mean adjusting the entirety of the business layer inside the monolith to be able to support these distributed approaches. Instead, we opted to expose the entire business operation via our internal API, again simulating a service. While not a definitive solution, this approach ensured that we could call operations outside our domain, maintain data consistency and build our own business logic on top of it.  

Using internal API to work with data
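
The contrast between the two write styles can be sketched as follows. A check-in operation and its endpoints are used as a hypothetical example here; the names are not Mews's actual API.

```python
def check_in_granular(api, reservation_id, member_ids):
    # Flawed: each call commits separately in the monolith database,
    # so a failure halfway through the loop cannot be rolled back.
    api.post(f"/reservations/{reservation_id}/state", {"state": "CheckedIn"})
    for member_id in member_ids:
        api.post(f"/members/{member_id}/check-in", {})

def check_in(api, reservation_id, member_ids):
    # Preferred: expose the whole business operation as one endpoint,
    # so it runs inside a single transaction in the monolith.
    return api.post(f"/reservations/{reservation_id}/check-in",
                    {"memberIds": member_ids})
```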

A better look at the API 

So far, I’ve covered how we are using an API to get the data to the service and invoke business operations. Now let’s look at some implementation details. 

The API is intended only for internal use. Its existence is not visible to the rest of the world, and nobody except Mews services is able to use it.


We also needed to address user action authorization. We moved all application logic, including authorization logic, into the service. We are confident in our authorization logic, but from the monolith's POV, the internal API allows every possible operation to every caller! This could have catastrophic consequences once we introduce other services whose teams weren't part of the internal API development: they might easily assume the API handles authorization internally. 

This convinced us to properly authorize all actions on the API side as well. To avoid performing the authorization twice (both at the service level and inside the monolith), we added an extra parameter to all API calls requiring some kind of authorization. This parameter indicates to the server whether authorizing the action is necessary and reminds the caller that authorization can be skipped if it’s already performed on the service level. At the same time, it encourages developers to think about the authorization aspect of the feature they’re building on top of the API. The authorization on the monolith side is required by default to prevent skipping it by accident.
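
The explicit authorization switch might be sketched like this. The enum values, the `user_may_edit` check, and the endpoint are all illustrative assumptions; the essential property is that authorization defaults to required, so skipping it must be a deliberate choice.

```python
from enum import Enum

class Authorization(Enum):
    REQUIRED = "required"                        # default: the API authorizes
    PERFORMED_BY_CALLER = "performed-by-caller"  # caller already authorized

def user_may_edit(user, reservation_id):
    # Stand-in for the monolith's real authorization check.
    return reservation_id in user.get("editableReservations", [])

def update_reservation(reservation_id, changes, user,
                       authorization=Authorization.REQUIRED):
    # Defaulting to REQUIRED prevents skipping authorization by accident;
    # the parameter also nudges callers to think about authorization at all.
    if authorization is Authorization.REQUIRED and not user_may_edit(user, reservation_id):
        raise PermissionError("user may not edit this reservation")
    return {"id": reservation_id, **changes}
```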


Because we know that using the internal API to read and write data will only get us so far, we needed to prepare for the moment when each microservice has its own database. Making our business logic as independent as possible from the data layer was something we did not want to underestimate. 

To prevent internal API usage from spreading throughout our new service, we employed the Repository pattern. Each API call is wrapped in a repository method and hidden behind an interface. This will allow us to replace repository implementations easily, while everything using the repository will remain unchanged. 

Internal API hidden behind repositories

The repository methods return business objects specifically designed for our domain, which may differ from how entities are represented in other parts of the system. This level of abstraction makes the implementation of our business operations truly independent. It’s also easier to work with than a general-purpose representation of entities that needs to support all use cases within the monolith; we often don’t need all the data an entity contains and simply ignore the rest. 

We are using two types of repositories: read repositories and command repositories. We’ve decided to use the CQRS pattern to keep our data layer flexible. Currently, everything calls a single database through the internal API, so we are not gaining any immediate advantage. Once there are multiple databases and multiple services, we will have the option to scale them independently. 
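
Put together, the repository and CQRS ideas might look like this sketch (assuming Python's `typing.Protocol` for the interfaces; the domain object and endpoint are hypothetical).

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Reservation:
    # Domain object designed for Guest Experience, not the monolith entity.
    id: int
    state: str

class ReservationReadRepository(Protocol):
    def get(self, reservation_id: int) -> Reservation: ...

class ReservationCommandRepository(Protocol):
    def check_in(self, reservation_id: int) -> None: ...

class InternalApiReservationReadRepository:
    # Today's implementation wraps the internal API; a future one can hit a
    # service-owned database without touching any business logic.
    def __init__(self, api):
        self._api = api

    def get(self, reservation_id: int) -> Reservation:
        data = self._api.get(f"/reservations/{reservation_id}")
        # Map the monolith representation to our domain object,
        # ignoring fields our domain does not need.
        return Reservation(id=data["id"], state=data["state"])
```

Splitting reads and commands into separate interfaces costs little now, but it is what later allows the read and write sides to be scaled and stored independently.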

Adding a feature 

What does feature development look like inside our service? When we combine all the above, we get this process: 

  1. Expose data and business operations via the internal API. 
  2. Make sure the service can call new internal API endpoints. 
  3. Create repository abstraction on top of the API. 
  4. Build business logic on top of the repository. 
  5. Expose the service business logic to the rest of the world through a service API. 
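
The last two steps of that process can be condensed into a toy slice: business logic built on a repository (step 4) and a service API handler exposing it (step 5). All names here are illustrative, not actual Mews code.

```python
def get_guest_summary(repo, reservation_id):
    # Step 4: business logic built on top of the repository abstraction;
    # it knows nothing about the internal API behind the repository.
    reservation = repo.get(reservation_id)
    return {"reservationId": reservation["id"],
            "isCheckedIn": reservation["state"] == "CheckedIn"}

def handle_get_guest_summary(repo, reservation_id):
    # Step 5: the service API layer exposing the business logic to the world.
    return 200, get_guest_summary(repo, reservation_id)
```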

We’ve already covered how the internal API works and how we are using the Repository pattern to decouple new business logic from the monolith as much as possible. Now, let’s look at another aspect of introducing new features: how to structure the code inside the service. 

This might sound trivial, but have you noticed we haven’t mentioned anything besides our new service and the monolith? That’s because nothing else exists. This means we now have two big runtime units, which doesn’t exactly scream microservice architecture. To take another step towards microservices, we need to divide our big experimental service into smaller services, each covering a specific domain. Properly structuring the code inside the service can make this split quite straightforward. 

In our case, we were able to identify parts of our app that are prime candidates for a microservice, such as the authentication module or check-in module. There’s also an API layer exposing the service business logic, which can be turned into a backend-for-frontend service. All these components should reside within their own project and clearly define their dependencies on other projects. This will allow us to take each project, add an API for communication with the outside world, place it inside a GitHub repository and deploy it as a separate service. 

We also have some shared code that we don’t want to maintain in each service. This code is a great candidate for a library of our own that can be shared across multiple services. Admittedly, a shared library ties services together in a way that contradicts the microservice paradigm, but given that we don’t expect the code inside the library to change often (the logging mechanism or basic API layer implementation), we prefer it to multiple copies of the same code in each service. 

Example of splitting the code into modules


This is how we’ve built the first iteration of our new architecture. We’ve had to make compromises where appropriate while ensuring that the foundation we were introducing was solid and that we could keep building on top of it. There’s still a lot of work to do, but I believe we chose the right direction given our circumstances. 

I hope this blog post will save you a couple of headaches when you find yourself in a similar situation. Our architecture journey is far from over, so keep an eye out for more articles like this in the future! 
