Embarking on a Microservice Journey: Users Team’s Evolution in Software Development

Former chef and soldier turned developer. I love outdoorsmanship and all things technology.

Introduction

In the constantly evolving world of software development, embracing new methodologies and technologies is not just a choice; it’s a necessity. At Mews, we are growing, and we have embarked on a journey that marks a significant shift in our approach to software development: transitioning to our first set of microservices. Strategic decisions, technological advancements, and team dynamics drove this move. In this blog post, I’ll share my experience, challenges, and the rationale behind this pivotal transition from the perspective of the Users team.

The journey begins

Our journey began with a directive from Mews RnD – a shift towards microservice architecture was on the horizon. Concurrently, our platform team was making strides in implementing this approach using the Atlas platform, coupled with Pulumi for infrastructure management and Octopus for deployment. These tools were not just random picks; they were carefully researched to ensure they aligned with our RnD management vision of efficient and scalable software development.

A Mac computer setup showing code
Photo: Christopher Gower via unsplash.com

Why microservices?

The decision to move towards microservices was not trend-driven but a calculated strategy. As our CTO elucidated in a previous blog post (Are we migrating to microservices and should you? | Mews Developers), this approach was about aligning our development practices with the company’s growth in product complexity and team size. With nearly 300 people in RnD and growing, breaking down our monolithic architecture into more manageable microservices was and still is the next logical step. However, our monolith will not be forsaken and forgotten; it will be the hub of the product suite for a long time. What we have at Mews is the ability to split out vertical slices of the monolith where sensible and applicable, allowing us to design autonomously for scale and ownership.

The team’s role: Diverse backgrounds, unified goal

Our team, comprising members with diverse backgrounds and varied levels of experience, faced the challenge of adapting to this new direction. Embracing microservices wasn’t just about learning new technologies; it was a paradigm shift in our thinking and working style. This was particularly crucial for us, as our team is still finding its footing and defining its identity within the larger organization, especially when combined with a shift toward more product-driven development.

This is also one of the headaches of the technology shift: it is challenging to embrace new ways of working and to implement new processes covering everything from planning and development to hosting and monitoring. Microservices change all of that.

The opportunity: User provisioning integration

The opportunity to test our resolve and capabilities came with developing a user provisioning integration feature. This feature, aimed at supporting customers in integrating their users from their Identity Providers (IDPs) into Mews, presented the perfect scenario to implement a microservice. It was a chance to build a low-risk, high-impact service that interfaced with our existing monolithic system. The service would expose a SCIM 2.0 protocol API endpoint and then relay the messages into our monolith. 
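
To make the relay concrete, here is a minimal Python sketch of mapping an incoming SCIM 2.0 User resource (RFC 7643) to an internal user model before forwarding it to the monolith. The internal field names (`email`, `first_name`, ...) are hypothetical, not the real Mews contract.

```python
def map_scim_user(payload: dict) -> dict:
    """Map a SCIM 2.0 User resource to a hypothetical internal user model.

    SCIM 2.0 (RFC 7643) puts the login under ``userName`` and the
    human-readable name under the ``name`` complex attribute. The keys on
    the right-hand side are illustrative placeholders.
    """
    name = payload.get("name", {})
    return {
        "email": payload["userName"],
        "first_name": name.get("givenName", ""),
        "last_name": name.get("familyName", ""),
        "active": payload.get("active", True),
    }

# A typical SCIM 2.0 User payload as an IDP would POST it to /Users.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "ada@example.com",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "active": True,
}
print(map_scim_user(scim_user))
```

The actual service also has to handle the rest of the SCIM protocol surface (PATCH semantics, filtering, error responses per RFC 7644), which is where most of the real work lives.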

Abstract drawing of a sphere representing the monolith
Photo: Aron Visuals via unsplash.com

Owning and monitoring: A responsibility shift

A vital aspect of this transition was the shift in responsibility. In a microservice architecture, teams are not just responsible for building their services; they own them entirely – from development to maintenance, hosting, and monitoring. This ownership fosters a deeper understanding of the service’s impact, cost implications, and scalability needs. It’s about fully comprehending the consequences of each line of code and architectural decision.

Owning the entire slice is one of the most significant mentality changes a product team must face. Even though tooling allows for automatic deployments and monitoring, it doesn’t come with dedicated personnel to do it. Thus, taking ownership can be a pain. Even though the team might be aware of the implications of such a change, fully embracing it can be difficult.

We discussed, as a team, whether we were ready to conquer this together.

Preparing for the future

Moving towards microservices was more than just a technological upgrade; it was about preparing our team for future challenges. It was an exercise in agility, adaptability, and foresight. As we continue to grow and evolve, these qualities will become increasingly crucial in maintaining our edge and delivering value to our customers.

Pushing for the microservice way, I knew we would hit snags regarding development, deployment, and monitoring. What I should have foreseen was how that would impact the team. Everything I worried about was a lot easier to do and implement. Our problems occurred in different areas than I expected. 

Close-up of HTML5 code
Photo: Florian Olivo via unsplash.com

Tooling: The backbone provided by the platform team

GitHub

GitHub still serves as the foundation of our development process. Our repository-per-service setup makes setting up, maintaining, and documenting each service easy.

When requesting a setup for a new service, you provide a few details, and the Mews “Atlas team” sets up everything for you, starting with the GitHub repository.

Keep in mind that naming the service and everything tied to it does have an impact, as some of the setup enforces string-length limits on names.

We learned that the hard way. Luckily, most things can be changed through code after the service is up and running, but some things are difficult to change without setting up a new service.

Atlas orchestrating the infrastructure with Pulumi

From a team perspective, this works out of the box. The platform team provides the infrastructure as code using Pulumi and lets Atlas help us manage and track the configuration of our service in Azure. Atlas gives us a bird's-eye view of our entire infrastructure. The Pulumi side of the equation lets us define our infrastructure programmatically and manage the containerized environment of our microservice in Azure.
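
As a rough illustration of what such infrastructure-as-code looks like, here is a minimal Pulumi Python sketch declaring an Azure Container App. Every resource name, image, and port here is a hypothetical placeholder; the real Atlas-managed setup at Mews differs and is handled by the platform team.

```python
import pulumi
import pulumi_azure_native as azure_native

# A resource group and Container Apps environment for the service.
# Names and images below are illustrative placeholders only.
rg = azure_native.resources.ResourceGroup("user-provisioning-rg")
env = azure_native.app.ManagedEnvironment(
    "user-provisioning-env", resource_group_name=rg.name)

app = azure_native.app.ContainerApp(
    "user-provisioning",
    resource_group_name=rg.name,
    managed_environment_id=env.id,
    configuration=azure_native.app.ConfigurationArgs(
        # Expose the SCIM endpoint via external ingress.
        ingress=azure_native.app.IngressArgs(external=True, target_port=8080),
    ),
    template=azure_native.app.TemplateArgs(
        containers=[azure_native.app.ContainerArgs(
            name="scim-api", image="example.azurecr.io/scim-api:latest")],
    ),
)

# Surface the public hostname as a stack output.
pulumi.export("url", app.configuration.apply(lambda c: c.ingress.fqdn))
```

Being a declarative stack definition, this runs under the Pulumi engine rather than as a standalone script; the point is that the whole containerized environment lives in version-controlled code alongside the service.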

The orchestration was one of my worries before we started, but it has been working flawlessly. However, we have yet to fully exercise the scaling and orchestration of our service.

Seamless deployment with Octopus

Octopus Deploy plays a crucial role in our CI/CD pipeline. Once the code is merged in GitHub and our infrastructure is defined in Pulumi, Octopus takes over to handle the deployment of our microservices. It automates the deployment process, ensuring our service is reliably deployed to Azure Container Apps without manual intervention.

This part of the solution also works smoothly. After a GitHub PR is merged, the change flows automatically into our development environment. Pushing the latest version to demo and prod is then done manually. It could all be automated, but at least to begin with, it is nice that promoting to demo and prod requires a conscious decision.

Photo: John H Oien via octopus.com

Monitoring with New Relic

New Relic as a tool is familiar to Mews, as we use it for all application monitoring. Running our service in New Relic is advantageous: it gets its own Application Performance Monitoring (APM) entry and Services log area with a dedicated dashboard. However, access is restricted, and extracting the right data is a full-fledged knowledge area in itself. Logging and monitoring are two things that are harder to get right than I thought. Next time, I would spend more time defining our needs up front; that would make it much easier to ensure we are logging and monitoring the right things and owning the whole flow as a team.

Photo: John H Oien via newrelic.com

Tooling challenges

Most of the tooling works out of the box, except for a few startup problems where granting the right users proper access everywhere could have been more seamless. However, this is evolving as the platform team gains more experience with the whole process. 

Also, when working with new tools, there is a learning curve that the team needs to embrace. Combining new features in an unknown area while at the same time learning new tools adds to the time to market. 

What I did not foresee and plan for

Consuming internal API endpoints. In a proper micro/macroservice environment, you would not rely on direct API calls. The reason is scalability: the monolith is scaled in a certain way and does not allow for asynchronous, dynamic scaling. Scaling a microservice out to multiple instances will only scale the service, while the monolithic endpoint remains the same. Direct API calls thus force us to scale up, not out, in the same manner as the monolith.

Another thing I did not consider at first is the way our monolith implements authentication and authorization. Thus, two significant considerations had to be made:

Spaghetti wires representing communication between the monolith and services
Photo: JJ Ying via unsplash.com

Firstly, the communication strategy: API vs. message bus

Direct API calls: This method involves the microservice making direct requests to the monolith’s internal APIs. It’s straightforward to implement, offering immediate results for alpha and beta versions of the service. However, it introduces tight coupling between the microservice and the monolith, impacting scalability and resilience.

Message bus/broker with Pub-Sub pattern: This approach provides a more decoupled architecture, where the microservice and the monolith communicate through a message bus. This pattern enhances scalability and resilience but adds complexity and requires more implementation time.

After careful consideration, we opted for the direct API call approach as a starting point. This decision was driven by our goal to roll out alpha and beta versions of the service quickly. While this method provided a faster route to deployment, it might not be the optimal long-term solution, especially concerning resilience and scalability.
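
Because the direct-call approach couples the service to the monolith's availability, it helps to at least isolate that coupling behind a small relay with retries. Here is a minimal Python sketch; the payload shape, the injected `send` callable, and the backoff schedule are illustrative assumptions, not Mews internals.

```python
import time

def relay_with_retry(send, payload, retries=3, sleep=time.sleep):
    """Call ``send(payload)`` (e.g. an HTTP POST to the monolith's internal
    API) with exponential backoff. ``send`` is injected so the coupling to
    the monolith lives in one place and can be faked in tests."""
    for attempt in range(retries):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == retries - 1:
                raise  # exhausted retries: surface the failure
            sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...

# Example: a fake sender that fails twice, then succeeds.
calls = []
def flaky_send(payload):
    calls.append(payload)
    if len(calls) < 3:
        raise ConnectionError("monolith unavailable")
    return 200

status = relay_with_retry(flaky_send, {"event": "user.provisioned"},
                          sleep=lambda s: None)
print(status)  # prints 200 after two simulated failures
```

Note that retries only soften the coupling; if the monolith is down for long, the relay still fails, which is exactly the resilience trade-off mentioned above.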

If all teams take this approach, we will end up with the worst of both monolithic and microservice architectures: what is popularly called a distributed monolith, which I do not recommend, as it isn’t scalable and the dependency mess will be absolute.

Secondly, bridging the gap between the service and the monolith

Having decided to go with direct API calls, we faced the other issue: authentication and authorization, i.e., bridging the gap between the monolith and the service. Obviously, we needed to secure the communication between the service and the monolith. We used an approach introduced earlier in the monolith: a client secret setup with Mews encryption.

This was easy to implement, and since mechanisms for this were already in place, and we had a prototype proving the concept, it was less of a headache than I thought. But if I were to do it again, I would undoubtedly use Azure Managed Identities. The implementation is almost identical, except that your application is assigned an identity with an ObjectId, which you use to request a token instead of using a client ID and secret. Hence, there is no need to store the client secret anywhere! And since all Mews services will use managed identities for orchestration, they do not need to be changed on the infrastructure side.
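
For illustration, here is how a service running on Azure compute can request a token from the Instance Metadata Service (IMDS) without any stored secret; the optional `object_id` parameter selects a user-assigned managed identity. This is the generic Azure pattern, not a description of Mews's actual setup, and the sketch only builds the request rather than sending it.

```python
from urllib.parse import urlencode

# Azure's link-local Instance Metadata Service, reachable only from
# inside the compute resource itself.
IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource: str, object_id: str = ""):
    """Build the IMDS request a managed identity uses to get a token.

    No client secret is involved: Azure authenticates the caller by the
    identity attached to the compute resource. Pass ``object_id`` to pick
    a user-assigned identity; omit it for the system-assigned one.
    """
    params = {"api-version": "2018-02-01", "resource": resource}
    if object_id:
        params["object_id"] = object_id
    # The Metadata header is required so the request cannot be forged
    # via a simple HTTP redirect.
    return f"{IMDS_TOKEN_ENDPOINT}?{urlencode(params)}", {"Metadata": "true"}

url, headers = imds_token_request("https://management.azure.com/")
print(url)
```

A real service would issue this GET (e.g. with `urllib.request` or an Azure SDK credential class) and read the `access_token` field from the JSON response.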

Photo: John H Oien via DALL-E

Oh, a third thing: Securing the microservice

We also needed to secure the microservice itself. The requirements here are specific, as the IDPs mainly want a URI and a token, or a token endpoint in the service, depending on what we want to support (such as adding our application to the Azure Marketplace). We went with the common, easy way: setting up a token and a URI in the Security section at Mews, where you can enable user provisioning. To differentiate the tokens for different customers and IDPs, we added claims to the token, so we can do lookups based on the tokens we receive and validate.
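
As a sketch of that idea (not the actual Mews token scheme), here is a minimal claim-bearing token: the claims travel inside the token body, signed with an HMAC, so the service can validate incoming requests and look up the customer and IDP from the token alone. The secret and claim names are placeholders.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # placeholder; in practice, from a vault per environment

def issue_token(claims: dict) -> str:
    """Issue a provisioning token carrying identifying claims."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def validate_token(token: str) -> dict:
    """Verify the signature and return the embedded claims."""
    body, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid provisioning token")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"customer_id": "c-123", "idp": "azure-ad"})
claims = validate_token(token)
print(claims["customer_id"])  # prints c-123
```

The lookup then keys off `customer_id` and `idp`, so one SCIM endpoint can serve many customers with per-customer tokens.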

For other services, there are further considerations around user authentication: whether a service should use passthrough, whether it should impersonate, or whether something else fits better.

This is still an ongoing discussion in Mews and needs to be evaluated on a service-to-service basis. No single option suits all. 

Key takeaways

My key takeaway is that our framework for vertically slicing our application is at a level where early adopters might want to dive in, as most of the platform is ready. But carefully consider your needs regarding scalability and data concurrency, and consider the security aspect of your service. As long as you clearly understand the impacts of this, I encourage you to try it out.

Pro tip: find your domain’s low-impact, low-risk area and dive in. 

AI-generated image with a lot of boxes
Photo: John H Oien via DALL-E

Conclusion

Our journey towards creating our first microservice has been both challenging and rewarding. It represents a significant step in our evolution as a team and as part of the larger Mews ecosystem. As we move forward, we remain committed to exploring new horizons in software development, always with an eye on delivering exceptional value and experiences to our customers.
