Build Well-Designed Microservice APIs to Avoid Technical Debt
Microservice Technical Debt Problems
In my experience, most of the effort associated with adopting and owning microservices is due to a lack of automation and the repayment of microservice technical debt. Teams moving from a monolith to microservices might discover natural points of segregation in their existing code base. For those working in an object-oriented language, these points typically occur at an interface (in the object-oriented sense of the word, or a protocol), or at a point where an interface should be.
A problem with monolithic code bases is that even segregated components share the same memory. Specifically, they share access to the program itself. It is as easy to code to an interface as it is to code to an implementation. Further, it is as easy for a developer with access to an entire code base to refactor an interface as it is to refactor an implementation. “But wait,” you might be thinking, “making things easy is a positive trait. Why is this a problem?”
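To make the point concrete, here is a minimal Go sketch (the `UserStore` name and its postgres-backed type are hypothetical) showing that, within one codebase, coding against the concrete type is no harder than coding against the interface:

```go
package main

import "fmt"

// UserStore is the contract a consumer should depend on.
type UserStore interface {
	UserName(id int) (string, error)
}

// postgresUserStore is one concrete implementation.
type postgresUserStore struct{}

func (s *postgresUserStore) UserName(id int) (string, error) {
	return fmt.Sprintf("user-%d", id), nil // stand-in for a real query
}

// greetViaInterface depends only on the contract.
func greetViaInterface(s UserStore, id int) string {
	name, _ := s.UserName(id)
	return "hello, " + name
}

// greetViaImplementation reaches for the concrete type directly;
// just as easy to write, but it couples the caller to the postgres-backed type.
func greetViaImplementation(s *postgresUserStore, id int) string {
	name, _ := s.UserName(id)
	return "hello, " + name
}

func main() {
	store := &postgresUserStore{}
	fmt.Println(greetViaInterface(store, 1))
	fmt.Println(greetViaImplementation(store, 1))
}
```

Both calls compile and run, which is exactly why the shortcut is so tempting.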
Interfaces are great. They decouple component users from component implementations, which in turn allows for greater reuse and simplified refactoring. But interfaces are so easy to iterate on in monolithic systems that their value is diminished to the point where people with strong short-term concerns fail to justify their use. In some cases this manifests as an interface lazily extracted from some initial implementation. In other cases component consumers code directly against an implementation. In the worst cases no interface is ever developed. These practices might be entirely justifiable, but in taking these shortcuts the developer has created technical debt that may need to be paid down in the future.
Poorly designed or absent interfaces leak implementation details which, over time, limit your ability to easily iterate on that implementation. Bugs, component-backing databases, and other side effects become part of the effective contract between components. As the number of dependent components increases, so does the complexity of any refactoring effort. Given enough time and adoption of a poorly abstracted component, iteration (even within a single monolithic codebase) can grow into an overwhelming challenge.
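A small, hypothetical Go sketch of how that leakage happens: a component returns values shaped exactly like its backing table, and consumers quietly absorb the schema (and a bug workaround) into the contract.

```go
package main

import "fmt"

// orderRow mirrors the backing database table exactly: column names,
// a legacy flag kept for an old bug workaround, and all.
type orderRow struct {
	ID         int
	CustomerID int
	TotalCents int
	LegacyFlag string // "Y"/"N" column preserved to work around an old bug
}

// FindOrder leaks the storage shape to every caller.
func FindOrder(id int) orderRow {
	return orderRow{ID: id, CustomerID: 42, TotalCents: 1999, LegacyFlag: "N"}
}

func main() {
	o := FindOrder(7)
	// Consumers start branching on LegacyFlag; the bug workaround is now
	// behavior they depend on, and the table can never change cheaply.
	if o.LegacyFlag == "N" {
		fmt.Println("order total (cents):", o.TotalCents)
	}
}
```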
As teams warm to the benefits of microservices it is easy to overlook the drastically different role of their APIs. It is tempting for the uninitiated to say things like, “We *just* need to take our existing code and put it behind a few APIs,” or to make claims like, “Defining APIs is easy.” Developers and teams without experience developing, supporting, or iterating in a distributed system will surely experience the pain of this paradigm shift.
Interfaces, in the object-oriented sense of the word, provide the same system design value as a microservice interface (API). In breaking apart a monolithic project, components that previously shared access to the same memory now rely on distributed API definitions. The components no longer share memory and are separated by network communication. Those subtle differences make refactoring those API definitions more difficult. Whereas before a single developer with access to the entire codebase could refactor and deploy that codebase, now that developer might not have write access to all dependent components, or those components may be on different release cycles. Changing an API in a distributed system can break that system, and discovering the problem can be even more difficult without comprehensive functional and integration testing.
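A minimal sketch (with a hypothetical payload) of how an innocent-looking rename becomes a breaking change once callers sit on the other side of a network boundary:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileV1 is the response shape the client was compiled against.
type profileV1 struct {
	UserName string `json:"user_name"`
}

func main() {
	// The service team renamed user_name to username and deployed on
	// their own release cycle.
	serverResponse := []byte(`{"username":"ada"}`)

	var p profileV1
	_ = json.Unmarshal(serverResponse, &p)

	// In a monolith this rename would be a compile-time error; across a
	// network boundary it is an empty string discovered at runtime, if at all.
	fmt.Printf("user name: %q\n", p.UserName)
}
```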
For the first time, many will learn that a service API contract is rarely well defined by any interface definition language. While most definitions will cover resource names, attributes, methods, and structures, few will detail the expanded concerns introduced by a distributed system. Authentication and authorization, service discovery, latency and availability, request retry, backoff, rate limiting, and request quotas are rarely, if ever, included with API definitions. But the effort required to define and implement those advanced concerns is almost irrelevant if the API itself has not been well defined and requires frequent iteration.
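These operational concerns usually end up encoded in client code rather than in the contract. Below is a minimal, hypothetical Go sketch of a retry-with-backoff policy wrapped around a service call; nothing in a typical API definition would tell an adopter to write it, yet every caller must agree on something like it.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// getWithRetry wraps a plain HTTP GET with a retry and backoff policy that
// the API definition says nothing about.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode != 429 && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("server responded %d", resp.StatusCode)
			resp.Body.Close()
		}
		// Exponential backoff with a little jitter before the next attempt.
		backoff := time.Duration(100*(1<<i))*time.Millisecond +
			time.Duration(rand.Intn(100))*time.Millisecond
		time.Sleep(backoff)
	}
	return nil, errors.New("giving up after retries: " + lastErr.Error())
}

func main() {
	// The service URL is hypothetical; the point is the policy around the call.
	if _, err := getWithRetry("http://orders.internal/v1/orders/7", 3); err != nil {
		fmt.Println(err)
	}
}
```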
In migrating from a monolithic service to microservices, it is a good idea to start by clearly defining the microservices among the components in that project. Take a few sprints to harden interfaces, or to define them where they are missing. Take a client perspective and make sure that use-cases are covered. Do all of this while you’re still running in a monolithic codebase and iteration is simple.
Identify state management abstractions, keep them distinct from business logic, and note components with different scaling concerns. In data processing or control loop applications, identify the distinct actors in the system and their channels of communication. Some combination of these things will emerge as the distinct microservices in your new distributed system.
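For control-loop or data-processing code, the actors and their channels are often already visible in the program structure. In the toy Go sketch below, the producer and processor communicate only through one channel, which marks a natural candidate seam for a future service boundary.

```go
package main

import "fmt"

type event struct{ ID int }

// produce is one actor: it emits events and knows nothing about processing.
func produce(out chan<- event) {
	for i := 0; i < 3; i++ {
		out <- event{ID: i}
	}
	close(out)
}

// process is a second actor: it consumes events from its one channel.
func process(in <-chan event, done chan<- struct{}) {
	for e := range in {
		fmt.Println("processed event", e.ID)
	}
	close(done)
}

func main() {
	events := make(chan event)
	done := make(chan struct{})
	go produce(events)
	go process(events, done)
	<-done
}
```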
Move forward iteratively and extract each new service one at a time. With each iteration identify pain points and introduce automation to ease that pain in future iterations. Establish monitoring and alarming practices early and iterate on them often using automated tools. It can be tempting to skip this kind of automation, but doing so is another form of microservice technical debt. As you scale from one service to a few hundred, that debt will come due eventually, and the interest-only payments will arrive in the form of slower release cycles or attrition.
At some point during or after a migration, a new component will be developed. Fresh from the migration experience, a team will have begun to think about interfaces differently, but the challenge and allure of feature implementation often pushes interface definition down to a second-class concern. Up-front API definition can be seen as a blocker to progress, and its advocates as pushers of waterfall software design. Nothing could be further from the truth.
Schedule the delivery of a microservice interface at the beginning of a project. In doing so you’ve unblocked adopters, who can begin to code against the API. That adopter activity will help uncover missed use-cases or gaps early in the project.
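What that first deliverable can look like, sketched in Go for a hypothetical catalog service: nothing but the contract, yet enough for adopters to start writing and compiling client code.

```go
package catalog

import "context"

// Product is the shape adopters will consume.
type Product struct {
	ID    string
	Name  string
	Price int64 // price in cents
}

// Catalog is the contract delivered in the first sprint. Adopters code
// against this while the team iterates on implementations behind it.
type Catalog interface {
	GetProduct(ctx context.Context, id string) (Product, error)
	ListProducts(ctx context.Context, limit int) ([]Product, error)
}
```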
Next, schedule delivery of a stub implementation. Stubs typically require low effort to develop and help validate client use-cases on the system being developed. Lessons learned will reinforce confidence in defined interfaces or inform early iterations.
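Continuing the hypothetical catalog sketch, a stub is little more than canned answers behind the same interface:

```go
package catalog

import "context"

// StubCatalog satisfies the Catalog contract above with fixed content and
// never fails; it is cheap to build and good enough to validate client use-cases.
type StubCatalog struct{}

func (StubCatalog) GetProduct(ctx context.Context, id string) (Product, error) {
	return Product{ID: id, Name: "placeholder", Price: 100}, nil
}

func (StubCatalog) ListProducts(ctx context.Context, limit int) ([]Product, error) {
	return []Product{{ID: "p-1", Name: "placeholder", Price: 100}}, nil
}
```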
With the API and stub in place, the team is free to begin iterating on the functional implementation. Roll out the real implementation where the stub was initially launched. Doing so will present adopters with another opportunity to discover cases where they were dependent on stub-specific implementation details (like fixed content).
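Sticking with the same hypothetical sketch, swapping the stub for a real implementation is a wiring change behind the interface; clients that leaned on the stub's fixed content are the ones that surface during the cutover.

```go
package catalog

import "context"

// postgresCatalog stands in for the real, database-backed implementation.
type postgresCatalog struct{}

func (postgresCatalog) GetProduct(ctx context.Context, id string) (Product, error) {
	// Real query elided; answers now vary with live data instead of fixed content.
	return Product{ID: id, Name: "live product", Price: 2499}, nil
}

func (postgresCatalog) ListProducts(ctx context.Context, limit int) ([]Product, error) {
	return []Product{{ID: "p-99", Name: "live product", Price: 2499}}, nil
}

// NewCatalog is the single wiring point where the stub gives way to the real
// implementation; callers that only used the Catalog contract need no changes.
func NewCatalog(useStub bool) Catalog {
	if useStub {
		return StubCatalog{}
	}
	return postgresCatalog{}
}
```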
Without initial development of the API, integrations take on a more burdensome, waterfall-like development cycle. Implementation-first service design makes unvalidated assumptions about the use-cases, and iterating on the eventually exposed interface requires significantly more rework.
At the end of the day, each of us is empowered to define our own success criteria. Microservice adoption does not *require* an interface-first development methodology, well-defined interfaces, or even a specific degree of component isolation. I’ve known teams that throw caution to the wind and proceed without (what I would consider) adequate automation, functional testing, or interface forethought. I’ve known teams that have no intention of iterating, and the prevailing mood is usually that they will simply accept the cost of doing so if the need ever materializes.
Microservice adopters are as free as anyone else to amass microservice technical debt, and only the adopter can determine whether the long-term cost is worth the short-term time savings. It is my hope that this article has provided some transparency into common debt trade-offs, their long-term cost, and strategies for paying that debt down.