Jason Warner (now: MD @redpoint; previously: CTO @github, @heroku, @Canonical) tweeted his thoughts on microservices, and a fragment of the thread went around as "GitHub's CTO says microservices were a failure." Reading it in fragments like that is really not a good idea, so I put this page together so the whole thread can be read properly.
Each chunk of meaning is given in the order: the original English, then the Japanese translation. The translations change some expressions and cut jokes, giving priority to comprehension when the Japanese is read on its own.
- I'm including Nishio's annotations in this format.
@jasoncwarner: I'm convinced that one of the biggest architectural mistakes of the past decade was going full microservice. On a spectrum of monolith to microservices, I suggest the following: Monolith > apps > services > microservices. So, some thoughts.
I am convinced that one of the biggest architectural mistakes of the past decade has been trying to move to full microservices. On the spectrum from monoliths to microservices, I propose the following: Monolith > Apps > Services > Microservices.
@jasoncwarner: First, these are thoughts, not rules. Anyone that has built a large distributed system knows they don't really work that way and have to adapt with it. Second, stage will be important. If you are reading this at a 5-50 person company…just stick with a monolith. Trust me.
@jasoncwarner: If you are reading this at a 10k person company, likely have all of these to some degree but here is where my quick thoughts might differ from what has been done in the past.
- First, this is a concept, not a rule.
- Anyone who has built a large distributed system knows that it does not work the way it was conceived when it was designed and that adjustments must be made.
- The optimal design is not determined in advance by rules such as "this is what you should do in this case," but is revealed after the fact, based on knowledge gained through actual operation.
- Second, the stage is important.
- If you are reading this at a company of 5-50 people, just stick with a monolith. Trust me.
- If you are a 10,000 person company reading this, you probably have all of these (monolith > app > service > microservice) to some degree!
- but my brief thoughts may differ from what your company has done in the past
Now, diversion. Let's talk definitions. There exist no exact definitions for these at all.
services - supporting apps/monoliths, core infra pieces, needed by lots of surface area, core compliance func, possibly not written by app teams (infra maintains them)
microservices - few hundred lines of code, mostly one-offs, likely could/should be library or SDK
Now, let's talk about definitions. Not all of these have exact definitions!
- Service
- It supports the application/monolith. It is a core part of the infrastructure.
- It is needed across a lot of surface area. (Which "surface area" is this referring to?)
- Core Compliance Functions.
- Possibly not written by the app teams (the infrastructure team maintains them).
- microservice
- It is a few hundred lines of code. Mostly one-offs. It likely could, and probably should, be a library or SDK instead (see the sketch below).
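To make the distinction concrete, here is a minimal, hypothetical sketch (the service name, endpoint, and function are mine, not from the thread) of the kind of "microservice" this definition describes: a few hundred lines that mostly wrap a single function behind HTTP, which is exactly the shape that could just as well ship as a library.

```python
# Hypothetical "color" microservice: one function plus HTTP plumbing.
# Everything except normalize_color is transport overhead that a library
# or SDK would not need.
from flask import Flask, jsonify, request

app = Flask(__name__)

def normalize_color(value: str) -> str:
    """Normalize a hex color like 'FFAA00', '#fa0', or '#ffaa00' to '#ffaa00'."""
    v = value.strip().lstrip("#").lower()
    if len(v) == 3:                       # expand shorthand 'fa0' -> 'ffaa00'
        v = "".join(ch * 2 for ch in v)
    if len(v) != 6 or any(ch not in "0123456789abcdef" for ch in v):
        raise ValueError(f"not a hex color: {value!r}")
    return f"#{v}"

@app.route("/v1/colors/normalize", methods=["POST"])
def normalize_endpoint():
    payload = request.get_json(silent=True) or {}
    try:
        return jsonify({"color": normalize_color(payload.get("color", ""))})
    except ValueError as exc:
        return jsonify({"error": str(exc)}), 400

if __name__ == "__main__":
    # Running this means a deployment, monitoring, and an on-call surface
    # now exist for roughly a dozen lines of actual logic.
    app.run(port=8080)
```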
Ok, with all that out of the way, let's talk why. My thinking basically goes like this.
Speed & risk
A. It's generally easier for entire engineering teams to work inside one big app (think entire site in Rails app) than to reason about the ways in which microservices will fail
B. distributed systems, which you will have as you grow no matter what, are hard enough to reason about without introducing dozens let alone hundreds of microservices that have their own risk profiles
C. as you go fully micro, you need to introduce new concepts to handle the sprawl
D. A big one. Each bespoke infra service or microservice is an extreme version of debt IMV. Code is debt, but services are extreme versions of that. Really think about what that means for you. Prefer your debt to be literal points of highest leverage not nice to haves
Now, let me tell you why. My thinking is basically as follows.
Speed and Risk
- A. It is generally easier to have the entire engineering team working within one large app (e.g., the entire site as a single Rails app) than it is to reason about all the ways microservices can fail.
- B. As you grow, you will end up with a distributed system no matter what, and such systems are hard enough to reason about even before you introduce dozens, let alone hundreds, of microservices, each with its own risk profile.
- C. Once you go fully micro, you need to introduce new concepts just to handle the sprawl.
- That is, you end up needing an extra layer just to manage and wire together the microservices that have multiplied without control.
- D. A big one: in my view, each bespoke infrastructure service or microservice is an extreme form of technical debt. Code is debt, but services are an extreme version of that.
- Think carefully about what that means for you. Prefer to take on debt at your literal points of highest leverage, not for nice-to-haves.
I think a challenge we dist systems engineers have is we really like to not have duplication so we see something being done in a few places and think "let's just pull this out and make a microservice out of it". In theory this is fine. And if done for a few tens of instances, it is fine.
When it goes to several dozen microservices or…way worse…across massive company boundaries (think one color service for all of Microsoft/Google/Apple) it becomes less technical and more org challenge
And I know what I'm presenting to many feels like some false dichotomies but in practice I find there are definite tech challenges with microservices, but there are even more org challenges with them
The challenge we distributed systems engineers have is that we want to avoid duplication. So we look at what is being done in several places and think, "Let's pull this out and make a microservice out of it."
In theory, this is fine. And if it is done for dozens of instances, it is fine.
When it grows to several dozen microservices or, worse, crosses huge corporate boundaries (say, one color service for all of Microsoft/Google/Apple),
- it becomes less of a technical challenge and more of an organizational one.
- Microsoft, Google, and Apple each have their own "color selection" function in their applications. In theory it is correct to argue that "if we carve it out into a microservice, we can avoid code duplication," but in practice each company insists on a design that suits its own needs. Reconciling the conflicting opinions of the stakeholders and reaching a consensus takes political, rather than technical, effort.
- Picture sharing common areas
Many will feel that I am presenting a false dichotomy, but this is my experience in practice: microservices have clear technical challenges, but the organizational challenges are even greater.
And of all the things I worry about it's this
- First, infra (unless company is led by unusually with-it CEO) almost always gets short end of priority stick
- Second, too many services typically leads to a lack of ownership problem and boundary issues
- Third, you introduce even more tooling to deal with too many microservices
And most importantly, each microservice that could/should have been a library or sdk or something introduces production risk. More code is indeed overhead; more services is customer facing prod/experience risk. Both approaches have overhead/risk but the % distribution is diff
And this is what I am concerned about.
- First, infrastructure is almost always a low priority (unless the company is led by a CEO who knows IT very well).
- Second, too many services create ownership and boundary issues.
- It becomes unclear which team has the authority to change which service and where the boundaries lie. Problems arise when someone wants a fix but doesn't know who to ask, or when multiple teams each assume "the other team is responsible for that fix" and the ball is dropped between them.
- Third, more tools will be deployed to deal with too many microservices.
And most importantly, each microservice that could/should have been a library, SDK, or the like brings its own production risk. More code is certainly overhead, but more services are a customer-facing production/experience risk. Both approaches carry overhead and risk, just in different proportions.
So this is typically what I recommend
- Be a monolith as long as possible
- Services start in infra for infra reasons, not app eng typically
- If breaking out mono, break to large apps, not small services
- Think that each new app is a virtual wall in your company
- Prefer libraries to microservices where possible
The classic "we introduced a color service" is my favorite extreme example of where I would choose a library over a service. Yes extreme example but hey, it gets very talked about as quintessential example
So here is what I typically recommend.
- 1: Be a monolith for as long as possible
- 2: Services start on the infrastructure side, for infrastructure reasons. They are typically not initiated by application engineers.
- 3: When breaking up the monolith, break it into large apps rather than small services.
- 4: Think of each new app as a virtual wall within your company.
- Increasing the number of apps creates new "communication barriers" (walls) inside the company, and we should be cautious about that.
- 5: Prioritize libraries over microservices whenever possible
The classic "we introduced a color service" story is my favorite extreme example of choosing a library over a service. It is certainly extreme, but it is very often discussed as the quintessential example. (A sketch of the library alternative follows.)
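Continuing the hypothetical color example from the earlier sketch (the names are mine, not from the thread), this is roughly what the library alternative looks like: the same logic shipped as a versioned package and called in-process, so the app team inherits exceptions and bad releases rather than another production service to keep healthy.

```python
# Hypothetical library alternative to the color microservice: ship the
# function as a versioned package (inlined here so the snippet runs on
# its own) and call it in-process.
def normalize_color(value: str) -> str:
    """Normalize a hex color like 'FFAA00', '#fa0', or '#ffaa00' to '#ffaa00'."""
    v = value.strip().lstrip("#").lower()
    if len(v) == 3:                       # expand shorthand 'fa0' -> 'ffaa00'
        v = "".join(ch * 2 for ch in v)
    if len(v) != 6 or any(ch not in "0123456789abcdef" for ch in v):
        raise ValueError(f"not a hex color: {value!r}")
    return f"#{v}"

# The call site in any app is an ordinary function call: no timeouts,
# retries, service discovery, or separate deployment to page anyone on.
print(normalize_color("#FA0"))  # -> "#ffaa00"
```

The trade-off is the usual one for libraries: consumers pick up fixes on their own deploy schedule instead of instantly, which is the kind of trade-off accepted when preferring libraries to microservices.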
"But Jason, what about Amazon and Uber and ..?"
"But Jason, what about Amazon, Uber, and the rest?"
- Hey, you do you. I'm just saying what I've gone through in experience
- If you have the success of Amazon when that mandate came down for services, go nuts
- These are more guidelines than rules
- Hey, you do your thing. I'm just saying what I've been through.
- If your company is as successful as Amazon was when that services mandate came down, go for it. These are guidelines, not rules.
90% of all companies in the world could probably just be a monolith running against a primary db cluster with db backups, some caches and proxies and be done with it. For the 10% of companies that hit planet scale (no pun intended here Sam) it's gonna be art figuring this out.
Distributed systems combined with scaling companies is so complex and so few people have done it that it's hard to draw specific lessons from those companies. Each context and instance is different. What I'm talking about here is more thoughts on how to approach the problems
90% of the world's companies could probably get away with a monolith backed by a primary DB cluster, DB backups, some caches, and proxies. For the 10% of companies that reach planet scale, figuring this out is going to be an art.
- (the expression "art/artistic")
The combination of distributed systems and scaling companies is so complex, and so few people have done it, that it is hard to draw specific lessons from those companies. Each context and instance is different. What I am talking about here is more a way of approaching the problem.
And going back to this: Monolith > apps > services > microservices. It's basically an approach to scale: be one big thing for as long as possible. Never overcorrect to too small of things, go through it as you grow (even hyper growth). This is for org and tech. Again, it's art
And back to this.
- Monolith > Apps > Services > Microservices
This is essentially an approach to scale:
- Be one big thing for as long as possible.
- Never overcorrect toward things that are too small; go through the stages as you grow (even during hyper growth).
- This is true for both organizations and technology.
Again, this is art.
This page is auto-translated from [/nishio/Jason Warnerとマイクロサービス](https://scrapbox.io/nishio/Jason Warnerとマイクロサービス) using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.