From Spring Microservices in Action by John Carnell
This article introduces Spring Cloud and gives an overview of writing microservices for cloud deployment.
The one constant in the field of software development is that we, as software developers, sit in the middle of a sea of chaos and change. We all feel the churn as new technologies and approaches suddenly appear on the scene, causing us to re-evaluate how we build and deliver solutions for our customers. One example of this churn is the rapid adoption by many organizations of building applications using microservices. Microservices are distributed, loosely coupled software services that carry out a small number of well-defined tasks.
Why change the way we build applications?
We’re at an inflection point in history. Almost all aspects of modern society are wired together via the Internet. Companies that used to serve local markets are suddenly able to reach out to a global customer base. However, with a larger global customer base also comes greater global competition. These competitive pressures mean the following forces are impacting the way developers have to think about building applications:
- Complexity has gone way up. Customers expect that all parts of an organization know who they are. “Siloed” applications that talk to a single database are no longer the norm. Today’s applications need to talk to multiple services and databases, residing not only inside a company’s data center but also with external service providers over the Internet.
- Customers want faster delivery. Customers no longer want to wait annually for the next release or version of a software package. Instead, they expect the features in a software product to be unbundled, allowing new functionality to be released without having to wait for an entire product release.
- Performance and scalability. Global applications make it extremely difficult to predict how much transaction volume is going to be handled by an application, and when that transaction volume is going to hit. Applications need to be able to scale up across multiple servers quickly, and scale back down when the volume needs have passed.
- Customers expect their applications to be available. Customers are only one click away from a competitor, which means that a company’s applications must be highly resilient. Failures or problems in one part of the application shouldn’t bring down the entire application.
To meet the expectations listed above, we as application developers must embrace the paradox that building highly-scalable, highly-redundant applications necessitates breaking our applications into small services that can be built and deployed independently of one another. If we “unbundle” our applications and move them away from a single monolithic artifact, we can build systems that are:
- Flexible. Decoupled services can be composed and rearranged to quickly deliver new functionality. The smaller the unit of code, the less complicated it is to change and the less time it takes to test and deploy.
- Resilient. Decoupled services mean an application is no longer a single “ball of mud” where degradation in one part of the application causes the whole application to fail. Failures can be localized to a small part of the application and contained before the entire application experiences an outage. This also enables applications to degrade gracefully in case of an unrecoverable error.
- Scalable. Decoupled services can easily be distributed horizontally across multiple servers, making it possible to scale the features/services appropriately. With a monolithic application, where the logic for the application is intertwined, the entire application needs to scale, even if only a small part of the application is the bottleneck. Scaling on small services is localized and cost-effective.
With these benefits in mind, let’s begin our discussion of microservices by looking at the environment where they’re most often deployed: the cloud.
What exactly is the cloud?
The term “cloud” has become overused. Every software vendor has a cloud and everyone’s platform is cloud-enabled, but if you cut through the hype there are three basic models of cloud-based computing. These are:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
To better understand these concepts, let’s map the everyday task of making a meal to the different models of cloud computing. When we want to eat a meal, we have four choices:
- We can make the meal at home.
- We can go to the grocery store and buy a pre-made meal that we must heat up before serving.
- We can get a meal delivered to our house.
- We can drive our car to eat at a restaurant.
Figure 1 shows each model.
Figure 1 The different cloud computing models come down to who is responsible for what: the cloud vendor or you
The difference between these options is who’s responsible for cooking these meals and where the meal is cooked. In the on-premise model, eating a meal at home requires you to do all of the work, using your own oven and ingredients already in the home. A store-bought meal is like using the Infrastructure as a Service (IaaS) model of computing. We’re using the store’s chef and oven to pre-bake the meal, but we’re still responsible for heating the meal and eating it at our house (and cleaning up the dishes afterwards).
In a Platform as a Service (PaaS) model we still have some responsibilities for the meal, but we further rely on a vendor to take care of the core tasks associated with making it. For example, in a PaaS model, you supply the plates and furniture, but the vendor provides the oven, the ingredients, and the chef to cook the meal. In the Software as a Service (SaaS) model, we go to a restaurant where all the food’s prepared for us. We eat at the restaurant and we pay for the meal when we’re done. We also have no dishes to prepare or wash.
The key item at play in each of these models is one of control: who’s responsible for maintaining the infrastructure and what are the technology choices available for building the application? In an IaaS model, the cloud vendor provides the basic infrastructure, but you’re accountable for selecting the technology and building the final solution. On the other end of the spectrum, with a SaaS model, you’re a passive consumer of the service provided by the vendor and have no input on the technology selection or any accountability to maintain the infrastructure for the application.
Why the cloud and microservices?
One of the core concepts of a microservice-based architecture is that each service is packaged and deployed as its own discrete and independent artifact. Service instances should be able to be brought up quickly, and instances of the service should be indistinguishable from each other.
As a developer writing a microservice, sooner or later you’re going to have to decide whether your service is going to be deployed to a:
- Physical Server. You can build and deploy your microservices to a physical machine, but few organizations do this because physical servers are constrained. You can’t quickly ramp up the capacity of a physical server and it can become extremely costly to scale your microservice horizontally across multiple physical servers.
- Virtual Machine Images. One of the key benefits of microservices is being able to quickly start up and shut down microservice instances in response to scalability and service failure events. Virtual machines are the heart and soul of the major cloud providers. A microservice can be packaged up in a virtual machine image. Multiple instances of the service can then be quickly deployed and started in either an IaaS private or public cloud.
- Virtual Container. Virtual containers are a natural extension to deploying your microservices on a virtual machine image. Rather than deploying a service to a full virtual machine, many developers deploy their services as Docker containers (or equivalent container technology) to the cloud. Virtual containers run inside a virtual machine; using a virtual container, you can segregate a single virtual machine into a series of self-contained processes that share the same virtual machine image.
For this article, all the microservices and corresponding service infrastructure are deployed to an IaaS-based cloud provider. This is the most common deployment topology used for microservices, for several reasons:
- Simplified Infrastructure Management. IaaS cloud-providers give you the ability to have the most control over your services. New services can be started and stopped with simple API calls. With an IaaS cloud solution you only pay for the infrastructure that you use.
- Massive horizontal scalability. IaaS cloud providers allow you to quickly start one or more instances of a service. This capability means you can scale services quickly, and allows you to quickly route around misbehaving or failing servers.
- High redundancy through geographic distribution. By necessity, IaaS providers have multiple data centers. By deploying your microservices using an IaaS cloud provider, you can gain a higher level of redundancy than using a cluster in a single data center.
The services built in this article are packaged as Docker containers. One of the reasons I chose Docker is that, as a container technology, Docker is deployable to all major cloud providers.
Microservices are more than just writing the code
The concepts around building individual microservices are easy to understand, but running and supporting a robust microservice application (particularly when running in the cloud) involves more than writing the services code. It involves having to think about how your services are going to be:
- Right-sized. How do we ensure that our microservices are properly sized so that we don’t have a microservice take on too much responsibility? Remember, a properly sized service allows us to quickly make changes to an application and reduces the overall risk of an outage to the entire application.
- Location transparent. How do we manage the physical details of service invocation when, in a microservice application, multiple service instances can quickly start and shut down?
- Resilient. How do we protect our microservice consumers and the overall integrity of our application by routing around failing services and ensuring that we take a “fail-fast” approach?
- Repeatable. How do we ensure that every new instance of our service brought up is guaranteed to have the same configuration and code base as all the other service instances in production?
- Scalable. How does using asynchronous processing and events minimize the direct dependencies between our services and ensure that we can gracefully scale our microservices?
This article takes a patterns-based approach as we answer the questions above. With a patterns-based approach, we lay out common designs that can be used across different technology implementations. We’ve chosen Spring Boot and Spring Cloud to implement the patterns used in this article, but there’s nothing to keep you from taking the concepts presented here and using them with other technology platforms. Specifically, we cover the following four categories of patterns:
- Core Microservice Development Patterns
- Microservice Routing Patterns
- Microservice Client Resiliency Patterns
- Microservice Build/Deployment Patterns
Let’s walk through these patterns in more detail.
Core microservice development pattern
The core microservice development patterns deal with the basics of building a microservice. Figure 2 highlights the topics we’ll cover around basic service design.
Figure 2 – In building a simple service, we must look beyond writing the code and think about how the service will be consumed (the interface), how the service will be communicated with (the communication protocol) and how we’ll manage the configuration of the service as it’s deployed to different environments
- Service Granularity. How do you approach decomposing a business domain down into microservices to ensure that each microservice has the right level of responsibility? Making a service too coarse-grained, with responsibilities that overlap into different business problem domains, makes the service difficult to maintain and change over time. Making the service too fine-grained increases the overall complexity of the application and turns the service into a “dumb” data abstraction layer with no logic other than what’s needed to access the data store.
- Communication Protocols. How will developers communicate with your service? JSON is the ideal choice for microservices and has become the most common choice for sending and receiving data to and from microservices.
- Interface Design. What’s the best way to design the service interfaces that developers will use to call your service? How do you structure your service URLs to communicate service intent? What about versioning your services? A well-designed microservice interface makes using your service intuitive.
- Configuration Management of Service. How do you manage the configuration of your microservice so that, as it moves between different environments in the cloud, you never need to change the core application code or configuration?
- Event processing between services. How do you decouple your microservice using events that allow you to minimize hardcoded dependencies between your services and increase the resiliency of your application?
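The configuration-management idea above can be sketched in a few lines of plain Java: the same deployed artifact reads its settings from the environment it lands in, so the core application code never changes between environments. The variable names (`PROFILE`, `DB_URL`) and defaults here are hypothetical illustrations, not values from the article; a real Spring application would typically externalize these through Spring Cloud Config rather than hand-rolled lookups.

```java
// Minimal sketch of externalized configuration: one artifact, many
// environments. Settings come from environment variables with sensible
// local defaults, so promoting the artifact requires no code change.
public class ExternalizedConfig {

    // Resolve a setting from an environment value, falling back to a default
    // when the environment doesn't supply one (e.g. on a developer laptop).
    static String resolve(String envValue, String defaultValue) {
        return (envValue == null || envValue.isEmpty()) ? defaultValue : envValue;
    }

    public static void main(String[] args) {
        String profile = resolve(System.getenv("PROFILE"), "dev");
        String dbUrl = resolve(System.getenv("DB_URL"), "jdbc:h2:mem:local");
        System.out.println("profile=" + profile + " dbUrl=" + dbUrl);
    }
}
```

The point of the sketch is the direction of the dependency: the environment pushes configuration into the service, rather than the service carrying per-environment values in its code base.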
Microservice routing patterns
The microservice routing patterns deal with how a client application that wants to consume a microservice discovers the location of the service and is routed to it. In a cloud-based application, you might have hundreds of microservice instances running. This means you’ll need to abstract away the physical IP addresses of these services and have a single point of entry for service calls, so that you can consistently enforce security and content policies for all service calls.
Figure 3 – Service discovery and routing are key parts of any large-scale microservice application.
Service discovery and routing answer the question of “how do I get my client’s request for a service to a specific instance of a service?”
- Service Discovery. How do you make your microservice discoverable to client applications without having the location of the service hardcoded into the application? How do you ensure that misbehaving microservice instances are removed from the pool of available service instances?
- Service Routing. How do you provide a single entry point for your services to maintain security policies and ensure routing rules are applied uniformly to multiple services and service instances in your microservice applications? How do you ensure that each developer in your team doesn’t have to come up with their own solutions for providing routing to their services?
In figure 3, service discovery and service routing appear to have a hard-coded sequence between them (first service routing, then service discovery). However, implementing one pattern doesn’t require the other. For instance, we can implement service discovery without service routing, and we can implement service routing without service discovery (even though its implementation is more difficult).
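The core of service discovery can be sketched as a small in-memory registry: instances register their network location under a logical service name, and clients resolve that name to a concrete instance instead of hardcoding IP addresses. This is a toy model under stated assumptions; the service name and addresses below are hypothetical, and production registries such as Eureka or Consul add health checks, replication, and client-side caching on top of this idea.

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of client-side service discovery with round-robin
// selection. Instances register under a logical name; lookups rotate
// through the registered instances so load is spread across them.
public class ServiceRegistry {
    private final Map<String, List<String>> instances = new HashMap<>();
    private final AtomicInteger counter = new AtomicInteger();

    // A service instance announces itself under a logical service name.
    public void register(String serviceName, String hostAndPort) {
        instances.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(hostAndPort);
    }

    // A client resolves the logical name to one concrete instance.
    public String lookup(String serviceName) {
        List<String> list = instances.getOrDefault(serviceName, Collections.emptyList());
        if (list.isEmpty()) throw new IllegalStateException("no instances for " + serviceName);
        return list.get(counter.getAndIncrement() % list.size());
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("licensing-service", "10.0.0.1:8080");
        registry.register("licensing-service", "10.0.0.2:8080");
        System.out.println(registry.lookup("licensing-service")); // 10.0.0.1:8080
        System.out.println(registry.lookup("licensing-service")); // 10.0.0.2:8080
    }
}
```

Removing a misbehaving instance then amounts to deleting it from the list for its service name, which is exactly what a real registry’s health checks automate.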
Microservice client resiliency patterns
Microservices architectures are highly distributed, and we must be extremely sensitive to how we prevent a problem in a single service (or service instance) from cascading up and out to the consumers of the service. To this end, we’re going to cover four topics with client resiliency patterns:
- Client-side load balancing. How do we cache the location of our service instances on the service client, allowing calls to multiple instances of a microservice to be load balanced to the healthy instances of that microservice?
- Circuit Breaker Pattern. How do you prevent a client from continuing to call a service that’s failing or suffering performance problems? When a service is running slowly, it consumes resources on the client calling it. We want failing microservice calls to fail fast.
- Fallback Pattern. When a service call fails, how do we provide a “plug-in” mechanism that allows the service client to try and carry out its work through some alternative means other than the microservice being called?
- Bulkhead Pattern. Microservice applications use multiple distributed resources to carry out their work. How do we compartmentalize these calls to ensure the misbehavior of one service call doesn’t negatively impact the rest of the application?
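The circuit breaker and fallback patterns above can be sketched together in a few lines of plain Java. This is only the core idea under simplified assumptions: after a set number of consecutive failures the breaker “opens” and calls fail fast to the fallback instead of hitting the struggling service. Production implementations such as Hystrix or Resilience4j add timeouts, half-open retry windows, and metrics that this sketch omits.

```java
import java.util.function.Supplier;

// Minimal sketch of a circuit breaker with a fallback. Consecutive
// failures trip the breaker; once open, the remote call is skipped
// entirely (fail fast) and the fallback supplies the response.
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    // Run the remote call through the breaker; on failure (or an open
    // breaker) return the fallback result instead of propagating the error.
    public <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (isOpen()) return fallback.get();   // fail fast: don't call at all
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;           // a success resets the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        Supplier<String> fallback = () -> "cached-response";
        breaker.call(failing, fallback);
        breaker.call(failing, fallback);
        System.out.println(breaker.isOpen()); // true: further calls fail fast
    }
}
```

Note how the fallback is a “plug-in”: the client decides what an acceptable degraded answer looks like (cached data, a default, a queued retry), and the breaker decides when to use it.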
Figure 4 shows how these patterns protect the consumer of a service from being impacted when a service is misbehaving.
Figure 4 With microservices, you must take care to protect the service caller from misbehaving services
Microservice build/deployment patterns
One of the core parts of a microservice architecture is that each instance of a microservice should be identical to its other instances. We can’t allow “configuration drift” (something changes on a server after it has been deployed) to occur, because this can introduce instability in our applications.
A phrase too often said
“I made only one small change on the stage server, but I forgot to make the change in production.” Many of the system outages I’ve worked on with critical-situation teams over the years began with those words from a developer or system administrator. Engineers (and most people in general) operate with good intentions. They don’t go to work to make mistakes or bring down systems. Instead they do the best they can, but they get busy or distracted. They tweak something on a server, fully intending to go back and do it in all environments.
At some later point, an outage occurs and everyone is left scratching their head wondering what’s different between the lower environments and production. I’ve found that the small size and limited scope of a microservice make it the perfect opportunity to introduce the concept of “immutable infrastructure” into an organization: once a service is deployed, the infrastructure it’s running on is never touched again by human hands.
An immutable infrastructure is critical to successfully using microservice architectures, because you must be able to guarantee in production that every microservice instance you start for a particular microservice is identical to its brethren.
To this end, our goal is to integrate the configuration of our infrastructure into our build/deployment process, preventing us from deploying software artifacts like a Java WAR or EAR to an already running piece of infrastructure. Instead, we want to build and compile our microservice and the virtual server image it’s running on as part of the build process. Then, when our microservice gets deployed, the entire machine image with the server running on it gets deployed.
Figure 5 illustrates this process.
Figure 5 We want the deployment of the microservice and the server it’s running on to be one atomic artifact deployed between environments.
Our goal with these patterns and topics is to ruthlessly expose and stamp out configuration drift as quickly as possible before it hits our upper environments like stage or production.