What Are the Downsides of a Microservice-Based Solution?

A microservice-based solution has the following downsides:

  1. Distributing the application adds complexity for developers when designing and building the services, and in testing and exception handling. It also adds latency to the system.
  2. Without a microservice-oriented infrastructure, an application that has dozens of microservice types and needs high scalability implies a high degree of deployment complexity for IT operations and management.
  3. Atomic transactions between multiple microservices are usually not possible. The business requirements must embrace eventual consistency between multiple microservices.
  4. Increased global resource needs (total memory, drives, and network resources for all the servers or hosts). The higher degree of granularity and distributed services requires more global resources. However, given the low cost of resources in general and the benefit of being able to scale out just certain areas of the application compared to long-term costs when evolving monolithic applications, the increased use of resources is usually a good tradeoff for large, long-term applications.
  5. When the application is large, with dozens of Microservices, there are challenges and limitations if the application requires direct client-to-Microservice communications. When designing and building a complex application based on Microservices, you might consider the use of multiple API Gateways instead of the simpler direct client‑to‑Microservice communication approach.
  6. Deciding how to partition an end-to-end application into multiple microservices is challenging. You need to identify areas of the application that are decoupled from the other areas and that have a low number of hard dependencies. Ideally, each service should have only a small set of responsibilities. This is like the single responsibility principle (SRP) applied to classes, which states that a class should have only one reason to change.


Implement a REST API in MuleSoft, Azure Logic Apps, ASP.NET Core or Spring Boot? MuleSoft, Step 1: Defining an API in RAML

I have been working lately on comparing different technologies for building web APIs. One of the main concerns was: if we wanted to build a simple API service, which technology would be easier and more productive to develop the service with? To provide a reference comparison, I will build the same web service in MuleSoft, Azure Logic Apps, ASP.NET Core and Spring Boot, and provide my notes as I go. The web service will provide the following functionality:

  1. CRUD operations on an Authors entity
  2. CRUD operations on a Books entity, where books are related to authors

All the Read (queries) should support:

  1. Filtering
  2. Searching
  3. Paging
  4. Support for Levels Two and Three of the Richardson Maturity Model (see my previous post https://moustafarefaat.wordpress.com/2018/12/11/practical-rest-api-design-implementation-and-richardson-maturity-model/). This means, based on the Accept header of the request, returning the results as either:
    1. Pure JSON
    2. JSON with HATEOAS links

I will start with MuleSoft implementation.

Step 1. Define the API in RAML

With MuleSoft you get the Anypoint Platform portal and Design Center, which helps you design the API RAML. There is also an API Designer visual editor, which can help you in the beginning.


However, the visual editor has some weaknesses, such as:

  1. Once you switch to the RAML editor, you cannot go back.
  2. You cannot define your own media types; you have to choose from the list.

 

To finalize the API definition in RAML I had to edit it manually, though the editor helped me get started. Below is a fragment of the API in RAML. (The full solution will be published on my GitHub: https://github.com/RefaatM )

Notice in the RAML that I have defined two response body types for the GET operation of the Authors resource. The full RAML is at https://github.com/RefaatM/MuleSoftRestAPIExample/tree/master/src/main/resources/api

```yaml
#%RAML 1.0
title: GTBooks
description: |
  GTBooks Example
version: '1.0'
mediaType:
  - application/json
  - application/xml
protocols:
  - HTTP
baseUri: /api/v1.0

types:
  CreateAuthor:
    description: This is a new DataType
    type: object
    properties:
      Name:
        required: true
        example: Moustafa Refaat
        description: Author Name
        type: string
      Nationality:
        required: true
        example: Canadian
        description: Author Nationality
        type: string
      Date-of-Birth:
        required: true
        example: '2018-12-09'
        description: Author Date of Birth
        type: date-only
      Date-of-Death:
        required: false
        example: '2018-12-09'
        description: Author Date of Death
        type: date-only

  Author:
    description: This is a new DataType
    type: CreateAuthor
    properties:
      Id:
        required: true
        example: 1
        description: Author Id
        type: integer
      Age:
        required: true
        maximum: 200
        minimum: 8
        example: 10
        description: Author Age
        type: integer

  AuthorHateoas:
    description: Author with HATEOAS links
    type: Author
    properties:
      Links:
        required: true
        description: HATEOAS links
        type: array
        items:
          type: Link

  Link:
    description: HATEOAS link
    type: object
    properties:
      href:
        required: true
        example: /Book/10
        description: URL Link
        type: string
      rel:
        required: true
        example: GetBook
        description: Operation
        type: string
      method:
        required: true
        example: GET
        description: 'HTTP method: GET, PUT, ...'
        type: string

/author:
  get:
    responses:
      '200':
        body:
          application/json:
            type: array
            items:
              type: Author
          application/hateoas+json:
            type: array
            items:
              type: AuthorHateoas
      '304': {}
      '400': {}
      '500': {}
    headers:
      Accept:
        example: application/json
        description: application/json or application/hateoas+json
        type: string
    queryParameters:
      sort-by:
        required: false
        example: Example
        description: sort by
        type: string
      filter-by:
        required: false
        example: Example
        description: filter by
        type: string
```
 

(.. to be continued)

Practical REST API Design, Implementation and the Richardson Maturity Model

The Richardson Maturity Model classifies REST API maturity as follows:

  • Level Zero: These services have a single URI and use a single HTTP method (typically POST). For example, most Web Services (WS-*)-based services use a single URI to identify an endpoint and HTTP POST to transfer SOAP-based payloads, effectively ignoring the rest of the HTTP verbs. Similar are XML-RPC-based services, which send data as Plain Old XML (POX). These are the most primitive ways of building SOA applications: a single POST method, using XML to communicate between services.
  • Level One: These services employ many URIs but only a single HTTP verb – generally HTTP POST. They give each individual resource in their universe a URI. Every resource is separately identified by a unique URI – and that makes them better than level zero.
  • Level Two: Level Two services host numerous URI-addressable resources. Such services support several of the HTTP verbs on each exposed resource – Create, Read, Update and Delete (CRUD) services. Here the state of resources, typically representing business entities, can be manipulated over the network. The service designer expects people to put some effort into mastering the APIs – generally by reading the supplied documentation. Level Two is a good use of REST principles, which advocate using different verbs based on the HTTP request methods, with the system exposing multiple resources.
  • Level Three: Level three of maturity makes use of URIs and HTTP and HATEOAS. This is the most mature level of Richardson’s model which encourages easy discoverability and makes it easy for the responses to be self-explanatory by using HATEOAS. The service leads consumers through a trail of resources, causing application state transitions as a result.

HATEOAS (Hypermedia as the Engine of Application State) is a constraint of the REST application architecture that sets the RESTful style apart from most other network application architectures. The term “hypermedia” refers to any content that contains links to other forms of media such as images, movies, and text. This architectural style lets you use hypermedia links in the response contents so that the client can dynamically navigate to the appropriate resource by traversing the hypermedia links. This is conceptually the same as a web user navigating through web pages by clicking the appropriate hyperlinks to achieve a final goal. Like a human’s interaction with a website, a REST client hits an initial API URI and uses the server-provided links to dynamically discover available actions and access the resources it needs. The client need not have prior knowledge of the service or of the different steps involved in a workflow. Additionally, clients no longer have to hard-code the URI structures for different resources. This allows the server to make URI changes as the API evolves without breaking the clients.
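To make the idea concrete, here is a minimal sketch in Java of how a server might generate such hypermedia links from a resource id. The `Link` record mirrors the href/rel/method structure discussed here; the class name, rel names and base URI are illustrative, not taken from any specific framework:

```java
import java.util.List;

// Illustrative sketch (not from any particular framework): generating
// HATEOAS links for an entity resource from its id.
public class HateoasLinks {

    // Mirrors the href/rel/method link structure of the JSON payloads.
    record Link(String href, String rel, String method) {}

    static List<Link> linksFor(String baseUri, String entityId) {
        String self = baseUri + "/api/v1/entity/" + entityId;
        return List.of(
            new Link(self, "self", "GET"),
            new Link(self, "delete_entitydefinition", "DELETE"),
            new Link(self + "/attributes", "get_attributes_for_entitydefinition", "GET"),
            new Link(self + "/attributes", "create_attribute_for_entitydefinition", "POST")
        );
    }

    public static void main(String[] args) {
        for (Link l : linksFor("https://localhost:44379", "63b2c70e")) {
            System.out.println(l.method() + " " + l.rel() + " -> " + l.href());
        }
    }
}
```

Because the links are derived from the resource id on the server, clients never need to construct URIs themselves; they only follow the rels they understand.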

Naturally, you would want to build to the highest standard and provide a Level Three REST API. That would mean providing a links field, as in the following example from the GT-IDStorm API.

As you can see from this sample, the data payload is huge compared to the actual data returned.

```json
{
  "value": [
    {
      "id": "63b2c70e-2bcb-4335-9961-3d14be642163",
      "name": "Entity-1",
      "description": "Testing Entity 1",
      "links": [
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163",
          "rel": "self",
          "method": "GET"
        },
        {
          "href": null,
          "rel": "get_entitydefinition_byname",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/full",
          "rel": "get_full_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163",
          "rel": "delete_entitydefinition",
          "method": "DELETE"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/attributes",
          "rel": "create_attribute_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/attributes",
          "rel": "get_attributes_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/systems",
          "rel": "create_system_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/systems",
          "rel": "get_system_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/data",
          "rel": "get_data_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/data/GetEntityDataWithMissingSystems",
          "rel": "get_data_WithMissingSystems_for_entitydefinition",
          "method": "GET"
        }
      ]
    },
    {
      "id": "54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
      "name": "Entity-10",
      "description": "Testing Entity 10",
      "links": [
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
          "rel": "self",
          "method": "GET"
        },
        {
          "href": null,
          "rel": "get_entitydefinition_byname",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/full",
          "rel": "get_full_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
          "rel": "delete_entitydefinition",
          "method": "DELETE"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/attributes",
          "rel": "create_attribute_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/attributes",
          "rel": "get_attributes_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/systems",
          "rel": "create_system_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/systems",
          "rel": "get_system_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/data",
          "rel": "get_data_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/data/GetEntityDataWithMissingSystems",
          "rel": "get_data_WithMissingSystems_for_entitydefinition",
          "method": "GET"
        }
      ]
    }
  ],
  "links": [
    {
      "href": "https://localhost:44379/api/v1/entity?orderBy=Name&searchQuery=Testing%20Entity%201&pageNumber=1&pageSize=10",
      "rel": "self",
      "method": "GET"
    }
  ]
}
```

For example, if we remove the HATEOAS requirement, the data returned for the same query would be as follows.

Returning less data has a huge impact on the performance of the system as a whole: less traffic on the network, and less data for the client and servers to process and manipulate.

```json
[
  {
    "id": "63b2c70e-2bcb-4335-9961-3d14be642163",
    "name": "Entity-1",
    "description": "Testing Entity 1"
  },
  {
    "id": "54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
    "name": "Entity-10",
    "description": "Testing Entity 10"
  }
]
```

I usually implement the API to accept an Accept header with multiple options:

  • application/json: returns just the data.
  • application/hateoas+json: returns the data with the HATEOAS links.

I also implement another resource or operation that provides the link structures.
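This Accept-header handling can be sketched as follows. It is a hypothetical, framework-neutral handler in Java: serialization and routing are elided, the `application/hateoas+json` media type is the custom one used in this post, and all other names are illustrative:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of Accept-header content negotiation: return
// plain data for application/json, data plus links for the custom
// application/hateoas+json media type.
public class ContentNegotiation {

    static final String HATEOAS = "application/hateoas+json";

    // Decides which representation of an author list to return.
    static Object represent(String acceptHeader, List<Map<String, Object>> authors) {
        if (acceptHeader != null && acceptHeader.contains(HATEOAS)) {
            // Wrap each item with its hypermedia links.
            return authors.stream()
                .map(a -> Map.of(
                    "data", a,
                    "links", List.of(
                        Map.of("href", "/api/v1.0/author/" + a.get("Id"),
                               "rel", "self", "method", "GET"))))
                .toList();
        }
        // Default: just the data (application/json).
        return authors;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> authors =
            List.of(Map.of("Id", 1, "Name", "Moustafa Refaat"));
        System.out.println(represent("application/json", authors));
        System.out.println(represent(HATEOAS, authors));
    }
}
```

The design point is that both maturity levels share one code path; only the final representation step differs, so supporting both costs very little.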

In Conclusion

I would recommend implementing the API to:

  • Support Level Two and Level Three at the same time by using the Accept header of the request:
    • application/json: returns just the data.
    • application/hateoas+json: returns the data with the HATEOAS links.
  • Implement another resource, or the root, that returns the URL structures and operations supported by the API.

I have found that supporting only HATEOAS makes the system pay a heavy price in performance, especially with large data loads, while very few clients, if any, utilize the links returned. I would love to hear your thoughts on and experience with HATEOAS APIs.

Microservices Interview Questions: 90 Technical Questions with Answers

Just got this Book published on Amazon Kindle check it out at https://www.amazon.ca/dp/B07KMD77YB/ref=sr_1_4?ie=UTF8&qid=1542400692&sr=8-4&keywords=microservices+interview+questions

Wisdom is learning all we can but having the humility to realize that we do not know it all. Microservices Interview Questions, with 90 technical questions and clear and concise answers, will help you gain more wisdom in microservices interviews. The difference between a great microservices consultant and someone who kind of knows some stuff is how you answer the interview questions in a way that shows how knowledgeable you are. The 90 questions I have assembled are for job seekers (junior/senior developers, architects, team/technical leads) and interviewers.

Microservices Interview Questions are grouped into:

  • General Questions
  • Design Patterns Questions
  • API Design Questions
  • Containers and Orchestrations Questions.

Increase your earning potential by learning, applying and succeeding. Learn the fundamentals of microservices-based application architecture in an easy-to-understand questions-and-answers approach. The book covers 90 realistic interview questions with answers that will impress your interviewer. It is a quick reference guide, a refresher and a roadmap covering a wide range of microservices architecture topics and interview tips.

Sample Questions

  1. Why a Microservices architecture?


Microservices architecture provides long-term agility. Microservices enable better maintainability in complex, large, and highly scalable systems by letting you create applications based on many independently deployable services, each with a granular and autonomous lifecycle. Microservices can also scale out independently.


Instead of having a single monolithic application that you must scale out as a unit, you can scale out specific microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that do not need to be scaled. That means cost savings. The microservices approach also allows agile changes and rapid iteration of each microservice. Architecting fine-grained microservices-based applications enables continuous integration and continuous delivery practices, and accelerates the delivery of new functions into the application. Fine-grained composition of applications also allows you to run and test microservices in isolation, and to evolve them autonomously while maintaining clear contracts between them. As long as you do not change the interfaces or contracts of a microservice, you can change its internal implementation or add new functionality without breaking the other microservices that use it.

  2. What is Eventual Consistency?

Eventual consistency is an approach that allows you to implement data consistency within a microservices architecture. It focuses on the idea that the data within your system will eventually be consistent; it does not have to be immediately consistent. For example, in an e-commerce system, when a customer places an order, do you really need to immediately carry out all the transactions (stock availability, charging the customer's credit card, etc.)? Certain data updates can be eventually consistent, in line with the initial transaction that was triggered. This approach is based on the BASE model (Basically Available, Soft state, Eventually consistent). Data updates can be more relaxed and do not always have to be applied immediately; sometimes slightly stale data giving approximate answers is okay. The BASE model contrasts with the ACID model, where all data related to the transaction must be updated immediately as part of the transaction. The system becomes more responsive because certain updates are done in the background rather than as part of the immediate transaction. The eventual consistency approach is highly useful for long-running tasks. One thing to note is that, depending on the patterns you use, the actual time it takes for the data to become consistent will not be days, hours or even minutes; it will potentially be seconds. Eventual data consistency across your microservices architecture that happens within seconds is acceptable because of the gains you get in terms of performance and responsiveness across your system. With the right patterns, eventual consistency can be almost immediate, and preparing for inconsistencies and dealing with race conditions might not actually be such a huge task. The traditional approach to eventual consistency has involved using data replication. Another approach is event-based eventual consistency, which works by raising events as part of transactions and actions in an asynchronous fashion, as messages are placed on message brokers and queues.
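The event-based approach can be sketched as follows. An in-memory queue stands in here for a real message broker (Kafka, RabbitMQ, etc.), and the service and event names are purely illustrative:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of event-based eventual consistency: the order
// service commits its local transaction, then raises an event; a stock
// service consumes the event asynchronously and updates its own data.
public class EventualConsistencySketch {

    record OrderPlaced(String orderId, String sku, int quantity) {}

    // Stands in for a message broker (Kafka, RabbitMQ, ...).
    static final Queue<OrderPlaced> broker = new ArrayDeque<>();

    static int stockLevel = 100;  // owned by the (separate) stock service

    static void placeOrder(String orderId, String sku, int qty) {
        // 1. Commit the order in the order service's own store (elided).
        // 2. Publish the event; other services catch up eventually.
        broker.add(new OrderPlaced(orderId, sku, qty));
    }

    // Runs later/asynchronously in the stock service.
    static void drainEvents() {
        OrderPlaced e;
        while ((e = broker.poll()) != null) {
            stockLevel -= e.quantity();  // eventually consistent update
        }
    }

    public static void main(String[] args) {
        placeOrder("o-1", "book-42", 3);
        // Stock is stale until the consumer runs: that is the "eventual" part.
        drainEvents();
        System.out.println(stockLevel);
    }
}
```

Between `placeOrder` and `drainEvents` the stock figure is stale; the system tolerates that window in exchange for a fast, non-blocking order transaction.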

  3. How to approach REST API Design?

    1. First, focus on the business entities that the web API exposes and what CRUD operations are needed. Creating a new entity record can be achieved by sending an HTTP POST request that contains the entity information; the HTTP response indicates whether the record was created successfully or not. When possible, resource URIs should be based on nouns, not verbs such as the operations on the resource. A resource does not have to be based on a single physical data item.
    2. Avoid creating APIs that simply mirror the internal structure of a database. The purpose of REST is to model entities and the operations that an application can perform on those entities. A client should not be exposed to the internal implementation.
    3. Entities are often grouped together into collections (orders, customers). A collection is a separate resource from the item within the collection, and should have its own URI.
    4. Sending an HTTP GET request to the collection URI retrieves a list of items in the collection. Each item in the collection also has its own unique URI. An HTTP GET request to the item’s URI returns the details of that item.
    5. Adopt a consistent naming convention in URIs. In general, it helps to use plural nouns for URIs that reference collections. It’s a good practice to organize URIs for collections and items into a hierarchy.
    6. You should provide navigable links to associated resources in the body of the HTTP response message. Avoid requiring resource URIs more complex than collection/item/collection.
    7. Try to keep URIs relatively simple. Once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource.
    8. Try to avoid “chatty” web APIs that expose many small resources. Such an API may require a client application to send multiple requests to find all of the data that it requires. Instead, you might want to denormalize the data and combine related information into bigger resources that can be retrieved with a single request. However, you need to balance this approach against the overhead of fetching data that the client doesn’t need. Retrieving large objects can increase the latency of a request and incur additional bandwidth costs.
    9. Avoid introducing dependencies between the web API and the underlying data sources. For example, if your data is stored in a relational database, the web API doesn’t need to expose each table as a collection of resources.

It might not be possible to map every operation implemented by a web API to a specific resource. You can handle such non-resource scenarios through HTTP requests that invoke a function and return the results as an HTTP response message. For example, a web API that starts some operation such as run validations could provide URIs that expose these operations as pseudo resources and use the query string to specify the parameters required.
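The URI conventions above, plural-noun collections, shallow collection/item hierarchies, and pseudo resources for non-resource operations, can be summarized in a small sketch. All the resource and operation names here are hypothetical examples, not a real API:

```java
// Illustrative URI conventions following the guidelines above;
// the resource names are hypothetical, not from a real API.
public class UriConventions {

    // Plural noun for a collection.
    static String collection(String name) {
        return "/api/v1/" + name;               // e.g. /api/v1/authors
    }

    // Item within a collection: collection/item.
    static String item(String name, Object id) {
        return collection(name) + "/" + id;     // e.g. /api/v1/authors/1
    }

    // Keep hierarchies shallow: collection/item/collection at most.
    static String subCollection(String name, Object id, String child) {
        return item(name, id) + "/" + child;    // e.g. /api/v1/authors/1/books
    }

    // Non-resource operation exposed as a pseudo resource, with
    // parameters carried in the query string.
    static String operation(String op, String query) {
        return "/api/v1/" + op + "?" + query;   // e.g. /api/v1/validate?target=orders
    }

    public static void main(String[] args) {
        System.out.println(item("authors", 1));
        System.out.println(subCollection("authors", 1, "books"));
        System.out.println(operation("validate", "target=orders"));
    }
}
```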

Choosing a Platform for a New Microservice

Microservices endorse using different technologies to build the components of a solution or a system. You can have components based on Java or Scala with Spring, or on C# and .NET, etc. But if you are going to build a new microservice, which platform should you use? Most developers or architects would choose based on their background: if they came from Java, they choose Java; if they came from .NET, they choose .NET. This reminds me of an old Dilbert comic in which Dilbert tells the marketing manager that he can draw a fish on the screen. It sounds to me like most architects choose a platform because they are familiar with it, not because of its merits. I am working on devising a decision tree for selecting the platform most suitable for a microservice. So far, if you are building a microservice to interact with Hadoop/Spark or Kafka, Java/Scala with Spring is the platform to go with, though there are lots of libraries that make it easy to develop such microservices in .NET. I am still working on this but thought I should share it. Let me know your thoughts, and why you would choose one platform over the other.

Java/Scala with Spring MVC

  • Pros:

    • Respected in the business
    • Widespread
    • Constrained coding style might lead to easier-to-read code
    • Better ecosystem for libraries and tools
  • Cons:
    • Verbose, both with the language and framework
    • Does not have as good support for threading and asynchrony as C# has with async/await

C#/ASP.NET Core

  • Pros:
    • Asynchrony-oriented (async/await)
    • Better generics, and integration with LINQ
    • Better IDE integration, and configuration setup
    • Quicker to get things up and running in
    • Able to be component oriented as well as MVC
    • Availability of other programming styles than OOP, including functional.
    • More low-level constructs allowing optimization
  • Cons:
    • Newer, slightly buggy
    • Not as battle-tested as Java
    • Heavy IDE (although you can use Visual Studio Code, too)

Microservices, TOGAF, Solution Blueprint and API Design


While providing microservices architecture consulting services, I found it necessary to extract the interface design from the application architecture. The interface design is usually part of the application architecture. However, the interface design, both UI and API, is of such importance and value to the solution and to the organization's offerings that it deserves a separate section of its own. This matters all the more with the emergence of microservices, and with many companies offering their services through APIs to allow clients to hook in and utilize their services, or, as business/marketing people refer to it, "monetizing the API". I think it is paramount to give the interface design more attention, and specifically the API design. That is why I defined this new Interface Architecture phase in my customized TOGAF offering to organizations.

| Architecture Type | Description |
| --- | --- |
| Business Architecture | The business strategy, governance, organization, and key business processes. |
| Interface Architecture | A blueprint for the individual application interfaces, both User Interface and Application Programming Interface, and their relationships to the core business processes of the organization. |
| Application Architecture | A blueprint for the individual application components to be deployed, their interactions, and their relationships to the core business processes of the organization. |
| Data Architecture | The structure of the application's logical and physical data assets and data management resources. |
| Technology Architecture | The logical software and hardware capabilities required to support the deployment of business, data, and application services, including IT infrastructure, middleware, networks, communications, processing, and standards. |

 

I would like to know your thoughts on this, and how did you handle the API design in the Solution Architecture Blueprint.

Enterprise Architecture and Domain Driven Design – Build Custom Business Logic as a Microservice or as Part of the COTS Application?

As enterprise architects, depending on the organization, we often adopt ready-made software applications (COTS, Commercial Off-The-Shelf) and tailor or customize the software to the needs of the organization. The tailoring can involve UI changes, workflow changes, integrations with other systems in the organization, and sometimes added or modified business logic. The UI changes usually must happen in the COTS application, and so must the workflow changes, unless you are building a UI façade for the application. I saw one organization build a completely different web app to give its employees access to work shift schedules, because the Time and Attendance system could not handle access by hundreds of thousands of employees, and to save on licensing and infrastructure costs. This leaves us with integrations and business logic. In this installment, I will address business logic.

Should we build the custom business logic as a microservice or as part of the COTS application?

Aha, this is a tough decision. I prefer to put all customizations somewhere outside the software package, be it in microservices or as part of the ESB. I want the freedom to keep the business logic as platform independent as possible, and by moving the custom business logic out into web services implemented as microservices, or as part of the ESB, I accomplish that. There are political aspects to this issue too. First, the COTS vendor wants to have all the business logic customization in its application, for the following reasons:

  1. The vendor wants to increase the $$$ from the custom development services they will provide.
  2. The vendor can enrich its application offerings and business logic by taking this customized business logic and generalizing it in future releases.
  3. The more customizations and business logic are embedded in the vendor's application, the harder it is for the organization to switch to a competitor, as the cost of redoing the customizations from scratch would add up.

The delivery department would prefer not to take ownership of new software components, especially if the organization does not have many software developers and depends on consulting companies.

My argument is that all software, be it an ERP system like Dynamics AX or a Time and Attendance system, is generic: the vendor offers it to your organization and to your competitors. What differentiates your organization from your competitors is the custom processes and business logic your organization uses. If you let the vendor build these customizations into their software, you end up subsidizing your competitors' upgrades, and maybe giving them your edge. That is why I tend to recommend extracting any custom business logic, new or modified, and building it outside the COTS. Sometimes this is not possible due to COTS platform limitations or organizational standards. But whenever possible, extract any custom business logic outside of the COTS.

Let me know your thoughts on that.

Microservices Simplified

In this blog, I will share my thoughts on how to architect complex software using a microservices architecture, so that it is flexible, scalable, and competitive. I will start by introducing microservices: what came before microservices, why microservices are so successful and useful now, and the design principles associated with a microservices architecture.

What Is a Service?

A service is a piece of software that provides functionality to other pieces of software within your system. The other pieces of software could be anything from a website to a mobile app or a desktop app, or even another service, and the communication between the software components and the service normally happens over some kind of communication protocol. A system that uses one or more services in this fashion is said to have a service-oriented architecture (SOA). The main idea behind SOA is that, instead of building all the functionality into one big application, you use a service to provide a subset of the functionality, or just one function, to the application. This allows many applications to use the same code, and in the future newer or different types of systems can connect to the same service and reuse that functionality. As a software architecture, SOA has been successful: it allows us to scale and to reuse functionality. A key characteristic of service-oriented architecture is the idea of standardized contracts or interfaces. When a client application calls the service, it does so by calling a method, and the signature of that method normally does not change when the service changes, so you can upgrade a service without having to upgrade the clients, as long as the contract and the interface, i.e., the signature of the method, do not change. A service is also stateless: when a request comes in from a website to our service, that instance of the service does not have to remember previous requests from that specific client; the request itself carries all the information the service needs to retrieve any data associated with previous requests.
The microservices architecture is basically an improved version of service-oriented architecture, or in other terms, SOA done the right way. The microservices architecture shares all the key characteristics of service-oriented architecture: scalability, reusability, standardized contracts and interfaces for backwards compatibility, and the idea of having a service that is stateless.
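The standardized-contract idea can be illustrated with a plain interface: clients depend only on the method signature, so the implementation behind it can be upgraded freely. A minimal sketch, with entirely hypothetical names:

```java
// Illustrative: the contract the clients depend on. As long as this
// signature stays stable, the implementation can change freely.
interface BookCatalogService {
    String titleOf(String isbn);
}

public class ContractSketch {

    // Version 1 of the service implementation; version 2 could read
    // from a database instead, without the client noticing.
    static class InMemoryCatalog implements BookCatalogService {
        public String titleOf(String isbn) {
            return "book-" + isbn;
        }
    }

    public static void main(String[] args) {
        // The client only knows the interface, not the implementation.
        BookCatalogService service = new InMemoryCatalog();
        System.out.println(service.titleOf("42"));
    }
}
```

The same separation holds across a network boundary: the "interface" becomes the service's API contract, and the client is insulated from implementation changes in exactly the same way.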

Microservices Introduction

The microservices architecture is basically service-oriented architecture done well. Microservices introduce a new set of additional design principles for sizing a service correctly. Because there was no guidance in the past on how to size a service and what to include in it, traditional service-oriented architecture often resulted in large, monolithic services, and because of their size these services became inefficient to scale up and hard to change. Smaller services, i.e., microservices, are more efficiently scalable, more flexible, and can provide high performance in the areas where performance is required. An application based on the microservices architecture is normally powered by multiple microservices, each of which provides a set of related functions to a specific part of the application. Because a microservice normally has a single focus, it does one thing and does it well.

The microservices architecture also uses lightweight communication mechanisms, both between clients and services and from service to service. The communication mechanism has to be lightweight and quick, because a transaction within a microservices-architected system is a distributed transaction completed by multiple services, so the services need to communicate with each other quickly and efficiently over the network. The application interface for a microservice also needs to be technology agnostic: the service needs to use an open communication protocol so that it does not dictate the technology the client application must use. By using open communication protocols, for example HTTP REST, we could easily have a .NET client application which talks to a Java-based microservice.
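Technology-agnostic communication usually comes down to an open wire format such as JSON over HTTP. The handler below is a hedged sketch (the product catalogue and field names are invented, and a real service would sit behind an HTTP framework): because both request and response are plain JSON text, the caller could equally be .NET, Java, or anything else.

```python
import json

# Illustrative request handler for a hypothetical "get product" endpoint.
# The only coupling to the client is the JSON shape, not a technology stack.
def handle_get_product(raw_request_body: str) -> str:
    request = json.loads(raw_request_body)        # parse the open format
    product_id = request["product_id"]
    # Made-up lookup; a real service would query its own data store.
    catalogue = {"p1": {"name": "Widget", "price": 4.50}}
    product = catalogue.get(product_id)
    if product is None:
        return json.dumps({"status": 404, "body": None})
    return json.dumps({"status": 200, "body": product})

response = json.loads(handle_get_product('{"product_id": "p1"}'))
print(response["status"])  # 200
```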
In a monolithic system, you're also likely to have a central database used to share data between applications and services. In the microservices architecture, each microservice has its own data storage. Another key characteristic of a microservice is that it is independently changeable: you can upgrade, enhance, or fix a specific microservice without changing any of the clients or any of the other services within the system. And because microservices are independently changeable, they also need to be independently deployable; after modifying one microservice, you should be able to deploy that change into your system without deploying anything else. We've already mentioned that a transaction within a microservices-architected system is most likely completed by multiple services, and because those services are distributed, your transaction is also a distributed transaction. And because such a system has so many moving parts, there's a need for centralized tooling for managing the microservices: a tool which helps you manage and see the health of your system, because there are so many moving parts.

One of the reasons for the microservices architecture now is the need to respond to change quickly. The software market is highly competitive nowadays: if your product can't provide a feature that's in demand, it will lose market share very quickly, and this is where microservices help, by splitting a large system into parts so we can upgrade and enhance individual parts in line with market needs. Not only do we need to change parts of our system quickly, we also need to change them in a reliable way in order to keep the market happy, and microservices provide this reliability by dividing your system into many parts, so that if one part breaks it won't break the entire system.

There is also a need for business domain-driven design: the architecture of our application needs to match the organization's structure, or the structure of the business functions within the organization. Another reason why the microservices architecture is now possible is that we now have automated test tools. We've already seen that in a microservices architecture transactions are distributed, so a transaction will be processed by several services before it's complete. The integration between those services therefore needs to be tested, and testing these microservices together manually can be quite a complex task. The good news is that automated test tools can exercise the integration between our microservices for us, and this is one reason the microservices architecture is now feasible. Release and deployment of microservices can also be complex, but again we have centralized tools which can carry out this function.

Another reason to adopt the microservices architecture is the need to adopt new technology. Because our system is now in several moving parts, we can easily move one part, i.e., a microservice, from one technology stack to another in order to gain a competitive edge. Another advancement which makes microservices possible is asynchronous communication technology. In a microservices architecture, a distributed transaction might use several services in order to complete; with asynchronous communication, the distributed transaction does not have to wait for each individual service to finish its task in turn before it completes.
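The effect of asynchronous communication on a distributed transaction can be sketched with `asyncio`. The two services below are simulated stand-ins with invented names; the point is that the orchestrating code fires both requests at once, so the total wait is roughly the slowest call, not the sum of both.

```python
import asyncio

async def call_inventory_service(order):
    await asyncio.sleep(0.01)          # simulated network latency
    return {"inventory": "reserved"}

async def call_payment_service(order):
    await asyncio.sleep(0.01)          # simulated network latency
    return {"payment": "authorised"}

async def place_order(order):
    # Both service calls run concurrently instead of one after the other.
    inventory, payment = await asyncio.gather(
        call_inventory_service(order),
        call_payment_service(order),
    )
    return {**inventory, **payment}

result = asyncio.run(place_order({"id": 1}))
print(result)  # {'inventory': 'reserved', 'payment': 'authorised'}
```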

Key Benefits of the Microservices Architecture

  1. Shorter development times. Because the system is split up into smaller moving parts, you can work on a part individually and have teams working on different parts concurrently. And because microservices are small in size and have a single focus, each team has less to worry about in terms of scope: they know the one thing they're working on has a defined scope, and there's no need to worry about the entire system as long as they honor the contracts between the services.
  2. More reliable and faster deployment. Because the services are loosely coupled, developers can rework, change, and deploy individual components without deploying or affecting the entire system, making deployment more reliable and faster. Shorter development times and reliable, faster deployment also enable frequent updates, and as we've already briefly mentioned, frequent updates can give you a competitive edge in the marketplace. The microservices architecture also allows us to decouple the changeable parts. For example, if we know the user interface for our system changes quite often, then under the microservices architecture the UI is most likely decoupled from all the services in the background, and we can change it independently of all the services.
  3. Enhanced security. In a monolithic system you might have one central database with one system accessing it, so an attacker only needs to compromise that one system in order to gain access to the data. In the microservices architecture, each microservice has its own database, and each microservice can also have its own security mechanism, so the data is distributed and therefore harder to compromise in one go.
  4. Increased uptime, because when it comes to upgrading the system you will probably deploy one microservice at a time without affecting the rest of the system. And because the system is split up into business domains and business functions, when a problem arises we can probably quickly identify which service is responsible for that specific business function, and therefore resolve the problem within that microservice.
  5. Highly scalable, and better performance. When there’s a specific part of the system which is in demand, we can just scale that specific part up, instead of scaling the whole system up.
  6. Better support for distributed teams. We can give ownership of a microservice to a particular development team so that there's better ownership of, and knowledge about, the microservice. Microservices also allow us to use the right technology for specific parts of the system, and because each microservice is separate from the others (they don't share databases and they have their own code bases), microservices can easily be worked on concurrently by distributed teams. In the next section of the module we'll start looking at the design principles that enable microservices and the benefits we get from them.

Microservices Design Principles

High Cohesion

Basically, a microservice's content and functionality, in terms of input and output, must be coherent: it must have a single focus, and the thing it does, it should do well. The idea of a microservice having a single focus or a single responsibility is taken from the SOLID coding principles. The single responsibility principle states that a class should have only one reason to change, and the same principle is applied to microservices. It's a useful principle because it allows us to control the size of the service: we won't accidentally create a monolithic service by attaching other behaviors to the microservice which are not actually related. And because the high cohesion principle controls the size and scope of the microservice, the microservice is easily rewritable; we are likely to have less attachment to a smaller code base, and there will obviously be fewer lines of code to rewrite.

Overall, if all our microservices have high cohesion, our system becomes highly scalable, flexible, and reliable. It is more scalable because we can scale up the individual microservices that represent a specific business function or business domain in demand, instead of scaling up the whole system. It is more flexible because we can change or upgrade the functionality of specific business functions or business domains within our system. And it is more reliable because we are changing specific small parts of the system without affecting the other parts.
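The single responsibility idea can be illustrated at the boundary level. In this sketch (all names are invented), a low-cohesion "order service" that also sends notifications would have two reasons to change; splitting those concerns gives each service one focus.

```python
class OrderService:
    """Single focus: managing order state. One reason to change."""
    def __init__(self):
        self._orders = {}   # this service's own data store

    def create_order(self, order_id, items):
        self._orders[order_id] = {"items": items, "status": "created"}
        return self._orders[order_id]


class NotificationService:
    """Single focus: customer notifications. A separate reason to change."""
    def order_created_message(self, order_id):
        return f"Order {order_id} has been received."


orders = OrderService()
notify = NotificationService()
order = orders.create_order("o1", ["widget"])
print(order["status"])                       # created
print(notify.order_created_message("o1"))    # Order o1 has been received.
```

A change to the wording of notifications now touches only `NotificationService`; the order logic ships untouched.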

Autonomous

Microservices should also be autonomous. Autonomous means a microservice should not have to change because of an external system it interacts with or an external system that interacts with it; there should be loose coupling between the microservices, and between the microservices and the clients that use them. A microservice should also be stateless: there should be no need to remember previous interactions that clients might have had with this service instance or other service instances in order to carry out the current request. And because microservices honor contracts and interfaces to other services and clients, they should be independently changeable and independently deployable; after a change or enhancement, a microservice should just slot back into the system, even though it now has a newer version than the other components, and this also ensures our service is always backwards compatible. Having clearly defined contracts between services also means that microservices can be developed concurrently by several teams: because there's a clear definition of the input and output of a microservice, separate teams can work on separate microservices, and as long as they honor the contracts, development should go fine.
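One common way to stay backwards compatible is to make contract changes additive. In this hedged sketch (field names and values are made up), version 2 of a service adds a new field to its response; a client written against version 1 keeps working unchanged because it simply ignores fields it doesn't know about.

```python
def get_account_v2(account_id: str) -> dict:
    """v2 of a hypothetical account service's response."""
    return {
        "account_id": account_id,   # present since v1
        "balance": 100.0,           # present since v1
        "currency": "GBP",          # new in v2: additive, so non-breaking
    }

def v1_client(account: dict) -> str:
    # A v1 client only reads the fields it was written against.
    return f"{account['account_id']}: {account['balance']}"

print(v1_client(get_account_v2("a7")))  # a7: 100.0
```

Removing or renaming a field, by contrast, would break the contract and force clients to upgrade in lockstep.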

Business Domain Centric

A microservice should also be business domain centric, meaning a service should represent a business function. The overall idea is to have a microservice represent a business function or a business domain, i.e., a part of the organization, because this helps scope the service and control its size. The idea is taken from domain-driven design: you define a bounded context, which contains all the functionality related to a specific part of the business, a business domain or a business function, and you define the bounded context by identifying boundaries and seams within the code, highlighting the areas where related functionality exists. There will be times when code relates to two different bounded contexts, and this is where we need to shuffle the code around so that it ends up in the place where it makes sense and belongs in terms of business function or business domain. Remember, we need to aim for high cohesion. Making our microservices business domain centric also makes them responsive to business change: as the business, the organization, or functions within the business change, our microservices can change in the same way, because our system is broken up into individual parts which are business domain centric, and we can change just those parts which relate to the specific parts of the organization that are changing.

Resilience

Microservices are resilient; they embrace failure when it happens. Failure might be in the form of another service not responding to your service, a connection to another system going down, or a third-party system failing to respond. Whatever the type of failure, our microservice needs to embrace it by degrading its functionality or by using default functionality. An example of degrading functionality might be a scenario where we have a user interface microservice which draws an HTML page for available orders and promotions, but for whatever reason the promotions microservice is down and fails to respond; our user interface microservice chooses to degrade that functionality, and it simply does not display the promotions on the page. Another way of making microservices more resilient is to run multiple instances which register themselves as they start up and deregister themselves if they fail, so that our system, our load balancers, etc., are only ever aware of fully functioning instances. We also need to be aware that there are different types of failure: exceptions or errors within a microservice, delays in replying to a request, or the complete unavailability of a microservice, and this is where we need to work out whether to degrade functionality or fall back to default functionality. Failures are also not limited to the software itself. You might have network failures, and remember we're using distributed transactions here, where one transaction might go across the network and use several services before it completes; therefore, again, we need to make our microservices resilient to network delays and unavailability.
We also need to ensure that when our microservices are called, they can validate the input they receive as part of the request, whether that input comes from services or from clients. Our microservices need to be resilient enough to validate incoming data, and they should not fall over because they've received something in an incorrect format.
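The two points above, graceful degradation and input validation, can be sketched together, following the orders-and-promotions page example. The service call here is simulated and all names are illustrative: when the promotions lookup fails, the page still renders, just without promotions, and malformed input is rejected rather than crashing the service.

```python
def fetch_promotions():
    # Simulated outage of the hypothetical promotions microservice.
    raise ConnectionError("promotions service unavailable")

def render_orders_page(orders):
    # Validate incoming data instead of falling over on a bad format.
    if not isinstance(orders, list):
        return {"error": "orders must be a list"}
    try:
        promotions = fetch_promotions()
    except ConnectionError:
        promotions = []            # degrade: render the page without promotions
    return {"orders": orders, "promotions": promotions}

page = render_orders_page([{"id": 1}])
print(page["promotions"])          # [] -- degraded, but the page rendered
print(render_orders_page("oops"))  # {'error': 'orders must be a list'}
```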

Observable

We need a way to observe our system's health in terms of system status and logs, i.e., the activity currently happening in the system and the errors currently occurring. This monitoring and logging needs to be centralized, so that there is one place to go to view information regarding the system's health, and we need this level of centralized monitoring and logging because we now have distributed transactions: in order for a transaction to complete, it must go across the network and use several services, so knowing the health of the system is vital. This kind of data is also useful for quick problem solving; because the whole system is distributed and there's a lot going on, we need a quick way of working out where a potential problem lies.
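A common way to make distributed transactions traceable in a central log is to tag every entry with a correlation id. This is a minimal, self-contained sketch (the services are simulated functions and an in-memory stream stands in for the central log sink): filtering the sink by the correlation id reconstructs the transaction's path across services.

```python
import io
import logging

stream = io.StringIO()                       # stand-in for a central log sink
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("central")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.propagate = False

def order_service(correlation_id):
    log.info("[%s] order-service: order accepted", correlation_id)

def payment_service(correlation_id):
    log.info("[%s] payment-service: payment taken", correlation_id)

# One distributed transaction touching two services.
order_service("txn-123")
payment_service("txn-123")

# Filtering by the correlation id shows everything that transaction did.
trace = [line for line in stream.getvalue().splitlines() if "txn-123" in line]
print(len(trace))  # 2
```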

Automation

Automation comes in the form of tools, for example, tools to reduce testing effort. Automated testing reduces the amount of time required for manual regression testing, the time taken to test the integration between services and clients, and the time taken to set up test environments. Remember, in a microservices architecture our system is made up of several moving parts, so testing can be quite complex, and this is where we need tools to automate some of that testing. We need automated testing tools which give us quick feedback, so that as soon as we change a microservice and check that change into our source control system, the tests run and tell us whether anything is broken. As well as automation tools to help with testing, we need automation tools to help with deployment: a tool which provides a pipeline to deployment. It gives our microservice a deployment-ready status, so when you check a change in and the tests pass, the deployment status moves to ready, and the tool knows that this build of the microservice is now ready for deployment. Not only does this tool provide a pipeline with a status for each deployable build of a microservice, it also provides a way of physically moving the build to the target machine or the target cloud system, so the physical deployment of the software is fully automatic, and therefore reliable: it's preconfigured with the target where the software needs to go, and it's configured and tested once, so it should work every time. The idea of using automation tools for deployment falls under a category called continuous deployment.
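The kind of fast, automated check that runs on every check-in can be as simple as a contract test. This sketch uses a simulated service with invented field names; the test asserts that the response shape clients rely on is still honored, which is exactly the sort of feedback a pipeline surfaces immediately after a change.

```python
def get_customer(customer_id):
    # Simulated service under test; a real pipeline would call the deployed
    # build or an in-process instance of the service.
    return {"customer_id": customer_id, "name": "Ada"}

def test_get_customer_contract():
    response = get_customer("c1")
    # The contract clients depend on: these fields must always be present.
    assert set(response) >= {"customer_id", "name"}
    assert response["customer_id"] == "c1"

test_get_customer_contract()
print("contract test passed")
```

In practice a test runner such as pytest would discover and run `test_get_customer_contract` automatically as part of the pipeline.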