Azure Integration: 1 – Azure Logic Apps, an Introduction

Logic Apps are part of the Azure Integration collection, a set of Azure tools and frameworks that addresses all of an enterprise's integration needs:

  • Logic Apps are used for core workflow processing.
  • Service Bus is Microsoft's cloud-based queuing mechanism. It provides a durable, highly scalable queuing infrastructure.
  • Event Grid is Azure's event processing system. It provides near real-time eventing for many events inside Azure.
  • API Management is used to safeguard and secure both internal and external APIs.
  • Azure Functions are lightweight, streamlined units of work that can be used to perform specific tasks.
  • Using the on-premises Data Gateway, Logic Apps can seamlessly call into your on-premises systems. The gateway supports many systems, including on-premises SQL Server, SharePoint, SAP, and IBM DB2, as well as BizTalk Server, and it can even be used to monitor a local file system.

This blog addresses Logic Apps only.

Logic Apps Advantages

  • Logic Apps is a serverless platform, which means you don't have to worry about managing the underlying servers.
    • Auto-scaling: you simply deploy your resources and the framework ensures they are deployed to the correct environment, scaling automatically based on demand. Unlike an Azure SQL database, where you must specify a resource size, when your Logic Apps need more resources the platform provisions them automatically and your solution scales to meet demand. High availability comes along with this; it is simply built into the platform.
    • Usage-based billing: you only pay for the resources that you use.
  • Easy to learn, as Logic Apps provides a drag-and-drop developer experience.
  • More than 200 connectors for integration with PaaS and SaaS offerings and on-premises systems, and the suite of connectors is constantly growing and expanding. This includes enterprise connectors for systems such as IBM MQ and SAP, in addition to support for AS2, X12, and EDIFACT messaging.
  • Ability to create custom connectors around APIs, so you can provide a custom development experience for your developers.
  • Monitoring and logging right out of the box. It is built right into the framework and accessible via the Azure Portal.
  • Seamless integration with other Azure features. This provides a rapid development experience for integrating with Service Bus, Azure Functions, custom APIs, and more.
  • It’s easy and extremely powerful.
  • Logic Apps are very good at connecting cloud-based systems and at bridging on-premises systems to the cloud and vice versa.

Design and Development of Logic Apps

You have two options for building your Logic Apps:

  • The web-based designer is a simple, convenient option that hosts the designer right inside the Azure portal. This designer is feature-rich and allows you to author your Logic Apps very quickly. One big benefit of the web-based designer is that it is very easy to test your Logic Apps, because they are already sitting inside your Azure subscription. The web-based designer works great for building short Logic Apps or doing proofs of concept.
  • There is also a designer plug-in for Visual Studio. The Visual Studio plug-in is a great choice for authoring enterprise-grade Logic Apps; however, you will need to deploy them into an Azure subscription to run them.

There are a couple of different deployment options. Inside the web designer, you can clone your Logic App, move it to other resource groups, and even move it to other Azure subscriptions. If you are using the Visual Studio designer, you can create an Azure Resource Manager (ARM) template, and then use PowerShell to deploy your Logic Apps into your Azure subscription. Using ARM templates with PowerShell, it is very simple to deploy your Logic Apps into multiple Azure subscriptions, which makes it easy to roll Logic Apps out to multiple environments. Another benefit of ARM templates is that they can be checked into source control.
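
For illustration, here is a minimal sketch of what such an ARM template might look like; the workflow name, API version, and the empty workflow definition are placeholders rather than the exact template Visual Studio generates:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppName": { "type": "string", "defaultValue": "my-logic-app" }
  },
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2016-06-01",
      "name": "[parameters('logicAppName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "contentVersion": "1.0.0.0",
          "triggers": {},
          "actions": {},
          "outputs": {}
        }
      }
    }
  ]
}

You would then deploy the template with the usual PowerShell or Azure CLI resource-group deployment commands, passing a different parameter file per environment.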

Connectors

Connectors are pre-built, Microsoft-managed wrappers around APIs that greatly simplify interaction with those systems. Connections contain the environment-specific details a connector uses to connect to a data source, including destination and authentication details. Connections live at the resource group level and show up inside the Azure portal under the segment called API connections, where you will see all of the individual API connections you have created inside that resource group. Connections can be created using PowerShell, the web designer, or the Visual Studio designer. The connections that you create for your connectors can be managed independently of your Logic Apps: you can go under API connections inside the Azure portal and edit connection information. You will also see your connections show up underneath your Logic App, and the connections you create for your connectors can be shared across multiple Logic Apps.
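
As a hedged illustration, an API connection shows up in an ARM template as a resource of type Microsoft.Web/connections. The sketch below assumes an Office 365 Outlook connection; the name, display name, and exact property set will differ per connector:

{
  "type": "Microsoft.Web/connections",
  "apiVersion": "2016-06-01",
  "name": "office365-connection",
  "location": "[resourceGroup().location]",
  "properties": {
    "displayName": "Integration service account",
    "api": {
      "id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/', resourceGroup().location, '/managedApis/office365')]"
    }
  }
}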

Triggers

Now let's talk about the different ways to start a new instance of a Logic App. Triggers are used to initiate a Logic App, and they can be broken down into two main types: reactive and proactive.

  • Recurrence triggers: you specify a time interval on which the Logic App executes (see the sketch after this list).
  • Polling-based triggers: you specify a polling interval at which the Logic App wakes up and looks for new work to do, based on the connector you are using for the trigger.
  • Event-based triggers: the Logic App is triggered by events that happen inside Azure.
  • HTTP and webhook triggers, so inbound HTTP requests can start your Logic Apps.
  • You can write custom API apps and use them as triggers for your Logic Apps as well.
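
As a rough illustration, in the underlying workflow definition (code view) a recurrence trigger and an HTTP request trigger look something like the fragment below; the interval values and the empty request schema are placeholders:

"triggers": {
  "Recurrence": {
    "type": "Recurrence",
    "recurrence": {
      "frequency": "Minute",
      "interval": 15
    }
  },
  "manual": {
    "type": "Request",
    "kind": "Http",
    "inputs": {
      "schema": {}
    }
  }
}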

Actions

Actions are the building blocks of Logic Apps, and most actions come from connectors. There are a few different ways to add a new action to a Logic App. The first is at the end of the Logic App: underneath all the other actions you will see a button to add a new step, and clicking it lets you add a new action. If you are working with actions like the Scope or Condition action, you can add additional actions inside them; at the bottom of that action you will see an Add action button. The last way to add a new action is in between two existing actions: simply hover between them, a plus sign appears, and clicking it lets you insert an action between the two existing ones.

Once you add a new action, the Add action dialog pops up. It lets you search connectors and actions to find what you are looking for, and you can narrow the list by selecting All, Built-in, Connectors, Enterprise, or Custom. When you click on a connector, it lists all the actions that connector supports; most connectors have more than one action. If you do not select a connector, you can simply select Actions in the selection menu and scroll through all available actions. Since there are so many actions available, it is highly recommended to select the connector first and then browse the list of actions for that connector.
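
Behind the designer, each action is an entry in the JSON workflow definition. A hedged sketch of a single HTTP action (the action name and URI are placeholders) looks roughly like this; the runAfter property is what the designer maintains for you when you insert actions between existing steps:

"actions": {
  "Call_backend_API": {
    "type": "Http",
    "inputs": {
      "method": "GET",
      "uri": "https://example.com/api/orders"
    },
    "runAfter": {}
  }
}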

Flow Control

Flow control is used to control the sequence of logic through a Logic App. All flow-control actions are listed under the Control group.

  • The if-then condition is very popular. You supply a condition; if it evaluates to true, certain logic executes, and if it evaluates to false, different logic executes. Inside the Logic App Designer you have a rich condition editor where you can add multiple parameters to a single condition, and you can also switch to advanced mode and edit the condition in JSON if you need more control.
  • Switch allows you to branch on a value and define multiple cases based on that value. There is also a default case if none of the cases match.
  • For-each allows you to loop over an array. Any array from a preceding action is available for selection: you simply pick it in the "select an output from a previous step" box and the loop iterates over its elements. One thing to point out about the for-each inside a Logic App is that the default behavior is to run in parallel, so by default the Logic App executes 20 concurrent iterations over your array elements. To change this behavior, click the … on the right side, go to Settings, select override default, and adjust the degree of parallelism. Slide it all the way to the left, to 1, if you want the loop to run in sequence; the maximum is 50 concurrent executions (see the sketch after this list).
  • Do-until allows you to select a condition to evaluate and loop around a set of actions until that condition is true.
  • Scope actions can also be used inside a Logic App. The scope action groups multiple actions together so that evaluations can be done on the results of the group as a whole. This is useful if you want to ensure multiple steps succeed before you continue on in a Logic App. The scope shape returns the results of every action inside the scope as an array.
  • Terminate allows you to end the execution of your Logic App based on conditions you define in your workflow. When you add the Terminate action to your Logic App, you can set the status to Failed, Cancelled, or Succeeded, and you can also set the error code and error message used when terminating the Logic App.
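
For reference, here is a hedged sketch of how the for-each concurrency setting discussed above ends up in the workflow definition's code view. The action names and the source array expression are placeholders, not generated by the designer:

"For_each_order": {
  "type": "Foreach",
  "foreach": "@body('Get_orders')",
  "actions": {
    "Process_item": {
      "type": "Compose",
      "inputs": "@item()",
      "runAfter": {}
    }
  },
  "runAfter": {},
  "runtimeConfiguration": {
    "concurrency": {
      "repetitions": 1
    }
  }
}

Setting repetitions to 1 gives sequential processing; leaving runtimeConfiguration out entirely gives the default of 20 parallel iterations, up to the platform maximum of 50.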

MuleSoft: Designing Integration Applications Wisdom

In this blog I will go through best practices for designing integration applications: wisdom that I have garnered through projects, MuleSoft recommendations, reviews of MuleSoft projects, and discussions with MuleSoft specialists.

General

  • Connector retry / until-successful / retry logic should be present for all connections and connectors. This is an obvious one: networks and the internet have occasional disconnections, so you should always retry a few times before giving up and abandoning the operation.
  • High-volume processes should be coupled with the MuleSoft Batch framework and appropriate queuing mechanisms wherever necessary. This makes processing faster and more reliable, but be cautious about which queuing infrastructure you use: VM queues are mostly in-memory, which can cause out-of-memory issues.
  • Exceptions are logged to an agreed-upon location. Best, of course, is a ticketing system like ServiceNow, or regular logging with a log monitoring system like Splunk to collect the logs and issue warnings. Refrain from using email to send errors to support teams; things get messy with email and sometimes tracking is lost.
  • Long-running processes should provide a way to inspect progress to date. Usually this is done by sending notifications through a webhook/notification API or by pushing the progress to the logs, but it is important to have a way to see that, for example, 60% of the data load has been processed so far.
  • Processes are designed to be loosely coupled and promote reuse where possible. Adopt microservices sensibly: not too small and not too large.
  • Adopt the MuleSoft API-led connectivity approach sensibly. Aha, this is a tricky and controversial one. Many novice developers/architects follow the 3-layer API-led pattern (System API, Process API, Experience API) religiously without thinking of the consequences. There are times when all three tiers are required, and other times you only need two. For example, if the integration is a batch job that picks up files or records from a database and pushes them to Salesforce, you only need a System API layer and an integration layer (no need for Experience or Process API layers). See below for a summary of the API-led connectivity approach.
    • System APIs should expose a canonical schema (project or domain scope) when there is an identified canonical schema at the project, domain, or organization scope. Do not just replicate the source system API while removing system-specific complexities. I have seen implementations where the developers simply replicated the source system API, only swapping the source system's authentication for a different authentication scheme. That meant spending 1-4 weeks to develop and test an API that does nothing but replace the source system's authentication. As a manager, or from the client's side: why did we spend 4 weeks = 160 hours at $200 per hour = $32K to develop something that does not add $32K worth of value and will cost us more to maintain in the future? The reason we use middleware like MuleSoft to implement integrations is to make it easy to replace systems and to reduce vendor dependencies. Suppose, for example, we are integrating Salesforce, SAP, Workday, and Shopify, and after two years the corporation decides to replace SAP with Dynamics AX. If the System API for SAP merely exposed the SAP API with minor modifications for authentication, and the Dynamics AX System API does the same, then all the process and integration applications have to be changed and recoded. This is the main reason Enterprise Service Bus got such a bad reputation: bad implementations. As I wrote in my book "BizTalk the Practical Course" (http://www.lulu.com/shop/moustafa-refaat/biztalk-the-practical-course/paperback/product-4661215.html). Yes, I know this is MuleSoft, but the theory is the same. It is like Quick Sort in C#, Java, C++, Scala, or Python: you are still implementing the same algorithm, with the same theory, in a different tool. Read the full discussion in the preview, page 35.

  • When creating a canonical schema, stick to the project/domain scope and do not try to create a generic canonical schema for the whole organization; a hedged sketch of a project-scoped canonical type follows below.
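
As a hedged, made-up illustration of a project-scoped canonical type: a Customer definition shared by the project's System and Process APIs, deliberately limited to what this project needs rather than everything any system in the organization might ever store about a customer:

{
  "customerId": "C-10042",
  "name": {
    "first": "Jane",
    "last": "Doe"
  },
  "email": "jane.doe@example.com",
  "billingAddress": {
    "street": "1 Main St",
    "city": "Toronto",
    "country": "CA"
  }
}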

I cannot stress this enough: while MuleSoft promotes the three-tier structure and application API network, it does not always make sense to use this approach in every situation. Strive to design the integration architecture to be:

  1. Easy to maintain
  2. As modular as possible
  3. Any component that can be reused should be isolated into its own library or application

The MuleSoft API-led connectivity approach


API-led connectivity is a methodical way to connect data to applications through a series of reusable and purposeful modern APIs that are each developed to play a specific role – unlock data from systems, compose data into processes, or deliver an experience. API-led connectivity provides an approach for connecting and exposing assets through APIs. As a result, these assets become discoverable through self-service without losing control.

  • System APIs: In the example, data from SAP, Salesforce and ecommerce systems is unlocked by putting APIs in front of them. These form a System API tier, which provides consistent, managed, and secure access to backend systems.
  • Process APIs: Then, one builds on the System APIs by combining and streamlining customer data from multiple sources into a "Customers" API (breaking down application silos). These Process APIs take core assets and combine them with some business logic to create a higher level of value. Importantly, these higher-level objects are now useful assets that can be further reused, as they are APIs themselves.
  • Experience APIs: Finally, an API is built that brings together the order status and history, delivering the data specifically needed by the web app. These are Experience APIs that are designed specifically for consumption by a specific end-user app or device. These APIs allow app developers to quickly innovate on projects by consuming the underlying assets without having to know how the data got there. In fact, if anything changes in any of the systems or processes underneath, it may not require any changes to the app itself.

Defining the API data model

The APIs you have identified and started defining in RAML definitions exchange data representations of business concepts, mostly in JSON format. Examples are:

  • The JSON representation of the Policy Holder of a Motor Policy returned by the “Motor Policy Holder Search SAPI” (a sample payload is sketched after this list)
  • The XML representation of a Quote returned by the “Aggregator Quote Creation EAPI” to the Aggregator
  • The JSON representation of a Motor Quote to be created for a given Policy Holder passed to the “Motor Quote PAPI”
  • The JSON representation of any kind of Policy returned by the “Policy Search PAPI”
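
For instance, the Policy Holder returned by the “Motor Policy Holder Search SAPI” might be represented by a JSON document along these lines (the field names are illustrative assumptions, not taken from the actual RAML definitions):

{
  "policyHolderId": "PH-000123",
  "name": "Jane Doe",
  "dateOfBirth": "1985-04-12",
  "licenseNumber": "D1234-56789-00000",
  "address": {
    "street": "1 Main St",
    "city": "Toronto",
    "postalCode": "M5V 2T6"
  }
}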

All data types that appear in an API (i.e., the interface) form the API data model of that API. The API data model should be specified in the RAML definition of the API. API data models are clearly visible across the application network because they form an important part of the interface contract for each API.

The API data model is conceptually clearly separate from similar models that may be used inside the API implementation, such as an object-oriented or functional domain model, and/or the persistent data model (database schema) used by the API implementation. Only the API data model is visible to API clients in particular and to the application network in general – all other forms of models are not. Consequently, only the API data model is the subject of this discussion.

Enterprise Data Model versus Bounded Context Data Models

The data types in the API data models of different APIs can be more or less coordinated:

  • In an Enterprise Data Model – often called Canonical Data Model, but the discussion here uses the term Enterprise Data Model throughout – there is exactly one canonical definition of each data type, which is reused in all APIs that require that data type, within all of Acme Insurance
  • E.g., one definition of Policy that is used in APIs related to Motor Claims, Home Claims, Motor Underwriting, Home Underwriting, etc.
  • In a Bounded Context Data Model several Bounded Contexts are identified within Acme Insurance by their usage of common terminology and concepts. Each Bounded Context then has its own, distinct set of data type definitions – the Bounded Context Data Model. The Bounded Context Data Models of separate Bounded Contexts are formally unrelated, although they may share some names. All APIs in a Bounded Context reuse the Bounded Context Data Model of that Bounded Context
  • E.g., the Motor Claims Bounded Context has a distinct definition of Policy that is formally unrelated to the definition of Policy in the Home Underwriting Bounded Context (the two are contrasted in the sketch after this list)
  • In the extreme case, every API defines its own API data model. Put differently, every API is in a separate Bounded Context with its own Bounded Context Data Model.
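
To make the contrast concrete, here is a hedged, made-up sketch of how a Policy might look in two different Bounded Context Data Models; the two types share the name Policy but are formally unrelated. A Motor Claims Policy:

{
  "policyNumber": "MC-2019-0042",
  "vehicleVin": "1HGCM82633A004352",
  "coverage": "COLLISION",
  "excess": 500
}

versus a Home Underwriting Policy:

{
  "policyNumber": "HU-88217",
  "propertyAddress": "1 Main St, Toronto",
  "rebuildValue": 650000,
  "riskBand": "B"
}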

Abstracting backend systems with System APIs

System APIs mediate between backend systems and Process APIs by unlocking data in these backend systems:

  • Should there be one System API per backend system or many?
  • How much of the intricacies of the backend system should be exposed in the System APIs in front of that backend system? In other words, how much to abstract from the backend system data model in the API data model of the System APIs in front of that backend system?

General guidance:

  • System APIs, like all APIs, should be defined at a granularity that makes business sense and adheres to the Single Responsibility Principle.
  • It is therefore very likely that any non-trivial backend system must be fronted by more than one System API
  • If an Enterprise Data Model is in use, then
    • the API data model of System APIs should make use of data types from that Enterprise Data Model
    • the corresponding API implementation should translate between these data types from the Enterprise Data Model and the native data model of the backend system
  • If no Enterprise Data Model is in use, then
    • each System API should be assigned to a Bounded Context, and the API data model of System APIs should make use of data types from the corresponding Bounded Context Data Model
    • the corresponding API implementation should translate between these data types from the Bounded Context Data Model and the native data model of the backend system
    • In this scenario, the data types in the Bounded Context Data Model are defined purely in terms of their business characteristics and are typically not related to the native data model of the backend system. In other words, the translation effort may be significant
  • If no Enterprise Data Model is in use, and the definition of a clean Bounded Context Data Model is considered too much effort, then
    • the API data model of System APIs should make use of data types that approximately mirror those of the backend system
    • same semantics and naming as the backend system
    • but only for those data types that fit the functionality of the System API in question (backend systems are often Big Balls of Mud that cover many distinct Bounded Contexts)
    • lightly sanitized, e.g., using idiomatic JSON data types and naming, correcting misspellings, etc. (a hedged before/after sketch follows this list)
    • expose all fields needed for the given System API's functionality, but not significantly more, making good use of REST conventions
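
As a hedged before/after sketch of the "lightly sanitized" approach (the backend field names are SAP-style names used purely for illustration, not taken from any real project), a backend customer record such as:

{
  "KUNNR": "0000012345",
  "NAME1": "ACME PARTS LTD",
  "ORT01": "TORONTO",
  "ERDAT": "20190214"
}

might be exposed by the System API with the same semantics but idiomatic JSON naming and types:

{
  "customerNumber": "12345",
  "name": "Acme Parts Ltd",
  "city": "Toronto",
  "createdDate": "2019-02-14"
}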

The latter approach, i.e., exposing in System APIs an API data model that basically mirrors that of the backend system, does not provide satisfactory isolation from backend systems through the System API tier on its own. In particular, it will typically not be possible to “swap out” a backend system without significantly changing all System APIs in front of that backend system – and therefore the API implementations of all Process APIs that depend on those System APIs! This is so because it is not desirable to prolong the life of a previous backend system’s data model in the form of the API data model of System APIs that now front a new backend system. The API data models of System APIs following this approach must therefore change when the backend system is replaced. On the other hand:

  • It is a very pragmatic approach that adds comparatively little overhead over accessing the backend system directly
  • Isolates API clients from intricacies of the backend system outside the data model (protocol, authentication, connection pooling, network address, …)
  • Allows the usual API policies to be applied to System APIs
  • Makes the API data model for interacting with the backend system explicit and visible, by exposing it in the RAML definitions of the System APIs
  • Further isolation from the backend system data model does occur in the API implementations of the Process API tier

MuleSoft Application Modularization

Mule allows you to run applications side-by-side in the same instance. Each Mule application should represent a coherent set of business or technical functions and, as such, should be coded, tested, built, released, versioned and deployed as a whole. Splitting particular functions into individual applications allows a coarse-grained approach to modularity and is useful when keeping elements of your application running while others could go through some maintenance operations. For optimum modularity:

  • Consider what functions are tightly interrelated and keep them together in the same Mule application: they will form sub-systems of your whole solution.

  • Establish communication channels between the different Mule applications: the VM transport will not be an option here, as it can’t be used across different applications. Prefer the TCP or HTTP transports for synchronous channels and JMS for asynchronous ones

What are the Downsides of a Microservice based solution

A Microservice based solution has the following downsides:

  1. Distributing the application adds complexity for developers when they are designing and building the services and in testing and exception handling. It also adds latency to the system.
  2. Without a Microservice-oriented infrastructure, an application that has dozens of Microservice types and needs high scalability means a high degree of deployment complexity for IT operations and management.
  3. Atomic transactions between multiple Microservices usually are not possible. The business requirements must embrace eventual consistency between multiple Microservices.
  4. Increased global resource needs (total memory, drives, and network resources for all the servers or hosts). The higher degree of granularity and distributed services requires more global resources. However, given the low cost of resources in general and the benefit of being able to scale out just certain areas of the application compared to long-term costs when evolving monolithic applications, the increased use of resources is usually a good tradeoff for large, long-term applications.
  5. When the application is large, with dozens of Microservices, there are challenges and limitations if the application requires direct client-to-Microservice communications. When designing and building a complex application based on Microservices, you might consider the use of multiple API Gateways instead of the simpler direct client‑to‑Microservice communication approach.
  6. Deciding how to partition an end-to-end application into multiple Microservices is challenging. You need to identify areas of the application that are decoupled from the other areas and that have a low number of hard dependencies. Ideally, each service should have only a small set of responsibilities. This is like the single responsibility principle (SRP) applied to classes, which states that a class should only have one reason to change.

Implement a REST API in MuleSoft, Azure Logic Apps, ASP.NET Core, or Spring Boot? MuleSoft: Step 1 – Defining an API in RAML

I have been working lately on comparing different technologies for building web APIs. One of the main concerns was: if we wanted to build a simple API service, which technology would be easier and more productive to develop the service with? To provide a reference comparison, I will build the same web service (in MuleSoft, Azure Logic Apps, ASP.NET Core, and Spring Boot) and provide my notes as I go. The web service provides the following functionality:

  1. CRUD operations on an Authors entity
  2. CRUD operations on Books entities, where books are related to authors

All the Read (queries) should support:

  1. Filtering
  2. Searching
  3. Paging
  4. Support Levels Two and Three of the Richardson Maturity Model (see my previous post https://moustafarefaat.wordpress.com/2018/12/11/practical-rest-api-design-implementation-and-richardson-maturity-model/). This means that, based on the Accept header of the request, the results are returned as either:
    1. Pure JSON
    2. JSON with HATEOAS links

I will start with MuleSoft implementation.

Step 1. Define the API in RAML

With MuleSoft you get the Anypoint Platform and its Design Center, which helps you design the API RAML. There is an API Designer visual editor that can help you in the beginning.

Though it has some weaknesses, such as:

  1. Once you switch to the RAML editor, you cannot go back to the visual editor.
  2. You cannot define your own media types; you have to choose from the list.

To finalize the API definition in RAML I had to edit it manually, though the editor helped me get started. Below is a fragment of the API in RAML (the full solution will be published on my GitHub: https://github.com/RefaatM).

Notice in the RAML that I have defined two response media types for the GET operation of the Authors resource. The full RAML is at https://github.com/RefaatM/MuleSoftRestAPIExample/tree/master/src/main/resources/api.

#%RAML 1.0
title: GTBooks
description: |
  GTBooks Example
version: '1.0'
mediaType:
  - application/json
  - application/xml
protocols:
  - HTTP
baseUri: /api/v1.0

types:
  CreateAuthor:
    description: This is a new DataType
    type: object
    properties:
      Name:
        required: true
        example: Moustafa Refaat
        description: Author Name
        type: string
      Nationality:
        required: true
        example: Canadian
        description: Author Nationality
        type: string
      Date-of-Birth:
        required: true
        example: '2018-12-09'
        description: Author Date of Birth
        type: date-only
      Date-of-Death:
        required: false
        example: '2018-12-09'
        description: Author Date of Death
        type: date-only

  Author:
    description: This is a new DataType
    type: CreateAuthor
    properties:
      Id:
        required: true
        example: 1
        description: Author Id
        type: integer
      Age:
        required: true
        maximum: 200
        minimum: 8
        example: 10
        description: Author Age
        type: integer

  AuthorHateoas:
    description: Author with HATEOAS information LINKS
    type: Author
    properties:
      Links:
        required: true
        description: Property description
        type: array
        items:
          required: true
          type: Link

  Link:
    description: HATEOAS LINK
    type: object
    properties:
      href:
        required: true
        example: /Book/10
        description: URL Link
        type: string
      rel:
        required: true
        example: GetBook
        description: Operation
        type: string
      method:
        required: true
        example: GET
        description: 'HTTP Method GET, PUT, ...'
        type: string

/author:
  get:
    responses:
      '200':
        body:
          application/json:
            type: array
            items:
              type: Author
          application/hateaos+json:
            type: array
            items:
              type: AuthorHateoas
      '304': {}
      '400': {}
      '500': {}
    headers:
      Accept:
        example: 'application/json'
        description: application/json or application/hateaos+json
        type: string
    queryParameters:
      sort-by:
        required: false
        example: Example
        description: sort by
        type: string
      filteryby:
        required: false
        example: Example
        description: Property description
        type: string
 

(.. to be continued)

Practical REST API Design, implementation and Richardson Maturity Model

The Richardson Maturity Model classifies REST API maturity as follows:

  • Level Zero: These services have a single URI and use a single HTTP method (typically POST). For example, most Web Services (WS-*) based services use a single URI to identify an endpoint and HTTP POST to transfer SOAP-based payloads, effectively ignoring the rest of the HTTP verbs. Similarly, XML-RPC based services send data as Plain Old XML (POX). These are the most primitive ways of building SOA applications, with a single POST method and XML to communicate between services.
  • Level One: These services employ many URIs but only a single HTTP verb – generally HTTP POST. They give each individual resource in their universe a URI. Every resource is separately identified by a unique URI – and that makes them better than level zero.
  • Level Two: Level Two services host numerous URI-addressable resources. Such services support several of the HTTP verbs on each exposed resource – Create, Read, Update and Delete (CRUD) services. Here the state of resources, typically representing business entities, can be manipulated over the network, and the service designer expects people to put some effort into mastering the APIs, generally by reading the supplied documentation. Level Two is the good use case of REST principles, which advocate using different verbs based on the HTTP request method, and the system can have multiple resources.
  • Level Three: Level three of maturity makes use of URIs and HTTP and HATEOAS. This is the most mature level of Richardson’s model which encourages easy discoverability and makes it easy for the responses to be self-explanatory by using HATEOAS. The service leads consumers through a trail of resources, causing application state transitions as a result.

Where HATEOAS (Hypermedia as the Engine of Application State) is a constraint of the REST application architecture that keeps the RESTful style architecture unique from most other network application architectures. The term “hypermedia” refers to any content that contains links to other forms of media such as images, movies, and text. This architectural style lets you use hypermedia links in the response contents so that the client can dynamically navigate to the appropriate resource by traversing the hypermedia links. This is conceptually the same as a web user navigating through web pages by clicking the appropriate hyperlinks to achieve a final goal. Like a human’s interaction with a website, a REST client hits an initial API URI and uses the server-provided links to dynamically discover available actions and access the resources it needs. The client need not have prior knowledge of the service or the different steps involved in a workflow. Additionally, the clients no longer have to hard code the URI structures for different resources. This allows the server to make URI changes as the API evolves without breaking the clients.

Naturally you would want to build to the highest standard and provide a Level Three REST API. That would mean providing a links field, as in the following example from the GT-IDStorm API.

As you can see from this sample, the data payload is huge compared to the actual data returned.

{
  "value": [
    {
      "id": "63b2c70e-2bcb-4335-9961-3d14be642163",
      "name": "Entity-1",
      "description": "Testing Entity 1",
      "links": [
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163", "rel": "self", "method": "GET" },
        { "href": null, "rel": "get_entitydefinition_byname", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/full", "rel": "get_full_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163", "rel": "delete_entitydefinition", "method": "DELETE" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/attributes", "rel": "create_attribute_for_entitydefinition", "method": "POST" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/attributes", "rel": "get_attributes_for_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/systems", "rel": "create_system_for_entitydefinition", "method": "POST" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/systems", "rel": "get_system_for_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/data", "rel": "get_data_for_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/data/GetEntityDataWithMissingSystems", "rel": "get_data_WithMissingSystems_for_entitydefinition", "method": "GET" }
      ]
    },
    {
      "id": "54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
      "name": "Entity-10",
      "description": "Testing Entity 10",
      "links": [
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91", "rel": "self", "method": "GET" },
        { "href": null, "rel": "get_entitydefinition_byname", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/full", "rel": "get_full_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91", "rel": "delete_entitydefinition", "method": "DELETE" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/attributes", "rel": "create_attribute_for_entitydefinition", "method": "POST" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/attributes", "rel": "get_attributes_for_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/systems", "rel": "create_system_for_entitydefinition", "method": "POST" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/systems", "rel": "get_system_for_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/data", "rel": "get_data_for_entitydefinition", "method": "GET" },
        { "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/data/GetEntityDataWithMissingSystems", "rel": "get_data_WithMissingSystems_for_entitydefinition", "method": "GET" }
      ]
    }
  ],
  "links": [
    { "href": "https://localhost:44379/api/v1/entity?orderBy=Name&searchQuery=Testing%20Entity%201&pageNumber=1&pageSize=10", "rel": "self", "method": "GET" }
  ]
}

For example, if we remove the HATEOAS requirement, the data returned for the same query would be the much smaller payload below. Returning less data has a huge impact on the performance of the system as a whole: less traffic on the network, and less data for the clients and servers to process and manipulate.

[
  {
    "id": "63b2c70e-2bcb-4335-9961-3d14be642163",
    "name": "Entity-1",
    "description": "Testing Entity 1"
  },
  {
    "id": "54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
    "name": "Entity-10",
    "description": "Testing Entity 10"
  }
]

I usually implement the API to accept an Accept header with multiple options:

  • application/json: returns just the data
  • application/hateoas+json: returns the data with the HATEOAS (links) data.

I also implement another resource or operation that provides the link structures, as sketched below.
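
A hedged sketch of what that root "links" resource might return for the GT-IDStorm API above; the relation names other than delete_entitydefinition are invented for illustration and are not taken from the actual API:

{
  "links": [
    { "href": "https://localhost:44379/api/v1/entity", "rel": "get_entitydefinitions", "method": "GET" },
    { "href": "https://localhost:44379/api/v1/entity", "rel": "create_entitydefinition", "method": "POST" },
    { "href": "https://localhost:44379/api/v1/entity/{id}", "rel": "get_entitydefinition", "method": "GET" },
    { "href": "https://localhost:44379/api/v1/entity/{id}", "rel": "delete_entitydefinition", "method": "DELETE" }
  ]
}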

In Conclusion

I would recommend implementing the API to:

  • Support Level Two and Level Three at the same time by using the Accept header of the request:
    • application/json: returns just the data
    • application/hateoas+json: returns the data with the HATEOAS (links) data.
  • Implement another resource or the root that would return the URLs (structures and operations) that are supported by the API.

As I found, supporting only HATEOAS makes the system pay a heavy performance price, especially with large data loads, while very few clients, if any, utilize the links returned. I would love to hear your thoughts and experience on APIs with HATEOAS.

Microservices Interview Questions: 90 Technical Questions with Answers

Just got this Book published on Amazon Kindle check it out at https://www.amazon.ca/dp/B07KMD77YB/ref=sr_1_4?ie=UTF8&qid=1542400692&sr=8-4&keywords=microservices+interview+questions

Wisdom is learning all we can but having the humility to realize that we do not know it all. Microservices Interview Questions: 90 Technical Questions with clear and concise answers will help you gain more wisdom in Microservices interviews. The difference between a great Microservices consultant and someone who kind of knows some stuff is how you answer the interview questions in a way that shows how knowledgeable you are. The 90 questions I have assembled are for job seekers (junior/senior developers, architects, team/technical leads) and interviewers.

Microservices Interview Questions are grouped into:

  • General Questions
  • Design Patterns Questions
  • API Design Questions
  • Containers and Orchestrations Questions.

Increase your earning potential by learning, applying and succeeding. Learn the fundamentals relating to Microservices based Application architecture in an easy to understand questions and answers approach. It covers 90 realistic interview Questions with answers that will impress your interviewer. A quick reference guide, a refresher and a roadmap covering a wide range of microservices architecture related topics & interview tips.

Sample Questions

  1. Why a Microservices architecture?


Microservices Architecture provides long-term agility. Microservices enable better maintainability in complex, large, and highly-scalable systems by letting you create applications based on many independently deployable services that each have granular and autonomous lifecycles. And Microservices can scale out independently.


Instead of having a single monolithic application that you must scale out as a unit, you can instead scale out specific Microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that do not need to be scaled. That means cost savings. Microservices approach allows agile changes and rapid iteration of each Microservice. Architecting fine-grained Microservices-based applications enables continuous integration and continuous delivery practices. It also accelerates delivery of new functions into the application. Fine-grained composition of applications also allows you to run and test Microservices in isolation, and to evolve them autonomously while maintaining clear contracts between them. As long as you do not change the interfaces or contracts of a Microservice, you can change the internal implementation of any Microservice or add new functionality without breaking other Microservices that use it.

  2. What is Eventual Consistency?

Eventual consistency is an approach that allows you to implement data consistency within a Microservices architecture. It focuses on the idea that the data within your system will eventually be consistent; it doesn't have to be immediately consistent. For example, in an e-commerce system, when a customer places an order, do you really need to immediately carry out all the transactions (stock availability, charging the customer's credit card, etc.)? Certain data updates can be eventually consistent, in line with the initial transaction that was triggered. This approach is based on the BASE model (Basic Availability, Soft state, Eventual consistency): data updates can be more relaxed, not every update has to be applied to the data immediately, and slightly stale data giving approximate answers is sometimes okay. The BASE model contrasts with the ACID model, where all data related to the transaction must be updated immediately as part of the transaction. The system becomes more responsive because certain updates are done in the background rather than as part of the immediate transaction. The eventual consistency approach is highly useful for long-running tasks. One thing to note is that, depending on the patterns you use, the actual time it takes for the data to become consistent will not be days, hours, or minutes; it will potentially be seconds. Eventual data consistency across your Microservices architecture that happens within seconds is acceptable because of the gains you get in performance and responsiveness across your system. Eventual consistency using the right patterns can be nearly immediate, and preparing for inconsistencies and dealing with race conditions might not actually be such a huge task. The traditional approach to eventual consistency has involved data replication. Another approach is event-based, which works by raising events as part of transactions and actions in an asynchronous fashion, as messages placed on message brokers and queues (a hedged example event follows).
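
As a hedged illustration of the event-based approach mentioned above, an order service might publish an event message like the following to a broker topic, and downstream services (stock, payments, notifications) would consume it and update their own data in the background. The field names and values are invented for illustration:

{
  "eventType": "OrderPlaced",
  "eventId": "7f3b2c1e-9a40-4d2e-8f11-2b6f0c9d1a55",
  "occurredAt": "2019-06-01T14:32:05Z",
  "payload": {
    "orderId": "ORD-10293",
    "customerId": "C-10042",
    "totalAmount": 149.99,
    "currency": "CAD"
  }
}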

  3. How to approach REST API Design?

    1. First, focus on the business entities that the web API exposes and the CRUD operations needed on them. Creating a new entity record can be achieved by sending an HTTP POST request that contains the entity information; the HTTP response indicates whether the record was created successfully or not. When possible, resource URIs should be based on nouns, not verbs (the operations on the resource). A resource does not have to be based on a single physical data item.
    2. Avoid creating APIs that simply mirror the internal structure of a database. The purpose of REST is to model entities and the operations that an application can perform on those entities. A client should not be exposed to the internal implementation.
    3. Entities are often grouped together into collections (orders, customers). A collection is a separate resource from the item within the collection, and should have its own URI.
    4. Sending an HTTP GET request to the collection URI retrieves a list of items in the collection. Each item in the collection also has its own unique URI. An HTTP GET request to the item’s URI returns the details of that item.
    5. Adopt a consistent naming convention in URIs. In general, it helps to use plural nouns for URIs that reference collections. It’s a good practice to organize URIs for collections and items into a hierarchy.
    6. You should provide navigable links to associated resources in the body of the HTTP response message. Avoid requiring resource URIs more complex than collection/item/collection.
    7. Try to keep URIs relatively simple. Once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource.
    8. Try to avoid “chatty” web APIs that expose many small resources. Such an API may require a client application to send multiple requests to find all of the data that it requires. Instead, you might want to denormalize the data and combine related information into bigger resources that can be retrieved with a single request. However, you need to balance this approach against the overhead of fetching data that the client doesn’t need. Retrieving large objects can increase the latency of a request and incur additional bandwidth costs.
    9. Avoid introducing dependencies between the web API and the underlying data sources. For example, if your data is stored in a relational database, the web API doesn’t need to expose each table as a collection of resources.

It might not be possible to map every operation implemented by a web API to a specific resource. You can handle such non-resource scenarios through HTTP requests that invoke a function and return the results as an HTTP response message. For example, a web API that starts some operation such as run validations could provide URIs that expose these operations as pseudo resources and use the query string to specify the parameters required.

Choosing a Platform for a new microservice

Microservices endorse using different technologies to build the components of a solution or a system: you can have components based on Java or Scala with Spring, or on C# and .NET, and so on. But if you are going to build a new microservice, which platform should you use? Most developers or architects choose based on their background: if they came from Java they choose Java, and if they came from .NET they choose .NET. This kind of reminds me of an old Dilbert comic in which Dilbert tells the marketing manager that he can draw a fish on the screen. It sounds to me like most architects choose a platform because they are familiar with it, not because of its merits. I am working on devising a decision tree to select which platform is more suitable for a microservice. So far, if you are building a microservice to interact with Hadoop/Spark or Kafka, Java/Scala/Spring is the platform to go with, though there are plenty of libraries that make it easy to develop such microservices in .NET. I am still working on this but thought I should share it. Let me know your thoughts and why you would choose one platform over another.

Java/Scala/Spring MVC

  • Pros:

    • Respected in the business
    • Widespread
    • Constrained coding style might lead to easier to read code
    • Better ecosystem for libraries and tools
  • Cons:
    • Verbose, both with the language and framework
    • Does not have as good support for threading and asynchrony as C# has with async/await

C#/ASP.NET Core

  • Pros:
    • Asynchrony-oriented
    • Better generics, and integration with LINQ
    • Better IDE integration, and configuration setup
    • Quicker to get things up and running in
    • Able to be component oriented as well as MVC
    • Availability of other programming styles than OOP, including functional.
    • More low-level constructs allowing optimization
  • Cons:
    • Newer, slightly buggy
    • Not as battle-tested as Java
    • Heavy IDE (although you can use Visual Studio Code, too)

Microservices, TOGAF, Solution Blueprint and API Design


While providing microservices architecture consulting services, I found it necessary to extract the Interface Design from the Application Architecture. The Interface Design is usually part of the application architecture; however, the interface design, both UI and API, is of such importance and value to the solution and to the organization's offerings that it deserves a separate section of its own. With the emergence of microservices, many companies offer their services through APIs to allow clients to hook in and utilize their services, or, as business/marketing people refer to it, to "monetize the API". I think it is paramount to give the interface design more attention, and specifically the API design. That is why I defined this new Interface Architecture phase in my customized TOGAF offering to the organization.

  • Business Architecture: The business strategy, governance, organization, and key business processes.
  • Interface Architecture: A blueprint for the individual application interfaces, both User Interface and Application Programming Interface, and their relationships to the core business processes of the organization.
  • Application Architecture: A blueprint for the individual application components to be deployed, their interactions, and their relationships to the core business processes of the organization.
  • Data Architecture: The structure of the application logical and physical data assets and data management resources.
  • Technology Architecture: The logical software and hardware capabilities that are required to support the deployment of business, data, and application services. This includes IT infrastructure, middleware, networks, communications, processing, and standards.

 

I would like to know your thoughts on this, and how you handled the API design in the Solution Architecture Blueprint.

Enterprise Architecture and Domain Driven Design – Build Custom Business Logic as Microservice or part of the COTS Application?

As enterprise architects, depending on the organization, we adopt ready-made software applications (COTS, commercial off-the-shelf) and tailor or customize the software to the needs of the organization. The tailoring can involve UI changes, workflow changes, integrations with other systems in the organization, and sometimes added or modified business logic. The UI changes usually must happen in the COTS application, and so must workflow changes, unless you are building a UI façade for the application. I saw one organization build a completely different web app to give its employees access to work shift schedules, because the time and attendance system could not handle access by hundreds of thousands of employees, and to save on licensing and infrastructure costs. This leaves us with integrations and business logic. In this installment I will address business logic.

Should we build the custom Business Logic as Microservice or part of the COTS Application?

Aha, this is a tough decision. I prefer to put all customizations outside the software package, whether in microservices or as part of the ESB. I want the freedom to keep the business logic as platform-independent as possible, and moving the custom business logic out into web services implemented as microservices, or as part of the ESB, accomplishes that. There are political aspects to this issue too. First, the COTS vendor wants to have all the business logic customization in its application for the following reasons:

  1. The vendor wants to increase the $$$ from the custom development services they will provide.
  2. The vendor can enrich its application offerings and business logic by taking this customized business logic and generalizing it in future releases.
  3. The more customizations and business logic are embedded in the vendor's application, the harder it is for the organization to switch to a competitor, as the cost of redoing the customizations from scratch adds up.

The delivery department would prefer not to take ownership of new software components, especially if the organization does not have many software developers and depends on consulting companies.

My argument is that all packaged software, be it an ERP system like Dynamics AX or a time and attendance system, is generic: the vendor offers it to your organization and to your competitors. What differentiates your organization from your competitors is the custom processes and business logic your organization uses. If you let the vendor build these customizations into their software, you end up subsidizing your competitors' upgrades, and maybe giving away your edge. That is why I tend to recommend extracting any custom business logic, new or modified, and building it outside the COTS. Sometimes this is not possible due to COTS platform limitations or organizational standards, but whenever it is possible, extract any custom business logic outside of the COTS.

Let me know your thoughts on that.

Microservices Simplified

In this blog, I will share my thoughts on how to architect complex software using the microservices architecture, so that it is flexible, scalable, and competitive. I will start by introducing microservices: what came before microservices, why microservices are so successful and useful now, and the design principles associated with the microservices architecture.

What Is a Service?

A service is a piece of software that provides functionality to other components of software within your system. The other pieces of software could be anything from a website to a mobile app or a desktop app, or even another service, and the communication between those software components and the service normally happens using some kind of communication protocol. A system that uses one or more services in this fashion is said to have a service-oriented architecture, normally abbreviated as SOA. The main idea behind SOA is that, instead of building all the functionality into one big application, you use a service to provide a subset of the functionality, or just one function, to the application. This allows many applications to use the same code, and in the future newer or different types of systems can connect to the same service and reuse that functionality. As a software architecture, SOA has been successful: it allows us to scale and to reuse functionality.

A key characteristic of service-oriented architecture is the idea of standardized contracts or interfaces. When a client application calls the service, it does so by calling a method. The signature of that method normally doesn't change when the service changes, so you can upgrade a service without having to upgrade the clients, as long as the contract and the interface, i.e., the signature of the method, don't change. A service is also stateless: when a request comes in from a website to our service, that instance of the service does not have to remember previous requests from that specific client; the request contains all the information the service needs to retrieve any data associated with previous requests. The microservices architecture is basically an improved version of service-oriented architecture, or in other terms SOA done the right way. The microservices architecture shares all the key characteristics of service-oriented architecture: scalability, reusability, standardized contracts and interfaces for backwards compatibility, and the idea of a stateless service.

Microservices Introduction

The microservices architecture is basically service-oriented architecture done well. Microservices introduce a set of additional design principles for sizing a service correctly. Because there was no guidance in the past on how to size a service and what to include in it, traditional service-oriented architecture often resulted in large, monolithic services, and because of their size these services became inefficient to scale and hard to change. Smaller services, i.e., microservices, are more efficiently scalable, more flexible, and able to deliver high performance in the areas where performance is required. An application based on the microservices architecture is normally powered by multiple microservices, each of which provides a set of related functions to a specific part of the application. A microservice normally has a single focus: it does one thing and it does it well.

The microservices architecture also uses lightweight communication mechanisms between clients and services and from service to service. The communication mechanism has to be lightweight and quick, because a transaction in a microservices-based system is a distributed transaction completed by multiple services, so the services need to communicate with each other quickly and efficiently over the network. The application interface for a microservice also needs to be technology-agnostic: the service should use an open communication protocol so that it does not dictate the technology the client application has to use. By using open communication protocols, such as HTTP REST, we could easily have a .NET client application talk to a Java-based microservice. In a monolithic service you are also likely to have a central database used to share data between applications and services; in a microservices architecture, each microservice has its own data storage.

Another key characteristic of a microservice is that it is independently changeable: I can upgrade, enhance, or fix a specific microservice without changing any of the clients or any of the other services within the system. Because microservices are independently changeable, they also need to be independently deployable: after modifying one microservice, I should be able to deploy that change independently, without deploying anything else. As already mentioned, a transaction in such a system is most likely completed by multiple distributed services, so it is a distributed transaction. And because a microservices-based system has so many moving parts, there is a need for centralized tooling to manage the microservices: you need a tool that helps you manage and see the health of your system, because there are so many moving parts.

One of the reasons for adopting the microservices architecture now is the need to respond to change quickly. The software market is very competitive nowadays; if your product can't provide a feature that's in demand, it will lose market share very quickly. Microservices let you split a large system into parts so that you can upgrade and enhance individual parts in line with market needs. And not only do we need to change parts of our system quickly, we also need to change them reliably, and microservices provide this reliability by splitting your system into many parts, so that if one part breaks it won't break the entire system.

There is also a need for business domain-driven design: the architecture of our application needs to match the organization structure, or the structure of the business functions within the organization. Another reason why the microservices architecture is now possible is that we now have automated test tools. We've already seen that in a microservices architecture transactions are distributed, so a transaction will be processed by several services before it's complete. The integration between those services therefore needs to be tested, and testing these microservices together manually can be quite a complex task. The good news is that automated test tools can exercise the integration between our microservices for us, and this is a big part of why the microservices architecture is now practical. Release and deployment of microservices can also be complex, but here too we have centralized tools that can carry out this function.

Another reason to adopt the microservices architecture is the need to adopt new technology. Because our system is now in several moving parts, we can easily move one part, i.e. a single microservice, from one technology stack to another in order to gain a competitive edge. Another advancement that makes microservices possible is asynchronous communication technology. In a microservices architecture a distributed transaction might involve several services before it completes; using asynchronous communication, the transaction does not have to wait for each individual service to finish its task before moving on.
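
Here is a minimal sketch of that asynchronous style of communication: the caller publishes a message and carries on rather than waiting for the downstream service. An in-process queue stands in for a real message broker, and the service names are illustrative only.

```python
# A minimal sketch of asynchronous, message-based communication between
# services. The caller publishes a message and continues; a worker consumes
# it later. queue.Queue stands in for a real broker (an assumption).
import queue
import threading
import time

message_bus = queue.Queue()

def shipping_service_worker():
    # The downstream service consumes messages at its own pace.
    while True:
        order_id = message_bus.get()
        time.sleep(0.5)  # simulate slow processing
        print(f"shipping-service: dispatched order {order_id}")
        message_bus.task_done()

threading.Thread(target=shipping_service_worker, daemon=True).start()

def place_order(order_id: str) -> None:
    # The ordering service publishes and returns immediately.
    message_bus.put(order_id)
    print(f"order-service: accepted order {order_id} (not waiting for shipping)")

place_order("1001")
message_bus.join()  # only here so the demo waits before exiting
```

In practice the queue would be an external broker, so the two services could be deployed and scaled independently of each other.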

The Key Benefits of the Microservices Architecture

  1. Shorter development times. Because the system is split up into smaller moving parts, you can work on each part individually and have teams working on different parts concurrently. And because microservices are small and have a single focus, each team has less to worry about in terms of scope: they know the one thing they're working on has a defined scope, and there's no need to worry about the entire system as long as they honor the contracts between the services.
  2. More reliable and faster deployment. Because the services are loosely coupled, developers can rework, change, and deploy individual components without deploying or affecting the entire system, so deployment is more reliable and faster. Shorter development times and faster, more reliable deployment also enable frequent updates, and as we've already mentioned, frequent updates can give you a competitive edge in the marketplace. The microservices architecture also allows us to decouple the parts that change most. For example, if we know the user interface of our system changes quite often, then in a microservices architecture the UI is most likely decoupled from all the services in the background, so you can change it independently of those services.
  3. Enhanced security. In a monolithic system you might have one central database with one system accessing it, so an attacker only needs to compromise that one system to gain access to the data. In the microservices architecture, each microservice has its own database and can also have its own security mechanism, so the data is distributed and therefore harder to compromise in one go.
  4. Increased uptime. When it comes to upgrading the system, you will probably deploy one microservice at a time without affecting the rest of the system. And because the system is split up by business domain and business function, when a problem arises we can quickly identify which service is responsible for that business function and resolve the problem within that microservice.
  5. Highly scalable, with better performance. When a specific part of the system is in demand, we can scale just that part up, instead of scaling up the whole system.
  6. Better support for distributed teams. We can give ownership of a microservice to a particular development team, so there's clearer ownership and deeper knowledge of that microservice. Microservices also allow us to use the right technology for specific parts of the system, and because each microservice is separate from the others, doesn't share a database, and has its own code base, microservices can easily be worked on concurrently by distributed teams. In the next section we'll look at the design principles that enable microservices and the benefits we get from them.

Microservices Design Principles

High Cohesion

A microservice's content and functionality, in terms of input and output, must be coherent: it must have a single focus, and the thing it does within that focus it should do well. The idea of a microservice having a single focus or a single responsibility is taken from the SOLID coding principles, where the single responsibility principle states that a class should have only one reason to change, and the same principle is applied to microservices. It's a useful principle because it lets us control the size of the service, so we won't accidentally create a monolithic service by attaching unrelated behaviors to the microservice.

Because the high cohesion principle controls the size and scope of a microservice, the microservice is easily rewritable: we're likely to have less attachment to a smaller code base, and there will be fewer lines of code to rewrite because the microservice is so small. And if all our microservices have high cohesion, it makes our overall system highly scalable, flexible, and reliable. The system is more scalable because we can scale up the individual microservices representing a business function or business domain that is in demand, instead of scaling up the whole system. It's more flexible because we can change or upgrade the functionality of specific business functions or business domains within our system. And it's more reliable because we're changing specific small parts of the system without affecting the other parts.
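
As a small, hypothetical sketch of the single responsibility idea applied at the service level, the promotions logic below stays narrowly focused, and an unrelated capability such as sending notifications lives elsewhere rather than being bolted on.

```python
# A minimal sketch of single responsibility at the service level.
# The class names and data are illustrative only.

class PromotionsService:
    """One focus: anything to do with promotions, and nothing else."""

    def __init__(self) -> None:
        self._promotions = {"SUMMER10": 0.10}

    def discount_for(self, code: str) -> float:
        return self._promotions.get(code, 0.0)

# Notifying customers is a different business capability, so it lives in a
# separate service rather than being attached to PromotionsService, which
# would slowly grow it into a monolith.
class NotificationService:
    def send(self, recipient: str, message: str) -> None:
        print(f"to {recipient}: {message}")

if __name__ == "__main__":
    promos = PromotionsService()
    print(promos.discount_for("SUMMER10"))  # 0.1
```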

Autonomous

Microservices should also be autonomous. Autonomous means a microservice should not have to change because of an external system it interacts with or an external system that interacts with it; there should be loose coupling between the microservices, and between the microservices and the clients that use them. A microservice should also be stateless: there should be no need to remember previous interactions that clients may have had with this or other service instances in order to carry out the current request. And because microservices honor contracts and interfaces to other services and clients, they should be independently changeable and independently deployable; after a change or enhancement, a microservice should just slot back into the system, even though it now has a newer version than the other components. This also ensures the service is always backwards compatible. Having clearly defined contracts between services also means that microservices can be developed concurrently by several teams: because there's a clear definition of the input and output of each microservice, separate teams can work on separate microservices, and as long as they honor the contracts, development should go smoothly.
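
One way to picture a backwards-compatible contract is an additive change to a response: a newer version of the service adds a field but removes or renames nothing, so existing clients keep working. The field names below are illustrative assumptions.

```python
# A minimal sketch of keeping a service contract backwards compatible while
# the service evolves. Version 2 only adds an optional field, so a client
# that only knows about version 1 keeps working unchanged.
import json

def get_customer_v1(customer_id: str) -> str:
    return json.dumps({"id": customer_id, "name": "Ada"})

def get_customer_v2(customer_id: str) -> str:
    # Same fields as before plus one new optional field; nothing removed.
    return json.dumps({"id": customer_id, "name": "Ada", "loyaltyTier": "gold"})

def legacy_client(payload: str) -> str:
    # A v1 client only reads the fields it knows about.
    data = json.loads(payload)
    return f"{data['id']}: {data['name']}"

print(legacy_client(get_customer_v1("42")))
print(legacy_client(get_customer_v2("42")))  # still works after the upgrade
```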

Business Domain Centric

A microservice should also be business domain centric, meaning the service should represent a business function. The overall idea is to have a microservice represent a business function or a business domain, i.e. a part of the organization, because this helps scope the service and control its size. This idea is taken from domain-driven design: you define a bounded context, which contains all the functionality related to a specific part of the business, a business domain, or a business function, and you define that bounded context by identifying boundaries and seams within the code, highlighting the areas where related functionality lives. There will be times when code relates to two different bounded contexts, and that's when we need to move the code around so it ends up where it makes sense and belongs in terms of business function or business domain; remember, we're aiming for high cohesion. Making our microservices business domain centric also makes them responsive to business change: as the business, the organization, or functions within the business change, our microservices can change in the same way, because the system is broken up into individual, business-domain-centric parts, and we can change just the parts that relate to the specific areas of the organization which are changing.
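
A rough sketch of bounded contexts in code might look like the following: two parts of the business each hold their own model of a customer, shaped by what that business function actually needs. The contexts and fields are hypothetical examples.

```python
# A minimal sketch of two bounded contexts, each with its own model of a
# "customer". Context and field names are hypothetical.
from dataclasses import dataclass

# --- Sales bounded context ---------------------------------------------
@dataclass
class SalesCustomer:
    customer_id: str
    credit_limit: float          # what the sales function cares about

# --- Shipping bounded context ------------------------------------------
@dataclass
class ShippingCustomer:
    customer_id: str
    delivery_address: str        # what the shipping function cares about

# Code that mixes credit checks with delivery routing would straddle both
# contexts; that's the signal to move it to the domain where it belongs.
```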

Resilience

Microservices should be resilient: they embrace failure when it happens. Failure might be another service not responding to your service, a connection to another system going down, or a third-party system failing to respond. Whatever the type of failure, our microservice needs to embrace it by degrading its own functionality or by using default functionality. An example of degrading functionality might be a scenario where a user interface microservice draws an HTML page showing orders and promotions, but the promotions microservice is down and fails to respond, so the user interface microservice degrades that functionality and simply chooses not to display the promotions on the page. Another way of making microservices more resilient is to run multiple instances that register themselves as they start up and deregister themselves if they fail, so our system, load balancers, and so on are only ever aware of fully functioning instances.

We also need to be aware that there are different types of failures: there might be exceptions or errors within a microservice, delays in replying to a request, or complete unavailability of a microservice, and this is where we need to work out whether to degrade functionality or fall back to default functionality. Failures are not limited to the software itself, either. You might have network failures, and remember we're using distributed transactions here, where one transaction might go across the network and use several services before it completes, so we also need to make our microservices resilient to network delays and unavailability. Finally, when our microservices are called, we need to validate the input they receive as part of that request, whether it comes from services or from clients, so that they don't fall over because they've received something in an incorrect format.
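
The promotions scenario above can be sketched as a simple fallback: if the promotions call fails or times out, the page is rendered without promotions rather than failing entirely. The URL, timeout, and data are illustrative assumptions.

```python
# A minimal sketch of degrading functionality when a dependency fails,
# along the lines of the promotions example. URL and data are hypothetical.
import urllib.request

def fetch_promotions() -> list:
    try:
        with urllib.request.urlopen("http://localhost:9090/promotions",
                                    timeout=2) as resp:
            return resp.read().decode().splitlines()
    except OSError:
        # Embrace the failure: fall back to "no promotions" rather than
        # letting the whole page fail.
        return []

def render_orders_page(orders: list) -> str:
    promotions = fetch_promotions()
    html = "<h1>Your orders</h1>" + "".join(f"<p>{o}</p>" for o in orders)
    if promotions:  # only rendered when the promotions service responded
        html += "<h2>Promotions</h2>" + "".join(f"<p>{p}</p>" for p in promotions)
    return html

print(render_orders_page(["Order 1001", "Order 1002"]))
```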

Observable

We need a way to observe our system's health: its status, its logs, i.e. the activity currently happening in the system, and the errors currently occurring in it. This monitoring and logging needs to be centralized so there is one place to go to view information about the system's health. We need this level of centralized monitoring and logging because we now have distributed transactions: for a transaction to complete it must go across the network and use several services, so knowing the health of the system is vital. This kind of data is also useful for quick problem solving, because the whole system is distributed and there's a lot going on, and we need a quick way of working out where a potential problem lies.
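
One way to sketch observable behaviour is structured log output that always carries the service name and a correlation id, so that entries from one distributed transaction can be stitched together by a centralized logging tool. The field names below are assumptions, not a prescribed schema.

```python
# A minimal sketch of structured, centralization-friendly logging: every
# entry carries the service name and a correlation id so one distributed
# transaction can be traced across services. Field names are illustrative.
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("order-service")

def log_event(event: str, correlation_id: str, **fields) -> None:
    log.info(json.dumps({
        "service": "order-service",
        "correlationId": correlation_id,
        "event": event,
        **fields,
    }))

correlation_id = str(uuid.uuid4())   # would normally arrive with the request
log_event("order.received", correlation_id, orderId="1001")
log_event("order.forwarded", correlation_id, target="shipping-service")
```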

Automation

Automation comes in the form of tools, for example tools to reduce the testing burden. Automated testing reduces the amount of time required for manual regression testing, the time taken to test the integration between services and clients, and the time taken to set up test environments. Remember, in a microservices architecture our system is made up of several moving parts, so testing can be quite complex, and this is where we need tools to automate some of that testing. We need automated testing tools that give us quick feedback, so that as soon as I change a microservice and check that change into our source control system, the tests run and tell me whether anything is broken.

As well as automation tools to help with testing, we need automation tools to help with deployment: a tool which provides a pipeline to deployment. It gives our microservice a deployment-ready status, so when you check a change in and the tests pass, the deployment status is set to ready, and the tool knows that this build of the microservice is now ready for deployment. Not only does this tool provide a pipeline with a status for each deployable build of a microservice, it also provides a way of physically moving the build to the target machine or target cloud environment. The physical deployment of the software is fully automatic, and therefore reliable, because it's preconfigured with the target where the software needs to go; it's configured and tested once, so it should work every time. The idea of using automation tools for deployment falls under a category called continuous deployment.
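
As an illustration of the quick-feedback idea, here is a minimal automated test, written with Python's standard unittest module, that checks the contract a client relies on. The service call is stubbed out as a hypothetical stand-in for a real HTTP request.

```python
# A minimal sketch of an automated test that exercises the contract between
# a client and a service, so a breaking change is caught as soon as it is
# checked in. The service function is a hypothetical stand-in.
import json
import unittest

def get_order(order_id: str) -> str:
    # Stand-in for a call to the deployed order service.
    return json.dumps({"id": order_id, "status": "shipped"})

class OrderContractTest(unittest.TestCase):
    def test_response_contains_expected_fields(self):
        data = json.loads(get_order("1001"))
        # The client relies on these fields, so the test fails fast if a
        # change to the service removes or renames them.
        self.assertIn("id", data)
        self.assertIn("status", data)

if __name__ == "__main__":
    unittest.main()
```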