Practically Designing an API to Be an Asset

Any software you design will be used in ways you did not anticipate, and that is especially true when you are designing APIs, which nowadays usually means Web REST APIs. There are different audiences, or users, for any API. There are the direct API users you know of, the ones defined in the specifications or solution design/architecture documents, such as Web or Mobile clients. Then there are the ones you did not anticipate, which come at later stages. For example, the API might be used to synchronize data with another system or to extract data for migration to another system. Nowadays, APIs are considered assets that will live longer than the projects or systems they are built for. Taking that into account, there are many things that need to be considered when designing an API. Let us assume we are designing an API for an Authors/Books microservice as shown in the diagram below.

 

 

In the Book/Author example, we want to implement a CRUD (Create, Read, Update and Delete) API for the Author and Book entities. The requirements for this API are:

  1. Search and listing functionality to:
    1. Get a list of Authors using various search terms
    2. Get an author's books
    3. Search books by Author or Genre
  2. Add, update, or delete an Author or Book

To implement these requirements and make this API an asset for later use, we take the following steps:

1: Define the Data Types for the API Data Objects

Based on the requirements, we first define the schemas for the Author and Book objects that the API will support.

For each resource the API supports, we define a schema for:

  1. Data Manipulation Object
  2. Full object
  3. Full Object with HATEOAS (Links for operations)

In RAML we can define those objects as follows:

#%RAML 1.0
title: GT Books API Types
types:
  AuthorMod:
    description: This is Author for Data Manipulation
    type: object
    properties:
      Name:
        required: true
        example: Moustafa Refaat
        description: Author Name
        type: string
      Nationality:
        required: true
        example: Canadian
        description: Author Nationality
        type: string
      Date-of-Birth:
        required: true
        example: 2018-12-09
        description: Author Date of Birth
        type: date-only
      Date-of-Death:
        required: false
        example: 2018-12-09
        description: Author Date of Death
        type: date-only
  Author:
    description: This is the Full Author
    type: AuthorMod
    properties:
      Id:
        required: true
        example: 1
        description: Author Id
        type: integer
      Age:
        required: false
        maximum: 200
        minimum: 8
        example: 10
        description: Author Age
        type: integer
  AuthorHateoas:
    description: Author with Hateoas information LINKS
    type: Author
    properties:
      Links:
        required: true
        description: Property description
        type: array
        items:
          required: true
          type: Link
  BookMod:
    description: Book Info for Data Manipulation
    type: object
    properties:
      AuthorId:
        required: true
        example: 1
        description: Author Id
        type: integer
      Name:
        required: true
        example: Example
        description: Book Name
        type: string
      Genre:
        required: true
        example: Example
        description: Book Genre
        type: string
      Stars-Rating:
        required: false
        maximum: 5
        minimum: 0
        example: 1
        description: Book Rating
        type: integer
      ISBN:
        required: true
        example: Example
        description: Book ISBN
        type: string
      PublishDate:
        required: true
        example: 2018-12-09
        description: Book Publish Date
        type: date-only
  Book:
    description: Book Info
    type: BookMod
    properties:
      Id:
        required: true
        example: 1
        description: Book Id
        type: integer
      AuthorName:
        required: true
        example: Moustafa Refaat
        description: Author Name
        type: string
  BookHateoas:
    description: Book Information with Hateoas links
    type: Book
    properties:
      Links:
        required: true
        description: Property description
        type: array
        items:
          required: true
          type: Link
  Link:
    description: Hateoas LINK
    type: object
    properties:
      href:
        required: true
        example: /Book/10
        description: URL Link
        type: string
      rel:
        required: true
        example: GetBook
        description: Operation
        type: string
      method:
        required: true
        example: GET
        description: HTTP Method Get, PUT,..
        type: string
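
The same type hierarchy can also be sketched as plain Java classes on the implementation side. This is only an illustration (the class and field names mirror the RAML types above; getters/setters and the Book types are omitted for brevity), not code generated from the RAML:

import java.time.LocalDate;
import java.util.List;

// Data-manipulation view of an Author: the fields a client may create or update
class AuthorMod {
    public String name;
    public String nationality;
    public LocalDate dateOfBirth;
    public LocalDate dateOfDeath; // optional
}

// Full Author: adds the server-assigned and derived fields
class Author extends AuthorMod {
    public int id;
    public Integer age; // optional, 8..200 in the RAML
}

// HATEOAS link as defined by the Link type in the RAML
class Link {
    public String href;   // e.g. /Book/10
    public String rel;    // e.g. GetBook
    public String method; // e.g. GET
}

// Full Author plus the links a HATEOAS (level three) response carries
class AuthorHateoas extends Author {
    public List<Link> links;
}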

 

2: Define URL Resources for the API

For each resource, define and implement all the HTTP methods (Get, Post, Put, Delete, Patch, Head, Options) even if you are not going to use them now. You can make the implementation return a "403 Forbidden" message to the client to indicate that the operation is not supported (a controller sketch follows the list below). For our example we will have:

  • /Authors
    • Get: Search/list Authors; returns a body with the list of authors matching the search criteria and headers containing the paging information
    • Post: Creates a new Author; returns the created author
    • Put: Not supported, returns a "403 Forbidden" error
    • Delete: Not supported, returns a "403 Forbidden" error
    • Patch: Not supported, returns a "403 Forbidden" error
    • Head: Returns the same as Get with an empty body
    • Options: Returns the methods supported by this resource: "Get, Post, Head, Options"
  • /Authors/{id}
    • Get: Returns the author with the supplied ID
    • Post: Not supported, returns a "403 Forbidden" error
    • Put: Updates the author and returns the updated author
    • Delete: Deletes the author
    • Patch: Not supported, returns a "403 Forbidden" error
    • Head: Not supported, returns a "403 Forbidden" error
    • Options: Returns "Get, Put, Delete, Options"
  • /Authors/{id}/Books
    • Get: Search/list books for the author; returns a body with the list of books matching the search criteria and headers containing the paging information
    • Post: Creates a new book for the author with the supplied ID; returns the created book
    • Put: Not supported, returns a "403 Forbidden" error
    • Delete: Not supported, returns a "403 Forbidden" error
    • Patch: Not supported, returns a "403 Forbidden" error
    • Head: Returns the same as Get with an empty body
    • Options: Returns "Get, Post, Head, Options"
  • /Books
    • Get: Search/list books; returns a body with the list of books matching the search criteria and headers containing the paging information
    • Post: Creates a new book for an author; returns the created book
    • Put: Not supported, returns a "403 Forbidden" error
    • Delete: Not supported, returns a "403 Forbidden" error
    • Patch: Not supported, returns a "403 Forbidden" error
    • Head: Returns the same as Get with an empty body
    • Options: Returns "Get, Post, Head, Options"
  • /Books/{id}
    • Get: Returns the book with the supplied ID
    • Post: Not supported, returns a "403 Forbidden" error
    • Put: Updates the book and returns the updated book
    • Delete: Deletes the book
    • Patch: Not supported, returns a "403 Forbidden" error
    • Head: Not supported, returns a "403 Forbidden" error
    • Options: Returns "Get, Put, Delete, Options"
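
To make the "implement every method" rule concrete, here is a minimal sketch of what the /Authors collection resource could look like as a Spring Boot controller (Spring Boot is one of the frameworks compared later on this page). It reuses the Author and AuthorMod classes sketched earlier; the paging headers and placeholder bodies are assumptions for illustration, not part of the RAML:

import java.util.ArrayList;
import java.util.List;
import org.springframework.http.HttpMethod;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/Authors")
public class AuthorsController {

    // Get: search/list authors; paging information is returned in response headers
    @GetMapping
    public ResponseEntity<List<Author>> search(@RequestParam(required = false) String name,
                                               @RequestParam(defaultValue = "1") int page,
                                               @RequestParam(defaultValue = "10") int pageSize) {
        List<Author> authors = new ArrayList<>(); // placeholder: a real implementation would query storage
        return ResponseEntity.ok()
                .header("X-Page", String.valueOf(page))
                .header("X-Page-Size", String.valueOf(pageSize))
                .body(authors);
    }

    // Post: create a new author and return the created author
    @PostMapping
    public ResponseEntity<Author> create(@RequestBody AuthorMod newAuthor) {
        Author created = new Author(); // placeholder: a real implementation would persist and assign an Id
        return ResponseEntity.status(HttpStatus.CREATED).body(created);
    }

    // Put, Delete and Patch on the collection are declared but not supported, per the table above
    @RequestMapping(method = {RequestMethod.PUT, RequestMethod.DELETE, RequestMethod.PATCH})
    public ResponseEntity<Void> notSupported() {
        return ResponseEntity.status(HttpStatus.FORBIDDEN).build();
    }

    // Options: advertise the methods this resource supports
    @RequestMapping(method = RequestMethod.OPTIONS)
    public ResponseEntity<Void> options() {
        return ResponseEntity.ok()
                .allow(HttpMethod.GET, HttpMethod.POST, HttpMethod.HEAD, HttpMethod.OPTIONS)
                .build();
    }
}

Spring serves Head automatically for any Get mapping, and the Options handler advertises exactly the verb list from the table above.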

 

To be continued…


MuleSoft: Understanding Exception Handling

I have been approached by several developers taking the Anypoint Platform Development: Fundamentals (Mule 4) training about exception handling and the different scenarios in Module 10. The way it is described can be confusing, so here is how I understand it. Take any MuleSoft flow like the one below.

How would this be executed? Anypoint Studio and the Mule runtime convert it into Java byte code, so the flow is effectively generated as a method or function in Java. Mule puts the code for this function within a try-catch scope. If you have defined exception handlers in the error handling module, it emits code to catch and handle those exceptions. If you have not, and there is a global error handler, it emits catch blocks for the global error handler's cases. Here is the thing that catches developers by surprise: if you have defined local handlers, those cases are the only ones that will be handled, not the combination of the local cases and the global error handler cases. Only one of the two applies: if a local error handler is defined, that is it; if not, the global error handler is emitted as the catch cases.

The second point is the On Error Propagate and On Error Continue options. If you choose On Error Propagate, the emitted code re-throws the caught exception at the end of each catch block. If you choose On Error Continue, the exception is not re-thrown. Think of it as the code written below; if you have been a Java, C#, C++, or Python developer, you should recognize these basic programming concepts.


import java.io.EOFException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ExceptionHandlingDemo {

    // The whole flow body ends up inside one try-catch, like a Mule flow with an error handler
    public void mainMethod() {
        try {
            onErrorPropagate(); // re-throws, so execution jumps to the catch block below
            onErrorContinue();  // would not re-throw even if it were reached
        } catch (Exception e) {
            // default handler, or the global error handler if one is defined
        }
    }

    // On Error Propagate: log the exception, then re-throw it to the caller
    public void onErrorPropagate() throws EOFException {
        try {
            throw new EOFException();
        } catch (EOFException e) {
            Logger.getLogger("ExceptionHandler").log(Level.SEVERE, "Error", e.toString());
            throw e;
        }
    }

    // On Error Continue: log the exception and swallow it; the caller never sees it
    public void onErrorContinue() {
        try {
            throw new EOFException();
        } catch (EOFException e) {
            Logger.getLogger("ExceptionHandler").log(Level.SEVERE, "Error", e.toString());
        }
    }
}
Hope this helps

MuleSoft: MCD Level 1 Mule 4 Certification Experience

OK, I passed the exam. How was it? I cannot tell you what the questions were, and frankly I do not quite remember them, but here are my thoughts about the exam.

  1. You must go through the training course (which is free online) and do all the exercises, as there are concepts that are not in the slides or materials. Also do the DIY exercises; I personally did them maybe 4 or 5 times. I took the 3.8 online free course as well, though that might confuse you, as there are changes between 3.8 and 4.0, like flowVars, which is gone completely.
  2. I would recommend going quickly over the Mule runtime documentation and just trying to build a practical example.
  3. An understanding of Java and Spring (either the MVC or Boot framework) would help, but it is not necessary; it helps you understand how flows are translated into Spring code and then compiled.
  4. If you have time on your hands, go to GitHub and download the free samples from MuleSoft and the open source code. Yes, that is overkill.
  5. Now, how about the questions? They are mostly tricky questions; most of the answers are extremely similar. You have to consider them thoroughly and find the best answer. Many differ only by a minor colon or semicolon, very low-level syntax details, so if you have special glasses for the computer, use them!
  6. If you are taking the exam from home, make sure you turn off your cell phone, home phone, etc., and do not speak to yourself. And yes, you will not be able to use your big screen TV; you will be limited to the laptop screen.
  7. Hope this helps. I wish I could say it was easy; I finished in almost half the time, with reviews, but I was really annoyed by the questions, as every question is just trying to trick you.

MuleSoft: Tricky Question in the DataWeave Training Quiz

I am working on MuleSoft certification and taking the MuleSoft 4.1 Fundamentals course. An interesting question in the module quiz is as below.

Refer to the exhibit. What is valid DataWeave code to transform the input JSON payload to the output XML payload?

Answers

A, B, C and D (each answer option was shown in the exhibit as a DataWeave code snippet).
So here we need XML attributes, and the "@" prefix is required to define attributes; this disqualifies options B and D. Now A and C look very similar, except that C uses a ';' (yep, a semicolon), which is invalid, so the only correct answer is A.

The interesting thing about this question is the small hidden trick. Maybe that is the kind of question we should expect in the certification exam?

MuleSoft: File:List Interesting Observation

Working with the MuleSoft File connector, I was expecting that the File > List (https://docs.mulesoft.com/connectors/file/file-list) operation would return a list of fileInfo objects (path, size, etc.), but it actually returns a list of the contents of the files in the directory. This seemed odd to me, as the documentation states:


“The List operation returns a List of Messages, where each message represents any file or folder found within the Directory Path (directoryPath). By default, the operation does not read or list files or folders within any sub-folders of directoryPath.
To list files or folders within any sub-folders, you can set the recursive parameter to


https://docs.mulesoft.com/connectors/file/file-list

Here is the sample I was working with:

I was intending to put a File Read operation inside the For Each scope; however, that just gave me an error.

Here is a sample of the logged messages:

That was a head-scratcher. I thought I had made some mistake in the List parameters, but it seems that is how the File connector's List operation works. Below you will see that in part of the message for each file, the typeAttributes hold the fileInfo information.

What are the Downsides of a Microservice-Based Solution?

A Microservice-based solution has the following downsides:

  1. Distributing the application adds complexity for developers when they are designing and building the services and in testing and exception handling. It also adds latency to the system.
  2. Without a Microservice-oriented infrastructure, an application that has dozens of Microservice types and needs high scalability means a high degree of deployment complexity for IT operations and management.
  3. Atomic transactions between multiple Microservices usually are not possible. The business requirements must embrace eventual consistency between multiple Microservices.
  4. Increased global resource needs (total memory, drives, and network resources for all the servers or hosts). The higher degree of granularity and distributed services requires more global resources. However, given the low cost of resources in general and the benefit of being able to scale out just certain areas of the application compared to long-term costs when evolving monolithic applications, the increased use of resources is usually a good tradeoff for large, long-term applications.
  5. When the application is large, with dozens of Microservices, there are challenges and limitations if the application requires direct client-to-Microservice communications. When designing and building a complex application based on Microservices, you might consider the use of multiple API Gateways instead of the simpler direct client‑to‑Microservice communication approach.
  6. Deciding how to partition an end-to-end application into multiple Microservices is challenging. You need to identify areas of the application that are decoupled from the other areas and that have a low number of hard dependencies. Ideally, each service should have only a small set of responsibilities. This is similar to the single responsibility principle (SRP) applied to classes, which states that a class should only have one reason to change.

 

 

Implement a REST API in MuleSoft, Azure Logic Apps, ASP.NET Core or Spring Boot? MuleSoft: Step 1 - Defining an API in RAML

I have been working lately on comparing different technologies for building web APIs. One of the main concerns was: if we wanted to build a simple API service, which technology would be easier and more productive to develop the service with? To provide a reference comparison, I will build the same web service in MuleSoft, Azure Logic Apps, ASP.NET Core and Spring Boot, and provide my notes as I go. The web service will provide the following functionality:

  1. CRUD operations on an Authors entity
  2. CRUD operations on Books entities, where books are related to authors

All the read operations (queries) should support:

  1. Filtering
  2. Searching
  3. Paging
  4. Levels Two and Three of the Richardson Maturity Model (see my previous post https://moustafarefaat.wordpress.com/2018/12/11/practical-rest-api-design-implementation-and-richardson-maturity-model/). This means that, based on the Accept header of the request, the results are returned as either:
    1. Pure JSON
    2. JSON with HATEOAS links

I will start with the MuleSoft implementation.

Step 1. Define the API in RAML

With MuleSoft you get the Anypoint Platform portal and Design Center, which help you design the API RAML. There is an API Designer visual editor which can help you in the beginning.

 

 

 

Though it has many weaknesses, such as:

  1. Once you switch to the RAML editor you cannot go back.
  2. You cannot define your own media types; you have to choose from the list.

 

To finalize the API definition in RAML I had to edit it manually, though the editor helped me get started. Below is a fragment of the API in RAML (the full solution will be published on my GitHub: https://github.com/RefaatM ).

Notice in the RAML that I have defined two response body types for the Get operation of the Authors resource. The full RAML is at https://github.com/RefaatM/MuleSoftRestAPIExample/tree/master/src/main/resources/api

#%RAML 1.0
title: GTBooks
description: |
  GTBooks Example
version: '1.0'
mediaType:
  - application/json
  - application/xml
protocols:
  - HTTP
baseUri: /api/v1.0

types:
  CreateAuthor:
    description: This is a new DataType
    type: object
    properties:
      Name:
        required: true
        example: Moustafa Refaat
        description: Author Name
        type: string
      Nationality:
        required: true
        example: Canadian
        description: Author Nationality
        type: string
      Date-of-Birth:
        required: true
        example: '2018-12-09'
        description: Author Date of Birth
        type: date-only
      Date-of-Death:
        required: false
        example: '2018-12-09'
        description: Author Date of Death
        type: date-only
  Author:
    description: This is a new DataType
    type: CreateAuthor
    properties:
      Id:
        required: true
        example: 1
        description: Author Id
        type: integer
      Age:
        required: true
        maximum: 200
        minimum: 8
        example: 10
        description: Author Age
        type: integer
  AuthorHateoas:
    description: Author with Hateoas information LINKS
    type: Author
    properties:
      Links:
        required: true
        description: Property description
        type: array
        items:
          required: true
          type: Link
  Link:
    description: Hateoas LINK
    type: object
    properties:
      href:
        required: true
        example: /Book/10
        description: URL Link
        type: string
      rel:
        required: true
        example: GetBook
        description: Operation
        type: string
      method:
        required: true
        example: GET
        description: 'HTTP Method Get, PUT,..'
        type: string
/author:
  get:
    responses:
      '200':
        body:
          application/json:
            type: array
            items:
              type: Author
          application/hateoas+json:
            type: array
            items:
              type: AuthorHateoas
      '304': {}
      '400': {}
      '500': {}
    headers:
      Accept:
        example: application/json
        description: application/json or application/hateoas+json
        type: string
    queryParameters:
      sort-by:
        required: false
        example: Example
        description: sort by
        type: string
      filteryby:
        required: false
        example: Example
        description: Property description
        type: string
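
Assuming this API is deployed locally under /api/v1.0, a client chooses between the two representations defined above purely through the Accept header. A quick sketch using Java's built-in HTTP client (the host and port are made up for illustration):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AuthorClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Plain JSON list of authors
        HttpRequest plain = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/api/v1.0/author"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // Same resource, but asking for the HATEOAS representation
        HttpRequest withLinks = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/api/v1.0/author?sort-by=Name"))
                .header("Accept", "application/hateoas+json")
                .GET()
                .build();

        System.out.println(client.send(plain, HttpResponse.BodyHandlers.ofString()).body());
        System.out.println(client.send(withLinks, HttpResponse.BodyHandlers.ofString()).body());
    }
}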

 

(.. to be continued)

Practical REST API Design, Implementation and the Richardson Maturity Model

The Richardson Maturity Model classifies REST API maturity as follows:

  • Level Zero: These services have a single URI and use a single HTTP method (typically POST). For example, most Web Services (WS-*)-based services use a single URI to identify an endpoint, and HTTP POST to transfer SOAP-based payloads, effectively ignoring the rest of the HTTP verbs. The same goes for XML-RPC based services, which send data as Plain Old XML (POX). These are the most primitive ways of building SOA applications, with a single POST method and XML to communicate between services.
  • Level One: These services employ many URIs but only a single HTTP verb, generally HTTP POST. They give each individual resource in their universe a URI. Every resource is separately identified by a unique URI, and that makes them better than level zero.
  • Level Two: Level two services host numerous URI-addressable resources. Such services support several of the HTTP verbs on each exposed resource, providing Create, Read, Update and Delete (CRUD) services. Here the state of resources, typically representing business entities, can be manipulated over the network. The service designer expects people to put some effort into mastering the APIs, generally by reading the supplied documentation. Level Two is the good use case of REST principles, which advocate using different verbs based on the HTTP request method, and the system can have multiple resources.
  • Level Three: Level three of maturity makes use of URIs, HTTP and HATEOAS. This is the most mature level of Richardson's model; it encourages easy discoverability and makes it easy for responses to be self-explanatory by using HATEOAS. The service leads consumers through a trail of resources, causing application state transitions as a result.

Where HATEOAS (Hypermedia as the Engine of Application State) is a constraint of the REST application architecture that keeps the RESTful style architecture unique from most other network application architectures. The term “hypermedia” refers to any content that contains links to other forms of media such as images, movies, and text. This architectural style lets you use hypermedia links in the response contents so that the client can dynamically navigate to the appropriate resource by traversing the hypermedia links. This is conceptually the same as a web user navigating through web pages by clicking the appropriate hyperlinks to achieve a final goal. Like a human’s interaction with a website, a REST client hits an initial API URI and uses the server-provided links to dynamically discover available actions and access the resources it needs. The client need not have prior knowledge of the service or the different steps involved in a workflow. Additionally, the clients no longer have to hard code the URI structures for different resources. This allows the server to make URI changes as the API evolves without breaking the clients.
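
To make that concrete, here is a small illustrative sketch (plain Java, with made-up rel names) of how a server might build the links it returns alongside an author resource, so that a client can discover the follow-up operations instead of hard-coding URIs:

import java.util.ArrayList;
import java.util.List;

// A single hypermedia link: where to go, what it means, and which HTTP method to use
class HateoasLink {
    final String href;
    final String rel;
    final String method;

    HateoasLink(String href, String rel, String method) {
        this.href = href;
        this.rel = rel;
        this.method = method;
    }
}

public class LinkBuilder {

    // Build the links the server would embed alongside an author representation.
    // The URI layout mirrors the /Authors/{id} design described earlier on this page.
    static List<HateoasLink> linksForAuthor(int authorId) {
        String base = "/Authors/" + authorId;
        List<HateoasLink> links = new ArrayList<>();
        links.add(new HateoasLink(base, "self", "GET"));
        links.add(new HateoasLink(base, "update_author", "PUT"));
        links.add(new HateoasLink(base, "delete_author", "DELETE"));
        links.add(new HateoasLink(base + "/Books", "get_author_books", "GET"));
        links.add(new HateoasLink(base + "/Books", "create_book_for_author", "POST"));
        return links;
    }

    public static void main(String[] args) {
        for (HateoasLink link : linksForAuthor(10)) {
            System.out.println(link.method + " " + link.href + " (" + link.rel + ")");
        }
    }
}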

Naturally you would want to build to the highest standard and provide a Level Three REST API. That would mean providing a links field, as in the following example from the GT-IDStorm API.

As you can see from this sample, the payload is huge compared to the actual data returned.

{
  "value": [
    {
      "id": "63b2c70e-2bcb-4335-9961-3d14be642163",
      "name": "Entity-1",
      "description": "Testing Entity 1",
      "links": [
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163",
          "rel": "self",
          "method": "GET"
        },
        {
          "href": null,
          "rel": "get_entitydefinition_byname",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/full",
          "rel": "get_full_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163",
          "rel": "delete_entitydefinition",
          "method": "DELETE"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/attributes",
          "rel": "create_attribute_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/attributes",
          "rel": "get_attributes_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/systems",
          "rel": "create_system_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/systems",
          "rel": "get_system_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/data",
          "rel": "get_data_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/63b2c70e-2bcb-4335-9961-3d14be642163/data/GetEntityDataWithMissingSystems",
          "rel": "get_data_WithMissingSystems_for_entitydefinition",
          "method": "GET"
        }
      ]
    },
    {
      "id": "54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
      "name": "Entity-10",
      "description": "Testing Entity 10",
      "links": [
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
          "rel": "self",
          "method": "GET"
        },
        {
          "href": null,
          "rel": "get_entitydefinition_byname",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/full",
          "rel": "get_full_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
          "rel": "delete_entitydefinition",
          "method": "DELETE"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/attributes",
          "rel": "create_attribute_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/attributes",
          "rel": "get_attributes_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/systems",
          "rel": "create_system_for_entitydefinition",
          "method": "POST"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/systems",
          "rel": "get_system_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/data",
          "rel": "get_data_for_entitydefinition",
          "method": "GET"
        },
        {
          "href": "https://localhost:44379/api/v1/entity/54bc1f18-0fd5-43dd-9309-4d8659e3aa91/data/GetEntityDataWithMissingSystems",
          "rel": "get_data_WithMissingSystems_for_entitydefinition",
          "method": "GET"
        }
      ]
    }
  ],
  "links": [
    {
      "href": "https://localhost:44379/api/v1/entity?orderBy=Name&searchQuery=Testing%20Entity%201&pageNumber=1&pageSize=10",
      "rel": "self",
      "method": "GET"
    }
  ]
}

For example, if we remove the HATEOAS requirement, the data returned for the same query would be as shown below.

This reduction in data would have a huge impact on the performance of the system as a whole: less traffic on the network and less data for the clients and servers to process and manipulate.

[
  {
    "id": "63b2c70e-2bcb-4335-9961-3d14be642163",
    "name": "Entity-1",
    "description": "Testing Entity 1"
  },
  {
    "id": "54bc1f18-0fd5-43dd-9309-4d8659e3aa91",
    "name": "Entity-10",
    "description": "Testing Entity 10"
  }
]

I usually implement the API to accept an Accept header with multiple options:

  • application/json: returns just the data
  • application/hateoas+json: returns the data with the HATEOAS (links) data.

I also implement another resource or operation that provides the link structures.
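
A hedged sketch of how that Accept-header switch might be wired up in a Spring-style controller (the application/hateoas+json media type name and the entity fields follow the examples above; the class itself is illustrative only):

import java.util.List;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EntityController {

    // Accept: application/json -> plain data only
    @GetMapping(value = "/api/v1/entity/{id}", produces = "application/json")
    public Map<String, Object> getEntity(@PathVariable String id) {
        return Map.<String, Object>of("id", id, "name", "Entity-1", "description", "Testing Entity 1");
    }

    // Accept: application/hateoas+json -> same data plus the links block
    @GetMapping(value = "/api/v1/entity/{id}", produces = "application/hateoas+json")
    public Map<String, Object> getEntityWithLinks(@PathVariable String id) {
        List<Map<String, String>> links = List.of(
                Map.of("href", "/api/v1/entity/" + id, "rel", "self", "method", "GET"),
                Map.of("href", "/api/v1/entity/" + id, "rel", "delete_entitydefinition", "method", "DELETE"));
        return Map.<String, Object>of("id", id, "name", "Entity-1",
                "description", "Testing Entity 1", "links", links);
    }
}

Spring picks the handler whose produces value matches the request's Accept header, so the same URI serves both the plain and the HATEOAS representations.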

In Conclusion

I would recommend implementing the API to:

  • Support Level Two and Level Three at the same time by using the Accept header of the request:
    • application/json: returns just the data
    • application/hateoas+json: returns the data with the HATEOAS (links) data.
  • Implement another resource, or the root, that returns the URLs (structures and operations) supported by the API.

I found that supporting only HATEOAS makes the system pay a heavy price in performance, especially with large data loads, while very few clients, if any, utilize the links returned. I would love to hear your thoughts and experience on APIs with HATEOAS.

What is Solution Architecture?

In my opinion, Solution Architecture is defining the design or organization of different systems and components to fulfill business needs and requirements. By systems and components, I mean software applications and the infrastructure (hardware devices, virtual machines, on-premises or in the cloud) the software runs on. Solution architecture is the combination of Application Architecture, Integration Architecture and Infrastructure Architecture; it provides a high-level description of all Application, Integration and Infrastructure architectures in an implementation. For example, with any new application implementation in the enterprise, that new system needs definitions of:

  • Application architecture: which components of the application would be utilized and how they will be used by the organization, which components would need customization, and how the system would be configured to accomplish the business goals.
  • Integration with other systems: at a minimum, the new application would need to connect to the enterprise identity management store to provision and manage user access, export information to the enterprise data warehouse, big data system (such as Hadoop) or analytics systems, and send diagnostics and monitoring data to the enterprise monitoring system.
  • Infrastructure: an environment to run on, whether on premises or in the cloud; that is, servers, operating systems, networks, firewalls, storage, security and access control, auditing, etc.

A solution architect needs to understand the business requirements and the enterprise IT standards, work on the selection of the appropriate software application to satisfy the business requirements, enterprise standards and project budget, and author a solution blueprint that describes the proposed architecture. Solution architecture is tightly coupled with the Enterprise Architecture and Infrastructure Architecture teams in most organizations. Solution architecture is one of the key methods by which enterprise architecture delivers value to the organization. Solution architecture activities take place during solution ideation, solution design, and solution implementation. During ideation, solution architecture establishes the complete business context for the solution and defines the vision and requirements for the solution. During design, solution architecture elaborates potential options, which may include RFIs, RFPs or prototype development; it selects the most optimal option and develops the roadmap for the selected solution. During implementation, solution architecture communicates the architecture to the stakeholders and guides the implementation team.

Typically, a Solution Architect will work with project managers on defining tasks and estimates, discuss architecturally significant requirements with stakeholders, design a solution architecture blueprint, evaluate proposals, communicate with designers and stakeholders, and document solution alternatives. There are five core activities in solution architecture design.

  1. Architectural analysis is the process of understanding the environment in which a proposed solution will operate and determining the requirements for the solution. The input or requirements to the analysis activity can include items such as:
    1. what the system will do when operational (the functional requirements)
    2. how well the system will meet runtime non-functional requirements such as reliability, operability, performance efficiency, security, and compatibility
    3. development-time non-functional requirements such as maintainability and transferability
    4. business requirements and environmental contexts of a system that may change over time, such as legal, social, financial, competitive, and technology concerns
    5. Organizational development capabilities
  2. Architectural synthesis or design is the process of creating an architecture. Given the architecturally significant requirements determined by the analysis, the current state of the design and the results of any evaluation activities, the design is created and improved.
  3. Architecture evaluation is the process of determining how well the current design or a portion of it satisfies the requirements derived during analysis. An evaluation can occur whenever an architect is considering a design decision, it can occur after some portion of the design has been completed, it can occur after the final design has been completed or it can occur after the system has been constructed.
  4. Knowledge management and communication is the activity of exploring and managing knowledge that is essential to designing a solution architecture. A solution architect does not work in isolation: they get inputs (functional and non-functional requirements and design contexts) from various stakeholders and provide outputs to stakeholders. Solution architecture knowledge management is about finding, communicating, and retaining knowledge. As software architecture design issues are intricate and interdependent, a knowledge gap in design reasoning can lead to an incorrect software architecture design.
  5. Documentation is the activity of recording the design generated during the solution architecture process. A solution architecture is described in a solution blueprint that usually contains
    1. Business Architecture section: The business strategy, governance, organization, and key business processes
    2. Interface Architecture (APIs): A blueprint for the individual application interfaces, both User Interface (UI) and Application Programming Interface (API), and their relationships to the core business processes of the organization.
    3. Application Architecture section: A blueprint for the individual application components to be deployed, their interactions, and their relationships to the core business processes of the organization.
    4. Data Architecture Section: The structure of the application logical and physical data assets and data management resources
    5. Technology Architecture Section: The logical software and hardware capabilities that are required to support the deployment of business, data, and application services. This includes IT infrastructure, middleware, networks, communications, processing, and standards.
    6. Delivery Approach Section: Describes the approach to solution delivery: the phases, whether there will be a POC or a pilot implementation, who will be doing what, and the responsibilities of the various internal and external teams.

What do we mean by Architecture?

Architecture is defined as "the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution." An architecture has two meanings depending upon the context:

  1. A formal description of a system, or a detailed plan of the system at a component level to guide its implementation.
  2. The structure of components, their inter-relationships, and the principles and guidelines governing their design and evolution over time

Microservices Interview Questions: 90 Technical Questions with Answers

I just got this book published on Amazon Kindle; check it out at https://www.amazon.ca/dp/B07KMD77YB/ref=sr_1_4?ie=UTF8&qid=1542400692&sr=8-4&keywords=microservices+interview+questions

Wisdom is learning all we can but having the humility to realize that we do not know it all. Microservices Interview Questions: 90 technical questions with clear and concise answers will help you gain more wisdom for Microservices interviews. The difference between a great Microservices consultant and someone who kind of knows some stuff is how you answer the interview questions in a way that shows how knowledgeable you are. The 90 questions I have assembled are for job seekers (junior/senior developers, architects, team/technical leads) and interviewers.

Microservices Interview Questions are grouped into:

  • General Questions
  • Design Patterns Questions
  • API Design Questions
  • Containers and Orchestrations Questions.

Increase your earning potential by learning, applying and succeeding. Learn the fundamentals of Microservices-based application architecture in an easy-to-understand question-and-answer approach. The book covers 90 realistic interview questions with answers that will impress your interviewer, and serves as a quick reference guide, a refresher and a roadmap covering a wide range of microservices architecture topics and interview tips.

Sample Questions

  1. Why a Microservices architecture?


Microservices Architecture provides long-term agility. Microservices enable better maintainability in complex, large, and highly-scalable systems by letting you create applications based on many independently deployable services that each have granular and autonomous lifecycles. And Microservices can scale out independently.


Instead of having a single monolithic application that you must scale out as a unit, you can instead scale out specific Microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that do not need to be scaled. That means cost savings. Microservices approach allows agile changes and rapid iteration of each Microservice. Architecting fine-grained Microservices-based applications enables continuous integration and continuous delivery practices. It also accelerates delivery of new functions into the application. Fine-grained composition of applications also allows you to run and test Microservices in isolation, and to evolve them autonomously while maintaining clear contracts between them. As long as you do not change the interfaces or contracts of a Microservice, you can change the internal implementation of any Microservice or add new functionality without breaking other Microservices that use it.

  2. What is Eventual Consistency?

Eventual consistency is an approach which allows you to implement data consistency within a Microservices architecture. It focuses on the idea that the data within your system will eventually be consistent; it does not have to be immediately consistent. For example, in an e-commerce system, when a customer places an order, do you really need to immediately carry out all the transactions (stock availability, charging the customer's credit card, etc.)? There are certain data updates that can be eventually consistent, in line with the initial transaction that was triggered. This approach is based on the BASE model (Basic Availability, Soft state, and Eventual consistency). Data updates can be more relaxed and do not always have to be applied immediately; slightly stale data that gives approximate answers is okay sometimes. The BASE model contrasts with the ACID model, where all data related to the transaction must be updated immediately as part of the transaction. The system becomes more responsive because certain updates are done in the background and not as part of the immediate transaction. The eventual consistency approach is highly useful for long-running tasks. One thing to note about the eventual consistency approach is that, depending on the patterns you use, the actual time it takes for the data to become consistent will not be days, hours or minutes; it will potentially be seconds. Eventual data consistency across your Microservices architecture that happens within seconds is acceptable because of the gains you get in terms of performance and responsiveness across your system. Eventual consistency using the right patterns can be almost immediate, and preparing for inconsistencies and dealing with race conditions might not actually be such a huge task. The traditional approach to eventual consistency has involved using data replication. Another approach to eventual consistency is event-based, which works by raising events as part of transactions and actions in an asynchronous fashion, as messages are placed on message brokers and queues.
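
As a toy illustration of the event-based flavour (plain Java, with an in-memory queue standing in for a real message broker, and all names made up): the order side commits its own work and publishes an event immediately, while a separate consumer brings the stock view into line a moment later.

import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class EventualConsistencyDemo {

    record OrderPlaced(String orderId, String sku, int quantity) {}

    // Stands in for a message broker between the Order and Stock microservices
    static final BlockingQueue<OrderPlaced> broker = new LinkedBlockingQueue<>();

    // The Stock service's own view of the data, updated asynchronously
    static final Map<String, Integer> stockLevels = new ConcurrentHashMap<>(Map.of("book-123", 10));

    public static void main(String[] args) throws InterruptedException {
        // Background consumer: eventually applies the stock change
        Thread stockService = new Thread(() -> {
            try {
                while (true) {
                    OrderPlaced event = broker.take();
                    stockLevels.merge(event.sku(), -event.quantity(), Integer::sum);
                    System.out.println("Stock service updated " + event.sku() + " -> " + stockLevels.get(event.sku()));
                }
            } catch (InterruptedException ignored) { }
        });
        stockService.setDaemon(true);
        stockService.start();

        // Order service: commit locally, publish the event, return immediately
        System.out.println("Order accepted, stock still reads " + stockLevels.get("book-123"));
        broker.put(new OrderPlaced("order-1", "book-123", 2));

        Thread.sleep(200); // give the consumer a moment; the two views converge shortly after
        System.out.println("A moment later, stock reads " + stockLevels.get("book-123"));
    }
}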

  3. How to approach REST API design?

    1. First, focus on the business entities that the web API exposes and the kinds of CRUD operations needed. Creating a new entity record can be achieved by sending an HTTP POST request that contains the entity information. The HTTP response indicates whether the record was created successfully or not. When possible, resource URIs should be based on nouns, not on verbs describing the operations on the resource. A resource does not have to be based on a single physical data item.
    2. Avoid creating APIs that simply mirror the internal structure of a database. The purpose of REST is to model entities and the operations that an application can perform on those entities. A client should not be exposed to the internal implementation.
    3. Entities are often grouped together into collections (orders, customers). A collection is a separate resource from the item within the collection, and should have its own URI.
    4. Sending an HTTP GET request to the collection URI retrieves a list of items in the collection. Each item in the collection also has its own unique URI. An HTTP GET request to the item’s URI returns the details of that item.
    5. Adopt a consistent naming convention in URIs. In general, it helps to use plural nouns for URIs that reference collections. It’s a good practice to organize URIs for collections and items into a hierarchy.
    6. You should provide navigable links to associated resources in the body of the HTTP response message. Avoid requiring resource URIs more complex than collection/item/collection.
    7. Try to keep URIs relatively simple. Once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource.
    8. Try to avoid “chatty” web APIs that expose many small resources. Such an API may require a client application to send multiple requests to find all of the data that it requires. Instead, you might want to denormalize the data and combine related information into bigger resources that can be retrieved with a single request. However, you need to balance this approach against the overhead of fetching data that the client doesn’t need. Retrieving large objects can increase the latency of a request and incur additional bandwidth costs.
    9. Avoid introducing dependencies between the web API and the underlying data sources. For example, if your data is stored in a relational database, the web API doesn’t need to expose each table as a collection of resources.

It might not be possible to map every operation implemented by a web API to a specific resource. You can handle such non-resource scenarios through HTTP requests that invoke a function and return the results as an HTTP response message. For example, a web API that starts some operation, such as running validations, could provide URIs that expose these operations as pseudo-resources and use the query string to specify the required parameters, as in the sketch below.
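
A brief sketch tying several of these guidelines together: plural collection nouns, item URIs under the collection, nesting kept no deeper than collection/item/collection, and a pseudo-resource for a non-CRUD operation. The paths, class name and Spring usage here are illustrative only, not prescribed by the guidelines above.

import java.util.List;
import java.util.Map;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/customers")
public class CustomersController {

    // Collection URI: GET /customers returns a list
    @GetMapping
    public List<Map<String, Object>> listCustomers() {
        return List.of(Map.<String, Object>of("id", 1, "name", "Example Customer"));
    }

    // Item URI: GET /customers/{id} returns one item
    @GetMapping("/{id}")
    public Map<String, Object> getCustomer(@PathVariable int id) {
        return Map.<String, Object>of("id", id, "name", "Example Customer");
    }

    // Nested collection, kept no deeper than collection/item/collection
    @GetMapping("/{id}/orders")
    public List<Map<String, Object>> getCustomerOrders(@PathVariable int id) {
        return List.of(Map.<String, Object>of("orderId", 100, "customerId", id));
    }

    // Non-resource operation exposed as a pseudo-resource, with parameters in the query string
    @PostMapping("/{id}/validations")
    public Map<String, Object> runValidations(@PathVariable int id,
                                              @RequestParam(defaultValue = "basic") String level) {
        return Map.<String, Object>of("customerId", id, "level", level, "status", "passed");
    }
}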