MuleSoft: Coding Standards

Establishing coding standards is essential for the successful implementation of a program. The smooth functioning of software is vital to the success of most organizations. A coding standard is a set of guidelines for a specific programming language that determines the programming style, procedures, and methods for various aspects of programs written in that language. A coding standard ensures that all developers writing code in a language follow the specified guidelines. This makes the code easy to understand and provides consistency, so the completed source code reads as if it were written by a single developer in a single session. In the following sections I provide a sample coding standard for MuleSoft code. Please let me know your thoughts by email or through the comments.

Guiding principles

This section contains some general rules and guidance that integration development should follow. Any deviations from the standard practices must be discussed with, and validated by, the technical lead on the project. This list is not intended to be exhaustive and may be supplemented during the life of the project:

    • Client first: The code must meet requirements. The solution must be cost-effective.
    • The code must be as readable as possible.
    • The code must be as simple as possible.
    • The code should reasonably isolate code that can be reused.
    • Use common design patterns where applicable.
    • Reuse a library instead of rolling your own solution to an already-solved problem.
    • Do ask if you are unsure of anything.
    • Do ensure that any modifications to the design or architecture are thought through, well designed, and conform to n-tier architecture design principles.
    • Do reach out to authors of work items if alternative approaches exist for a given requirement, or if you have any concerns about any assigned work items, e.g. missing acceptance criteria.
    • Do avoid duplication of code.
    • Do add any objects that have been modified to version control as soon as possible.
    • Do alert responsible team members as to any issues or defects that you may discover while executing unrelated work items.
    • Don’t add code to troubleshoot or rectify a defect in any environment other than the development environment.


  • All Mule elements supporting the “name” attribute for object reference should use camel case starting with a lowercase letter.
  • For Mule elements that support the “Notes” section, write comments describing the purpose, functionality, etc., as you would write comments in a Java/Scala/C#/Python function or method definition.
  • Break up flows into separate flows or sub-flows, which:
    • Makes the graphical view more intuitive
    • Makes the XML code easier to read
    • Enables code reuse through the reuse of flows/sub-flows
    • Provides separation between an interface and its implementation
    • Makes flows easier to test
  • Always define Error handling for all flows.
  • Encapsulate global elements in a configuration file.
  • Create multiple applications, separated by functionality.
  • If deployment is on-premises, use a domain project to share global configuration elements between applications, which helps to:
    • Keep consistency between applications.
    • Expose multiple services within the domain on the same port.
    • Share the connection to persistent storage.
    • Utilize the VM connector for communications between applications.
  • Use application properties to provide an easier way to manage configurations for different environments.
  • Create a YAML properties file named “{env}-config.yaml” in the “src/main/resources” folder.
  • Define metadata in “src/main/resources/application-types.xml” for all canonical schemas and for all connectors that do not create the metadata automatically.
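For example, a dev-config.yaml might look like the following sketch (the keys shown are illustrative, not a required schema):

```yaml
# dev-config.yaml: sample environment properties (illustrative keys only)
http:
  port: "8081"
salesforce:
  username: "dev-user@example.com"
  password: "changeme"
```

Each environment ({env} = dev, test, prod, …) gets its own file with the same keys and environment-specific values.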

MuleSoft Development Standards for Project Naming Convention

  • System API apps: {source}sapi
  • Process API apps: {process}papi
  • Experience API apps: {Web/Mobile/Machine}eapi
  • Integration apps: {sourceSystem}and{targetSystem}int (these are batch or scheduled integrations)

    Note that not all implementations will have all these types of projects.

MuleSoft Development Standards for Transformations / DataWeave

  • Write comments.
  • Keep the code simple.
  • Provide sample data for various scenarios to test the transformation with.
  • Define a utility.dwl that stores common DataWeave functions, such as common currency, time, and string conversions.
  • Store complex transformations in external DWL files in the src/main/resources folder, as complex transformations are performance-intensive.
  • Use existing DataWeave libraries before writing your own DataWeave functions.

MuleSoft Development Standards for Flows

  • Minimize flow complexity to improve performance.
  • Each variable defined in the flow is used by the process.
  • Transactions are used appropriately.
  • All calls to external components are wrapped in an exception-handling Scope.
  • No DataWeave script contains an excessive amount of code that could instead be moved to an external component.
  • All Loops have clearly defined exit conditions.
  • All variables are explicitly instantiated.
  • All flows have trace points inserted to enable debugging in later environments.

MuleSoft Leading Practices for Deployment and Administration

Figure 1: Continuous Integration and Continuous Deployment

Utilize the Anypoint Platform support for CI/CD using:

  • The Mule Maven plugin to automate building, packaging and deployment of Mule Applications.
  • The MUnit Maven plugin to automate test execution.
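In a Maven pom.xml this typically looks like the sketch below (plugin versions are placeholders; check the plugin documentation for the versions matching your Mule runtime):

```xml
<!-- Sketch: build/deploy with the Mule Maven plugin and run MUnit tests. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.mule.tools.maven</groupId>
      <artifactId>mule-maven-plugin</artifactId>
      <version>${mule.maven.plugin.version}</version>
      <extensions>true</extensions>
    </plugin>
    <plugin>
      <groupId>com.mulesoft.munit.tools</groupId>
      <artifactId>munit-maven-plugin</artifactId>
      <version>${munit.version}</version>
      <executions>
        <execution>
          <goals>
            <goal>test</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```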

MuleSoft Leading Practices for Testing

  • It is recommended to have the following test cycles before deployment into the production environment:
    • Unit testing the artifacts (build phase). Build MUnit Tests for all flows
    • End-to-end integration testing (SIT)
    • User acceptance testing (UAT)
    • Performance testing
  • Before deployment the solution should have been successfully tested while running under user accounts with permissions identical to the production environment.
  • Messages are validated against their schema as per the use case requirements.
  • At a minimum, the developer should conduct development unit tests & end-to-end system integration tests for every interface before certifying the interface ready for release for the QA phase of testing.

MuleSoft: Designing Integration Applications Wisdom

In this blog I will go through best practices for designing integration applications: wisdom I have garnered through projects, MuleSoft recommendations, reviews of MuleSoft projects, and discussions with MuleSoft specialists.


  • Connector retry / Until Successful / retry APIs should be present for all connections and connectors. This is an obvious one: networks and the internet have occasional disconnections, so you should always retry a few times before giving up and abandoning the operation.
  • High-volume processes should be coupled with the MuleSoft Batch framework and appropriate queuing mechanisms wherever necessary. This makes the processing faster and more reliable, but be cautious about which queuing infrastructure you are using: VM queues are mostly in-memory, which might cause out-of-memory issues.
  • Exceptions are logged to an agreed-upon location. Best, of course, is a ticketing system like ServiceNow, or regular logging with a log monitoring system like Splunk collecting the logs and issuing warnings. Refrain from utilizing emails to send errors to support teams: things get messy with emails, and sometimes tracking is lost.
  • Long-running processes should provide a way to inspect progress to date. Usually this is done by sending notifications through a webhook API or by pushing the progress to the logs, but it is important to have a way to see that, say, 60% of the data load has been processed so far.
  • Processes are designed to be loosely coupled and promote reuse where possible. Adopt microservices sensibly: not too small and not too large.
  • Adopt the MuleSoft API-led connectivity approach sensibly. Aha, this is a tricky and controversial one. Many novice developers/architects just follow the 3-layer API-led pattern (System API, Process API, Experience API) religiously without thinking of the consequences. There are times when all three tiers are required; other times you need only two. For example, if the integration is a batch job that picks up files or records from a DB and pushes them to Salesforce, then you only need a System API layer and an integration layer (no need for Experience or Process API layers). See below for a summary of the API-led connectivity approach.
    • System APIs should expose a canonical schema (project or domain scope) when there is an identified canonical schema for the project, domain, or organization scope. Do not just replicate the source system API, removing any system-specific complexities. I have seen implementations where the developers just replicated the source system API, only replacing the source system's authentication. This meant spending 1-4 weeks to develop and test an API that merely swaps the source system's authentication for another authentication scheme. As a manager, or from the client side: why did we spend 4 weeks = 160 hrs at $200 per hour = $32K to develop something that does not add $32K worth of value and will cost us more to maintain in the future? The reason we use middleware like MuleSoft to implement integrations is to make it easy to replace systems and reduce vendor dependencies. For example, suppose we are integrating Salesforce, SAP, Workday, and Shopify, and after, say, 2 years the corporation decides to replace SAP with Dynamics AX. If the System API for SAP exposed the SAP API with just minor modifications for authentication, and the Dynamics AX System API does the same, then all the process and integration applications would have to be changed and recoded. This is the main reason Enterprise Service Buses got such a bad reputation: bad implementations. As I wrote in my book "BizTalk the Practical Course" (yes, I know this is MuleSoft, but the theory is the same: it is like Quick Sort in C#, Java, C++, Scala, or Python; you are still implementing the same algorithm and the same theory with a different tool). Read the full discussion in the preview, page 35.

  • When creating a canonical schema stick to the project/domain scope and do not try to create a generic canonical schema for the whole organization.
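The connector-retry advice in the first bullet above can be sketched in plain Java (a hypothetical helper illustrating the idea, not Mule's actual Until Successful scope):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Attempt the call up to maxAttempts times, sleeping delayMillis between
    // failures; rethrow the last exception once all attempts are exhausted.
    public static <T> T withRetry(Callable<T> call, int maxAttempts, long delayMillis) throws Exception {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be >= 1");
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return;
            } catch (Exception e) {
                last = e;                      // remember the failure
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis); // back off before the next try
                }
            }
        }
        throw last;                            // give up
    }
}
```

A call that fails twice and then succeeds returns normally on the third attempt; a call that keeps failing surfaces its last exception to the caller.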

I cannot stress this enough: while MuleSoft promotes the three-tier structure and application API network, it does not always make sense to use this approach in every situation. Strive to design the integration architecture to be:

  1. Easy to maintain
  2. As modular as possible
  3. Any components that can be reused should be isolated into their own library or application

The MuleSoft API-led connectivity approach

API-led connectivity is a methodical way to connect data to applications through a series of reusable and purposeful modern APIs that are each developed to play a specific role – unlock data from systems, compose data into processes, or deliver an experience. API-led connectivity provides an approach for connecting and exposing assets through APIs. As a result, these assets become discoverable through self-service without losing control.

  • System APIs: In the example, data from SAP, Salesforce and ecommerce systems is unlocked by putting APIs in front of them. These form a System API tier, which provides consistent, managed, and secure access to backend systems.
  • Process APIs: Then, one builds on the System APIs by combining and streamlining customer data from multiple sources into a “Customers” API (breaking down application silos). These Process APIs take core assets and combine them with some business logic to create a higher level of value. Importantly, these higher-level objects are now useful assets that can be further reused, as they are APIs themselves.
  • Experience APIs: Finally, an API is built that brings together the order status and history, delivering the data specifically needed by the Web app. These are Experience APIs that are designed specifically for consumption by a specific end-user app or device. These APIs allow app developers to quickly innovate on projects by consuming the underlying assets without having to know how the data got there. In fact, if anything changes to any of the systems of processes underneath, it may not require any changes to the app itself.

Defining the API data model

The APIs you have identified and started defining in RAML definitions exchange data representations of business concepts, mostly in JSON format. Examples are:

  • The JSON representation of the Policy Holder of a Motor Policy returned by the “Motor Policy Holder Search SAPI”
  • The XML representation of a Quote returned by the “Aggregator Quote Creation EAPI” to the Aggregator
  • The JSON representation of a Motor Quote to be created for a given Policy Holder passed to the “Motor Quote PAPI”
  • The JSON representation of any kind of Policy returned by the “Policy Search PAPI”

All data types that appear in an API (i.e., the interface) form the API data model of that API. The API data model should be specified in the RAML definition of the API. API data models are clearly visible across the application network because they form an important part of the interface contract for each API.

The API data model is conceptually clearly separate from similar models that may be used inside the API implementation, such as an object-oriented or functional domain model, and/or the persistent data model (database schema) used by the API implementation. Only the API data model is visible to API clients in particular and to the application network in general – all other forms of models are not. Consequently, only the API data model is the subject of this discussion.

Enterprise Data Model versus Bounded Context Data Models

The data types in the API data models of different APIs can be more or less coordinated:

  • In an Enterprise Data Model – often called Canonical Data Model, but the discussion here uses the term Enterprise Data Model throughout – there is exactly one canonical definition of each data type, which is reused in all APIs that require that data type, within all of Acme Insurance
    • E.g., one definition of Policy that is used in APIs related to Motor Claims, Home Claims, Motor Underwriting, Home Underwriting, etc.
  • In a Bounded Context Data Model several Bounded Contexts are identified within Acme Insurance by their usage of common terminology and concepts. Each Bounded Context then has its own, distinct set of data type definitions – the Bounded Context Data Model. The Bounded Context Data Models of separate Bounded Contexts are formally unrelated, although they may share some names. All APIs in a Bounded Context reuse the Bounded Context Data Model of that Bounded Context
    • E.g., the Motor Claims Bounded Context has a distinct definition of Policy that is formally unrelated to the definition of Policy in the Home Underwriting Bounded Context
  • In the extreme case, every API defines its own API data model. Put differently, every API is in a separate Bounded Context with its own Bounded Context Data Model.
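As a minimal Java illustration of the Bounded Context idea, two contexts can each define their own, formally unrelated Policy type (class and field names here are hypothetical):

```java
// Two Bounded Contexts each define their own "Policy". The classes are
// formally unrelated even though they share the name and some field names.
public class BoundedContexts {
    static class MotorClaimsPolicy {       // Motor Claims context
        String policyNumber;
        String coveredVehicle;
    }
    static class HomeUnderwritingPolicy {  // Home Underwriting context
        String policyNumber;
        String insuredAddress;
    }
}
```

Neither class extends or references the other; the shared name carries no formal relationship, which is exactly the point of keeping Bounded Context models separate.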

Abstracting backend systems with System APIs

System APIs mediate between backend systems and Process APIs by unlocking data in these backend systems:

  • Should there be one System API per backend system or many?
  • How much of the intricacies of the backend system should be exposed in the System APIs in front of that backend system? In other words, how much to abstract from the backend system data model in the API data model of the System APIs in front of that backend system?

General guidance:

  • System APIs, like all APIs, should be defined at a granularity that makes business sense and adheres to the Single Responsibility Principle.
  • It is therefore very likely that any non-trivial backend system must be fronted by more than one System API
  • If an Enterprise Data Model is in use, then
    • the API data model of System APIs should make use of data types from that Enterprise Data Model
    • the corresponding API implementation should translate between these data types from the Enterprise Data Model and the native data model of the backend system
  • If no Enterprise Data Model is in use, then
    • each System API should be assigned to a Bounded Context, the API data model of System APIs should make use of data types from the corresponding Bounded Context Data Model
    • the corresponding API implementation should translate between these data types from the Bounded Context Data Model and the native data model of the backend system
    • In this scenario, the data types in the Bounded Context Data Model are defined purely in terms of their business characteristics and are typically not related to the native data model of the backend system. In other words, the translation effort may be significant
  • If no Enterprise Data Model is in use, and the definition of a clean Bounded Context Data Model is considered too much effort, then
    • the API data model of System APIs should make use of data types that approximately mirror those from the backend system:
      • same semantics and naming as the backend system
      • but only for those data types that fit the functionality of the System API in question (backend systems often are Big Balls of Mud that cover many distinct Bounded Contexts)
      • lightly sanitized, e.g., using idiomatic JSON data types and naming, correcting misspellings, …
      • exposing all fields needed for the given System API’s functionality, but not significantly more
      • making good use of REST conventions

The latter approach, i.e., exposing in System APIs an API data model that basically mirrors that of the backend system, does not provide satisfactory isolation from backend systems through the System API tier on its own. In particular, it will typically not be possible to “swap out” a backend system without significantly changing all System APIs in front of that backend system – and therefore the API implementations of all Process APIs that depend on those System APIs! This is so because it is not desirable to prolong the life of a previous backend system’s data model in the form of the API data model of System APIs that now front a new backend system. The API data models of System APIs following this approach must therefore change when the backend system is replaced. On the other hand:

  • It is a very pragmatic approach that adds comparatively little overhead over accessing the backend system directly
  • Isolates API clients from intricacies of the backend system outside the data model (protocol, authentication, connection pooling, network address, …)
  • Allows the usual API policies to be applied to System APIs
  • Makes the API data model for interacting with the backend system explicit and visible, by exposing it in the RAML definitions of the System APIs
  • Further isolation from the backend system data model does occur in the API implementations of the Process API tier

MuleSoft Application Modularization

Mule allows you to run applications side-by-side in the same instance. Each Mule application should represent a coherent set of business or technical functions and, as such, should be coded, tested, built, released, versioned and deployed as a whole. Splitting particular functions into individual applications allows a coarse-grained approach to modularity and is useful when keeping elements of your application running while others could go through some maintenance operations. For optimum modularity:

  • Consider what functions are tightly interrelated and keep them together in the same Mule application: they will form sub-systems of your whole solution.
  • Establish communication channels between the different Mule applications: the VM transport will not be an option here, as it can’t be used across different applications. Prefer the TCP or HTTP transports for synchronous channels and JMS for asynchronous ones.

MuleSoft: Salesforce Synchronization with Retry Sample

In addition to MuleSoft connection retry, this pattern adds retry of records rejected or failed by the target system’s validation, in case a later attempt would succeed.


In a recent project I was working on, I needed to keep Salesforce object updates synchronized with Siebel. The basic integration scenario is shown in Figure 1.

Figure 1: Integration Scenario

There is one complication, though: sometimes Siebel will reject updates or new records when validation fails due to missing data or race conditions with other data records. Rather than letting records fail and logging errors for support or the user to resubmit, a retry approach after a period of time can remedy most of the errors. As depicted in Figure 2, the logic for the synch is as follows:

  1. The synchronization job is scheduled to run every 10 minutes.
  2. Fetch the last datetime the synch ran (in case the synch job was stopped for any reason, such as maintenance).
  3. Fetch the IDs of all accounts modified or created in Salesforce since the last run.
  4. Process accounts:
    1. Fetch the account objects for the record IDs from step 3.
    2. Do the necessary conversions to the Siebel format and send the updates to Siebel.
    3. Send the Siebel IDs for accepted records back to Salesforce.
    4. For records rejected by Siebel, keep track of the record IDs and store them in the MuleSoft Object Store with the number of retries.
  5. If there are rejected record IDs stored from a previous run in the MuleSoft Object Store:
    1. Fetch the record IDs from the Object Store.
    2. Execute the Process Accounts logic from step 4.
    3. If the number of retries exceeds 3, log the rejection and create a ServiceNow ticket so support can track and resolve the issue.
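The retry bookkeeping in steps 4.4 and 5.3 can be sketched in Java, with a plain Map standing in for the MuleSoft Object Store (names and the retry limit are illustrative):

```java
import java.util.*;

public class RetryTracker {
    private final Map<String, Integer> store = new HashMap<>(); // stand-in for the Mule Object Store
    private static final int MAX_RETRIES = 3;

    // Record a rejected ID; returns true if it should be retried on a later run,
    // false once retries are exhausted (escalate, e.g., via a ServiceNow ticket).
    public boolean trackRejection(String recordId) {
        int retries = store.merge(recordId, 1, Integer::sum); // increment the retry count
        if (retries > MAX_RETRIES) {
            store.remove(recordId); // stop retrying; hand off to support
            return false;
        }
        return true;
    }

    // IDs waiting for the next retry run.
    public Set<String> pendingIds() {
        return new HashSet<>(store.keySet());
    }
}
```

Each scheduled retry run would read pendingIds(), re-process those records, and call trackRejection again for any that fail.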

Figure 2: Synchronization with Capturing errors and Retry

The Sample

In the sample code found on my GitHub, I am simulating Siebel with a MySQL DB. The implementation synchronizes the Salesforce account changes with a table in the MySQL DB. The next few sections walk through the code details.

Project Structure

When you open the sample code in AnyPoint Studio, you will find the project code consisting of:

  1. src/main/mule, which contains the 4 workflows
  2. src/main/java, which contains 1 Java helper class
  3. src/main/resources, which contains the configuration file config.yaml

The other packages are the standard packages of any MuleSoft project. The 4 workflows, 1 Java class, and config.yaml implement the solution.

Figure 3: Project Structure

General Workflow and Global Configuration

Figure 4: General Workflow

Following MuleSoft best practices, the General workflow is where you will find the Global Exception Handler (in this sample it just logs the error, Figure 4) and the global configuration: a config file for the different environments and the connections to Salesforce and the MySQL DB (Figure 5).

Figure 5: Global Configuration

Figure 6 shows that the Default Error Handler is set to the “General Error Handler” in the General workflow. For a refresher on MuleSoft exception handling, see MuleSoft: Understanding Exception Handling.

Figure 6: Global Error handler Configuration

Figure 7 shows the setting of config.yaml as the source for the configuration information. It is important not to hard-code any environment-specific properties.

Figure 7: Configuration properties source file

The listing below shows the contents of config.yaml. All sensitive information is masked with *. To run the sample, enter the proper information for your environment.


username: “**************”
password: “***********”
token: “*****”

host: “localhost”
port: “3306”
user: “root”
password: “****”
database: “synchsample”

Figure 8 shows the configuration of the Salesforce connection with parametrized information to be retrieved from config.yaml

Figure 8: Salesforce configuration

Figure 9 shows the configuration of the MySQL DB connection with parametrized information to be retrieved from config.yaml

Figure 9: MySQL Connection configuration


Figure 10: MainFlow

The MainFlow is triggered by the scheduler. The logic is:

  1. Log the start time
  2. Call the Get Duration subflow
  3. Query Salesforce for updated accounts
  4. Convert the result to a list of IDs to query
  5. Call Record Processing subflow
  6. Log the operation is finished
  7. In case of any error just log it.

CallRecordsProcessing Workflow

Figure 11: CallRecordProcessing Subworkflow

  1. Check if the list of record IDs is empty
    1. Not Empty: call Process Records
    2. Empty: log no modified records found

GetDuration Subworkflow

Figure 12: GetDuration Subworkflow

  1. Check if Object store contains saved lastTimeRun
  2. Contains LastTimeRun:
    1. Fetch the lastTime
    2. Call Helper Java method to get the duration
  3. Does not contain LastTimeRun:
    1. Set the duration to 10 minutes

The Java helper method below calculates the duration, in minutes, between the last time the workflow ran and now.

package sfsynchwithretrypattern;

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.ZonedDateTime;

public class StaticHelpers {

    public static long getDuration(ZonedDateTime lastTime) {
        LocalDateTime now =;
        Duration duration = Duration.between(now, lastTime.toLocalDateTime());
        return Math.abs(duration.toMinutes());
    }
}



StoreRetryAccounts Workflow

Figure 13: StoreRetryAccounts SubWorkflow

  1. Check if there are stored retry IDs
    1. IDs stored: append the new IDs to the existing IDs
    2. None stored: set the stored value to the new IDs
  2. Store the updated value
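The append-or-create logic above can be sketched in Java, again with a Map standing in for the Object Store (names are illustrative):

```java
import java.util.*;

public class RetryIdStore {
    private final Map<String, List<String>> store = new HashMap<>(); // stand-in for the Object Store

    // Append the new IDs to any previously stored ones, creating the entry if absent.
    public void storeRetryIds(List<String> newIds) {
        store.computeIfAbsent("retryIds", k -> new ArrayList<>()).addAll(newIds);
    }

    // The accumulated IDs waiting for the retry run (empty if none stored).
    public List<String> retryIds() {
        return store.getOrDefault("retryIds", List.of());
    }
}
```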

ProcessRecords SubWorkflow

Figure 14: ProcessRecords SubWorkflow

  1. Retrieve the accounts with the IDs
  2. For each retrieved account:
    1. Check if the account info exists in the DB
      1. Exists: update the account information
      2. Does not exist: insert a new record

    Note: here I am taking the new records and saving their IDs in the retry store to simulate rejections. In a real implementation, this logic should be moved to where you receive the results and errors from the target system.

RetryProcessing Workflow

The RetryProcessing workflow is triggered by a scheduler. It starts with:

  1. Check if there are any retry IDs stored
    1. Exist: retrieve the stored IDs and process them
    2. Do not exist: log that no stored IDs were found.


This is a good way to handle the transient errors that occasionally occur when integrating systems, where a simple retry might resolve the issue rather than having to ask the user to re-submit or getting support to chase the error. I hope this helps you in your projects. Feedback is welcome.

Figure 15: MySQL DB Table with Updates

MuleSoft: Understanding Exception Handling

I have been approached by several developers taking the Anypoint Platform Development: Fundamentals (Mule 4) training about exception handling and the different scenarios in Module 10. The way it is described can be confusing, so here is how I understand it. Take any MuleSoft flow like the one below.

How would this be executed? Anypoint Studio and the Mule runtime convert the flow into Java bytecode, so it is generated as a method or function in Java, and the code for this function is wrapped in a try-catch scope. If you have defined exception handlers in the error-handling module, it emits code to catch and handle those exceptions. If you have not, and there is a global error handler, it emits catch blocks for the global error handler. Here is the thing that catches developers by surprise: if you have defined local handlers, then those cases are the only ones handled, not the combination of the local cases and the global error handler cases. Only one of them applies: if a local error handler is defined, that is it; if not, the global error handler is emitted as the catch cases.

The second point is the On Error Propagate and On Error Continue options. If you choose On Error Propagate, the emitted code rethrows the caught exception at the end of each catch block. If you choose On Error Continue, the exception is not rethrown. Think of it as the code written below. If you have been a Java, C#, C++, or Python developer, you should recognize these basic programming concepts.

public void mainMethod() {
    try {
        // flow logic goes here
    } catch (Exception e) {
        // default handler, or global handler if defined
    }
}

// On Error Propagate: handle, then rethrow the caught exception
public void onErrorPropagate() throws EOFException {
    try {
        throw new EOFException();
    } catch (EOFException e) {
        throw e; // rethrown, so the caller still sees the error
    }
}

// On Error Continue: handle, and swallow the exception
public void onErrorContinue() {
    try {
        throw new EOFException();
    } catch (EOFException e) {
        // not rethrown, so processing continues
    }
}
Hope this helps

MuleSoft: MCD Level 1 Mule 4 Certification Experience

OK, I passed the exam. How was it? I cannot tell you what the questions were, and frankly I do not quite remember them. But here are my thoughts about the exam.

  1. You must go through the training course (which is free online). Do all the exercises, as there are concepts that are not in the slides or materials, and do the DIY exercises; I personally did them maybe 4 or 5 times. I also took the 3.8 online free course, though that might confuse you, as there are changes between 3.8 and 4.0, like flowVars, which is gone completely.
  2. I would recommend going quickly over the Mule runtime documentation and trying to build a practical example.
  3. An understanding of Java and Spring (either the MVC or Boot framework) would help, but it is not necessary; it helps you understand how flows are translated into Spring code and then compiled.
  4. If you have time on your hands, go to GitHub and download the free MuleSoft samples and the open-source code. Yes, that is overkill.
  5. Now, how about the questions? They are mostly tricky questions, and most of the answers are extremely similar. You have to consider them thoroughly and find the best answer. Many differ by a minor colon or semicolon, very low-level syntax differences, so if you have special glasses for the computer, use them!
  6. If you are taking the exam from home, make sure you turn off your cell phone, etc., and do not speak to yourself. And no, you will not be able to use your big-screen TV; you will be limited to the laptop screen.
  7. Hope this helps. I wish I could say it was easy. I answered it in almost half the time, with reviews, but I was really annoyed by the questions; every question is just trying to trick you.

MuleSoft: File:List Interesting Observation

Working with the MuleSoft file connector, I was expecting that the File -> List operation would return a list of fileInfo objects (you know: path, size, etc.), but it actually returns a list of the contents of the files in the directory. This seemed odd to me, as the documentation states:

“The List operation returns a List of Messages, where each message represents any file or folder found within the Directory Path (directoryPath). By default, the operation does not read or list files or folders within any sub-folders of directoryPath. To list files or folders within any sub-folders, you can set the recursive parameter to true.”

Here is the sample I was working with

I was intending to put a read-file operation in the foreach; however, that just gave me an error.

Here is a sample of the logged messages

That was a head-scratcher. I thought I had made some mistake in the list parameters, but it seems that is how the file connector List operation works. Below you will see that, in part of the message for each file, the typeAttributes hold the fileInfo information.

Implement a REST API in MuleSoft, Azure Logic Apps, ASP.NET Core, or Spring Boot? MuleSoft: Step 1 - Defining an API in RAML

I have been working lately on comparing different technologies for building web APIs. One of the main concerns was: if we wanted to build a simple API service, which technology would be easier and more productive to develop the service with? To provide a reference comparison I will build the same web service (in MuleSoft, Azure Logic Apps, ASP.NET Core, and Spring Boot) and provide my notes as I go. The web service provides the following functionality:

  1. CRUD operations on an Authors entity
  2. CRUD operations on BOOKS entities where books are related to authors

All the read operations (queries) should support:

  1. Filtering
  2. Searching
  3. Paging
  4. Support Levels two and three of the Richardson Maturity Model (see my previous post). This means, based on the Accept header of the request, returning the results as either:
    1. Pure JSON
    2. JSON with HATEOAS links
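For illustration, a HATEOAS-style response for an author might look like this (the field names are assumptions for this example, not a fixed contract):

```json
{
  "Id": 1,
  "Name": "Moustafa Refaat",
  "Links": [
    { "href": "/api/v1.0/authors/1", "operation": "GetAuthor", "method": "GET" },
    { "href": "/api/v1.0/authors/1/books", "operation": "GetAuthorBooks", "method": "GET" }
  ]
}
```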

I will start with MuleSoft implementation.

Step 1. Define the API in RAML

With MuleSoft you get the Anypoint portal and Design Center, which helps you design the API RAML. There is an API Designer visual editor that can help you in the beginning.

Though it has some weaknesses, such as:

  1. Once you switch to the RAML editor you cannot go back.
  2. You cannot define your own media types; you have to choose from the list.


To finalize the API definition in RAML I had to edit it manually, though the editor helped me get started. Below is a fragment of the API in RAML (the full solution will be published on my GitHub).

Notice in the RAML that I have defined two responses for the GET operation of the Authors resource.

#%RAML 1.0

title: GTBooks
description: |
  GTBooks Example
version: '1.0'
baseUri: /api/v1.0

types:
  CreateAuthor:
    description: This is a new DataType
    type: object
    properties:
      Name:
        required: true
        example: Moustafa Refaat
        description: Author Name
        type: string
      Nationality:
        required: true
        example: Canadian
        description: Author Nationality
        type: string
      DateOfBirth:
        required: true
        example: '2018-12-09'
        description: Author Date of Birth
        type: date-only
      DateOfDeath:
        required: false
        example: '2018-12-09'
        description: Author Date of Death
        type: date-only
  Author:
    description: This is a new DataType
    type: CreateAuthor
    properties:
      Id:
        required: true
        example: 1
        description: Author Id
        type: integer
      Age:
        required: true
        maximum: 200
        minimum: 8
        example: 10
        description: Author Age
        type: integer
  AuthorHateoas:
    description: Author with Hateoas information LINKS
    type: Author
    properties:
      Links:
        required: true
        description: Property description
        type: array
        items: Link
  Link:
    description: Hateoas LINK
    type: object
    properties:
      href:
        required: true
        example: /Book/10
        description: URL Link
        type: string
      operation:
        required: true
        example: GetBook
        description: Operation
        type: string
      method:
        required: true
        example: GET
        description: 'HTTP Method Get, PUT,..'
        type: string

/authors:
  get:
    headers:
      Accept:
        example: 'application/json'
        description: application/json or application/hateoas+json
        type: string
    queryParameters:
      sort:
        required: false
        example: Example
        description: sort by
        type: string
      filter:
        required: false
        example: Example
        description: Property description
        type: string
    responses:
      '200':
        body:
          application/json:
            type: array
            items: Author
          application/hateoas+json:
            type: array
            items: AuthorHateoas
      '304': {}
      '400': {}
      '500': {}

(.. to be continued)