Salesforce: Integration Patterns Simplified

In today's connected systems, no application is an island. When implementing a CRM system like Salesforce, it will need to exchange data with other systems in the organization, such as ERP and HR systems. These other systems can be in the cloud, on-premises, or both. Because Salesforce is a cloud-based SaaS platform, most integrations are done through Web API calls. Broadly, there are three main scenarios for integrating with Salesforce:

  • Remote Process Invocation: used when a business process is distributed across multiple applications, including Salesforce, and the progress of that process has to be communicated across these applications.
  • Batch Data Synchronization: the most common scenario, where you need to keep data in different systems periodically synchronized. It is also used for the initial data migration.
  • Data Virtualization: utilizing Salesforce Connect, external data is made virtually available in Salesforce as external objects.

There is a fourth scenario that the Salesforce Integration Patterns and Practices documentation describes, based on the Salesforce UI subscribing to Salesforce events in order to update the UI. I do not consider this an integration pattern; in my opinion it is an internal implementation pattern for Salesforce. Below is a high-level description of these integration patterns. In a real-world Salesforce implementation, you would typically implement several of them for different integration scenarios.

Remote Process Invocation (Web API REST/ SOAP Call)

  1. Request-Reply: Synchronous Call

In this scenario, when an event occurs in Salesforce (such as a user entering certain information), Salesforce makes a Web API call to another system to inform it of the event with a data payload, and waits for the remote system to complete its processing. This pattern is suitable for real-time integrations where the data payload is small and the continuation of the business process depends on the remote system completing its processing. Things to consider are:

  • How Salesforce authenticates to the remote system.
  • Securing the Web API call.
  • The time for the remote system to finish processing and return the results.
  • Salesforce governor limits, including the number of Web API calls to the remote system.
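The request-reply interaction above can be illustrated with a minimal sketch in plain Java (not Apex): the caller blocks on the remote reply before the process continues, and explicit timeouts guard against a slow remote system. The endpoint path, payload, and timeout values are assumptions for illustration; an in-process HTTP server stands in for the remote system.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RequestReplyDemo {

    /** Synchronous request-reply: send the payload, block until the reply arrives. */
    public static String requestReply(String urlStr, String payload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlStr).openConnection();
        conn.setConnectTimeout(2000);  // bound the wait: the caller is blocked meanwhile
        conn.setReadTimeout(5000);
        conn.setDoOutput(true);        // POST with a body
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the remote system: confirms the order it receives
        HttpServer remote = HttpServer.create(new InetSocketAddress(0), 0);
        remote.createContext("/orders", exchange -> {
            byte[] reply = "{\"status\":\"CONFIRMED\"}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        remote.start();

        String url = "http://localhost:" + remote.getAddress().getPort() + "/orders";
        // The business process continues only after this call returns
        System.out.println(requestReply(url, "{\"orderId\":42}"));
        remote.stop(0);
    }
}
```

The blocking call is exactly why the considerations above matter: every second the remote system takes to reply is a second the calling transaction is held open.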
  2. Fire and Forget: Asynchronous Call

In this scenario, when an event occurs in Salesforce (such as a user entering certain information), Salesforce makes a Web API call to another system to inform it of the event with a data payload, and the remote system returns confirmation of receipt of the event. Salesforce does not wait for the remote system to finish processing. When the remote system finishes processing, it calls back into Salesforce to report completion, along with the data Salesforce needs. This pattern is suitable for real-time integrations where the data payload is small but the continuation of the process does not depend on the remote system completing the processing. Things to consider here are:

  • How Salesforce authenticates to the remote system.
  • Securing the Web API call.
  • The time for the remote system to finish processing and return the results.
  • Salesforce governor limits:
    • Number of Web API calls to the remote system.
    • Number of remote system Web API call-backs to Salesforce:
      • Is it one call-back for each request?
      • Or does the remote system call back once for several requests?
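The fire-and-forget exchange can be sketched in plain Java: the sender gets an acknowledgement immediately and moves on, while the "remote system" invokes a callback when it finishes. The names (`send`, the `ACK` string, the `processed:` prefix) are illustrative assumptions, not any Salesforce API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class FireAndForgetDemo {

    /**
     * Hands the event to the "remote system" and returns an acknowledgement
     * immediately; the remote system invokes the callback when it is done.
     */
    public static String send(String payload, Consumer<String> callback) {
        CompletableFuture
            .supplyAsync(() -> "processed:" + payload)  // remote processing, off-thread
            .thenAccept(callback);                      // remote "calls back" with the result
        return "ACK";                                   // receipt confirmation only
    }

    public static void main(String[] args) throws Exception {
        AtomicReference<String> callbackResult = new AtomicReference<>();
        String ack = send("case-123", callbackResult::set);
        System.out.println(ack);           // the caller continues right away
        Thread.sleep(200);                 // give the async work time to finish
        System.out.println(callbackResult.get());
    }
}
```

The call-back considerations listed above map directly onto the `callback` parameter: whether it fires once per request or once for a batch determines how many inbound calls Salesforce has to absorb.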
  3. Remote Call-In

The remote system connects and authenticates with Salesforce to notify Salesforce about external events, create records, and update existing records. There are two variations of this scenario:

  1. The remote system calls into Salesforce to query, insert, update, upsert, or delete data. You implement this pattern using either the REST API for notifications of external events or the SOAP API to query a Salesforce object; the sequence of events is the same when using the REST API. The solutions related to this pattern allow for:
    1. The remote system to call the Salesforce APIs to query the database and execute single-object operations (create, update, delete, and so on).
    2. The remote system to call the Salesforce REST API composite resources to perform a series of object operations.
    3. The remote system to call custom-built Salesforce APIs (services) that can support multi-object transactional operations and custom pre/post processing logic.

  2. In an event-driven architecture, the remote system calls into Salesforce using the SOAP API, REST API, or Bulk API to publish an event to the Salesforce event bus. Event subscribers can be on the Salesforce Platform, such as Process Builder processes, Flows, Lightning components, or Apex triggers. Event subscribers can also be external to the Salesforce Platform, such as CometD subscribers.

Things to consider here are:

  • How the remote system authenticates with Salesforce.
  • Securing the Web API call.
  • Salesforce governor limits, including the number of remote system Web API calls to Salesforce:
    • Is it one call for each request, or one call for several requests?

Batch Data Synchronization

The scenarios for this pattern are the initial data migration to Salesforce and nightly/weekly/monthly updates. Here, the immediate propagation of data between systems is not important, as when moving data to reporting systems or a data warehouse. The focus is on extracting data from the source system, transforming it, and loading it into the target system. An ETL middleware is of great help in this scenario. Things to consider here are:

  • Data volume.
  • Securing the communication pipelines between Salesforce, the ETL tool, and the remote system.
  • Securing data while in transit between the systems.
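Because data volume is the dominant concern in this pattern, the load step is usually chunked into fixed-size batches. A minimal sketch of that batching step in Java (the batch size of 200 in the usage example is an assumption for illustration; actual limits depend on which Salesforce API the ETL tool uses):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLoader {

    /** Splits extracted records into fixed-size batches for loading. */
    public static <T> List<List<T>> toBatches(List<T> records, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchSize) {
            // subList is a view; copy it if batches outlive the source list
            batches.add(records.subList(i, Math.min(i + batchSize, records.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> extracted = new ArrayList<>();
        for (int i = 0; i < 450; i++) extracted.add(i);
        // 450 records at an assumed batch size of 200 -> 3 batches (200, 200, 50)
        System.out.println(toBatches(extracted, 200).size()); // prints 3
    }
}
```

An ETL middleware normally does this for you, but the same chunking logic applies whether the batches feed a Bulk API job or a SOAP upsert loop.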

Data Virtualization

This pattern is about how to view, search, and modify data that’s stored outside of Salesforce, without moving the data from the external system into Salesforce. There are various options to consider when applying solutions based on this pattern:

  • Would a declarative/point-and-click outbound integration or UI mashup in Salesforce be satisfactory?
  • Is there a large amount of data that you don’t want to copy into your Salesforce org?
  • Do you need to access small amounts of remote system data at any one time?
  • Do you need real-time access to the latest data?
  • Do you store your data in the cloud or in a back-office system, but want to display or process that data in your Salesforce org?
  • Do you have data residency concerns for storing certain types of data in Salesforce?

You can use Salesforce Connect to access data from external sources alongside your Salesforce data. Pull data from legacy systems such as SAP, Microsoft, and Oracle in real time without making a copy of the data in Salesforce. Salesforce Connect maps data tables in external systems to external objects in your org. Accessing an external object fetches the data from the external system in real time. Salesforce Connect lets you:

  • Query data in an external system.
  • Create, update, and delete data in an external system.
  • Access external objects via list views, detail pages, record feeds, custom tabs, and page layouts.
  • Define relationships between external objects and standard or custom objects to integrate data from different sources.
  • Enable Chatter feeds on external object pages for collaboration.
  • Run reports (limited) on external data.
  • View the data on the Salesforce mobile app.

To access data stored on an external system using Salesforce Connect, you can use one of the following adapters:

  • OData 2.0 adapter or OData 4.0 adapter — connects to data exposed by any OData 2.0 or 4.0 producer.
  • Cross-org adapter — connects to data that’s stored in another Salesforce org. The cross-org adapter uses the standard Lightning Platform REST API. Unlike OData, the cross-org adapter directly connects to another org without needing an intermediary web service.
  • Custom adapter created via Apex — if the OData and cross-org adapters aren’t suitable for your needs, develop your own adapter with the Apex Connector Framework.

Things to consider here are:

  • External objects behave like custom objects, but some features aren’t available for external objects.
  • External objects can impact report performance.
  • Securing the connection to the remote system.
  • Cross-Site Request Forgery (CSRF) on OData external data sources.

UI Update Based on Data Changes

While Salesforce Integration Patterns and Practices describes this as an integration pattern, I think of it as an internal Salesforce development technique, as you are using Lightning components to refresh the UI when data in Salesforce changes due to record updates or events. I include it here only for completeness.

MuleSoft: Copying Logs

The Anypoint Platform control plane allows you to access the logs for CloudHub and on-premises deployments. However, depending on the organization's subscription, the logs might not be available for on-premises deployments. I had a client with such a subscription: while the admin had access to the VM, the Mule support team did not. To resolve this issue, I created a simple solution that copies the logs to an FTP server the Mule support team had access to.

Logs Copier Flow

The flow fires every (schedule.everyMinutes) minutes and sets SinceDateTime to Now minus half of schedule.everyMinutes. If any files have changed, then for each file:

  1. Set the file name to year/filename.
  2. Call the Save File flow.

The Save File flow is shown below.
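The SinceDateTime calculation described above can be sketched in Java (the class and method names are illustrative; `everyMinutes` stands in for the schedule.everyMinutes property). Subtracting half the interval gives overlap between runs so a file changed right at a boundary is not missed:

```java
import java.time.LocalDateTime;

public class SinceCalculator {

    /** Returns the cut-off timestamp: now minus half the schedule interval. */
    public static LocalDateTime sinceDateTime(LocalDateTime now, long everyMinutes) {
        // Half the polling interval, expressed in seconds to handle odd minute counts
        return now.minusSeconds(everyMinutes * 60 / 2);
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2024, 1, 1, 12, 0);
        // A 10-minute schedule looks back 5 minutes
        System.out.println(sinceDateTime(now, 10)); // prints 2024-01-01T11:55
    }
}
```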

GraphQL API with Node.js Sample


In this sample, I explore building an API with Node.js and GraphQL. For a long time, I have been building APIs in SOAP and REST, usually either in code with C#/ASP.NET or Java/Spring, or on ESB platforms like BizTalk and MuleSoft (Anypoint Platform). Node.js has been gaining a lot of popularity, with many successful products on the market. To get a feel for developing an API on the JavaScript/Node.js platform, I developed this sample survey GraphQL API.

Why GraphQL?

GraphQL is a query language for APIs and a runtime for fulfilling those queries. It provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. With GraphQL you can add new fields and types to the API without impacting existing clients. Aging fields can be deprecated and hidden from tools. By using a single evolving version, GraphQL APIs give apps continuous access to new features and encourage cleaner, more maintainable server code.

Why Node.js?

Node.js is a free, open-source, cross-platform JavaScript runtime environment that lets developers write command-line tools and server-side scripts outside of a browser. Node.js runs the V8 JavaScript engine, the core of Google Chrome, outside of the browser, which makes it very performant. A Node.js app runs in a single process, without creating a new thread for every request. Node.js provides a set of asynchronous I/O primitives in its standard library that prevent JavaScript code from blocking.

Node.js has a unique advantage because millions of frontend developers that write JavaScript for the browser are now able to write the server-side code in addition to the client-side code without the need to learn a completely different language.

The Sample Code

The code represents an API for a simplified survey application. The Survey object has Id, Title, and Description fields. Each survey can have multiple questions, and each question has an Id, question text, and question type. The question type can be either open text or a range. The data store for the sample is a MySQL database, and the code utilizes open-source Node.js projects to implement the API. The source code is structured into the following folders:

config: contains the code for creating the database schema

src/db: contains code for the database schema and functionality

src/graphql: contains the code for the GraphQL server and GraphQL schema initialization

src/graphql/survey: contains the code for defining and implementing the API

src/test: contains sample unit test code

Downloading and Running the Sample

Download the code from the GitHub repo, then:

  1. Create the DB by running the following command:

    CREATE DATABASE mr_invoices CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

  2. Update DB_PASS in the .env file with the password for your root DB user.
  3. Run npm install to install all the dependencies.
  4. Run the DB migrations with npm run db:migrate to create the tables for surveys and survey questions in the DB.
  5. Run npm start. The GraphQL endpoint is available at http://localhost:3000/graphql
  6. In the GraphQL playground you can:
    1. Add a survey with questions

      mutation {
        addSurvey(
          title: "COVID-19 Response Survey",
          description: "Collecting information about COVID-19 Response",
          questions: [
            {question: "Can you provide feedback on this code challenge?", questionType: OpenText},
            {question: "How would you rate Canada's reaction to COVID-19?", questionType: Range, rangeFrom: 1, rangeTo: 5},
            {question: "How would you rate the USA's reaction to COVID-19?", questionType: Range, rangeFrom: 1, rangeTo: 5}
          ]
        ) { id, title, description, questions {id, surveyId, question, questionType} }
      }


    2. Add a survey without questions

      mutation {
        addSurvey(
          title: "COVID-19 Response Survey",
          description: "Collecting information about COVID-19 Response"
        ) { id, title, description }
      }

    3. Update survey with or without some questions

      mutation {
        updateSurvey(
          id: 23,
          title: "World wide COVID-19 Response Survey",
          questions: [{id: 23, question: "How would you rate Canada's response to COVID-19?", questionType: Range, rangeTo: 10}]
        ) { id, title, description, questions {id, surveyId, question, questionType} }
      }


    4. Add a question to an existing survey


      mutation {
        addQuestion(surveyId: 23, question: "How would you rate the UK response to COVID-19?", questionType: Range, rangeFrom: 1, rangeTo: 5)
        { id, surveyId, question, questionType }
      }



    5. Update a question


      mutation {
        updateQuestion(id: 23, surveyId: 23, question: "How would you rate the USA response to COVID-19?", questionType: Range, rangeFrom: 1, rangeTo: 10)
        { id, surveyId, question, questionType }
      }



    6. Query for a survey by id


      {
        survey(id: 6) { id, title, description, questions {id, surveyId, question, questionType} }
      }


    7. Query for a survey by criteria


      {
        surveys(search: {title: "COVID-19 Response Survey"}) { id, title, description, questions {id, surveyId, question, questionType} }
      }


    8. Query for questions by criteria


      {
        questions(search: {surveyId: 23}) { id, surveyId, question, questionType }
      }



GraphQL is the way any new green-field API should be built. Forget about REST with RAML or OpenAPI/Swagger. I used to say modern APIs should be built in REST, not SOAP, but GraphQL takes API design and delivery to new levels of robustness and reusability. No longer do you need URLs with v1.0, v2.0, etc.; one URL supports both the new and old queries. Yes, with GraphQL you still have to monitor the usage of any queries or mutations you deprecate until you can remove them completely, but it is easier to keep the new queries and operations backward compatible.

Node.js: very impressive. Within a few hours I was able to create a fully functioning API with a MySQL DB as a data store. JavaScript is JIT-compiled now, and hardware is much faster, so performance should not be an issue. Though I struggle with the fact that the code is not strongly typed.

Now the question is: would I drop C#/ASP.NET or Java/Spring Boot for Node.js as the implementation platform for new APIs? My gut feeling is no. With a strong background in C# and Java, I am not sure I would prefer to build the back-end server in JavaScript. Though I might be wrong; I will build the same sample in both C# and Java/Spring Boot and make a comparison.

Let me know which environment you prefer for developing an API, and why.

MuleSoft: Creating Salesforce Objects in Sequence

Figure 1: Wait till Bulk Job is finished

When creating parent and child objects in Salesforce with the Bulk API, after posting the parent objects for the "Upsert" operation, the call creating the child objects must wait until the parent objects' job has finished. Unfortunately, Salesforce does not provide any way to control the sequence of bulk job execution. The solution I came up with, shown in Figure 1, is to loop using a Try scope inside an Until Successful scope, polling Salesforce for the bulk job status; if the job is not complete, fail and try again.

  1. The flow starts by calling a static Java method (code below) to wait for a configurable period before trying to poll the status of the job.




package mbccrmgroupsysapi;

/**
 * @author moustafa.refaat
 */
public class Helpers {

    public static void waitDurationMS(long waitTime) {
        try {
            Thread.sleep(waitTime);
        } catch (InterruptedException e) {
            // Restore the interrupt flag so callers can react to it
            Thread.currentThread().interrupt();
        }
    }
}

  2. Set the payload to the jobInfo.

  3. Call the Get Bulk Info API.

  4. Based on the results, create a true/false result indicating whether the job completed.

  5. If the result is false, throw a custom error/exception.

  6. This makes the Until Successful scope keep trying until the number of trials is exhausted, waiting the specified period between retries.
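The wait-and-poll loop implemented by those Mule steps can be sketched in plain Java. Here `statusCheck` is a hypothetical stand-in for the Salesforce Get Bulk Info call, and `waitDurationMS` plays the same role as the helper method above:

```java
import java.util.function.Supplier;

public class BulkJobWaiter {

    /**
     * Polls the job status until it reports complete, mimicking the
     * Until Successful + Try combination in the Mule flow.
     */
    public static boolean waitUntilComplete(Supplier<Boolean> statusCheck,
                                            int maxTrials, long waitMs) {
        for (int trial = 0; trial < maxTrials; trial++) {
            waitDurationMS(waitMs);       // wait before each poll
            if (statusCheck.get()) {
                return true;              // job finished; child objects can be posted
            }
            // Not finished: the Mule flow throws here so Until Successful retries
        }
        return false;                     // trials exhausted
    }

    public static void waitDurationMS(long waitTime) {
        try {
            Thread.sleep(waitTime);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // Fake bulk job that reports complete on the third poll
        int[] polls = {0};
        boolean done = waitUntilComplete(() -> ++polls[0] >= 3, 5, 10);
        System.out.println(done); // prints true
    }
}
```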

I hope this was helpful. Let me know if you need any assistance.

Azure Integration: 1 – Azure Logic Apps an Introduction

Logic Apps are part of the Azure Integration collection: a set of Azure tools and frameworks that addresses all the integration needs of an enterprise. Logic Apps are used for core workflow processing. Service Bus is Microsoft's cloud-based queuing mechanism; it provides a durable queuing infrastructure that is highly scalable. Event Grid is Azure's event processing system, providing near real-time eventing for many events inside Azure. API Management is used to safeguard and secure both internal and external APIs. Azure Functions are lightweight, streamlined units of work that can be used to perform specific tasks. Using the on-premises data gateway, Logic Apps can seamlessly call into your on-premises systems. The on-premises data gateway supports many systems, including on-premises SQL Server, SharePoint, SAP, IBM DB2, and BizTalk Server; it can even be used to monitor a local filesystem. This blog addresses Logic Apps only.

Logic Apps Advantages

  • Logic Apps is a serverless platform, which means you don't have to worry about managing servers.
    • Auto-scaling: you simply deploy your resources and the framework ensures they get deployed to the correct environments. You also get auto-scaling based on demand. Unlike building out a SQL Azure database, where you must specify a resource size, when you need more resources for your Logic Apps they are automatically provisioned by the platform and your solution auto-scales to meet demand. Along with this comes high availability, which is just built into the platform.
    • Usage-based billing: you only pay for the resources that you use.
  • Easy to learn as Logic Apps provides a drag and drop developer experience.
  • 200+ different connectors for integration with PaaS and SaaS offerings and on-premises systems. This suite of connectors is constantly growing and expanding, including enterprise connectors for systems like IBM MQ and SAP, in addition to support for AS2, X12, and EDIFACT messaging.
  • Ability to create custom connectors around APIs, so you can provide a custom development experience for your developers.
  • Monitoring and logging right out of the box. It is built right into the framework and accessible via the Azure Portal.
  • Seamless integration with other Azure features. This provides a rapid development experience for integrating with service bus, Azure functions, custom APIs, and more.
  • It’s easy and extremely powerful.
  • Logic Apps are very good at connecting cloud-based systems and bridging connections from on-premises systems to the cloud and vice versa.

Design and Development of Logic Apps

You have two options for building your Logic Apps:

  • The web-based designer is a simple convenient option that hosts the designer right inside the Azure portal. This designer is feature-rich and allows you to author your Logic Apps very quickly. One of the big benefits of the web-based designer is it’s very easy to test your Logic Apps because they’re already sitting out there inside your Azure subscription. The web-based designer works great for building short Logic Apps or doing proof of concepts.
  • There is also a designer plug-in for Visual Studio. The Visual Studio plug-in is a great choice for authoring enterprise-grade Logic Apps; however, you will need to deploy them into an Azure subscription to run them.

There are a couple of different deployment options. Inside the web designer, you can clone your Logic Apps and move them to other resource groups, and even other Azure subscriptions. If you're using the Visual Studio designer, you can create an Azure Resource Manager (ARM) template. With the ARM template, you can then use PowerShell to deploy your Logic Apps into your Azure subscription, and it is very simple to deploy them into multiple Azure subscriptions. This makes it easy to roll out Logic Apps to multiple environments. Another benefit of ARM templates is that they can be checked into source control.


Connectors are pre-built, Microsoft-managed wrappers around APIs that greatly simplify interaction with those systems. Connections contain the environment-specific details a connector uses to connect to a data source, including destination and authentication details. Connections live at the resource group level and show up inside the Azure portal under the segment called API connections, where you will see all of the individual API connections you have created inside that resource group. Connections can be created using PowerShell, the web designer, or the Visual Studio designer. The connections you create for your connectors can be managed independently of your Logic Apps: you can go under API connections inside the Azure portal and edit connection information. You will also see your connections show up underneath your Logic App, and they can be shared across multiple Logic Apps.


Now let's talk about the different ways to start a new instance of a Logic App. Triggers are used to initiate a Logic App, and they can be broken down into two main types: reactive and proactive.

  • Recurrence triggers: you specify a time interval at which the Logic App will execute.
  • Polling-based triggers: you specify a polling interval at which the Logic App will wake up and look for new work to do, based on the connector you're using for the trigger.
  • Event-based triggers: the Logic App is triggered off events that happen inside Azure.
  • HTTP and webhook triggers: inbound HTTP requests can start your Logic Apps.
  • Custom API apps, which you can write and use as triggers for your Logic Apps as well.


Actions are the building blocks of Logic Apps, and most actions are also connectors. There are a few different ways to add a new action to a Logic App. The first is at the end of the Logic App: underneath all the other actions you will see a button to add a new step; click it and you can add a new action. If you are working with actions like the Scope or Condition action, you can add additional actions inside them via the add action button at the bottom of that action. The last way to add a new action is in between two existing actions: simply hover between them, and a plus sign will show up; click it and you can add an action between the two existing ones. Once you add a new action, the add action dialog box pops up, giving you the ability to search connectors and actions for what you're looking for. You can also narrow the list down by selecting All, Built-in, Connectors, Enterprise, or Custom. When you click on a connector, it lists all the actions supported by that connector; most connectors have more than one action. If you do not select a connector, you can simply select actions in the Actions selection menu and scroll through everything available. Since there are so many actions available, it is highly recommended to select the connector first and then browse the list of available actions for that connector.

Flow Control

Flow control is used to control the sequence of logic through a Logic App. All flow control actions are listed under control.

  • The if-then logic is very popular. You supply a condition; if it evaluates to true, certain logic executes, and if it evaluates to false, different logic executes. Inside the Logic App designer, you have a rich condition editor where you can add multiple parameters to a single condition. You can also switch to advanced mode and edit the condition in JSON if you need more control.
  • Switch allows you to choose on a value and have multiple cases based on that value. There’s also a default case if none of the matching cases evaluate to true.
  • For-each allows you to loop over an array. Any array from a preceding action is available for selection in your for-each; you simply select your array in the "select an output from a previous step" text box, and it will automatically loop over those elements. One thing to point out about the for-each inside a Logic App is that the default behavior is to run in parallel: by default, the Logic App executes 20 concurrent iterations over your array elements. To change this behavior, click on the ... on the right side and go to Settings, select override default, and change the degree of parallelism. Slide it all the way down to 1 if you want the loop to run in sequence; it supports up to a maximum of 50 concurrent executions.
  • Do until. This allows you to select a condition to evaluate and loop around a set of actions until that condition is true.
  • Scope actions can also be used inside a Logic App. The scope action allows you to group multiple actions together and then have evaluations done on the results of the group of actions as a whole. This can be useful if you want to ensure multiple steps are successful before you continue on in a Logic App. The scope shapes will return the results of every action inside that scope as an array.
  • Terminate allows you to end the execution of your Logic App based on conditions you define in your workflow. When you add the Terminate action to your Logic App, you can set the status to Failed, Cancelled, or Succeeded, and you can also set the error code and error message for when you terminate your Logic App.
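The for-each parallelism override described above corresponds to a small fragment in the workflow definition (code view). This is a sketch only: the action name and the source of the array are hypothetical, and the inner actions are omitted.

```json
"For_each_item": {
  "type": "Foreach",
  "foreach": "@body('Get_items')",
  "actions": {},
  "runtimeConfiguration": {
    "concurrency": {
      "repetitions": 1
    }
  }
}
```

Setting `repetitions` to 1 forces sequential execution, matching the designer's parallelism slider at its minimum.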

MuleSoft: Coding Standards

Establishing coding standards is essential for the successful implementation of a program. The smooth functioning of software is vital to the success of most organizations. Coding standards are a series of procedures for a specific programming language that determine the programming style, procedures, and methods for various aspects of programs written in that language. A coding standard ensures that all developers writing code in a language follow the specified guidelines, which makes the code easy to understand and provides consistency. The completed source code should read as if it had been written by a single developer in a single session. In the following sections I provide a sample coding standard for MuleSoft code. Please let me know your thoughts by email or through the comments.

Guiding principles

This section contains some general rules and guidance integration development should follow. Any deviations from the standard practices must be discussed with, and validated by, the technical lead on the project. This list is not intended to be exhaustive and may be supplemented during the life of the project:

    • Client first: The code must meet requirements. The solution must be cost-effective.
    • The code must be as readable as possible.
    • The code must be as simple as possible.
    • The code should reasonably isolate code that can be reused.
    • Use common design patterns where applicable.
    • Reuse a library instead of rolling your own solution to an already-solved problem.
    • Do ask if you are unsure of anything.
    • Do ensure that any modifications to the design or architecture are thought through, well designed, and conform to n-tier architecture design principles.
    • Do reach out to authors of work items if alternative approaches exist for a given requirement, or if you have any concerns about any assigned work items, e.g. missing acceptance criteria.
    • Do avoid duplication of code.
    • Do add any objects that have been modified to version control as soon as possible.
    • Do alert responsible team members as to any issues or defects that you may discover while executing unrelated work items.
    • Don’t add code to troubleshoot or rectify a defect in any environment other than the development environment.


  • All Mule elements supporting the "name" attribute for object reference should be camel case, starting with a lowercase letter.
  • For Mule elements that support the "Notes" section, write comments describing the purpose, functionality, etc., as you would write comments in a Java/Scala/C#/Python function or method definition.
  • Break up flows into separate flows or sub-flows, which:
    • Makes the graphical view more intuitive.
    • Makes the XML code easier to read.
    • Enables code reuse through the reuse of flows/sub-flows.
    • Provides separation between an interface and its implementation.
    • Makes flows easier to test.
  • Always define Error handling for all flows.
  • Encapsulate global elements in a configuration file.
  • Create multiple applications, separated by functionality.
  • If deployment is on-premises, use a domain project to share global configuration elements between applications, which helps you:
    • Keep consistency between applications.
    • Expose multiple services within the domain on the same port.
    • Share the connection to persistent storage.
    • Utilize the VM connector for communications between applications.
  • Use application properties to provide an easier way to manage configurations for different environments.
  • Create a YAML properties file named "{env}-config.yaml" in the "src/main/resources" folder.
  • Define metadata in "src/main/resources/application-types.xml" for all canonical schemas and all connectors that do not create the metadata automatically.
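As an illustration, a per-environment properties file of the kind recommended above might look like this. All keys and values here are hypothetical examples, not a MuleSoft-mandated layout:

```yaml
# dev-config.yaml -- hypothetical values for the dev environment
http:
  port: "8081"
salesforce:
  tokenUrl: "https://test.salesforce.com/services/oauth2/token"
  username: "integration.user@example.com.dev"
logging:
  level: "DEBUG"
```

Each environment (dev, test, prod) gets its own file, and the application selects one at deployment time via an `env` property, keeping environment differences out of the flow XML.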

MuleSoft Development Standards for Project Naming Convention

  • System API apps: {source}sapi
  • Process API apps: {process}papi
  • Experience API apps: {Web/Mobile/Machine}eapi
  • Integration apps: {sourceSystem}and{targetSystem}int (batch or scheduled integrations)

    Note that not all implementations will have all these types of projects.

MuleSoft Development Standards for Transformations/ DataWeave

  • Write comments.
  • Keep the code simple.
  • Provide sample data for various scenarios to test the transformation with.
  • Define a utility.dwl that stores common DataWeave functions, such as currency, time, and string conversions.
  • Store complex transformations in external DWL files in the src/main/resources folder, as they are performance intensive and external files keep them maintainable.
  • Use existing DataWeave libraries before writing your own DataWeave functions.
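
The external-DWL guidance above can be sketched as follows (the file name and location are hypothetical): the Transform Message component references the script by resource path instead of embedding it inline.

```xml
<!-- Transform Message component referencing an external script
     stored at src/main/resources/dwl/map-order.dwl -->
<ee:transform xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core">
  <ee:message>
    <ee:set-payload resource="dwl/map-order.dwl"/>
  </ee:message>
</ee:transform>
```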

MuleSoft Development Standards for Flows

  • Minimize flow complexity to improve performance.
  • Each variable defined in the flow is used by the process.
  • Transactions are used appropriately.
  • All calls to external components are wrapped in an exception-handling scope.
  • No DataWeave script contains an excessive amount of code that could instead be moved to an external component.
  • All loops have clearly defined exit conditions.
  • All variables are explicitly instantiated.
  • All flows have trace points inserted to enable debugging in later environments.
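
For instance, the external-call rule above can be sketched with a try scope (the connector operation, config name, and error type here are illustrative assumptions):

```xml
<!-- Wrap the external call so a failure is handled locally
     instead of killing the whole flow -->
<try>
  <http:request xmlns:http="http://www.mulesoft.org/schema/mule/http"
                method="GET" config-ref="Backend_Config" path="/status"/>
  <error-handler>
    <on-error-continue type="HTTP:CONNECTIVITY">
      <logger level="WARN" message="backend unreachable, continuing with defaults"/>
    </on-error-continue>
  </error-handler>
</try>
```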

MuleSoft Leading Practices for Deployment and Administration

Figure 1: Continuous Integration and Continuous Deployment

Utilize Anypoint Platform support for CI/CD using:

  • The Mule Maven plugin to automate the building, packaging, and deployment of Mule applications.
  • The MUnit Maven plugin to automate test execution.

MuleSoft Leading Practices for Testing

  • It is recommended to have the following test cycles before deployment into the production environment:
    • Unit testing the artifacts (build phase): build MUnit tests for all flows
    • End-to-end integration testing (SIT)
    • User acceptance testing (UAT)
    • Performance testing
  • Before deployment, the solution should have been successfully tested while running under user accounts with permissions identical to those in the production environment.
  • Messages are validated against their schemas as per the use-case requirements.
  • At a minimum, the developer should conduct development unit tests and end-to-end system integration tests for every interface before certifying the interface ready for release to the QA phase of testing.

MuleSoft: Designing Integration Applications Wisdom

In this blog I will go through best practices for designing integration applications: wisdom that I have garnered through projects, MuleSoft recommendations, reviews of MuleSoft projects, and discussions with MuleSoft specialists.


  • Connector retry / until-successful / retry APIs should be present for all connections and connectors. This is an obvious one: networks and the internet have occasional disconnections, so you should always retry a few times before giving up and abandoning the operation.
  • High-volume processes should be coupled with the MuleSoft batch framework and appropriate queuing mechanisms wherever necessary. This makes the processing faster and more reliable, but be cautious about which queuing infrastructure you use: VM queues are mostly in-memory, which might cause out-of-memory issues.
  • Exceptions should be logged to an agreed-upon location. Best, of course, is a ticketing system like ServiceNow, or regular logging with a log-monitoring system like Splunk collecting the logs and issuing warnings. Refrain from using email to send errors to support teams; things get messy with email, and tracking is sometimes lost.
  • Long-running processes should provide a way to inspect progress to date. Usually this is done by sending notifications through a webhook API or by pushing progress to the logs, but it is important to be able to see that, say, 60% of the data load has been processed so far.
  • Processes should be designed to be loosely coupled and to promote reuse where possible. Adopt microservices sensibly: not too small and not too large.
  • Adopt the MuleSoft API-led connectivity approach sensibly. Aha, this is a tricky and controversial one. Many novice developers/architects just follow the three-layer API-led pattern (System API, Process API, Experience API) religiously without thinking of the consequences. There are times when all three tiers are required; other times you need only two. For example, if the integration is a batch job that picks up files or records from a database and pushes them to Salesforce, then you only need a System API layer and an integration layer (no need for Experience or Process API layers). See below for a summary of the API-led connectivity approach.
    • System APIs should expose a canonical schema (project or domain scope) when there is an identified canonical schema at the project, domain, or organization scope. Do not just replicate the source system API while removing a few system-specific complexities. I have seen implementations where the developers simply replicated the source system API, removing only the source system's authentication. That meant spending 1-4 weeks to develop and test an API that does nothing but replace the source system's authentication with another authentication scheme for the System API. As a manager, or from the client side: why did we spend 4 weeks = 160 hrs at $200 per hour = $32K to develop something that does not add $32K worth of value and will cost us more to maintain in the future? The reason we use middleware like MuleSoft to implement integrations is to make it easy to replace systems and to reduce vendor dependencies. For example, suppose we are integrating Salesforce, SAP, Workday, and Shopify, and after, say, 2 years the corporation decides to replace SAP with Dynamics AX. If the System API for SAP simply exposed the SAP API with minor modifications for authentication, and the Dynamics AX System API does the same, then all the process and integration applications would have to be changed and recoded. This is the main reason Enterprise Service Buses got such a bad reputation: bad implementations. As I wrote in my book “BizTalk the Practical Course” (yes, I know this is MuleSoft, but the theory is the same; it is like Quick Sort in C#, Java, C++, Scala, or Python: you are still implementing the same algorithm with a different tool). Read the full discussion in the preview, page 35.

  • When creating a canonical schema, stick to the project/domain scope and do not try to create a generic canonical schema for the whole organization.
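
The retry guidance above can be sketched with Mule's until-successful scope (the outbound call, config name, and retry numbers are illustrative assumptions):

```xml
<!-- Retry a flaky outbound call up to 3 times, 5 seconds apart,
     before letting the error propagate to the flow's error handler -->
<until-successful maxRetries="3" millisBetweenRetries="5000">
  <http:request xmlns:http="http://www.mulesoft.org/schema/mule/http"
                method="POST" config-ref="Remote_System_Config" path="/orders"/>
</until-successful>
```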

I cannot stress this enough: while MuleSoft promotes the three-tier structure and application API network, it does not always make sense to use this approach in every situation. Strive to design the integration architecture to be:

  1. Easy to maintain
  2. As modular as possible
  3. Any components that can be reused should be isolated into their own library or application

The MuleSoft API-led connectivity approach

API-led connectivity is a methodical way to connect data to applications through a series of reusable and purposeful modern APIs that are each developed to play a specific role – unlock data from systems, compose data into processes, or deliver an experience. API-led connectivity provides an approach for connecting and exposing assets through APIs. As a result, these assets become discoverable through self-service without losing control.

  • System APIs: In the example, data from SAP, Salesforce and ecommerce systems is unlocked by putting APIs in front of them. These form a System API tier, which provides consistent, managed, and secure access to backend systems.
  • Process APIs: Then, one builds on the System APIs by combining and streamlining customer data from multiple sources into a “Customers” API (breaking down application silos). These Process APIs take core assets and combine them with some business logic to create a higher level of value. Importantly, these higher-level objects are now useful assets that can be further reused, as they are APIs themselves.
  • Experience APIs: Finally, an API is built that brings together the order status and history, delivering the data specifically needed by the Web app. These are Experience APIs that are designed specifically for consumption by a specific end-user app or device. These APIs allow app developers to quickly innovate on projects by consuming the underlying assets without having to know how the data got there. In fact, if anything changes in any of the systems or processes underneath, it may not require any changes to the app itself.

Defining the API data model

The APIs you have identified and started defining in RAML definitions exchange data representations of business concepts, mostly in JSON format. Examples are:

  • The JSON representation of the Policy Holder of a Motor Policy returned by the “Motor Policy Holder Search SAPI”
  • The XML representation of a Quote returned by the “Aggregator Quote Creation EAPI” to the Aggregator
  • The JSON representation of a Motor Quote to be created for a given Policy Holder passed to the “Motor Quote PAPI”
  • The JSON representation of any kind of Policy returned by the “Policy Search PAPI”

All data types that appear in an API (i.e., the interface) form the API data model of that API. The API data model should be specified in the RAML definition of the API. API data models are clearly visible across the application network because they form an important part of the interface contract for each API.

The API data model is conceptually clearly separate from similar models that may be used inside the API implementation, such as an object-oriented or functional domain model, and/or the persistent data model (database schema) used by the API implementation. Only the API data model is visible to API clients in particular and to the application network in general – all other forms of models are not. Consequently, only the API data model is the subject of this discussion.

Enterprise Data Model versus Bounded Context Data Models

The data types in the API data models of different APIs can be more or less coordinated:

  • In an Enterprise Data Model – often called a Canonical Data Model, but the discussion here uses the term Enterprise Data Model throughout – there is exactly one canonical definition of each data type, which is reused in all APIs that require that data type, within all of Acme Insurance
    • E.g., one definition of Policy that is used in APIs related to Motor Claims, Home Claims, Motor Underwriting, Home Underwriting, etc.
  • In a Bounded Context Data Model, several Bounded Contexts are identified within Acme Insurance by their usage of common terminology and concepts. Each Bounded Context then has its own, distinct set of data type definitions – the Bounded Context Data Model. The Bounded Context Data Models of separate Bounded Contexts are formally unrelated, although they may share some names. All APIs in a Bounded Context reuse the Bounded Context Data Model of that Bounded Context
    • E.g., the Motor Claims Bounded Context has a distinct definition of Policy that is formally unrelated to the definition of Policy in the Home Underwriting Bounded Context
  • In the extreme case, every API defines its own API data model. Put differently, every API is in a separate Bounded Context with its own Bounded Context Data Model.

Abstracting backend systems with System APIs

System APIs mediate between backend systems and Process APIs by unlocking data in these backend systems:

  • Should there be one System API per backend system or many?
  • How much of the intricacies of the backend system should be exposed in the System APIs in front of that backend system? In other words, how much to abstract from the backend system data model in the API data model of the System APIs in front of that backend system?

General guidance:

  • System APIs, like all APIs, should be defined at a granularity that makes business sense and adheres to the Single Responsibility Principle.
  • It is therefore very likely that any non-trivial backend system must be fronted by more than one System API
  • If an Enterprise Data Model is in use, then
    • the API data model of System APIs should make use of data types from that Enterprise Data Model
    • the corresponding API implementation should translate between these data types from the Enterprise Data Model and the native data model of the backend system
  • If no Enterprise Data Model is in use, then
    • each System API should be assigned to a Bounded Context, the API data model of System APIs should make use of data types from the corresponding Bounded Context Data Model
    • the corresponding API implementation should translate between these data types from the Bounded Context Data Model and the native data model of the backend system
    • In this scenario, the data types in the Bounded Context Data Model are defined purely in terms of their business characteristics and are typically not related to the native data model of the backend system. In other words, the translation effort may be significant
  • If no Enterprise Data Model is in use, and the definition of a clean Bounded Context Data Model is considered too much effort, then
    • the API data model of System APIs should make use of data types that approximately mirror those from the backend system:
      • same semantics and naming as the backend system
      • but only those data types that fit the functionality of the System API in question (backend systems are often Big Balls of Mud that cover many distinct Bounded Contexts)
      • lightly sanitized, e.g., using idiomatic JSON data types and naming, correcting misspellings, …
      • exposing all fields needed for the given System API’s functionality, but not significantly more
      • making good use of REST conventions

The latter approach, i.e., exposing in System APIs an API data model that basically mirrors that of the backend system, does not provide satisfactory isolation from backend systems through the System API tier on its own. In particular, it will typically not be possible to “swap out” a backend system without significantly changing all System APIs in front of that backend system – and therefore the API implementations of all Process APIs that depend on those System APIs! This is so because it is not desirable to prolong the life of a previous backend system’s data model in the form of the API data model of System APIs that now front a new backend system. The API data models of System APIs following this approach must therefore change when the backend system is replaced. On the other hand:

  • It is a very pragmatic approach that adds comparatively little overhead over accessing the backend system directly
  • Isolates API clients from intricacies of the backend system outside the data model (protocol, authentication, connection pooling, network address, …)
  • Allows the usual API policies to be applied to System APIs
  • Makes the API data model for interacting with the backend system explicit and visible, by exposing it in the RAML definitions of the System APIs
  • Further isolation from the backend system data model does occur in the API implementations of the Process API tier

MuleSoft Application Modularization

Mule allows you to run applications side-by-side in the same instance. Each Mule application should represent a coherent set of business or technical functions and, as such, should be coded, tested, built, released, versioned and deployed as a whole. Splitting particular functions into individual applications allows a coarse-grained approach to modularity and is useful when keeping elements of your application running while others could go through some maintenance operations. For optimum modularity:

  • Consider what functions are tightly interrelated and keep them together in the same Mule application: they will form sub-systems of your whole solution.
  • Establish communication channels between the different Mule applications: the VM transport is not an option here, as it cannot be used across different applications. Prefer the TCP or HTTP transports for synchronous channels and JMS for asynchronous ones.
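
A minimal sketch of such a channel, assuming two Mule applications deployed side by side and communicating over HTTP (the hosts, ports, paths, and config names are illustrative; namespace declarations are omitted for brevity):

```xml
<!-- Application A: exposes an HTTP endpoint for other applications -->
<http:listener-config name="App_A_Listener_Config">
  <http:listener-connection host="0.0.0.0" port="8091"/>
</http:listener-config>

<flow name="app-a-receive-flow">
  <http:listener config-ref="App_A_Listener_Config" path="/events"/>
  <logger level="INFO" message="received event: #[payload]"/>
</flow>

<!-- Application B: calls Application A over HTTP instead of the VM transport -->
<http:request-config name="App_B_Requester_Config">
  <http:request-connection host="localhost" port="8091"/>
</http:request-config>

<flow name="app-b-send-flow">
  <http:request config-ref="App_B_Requester_Config" method="POST" path="/events"/>
</flow>
```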

TOGAF Certification Series 7: TOGAF® 9 Certified ADM Phases E,F,G,H And Requirements Management

Chapter 9 Phase E: Opportunities & Solutions

  • The objectives of Phase E: Opportunities and Solutions are to:
    • Generate the initial complete version of the Architecture Roadmap, based upon the gap analysis and candidate Architecture Roadmap components from Phases B, C, and D
    • Determine whether an incremental approach is required, and if so identify Transition Architectures that will deliver continuous business value
  • Phase E is a collaborative effort with stakeholders required from both the business and IT sides. It should include both those that implement and those that operate the infrastructure. It should also include those responsible for strategic planning, especially for creating the Transition Architectures, if required.
  • Phase E consists of the following steps:
    • 1. Determine/confirm key corporate change attributes
    • 2. Determine business constraints for implementation
    • 3. Review and consolidate Gap Analysis results from Phases B to D
    • 4. Review consolidated requirements across related business functions
    • 5. Consolidate and reconcile interoperability requirements
    • 6. Refine and validate dependencies
    • 7. Confirm readiness and risk for business transformation
    • 8. Formulate Implementation and Migration Strategy
    • 9. Identify and group major work packages
    • 10. Identify Transition Architectures
    • 11. Create the Architecture Roadmap & Implementation and Migration Plan
  • The most significant issue to be addressed is business interoperability. Most SBBs or COTS products will have their own embedded business processes. Changing the embedded business processes will often require so much work that the advantages of re-using solutions are lost, with updates being costly and possibly requiring a complete rework. Furthermore, there may be a workflow aspect between multiple systems that has to be taken into account. The acquisition of COTS software has to be seen as a business decision that may require rework of the domain architectures. The enterprise architect will have to ensure that any change to the business interoperability requirements is signed off by the business architects and architecture sponsors in a revised Statement of Architecture Work.

Chapter 10 Phase F: Migration Planning

  • The objectives of Phase F: Migration Planning are to:
    • Finalize the Architecture Roadmap and the supporting Implementation and Migration Plan
    • Ensure that the Implementation and Migration Plan is coordinated with the enterprise’s approach to managing and implementing change in the enterprise’s overall change portfolio
    • Ensure that the business value and cost of work packages and Transition Architectures is understood by key stakeholders
  • Phase F consists of the following steps:
    • 1. Confirm management framework interactions for the Implementation and Migration Plan
    • 2. Assign a business value to each work package
    • 3. Estimate resource requirements, project timings, and availability/delivery vehicle
    • 4. Prioritize the migration projects through the conduct of a cost/benefit assessment and risk validation
    • 5. Confirm Architecture Roadmap and update Architecture Definition Document
    • 6. Complete the Implementation and Migration Plan
    • 7. Complete the architecture development cycle and document lessons learned
  • A technique to assess business value is to draw up a matrix based on a value index dimension and a risk index dimension. An example is shown in Figure 12. The value index should include criteria such as compliance to principles, financial contribution, strategic alignment, and competitive position. The risk index should include criteria such as size and complexity, technology, organizational capacity, and impact of a failure. Each criterion should be assigned an individual weight. The index and its criteria and weighting should be developed and approved by senior management. It is important to establish the decision-making criteria before the options are known.

Chapter 11 Phase G: Implementation Governance

  • The objectives of Phase G: Implementation Governance are to:
    • Ensure conformance with the Target Architecture by implementation projects
    • Perform appropriate Architecture Governance functions for the solution and any implementation-driven architecture Change Requests
  • The Architecture Contract produced in this phase features prominently in the area of Architecture Governance (see Chapter 22). It is often used as the means of driving change. In order to ensure that the Architecture Contract is effective and efficient, the following aspects of the governance framework should be introduced in this phase:
    • Simple process
    • People-centered authority
    • Strong communication
    • Timely responses and effective escalation process
    • Supporting organization structures
  • Phase G consists of the following steps:
    • Confirm scope and priorities for deployment with development management
    • Identify deployment resources and skills
    • Guide development of solutions deployment
    • Perform enterprise Architecture Compliance Reviews
    • Implement business and IT operations
    • Perform post-implementation review and close the implementation

Chapter 12 Phase H: Architecture Change Management

  • The objectives of Phase H: Architecture Change Management are to:
    • Ensure that the architecture lifecycle is maintained
    • Ensure that the Architecture Governance Framework is executed
    • Ensure that the enterprise Architecture Capability meets current requirements
  • Phase H consists of the following steps:
    • 1. Establish value realization process
    • 2. Deploy monitoring tools
    • 3. Manage risks
    • 4. Provide analysis for architecture change management
    • 5. Develop change requirements to meet performance targets
    • 6. Manage governance process
    • 7. Activate the process to implement change

Chapter 13 ADM Architecture Requirements Management

  • The objectives of the Requirements Management phase are to:
    • Ensure that the Requirements Management process is sustained and operates for all relevant ADM phases
    • Manage architecture requirements identified during any execution of the ADM cycle or a phase
    • Ensure that relevant architecture requirements are available for use by each phase as the phase is executed

TOGAF Certification Series 6: TOGAF® 9 Certified



Chapter 2 Preliminary Phase

  • The objectives of the Preliminary Phase are to:
    • Determine the Architecture Capability desired by the organization:
      • Review the organizational context for conducting enterprise architecture
      • Identify and scope the elements of the enterprise organizations affected by the Architecture Capability
      • Identify the established frameworks, methods, and processes that intersect with the Architecture Capability
      • Establish a Capability Maturity target
    • Establish the Architecture Capability:
      • Define and establish the Organizational Model for Enterprise Architecture
      • Define and establish the detailed process and resources for architecture governance
      • Select and implement tools that support the Architecture Capability
      • Define the architecture principles
  • An Architecture Framework is a tool for assisting in the acceptance, production, use, and maintenance of architectures


Chapter 3 Phase A: Architecture Vision

  • The objectives of Phase A: Architecture Vision are to:
    • Develop a high-level aspirational vision of the capabilities and business value to be delivered as a result of the proposed enterprise architecture
    • Obtain approval for a Statement of Architecture Work that defines a program of works to develop and deploy the architecture outlined in the Architecture Vision
  • Phase A consists of the following steps:
    • 1. Establish the architecture project
    • 2. Identify stakeholders, concerns, and business requirements
    • 3. Confirm and elaborate business goals, business drivers, and constraints
    • 4. Evaluate business capabilities
    • 5. Assess readiness for business transformation
    • 6. Define scope
    • 7. Confirm and elaborate architecture principles, including business principles
    • 8. Develop Architecture Vision
    • 9. Define the Target Architecture value propositions and KPIs
    • 10. Identify the business transformation risks and mitigation activities
    • 11. Develop Statement of Architecture Work; secure approval

  • The outputs of this phase are:
    • Statement of Architecture Work
    • Refined statements of business principles, business goals, and business drivers
    • Architecture principles
    • Capability assessment
    • Tailored Architecture Framework
    • Architecture Vision, including:
      • Problem description
      • Objective of the Statement of Architecture Work
      • Summary views
      • Business scenario (optional)
      • Refined key high-level stakeholder requirements
    • Draft Architecture Definition Document (see Section 4.5.1), including (when in scope):
      • Baseline Business Architecture (high-level)
      • Baseline Data Architecture (high-level)
      • Baseline Application Architecture (high-level)
      • Baseline Technology Architecture (high-level)
      • Target Business Architecture (high-level)
      • Target Data Architecture (high-level)
      • Target Application Architecture (high-level)
      • Target Technology Architecture (high-level)
    • Communications Plan
    • Additional content populating the Architecture Repository

Chapter 4 Phase B: Business Architecture

  • The objectives of Phase B: Business Architecture are to:
    • Develop the Target Business Architecture that describes how the enterprise needs to operate to achieve the business goals, and respond to the strategic drivers set out in the Architecture Vision, in a way that addresses the Request for Architecture Work and stakeholder concerns
    • Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Business Architectures
  • Phase B consists of the following steps:
    • 1. Select reference models, viewpoints, and tools
    • 2. Develop Baseline Business Architecture Description
    • 3. Develop Target Business Architecture Description
    • 4. Perform Gap Analysis
    • 5. Define candidate roadmap components
    • 6. Resolve impacts across the Architecture Landscape
    • 7. Conduct formal stakeholder review
    • 8. Finalize the Business Architecture
    • 9. Create the Architecture Definition Document


Chapter 5 Phase C: Information Systems Architectures

  • The objectives of Phase C: Information Systems Architectures are to:
    • Develop the Target Information Systems (Data and Application) Architectures, describing how the enterprise’s Information Systems Architecture will enable the Business Architecture and the Architecture Vision, in a way that addresses the Request for Architecture Work and stakeholder concerns
    • Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Information Systems (Data and Application) Architectures


Chapter 6 Phase C: Data Architecture

  • The objectives of the Data Architecture part of Phase C are to:
    • Develop the Target Data Architecture that enables the Business Architecture and the Architecture Vision, while addressing the Request for Architecture Work and stakeholder concerns
    • Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Data Architectures
  • Data Architecture consists of the following steps:
    • 1. Select reference models, viewpoints, and tools
    • 2. Develop Baseline Data Architecture Description
    • 3. Develop Target Data Architecture Description
    • 4. Perform Gap Analysis
    • 5. Define candidate roadmap components
    • 6. Resolve impacts across the Architecture Landscape
    • 7. Conduct formal stakeholder review
    • 8. Finalize the Data Architecture
    • 9. Create Architecture Definition Document

Chapter 7 Phase C: Application Architecture

  • The objectives of the Application Architecture part of Phase C are to:
    • Develop the Target Application Architecture that enables the Business Architecture and the Architecture Vision, while addressing the Request for Architecture Work and stakeholder concerns
    • Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Application Architectures
  • Phase C: Application Architecture consists of the following steps:
    • 1. Select reference models, viewpoints, and tools
    • 2. Develop Baseline Application Architecture Description
    • 3. Develop Target Application Architecture Description
    • 4. Perform Gap Analysis
    • 5. Define candidate roadmap components
    • 6. Resolve impacts across the Architecture Landscape
    • 7. Conduct formal stakeholder review
    • 8. Finalize the Application Architecture
    • 9. Create Architecture Definition Document

Chapter 8 Phase D: Technology Architecture

  • The objectives of Phase D: Technology Architecture are to:
    • Develop the Target Technology Architecture that enables the logical and physical application and data components and the Architecture Vision, addressing the Request for Architecture Work and stakeholder concerns
    • Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Technology Architectures
  • Phase D consists of the following steps:
    • 1. Select reference models, viewpoints, and tools
    • 2. Develop Baseline Technology Architecture Description
    • 3. Develop Target Technology Architecture Description
    • 4. Perform Gap Analysis
    • 5. Define candidate roadmap components
    • 6. Resolve impacts across the Architecture Landscape
    • 7. Conduct formal stakeholder review
    • 8. Finalize the Technology Architecture
    • 9. Create Architecture Definition Document
  • Components of the Architecture Definition Document: The topics that should be addressed in the Architecture Definition Document related to Technology Architecture are as follows:
    • Baseline Technology Architecture, if appropriate
    • Target Technology Architecture, including:
      • Technology components and their relationships to information systems
      • Technology platforms and their decomposition, showing the combinations of technology required to realize a particular technology “stack”
      • Environments and locations with a grouping of the required technology into computing environments (e.g., development, production)
      • Expected processing load and distribution of load across technology components
      • Physical (network) communications
      • Hardware and network specifications
    • Views corresponding to the selected viewpoints addressing key stakeholder concerns.

TOGAF Certification Series 5: Building Blocks

Chapter 11 Building Blocks

  • A building block is a package of functionality defined to meet business needs across an organization. A building block has published interfaces to access functionality. A building block may interoperate with other, possibly inter-dependent building blocks.
  • An architecture is a composition of:
    • A set of building blocks depicted in an architectural model
    • A specification of how those building blocks are connected to meet the overall requirements of an information system
  • Architecture Building Blocks (ABBs) are architecture documentation and models from the enterprise’s Architecture Repository classified according to the Architecture Continuum.
  • The characteristics of ABBs are as follows:
    • They define what functionality will be implemented.
    • They capture architecture requirements; e.g., Business, Data, Application, and Technology requirements.
    • They direct and guide the development of Solution Building Blocks

  • Building blocks are what you use; patterns can tell you how you use them, when, why, and what trade-offs you have to make in doing that. Patterns offer the promise of helping the architect to identify combinations of Architecture and/or Solution Building Blocks (ABBs/SBBs) that have been proven to deliver effective solutions in the past and may provide the basis for effective solutions in the future.
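To make the building-block idea above concrete, here is a minimal sketch, purely illustrative and not part of the TOGAF standard (the class names `CustomerLookup`, `InMemoryCustomerLookup`, and `BillingService` are hypothetical): one building block exposes a published interface, a concrete realization plays the role of a Solution Building Block, and a second block interoperates with it only through that interface.

```python
from abc import ABC, abstractmethod

class CustomerLookup(ABC):
    """Published interface of a hypothetical building block."""
    @abstractmethod
    def find_customer(self, customer_id: str) -> dict: ...

class InMemoryCustomerLookup(CustomerLookup):
    """One concrete realization (akin to a Solution Building Block)."""
    def __init__(self, records: dict):
        self._records = records

    def find_customer(self, customer_id: str) -> dict:
        return self._records[customer_id]

class BillingService:
    """A second building block; it depends only on the published
    CustomerLookup interface, never on a concrete realization."""
    def __init__(self, lookup: CustomerLookup):
        self._lookup = lookup

    def invoice_address(self, customer_id: str) -> str:
        return self._lookup.find_customer(customer_id)["address"]

lookup = InMemoryCustomerLookup({"c1": {"address": "1 Main St"}})
billing = BillingService(lookup)
print(billing.invoice_address("c1"))  # prints "1 Main St"
```

Because `BillingService` is wired to the interface rather than the implementation, the realization behind `CustomerLookup` can be swapped (a different SBB) without changing the consuming block, which is the interoperability property the definition describes.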

Chapter 12 ADM Deliverables

  • Architecture Building Blocks (ABBs): ABBs are architecture documentation and models from the enterprise’s Architecture Repository.
  • Architecture Contract: Architecture Contracts are the joint agreements between development partners and sponsors on the deliverables, quality, and fitness-for-purpose of an architecture. They are produced in Phase G: Architecture Governance. Successful implementation of these agreements will be delivered through effective Architecture Governance.
  • Architecture Definition Document: The Architecture Definition Document is the deliverable container for the core architectural artifacts created during a project and for important related information. The Architecture Definition Document spans all architecture domains (Business, Data, Application, and Technology) and also examines all relevant states of the architecture (baseline, transition, and target).
  • Architecture Definition Document versus Architecture Requirements Specification: The Architecture Definition Document is a companion to the Architecture Requirements Specification, with a complementary objective: The Architecture Definition Document provides a qualitative view of the solution and aims to communicate the intent of the architects. The Architecture Requirements Specification provides a quantitative view of the solution, stating measurable criteria that must be met during the implementation of the architecture.
  • Architecture Requirements Specification: The Architecture Requirements Specification provides a set of quantitative statements that outline what an implementation project must do in order to comply with the architecture. An Architecture Requirements Specification will typically form a major component of an implementation contract or a contract for more detailed Architecture Definition.
  • Architecture Roadmap: The Architecture Roadmap lists individual work packages that will realize the Target Architecture and lays them out on a timeline to show progression from the Baseline Architecture to the Target Architecture. The Architecture Roadmap highlights individual work packages’ business value at each stage. Transition Architectures necessary to effectively realize the Target Architecture are identified as intermediate steps. The Architecture Roadmap is incrementally developed throughout Phases E and F, and informed by the roadmap components developed in Phases B, C, and D.
  • The Architecture Vision is created in Phase A and provides a high-level summary of the changes to the enterprise that will follow from successful deployment of the Target Architecture.
  • Business principles, business goals, and business drivers provide context for architecture work, by describing the needs and ways of working employed by the enterprise. These will have usually been defined elsewhere in the enterprise prior to the architecture activity. Many factors that lie outside the consideration of architecture discipline may have significant implications for the way that architecture is developed.

Chapter 13 TOGAF Reference Models

  • Major characteristics of a Foundation Architecture include the following:
    • It reflects general computing requirements.
    • It reflects general building blocks.
    • It defines technology standards for implementing these building blocks.
    • It provides direction for products and services.
    • It reflects the function of a complete, robust computing environment that can be used as a foundation.
    • It provides open system standards, directions, and recommendations.
    • It reflects directions and strategies.
  • The TRM has two main components:
    • A taxonomy that defines terminology, and provides a coherent description of the components and conceptual structure of an information system
    • A model, with an associated TRM graphic, that provides a visual representation of the taxonomy, as an aid to understanding

  • The Integrated Information Infrastructure Reference Model: The III-RM is a reference model that focuses on the Application Software space, and is a “Common Systems Architecture” in Enterprise Continuum terms. The III-RM is a subset of the TOGAF TRM in terms of its overall scope, but it also expands certain parts of the TRM – in particular, the business applications and infrastructure applications parts – in order to provide help in addressing one of the key challenges facing the enterprise architect today: the need to design an integrated information infrastructure to enable Boundaryless Information Flow.

  • Boundaryless Information Flow:
    • A trademark of The Open Group.
    • A shorthand representation of “access to integrated information to support business process improvements”, representing a desired state of an enterprise’s infrastructure specific to the business needs of the organization. An infrastructure that provides Boundaryless Information Flow has open standard components that provide services in a customer’s extended enterprise that:
      • Combine multiple sources of information
      • Securely deliver the information whenever and wherever it is needed, in the right context for the people or systems using that information