
Codit Blog

Posted on Monday, June 4, 2018 5:24 PM

First day of Integrate 2018! THE yearly conference for Microsoft integration. This is the day 1 recap of the sessions presented at Integrate 2018 with the views and opinions of the Codit staff.

Codit is back in London for Integrate 2018! We are very proud to once again have the biggest delegation at Integrate, representing the biggest Microsoft integration company! This blog post was put together by each and every one of our colleagues attending Integrate 2018.

Keynote - The Microsoft Integration Platform - Jon Fancey

Jon Fancey, Principal Program Manager at Microsoft, opened the conference with the keynote session, focusing on change and disruption. He took us on a journey through time, explaining the different phases of technology change he experienced and how important it is to maintain a critical view on the technologies adopted within organizations. It's important to constantly keep questioning and assessing your methods and to adapt to new technology changes.

"If you don't disrupt, someone else will disrupt you. The other guy doesn't care if you like change or not." 

Jon called Matthew Fortunka to the stage. Matthew is the Head of Development for Car Buying at his company, and he presented a real-world example of how they leveraged Microsoft's Azure cloud to transform and optimize their business model, disrupting the way people buy cars and car insurance.

Next up - demo time for the Microsoft Team. In an extensive 4-part demo we were shown an end-to-end example of a backorder being processed for their imaginary Contoso Retail shop. By chaining most of the Microsoft Integration building blocks together they revealed some interesting new features to come:

  • Integration Service Environment: VNET integration for Logic Apps! This allows connecting Logic Apps to resources hosted on Azure VMs - without the use of the On-Premises Data Gateway.
  • SAP Trigger for Logic Apps: Allowing SAP messages to trigger a Logic App. The team confirmed this is an event-based trigger with no polling involved. Together with the existing SAP connector, the SAP trigger enables bi-directional communication between SAP and Logic Apps. It is available in private preview as of today.

Introduction to Logic Apps - Kevin Lam & Derek Li

The first ‘real’ session of the day was renamed to ‘Be an Integration Hero with Logic Apps’ by Kevin Lam and Derek Li of Microsoft.

Starting with an introduction to Logic Apps, they explained there are some 200 connectors available, and reviewed the basic building blocks used to create a Logic App, such as triggers & actions. A demo followed in which they showed how easy it is to build a Logic App, starting off with 'Hello World' and moving on to showing a weather forecast within a few clicks.
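For reference, a 'Hello World' Logic App of the kind shown in the demo boils down to very little workflow definition JSON. A sketch (the Request trigger and Response action are standard building blocks; the trigger name is illustrative):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http"
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "inputs": {
          "statusCode": 200,
          "body": "Hello World"
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  }
}
```

The designer generates exactly this kind of definition behind the scenes; the code view shows it.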

They then discussed the manageability of Logic Apps, covering Visual Studio support, B2B and security, followed by a brief explanation of rule-based alerts.

In a second scenario we were given a demonstration of how to build a Logic App for a more complex case: a .jpg of an invoice is received in blob storage, OCR is run on it, a Function is called to process the text, and a mail is sent when the total amount exceeds $10.

Finally, upcoming features for Logic Apps were detailed: China Cloud availability, mocking data for testing, OAuth request triggers, Managed Service Identity support, Key Vault support and output property obfuscation.

An interesting presentation with a focus on how Logic Apps simplify the development process and how quickly Logic Apps can be implemented.

Azure Functions role in integration workloads - Jeff Hollan

In the third session of the day, Jeff Hollan took us into the realm of using Azure Functions in integration workloads.

Jeff started off by explaining some key concepts in Azure Functions (Function App, Trigger, Bindings), and gave us an overview of where you can develop Azure Functions: everywhere! (Portal, Visual Studio, VS Code, IntelliJ, Notepad...). He gave a small demo on how to quickly create an Azure Function in Visual Studio. These applications can not only be written anywhere - they can also run anywhere.

The advantage of Azure is of course that instances auto-scale as needed, which was demonstrated by throwing 30,000 requests (1,000/sec for 30 seconds) at the Azure Function written in the previous demo.

To end, Jeff gave some Tips and Best Practices for Azure Functions, covering various domains:

Instance and resource management, how to properly instantiate shared resources (e.g. SqlConnection), and how to fine-tune the host.json file to control how many function executions run on each instance.
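For example, in the v1 Functions runtime, host.json exposes per-trigger concurrency knobs like these (values below are purely illustrative, not recommendations):

```json
{
  "http": {
    "maxConcurrentRequests": 100,
    "maxOutstandingRequests": 200
  },
  "queues": {
    "batchSize": 16,
    "newBatchThreshold": 8
  }
}
```

Lowering these limits per instance trades throughput for predictable resource usage on each host.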

To close, we were given some insights into Durable Functions: by building on the Durable Task Framework, Azure Functions now supports long-running processes. This makes it a great new feature that can replace Logic Apps in some scenarios and work alongside them in others!
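The idea behind Durable Functions can be sketched in plain Python: an orchestrator is written as a generator that yields activity calls, and a driver (a toy stand-in for the real Durable Task Framework, which also checkpoints and replays state) feeds results back in. All names here are hypothetical:

```python
def orchestrator(order_id):
    # Each yield hands control back to the framework, which executes the
    # activity and resumes the orchestrator with its result.
    reservation = yield ("reserve_stock", order_id)
    receipt = yield ("charge_customer", reservation)
    return ("shipped", receipt)

# Toy activity implementations; the real framework dispatches these to
# separate function executions and checkpoints progress in between.
ACTIVITIES = {
    "reserve_stock": lambda order_id: f"stock-for-{order_id}",
    "charge_customer": lambda reservation: f"paid-{reservation}",
}

def run(orchestration, arg):
    """Drive the generator to completion, simulating the framework."""
    gen = orchestration(arg)
    try:
        activity, activity_arg = next(gen)      # start the orchestrator
        while True:
            result = ACTIVITIES[activity](activity_arg)
            activity, activity_arg = gen.send(result)
    except StopIteration as done:
        return done.value

print(run(orchestrator, "42"))  # ('shipped', 'paid-stock-for-42')
```

The real framework persists each intermediate result, which is what makes the orchestration survive restarts and run for days.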

Hybrid integration with Legacy Systems - Paul Larsen & Valerie Robb

In this fourth session of the day, Paul Larsen & Valerie Robb took us on a journey through the upcoming features of BizTalk Server 2016.

It was announced that Cumulative Update 5 will be compliant with both the US government accessibility standard and GDPR, and will add compatibility with SQL Server 2016 SP2.

Feature Pack 3 will add support for advanced scheduling and Office 365 adapters:

  • Office 365 Outlook Email send and receive
  • Office 365 Outlook Calendar
  • Office 365 Outlook Contacts

API Management overview - Miao Jiang

In this session, Miao Jiang discussed the rise of APIs and the wide variety of use cases for them. Popular customer use cases he cited include enterprise API catalogs, customer and partner integration, mobile enablement and IoT.

API Management, Miao advised, can be used to decouple consumers from backend APIs, manage the API lifecycle, and monitor, measure and monetize APIs. Consumers, meanwhile, can use the developer portal to discover, learn about and try out APIs.

Besides this, he noted that VNETs or ExpressRoute deliver connectivity to on-premises APIs or APIs in another cloud environment, while policy documents are used to control the behavior of groups of APIs: rate limiting, caching, transformation and many more.
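Policies are XML documents applied at the gateway. A sketch of what rate limiting plus caching could look like (values are illustrative):

```xml
<policies>
  <inbound>
    <base />
    <!-- Allow each subscription at most 100 calls per minute -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- Serve repeated requests from cache when possible -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <!-- Keep responses in cache for 5 minutes -->
    <cache-store duration="300" />
  </outbound>
</policies>
```

Because policies apply to products, APIs or individual operations, the same mechanism scales from one endpoint to a whole catalog.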

In his demo we saw the usage of policy expressions and named values.

Announcements that were made include Application Insights integration, versions and revisions, capacity metrics, auto scale and Azure Key Vault integration. A very nice overview of API Management. Looking forward to the in-depth sessions that will follow!

Eventing, Serverless and the Extensible Enterprise - Clemens Vasters

Clemens Vasters, the lead architect at the Azure Messaging team, entertained us with an enlightening architectural talk about "Event Driven Applications."  In his talk he stressed that services should be autonomous entities with clear ownership. He focused on choosing the right service communication protocol for the job. He identified two types of data exchanges:  

  • Messaging: includes an intent/expectation by the message sender (e.g., commands, transfers). Azure Service Bus is well-suited for this type of data exchange.
  • Eventing: is mostly about reporting facts, telling what just happened.
    • Discrete events are independent and immediately actionable (e.g., a fire alarm). Azure Event Grid is a good match for this type of event.
    • Event series should be analyzed first before you can react to them (e.g., a temperature threshold). This analysis could be done through Azure Stream Analytics wired up to Event Hubs, which allows stateful and partitioned data consumption. Event Hubs now supports the Apache Kafka protocol.
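The distinction shows up in code too: a discrete event is actionable on its own, while an event series only becomes meaningful after windowed analysis. A toy rolling average in Python, standing in for what a Stream Analytics query would do over Event Hubs data (names and numbers are illustrative):

```python
from collections import deque

def window_alerts(readings, window_size, threshold):
    """Yield an alert only when the rolling average over the last
    `window_size` readings exceeds the threshold - reacting to the
    series as a whole, not to any individual reading."""
    window = deque(maxlen=window_size)
    for reading in readings:
        window.append(reading)
        if len(window) == window_size:
            average = sum(window) / window_size
            if average > threshold:
                yield average

temperatures = [20, 21, 35, 22, 30, 31, 32]
alerts = list(window_alerts(temperatures, window_size=3, threshold=28))
print(alerts)  # [29.0, 31.0]
```

Note that the single spike of 35 does not trigger an alert on its own; only sustained high readings do.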

Next, Clemens explained the efforts to standardize how cloud events should be described, through the CNCF CloudEvents specification. Many vendors have committed to this new standard, so fingers crossed for wide adoption. Event Grid recently added native support for CNCF CloudEvents.

The Reactive Cloud: Azure Event Grid (Eventing and streaming with Azure) - Dan Rosanova

The amount of data being processed in the cloud is increasing every day, and in his talk Dan focused on applying Azure Event Grid and Azure Event Hubs to deal with this growth. He presented the two technologies as great solutions, but which to use when depends on context.

After introducing Azure Event Grid, a platform for ingesting data from many providers, he mentioned that the product has been extended with features like hybrid endpoints, a dead-letter endpoint, and an option to limit the number of delivery retries.

The second part of his talk focused on Azure Event Hubs, including the possibility to use Azure Time Series Insights: a module for viewing, monitoring and searching in streams. A new addition is the ability to consume data from Kafka environments.

Dan concluded his talk by stating that Azure Event Hubs should be used for the fan-in of data, whereas Azure Event Grid is meant for fan-out purposes.

Enterprise Integration using Logic Apps - Divya Swarnkar / Jon Fancey

Jon Fancey and Divya Swarnkar took the stage with a session on Enterprise Integration with Logic Apps and walked us through the improvements made in the past year - and what's coming up next.

Following the earlier announcement that the SAP trigger for Logic Apps is in private preview, a short demo showed this new trigger in action. The trigger contains an optional field where the message type can be specified (via the namespace we know from integrating SAP with BizTalk) to make the trigger listen for a certain type. When left empty, the trigger will fire for every message sent to a specific SAP ProgramId.

The Logic Apps team will also supply us with an action capable of generating the SAP schemas, and is working on a way to store those schemas directly in the Integration Account.

Jon walked us through the improvements made to mapping, ranging from Liquid templates (announced last year) and custom assemblies to XSLT 3.0 support.

A demo of the OMS template for Logic Apps management showed us the bulk resubmit feature and tracked properties (that can now be configured in the designer instead of codeview only). These functions have been available for some time, but Divya did mention that they are working on a Bulk Download feature and a way to identify runs that have been resubmitted.

They concluded the session with an overview of what's coming next for enterprise integration. In particular, the announcement that the Integration Account will support a consumption-based pricing model in the future was met with a round of applause!

Microsoft Flow in the Enterprise - Kent Weare

In the last session of day 1, Kent Weare, Principal Program Manager for Microsoft Flow, presented Microsoft Flow. Flow is an offering in the Microsoft cloud that sits on top of Logic Apps. The difference with Logic Apps is that you have less control over the flow definition; for instance, you cannot access a code-behind page. Office 365 users have access to Flow, so definitely explore it yourself!

Flow is part of Microsoft's Business Application Platform (BAP). At the center of BAP you will find PowerApps and Power BI, targeted at power users. Flow and the Common Data Service support these applications for data in and out, or for putting a business process in place. On top of Flow, you can build applications for Dynamics or SharePoint (Flow is a replacement for SharePoint workflows), or build a standalone application.

With Flow, application creation is democratized for end users, filling the gap between packaged applications and the features missing from them. Furthermore, this democratization means less involvement from IT for applications that have little or no impact on critical IT processes.

In his session, Kent showed how to point-and-click a business process together in a designer and run it. He showed four demos for various scenarios - Flow is mostly THE automation tool around Office 365 applications:

  • Change and Incident Management using Teams and Flow Bot
  • Flow integration with Excel
  • Intelligent Customer Service
  • Hot dog or not hot dog

After the demos, Kent wrapped up his session by sharing the future roadmap of Flow.

Thank you for reading our blog post. Feel free to comment or give us feedback in person.

This blogpost was prepared by:

Bart Cocquyt
Charles Storm
Danny Buysse
Jasper Defesche
Jef Cools
Jonathan Gurevich
Keith Grima
Matthijs den Haan
Michel Pauwels
Niels van der Kaap
Nils Gruson
Peter Brouwer
Ronald Lokers
Steef-Jan Wiggers
Tim Lewis
Toon Vanhoutte
Wouter Seye

Posted on Wednesday, May 23, 2018 3:58 PM

by Toon Vanhoutte

What to do if a message is invalid? The default answer is usually to make corrections at the source system and resend the messages. But what if the source system cannot resend those messages in the right format? What if the source is a very important customer of yours and we're dealing with just-in-time orders? In some scenarios, you need to keep the business running and solve the issue asap on your integration layer. This is where edit-resubmit functionality plays a key role: correct the invalid message and reinject it into your integration engine.

I'm aware that this is a debatable feature and some organizations do not allow it. My opinion is that it can be tolerated under certain circumstances, in order not to block the business. However, there must always be an action to solve the problem at its root, otherwise you'll end up with an employee whose full-time occupation is editing and resubmitting messages!

There are some enhancements to the resubmit feature in the Logic Apps release pipeline, but the edit functionality has quite a low priority at the moment, which makes perfect sense in my opinion. In this blog post, I'll show you how you can set up an edit-resubmit process for invalid messages, using standard Logic Apps actions.


In this scenario, we're receiving orders from an FTP server. Each time a new order is placed on the FTP server, it must be validated as a first step. In case the order is valid, it can be sent towards the ERP system. Otherwise, there should be human intervention to modify the message and resubmit it into the order process.


First of all, we need a message store where business users can modify messages and mark them for resubmit. A SharePoint document library seems a perfect fit for this requirement: it's a well-known environment for business users, there's the opportunity to modify documents online, and by adding a custom column to the document library we can mark which messages must be resubmitted. Logic Apps has first-class integration with SharePoint, so we're safe!

Second, we need a design that allows us to resubmit messages into the order process. This is done by splitting the process into three separate Logic Apps:

  1. Receive the order from FTP. This decouples the receive protocol, allowing other protocols to be added in the future.
  2. Process the order, which in our case means validating the order against its schema.
  3. Send the order to the ERP system. Again, this decouples protocol handling from message processing.

In case the message turns out to be invalid, it gets sent to the SharePoint document library. A business user modifies the message and sets the Resubmit column to Yes. Another Logic App is polling the document library, waiting for messages that have Resubmit set to Yes. If there is such a message, it gets received and deleted from the document library, and it's sent again to the Logic App that validates the message.
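The flow described above can be sketched in a few lines of Python, with plain lists standing in for the ERP endpoint and the SharePoint library (all names and the required-field check are hypothetical, for illustration only):

```python
REQUIRED_FIELDS = {"orderId", "customer", "quantity"}

def validate(order: dict) -> bool:
    """Stand-in for the schema validation step in the process Logic App."""
    return REQUIRED_FIELDS <= order.keys()

def process(order: dict, erp: list, sharepoint: list) -> None:
    """Valid orders go to the ERP; invalid ones land in the document
    library where a business user can correct them."""
    if validate(order):
        erp.append(order)
    else:
        sharepoint.append({"document": order, "resubmit": False})

def poll_resubmits(sharepoint: list, erp: list) -> None:
    """Stand-in for the polling Logic App: pick up documents marked
    Resubmit = Yes, delete them from the library, and validate again."""
    for item in [i for i in sharepoint if i["resubmit"]]:
        sharepoint.remove(item)
        process(item["document"], erp, sharepoint)

erp, sharepoint = [], []
process({"orderId": 1, "customer": "Acme"}, erp, sharepoint)  # invalid: no quantity
sharepoint[0]["document"]["quantity"] = 5                     # business user fixes it
sharepoint[0]["resubmit"] = True                              # ...and flags it
poll_resubmits(sharepoint, erp)
print(erp)  # [{'orderId': 1, 'customer': 'Acme', 'quantity': 5}]
```

A message that is still invalid after editing simply lands back in the library, ready for another correction round.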



The Validate Process

The Validate Logic App looks like this. The message is received by the Request trigger. Then, a Validate JSON action performs the message validation; this could also be done by the XML Validation action from the Integration Account. If validation succeeds, the Send to ERP Logic App is invoked. If validation fails, the message gets uploaded to the SharePoint document library via the Create File action, after which the Logic App run is terminated with a Success status.


The Create File action looks like this:


It's configured with the following Run After setting:
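In code view, that setting corresponds to the runAfter property of the action. A sketch (action names are illustrative):

```json
"Create_file": {
  "type": "ApiConnection",
  "inputs": { },
  "runAfter": {
    "Validate_order": [ "Failed" ]
  }
}
```

Because the action runs only when validation fails, the happy path skips it entirely.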


The Document Library

The document library looks as follows. Invalid messages get uploaded here and can be modified by functional key users. By changing the Resubmit column to Yes, the message gets resubmitted!

The Resubmit Process

This process is pretty straightforward. A polling trigger fires when a file is created or modified in the document library. If the file has the Resubmit value set to Yes, its content is received and sent back to the Validate Logic App. After this succeeds, the file gets deleted from the document library.



By combining SharePoint and its easy-to-use Logic App connectors, we can easily enrich our integrations with human intervention! This edit/resubmit scenario is just one use case. Think also about approval processes, or tasks that must be completed before an automated process should kick in…


Categories: Azure
written by: Toon Vanhoutte

Posted on Friday, May 18, 2018 1:59 PM

by Tom Kerkhove

GDPR mandates that you make data available to users on their request. In this post I show you how you can use Azure Serverless to achieve this with very little effort.

GDPR is around the corner, mandating that every company serving European customers needs to comply with a lot of additional rules, such as being transparent about what data is stored, how it is being processed, and more.

One of the most interesting requirements is that users must be able to request what data is being stored about them, and that all of it must be made available to them. A great example is how Google allows you to select the data you want and download it; try it here.

Being inspired by this, I decided to build a similar flow running on Azure and show how easy it is to achieve this.

Consolidating user data with Azure Serverless

In this sample, I'm using a fictitious company called Themis Inc. which provides a web application where users can sign up, create a profile and do awesome things. That application is powered by a big data set of survey information, which is analyzed to see if the company can deliver targeted ads to specific users.

Unfortunately, this means that the company is storing Personal Identifiable Information (PII) for the user profile and the survey results for that user. Both of these datasets need to be consolidated and provided as a download to the user.

For the sake of this sample, we are actually using the StackExchange data set and the web app simply allows me to request all my stored information.

This is a perfect fit for Azure Serverless, where we will combine Azure Data Factory, the unsung serverless hero, with Azure Logic Apps, Azure Event Grid and Azure Data Lake Analytics.

How it all fits together

If we look at the consolidation process, it actually consists of three steps:

  1. Triggering the data consolidation and send an email to the customer that we are working on it
  2. Consolidating, compressing and making the data available for download
  3. Sending an email to the customer with a link to the data

Here is an overview of how all the pieces fit together:

Azure Logic Apps is a great way to orchestrate the steps that make up your application. Because of this, we are using a Logic App that is in charge of handling new data consolidation requests submitted by customers in the web app. It will trigger the Data Factory pipeline that is in charge of preparing all the data. After that, it will get basic profile information about the user by calling the Users API and send out an email that the process has started.

The core of this flow is being managed by an Azure Data Factory pipeline which is great to orchestrate one or more data operations that represent a business process. In our case, it will get all the user information from our Azure SQL DB and get all data, related to that specific user, in our big data set that is stored on Azure Data Lake Store. Both data sets are being moved to a container in Azure Blob Storage and compressed after which a new Azure Event Grid event is being published with a link to the data.

To consolidate all the user information from our big data set we are using U-SQL because it allows me to write a very small script and submit this, while Azure Data Lake Analytics runs and looks through your data. This is where Data Lake Analytics shines because you don't need to be a big data expert to use it, it does all the heavy lifting for you by determining how it needs to execute it, scale it, and so on.
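Conceptually, the U-SQL script does little more than filter the full extract down to one user's rows. A toy Python equivalent (column names are made up for illustration; the real script runs at data-lake scale):

```python
import csv
import io

SURVEY_EXTRACT = """user_id,question,answer
1,q1,yes
2,q1,no
1,q2,maybe
"""

def rows_for_user(raw_csv: str, user_id: str) -> list:
    """Select only the rows belonging to the requesting user, which is
    what the U-SQL job does before the data is moved to Blob Storage."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [row for row in reader if row["user_id"] == user_id]

mine = rows_for_user(SURVEY_EXTRACT, "1")
print(len(mine))  # 2
```

The value of Data Lake Analytics is that you express only this selection logic, and the service decides how to parallelize and scale the execution.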

Last but not least, a second Logic App is subscribing to our custom Event Grid topic and sends out emails to customers with a link to their data.

By using Azure Event Grid topics, we remove the responsibility of the pipeline to know who should act on its outcome and to trigger it. It also makes our architecture flexible by providing extension points that other processes can use to integrate with it, in case we need to make the process more complex.

This is not the end

Users can now download their stored data, great! But there is more...

Use an API Gateway

The URLs that are currently exposed by our Logic Apps & Data Factory pipelines are generated by Azure and are tightly coupled to those resources.

As the cloud is constantly changing, this can become a problem when you decide to use another service, or when somebody simply deletes a resource and you need to recreate it, giving it a new URL. Azure API Management is a great service here: it basically shields the backend process from the consumer and acts as an API gateway. This means that if your backend changes, you don't need to update all your consumers; simply update the gateway instead.

Azure Data Factory pipelines can be triggered via HTTP calls through a REST API - great! The downside is that the API is secured via Azure AD, which brings some overhead in certain scenarios. Using Azure API Management, you can shield this from your consumers by using an API key and leave the AD authentication to the API gateway.

User Deletion

GDPR mandates that every platform needs to give users the capability to delete all their data on request. To achieve this, a similar approach can be used, or the current process can even be refactored so that certain components, such as the Logic Apps, are re-used.


Azure Serverless is a great way to focus on what we need to achieve and not worry about the underlying infrastructure. Another big benefit is that we only pay for what we use. Given that this flow will be used only sporadically, this is perfect: we don't want to set up infrastructure that needs to be maintained and hosted if it is only used once a month.

Azure Event Grid makes it easy to decouple our processes during this flow and provide more extension points where there is a need for this.

Personally, I am a fan of Azure Data Factory because it makes it so easy for me as a developer to automate data processes, and it comes with the complete package - code & visual editor, built-in monitoring, etc.

Last but not least, this is a wonderful example of how you can combine Azure Logic Apps & Azure Data Factory to build automated workflows. While at first they can seem to be competitors, they are actually a perfect match - one focuses on application orchestration while the other does data orchestration. You can read more about this here.

Want to see this in action? Attend my "Next Generation of Data Integration with Azure Data Factory" talk at Intelligent Cloud Conference on 29th of May.

In a later post, we will go more into detail on how we can use these components to build this automated flow. Curious to see the details already? Everything will be available on GitHub.

Thanks for reading,


Posted on Thursday, May 17, 2018 12:00 AM

by Frederik Gheysels

This article will guide you through the process of exposing the debug-information for your project using a symbol server on VSTS with private build agents.

We'll focus on exposing the symbols using IIS while pointing out some caveats along the way.


I believe that we've all experienced the situation where you're debugging an application and would like to step into the code of a dependent assembly that has been written by you or another team in your company but is not part of the current code repository.
Exposing the debug-information of that assembly via a symbol server allows you to do that.

While setting up a symbol-server and indexing pdb files was quite a hassle 10 years ago, it currently is a piece of cake when you use VSTS for your automated builds.

Build definition

To enable the possibility of stepping into the code of your project, the debug-symbols of that assembly must be exposed to the public.

This is done by adding the Index sources & Publish symbols task to your VSTS build definition:
This task will in fact add some extra information to the pdb files that are created during the build process. 

Additional information, such as where the source files can be found and what version of the sources were used during the build will be added to the pdb files.

After that, the pdb files will be published via a Symbol Server.

Once this task has been added, it still needs some simple configuration:

Since VSTS is now also a symbol server, the easiest way to publish your symbols is to select Symbol Server in this account/collection.  
When this option is selected, you should be good to go and don't have to worry about the remainder of this article.

However, since some projects are configured with private build agents, I want to explore the File share Symbol Server type in this article.

Select File Share as the Symbol Server type and specify the path to the location where the debug-symbols must be stored.

See the image below for an example:

When selecting this option, you'll publish the symbols to a file share, which means that you'll need access to the build server. This implies that a private build agent must be used, running on a server that is under your (or your organization's) control.
Note that the path must be a UNC path and may not end with a backslash, otherwise the task will fail.
This means that the folder that will ultimately contain the symbol files must be shared and needs the correct permissions.
Make sure that the user under which the build runs has sufficient rights to write and modify files on that share. Granting the VSTS_AgentService group or the Network Service group Modify rights on that directory should suffice.

At this point, you can trigger the build and verify if the Index sources & Publish symbols task succeeded.

If it succeeded, you should see that some directories are created in the location where the symbols should be stored and you should find pdb files inside those directories.

If nothing has been added to the folder, you should inspect the logs and see what went wrong.

Maybe no *.pdb files have been found, possibly because the path to the build-output folder is incorrect.

It's also possible that *.pdb files have been found but cannot be indexed. This is common when publishing symbols for projects that target .NET Core or .NET Standard. In those cases, you might find a warning in the log of the Index & Publish task that looks like this:
Skipping: somefile.pdb because it is a Portable PDB

It seems that the Index sources & Publish symbols task does not support portable PDB files. To expose debug information for these assemblies, SourceLink must be used, but that is beyond the scope of this article.

There is a quick workaround however: change the build settings of your project and specify that the debug information should not be portable but must be Full or Pdb only. This can be specified in the Advanced Build settings of your project in Visual Studio.
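In an SDK-style project file, the same choice looks like this (a sketch):

```xml
<PropertyGroup>
  <!-- 'full' or 'pdbonly' produces classic Windows PDB files that the
       Index sources & Publish symbols task can handle -->
  <DebugType>full</DebugType>
</PropertyGroup>
```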

This workaround enables the symbols to be indexed, but using them while debugging will only be possible on Windows, which somewhat defeats the purpose of having a .NET Core assembly.

Exposing the debug symbols via HTTP

Now that the debug symbols are there, they should be exposed so that users of your assembly / package can make use of them.

One way to do this, is serving the symbols via a webserver.

To do this, install and configure IIS on the server where your build agent runs.

Create a Virtual Directory

This step is fairly simple: In IIS Manager, just create a virtual directory for the folder that contains the debug symbols:

Configure MIME type for the pdb files

IIS will refuse to serve files with an unknown MIME type. Therefore, you'll have to specify the MIME type for the *.pdb files. If you fail to do so, IIS will return an HTTP 404 status code (Not Found) when a pdb file is requested.

To configure the MIME type for *.pdb files, open IIS Manager, open the MIME Types section, and add a new MIME type for the .pdb extension:
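Alternatively, the mapping can be added in the site's web.config (a sketch; application/octet-stream is a safe choice for binary symbol files):

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".pdb" mimeType="application/octet-stream" />
    </staticContent>
  </system.webServer>
</configuration>
```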


Depending on who should have access to the debug symbols, the correct authentication method has to be set up.

If anyone may download the debug symbols, then IIS must be configured to use Anonymous Authentication.

To enable Anonymous Authentication, open the Authentication pane in IIS and enable Anonymous Authentication. If the Anonymous Authentication option is not listed, then use Turn Windows feature on and off to enable it.

Having access to the debugging information does not imply that everybody also has access to the source code, as we'll see later in the article.

Configure Visual Studio to access the symbol server

Now that the debug information is available, the only thing left to do is enable Visual Studio to use those symbols.

To do this, open the Debug Options in Visual Studio and check the Enable source server support option in the General section.
You might also want to uncheck the Enable Just My Code option to avoid having to load the symbol files manually via the Modules window in Visual Studio:
Next to that, Visual Studio also needs to know where the symbols that are exposed can be found. This is done by adding the URL that exposes your symbols as a symbol location in Visual Studio:

Now, everything should be in place to be able to debug through the source of an external library, as we'll see in the next section.

In Action

When everything is setup correctly, you should now be able to step through the code of an external library.

As an example, I have a little program called AgeCalculator that uses a simple NuGet package AgeUtils.Lib for which I have exposed its symbols:

While debugging the program, you can see in the Modules window of Visual Studio that symbols for the external dll AgeUtils.Lib have been loaded.  This means that Visual Studio has found the pdb file that matches the version of the AgeUtils.Lib assembly that is currently in use.

When a line of code is encountered where functionality from the NuGet package is called, you can just step into it.
As can be seen in the Output Window, Visual Studio attempts to download the correct version of the Age.cs source code file from the source-repository. 

The debugger knows how this file is named, which version is required and where it can be found since all information is present in the pdb file that it has downloaded from the symbol server!

When the debugger attempts to retrieve the correct code-file, you'll need to enter some credentials.  Once this is done, the source-file is downloaded and you'll be able to step through it:

Now, you'll be able to find out why that external library isn't working as expected! :)

Happy debugging!

Categories: Technology
Tags: Debugging
written by: Frederik Gheysels

Posted on Friday, April 13, 2018 12:25 PM

Tom Kerkhove by Tom Kerkhove

Azure API Management released a new version that changes the OpenAPI interpretation. This article dives into the potential impact on the consumer experience of your APIs.

Providing clean and well-documented APIs is a must. This allows your consumers to know what capabilities you provide, what they are for and what to expect.

This is where the OpenAPI specification, aka Swagger, comes in: it standardizes how APIs are described across the industry, regardless of the technology underneath them.

Recently, the Azure API Management team started releasing a new version of the product with some new features and some important changes in how they interpret the OpenAPI specification while importing/exporting them.

Before we dive into the changes to the OpenAPI interpretation, I'd like to highlight that they've also added the capability to display the id of a specific operation. In the past, you had to use the old Publisher portal for this, but now you can find it via API > Operation > Frontend.

Next to that, as of last Sunday, the old Publisher portal is fully gone, except for the analytics part.

OpenAPI Interpretation

The latest version also changes the way OpenAPI specifications are interpreted: they are now fully based on the operation object as defined by the OpenAPI specification.

Here are the changes in a nutshell:

  • Id of the operation - The operation id is based on operation.operationId; otherwise it is generated in a format similar to get-foo
  • Name of the operation - The display name is based on operation.summary; otherwise it uses operation.operationId. If neither is specified, a name similar to Get - /foo is generated
  • Description of the operation - The description is based on operation.description

I like this change because it makes sense; however, it can be a breaking change in your API documentation, depending on how you produced that documentation in the past.

The reason for this is that before rolling out this change the interpretation was different:

  • The id of the operation was a generated id
  • The name of the operation was based on operation.operationId
  • The description of the operation was based on operation.description, falling back on operation.summary

How I did it in the past

For all the projects I work on, I use Swashbuckle because it's very easy to set up and use, and it ties into the standard XML documentation.

Here is an example of the documentation I provide for my health endpoint for Sello, which I use for demos.
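The original screenshot is not reproduced here, but a documented health endpoint along these lines illustrates the idea (the method name, route and response codes are illustrative assumptions, not the actual Sello code):

```csharp
/// <summary>
/// Gets the current health status of the API, indicating whether
/// all of its dependencies are up and running.
/// </summary>
/// <response code="200">API is healthy</response>
/// <response code="503">API is unhealthy or in degraded state</response>
[HttpGet]
[Route("api/v1/health")]
public IActionResult GetHealth()
{
    return Ok();
}
```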

As you can see, everything is right there: the operation documentation specifies what the operation is called, gives a brief summary of what it does, and tells consumers what to expect as responses.

The OpenAPI specification that is generated will look like this:
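The generated file itself is not included here; assuming the endpoint sketched above, Swashbuckle would emit a fragment along these lines, with the XML summary mapped to operation.summary and no separate description (the operationId shown is an assumed autogenerated value):

```json
{
  "paths": {
    "/api/v1/health": {
      "get": {
        "operationId": "Health_Get",
        "summary": "Gets the current health status of the API, indicating whether all of its dependencies are up and running.",
        "responses": {
          "200": { "description": "API is healthy" },
          "503": { "description": "API is unhealthy or in degraded state" }
        }
      }
    }
  }
}
```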

Once this is imported into Azure API Management the developer experience was similar to this:

However, this approach is no longer what I'd like to offer to my consumers, because if you import it after the new version has rolled out, it looks like this:

How I'm doing it today

Aligning with the latest interpretation was fairly easy, to be honest: instead of providing a description of what the operation does via summary, I started using remarks.

Next to that, I'm now using summary to give the operation a friendly name, and I assign a better operationId via SwaggerOperation.

This is how it looks in code:
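The code screenshot is not included here; a hedged sketch of the revised endpoint could look as follows (it assumes the Swashbuckle.AspNetCore flavor of the SwaggerOperation attribute and the same hypothetical names as before):

```csharp
/// <summary>Get Health</summary>
/// <remarks>
/// Provides an indication about the health of the API and its dependencies.
/// </remarks>
/// <response code="200">API is healthy</response>
/// <response code="503">API is unhealthy or in degraded state</response>
[HttpGet]
[Route("api/v1/health")]
[SwaggerOperation(OperationId = "Health_Get")]
public IActionResult GetHealth()
{
    return Ok();
}
```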

The new OpenAPI specification is compatible with the recent changes and will look like this:
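Again as a sketch under the same assumptions: summary now carries the friendly display name, the XML remarks end up in description, and the operationId is the one assigned via SwaggerOperation:

```json
{
  "paths": {
    "/api/v1/health": {
      "get": {
        "operationId": "Health_Get",
        "summary": "Get Health",
        "description": "Provides an indication about the health of the API and its dependencies.",
        "responses": {
          "200": { "description": "API is healthy" },
          "503": { "description": "API is unhealthy or in degraded state" }
        }
      }
    }
  }
}
```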

Once this is imported the developer experience is maintained and looks similar to this:

When you go to the details of the new operation in the Azure portal, you will see that all our information is successfully imported:


Azure API Management rolled out a change to its OpenAPI interpretation to provide more flexibility, so you can define the operation id to use and align with the general specification.

This change is great, but it might have an impact on your current API documentation, similar to what I've experienced. With the changes above, you are good to go, and your consumers will not even notice it.

Thanks for reading,