
Codit Blog

Posted on Monday, February 20, 2017 3:41 PM

by Toon Vanhoutte

Lately, I tried to connect to a Service Bus queue with limited permissions (Listen only). I encountered an issue that I want to share with you, so it can save you some time!


When you manage a Service Bus namespace, it's important to think about security. The recommended way to deal with it is by leveraging its Shared Access Signature (SAS) authentication and authorization mechanism. You are able to configure SAS policies on your complete Service Bus namespace or on individual queues and topics. Use whatever best meets your needs!

On the 'coditblog' queue, I created a ReadOnly shared access policy that only contains the Listen claim.  This policy was intended to be used by a Logic App that only needs to read messages from the queue. 

After creating the policy, I copied the primary connection string.

Then I created a Logic App from scratch, by adding the Service Bus trigger 'When a message is received in a queue (auto-complete)'. A connection was created by providing a connection name and the copied connection string.

When trying to select the queue name, I got the following exception:
Could not retrieve values. ConnectionString should not include EntityPath.

I double checked my connection string several times and tried multiple variations, but without any success. After some investigation, it turned out that the connector requires the Manage claim to navigate through the list of available queues. A misleading exception message...
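For reference, this is roughly what the two connection string flavours look like. A connection string copied from a queue-level policy typically ends with an EntityPath segment, while a namespace-level one does not (all values below are placeholders, not the actual namespace or key):

```json
{
  "namespaceLevelConnectionString": "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=ReadOnly;SharedAccessKey=<key>",
  "queueLevelConnectionString": "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=ReadOnly;SharedAccessKey=<key>;EntityPath=coditblog"
}
```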

Luckily we are not blocked by this! Just choose 'Enter custom value', type the queue name and you're good to go!

The Logic App starts successfully when a message arrives on the queue!

Hope this can save you some troubleshooting time!


Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Thursday, February 16, 2017 5:20 PM

by Toon Vanhoutte

This blog post covers some aspects of exception handling when you expose nested workflows as a web service. When dealing with web services, it's important to ensure that no timeouts are triggered and that meaningful exception messages are returned to the client. Let's have a look at how this can be done.


As a starting point, I create Logic App 1 that is exposed as a web service and which invokes Logic App 2 in a synchronous way. The second Logic App just puts a request on a Service Bus queue. Once that action is completed, a response is returned and the first Logic App can return a success to the consuming application.

Configure retry policy

To avoid timeouts caused by automatic retries, I update the Call Logic App 2 action so it will not perform a retry. This is done in the code view. Depending on your use case, you could also limit the number of retries. The minimal time interval between retries is 20 seconds, which is also the default value. This is quite high for a web service scenario, as most clients time out after 60 seconds.
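As a sketch, assuming the nested workflow action is named 'Call_Logic_App_2', disabling retries in the code view looks roughly like this (the retryPolicy element lives inside the action's inputs; the resource id is a placeholder):

```json
"Call_Logic_App_2": {
  "type": "Workflow",
  "inputs": {
    "host": {
      "triggerName": "manual",
      "workflow": { "id": "<resource id of Logic App 2>" }
    },
    "retryPolicy": { "type": "none" }
  },
  "runAfter": {}
}
```

To limit the retries instead of disabling them, a policy such as `{ "type": "fixed", "count": 2, "interval": "PT20S" }` could be used instead.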

Run an exception scenario

To see what happens in case of an exception, I reconfigure Logic App 2 to send to a non-existing queue. In both Logic Apps, the Response action is skipped because the Logic App stops processing in case of a failure.

The web service call results in a meaningless exception message for the consuming application:

Optimize for exceptions

Ensure a meaningful exception is returned

Let's first change Logic App 2, so it returns the real Service Bus exception in case of a failure. The recommended way is to add a Scope around the Send Message action. The details of the scope outcome are provided via the @result function. More information is given here. Depending on the resulting outcome of the scope, a different response can be returned.

Note the Switch statement, based on the status of the scope. The @result function returns an array that provides a result for each action within the scope. The status of the first action is extracted via this expression: @result('scope')[0]['status'].
In the Failure Response, I return the resulting status code - @result('scope')[0]['outputs']['statusCode'] - and the exception message - @result('scope')[0]['outputs']['body']['message'].
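Put together, a sketch of the Switch with the Failure Response in the code view could look like this (action names such as 'scope' and 'Failure_Response' are assumptions):

```json
"Switch": {
  "type": "Switch",
  "expression": "@result('scope')[0]['status']",
  "cases": {
    "Failure": {
      "case": "Failed",
      "actions": {
        "Failure_Response": {
          "type": "Response",
          "inputs": {
            "statusCode": "@result('scope')[0]['outputs']['statusCode']",
            "body": "@result('scope')[0]['outputs']['body']['message']"
          },
          "runAfter": {}
        }
      }
    }
  },
  "default": { "actions": {} },
  "runAfter": { "scope": [ "Succeeded" ] }
}
```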

Changing Logic App 1 to meet the expectations is a lot easier.  Just ensure it returns the status code and message from the invoked Logic App.

Ensure the Logic App continues on failure

Another optimization we must perform is to configure the Response action and Switch statement to also run in case the preceding action fails. This can be done within the code view:
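In the code view this boils down to extending the runAfter section of the Switch (and similarly of the Response in Logic App 1) with the Failed status — a sketch, assuming the scope action is named 'scope':

```json
"Switch": {
  "runAfter": {
    "scope": [ "Succeeded", "Failed" ]
  }
}
```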

Inspect the result

Exception scenario

If we trigger an exception scenario, the run detail of Logic App 2 looks good. The Scope has failed, which is detected by the Switch statement and the Logic App returns the Failure Response.

The outcome of Logic App 1 also meets the expectations, as the response of Logic App 2 is returned to the consuming client.

The web service client also gets a nice exception message.

Success scenario

We focused completely on the exception scenario, so it's good to double check that the happy path still behaves well. Just update Logic App 2 so it sends to an existing queue again. Logic App 2 sends a success response, which is great!

Logic App 1 shows only green statuses:

The web service client gets a confirmation that the message got queued!


The default settings for retry policies and exception handling might not always fit your scenario. Make sure you understand the Logic Apps capabilities on this matter and apply them according to your needs. Small prototyping exercises can serve as basis for development guidelines and best practices.

Happy exception handling!

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Thursday, February 9, 2017 4:00 PM

by Massimo Crippa

In Azure API Management, groups are used to manage the visibility of products to developers, so developers can view and consume the APIs that are contained in the groups to which they belong.

Suppose that we have a custom group for developers affiliated with a specific business partner and we want to allow those developers (who signed up with different identity providers) to access only the partner's relevant products.

Let's combine Logic Apps, Azure API Management and ARM together to automate the user group association.

In short: no matter which identity provider (AAD, Google, Twitter, etc.) is used to sign up, when the user belongs to the domain, he or she should be added to the "Codit Dev Team" custom group.

The basic idea here is to use Logic Apps as a batch process to get the list of registered users and then call a child Logic App to assign the current developer to the proper custom group to manage product visibility.

Logic Apps and Azure API Management

There are three ways to invoke an API Management endpoint from a Logic App:

  • API Management connector. The connector is pretty straightforward. You first select an APIM tenant, the API and then the operation to be called. Finally, the available headers and parameters are automatically displayed. The APIM connector by default shows only the APIM tenants created in the same subscription where the Logic App was created. 
  • Http + Swagger connector. This connector provides a similar user experience as the APIM connector. The shape of the API with the parameters are automatically integrated in the designer.
  • Http connector. It requires you to specify the HTTP verb, URL, headers and body to perform an HTTP call. Simple as that!

In this exercise, all the services to be integrated are located in different Azure subscriptions, therefore I used only the Http and Http + Swagger connectors.

Manage 'em all

With the "Every API should be a managed API" mantra in mind, and with the final goal of having more information about which API is called and its performance, we created a facade API for every HTTP call.

Here is the list of managed APIs:

  • Every call to the Azure Resource Manager (get users, get groups by user, add user to group)
  • Get the token to authorize the ARM call
  • Call the child Logic App

And here are the Logic App workflows that were created.

Some other benefits we got from the virtualization: 

  • Use of a single authorization model between Logic App and APIM by providing an API Key via the "Ocp-Apim-Subscription-Key" header.
  • Balancing complexity and simplicity. The ARM authentication is delegated to the API Management layer.
  • Apply a consistent resource naming convention. 

Azure API Management Policies

The policy engine is where the core power of Azure API Management lies. Let's go through the policies that have been configured for this exercise. 

Get the bearer token

A token API with a GET operation is used by the ARM facade API to get the bearer token to authorize the call to the Azure Resource Manager endpoint. The policy associated with the "get-token" operation changes the HTTP request method and sets the body of the request to be sent to the AAD token endpoint using the password flow.

Call the ARM

This is the call to the ARM endpoint (get users, get groups by user, add user to group). The "send-request" policy is used to perform a call to the private token API and to store the response in the bearerToken context property.

The "set-header" policy in combination with a policy expression is used to extract the token and to add it as a header to the request sent to the ARM endpoint.

This policy can be improved by adding policy expressions to store and retrieve the token from the cache. Here is an example.

Logic Apps facade API

The Logic Apps workflows that expose an HTTP trigger can be called by using the POST verb only, passing the parameters in the body of the request.

The child workflow that takes care of assigning a user to a specific group has been virtualized via Azure API Management to change the URL segments to {uid}/groups/{groupname} and to change the request method to PUT.
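A sketch of what the call to this virtualized endpoint could look like from the parent workflow; the tenant URL, subscription key and action name are assumptions, not the actual values:

```json
"Assign_user_to_group": {
  "type": "Http",
  "inputs": {
    "method": "PUT",
    "uri": "https://mytenant.azure-api.net/users/<uid>/groups/<groupname>",
    "headers": {
      "Ocp-Apim-Subscription-Key": "<subscription key>"
    }
  },
  "runAfter": {}
}
```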


Thanks to this simple Logic App and some APIM power, I can now be sure that every new colleague who signs up to our developer portal is automatically associated with the internal developer team, so that he/she can get access to a broader set of APIs.

A similar result can be achieved using the Azure B2B/B2C integration in combination with the AAD security groups but, at the time of writing, the APIM integration with AAD B2C has not been completed yet.

Another benefit of managed APIs is increased visibility into the exposed assets and their performance: discover how an API is used, get information about the consumers, and identify the trends that most impact the business.



Categories: API Management
Tags: Azure
written by: Massimo Crippa

Posted on Tuesday, February 7, 2017 2:33 PM

by Toon Vanhoutte

Logic Apps offers the long-requested resubmit feature. This is a very powerful capability to take into account when designing an error handling strategy. However, do not take the resubmit for granted. Ensure you know how it behaves or you might end up with some unpleasant surprises at the end of your project. Read about it on the internet, discuss it with your colleagues and eventually get your hands dirty! Some small prototyping during the design phase is definitely recommended.

Design to benefit from resubmit!

The resubmit functionality will create a new instance of the Logic App by firing an identical trigger message as within the originally failed Logic App. Depending on the type of trigger message, this is helpful or not. However, you can design to benefit from resubmit. Let's have a closer look!

The Logic App below kicks off on a configured time interval. It iterates through all files in the folder, parses them into XML, executes a transformation, sends them to the output folder and eventually deletes them. If something goes wrong, you cannot use the resubmit. The two main reasons are:

  • The trigger message does not contain the data you act upon, so resubmitting does not guarantee that you re-process the same message.
  • One Logic App handles multiple messages, so it's not possible to simply resubmit just one of them.

Let's adjust this into a more convenient design.  Here we use the file trigger, that ensures that one Logic App only handles one message.  The trigger also contains the payload of the message, so a resubmit guarantees that the same data will be reprocessed.  Now we can fully benefit from the resubmit function.

We can further improve this Logic App. In case the last delete action fails, we can still resubmit the message. However, this will result in the message being written twice to the output folder, which is not desired. In order to optimize this, let's split this logic app in two...

The first Logic App receives the file content, passes it to the second logic app and deletes the file from the input folder.

The second Logic App takes care of the message processing: flat file parsing, transformation and writing the file to the output. Note that the Request / Response actions are set at the beginning of the Logic App, which actually means that the processing logic is called in an asynchronous fashion (fire and forget) from the perspective of the consuming Logic App.

With such a design, the message can be deleted already from the input folder, even if the creation of the output file fails. Via the resubmit, you are still able to recover from the failure. Remember: design to benefit from resubmit!

Think about data retention!

If your error handling strategy is built on top of the resubmit function, you need to consider how long your Logic App run history remains available. According to the documentation, the Logic App storage retention is 90 days. That seems more than sufficient for most integration scenarios!

Resubmit Caveats

HTTP Request / Response

Consider the following Logic App. It starts with a request / response, followed by additional processing logic that is simulated by a Delay shape.


What happens if we resubmit this? As there is no real client application, the response action is skipped: "The execution of template action 'Response' is skipped: the client application is not waiting for a response from service." Cool! The engine does not fail on this and nicely skips the unnecessary step. However, as a consequence, the processing logic is also skipped, which is not our intention.

This issue can be tackled by diving into the code view.  Navigate to the Delay action and add the Skipped status in its runAfter section.  This means that the Delay action will be executed whenever the preceding Response action succeeded (normal behavior) or was skipped (resubmit behavior).
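A sketch of the adjusted Delay action in the code view (the 30 second interval is an arbitrary example value):

```json
"Delay": {
  "type": "Wait",
  "inputs": {
    "interval": { "count": 30, "unit": "Second" }
  },
  "runAfter": {
    "Response": [ "Succeeded", "Skipped" ]
  }
}
```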

This trick results in the desired outcome for a resubmit:

ServiceBus PeekLock

Consider the following Logic App. It starts with a Service Bus PeekLock - Complete combination, a best practice to avoid message loss, followed by additional processing logic that is simulated by a Delay shape.

What happens if we resubmit this?  As the message was already completed in the queue by the original run, we get an exception: "Failed to complete the message with the lock token 'baf877b2-d46f-4fae-8267-02903d9a9642'. The lock on the message has been lost".  This exception causes the Logic App to fail completely, so the resubmit is not valuable.

As a workaround, you can update the Delay action within the code view and add the Failed status in its runAfter section.  This is not the prettiest solution, but I couldn't find a better alternative.  Please share your ideas below, if you identified a better approach.
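The workaround in the code view could look roughly like this (the name of the Complete action is an assumption):

```json
"Delay": {
  "type": "Wait",
  "inputs": {
    "interval": { "count": 30, "unit": "Second" }
  },
  "runAfter": {
    "Complete_the_message_in_a_queue": [ "Succeeded", "Failed" ]
  }
}
```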

On a resubmit, we now get the desired behavior.

Are singleton Logic Apps respected?

Logic Apps provides the ability to have singleton workflows. This can be done by adding "operationOptions": "SingleInstance" to the polling trigger. Triggers are skipped in case a Logic App instance is still active. Read more on creating a Logic Apps singleton instance.
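A sketch of a polling trigger configured as a singleton (frequency and interval are arbitrary example values):

```json
"triggers": {
  "Recurrence": {
    "type": "Recurrence",
    "recurrence": {
      "frequency": "Minute",
      "interval": 1
    },
    "operationOptions": "SingleInstance"
  }
}
```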

I was curious to see whether resubmit respects the singleton instance of a Logic App. Therefore, I created a Logic App with a 30 second delay. At the moment a Logic App instance was active, I triggered a resubmit. This resulted in two active instances at the same time.

It's important to be aware of the fact that a resubmit might violate your singleton logic!

Against what version is the Logic App resubmitted?

What happens in case you have a failed instance of Logic App v1? Afterwards, you apply changes to the workflow definition, which results in v2. Will the resubmit of the failed v1 instance result in a v1 or a v2 instance being fired?

I created a very simple v1 Logic App. It receives an HTTP request and terminates the workflow with the message "This is version 1!".

Then I've modified this Logic App so it terminates with the message "This is version 2!".  Afterwards, I've resubmitted the failed v1 instance and it terminated with the "This is version 2!" message.

So remember that resubmitting is always performed against the latest deployed version of your Logic App. Increasing the "contentVersion" explicitly for every modification does not alter this behavior.

Feedback to the product team

The resubmit feature is very powerful from a runtime perspective, but it can be improved from an operational point of view. It would be nice to have visibility on the resubmit, so that you know as an operator that specific failed workflows can be ignored, because they were already resubmitted. This could be achieved by adding an additional workflow status: Resubmitted. An alternative solution is allowing an operator to query for failed Logic App runs that were not resubmitted yet. Do you like this suggestion? Please vote for it!

At the moment, Logic Apps does not support a resume function. It would be nice if you could explicitly add a persistence point to a Logic App, so that in case of a failure you can resume from the last point of persistence. Let the product team know if you're interested in this feature. You can work around this limitation by splitting your Logic App into multiple smaller workflows. This kind of introduces a resume function, because you can resubmit at every small workflow. As a drawback, the monitoring experience becomes more painful.

Nice to see that the "singleton" feature has been introduced! Why not take it one step further and make the number of allowed concurrent instances configurable? In this way, we can easily "throttle" Logic Apps by configuration in case the backend system cannot handle a high load. Vote here if you like this suggestion!

Hope this write-up gave you some new insights into Logic Apps resubmit capabilities!

Categories: Azure
written by: Toon Vanhoutte

Posted on Monday, February 6, 2017 12:21 PM

by Glenn Colpaert

In this blog post I will demonstrate how easy it is to expose SAP functionality to the cloud with the new functionalities provided by Logic Apps. I will create a Logic App that exposes BAPI functionality to an external website.

It was 2015 when Microsoft first announced Azure App Services. Since then the platform has gone through a lot of changes and tons of new capabilities and features were added. Especially when looking at Logic Apps.

In July of 2015 I wrote a blogpost on how to expose SAP functionality to the cloud using both API Apps and Logic Apps. For people that are feeling a bit nostalgic you can find the blogpost here.

As already said, since then the platform has gone through a lot of changes and I've been receiving lots of feedback to re-write the above blogpost to include those new changes. Now that the new SAP Connector is finally available in public preview, the wait is over...

What will be built?

Just like in the previous blogpost, we will keep it as simple as possible so we can focus on the main goal of this blogpost: showing how to expose SAP data via Logic Apps.
In this sample we will create a Logic App that exposes an HTTP endpoint; that Logic App will then call a BAPI function on the SAP system via the on-premises data gateway and return the result directly to the caller.

Configuring and installing the on-premises data gateway

A detailed installation and configuration guide can be found at the following link:

Important to mention is that the on-premises data gateway requires a 'Work account'. The configuration of the gateway will fail if you are using a Microsoft Account. If your Azure subscription is linked to a Microsoft Account and not to a work account, you can use the following workaround to create your own 'Work account' via your Microsoft Account subscription.

When the on-premises data gateway is installed and configured, you can use and configure the gateway in the Azure Portal. Detailed instructions can be found here:

Creating the Logic App

Before creating the Logic App, make sure that you have an instance of the on-premises data gateway service in Azure and that you've installed the SAP NCo client libraries (both x86 and x64 versions) on the gateway machine.

Add a Request trigger as the first step of your Logic App. This Request trigger will host an endpoint in Azure that you can use to send POST requests to your Logic App.
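In the code view, a bare Request trigger looks roughly like this (an empty schema accepts any payload):

```json
"triggers": {
  "manual": {
    "type": "Request",
    "kind": "Http",
    "inputs": {
      "schema": {}
    }
  }
}
```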

As a second step add the SAP Application Server – Send to SAP action.

Fill in all necessary connection options for your SAP server. Note that here we are using the on-premises data gateway to connect to the gateway machine.

After you have successfully created the connection towards the SAP server, you will be able to select an SAP action. BAPIs, IDOCs, RFC and TRFC are available.

It’s also perfectly possible to manually add this action to the Logic App Connector. In the case below the action was added manually. As an input message we are passing the Request Body to the SAP Server.

As a last step in this demo application, we are returning the result of the SAP Action as a Response of our Logic App.

The image below is the final result of our Logic App. 


Testing the Logic App

You can test this Logic App with Postman: provide your SAP XML as input, and the result of your call will be the result XML from SAP.



Categories: Azure
Tags: Azure, Logic Apps
written by: Glenn Colpaert