Codit Blog

Posted on Thursday, February 9, 2017 4:00 PM

by Massimo Crippa

In Azure API Management, groups are used to manage the visibility of products to developers: developers can view and consume the APIs that are contained in the products of the groups they belong to.

Suppose that we have a custom group for developers affiliated with a specific business partner and we want to allow those developers (who signed up with different identity providers) to access only the partner's relevant products.

Let's combine Logic Apps, Azure API Management and ARM together to automate the user group association.

In short: no matter which identity provider (AAD, Google, Twitter, etc.) is used to sign up, when the user belongs to the @codit.eu domain he or she should be added to the "Codit Dev Team" custom group.

The basic idea here is to use Logic Apps as a batch process that gets the list of registered users and then calls a child Logic App to assign each developer to the proper custom group, which in turn controls product visibility.

Logic Apps and Azure API Management

There are three ways to invoke an API Management endpoint from a Logic App:

  • API Management connector. The connector is pretty straightforward: you first select an APIM tenant, then the API and the operation to be called. Finally, the available headers and parameters are displayed automatically. By default, the APIM connector only shows the APIM tenants created in the same subscription as the Logic App. 
  • Http + Swagger connector. This connector provides a user experience similar to the APIM connector. The shape of the API and its parameters are automatically integrated in the designer.
  • Http connector. It requires you to specify the HTTP verb, URL, headers and body to perform an HTTP call. Simple as that!

In this exercise, all the integrated services are located in different Azure subscriptions, therefore I used only the Http and Http + Swagger connectors.

Manage 'em all

With the "Every API should be a managed API" mantra in mind, and with the final goal of having more information about which API is called and how it performs, we created a facade API for every HTTP call.

Here is the list of managed APIs:

  • Every call to the Azure Resource Manager (get users, get groups by user, add user to group)
  • Get the token to authorize the ARM call
  • Call the child Logic App

And here are the Logic App workflows that were created. 

Some other benefits we got from the virtualization: 

  • Use of a single authorization model between Logic App and APIM by providing an API Key via the "Ocp-Apim-Subscription-Key" header.
  • Balancing complexity and simplicity. The ARM authentication is delegated to the API Management layer.
  • Apply a consistent resource naming convention. 

Azure API Management Policies

The policy engine is where the core power of Azure API Management lies. Let's go through the policies that have been configured for this exercise. 

Get the bearer token

A token API with a GET operation is used by the ARM facade API to get the bearer token that authorizes the call to the Azure Resource Manager endpoint. The policy associated with the "get-token" operation changes the HTTP request method and sets the body of the request sent to the AAD token endpoint, using the password flow.
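
As a reference, here is a minimal sketch of what such an inbound policy could look like. The tenant, client id and service account values are placeholder named values, not the actual configuration used in this exercise:

<inbound>
  <base />
  <!-- Turn the incoming GET into a POST towards the AAD token endpoint -->
  <set-method>POST</set-method>
  <set-header name="Content-Type" exists-action="override">
    <value>application/x-www-form-urlencoded</value>
  </set-header>
  <!-- Password flow: every {{...}} value below is a placeholder named value -->
  <set-body>grant_type=password&amp;resource=https://management.azure.com/&amp;client_id={{client-id}}&amp;username={{service-account}}&amp;password={{service-account-password}}</set-body>
  <set-backend-service base-url="https://login.microsoftonline.com/{{tenant-id}}/oauth2" />
  <rewrite-uri template="/token" />
</inbound>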

Call the ARM

This is the call to the ARM endpoint (get users, get groups by user, add user to group). The "send-request" policy is used to perform a call to the private token API and to store the response in the bearerToken context variable.

The "set-header" policy in combination with a policy expression is used to extract the token and to add it as a header to the request sent to the ARM endpoint.
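
Below is a sketch of how those two policies could be combined in the inbound section of the ARM facade operations. The token API URL and subscription key are placeholders, and the expression assumes the token API returns the raw AAD JSON response:

<inbound>
  <base />
  <!-- Call the internal token API and keep the full response in a context variable -->
  <send-request mode="new" response-variable-name="bearerToken" timeout="20" ignore-error="false">
    <set-url>https://api.codit.eu/token/get-token</set-url>
    <set-method>GET</set-method>
    <set-header name="Ocp-Apim-Subscription-Key" exists-action="override">
      <value>{{token-api-subscription-key}}</value>
    </set-header>
  </send-request>
  <!-- Extract the access_token and pass it to ARM as a bearer token -->
  <set-header name="Authorization" exists-action="override">
    <value>@("Bearer " + (String)((IResponse)context.Variables["bearerToken"]).Body.As<JObject>()["access_token"])</value>
  </set-header>
</inbound>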

This policy can be improved by adding policy expressions to store and retrieve the token from the cache. Here is an example. 
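
A hedged sketch using the cache-lookup-value and cache-store-value policies; the cache key and the 50-minute duration are arbitrary choices, not the values used in this setup:

<inbound>
  <base />
  <!-- Try to get a previously cached token first -->
  <cache-lookup-value key="arm-bearer-token" variable-name="cachedToken" />
  <choose>
    <when condition="@(!context.Variables.ContainsKey(&quot;cachedToken&quot;))">
      <!-- Cache miss: call the token API as shown above -->
      <send-request mode="new" response-variable-name="bearerToken" timeout="20" ignore-error="false">
        <set-url>https://api.codit.eu/token/get-token</set-url>
        <set-method>GET</set-method>
      </send-request>
      <set-variable name="cachedToken" value="@((String)((IResponse)context.Variables[&quot;bearerToken&quot;]).Body.As<JObject>()[&quot;access_token&quot;])" />
      <!-- Keep the token in cache for 50 minutes (AAD tokens are valid for 60) -->
      <cache-store-value key="arm-bearer-token" value="@((String)context.Variables[&quot;cachedToken&quot;])" duration="3000" />
    </when>
  </choose>
  <set-header name="Authorization" exists-action="override">
    <value>@("Bearer " + (String)context.Variables["cachedToken"])</value>
  </set-header>
</inbound>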

Logic Apps facade API

Logic App workflows that expose an HTTP trigger can only be called with the POST verb, passing the parameters in the body of the request.

The child workflow that assigns a user to a specific group has been virtualized via Azure API Management to change the URL segments to https://api.codit.eu/managedarm/users/{uid}/groups/{groupname} and to change the request method to PUT.
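
A possible inbound policy for that facade operation could look like the sketch below. The body format and parameter names expected by the child Logic App are assumptions, and the operation's backend is supposed to be configured with the HTTP trigger callback URL of the child workflow:

<inbound>
  <base />
  <!-- The facade operation is exposed as PUT /users/{uid}/groups/{groupname},
       but the Logic App HTTP trigger only accepts POST -->
  <set-method>POST</set-method>
  <set-header name="Content-Type" exists-action="override">
    <value>application/json</value>
  </set-header>
  <!-- Move the URL template parameters into the request body for the child workflow -->
  <set-body>@{
      return new JObject(
          new JProperty("uid", context.Request.MatchedParameters["uid"]),
          new JProperty("groupName", context.Request.MatchedParameters["groupname"])
      ).ToString();
  }</set-body>
</inbound>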

Conclusion

Thanks to this simple Logic App and some APIM power, I can now be sure that every new colleague who signs up to our developer portal is automatically associated with the internal developer team, so that he/she gets access to a broader set of APIs.

A similar result can be achieved using the Azure B2B/B2C integration in combination with the AAD security groups but, at the time of writing, the APIM integration with AAD B2C has not been completed yet.

Another benefit of managed APIs is the increased visibility of the exposed assets and their performance: you can discover how an API is used, gather information about its consumers and spot the trends that have the biggest impact on the business.

Cheers

Massimo

Categories: API Management
Tags: Azure
written by: Massimo Crippa

Posted on Tuesday, February 7, 2017 2:33 PM

by Toon Vanhoutte

Logic Apps offers the long-requested resubmit feature. This is a very powerful capability to take into account when designing an error handling strategy. However, do not take the resubmit for granted. Ensure you know how it behaves, or you might end up with some unpleasant surprises at the end of your project. Read about it on the internet, discuss it with your colleagues and eventually get your hands dirty! Some small prototyping during the design phase is definitely recommended.

Design to benefit from resubmit!

The resubmit functionality will create a new instance of the Logic App by firing an identical trigger message as within the originally failed Logic App. Depending on the type of trigger message, this is helpful or not. However, you can design to benefit from resubmit. Let's have a closer look!

The Logic App below kicks off on a configured time interval. It iterates through all files in the folder, parses them into XML, executes a transformation, sends them to the output folder and eventually deletes them. If something goes wrong, you cannot use the resubmit. The two main reasons are:

  • The trigger message does not contain the data you act upon, so resubmitting does not guarantee that you re-process the same message.
  • One Logic App handles multiple messages, so it's not possible to simply resubmit just one of them.

Let's adjust this into a more convenient design. Here we use the file trigger, which ensures that one Logic App handles only one message. The trigger also contains the payload of the message, so a resubmit guarantees that the same data will be reprocessed. Now we can fully benefit from the resubmit function.

We can further improve this Logic App. In case the last delete action fails, we can still resubmit the message. However, this will result in the message being written twice to the output folder, which is not desired. In order to optimize this, let's split this logic app in two...

The first Logic App receives the file content, passes it to the second logic app and deletes the file from the input folder.

The second Logic App takes care of the message processing: flat file parsing, transformation and writing the file to the output. Note that the Request / Response actions are placed at the beginning of the Logic App, which actually means that the processing logic is called in an asynchronous fashion (fire and forget) from the perspective of the consuming Logic App.
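
The fragment below sketches what this looks like in the workflow definition: the Response action runs directly after the trigger (empty runAfter), and the actual processing only starts after the response has been returned. The action names and the omitted inputs are placeholders, not the exact definition used here:

"Response": {
  "type": "Response",
  "inputs": { "statusCode": 200 },
  "runAfter": {}
},
"Flat_File_Decoding": {
  "runAfter": { "Response": [ "Succeeded" ] }
}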

With such a design, the message can be deleted already from the input folder, even if the creation of the output file fails. Via the resubmit, you are still able to recover from the failure. Remember: design to benefit from resubmit!

Think about data retention!

If your error handling strategy is built on top of the resubmit function, you need to consider the duration that your Logic App history is available. According to the documentation, the Logic App storage retention is 90 days. Seems more than sufficient for most integration scenarios!

Resubmit Caveats

HTTP Request / Response

Consider the following Logic App. It starts with a request / response, followed by additional processing logic that is simulated by a Delay shape.

 

What happens if we resubmit this? As there is no real client application, the response action is skipped, with the message "The execution of template action 'Response' is skipped: the client application is not waiting for a response from service." Cool! The engine does not fail on this and nicely skips the unnecessary step. However, as a consequence, the processing logic is also skipped, which is not our intention.

This issue can be tackled by diving into the code view.  Navigate to the Delay action and add the Skipped status in its runAfter section.  This means that the Delay action will be executed whenever the preceding Response action succeeded (normal behavior) or was skipped (resubmit behavior).
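
In the code view, the adjusted Delay action could look like this (a sketch; the delay interval is arbitrary):

"Delay": {
  "type": "Wait",
  "inputs": { "interval": { "count": 1, "unit": "Minute" } },
  "runAfter": {
    "Response": [ "Succeeded", "Skipped" ]
  }
}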

This trick results in the desired outcome for a resubmit:

ServiceBus PeekLock

Consider the following Logic App. It starts with a Service Bus PeekLock - Complete combination, a best practice to avoid message loss, followed by additional processing logic that is simulated by a Delay shape.

What happens if we resubmit this?  As the message was already completed in the queue by the original run, we get an exception: "Failed to complete the message with the lock token 'baf877b2-d46f-4fae-8267-02903d9a9642'. The lock on the message has been lost".  This exception causes the Logic App to fail completely, so the resubmit is not valuable.

As a workaround, you can update the Delay action within the code view and add the Failed status in its runAfter section.  This is not the prettiest solution, but I couldn't find a better alternative.  Please share your ideas below, if you identified a better approach.
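
The corresponding code view fragment, assuming the Service Bus action is named Complete_the_message_in_a_queue (the action name and delay interval are illustrative):

"Delay": {
  "type": "Wait",
  "inputs": { "interval": { "count": 1, "unit": "Minute" } },
  "runAfter": {
    "Complete_the_message_in_a_queue": [ "Succeeded", "Failed" ]
  }
}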

On a resubmit, we now get the desired behavior.

Are singleton Logic Apps respected?

Logic Apps provides the ability to have singleton workflows. This can be done by adding "operationOptions" : "SingleInstance" to the polling trigger. Triggers are skipped in case a Logic App instance is still active. Read more on creating a Logic Apps singleton instance.
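
As an illustration, a recurrence-based trigger with the singleton option could be defined as follows (the frequency is just an example):

"triggers": {
  "Recurrence": {
    "type": "Recurrence",
    "recurrence": { "frequency": "Minute", "interval": 5 },
    "operationOptions": "SingleInstance"
  }
}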

I was curious to see whether resubmit respects the singleton instance of a Logic App. Therefore, I created a Logic App with a 30 second delay. At the moment a Logic App instance was active, I triggered a resubmit. This resulted in two active instances at the same time.

It's important to be aware of the fact that a resubmit might violate your singleton logic!

Against what version is the Logic App resubmitted?

What happens in case you have a failed instance of Logic App v1? Afterwards, you apply changes to the workflow definition, which results in v2. Will the resubmit of the failed v1 instance result in a v1 or a v2 instance being fired?

I created a very simple v1 Logic App. It receives an HTTP request and terminates the workflow with the message "This is version 1!".

Then I've modified this Logic App so it terminates with the message "This is version 2!".  Afterwards, I've resubmitted the failed v1 instance and it terminated with the "This is version 2!" message.

So remember that resubmitting is always performed against the latest deployed version of your Logic App. Explicitly increasing the "contentVersion" for every modification does not alter this behavior.

Feedback to the product team

The resubmit feature is very powerful from a runtime perspective, but it can be improved from an operational point of view. It would be nice to have visibility on the resubmit, so that you know as an operator that specific failed workflows can be ignored, because they were already resubmitted. This could be achieved by adding an additional workflow status: Resubmitted. An alternative solution is allowing an operator to query for failed Logic App runs that were not resubmitted yet. Do you like this suggestion? Please vote for it!

At the moment, Logic Apps does not support a resume function. It would be nice if you could explicitly add a persistence point to the Logic App, so that in case of a failure you can resume from the last point of persistence. Let the product team know if you're interested in this feature. You can work around this limitation by splitting your Logic App into multiple smaller workflows, which more or less introduces a resume function, because you can resubmit at every small workflow. As a drawback, the monitoring experience becomes more painful.

Nice to see that the "singleton" feature has been introduced! Why not take it one level further and make the number of allowed concurrent instances configurable? That way, we can easily "throttle" Logic Apps through configuration in case the backend system cannot handle a high load. Vote here if you like this suggestion!

Hope this write-up gave you some new insights into Logic Apps resubmit capabilities!
Toon

Categories: Azure
written by: Toon Vanhoutte

Posted on Monday, February 6, 2017 12:21 PM

by Glenn Colpaert

In this blog post I will demonstrate how easy it is to expose SAP functionality to the cloud with the new functionalities provided by Logic Apps. I will create a Logic App that exposes BAPI functionality to an external website.

It was 2015 when Microsoft first announced Azure App Services. Since then the platform has gone through a lot of changes and tons of new capabilities and features were added. Especially when looking at Logic Apps.

In July 2015 I wrote a blogpost on how to expose SAP functionality to the cloud using both API Apps and Logic Apps. For those who are feeling a bit nostalgic, you can find that blogpost here.

As already said, since then the platform has gone through a lot of changes and I've been receiving lots of feedback to re-write the above blogpost to include those new changes. Now that the new SAP Connector is finally available in public preview, the wait is over...

What will we build?

Just like in the previous blogpost, we will keep it as simple as possible so we can focus on the main goal of this blogpost: showing how to expose SAP data via Logic Apps.
In this sample we will create a Logic App that exposes an HTTP endpoint; that Logic App will then call a BAPI function on the SAP system via the on-premises data gateway and return the result directly to the caller.

Configuring and installing the on premises data gateway

A detailed installation and configuration guide can be found at the following link: 
https://docs.microsoft.com/en-us/azure/app-service-logic/app-service-logic-gateway-install 

Important to mention is that the on-premises data gateway requires a 'Work account'. The configuration of the gateway will fail if you are using a Microsoft Account. If your Azure subscription is linked to a Microsoft Account and not to a work account, you can use the following workaround to create your own 'Work account' via your Microsoft Account subscription.
https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-create-aad-work-id?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json#locate-your-default-directory-in-the-azure-classic-portal

When the on premises data gateway is installed and configured you can use and configure the gateway in the Azure Portal. Detailed instructions can be found here: https://docs.microsoft.com/en-us/azure/app-service-logic/app-service-logic-gateway-connection 

Creating the Logic App

Before creating the Logic App, make sure that you have an instance of the on-premises data gateway service in Azure and that you've installed the SAP NCO client libraries (both x86 and x64 versions) on the gateway machine.


Add a Request trigger as the first step of your Logic App. This Request trigger will host an endpoint in Azure that you can use to send POST requests to your Logic App. 

As a second step add the SAP Application Server – Send to SAP action.

Fill in all the necessary connection options for your SAP server. Please note that we are using the on-premises data gateway here to connect to the gateway machine.

 
After you have successfully created the connection to the SAP server, you will be able to select a SAP action. BAPIs, IDOCs, RFC and TRFC are available.

 
It's also perfectly possible to add this action to the Logic App connector manually. In the case below, the action was added manually. As an input message, we are passing the request body to the SAP server.

 
As a last step in this demo application, we are returning the result of the SAP Action as a Response of our Logic App.


The image below is the final result of our Logic App. 

 

Testing the Logic App

You can test this Logic App with Postman: provide your SAP XML as input, and the result of your call will be the result XML from SAP.

Cheers,

Glenn

Categories: Azure
Tags: Azure, Logic Apps
written by: Glenn Colpaert

Posted on Wednesday, February 1, 2017 3:00 PM

by Toon Vanhoutte

In many integration projects, the importance of idempotent receivers is overlooked. This blog post summarizes what idempotent receivers are, why we need them and how we can achieve it.

What?

Let's first have a closer look at the definition of idempotence, according to Wikipedia. "Idempotence is the property of certain operations in mathematics and computer science, that can be applied multiple times without changing the result beyond the initial application." The meaning of this definition is explained as: "a function is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once; i.e., ƒ(ƒ(x)) ≡ ƒ(x)".

If we apply this to integration, it means that a system is idempotent when it can process a specific message multiple times, while still retaining the same end result. As a real-life example, an ERP system is idempotent if only one sales order is created, even if the CreateSalesOrder command message was submitted multiple times by the integration layer.

Why?

Often, customers request the integration layer to perform duplicate detection, so that the receiving systems do not need to be idempotent. This statement is only partially true. Duplicate detection on the middleware layer can discard messages that are received more than once. However, even in case a message is only received once by the middleware, it may still end up multiple times in the receiving system. Below you can find two examples of such edge cases.

Web service communication

Nowadays, integration leverages more and more the power of APIs. APIs are built on top of the HTTP protocol, which can cause issues due to its nature. Let's consider the following situations:

  1. In this case, all is fine. The service processed the request successfully and the client is aware of this.

  2. Here there is also no problem. The service failed processing the request and the client knows about it. The client will retry. Eventually the service will only process the message once.

  3. This is a dangerous situation in which client and service are misaligned on the status. The service successfully processed the message, however the HTTP 200 response never reached the client. The client times out and will retry. In this case the message is processed twice by the server, so idempotence might be needed.

Asynchronous queueing

In case a message queueing system is used, idempotency is required if the queue supports guaranteed at-least-once delivery. Let's take Azure Service Bus queues as an example. Service Bus queues support the PeekLock mode. When you peek a message from the queue, it becomes invisible for other receivers during a specific time window. You can explicitly remove the message from the queue, by executing a Complete command.

In the example below, the client peeks the message from the queue and sends it to the service. Server-side processing goes fine and the client receives the confirmation from the service. However, the client is not able to complete the message because of an application crash or a network interference. In this case, the message will become visible again on the queue and will be presented a second time to the service. As a consequence, idempotence might be required.

How?

The above scenarios showcase that duplicate data entries can be avoided most of the time, however in specific edge cases a message might be processed twice. Within the business context of your project, you need to determine if this is an issue. If 1 out of 1000 emails is sent twice, this is probably not a problem. If, however, 1 out of 1000 sales orders is created twice, this can have a huge business impact. The problem can be resolved by implementing exactly-once delivery or by introducing idempotent receivers.

Exactly-once delivery

The options to achieve exactly-once delivery on a protocol level are rather limited. Exactly-once delivery is very difficult to achieve between systems of different technologies. Attempts to provide an interoperable exactly-once protocol, such as SOAP WS-ReliableMessaging, ended up very complex and often not interoperable in practice. In case the integration remains within the same technology stack, some alternative protocols can be considered. On a Windows platform, Microsoft Distributed Transaction Coordinator can ensure exactly-once delivery (or maybe better exactly-once processing). The BizTalk Server SQL adapter and the NServiceBus MSMQ and SQL transport are examples that leverage this transactional message processing.

On the application level, the integration layer could be made responsible to check first against the service if the message was already processed. If this turns out to be true, the message can be discarded; otherwise the message must be delivered to the target system. Be aware that this results in chatty integrations, which may influence performance in case of a high message throughput.

Idempotent receiver

Idempotence can be established within the message itself. A classic example to illustrate this is a financial transaction. A non-idempotent message contains a command to increase your bank balance by € 100. If this message gets processed twice, it's positive for you, but the bank won't like it. It's better to create a command message that states that the resulting bank balance must be € 12100. This example clearly solves idempotence, but is not built for concurrent transactions.

An idempotent message is not always an option. In such cases the receiving application must take responsibility to ensure idempotence. This can be done by maintaining a list of message IDs that have been processed already. If a message arrives with an ID that is already on the list, it gets discarded. When it's not possible to have a message ID within the message, you can keep a list of processed hash values. Another way of achieving idempotence is to set a unique constraint on the ID of the data entity. Instead of pro-actively checking if a message was already processed, you can just give it a try and handle the unique constraint exception.

Lately, I see more and more SaaS providers that publish idempotent upsert services, which I can only encourage! The term upsert means that the service itself can determine whether it needs to perform an insert or an update of the data entity. Preferably, this upsert is performed based on a functional id (e.g. customer number) and not based on an internal GUID, as otherwise you do not benefit from the performance gains.

Conclusion

For each integration you set up, it's important to think about idempotence. That's why Codit has it by default on its Integration Analysis Checklist. Investigate the probability of duplicate data entries, based on the protocols used. If your business case needs to avoid this at all times, check whether the integration layer takes responsibility on this matter or if the receiving system provides idempotent service endpoints. The latter is mostly the best performing choice.

Do you have other ways to deal with this? Do not hesitate to share your experience via the comments section!

Thanks for reading!

Toon

Categories: Architecture
Tags: Design
written by: Toon Vanhoutte

Posted on Wednesday, December 21, 2016 11:23 AM

by Tom Kerkhove

When maintaining dozens, or even hundreds, of Azure resources it is crucial to keep track of what is going on and how they behave. To achieve this, Azure provides a set of functionalities and services to help you monitor them.

One of those functionalities is the ability to configure Azure Alerts on your assets. By creating an alert you define how and under what circumstances you want to be notified of a specific event, e.g. sending an email when the DTU capacity of your production database is over 90%.

Unfortunately, receiving emails for alerts can be annoying and is not a flexible approach for handling alerts. What if I want to automate a process when our database is overloaded? Should I parse my mailbox? No!

Luckily, you can configure your Azure Alerts to push to a webhook where you process the notifications, and Azure Logic Apps is a perfect fit for this! By configuring all your Azure Alerts to push their events to your Azure Logic App, you decouple the processing from the notification medium and can easily change the way an event is handled.

Today I will show you how you can push all your Azure Alerts to a dedicated Slack channel, but you can use other Logic App connectors to fit your needs as well!

Creating a basic Azure Alert Handler

For starters, we will create a new Logic App that will receive all the event notifications - in this example, azure-alert-handler.

Note: It is a best practice to host it as close to your resources as possible, so I provisioned it in West Europe.

Once it is provisioned, we can start by adding a new Request Trigger connector. This trigger will expose a webhook that can be called on a dedicated URL. This URL is generated once you save the Logic App for the first time.

As you can see, you can also define the schema of the events that will be received, but more on this later.

Now that our Logic App is ready to receive events, we can configure our Azure Alerts. In this scenario we will create an alert on an API, but you could do this on almost any Azure resource. Here we will configure it to get a notification once there are more than 100 HTTP Server Errors in 5 minutes.

To achieve this, navigate to your Web App and search for "Alerts".

Click on "New Alert", define what the alert should monitor and specify the URL of the Request Trigger in our Logic App.

We are good to go! This means that whenever our alert changes its state, it will push a notification to the webhook inside our Logic App.

You can see all events coming into our Logic App by using the Trigger History and All Runs views of the Logic App itself.

When you select a Run you can view the details and thus what it has sent. Based on this sample payload, I generated the schema with jsonschema.net and used that to define the schema of the webhook. 
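
For reference, a trimmed-down example of such an alert payload is shown below. The values are placeholders and the exact set of fields depends on the alert type, but the properties used later on (status, context.name, context.resourceName, context.resourceRegion and context.portalLink) are the relevant ones:

{
  "status": "Activated",
  "context": {
    "timestamp": "2016-12-21T10:45:00.000Z",
    "name": "http-server-errors",
    "description": "More than 100 HTTP server errors in 5 minutes",
    "conditionType": "Metric",
    "subscriptionId": "<subscription id>",
    "resourceGroupName": "<resource group>",
    "resourceName": "my-web-app",
    "resourceType": "microsoft.web/sites",
    "resourceRegion": "West Europe",
    "portalLink": "https://portal.azure.com/#resource/..."
  },
  "properties": {}
}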

Didn't specify the schema? Don't worry you can still change it!

While this is great, I don't want to come back every 5 minutes to see whether or not there were new events.

Since we are using Slack internally, this is a good fit to consolidate all alerts in a dedicated channel so that we have everything in one place.

To push the notification to Slack, add a new Slack (Post Message) Action and authenticate with your Slack team.

Once you are authenticated, it is fairly simple to configure the connector - you need to tell it to what channel you want to push messages, what the message should look like, and other things like the name of the bot.

Here is an overview of all the settings of the Slack connector that you can use.

I used the @JSON function to parse the JSON input dynamically; later on we will have a look at how we can simplify this.

"Alert *'@{JSON(string(trigger().outputs.body)).context.name}'* is currently *@{JSON(string(trigger().outputs.body)).status}* for *@{JSON(string(trigger().outputs.body)).context.resourceName}* in @{JSON(string(trigger().outputs.body)).context.resourceRegion}_(@{JSON(string(trigger().outputs.body)).context.portalLink})_"

Once our updated Logic App is triggered you should start receiving messages in Slack.

Tip - You can also Resubmit a previous run, this allows you to take the original input and re-run it again with that information.

Awesome! However, the message tends to be a bit verbose because it mentions a lot of information, even when the alert is already resolved. Nothing we can't fix with Logic Apps!

Sending specific messages based on the alert status

In Logic Apps you can add a Condition that allows you to execute certain logic if some criteria are met.

In our case we will create a more verbose message when a new alert is 'Activated', while for other statuses we only want to give a brief update about that alert.

As you can see we are no longer parsing the JSON dynamically but rather using dynamic content, thanks to our Request Trigger Schema. This allows us to create more human-readable messages while omitting the complexity of the JSON input.
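
In code view, the condition boils down to a simple expression on the trigger body. A minimal sketch is shown below; the Slack Post Message actions go into the actions and else branches and are omitted here, and the property name assumes the schema above:

"Condition": {
  "type": "If",
  "expression": "@equals(triggerBody()?['status'], 'Activated')",
  "actions": { },
  "else": { "actions": { } },
  "runAfter": {}
}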

Once our new alerts are coming in it will now send customized messages based on the event!

Monitoring the Monitoring

The downside of centralizing the processing of something is that you create a single point of failure. If our Logic App is unable to process events, we won't see any Slack messages and will assume that everything is fine, while it certainly isn't. Because of that, we need to monitor the monitoring!

If you search for "Alerts" in your Logic App you will notice that you can create alerts for it as well. 
As you can see there are no alerts available by default so we will add one.

In our case we want to be notified if a certain number of runs is failing. When that happens we want to receive an email. You could set up another webhook as well, but I think emails are a good fit here.

Wrapping up

Thanks to this simple Logic App I can now easily customize the processing of our Azure Alerts without having to change any Alerts.

This approach also gives us more flexibility in how we process them - if we have to process database alerts differently, or we want to replace Slack with SMS or another integration, it is just a matter of changing the flow.

But don't forget to monitor the monitoring!

Thanks for reading,

Tom Kerkhove.

PS - Thank you Glenn Colpaert for the jsonschema.net tip!

Categories: Azure
Tags: Logic Apps
written by: Tom Kerkhove