
Codit Blog

Posted on Monday, February 6, 2017 12:21 PM

Glenn Colpaert by Glenn Colpaert

In this blog post I will demonstrate how easy it is to expose SAP functionality to the cloud with the new capabilities provided by Logic Apps. I will create a Logic App that exposes BAPI functionality to an external website.

It was 2015 when Microsoft first announced Azure App Services. Since then the platform has gone through a lot of changes and tons of new capabilities and features have been added, especially to Logic Apps.

In July 2015 I wrote a blog post on how to expose SAP functionality to the cloud using both API Apps and Logic Apps. For those feeling a bit nostalgic, you can find that blog post here.

As already said, the platform has gone through a lot of changes since then, and I've received lots of feedback asking me to rewrite that blog post to reflect those changes. Now that the new SAP Connector is finally available in public preview, the wait is over...

What will we build?

Just like in the previous blog post we will keep it as simple as possible, so we can focus on the main goal of this post: showing how to expose SAP data via Logic Apps.
In this sample we will create a Logic App that exposes an HTTP endpoint. That Logic App will then call a BAPI function on the SAP system via the on-premises data gateway and return the result directly to the caller.

Configuring and installing the on-premises data gateway

A detailed installation and configuration guide can be found at the following link:
https://docs.microsoft.com/en-us/azure/app-service-logic/app-service-logic-gateway-install 

Important to mention is that the on-premises data gateway requires a 'Work account'. The configuration of the gateway will fail if you are using a Microsoft Account. If your Azure subscription is linked to a Microsoft Account and not to a work account, you can use the following workaround to create your own 'Work account' via your Microsoft Account subscription.
https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-create-aad-work-id?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json#locate-your-default-directory-in-the-azure-classic-portal

When the on-premises data gateway is installed and configured, you can use and configure the gateway in the Azure Portal. Detailed instructions can be found here: https://docs.microsoft.com/en-us/azure/app-service-logic/app-service-logic-gateway-connection

Creating the Logic App

Before creating the Logic App, make sure that you have an instance of the on-premises data gateway resource in Azure and that you've installed the SAP NCo client libraries (both x86 and x64 versions) on the gateway machine.


Add a Request trigger as the first step of your Logic App. This Request trigger will host an endpoint in Azure that you can use to send POST requests to your Logic App.

As a second step, add the SAP Application Server – Send to SAP action.

Fill in all necessary connection options for your SAP server. Note that we are using the on-premises data gateway here to connect via the gateway machine.

 
After you have successfully created the connection to the SAP server, you will be able to select an SAP action. BAPIs, IDOCs, RFC and tRFC are available.

 
It’s also perfectly possible to add this action to the Logic App connector manually. In the case below the action was added manually. As an input message we are passing the request body to the SAP server.

 
As a last step in this demo application, we return the result of the SAP action as the response of our Logic App.


The image below is the final result of our Logic App. 

 

Testing the Logic App

You can test this Logic App with Postman: provide your SAP XML as input, and the result of your call will be the result XML returned by SAP.
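If you prefer scripting the test, a minimal sketch with Python's requests library could look like this (the callback URL and the XML payload are placeholders for your own values):

```python
# A minimal test client for the Logic App endpoint; the URL and XML payload
# below are placeholders, use the callback URL of your own Request trigger.
import requests

LOGIC_APP_URL = "https://<your-logic-app-callback-url>"
sap_request_xml = "<your SAP RFC/BAPI request XML goes here>"

response = requests.post(
    LOGIC_APP_URL,
    data=sap_request_xml,
    headers={"Content-Type": "application/xml"},
    timeout=60,
)
response.raise_for_status()
print(response.text)  # the result XML returned by SAP
```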

Cheers,

Glenn

Categories: Azure
Tags: Azure, Logic Apps
written by: Glenn Colpaert

Posted on Wednesday, February 1, 2017 3:00 PM

Toon Vanhoutte by Toon Vanhoutte

In many integration projects, the importance of idempotent receivers is overlooked. This blog post summarizes what idempotent receivers are, why we need them and how we can achieve it.

What?

Let's first have a closer look at the definition of idempotence, according to Wikipedia. "Idempotence is the property of certain operations in mathematics and computer science, that can be applied multiple times without changing the result beyond the initial application." The meaning of this definition is explained as: "a function is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once; i.e., ƒ(ƒ(x)) ≡ ƒ(x)".
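A tiny, hypothetical example of the ƒ(ƒ(x)) ≡ ƒ(x) property in code:

```python
# Normalizing an e-mail address is idempotent: applying the function twice
# gives the same result as applying it once.
def normalize(email: str) -> str:
    return email.strip().lower()

x = "  John.Doe@Example.COM "
assert normalize(normalize(x)) == normalize(x)  # f(f(x)) == f(x)
```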

If we apply this to integration, it means that a system is idempotent when it can process a specific message multiple times, while still retaining the same end result. As a real-life example, an ERP system is idempotent if only one sales order is created, even if the CreateSalesOrder command message was submitted multiple times by the integration layer.

Why?

Often, customers request the integration layer to perform duplicate detection, so that the receiving systems do not need to be idempotent. This statement is only partially true. Duplicate detection on the middleware layer can discard messages that are received more than once. However, even in case a message is only received once by the middleware, it may still end up multiple times in the receiving system. Below you can find two examples of such edge cases.

Web service communication

Nowadays, integration increasingly leverages the power of APIs. APIs are built on top of the HTTP protocol, which can cause issues due to its nature. Let's consider the following situations:

  1. In this case, all is fine. The service processed the request successfully, the response reached the client, and both sides agree on the outcome.

  2. Here there is also no problem. The service failed to process the request and the client knows about it. The client will retry and eventually the service will process the message exactly once.

  3. This is a dangerous situation in which client and service are misaligned on the status. The service successfully processed the message, but the HTTP 200 response never reached the client. The client times out and will retry, so the message is processed twice by the service and idempotence might be needed (see the sketch below).
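To make the third situation concrete, here is a minimal sketch of a client that retries on timeout (the endpoint is hypothetical); if the first attempt actually succeeded but the response got lost, the retry delivers the same command a second time:

```python
# Retry-on-timeout client; if the response of a successful call is lost,
# the retry makes the service process the same command twice.
import requests

def submit_order(payload: dict,
                 url: str = "https://erp.example.org/api/sales-orders",
                 retries: int = 3) -> dict:
    for _ in range(retries):
        try:
            response = requests.post(url, json=payload, timeout=5)
            response.raise_for_status()
            return response.json()
        except requests.Timeout:
            continue  # we cannot tell whether the service processed the request
    raise RuntimeError("order submission failed after retries")
```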

Asynchronous queueing

In case a message queueing system is used, idempotency is required if the queue supports guaranteed at-least-once delivery. Let's take Azure Service Bus queues as an example. Service Bus queues support the PeekLock mode. When you receive a message in this mode, it becomes invisible to other receivers for a specific time window. You explicitly remove the message from the queue by executing a Complete command.

In the example below, the client receives the message from the queue and sends it to the service. Server-side processing goes fine and the client receives the confirmation from the service. However, the client is not able to complete the message, because of an application crash or a network interference. In this case, the message becomes visible again on the queue and is presented to the service a second time. As a consequence, idempotence might be required.
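A minimal sketch of this peek-lock pattern, assuming the azure-servicebus Python SDK (v7-style API); the connection string, queue name and processing logic are placeholders:

```python
# Peek-lock receive: if the process dies before complete_message() is called,
# the lock expires and the queue redelivers the message to the service.
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"
QUEUE_NAME = "sales-orders"

def process(body: bytes) -> None:
    ...  # call the downstream service here

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:  # PeekLock by default
        for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            process(b"".join(message.body))
            receiver.complete_message(message)  # removes the message from the queue
```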

How?

The above scenarios showcase that duplicate data entries can be avoided most of the time, but in specific edge cases a message might be processed twice. Within the business context of your project, you need to determine if this is an issue. If 1 out of 1000 emails is sent twice, this is probably not a problem. If, however, 1 out of 1000 sales orders is created twice, this can have a huge business impact. The problem can be resolved by implementing exactly-once delivery or by introducing idempotent receivers.

Exactly-once delivery

The options to achieve exactly-once delivery on a protocol level are rather limited. Exactly-once delivery is very difficult to achieve between systems of different technologies. Attempts to provide an interoperable exactly-once protocol, such as SOAP WS-ReliableMessaging, ended up very complex and often not interoperable in practice. In case the integration remains within the same technology stack, some alternative protocols can be considered. On a Windows platform, Microsoft Distributed Transaction Coordinator can ensure exactly-once delivery (or maybe better exactly-once processing). The BizTalk Server SQL adapter and the NServiceBus MSMQ and SQL transport are examples that leverage this transactional message processing.

On the application level, the integration layer could be made responsible for first checking with the target service whether the message was already processed. If this turns out to be true, the message can be discarded; otherwise the message must be delivered to the target system. Be aware that this results in chatty integrations, which may influence performance in case of a high message throughput.
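A minimal sketch of this check-then-deliver pattern (both endpoints are hypothetical); note the extra round trip per message, which is what makes the integration chatty:

```python
# Check-then-deliver on the integration layer; a real implementation would also
# have to deal with the race window between the check and the delivery.
import requests

BASE_URL = "https://erp.example.org/api"

def deliver_once(message_id: str, payload: dict) -> None:
    check = requests.get(f"{BASE_URL}/processed-messages/{message_id}", timeout=5)
    if check.status_code == 200:
        return  # already processed: discard the duplicate
    requests.post(f"{BASE_URL}/sales-orders", json=payload,
                  headers={"x-message-id": message_id}, timeout=5).raise_for_status()
```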

Idempotent receiver

Idempotence can be established within the message itself. A classic example to illustrate this is a financial transaction. A non-idempotent message contains a command to increase your bank balance by € 100. If this message gets processed twice, it's positive for you, but the bank won't like it. It's better to create a command message that states the resulting bank balance, e.g. € 12,100. This clearly solves the idempotence issue, but it is not built for concurrent transactions.

An idempotent message is not always an option. In such cases the receiving application must take the responsibility to ensure idempotence. This can be done by maintaining a list of message ids that have already been processed. If a message arrives with an id that is already on the list, it gets discarded. When it's not possible to have a message id within the message, you can keep a list of processed hash values instead. Another way of achieving idempotence is to set a unique constraint on the id of the data entity. Instead of pro-actively checking whether a message was already processed, you can just give it a try and handle the unique constraint exception.
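A minimal sketch of such an idempotent receiver, relying on a unique constraint on the message id (SQLite is used only for illustration; the table and column names are assumptions):

```python
# Duplicate messages are rejected by the primary-key constraint and silently discarded.
import sqlite3

conn = sqlite3.connect("orders.db")
conn.execute("""CREATE TABLE IF NOT EXISTS sales_orders (
                    message_id TEXT PRIMARY KEY,  -- unique constraint on the id
                    payload    TEXT NOT NULL)""")

def handle(message_id: str, payload: str) -> None:
    try:
        with conn:
            conn.execute("INSERT INTO sales_orders (message_id, payload) VALUES (?, ?)",
                         (message_id, payload))
    except sqlite3.IntegrityError:
        pass  # the id was already processed: discard the duplicate
```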

Lately, I see more and more SaaS providers that publish idempotent upsert services, which I can only encourage! The term upsert means that the service itself can determine whether it needs to perform an insert or an update of the data entity. Preferably, this upsert is performed based on a functional id (e.g. customer number) and not on an internal GUID, as otherwise the client first has to look up that GUID and you lose the performance benefit.
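As a sketch of what an upsert keyed on a functional id looks like on the data layer (SQLite 3.24+ syntax; the table and columns are assumptions):

```python
# Upsert keyed on the functional id: the same message can be applied repeatedly
# and always leaves exactly one, up-to-date customer record behind.
import sqlite3

conn = sqlite3.connect("crm.db")
conn.execute("""CREATE TABLE IF NOT EXISTS customers (
                    customer_number TEXT PRIMARY KEY,  -- functional id
                    name            TEXT NOT NULL)""")

def upsert_customer(customer_number: str, name: str) -> None:
    with conn:
        conn.execute("""INSERT INTO customers (customer_number, name) VALUES (?, ?)
                        ON CONFLICT(customer_number) DO UPDATE SET name = excluded.name""",
                     (customer_number, name))
```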

Conclusion

For each integration you set up, it's important to think about idempotence. That's why Codit has it by default on its Integration Analysis Checklist. Investigate the probability of duplicate data entries, based on the protocols used. If your business case needs to avoid this at all times, check whether the integration layer takes responsibility for this matter or whether the receiving system provides idempotent service endpoints. The latter is usually the best performing choice.

Do you have other ways to deal with this? Do not hesitate to share your experience via the comments section!

Thanks for reading!

Toon

Categories: Architecture
Tags: Design
written by: Toon Vanhoutte

Posted on Wednesday, December 21, 2016 11:23 AM

Tom Kerkhove by Tom Kerkhove

When maintaining dozens, or even hundreds of Azure resources it is crucial to keep track of what is going on and how they behave. To achieve this, Azure provides a set of functionalities and services to help you monitor them.

One of those functionalities is the ability to configure Azure Alerts on your assets. By creating an alert you define how and under what circumstances you want to be notified of a specific event, e.g. sending an email when the DTU capacity of your production database is over 90%.

Unfortunately, receiving emails for alerts can be annoying and is not a flexible approach for handling alerts. What if I want to automate a process when our database is overloaded? Should I parse my mailbox? No!

Luckily, you can configure your Azure Alerts to push to a webhook where you process the notifications, and Azure Logic Apps is a perfect fit for this! By configuring all your Azure Alerts to push their events to your Azure Logic Apps, you decouple the processing from the notification medium and can easily change the way an event is handled.

Today I will show you how you can push all your Azure Alerts to a dedicated Slack channel, but you can use other Logic App connectors to fit your needs as well!

Creating a basic Azure Alert Handler

For starters, we will create a new Logic App that will receive all the event notifications - in this example, azure-alert-handler.

Note: It is a best practice to host it as close to your resources as possible so I provision it in West Europe.

Once it is provisioned, we can start by adding a new Request Trigger connector. This trigger will expose a webhook that can be called on a dedicated URL. This URL is generated once you save the Logic App for the first time.

As you can see, you can also define the schema of the events that will be received, but more on that later.

Now that our Logic App is ready to receive events, we can configure our Azure Alerts. In this scenario we will create an alert on an API, but you could do this on almost any Azure resource. Here we will configure it to get a notification once there are more than 100 HTTP Server Errors in 5 minutes.

To achieve this, navigate to your Web App and search for "Alerts".

Click on "New Alert", define what the alert should monitor and specify the URL of the Request Trigger in our Logic App.

We are good to go! This means that whenever our alert changes its state, it will push a notification to the webhook inside our Logic App.

You can see all events coming into our Logic App by using the Trigger History and All Runs of the Logic App itself.

When you select a Run you can view the details and thus what it has sent. Based on this sample payload, I generated the schema with jsonschema.net and used that to define the schema of the webhook. 
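For reference, a trimmed, made-up example of such an alert payload, limited to the fields used further on (the real payload contains more properties):

```python
# A trimmed, made-up Azure Alert webhook payload; only the fields used in the
# Slack message below are shown, the real payload contains more properties.
alert = {
    "status": "Activated",
    "context": {
        "name": "http-server-errors",
        "resourceName": "my-api",
        "resourceRegion": "West Europe",
        "portalLink": "https://portal.azure.com/...",
    },
}

print(f"Alert '{alert['context']['name']}' is currently {alert['status']} "
      f"for {alert['context']['resourceName']} in {alert['context']['resourceRegion']}")
```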

Didn't specify the schema? Don't worry, you can still change it!

While this is great, I don't want to come back every 5 minutes to see whether or not there were new events.

Since we are using Slack internally, this is a good fit: we can consolidate all alerts in a dedicated channel so that we have everything in one place.

To push the notification to Slack, add a new Slack (Post Message) Action and authenticate with your Slack team.

Once you are authenticated, it is fairly simple to configure the connector - you need to tell it which channel you want to push messages to, what the message should look like, and other settings such as the name of the bot.

Here is an overview of all the settings of the Slack connector that you can use.

I used the @JSON function to parse the JSON input dynamically; later on we will have a look at how we can simplify this.

"Alert *'@{JSON(string(trigger().outputs.body)).context.name}'* is currently *@{JSON(string(trigger().outputs.body)).status}* for *@{JSON(string(trigger().outputs.body)).context.resourceName}* in @{JSON(string(trigger().outputs.body)).context.resourceRegion}_(@{JSON(string(trigger().outputs.body)).context.portalLink})_"

Once our updated Logic App is triggered you should start receiving messages in Slack.

Tip - You can also Resubmit a previous run, this allows you to take the original input and re-run it again with that information.

Awesome! However, the message tends to be a bit verbose: it mentions a lot of information, even when the alert is already resolved. Nothing we can't fix with Logic Apps!

Sending specific messages based on the alert status

In Logic Apps you can add a Condition that allows you to execute certain logic if some criteria are met.

In our case we will create a more verbose message when a new Alert is 'Activated' while for other statuses we only want to give a brief update about that alert.

As you can see we are no longer parsing the JSON dynamically but rather using dynamic content, thanks to our Request Trigger Schema. This allows us to create more human-readable messages while omitting the complexity of the JSON input.

Once our new alerts are coming in it will now send customized messages based on the event!
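For illustration, the condition is roughly equivalent to the following sketch (the message templates are just an example):

```python
# Rough equivalent of the Logic App condition: a verbose message when the alert
# is activated, a brief status update for any other state (templates are examples).
def build_slack_message(alert: dict) -> str:
    ctx = alert["context"]
    if alert["status"] == "Activated":
        return (f"Alert *'{ctx['name']}'* is currently *Activated* for "
                f"*{ctx['resourceName']}* in {ctx['resourceRegion']} ({ctx['portalLink']})")
    return f"Alert *'{ctx['name']}'* changed to *{alert['status']}* for *{ctx['resourceName']}*"
```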

Monitoring the Monitoring

The downside of centralizing the processing of something is that you create a single point of failure. If our Logic App is unable to process events, we won't see any Slack messages and will assume that everything is fine, while it certainly isn't. Because of that, we need to monitor the monitoring!

If you search for "Alerts" in your Logic App you will notice that you can create alerts for it as well. 
As you can see there are no alerts available by default so we will add one.

In our case we want to be notified if a certain amount of runs is failing. When that happens we want to receive an email. You could set up another webhook as well, but I think emails are a good fit here.

Wrapping up

Thanks to this simple Logic App I can now easily customize the processing of our Azure Alerts without having to change any Alerts.

This approach also gives us more flexibility in how we process them - if we have to process database alerts differently, or want to replace Slack with SMS or another integration, it is just a matter of changing the flow.

But don't forget to monitor the monitoring!

Thanks for reading,

Tom Kerkhove.

PS - Thank you Glenn Colpaert for the jsonschema.net tip!

Categories: Azure
Tags: Logic Apps
written by: Tom Kerkhove

Posted on Wednesday, December 21, 2016 9:35 AM

Pieter Vandenheede by Pieter Vandenheede

On December 13th I spoke at BTUG.be XL about BizTalk 2016, the new features and the key aspects of its AlwaysOn support. In this post I share my slide deck with you.

Last week, I had a great time at BTUG.be, while presenting my session on BizTalk 2016.

I presented the new features in BizTalk Server 2016 RTM and a few takeaways from SQL Server 2016. More specifically, and in-depth, on SQL Server AlwaysOn support for BizTalk Server 2016 on-premises and in the Azure cloud, as well as an intro to the new Logic App adapter and how to install and connect it to your on-premises BizTalk Server.

As promised there, please find my slide deck below via SlideShare:

Contact me if you have any questions regarding the slides, I'd be happy to answer you.

The other speakers there were Glenn Colpaert (session about Azure Functions), Kristof Rennen (session on Building scalable and resilient solutions using messaging) and Nino Crudele (session on Holistic approaches to Integration).

As always, it was nice to talk to the people present. A big thank you to BTUG.be for having me again!

Enjoy the slide deck!

Pieter

Categories: Community
written by: Pieter Vandenheede

Posted on Thursday, December 1, 2016 2:05 PM

Massimo Crippa by Massimo Crippa

Don’t dump your internal data model on your clients. Work outside-in, design your API with the clients in mind. Build your server side API once and then tailor the API to different clients (Backend-For-Frontends pattern).

The nature of the mobile experience is often different from the desktop experience: different screen sizes and different functionality. We normally display less data, and it's good practice to perform fewer calls to avoid killing the battery.
A common way to accommodate more than one type of device and UI is to add more functionality over time to a compound API for multiple clients. At the end of the day this can result in a complex API that is hard to maintain.

The BFF pattern offers a possible solution to this problem: having a dedicated backend API for every type of client. The BFF pattern is growing in popularity, especially its implementation within API management gateways.

In this post, we will see how to leverage the power and the flexibility of the Azure API Management policy engine to reduce the complexity of one of the downstream APIs and therefore make it more suitable for mobile clients.

Excel as data service

On August 3rd, Microsoft announced the general availability of the Microsoft Excel REST API for Office 365. This API opens new opportunities for developers to create new digital experiences using Excel as a backend service.

Carpe diem! Don’t miss the chance and let’s use Excel as if it were one of the downstream services that power my brand new mobile application. To use Excel as a data service, I first created a new Excel file in my Office 365 drive and created a table inside the worksheet to define the area where the data will be stored.

To write a single row to the Excel workbook we must (see the sketch after this list):

  • refer to the workbook (specifying the user id and the workbook id)
  • create a session in order to get the workbook-session-id value
  • post/get the data, adding the “workbook-session-id” as an HTTP header
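As a rough sketch of what that means for a client without the BFF (base URL, ids, token and table name are placeholders, and the exact Office 365 Excel REST API paths may differ from these Graph-style ones):

```python
# The raw calls a client would need without the BFF; all ids, the token and the
# table name are placeholders, and the exact endpoint paths may differ.
import requests

WORKBOOK = ("https://graph.microsoft.com/v1.0/users/<user-id>"
            "/drive/items/<workbook-id>/workbook")
HEADERS = {"Authorization": "Bearer <access-token>"}

# 1. create a session and capture the workbook-session-id
session = requests.post(f"{WORKBOOK}/createSession",
                        json={"persistChanges": True}, headers=HEADERS).json()
session_headers = {**HEADERS, "workbook-session-id": session["id"]}

# 2. add a row to the table, passing the session id on every call
requests.post(f"{WORKBOOK}/tables/Messages/rows/add",
              json={"values": [["Massimo", "Hello from the mobile app",
                                "2016-12-01T14:05:00Z"]]},
              headers=session_headers).raise_for_status()

# 3. read the rows back
rows = requests.get(f"{WORKBOOK}/tables/Messages/rows",
                    headers=session_headers).json()["value"]
```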

And what about the structure of the data to be sent? What about the response? The picture below shows a request/response example to GET the rows from the Excel table.

BFF (aka “experience APIs”)

The goal of this exercise is to create an API dedicated to the mobile experience: remove the complexity from the URL and the HTTP headers, have simpler inbound/outbound data contracts and hide the details of the downstream service.

Here is where API Management comes into the picture, allowing the API publisher to change the behavior of the API through configuration policies, so that developers can iterate quickly on the client apps and innovation can happen at a faster pace.

An API has been added to the APIM gateway and three operations have been configured: init (to create a session), send message (save a message to the Excel workbook) and get messages (list all the sent messages).

Simplify the URL

The first step is to create the BFF mobile API, then add the rewrite-uri policy to expose a simpler URI on the gateway.

Remove HTTP header complexity

In this step we want to avoid injecting the "workbook-session-id" header all of the time. The main idea is to create an init operation that calls "createSession" on the Excel REST API, reads the "id" value from the response and stores it as the workbook-session-id in the gateway cache.

To achieve that, let's use a combination of policies associated with the INIT operation.

  • set-body to specify that the data needs to be persisted in the Excel workbook
  • set-variable to read the "id" from the response and store it in the workbook-session-id variable
  • cache-store-value to store the workbook-session-id in the cache, using the JWT token as the cache key
  • set-body to return a custom 200 response

On the outbound, in case of a valid response, the session identifier is read via context.Response.Body.

The policy associated with the GET messages operation retrieves the workbook-session-id from the cache, adds it to the session header and forwards the request to the downstream service.

Simplify the data contract (message transformation)

The goal of this step is to have a data contract tailored to the client: simpler and more compact in size.

The contract to send a message has been reduced to the minimum: a string. In the inbound policy the message is enriched with the name of the sender (taken from the JWT token) and a timestamp. The set-body policy is used to create the JSON object to be forwarded to the underlying API.

On the outbound channel, the result set of the GET messages operation is filtered to reduce the data transferred over the wire and mapped to a simpler JSON structure.
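To illustrate the kind of mapping performed in that outbound policy (in APIM this is done with a set-body policy expression; the row layout below is an assumption):

```python
# Shape of the outbound mapping: the verbose Excel rows payload is reduced to a
# compact list; the [sender, message, timestamp] column order is an assumption.
def to_mobile_contract(excel_rows: dict) -> list[dict]:
    simplified = []
    for row in excel_rows.get("value", []):
        sender, text, sent_at = row["values"][0]
        simplified.append({"from": sender, "message": text, "sentAt": sent_at})
    return simplified
```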

Hide the backend details

As a final step, some HTTP headers are deleted (with a product-scope policy) to hide the details of the downstream service.

In Action

Conclusion

The BFF supports transformative design and moves the underlying system into a better, less-coupled state, giving the dev teams the autonomy to iterate quickly on the client apps and deliver new digital experiences faster.

The tight coupling between the client and the API is thereby moved into the API Management layer, where we can benefit from capabilities like aggregation and transformation, and from the possibility to change the behavior of the API through configuration.

Cheers

Massimo

Categories: Azure
written by: Massimo Crippa