Codit Blog

Posted on Wednesday, May 24, 2017 12:00

By Toon Vanhoutte

This blog post covers several ways to optimize the monitoring experience of your Logic Apps. Logic Apps provides two ways to add functional data to your workflow runs: tracked properties and outputs. This functional data can be used to improve the operational experience, to find a specific run based on business data, and to serve as a basis for reporting. Both options are discussed in depth and compared.

Introduction

Tracked Properties

As the documentation states: Tracked properties can be added onto actions in the workflow definition to track inputs or outputs in diagnostics data. This can be useful if you wish to track data like an "order ID" in your telemetry.

Tracked properties can be added in code view to your Logic Apps actions in this way:
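The original sample is not preserved in this copy of the post, so here is a minimal sketch. The action name and the OrderId and Customer fields are assumptions for illustration; trackedProperties on an action can only reference that action's own inputs and outputs:

    "Compose_Order": {
        "type": "Compose",
        "inputs": "@triggerBody()",
        "runAfter": {},
        "trackedProperties": {
            "OrderId": "@action()['inputs']['OrderId']",
            "Customer": "@action()['inputs']['Customer']"
        }
    }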

Outputs

As the documentation states: Outputs specify information that can be returned from a workflow run. For example, if you have a specific status or value that you want to track for each run, you can include that data in the run outputs. The data appears in the Management REST API for that run, and in the management UI for that run in the Azure portal. You can also flow these outputs to other external systems like PowerBI for creating dashboards. Outputs are not used to respond to incoming requests on the Service REST API.

Outputs can be added in code view to your Logic Apps workflow in this way. Note that you can also specify their data type:
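Again a minimal sketch, as the original sample is not preserved here; the Reference and Quantity output names mirror the examples used later in this post:

    "outputs": {
        "Reference": {
            "type": "String",
            "value": "@{triggerBody()['OrderId']}"
        },
        "Quantity": {
            "type": "Int",
            "value": "@triggerBody()['Quantity']"
        }
    }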

Runtime Behaviour

Do runtime exceptions influence tracking?

What happens if my Logic App fails? Are the business properties still tracked or are they only available in a happy scenario? After some testing, I can conclude the following:

  • Tracked properties are only tracked if the appropriate action is executed.
  • Outputs are only tracked if the whole Logic App completes successfully.

When using tracked properties, my advice is to assign them to the first action of your Logic App (they are not supported on triggers), so they are certainly tracked.

Do tracking exceptions influence runtime?

Another important aspect of tracking / monitoring is that it should never influence the runtime behaviour. It's unacceptable that a Logic App run fails because a specific data field to be tracked is missing. Let's have a look at what happens in case we try to track a property that does not exist, e.g.: "Reference": "@{triggerBody()['xxx']}"

In both cases the Logic App ends in a failed state; however, there are some differences:

Tracked Properties

  • The action that is configured with tracked properties fails.
  • There is a TrackedPropertiesEvaluationFailed exception.

Outputs

  • All actions complete successfully, despite the fact that the Logic App run fails.
  • In the Run Details, an exception is shown for the specific output. Depending on the scenario, I've encountered two exceptions:

    >  The template language expression 'triggerBody()['xxx']' cannot be evaluated because property 'xxx' doesn't exist, available properties are 'OrderId, Customer, Product, Quantity'.

    > The provided value for the workflow parameter 'Reference' at line '1' and column '57' is not valid.

Below you can find a screen capture of the workflow details.

This is not the desired behaviour. Luckily, we can easily avoid this. Follow this advice and you'll be fine:

  • For strings: Use the question mark operator to reference potential null properties of an object without a runtime error. In case of a null reference, an empty string will be tracked. Example: "Reference": "@{triggerBody()?['xxx']}"

  • For integers: Use the int function to convert the property to an integer, in combination with the coalesce function, which returns the first non-null object in the arguments passed. Example: "Quantity": "@int(coalesce(triggerBody()?['xxx'], 0))". Both tips are combined in the sketch below.
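Applied to the outputs from earlier, a sketch of the null-safe version could look like this:

    "outputs": {
        "Reference": {
            "type": "String",
            "value": "@{triggerBody()?['xxx']}"
        },
        "Quantity": {
            "type": "Int",
            "value": "@int(coalesce(triggerBody()?['xxx'], 0))"
        }
    }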

Azure Portal

The first place where issues are investigated is the Azure Portal, so it would be very helpful to have these business properties displayed there. Tracked properties are nowhere to be found in the Azure Portal, but fortunately the outputs are displayed nicely in the Run Details section. There is, however, no option to search for a specific Logic App run based on these outputs.

Operations Management Suite

In this post, I won't cover the Logic Apps integration with Operations Management Suite (OMS) in depth, as there are already good resources available on the web. If you are new to this topic, be sure to check out this blog post and this webcast.

Important take-aways on Logic Apps integration with OMS:

  • Only tracked properties are available in OMS; there's no trace of outputs within OMS.
  • OMS can be used to easily search for a specific run, based on business properties.
    > The status field is the status of the particular action, not of the complete run.
    > The resource runId can be used to find the corresponding run details in the Azure Portal.

  • OMS can be used to easily create reports on the tracked business properties.
  • Think about data retention. The free pricing tier gives you data retention of 7 days.

Management API

Another way to search for particular Logic Apps runs is by using the Azure Service Management API. This might come in handy if you want to develop your own dashboard on top of Logic Apps.
The documentation clearly describes the operations available to retrieve historical Logic Apps runs in a programmatic way. This can be done easily with the Microsoft.Azure.Management.Logic library. Outputs are easier to access than tracked properties, as they reside on the WorkflowRun level. The following code snippet can serve as a starting point:
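The original snippet is not preserved in this copy of the post, so below is a minimal sketch using the Microsoft.Azure.Management.Logic library. The tenant, client, subscription, resource group and workflow names are placeholders, and the service principal authentication is an assumption:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Management.Logic;        // LogicManagementClient
    using Microsoft.Rest.Azure.Authentication;     // ApplicationTokenProvider

    class Program
    {
        static async Task Main()
        {
            // Authenticate with an AAD service principal (placeholder values)
            var credentials = await ApplicationTokenProvider.LoginSilentAsync(
                "<tenant-id>", "<client-id>", "<client-secret>");

            var client = new LogicManagementClient(credentials)
            {
                SubscriptionId = "<subscription-id>"
            };

            // Retrieve the run history of the Logic App (only the first page is shown here)
            var runs = client.WorkflowRuns.List("<resource-group>", "<logic-app-name>");

            foreach (var run in runs)
            {
                Console.WriteLine($"{run.Name} - {run.Status}");

                // Outputs reside directly on the WorkflowRun level;
                // tracked properties require drilling into the individual run actions.
                if (run.Outputs == null) continue;
                foreach (var output in run.Outputs)
                {
                    Console.WriteLine($"  {output.Key}: {output.Value.Value}");
                }
            }
        }
    }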

Unfortunately, the supported filter operations are limited to the ones available in the Azure Portal.

This means you can only find specific Logic Apps runs - based on outputs or tracked properties - by navigating through all of them, which is not feasible from a performance perspective.

Conclusion

Comparison

Below you can find a comparison table of the investigated features. My current advice is to use both of them, as you'll get the most in return. 

Feature             Tracked Properties       Outputs
Tracking level      Action                   Logic App run
Runtime behaviour   If action executes       If Logic App succeeds
Azure Portal        No integration           Visible in run details
OMS                 Full integration         No integration
Management API      No search                No search

Feedback to Product Team

Monitoring is one of the areas that needs some improvement to provide all we need for operating and monitoring an integration environment. Here are my feature requests, in order of priority:

  • Ensure outputs are also monitored in case a workflow run fails. Vote here.
  • Provide OMS integration for outputs. Vote here.
  • Extend the search filter in the Management API. Vote here.
  • Provide a unique URL to directly redirect to a specific Logic App run.

Hope this post clarifies the monitoring capabilities of Logic Apps. 

Feel free to vote for the mentioned feature requests!
Toon

Categories: Azure
Tags: Logic Apps
Written by: Toon Vanhoutte

Posted on Friday, May 19, 2017 11:29

By Toon Vanhoutte

Receiving data from a SQL table and processing it towards other back-end systems is a very common use case in integration. Logic Apps has all the required functionality in its toolbox to fulfill this integration need. This blog post explains how you can do this in a reliable fashion, in case you're dealing with mission-critical interfaces where no data loss is accepted.

Scenario

Let's discuss the scenario briefly.  We need to consume data from the following table.  All orders with the status New must be processed!

The table can be created with the following SQL statement:
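The original statement was not preserved in this copy of the post; below is a minimal sketch. The exact column names are assumptions, and the PeekedOn column is only needed for the peek-lock variant described later in this post:

    CREATE TABLE [dbo].[Orders]
    (
        [Id]        INT IDENTITY (1, 1) PRIMARY KEY,
        [Customer]  NVARCHAR (50) NOT NULL,
        [Product]   NVARCHAR (50) NOT NULL,
        [Quantity]  INT           NOT NULL,
        [Status]    NVARCHAR (20) NOT NULL DEFAULT ('New'),
        [PeekedOn]  DATETIME2     NULL  -- assumption: used to detect stale peek-locks
    );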

First Attempt

Solution

To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Note that it also returns @@ROWCOUNT, as this will come in handy in the next steps.
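As the original procedure was not preserved here, this is a sketch of what it could look like; the procedure name is an assumption:

    CREATE PROCEDURE [dbo].[GetNewOrder]
    AS
    BEGIN
        -- Select the first New order and mark it as Processed in a single statement
        UPDATE TOP (1) [dbo].[Orders]
        SET [Status] = 'Processed'
        OUTPUT INSERTED.*
        WHERE [Status] = 'New';

        -- Expose the number of affected rows as the procedure's return code
        RETURN @@ROWCOUNT;
    END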

The Logic App fires with a Recurrence trigger.  The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not.  In case an order is retrieved, its further processing can be performed, which will not be covered in this post.
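As an illustration of that check (a sketch; the action name matches the hypothetical stored procedure above), the SQL connector exposes the stored procedure's return value in the ReturnCode field of the action's body, so the condition can be expressed as:

    @equals(body('Execute_GetNewOrder_stored_procedure')?['ReturnCode'], 1)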

Evaluation

If you have a BizTalk background, this is a similar approach to using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction as it persists the data in the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.

In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed, but it never reaches the Logic App. Depending on the returned error code, your Logic App will end up in a Failed state without a clear description, or it will retry automatically (for error codes 429 and 5xx). In both situations you're facing data loss, which is not acceptable for our scenario.

Second attempt

Solution

We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus peek-lock. Data is received in 2 phases:

  1. You mark the data as Peeked, which means it has been assigned to a receiving process
  2. You mark the data as Completed, which means it has been received by the receiving process

Next to these two explicit processing steps, there must be a background task that reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.

Let's create the first stored procedure that marks the order as Peeked.
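A sketch, under the same assumptions as the table definition above; the procedure name matches the expression used later in this post:

    CREATE PROCEDURE [dbo].[PeekNewOrder]
    AS
    BEGIN
        -- Assign the first New order to a receiving process
        UPDATE TOP (1) [dbo].[Orders]
        SET [Status] = 'Peeked', [PeekedOn] = GETUTCDATE()
        OUTPUT INSERTED.*
        WHERE [Status] = 'New';

        RETURN @@ROWCOUNT;
    END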

The second stored procedure accepts the OrderId and marks the order as Completed.
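Again a sketch under the same assumptions:

    CREATE PROCEDURE [dbo].[CompleteOrder]
        @OrderId INT
    AS
    BEGIN
        -- Confirm that the order has been received by the receiving process
        UPDATE [dbo].[Orders]
        SET [Status] = 'Completed', [PeekedOn] = NULL
        WHERE [Id] = @OrderId
          AND [Status] = 'Peeked';
    END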

The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have had the Peeked status for more than 1 hour.
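A sketch of such a clean-up procedure, using the hypothetical PeekedOn column from the table definition above:

    CREATE PROCEDURE [dbo].[ReleaseStaleOrders]
    AS
    BEGIN
        -- Make orders that have been Peeked for more than 1 hour available again
        UPDATE [dbo].[Orders]
        SET [Status] = 'New', [PeekedOn] = NULL
        WHERE [Status] = 'Peeked'
          AND [PeekedOn] < DATEADD(HOUR, -1, GETUTCDATE());
    END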

Let's now consume the two stored procedures from within our Logic App. First we peek for a new order and, once we have received it, the order gets completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']

The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.

Evaluation

Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be quite a cumbersome experience for operators. Why can't we just resume the Logic App in case of an issue?

Third Attempt

Solution

As explained over here, Logic Apps has an extremely powerful mechanism of resubmitting workflows. Because Logic Apps has - at the time of writing - no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I'm sure that I'll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App in two separate workflows.

The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.

The second Logic App gets invoked by the first one. This Logic App completes the order first and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.

Evaluation

Very happy with the result as:

  • The data is received from the SQL table in a reliable fashion
  • The data can be resumed in case further processing fails

Conclusion

Don't forget that every action is HTTP based, which can have an impact on reliability. Consider a two-phased approach for receiving data, in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume / resubmit scenarios. Triggers are better suited for resubmit than actions, so if triggers are available, always use them!

This may sound like overkill to you, as these considerations will require some additional effort. My advice is to first determine whether your business scenario must cover such edge case failure situations. If yes, this post can be a starting point for your final solution design.

Liked this post? Feel free to share with others!
Toon

 

 

Categories: Azure
Written by: Toon Vanhoutte

Posted on Monday, May 15, 2017 16:43

By Sam Vanhoutte

Microsoft has just wrapped up its annual developer conference. In this blog, we will summarize the most important trends and announcements for you.

Sam Vanhoutte, CTO of Codit, was present at Microsoft Build 2017 and is reporting for us. Codit collaborates with Microsoft and their product teams, but we also place value in coming to these events and hearing about all the new announcements. The main focus for Build, by the way, is on developers. Therefore, we will not be talking about upcoming versions of Windows 10.

Unlike other years, Build was not held in San Francisco but instead moved to Seattle. There, CEO Satya Nadella shared his vision for developers. He pointed out that there are a lot of opportunities, but that opportunities also come with responsibilities.

Nadella illustrated the evolution of the internet with a few numbers. In 1992, the total internet volume was 100 gigabytes per day. Today, the world consumes 17.5 million times more, only now per second. Ninety percent of all data was generated over the past two years. Of course, this number will multiply because of the billions of devices and sensors that are going to be connected in the near future.

                                 

Microsoft in numbers

Microsoft also mentioned numbers about themselves. For instance: there are now 500 million Windows 10 devices. The commercial version of Office 365 has 100 million active users per month and Cortana has over 140 million users per month.

Azure Active Directory is currently being used by 12 million organizations. And over 90 percent of Fortune 500 companies use the Microsoft Cloud.

IoT

Microsoft is currently investing heavily in IoT, and notes that in the smart industry 'the edge' is going to be very important. ‘The edge’ means the periphery of the network, where sensors and devices generate their data (and sometimes process data) before it is moved to a cloud service.

This is also reflected in the product news: Azure IoT Edge is a combination of the intelligent cloud and the intelligent edge. The system enables cloud functionality to be executed at 'the edge'. This functionality includes gateways, machine learning, stream analytics and functions that can now be executed locally, but are managed and configured from the cloud.

The reason is that it is increasingly common for large amounts of data to be processed quickly. Sometimes this is better done locally. It must be noted that Microsoft is not the only one to move in that direction. Bosch also mentioned such local processing, quoting the term ‘fog computing’.

Codit is pleased with the expansion. Sam Vanhoutte said: "This is also very important for Codit and it is a major evolution in Microsoft's IoT offering, closely involving us as an Azure IoT Elite partner".

AI

In the area of artificial intelligence, Microsoft presented products like Microsoft Graph. The software links people, their activities and their devices together. Combined with machine learning and cognitive services, the company gave a demo where alerts were generated when unauthorized persons used certain items on the work floor.

In addition, Microsoft also introduced several new cognitive services, many of them fully customizable. This allows you to get started with machine learning by doing things like 'feeding' images of certain objects to the system. Another demo showed live captioning during a PowerPoint presentation. This is something that IBM also demonstrated during their Watson conference last fall.

Azure

The fact that Microsoft is more than Windows and Office becomes clear as soon as Azure moves into the spotlight. For example, a new database service by the name of Azure CosmosDB was presented, able to store and present data quickly and reliably on a worldwide scale. Storage possibilities include NoSQL documents, graphs and key-value.

Also new are two managed databases: Managed MySQL and Managed PostgreSQL.

For containers, developers have the choice of using Docker, Kubernetes and Mesos on Azure. This is supplemented with Service Fabric for Windows or Linux. Also, live debugging is possible in a production environment without impacting users.

But there is more...

Finally, Microsoft announced the Azure mobile app, Visual Studio for Mac and a built-in script console in the Azure portal. These are all things that make the lives of DevOps teams easier.

For developers, Microsoft announced XAML Standard 1.0. It should make it even easier to develop apps that run on Windows, iOS and Android. Microsoft also expanded on how to make apps more accessible to people with disabilities.

What does this mean for Belgium?

According to Sam Vanhoutte, Microsoft's cloud tools can help companies in their digital transformation. "It will be easier for many customers to achieve innovation in an inexpensive and fast-paced manner. Fail fast and pay as you grow." Also, the increased availability of AI can provide efficiency gains for local companies.

The fact that Microsoft is now offering ‘edge’ products and services is a new trend in the cloud landscape. However, according to Vanhoutte, it is not yet clear how all these new features are included in the pricing.

Codit also received kudos during the Build conference. Swiss Re, a Swiss insurance company and a customer of Codit, was asked to present a case in which travelers received reimbursement immediately after their flight had been delayed.

If you want to learn about all the announcements made at Build 2017, you can go to Channel 9, Microsoft's video service, and rerun the live streams.

Note: This article was first published via Data News (in Dutch). 

Categories: Azure, Architecture, Technology
Written by: Sam Vanhoutte

Posted on Monday, May 15, 2017 15:15

By Toon Vanhoutte

Recently, the product team released the first feature pack for BizTalk Server 2016. In this way, Microsoft aims to bring more agility to the release model of BizTalk Server. The feature pack contains a lot of new and interesting features, of which the automated deployment from VSTS is probably the most important one. This blog post looks at what is included in this offering and compares it with existing BTDF functionality.

In case you are interested in a detailed walk-through on how to set up continuous deployment, please check out this blog post on Continuous Deployment in BizTalk 2016, Feature Pack 1.

What is included?

Below, you can find a bullet point list of features included in this release.

  • An application version has been added and can be easily specified.
  • Automated deployment from VSTS, using a local deploy agent.
  • Automated deployment of schemas, maps, pipelines and orchestrations.
  • Automated import of multiple binding files.
  • Binding file management through VSTS environment variables.
  • Update of specific assemblies in an existing BizTalk application (with downtime)

What is not included?

This is a list of features that are currently not supported by the new VSTS release task:

  • Build BizTalk projects in VSTS hosted build servers.
  • Deployment to a remote BizTalk server (local deploy agent required)
  • Deployment to a multi-server BizTalk environment.
  • Deployment of shared artifacts (e.g. a schema that is used by several maps)
  • Deployment of more advanced artifacts: BAM, BRE, ESB Toolkit…
  • Control of which host instances / ports / orchestrations should be (re)started
  • Undeploy a specific BizTalk application, without redeploying it again.
  • Use the deployment task in TFS 2015 Update 2+ (no download supported)
  • Execute the deployment without the dependency of VSTS.

Conclusion!

Microsoft released this VSTS continuous deployment service into the wild, clearly stating that this is a first step in the BizTalk ALM story. That sounds very promising to me, as we can expect more functionality to be added in future feature packs!

After intensively testing the solution, I must conclude that there is a stable and solid foundation to build upon. I really like the design and how it is integrated with VSTS. This foundation can now be extended with the missing pieces, so we end up with great release management!

At the moment, this functionality can be used by BizTalk Server 2016 Enterprise customers that have a single-server environment and only use the basic BizTalk artifacts. Other customers should still rely on the incredibly powerful BizTalk Deployment Framework (BTDF), until the next BizTalk Feature Pack release. At that moment in time, we can re-evaluate! I'm quite confident that we're heading in the right direction!

Looking forward to more on this topic!

Toon

Categories: BizTalk
Written by: Toon Vanhoutte

Posted on Friday, May 5, 2017 16:56

By Massimo Crippa

The Analytics module in Azure API Management provides insights into the health and usage levels of your APIs, to identify key trends that impact the business. Analytics also provides a number of filtering and sorting options, to better understand who is using what. But what if I want more? For example, how about drill-down reports or mobile access?

I am a big fan of Power BI, so let's combine the power of Azure Functions and the simplicity of the APIM REST APIs to flow the analytics data to Power BI.

The picture below displays my scenario: Azure Functions connects and combines APIs from different Azure services (AAD, APIM, Storage) to create a smooth and lightweight integration.

It's a serverless architecture, which means that we don't have to worry about the infrastructure and can focus on the business logic, with rapid iterations and a faster time to market.

The APIM analytics (aggregated data) can be read by calling the report REST API. This information can then be written to Azure Tables and automatically synchronized with Power BI. 

 

Function

The Azure function:

  1. Is triggered via HTTP POST. It accepts a body parameter with the report name (byApi, byGeo, byOperation, byProduct, bySubscription, byUser) and the day to export.

  2. Calls the AAD token endpoint using the resource owner password flow to get the access token to authorize the ARM call.

  3. Calls the APIM rest API (https://management.azure.com/subscriptions/9124e1d1-c144-1ec2-7cb2-bef226961d93/resourceGroups/rg-apim/providers/Microsoft.ApiManagement/service/apim-codit-dev/reports/bySubscription?api-version=2016-07-07&$filter=timestamp%20ge%20datetime'2017-02-15T00:00:00'%20and%20timestamp%20le%20datetime'2017-02-16T00:00:00')

  4. Iterates through the JTokens in the response body to build a collection of IEnumerable<DynamicTableEntity> that is passed to CloudTable.ExecuteBatch to persist the data in Azure Storage (a sketch of this step follows below).
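A hedged sketch of this last step (not the original function code): it assumes the classic table storage SDK (Microsoft.WindowsAzure.Storage), that the report response exposes its rows in a value array, and a partition/row key convention invented for illustration.

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.WindowsAzure.Storage.Table;   // CloudTable, DynamicTableEntity
    using Newtonsoft.Json.Linq;                   // JObject, JArray

    public static class ReportPersistence
    {
        public static void Save(CloudTable table, string reportName, string day, string responseBody)
        {
            // The report API is assumed to return its rows in a "value" array
            var rows = JObject.Parse(responseBody)["value"] as JArray ?? new JArray();

            // Build one DynamicTableEntity per report row
            IEnumerable<DynamicTableEntity> entities = rows.Select((row, index) =>
            {
                var entity = new DynamicTableEntity($"{reportName}-{day}", index.ToString());
                foreach (var property in ((JObject)row).Properties())
                {
                    if (property.Value.Type == JTokenType.Null) continue;   // skip empty fields
                    entity.Properties[property.Name] =
                        EntityProperty.CreateEntityPropertyFromObject(property.Value.ToObject<object>());
                }
                return entity;
            });

            // Azure Table batches are limited to 100 operations per partition
            foreach (var chunk in entities.Select((e, i) => new { e, i }).GroupBy(x => x.i / 100))
            {
                var batch = new TableBatchOperation();
                foreach (var item in chunk)
                {
                    batch.InsertOrReplace(item.e);
                }
                table.ExecuteBatch(batch);
            }
        }
    }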

Because I am using a second function to extract and load (to Azure Storage) additional APIM tables (e.g. apis, products, users, etc.), I found this article on reusing code in different Azure Functions very useful.

I created a Logic App to trigger the functions multiple times, one per report to be exported. The code can support any new aggregation or additional fields added in the future without any modification.

Power BI

Using Power BI Desktop, I put together some visualizations and pushed them to the Power BI service. The report dataset is synced with the Azure tables once per day, which is configurable. Below, you can see screens from my mobile phone (left) and the desktop experience (right).

Conclusion

Even though the same result can be achieved using other Azure services like WebJobs or Data Factory, Azure Functions provides multiple benefits like a simple programming model, the abstraction of servers and the possibility to use a simple editor to build, test and monitor the code without leaving your browser. That's a perfect fit for a quick development cycle and faster adaptation that gains business advantages, isn't it?

Cheers,

Massimo

Categories: Azure
Written by: Massimo Crippa