Codit Blog

Posted on Friday, June 23, 2017 10:39 AM

by Stijn Moreels

As you may know from previous blog posts, I use FAKE as a build script tool to automate my compiling, testing, inspection, documentation and many other tasks. FAKE already has a lot of functionality in place, but it didn't have any support for StyleCop. Until now.

StyleCop and F#

Why StyleCop?

StyleCop is a tool that analyzes your source files against the default Microsoft coding conventions, which describe how code should look.

It's not that I agree with every StyleCop rule, but there are more rules I agree with than rules I don't.

FAKE already had support for FxCop, but since ReSharper has IntelliSense support for StyleCop, I found it reasonable to have my automated build (local and remote) depend on an analysis tool that I can use both in my day-to-day development and in my automated build.

StyleCop Command Line

FAKE is written in F#, which means the whole .NET Framework is available to us. Besides working with FAKE and some personal fun projects, I didn't have much experience with F#, so it was a fun challenge.

StyleCop already has some command line tools that you can use to analyze your source files, so in theory my F# implementation could simply call into those tools and pass the right arguments along.

F# Implementation

F# is a functional language with imperative features (such as mutable values and foreach loops). My goal was to write a purely functional implementation around the command line tools that analyze the source files.

I'm not going to run through every bit of the code; you can do that yourself via the link at the end of this post.

Before we get started, some practical numbers:

  • I wrote the implementation in 2-3 hours (so it certainly can be improved)
  • The single file contains around 100 lines of code

Just for fun, I checked the length of some of the C# implementation files and found that, altogether, they had 650 lines of code.

Imperative to Functional

For Each For Each

One of the things I had to implement was the ability to analyze source, project and solution files. Source files can be analyzed directly; project files must first be decomposed into source files, and solution files must first be decomposed into project files.

When I looked at the C# implementation, I saw a foreach inside a foreach inside a foreach, to walk through the three different levels of lists.
Each source file must be wrapped in a StyleCop Project object so it can be analyzed, so you must indeed go through every project file and solution file to obtain all the underlying source files.

Functional programming takes a different approach. What we want to do is "create a StyleCop Project for every source file". That was my initial idea: I don't want to care where those files came from (a project or a solution). I came up with this solution:

Every StyleCop Project must have an identifier, which is incremented for each source file to analyze.

In my whole file there are no foreach loops, but this function is called three times, directly or indirectly. So, you could say it's the core of the file.

The function takes a start ID, together with the files to run through, and creates StyleCop Project instances from them. It's a nice example of tail recursion in functional programming, where you let a function run through every element of a list (via the tail).
At the end, we combine all the projects into StyleCop Project instances and return that list.
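The original snippet isn't reproduced here, but a minimal sketch of the idea could look like this. The record type is just a stand-in for illustration; the real code wraps files in StyleCop's own project type:

    // Stand-in type; the real implementation uses StyleCop's project object.
    type StyleCopProject = { Id : int; SourceFile : string }

    // Walk a list of source files and create one project per file,
    // incrementing the project id as we go (tail recursion via an accumulator).
    let rec toStyleCopProjects projectId acc sourceFiles =
        match sourceFiles with
        | [] -> List.rev acc
        | file :: rest ->
            let project = { Id = projectId; SourceFile = file }
            toStyleCopProjects (projectId + 1) (project :: acc) rest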

Shorthanded Increment

In the C# implementation, they used the shorthand increment operator (++) to assign the next integer as the Project ID. In the previous snippet, you see that the ID is also passed into the function and increased internally. This way we can reuse the function, because we can start not only from zero but from any valid integer. And that's what I've done.

Source files call this function directly, project files go through their source files, and solution files go through their project files, but they all use this tail-recursive function. At the end, we combine all the StyleCop Projects created from the source files, project files and solution files.

I could also have created a counter function that uses a closure to count the IDs:
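A minimal sketch of that idea, assuming a ref cell captured in a closure (again my own illustration, not the FAKE code):

    // Hypothetical closure-based counter: each call returns the next project id.
    let makeCounter start =
        let current = ref start
        fun () ->
            let id = current.Value
            current.Value <- id + 1
            id

    let nextProjectId = makeCounter 0
    // nextProjectId() returns 0, then 1, then 2, ...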

This would have reduced the number of arguments we must pass to the function, and it would remove the implicit reference to the project IDs and the project length.

Feel free to contribute!

C# Assemblies

The assemblies used in this file are also written in C#, so this is an example of how C# assemblies can be used in F# files without much effort. The downside is that a "list" in C# isn't the same as a "list" in F#, so some conversions are needed.

Also, currying or partial application in F# isn't possible with C# objects. If it were, I think my implementation would look just that bit more functional.

Conclusion

Personally, I was surprised that I could write a full implementation in just 2-3 hours. F# is a fun language to play with, and one in which you can quickly write solid, declarative implementations. I hope I can use my functional skills more in actual production projects.

Writing this StyleCop integration has increased my interest in functional programming, and I expect that not only my functional programming skills will improve in the future, but also my object-oriented programming skills.

Thanks for reading and check the implementation on my GitHub.

Remember that I only worked on this for 2-3 hours, so contribute to the FAKE library if you have any suggestions, because everything can be written better.

Categories: Technology
Tags: Code Quality, F#
written by: Stijn Moreels

Posted on Thursday, June 22, 2017 2:41 PM

by Toon Vanhoutte

In this blog post, I'll explain the routing slip pattern in depth and how you can leverage it within enterprise integration scenarios. As always, I'll have a look at the benefits, but the pitfalls will also get some well-deserved attention.

The Pattern

Introduction

A routing slip is a configuration that specifies a sequence of processing steps (services). This routing slip must be attached to the message to be processed. Each service (processing step) is designed to receive the message, perform its functionality (based on the configuration) and invoke the next service. In that way, a message gets processed sequentially by multiple services, without the need for a coordinating component. A schema of the pattern can be found in Enterprise Integration Patterns.

Some examples of this pattern are:

Routing Slip

Routing slips can be described in any language; JSON and XML are quite popular. An example of a simple routing slip can be found below. The header contains the name of the routing slip and a counter that carries the current step number. Each service is represented by a routing step. A step has its own name, to identify the service to be invoked, and specific key-value configuration pairs.
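Purely as an illustration (the field names and values are one possible choice, not a standard), such a routing slip could look like this in JSON:

    {
      "routingSlip": {
        "name": "Order_To_SAP",
        "currentStep": 0,
        "steps": [
          { "name": "DecodeAS2",      "config": { "agreement": "PartnerA" } },
          { "name": "TransformOrder", "config": { "map": "PartnerA_To_SAP.xslt" } },
          { "name": "SendToSAP",      "config": { "idocType": "ORDERS05" } }
        ]
      }
    }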

Note that this is just one way to represent a routing slip. Feel free to add your personal flavor…

Assign Routing Slip

There are multiple ways to assign a routing slip to a message. Let's have a look:

  • External: the source system already attaches the routing slip to the message
  • Static: when a message is received, a fixed routing slip is attached to it
  • Dynamic: when a message is received, a routing slip is attached, based on some business logic
  • Scheduled: the integration layer has scheduled routing slips that also contain a command to retrieve a message

Service

A service is considered a "step" within your routing slip. When defining a service, you need to design it to be generic. The logic executed within the service must be based on the configuration, if any is required. Ensure your service has a single responsibility and a clearly bounded scope.

A service must consist of three steps:

  • Receive the message
  • Process the message, based on the routing slip configuration
  • Invoke the next service, based on the routing slip configuration

There are multiple ways to invoke services:

  • Synchronous: the next service is invoked without any persistence in between (e.g. in memory). This has the advantage that it will perform faster.
  • Asynchronous: the next service is invoked with persistence in between (e.g. a queue). This has the advantage that reliability increases, but performance degrades.

Think about the desired way to invoke services. If required, a combination of sync and async can be supported.

Advantages

Encourages reuse

Integrations are composed of reusable and configurable building blocks. The routing slip pattern forces you to analyze, develop and operate in a streamlined manner. Reuse is heavily encouraged on different levels: the way analysis is performed, how patterns are implemented, the way releases are rolled out and how operational tasks are performed. One unified way of working, built on reusability.

Configuration based

Your integration is completely driven by the assigned routing slip. There are no hard-coded links between components. This allows you to change its behavior without the need for a redeployment. This configuration also serves as a great source of documentation, as it explains exactly what message exchanges are running on your middleware and what exactly they do.

Faster release cycles

Once you have set up a solid routing slip framework, you can increase your release cadence. By leveraging your catalogue of reusable services, you heavily benefit from previous development efforts. The focus is only on the specifics of a new message exchange, which are mostly data bound (e.g. mapping). There's also a tremendous increase of agility, when it comes to small changes. Just update the routing slip configuration and it has an immediate effect on your production workload.

Technology independent

A routing slip is agnostic to the underlying technology stack. The way the routing slip is interpreted is, of course, specific to the technology used. This introduces ways to have a unified integration solution, even if it is composed of several different technologies. It also enables cross-technology message exchanges. As an example, you can have an order that is received via an AS2 Logic App, transformed and sent to an on-premises BizTalk Server that inserts it into the mainframe, all governed by a single routing slip configuration.

Provides visibility

A routing slip can introduce more visibility into the message exchanges, certainly from an operational perspective. If a message encounters an issue, operations personnel can immediately consult the routing slip to see where the message comes from, which steps have already been executed and where it is heading. This visibility can be improved by updating the routing slip with some extra historical information, such as the service start and end time. Why not even include a URL in the routing slip that points to a wiki page or knowledge base about that interface type?

Pitfalls

Not enough reusability

Not every integration project is well suited to the routing slip pattern. During the analysis phase, it's important to identify the integration needs and to see if there are a lot of similarities between all message exchanges. When a high level of reusability is detected, the routing slip pattern might be a good fit. If all integrations are too heterogeneous, you'll introduce more overhead than benefits.

Too complex logic

A common pitfall is adding too much complexity to the routing slip. Try to stick as much as possible to a sequential series of steps (services) that are executed. Some conditional decision logic inside a routing slip might be acceptable, but define clear boundaries for such logic. Do not start writing your own workflow engine, with its own workflow language. Keep the routing slip logic clean and simple, to stay true to the purpose of a routing slip.

Limited control

In case of maintenance of the surrounding systems, you often need to stop a message flow. Let's take the scenario where you face the following requirement: "Do not send orders to SAP for the coming 2 hours". One option is to stop a message exchange at its source, e.g. stop receiving messages from an SFTP server. In case this is not accepted, as these orders are also sent to other systems that should not be impacted, things get more complicated. You can stop the generic service that sends a message to SAP, but then you also stop sending other message types… Think about this upfront!

Hard deployments

A very common pain point of a high level of reuse is the impact of upgrading a generic service that is used all over the place. There are different ways to reduce the risks of such upgrades, of which automated system testing is an important one. Within the routing slip, you can explicitly specify the version of a service you want to invoke. In that way, you can upgrade services gradually to the latest version, without the risk of a big-bang deployment. Define a clear upgrade policy, to avoid too many different versions of a service running side by side.

Monitoring

A message exchange is spread across multiple loosely coupled service instances, which can pose a monitoring challenge. Many technologies offer great monitoring insights for a single service instance, but lack an overall view across multiple service instances. Introducing a correlation ID into your routing slip can greatly improve the monitoring experience. This ID can be generated the moment you initialize a routing slip.

Conclusion

Routing slips are a very powerful mechanism to deliver unified and robust integrations in a fast way. The key takeaways of this blog post are:

  • Analyze in depth whether you can benefit from the routing slip pattern
  • Limit the complexity that the routing slip resolves
  • Have explicit versioning of services inside the routing slip
  • Include a unique correlation ID into the routing slip
  • Add historical data to the routing slip

Hope this was a useful read!
Toon

 

Categories: Architecture
Tags: Design
written by: Toon Vanhoutte

Posted on Tuesday, June 20, 2017 11:24 PM

by Pieter Vandenheede

In this blog post I'll explain a real-world example of a Logic App used to provide the short links we use to promote the posts appearing on our blog. Ready for the journey as I walk you through it?

Introduction

At Codit, I manage the blog. We have some very passionate people on board who like to invest their time to get to the bottom of things and - also very important - share it with the world!
That small part of my job means I get to review blog posts on a technical level before they are published. It's always good to have an extra pair of eyes reading a post before publishing it to the public, so this definitely pays off!

An even smaller part of publishing blog posts is making sure they get enough coverage. Sharing them on Twitter, LinkedIn or even Facebook is part of the job for our devoted marketing department! And analytics around these shares on social media definitely come in handy! For that specific reason we use Bitly to shorten our URLs.
Every time a blog post got published, someone needed to add it manually to our Bitly account and send out an e-mail. This takes only a small amount of time, but as you can imagine it adds up quickly with the number of posts we have been generating lately!

Logic Apps to the rescue!

I was looking for an excuse to start playing with Logic Apps and they recently added Bitly as one of their Preview connectors, so I started digging!

First, let's try and list the requirements of our Logic App to-be:

Must-haves:

  • The Logic App should trigger automatically whenever a new blog post is published.
  • It should create a short link, specifically for usage on Twitter.
  • It also should create a short link, specifically for LinkedIn usage.
  • It should send out an e-mail with the short links.
  • I want the short URLs to appear in the Bitly dashboard, so we can track the click-through rate (CTR).
  • I want to keep Azure consumption to a minimum.

Nice-to-haves:

  • I want the Logic App to trigger immediately after publishing the blog post.
  • I want the e-mail to be sent out to me, the marketing department and the author of the post for (possibly) immediate usage on social media.
  • If I resubmit a Logic App run, I don't want new URLs to be created (idempotency); I want to keep the ones already in the Bitly dashboard.
  • I want the e-mail to appear as if it was coming directly from me.

Logic App Trigger

I could easily fill in one of the first requirements, since the Logic Apps RSS connector provides a very easy way to trigger a Logic App based on an RSS feed. Our Codit blog RSS feed seemed to do the trick perfectly!

Now it's all about timing the polling interval: if we poll every minute we get the e-mail faster, but we will spend more on Azure consumption since the Logic App gets triggered more often... I decided 30 minutes would probably be good enough.

Now I needed to try and get the URL for any new posts that were published. Luckily, the "links - Item" dynamic content provides the perfect way of doing that. The Logic Apps designer conveniently detects that this might be an array of links (in case two posts get published at once) and places it within a "For each" shape!

Now that I had the URL(s), all I needed to do was save the Logic App and wait until a blog post was published to test it. In the Logic App "Runs history" I was able to click through and see for myself that I got the links array nicely:

It seems there is only one item in the array for each blog post, which is perfect for our use case!

Shortening the URL

For this part of the exercise I needed several things:

  • I actually need two URLs: one for Twitter and one for LinkedIn, so I need to call the Bitly connector twice!
  • Each link gets a little extra information in the query string, called UTM codes. If you are unfamiliar with those, read up on UTM codes here. (In short: they add extra visibility and tracking in Google Analytics.)
    So I needed to concatenate the original URL with a static UTM string plus one part that needed to be dynamic: the UTM campaign.

For that last part (the campaign): our CMS already puts a cleaned-up version of the blog post title in the last segment of the published URL. This seems ideal for our purpose here.

However, due to a lack of knowledge of the Logic Apps syntax, I got a bit frustrated and - at first - created an Azure Function to do just that (extract the interesting part from the URL):

I wasn't pleased with this, but at least I was able to get things running...
It did, however, mean I needed extra, unwanted Azure resources:

  • An extra Azure Storage account (to store the function in)
  • An Azure App Service plan to host the function
  • An Azure Function to do the trivial task of some string manipulation

After some additional (but determined) trial and error late in the evening, I ended up doing the same in a Logic App Compose shape! Happy days!

Inputs: @split(item(), '/')[add(length(split(item(), '/')), -2)]

It takes the URL, splits it into an array based on the slash ('/') and takes the part that is interesting for my use case. See for yourself:

Now I still needed to concatenate all pieces of string together. The concat() function seems to be able to do the trick, but an even easier solution is to just use another Compose shape:

Concatenation comes naturally to the Compose shape!
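For illustration, the inputs of that second Compose shape could look something like this; the UTM parameter values and the reference to the earlier Compose action are assumptions, not the exact ones we used:

    Inputs: @concat(item(), '?utm_source=twitter&utm_medium=social&utm_campaign=', outputs('Compose'))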

Then I still needed to create the short links by calling the Bitly connector:

Let's send out an e-mail

Sending out an e-mail using my Office 365 account is actually the easiest thing ever:

Conclusion

My first practical Logic App seems to be a hit, and it probably saves us about half an hour of work every week. A few hours of Logic App "R&D" will definitely pay off in the long run!

Here's the overview of my complete Logic App:

Some remarks

During development, I came across what appear to me to be some limitations:

  • The author of the blog post is not in the output of the RSS connector, which is a pity! This would have allowed me to use his/her e-mail address directly or, given his/her name, to look up the e-mail address using the Office 365 Users connector!
  • I'm missing some kind of expression shape in Logic Apps!
    Coming from BizTalk Server, where expression shapes containing a limited form of C# code are very handy in an orchestration, this is something that should be included one way or the other (without the Azure Function workaround).
    A few lines of code in there would be great for dirty work like string manipulation, for example.
  • It took me a while to get my head around the Logic Apps syntax.
    It's not really explained in the documentation when to use @function() versus @{function()}. It's not that hard at all once you get the hang of it, but unfortunately it took me a lot of save errors and even some run-time errors (not caught at design time) to get to that point. Might be just me, however...
  • I cannot rename API connections in my Azure Resource Group. Some generic names like 'rss', 'bitly' and 'office-365' are used. I can set some connection properties so they appear nicely in the Logic App however.
  • We have Office365 Multi-Factor Authentication enabled at our company. I can authorize the Office365 API connection, but this will only last for 30 days. I might need to change to an account without multi-factor authentication if I don't want to re-authorize every 30 days...

Let me know what you think in the comments! Is this the way to go?
Any alternative versions I could use? Any feedback is more than welcome.

In an upcoming blog post I will take some of our Logic Apps best practices to heart and optimize this Logic App.

Have a nice day!
Pieter

Categories: Azure
written by: Pieter Vandenheede

Posted on Friday, June 16, 2017 5:00 PM

by Toon Vanhoutte

Lately, I was working at a customer that has invested heavily in BizTalk Server on premises during the last decade. They are considering migrating parts of their existing integrations to Logic Apps, to leverage the smooth integration with modern SaaS applications. A very important aspect is the ability to reuse their existing schemas and maps as much as possible. BizTalk schemas and maps can easily be used within the Logic Apps Integration Account, but there is no support for extension objects at the moment. Let's have a look at how we tackled this problem.

Extension objects are used to consume external .NET libraries from within XSLT maps. This is often required to perform database lookups or complex functions during a transformation. Read more about extension objects in this excellent blog.

Analysis

Requirements

We are facing two big challenges:

  1. We must execute the existing XSLTs with extension objects in Logic Apps
  2. On-premises Oracle and SQL databases must be accessed from within these maps

Analysis

It's clear that we should extend Logic Apps with non-standard functionality. This can be done by leveraging Azure Functions or Azure API Apps. Both allow custom coding, integrate seamlessly with Logic Apps and offer the following hybrid network options (when using App Service Plans):

  • Hybrid Connections: most applicable for lightweight integrations and development / demo purposes
  • VNET Integration: if you want to access a number of on-premises resources through your Site-to-Site VPN
  • App Service Environment: if you want to access a large number of on-premises resources via ExpressRoute

As the pricing models are nearly identical (in both cases we must use an App Service Plan), the choice was made for Azure API Apps. The main reason was the already existing Web API knowledge within the organization.

Design

A Site-to-Site VPN is used to connect to the on-premises SQL and Oracle databases. By using a standard App Service Plan, we can enable VNET integration on the custom Transform API App. Behind the scenes, this creates a Point-to-Site VPN between the API App and the VNET, as described here. The Transform API App can easily be consumed from the Logic App, while being secured with Active Directory authentication.

Solution

Implementation

The following steps were needed to build the solution. More details can be found in the referenced documentation.

  1. Create a VNET in Azure. (link)
  2. Setup a Site-to-Site VPN between the VNET and your on-premises network. (link)
  3. Develop an API App that executes XSLT's with corresponding extension objects. (link)
  4. Provide Swagger documentation for the API App. (link)
  5. Deploy the API App. Expose the Swagger metadata and configure CORS policy. (link)
  6. Configure VNET Integration to add the API App to the VNET. (link)
  7. Add Active Directory authentication to the API App. (link)
  8. Consume the API App from within Logic Apps.

Transform API

The source code of the Transform API can be found here. It leverages Azure Blob Storage to retrieve the required files. The Transform API must be configured with the required app settings, which define the blob storage connection string and the containers to which the artefacts will be uploaded.

The Transform API offers a single Transform operation, which requires three parameters (a rough sample request is sketched after the list):

  • InputXml: the byte[] that needs to be transformed
  • MapName: the blob name of the XSLT map to be executed
  • ExtensionObjectName: the blob name of the extension object to be used
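As an illustration only, a request body for the Transform operation might look roughly like this; the property casing and blob names are assumptions, and the InputXml byte array would be sent base64-encoded:

    {
      "InputXml": "<base64-encoded XML payload>",
      "MapName": "Sample_Transformation.xslt",
      "ExtensionObjectName": "Sample_ExtensionObjects.xml"
    }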

Sample

You can run this sample to test the Transform API with custom extension objects.

Input XML

This is a sample document that can be provided as input for the Transform action.

Transformation XSLT

This XSLT must be uploaded to the right blob storage container and will be executed during the Transform action.

Extension Object XML

This extension object must be uploaded to the right blob storage container and will be used to load the required assemblies.

External Assembly

Create an assembly named TVH.Sample.dll that contains the class Common.cs. This class contains a simple method to generate a GUID. Upload this assembly to the right blob storage container, so it can be loaded at runtime.
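The class itself isn't shown here, but based on the description it could look roughly like this (the method name is an assumption):

    namespace TVH.Sample
    {
        public class Common
        {
            // Simple helper that is called from the XSLT via the extension object;
            // it just generates a new GUID as a string.
            public string GetGuid()
            {
                return System.Guid.NewGuid().ToString();
            }
        }
    }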

Output XML

Deploy the Transform API using the instructions on GitHub. You can easily test it using the Request / Response actions:

As a response, you should get the following output XML, which contains the generated GUID.

Important remark: do not forget to add security to your Transform API (step 7), as it is accessible on the public internet by default!

Conclusion

Thanks to the Logic Apps extensibility through API Apps and their VNET integration capabilities, we were able to build this solution in a very short time span. The solution offers an easy way to migrate BizTalk maps as-is towards Logic Apps, which is a big time saver! Access to resources that remain on premises is also a big plus nowadays, as many organizations have a hybrid application landscape.

Hope to see this functionality out-of-the-box in the future, as part of the Integration Account!

Thanks for reading. Sharing is caring!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Thursday, June 8, 2017 3:11 PM

by Toon Vanhoutte

This post contains 10 useful tips for designing enterprise integration solutions on top of Logic Apps. It's important to think upfront about reusability, reliability, security, error handling and maintenance.

Democratization of integration

Before we dive into the details, I want to provide some reasoning behind this post. With the rise of cloud technology, integration takes a more prominent role than ever before. In Microsoft's integration vision, democratization of integration is on top of the list.

Microsoft aims to take integration out of its niche market and offer it as an intuitive and easy-to-use service to everyone. The so-called citizen integrators are now capable of creating lightweight integrations without the steep learning curve that, for example, BizTalk Server requires. Such integrations are typically point-to-point, user-centric and have some accepted level of fault tolerance.

As an integration expert, you must be aware of this. Enterprise integration faces completely different requirements than lightweight citizen integration: loose coupling is required, no message loss is accepted because the interfaces are mission critical, integrations must be optimized for operations personnel (monitoring and error handling), etc.

Keep this in mind when designing Logic App solutions for enterprise integration! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The advice below can give you a jump start in designing reliable interfaces within Logic Apps!

Design enterprise integration solutions

1. Decouple protocol and message processing

Once you have created a Logic App that receives a message via a specific transport protocol, it's extremely difficult to change the protocol afterwards. This is because the subsequent actions of your Logic App often have a hard dependency on your protocol trigger / action. The advice is to perform the protocol handling in one Logic App and hand the message over to another Logic App that performs the message processing. This decoupling allows you to change the receiving transport protocol in a flexible way, in case the requirements change or a certain protocol (e.g. SFTP) is not available in your DEV / TEST environment.

2. Establish reliable messaging

You must realize that every action you execute is performed over an underlying HTTP connection. By its nature, an HTTP request/response is not reliable: the service is not aware if the client disconnects during request processing. That's why receiving messages must always happen in two phases: first you mark the data as returned by the service; second you label the data as received by the client (in our case the Logic App). The Service Bus Peek-Lock pattern is a great example that provides such at-least-once reliability. Another example can be found here.
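As an illustration of that two-phase approach, this is roughly how a peek-lock receive and complete looks with the Microsoft.Azure.ServiceBus SDK (the queue name and connection string are placeholders):

    using System.Threading.Tasks;
    using Microsoft.Azure.ServiceBus;
    using Microsoft.Azure.ServiceBus.Core;

    public static class PeekLockSample
    {
        public static async Task ReceiveOneAsync(string connectionString)
        {
            // Phase 1: the message is handed out by the service, but only locked, not removed.
            var receiver = new MessageReceiver(connectionString, "orders", ReceiveMode.PeekLock);
            var message = await receiver.ReceiveAsync();
            if (message == null) return;

            // ... process the message (e.g. hand it over to the next processing step) ...

            // Phase 2: acknowledge successful processing, which removes the message from the queue.
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }
    }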

3. Design for reuse

Real enterprise integration is composed of several common integration tasks such as: receive, decode, transform, debatch, batch, enrich, send, etc… In many cases, each task is performed by a combination of several Logic App actions. To avoid reconfiguring these tasks over and over again, you need to design the solution upfront to encourage reuse of these common integration tasks. You can for example use the Process Manager pattern that orchestrates the message processing by reusing nested Logic Apps or introduce the Routing Slip pattern to build integration on top of generic Logic Apps. Reuse can also be achieved on the deployment side, by having some kind of templated deployments of reusable integration tasks.

4. Secure your Logic Apps

From a security perspective, you need to take into account both role-based access control (RBAC) to your Logic App resources and runtime security considerations. RBAC can be configured in the Access Control (IAM) tab of your Logic App or at the Resource Group level. The runtime security really depends on the triggers and actions you're using. As an example: Request endpoints are secured via a Shared Access Signature that must be part of the URL, and IP restrictions can be applied. Azure API Management is the way to go if you want to govern API security centrally, on a larger scale. It's a good practice to assign the minimum required privileges (e.g. read only) to your Logic Apps.

5. Think about idempotence

Logic Apps can be considered composite services, built on top of several APIs. APIs leverage the HTTP protocol, which by its nature can cause data consistency issues. As described in this blog, there are multiple ways the client and server can get misaligned about the processing state. In such situations, clients will mostly retry automatically, which could result in the same data being processed twice at the server side. Idempotent service endpoints are required in such scenarios, to avoid duplicate data entries. Logic Apps connectors that provide Upsert functionality are very helpful in these cases.

6. Have a clear error handling strategy

With the rise of cloud technology, exception and error handling become even more important. You need to cope with failure when connecting to multiple on-premises systems and cloud services. With Logic Apps, retry policies are your first resort for building resilient integrations. You can configure a retry count and interval on every action, but there's no support for exponential retries or the circuit breaker pattern. In case the retry policy doesn't solve the issue, it's advised to return a clear error description within sync integrations and to ensure a resumable workflow within async integrations. Read here how you can design a good resume / resubmit strategy.

7. Ensure decent monitoring

Every IT solution benefits from good monitoring. It provides visibility and improves the operational experience for your support personnel. If you want to expose business properties within your monitoring, you can use Logic Apps custom outputs or tracked properties. These can be consumed via the Logic Apps Workflow Management API or via OMS Log Analytics. From an operational perspective, it's important to be aware that there is an out-of-the-box alerting mechanism that can send emails or trigger Logic Apps in case a run fails. Unfortunately, Logic Apps has no built-in support for Application Insights, but you can leverage extensibility (a custom API App or Azure Function) to achieve this. If your integration spans multiple Logic Apps, you must foresee correlation in your monitoring / tracing! Find more details about monitoring in Logic Apps here.

8. Use async wherever possible

Solid integrations are often characterized by asynchronous messaging. Unless the business requirements really demand request/response patterns, try to implement them asynchronously. This has the advantage that you introduce real decoupling, both from a design and a runtime perspective. Introducing a queuing system (e.g. Azure Service Bus) in fire-and-forget integrations results in highly scalable solutions that can handle an enormous amount of messages. Retry policies in Logic Apps must have different settings depending on whether you're dealing with async or sync integration. Read more about it here.

9. Don't forget your integration patterns

Whereas BizTalk Server forces you to design and develop against specific integration patterns, Logic Apps is more intuitive and easier to use. This comes with a potential downside: you might forget about integration patterns, because they are not suggested by the service itself. As an integration expert, it's your duty to determine which integration patterns should be applied to your interfaces. Loose coupling is common in enterprise integration. You can for example introduce Azure Service Bus, which provides a publish/subscribe architecture. Its message size limitation can be worked around by leveraging the Claim Check pattern, with Azure Blob Storage. This is just one example of introducing enterprise integration patterns.

10. Apply application lifecycle management (ALM)

The move to a PaaS architecture should be done carefully and must be governed well, as described here. Developers should not have full access to the production resources within the Azure portal, because the change of one small setting can have an enormous impact. Therefore, it's very important to set up ALM to deploy your Logic App solutions throughout the DTAP street. This ensures uniformity and avoids human deployment errors. Check this video to get a head start on continuous integration for Logic Apps and read this blog on how to use Azure Key Vault to retrieve passwords within ARM deployments. Consider ALM an important aspect of your disaster recovery strategy!

Conclusion

Yes, we can! Logic Apps really is a fit for enterprise integration, if you know what you're doing! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The Logic Apps framework is a truly amazing and stable platform that brings a whole range of new opportunities to organizations. The way you use it should depend on the type of integration you are facing!

Interested in more? Definitely check out this session about building loosely coupled integrations with Logic Apps!

Any questions or doubts? Do not hesitate to get in touch!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte