Codit Blog

Posted on Monday, March 6, 2017 2:18 PM

by Toon Vanhoutte

Lately, I was working on a proof of concept with Logic Apps. I created a Logic App that looped over all files in a specific folder and processed them. I was curious about the performance, so I copied 10,000 files into the folder. The Logic App kicked off and started processing them. After a while the Logic App ended up in a failed state. Let's have a look at what happened.

Apparently, I stumbled upon a limitation that is documented over here.

The documentation suggests using the query action to filter arrays. It offers the following capabilities:

You can indeed filter an array based on a specific query. However, I did not have an object available that contains the position of an item within the array. I wanted assurance that I would not hit the for each limit, so this was not an option. If you know a way to get the position of an item within the array, please let me know via the comments section below.

I continued my search within the Logic Apps Workflow Definition Language and found the @take function.  This is what the documentation states:

This did the trick. It takes the first 5000 items from an array. Luckily, you do not get an out-of-range exception if the incoming array contains fewer items!
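
As a minimal sketch of how this could look in the code view (the trigger and action names below are illustrative, not taken from my actual Logic App), the for each loop simply iterates over the trimmed array:

"For_each_file": {
    "type": "Foreach",
    "foreach": "@take(body('List_files_in_folder'), 5000)",
    "runAfter": {
        "List_files_in_folder": [ "Succeeded" ]
    },
    "actions": {
        "Process_file": {
            "type": "Compose",
            "inputs": "@item()",
            "runAfter": {}
        }
    }
}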

It's always a good practice to validate your design upfront against the Logic Apps limits.

Hope this helps!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Wednesday, May 27, 2015 4:14 PM

by Tom Kerkhove

Microsoft's Yammer has been around for a while and people who are part of one or more networks will agree that Yammer can turn into Spammer.

In this blog post I demonstrate how you can automatically post to a Slack channel.

This blog post was also released on my personal blog.

For each new conversation & comment, Yammer will send you an email, resulting in mail floods. The easy fix would be to disable the notification emails, but then you risk missing out on interesting or important discussions.

On our current project we use Slack to communicate with each other and it's a really nice tool - nice and clean, just how I like it.

Slack Logo

So let's get rid of the notification emails and notify your team when someone starts a new conversation on Yammer! This is where Microsoft Azure App Services comes in, more specifically Microsoft Azure API & Logic Apps.

With Azure Logic Apps I've created a flow where one API App listens on a Yammer group for new conversations, while a Slack API App notifies us in a channel when something pops up.

How does that look?!

When I create a new conversation in Yammer, e.g. "We're ready to go in production" -
New Yammer Conversation

The Yammer API App in my Logic App will notice that there is a new conversation and will send a message to my team's Slack channel as the Project Announcements-bot. 

Slack Bot Response

Want it yourself? Here's how!

Before getting our hands dirty let's summarize what's on today's schedule.

We will start by provisioning the API Apps we will use from the Azure Marketplace. After that, we will create a new Logic App that describes the flow of our app.

Provisioning the API Apps

As of today you have two options for provisioning your API Apps - one is to provision them upfront, where you have more control over naming and such; the second is to provision them while you are designing your Logic App and let Azure take care of the naming. Be aware: Azure uses names like YammerConnector1431593752292 that don't really say where they're being used.

Since I always want my component names to be as self-describing as possible, we will provision two API Apps up front:

  • A Yammer App that will trigger our Logic app when a new conversation is posted
  • A Slack App that will send a message to Slack as a Bot

Provisioning an API App is super simple: Browse to the new Azure Portal > Click New > Select Web + Mobile > Browse the Azure Marketplace > Select the API Apps section > Select the API App you want.

After you've selected your API App, you basically give it a name, assign the App Plan & Resource Group:
Provision API App

Azure will start provisioning the API App for you in the background. While that is happening, let's have a look at the Connector Info.

Before actually provisioning the App, you see that each API App or Connector gives you an overview of its capabilities in a Logic App. Here you can see that the Slack Connector will only be able to act in a Logic App.
Slack Connector overview

Now when we look at the Yammer Connector Info, we see that it can act within a Logic App but also trigger it on a certain condition.
Yammer Connector overview

Defining the flow in a Logic App

Before we can start defining our flow we need to create a new Logic App.

In the Azure Portal click New > Select Web + Mobile > Logic App. Give it a self-describing name and add it to the same App Plan as your provisioned API Apps.

Once it is configured, open it and click Triggers and Actions
Configure Logic App

We will define our flow by arranging the sequence of connectors. You can find our provisioned connectors in the side bar; click on your Yammer connector to add it.
Clean Logic App

After that, the default card will be replaced with your Yammer connector. As you can see we first need to authenticate with Yammer. Click Authorize
Basic Yammer Card

A pop-up will show up to do the OAuth dance with Yammer. After you've logged in, you will need to grant access to your Logic App.

Read the statement carefully and click Allow if you agree.

Yammer Authentication

(In order to complete the following steps you need to allow access)

Now that you've allowed access to your Yammer account, it's interesting to know that the authentication token will be stored in the secure store of the Gateway (a Gateway is used by API Connectors to communicate with each other and with outbound services). This is because the Gateway will handle all the authentication with Yammer for us.

Once that's done you get an overview of all the triggers the Yammer connector has. Luckily the only one available is the one we need, so click New Message.
Yammer Triggers

Configuring the trigger is fairly easy - we define the trigger frequency at which the connector will look for new messages. Next to that, we assign the Group Id of the Yammer group we are interested in. The granularity of your trigger frequency depends on the hosting App Plan; in my example I'm using 1 minute, which requires a Standard-tier App Plan.

You can find the group Id by browsing to your group and copying the Feed Id.

https://www.yammer.com/Your-Network/#/threads/inGroup?type=in_group&feedId=579250

Yammer Connector Configured

Click the checkmark to save your configuration.

Go back to the side bar and click on your Slack Connector to add it to the pane. Here we need to authenticate with our Slack account by clicking Authorize
Basic Slack Connector

Just like with Yammer, Azure will request access to your Slack account to post messages. 
Slack Authentication

Our last step is to configure the Slack connector.

What we will do is send the original message as a quote along with who posted it and a link to the conversation. In Slack that results in the following markup statement -

>>> _"Original-Message"_ by *User* _(Url)_

To achieve this we will use the @concat function to assign the Text value -
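
The exact property names depend on the Yammer connector's output schema, so treat the names below (MessageText, SenderName, WebUrl) as placeholders rather than the connector's real field names; the shape of the expression is what matters:

@concat('>>> _"', triggers().outputs.body.MessageText, '"_ by *', triggers().outputs.body.SenderName, '* _(', triggers().outputs.body.WebUrl, ')_')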

This statement is retrieving some of the output values of the Yammer connector.

We will also configure which Slack channel to send it to. Optionally you can assign a name to the Slack bot and give it an icon. Here I used the name of my Yammer group as the Slack bot name.
Slack Connector Configured

Click the checkmark to save your configuration & save the flow of your logic app. 
Save Logic App

After a few seconds/minutes, depending on your trigger configuration, you will see that the Yammer connector picked up your new message and triggered your Logic App. 
Logic App - Run Overview

Now you should see a new message in your Slack channel!

Ship it!

That's it - we're done!

Your Yammer connector will now poll for new conversations in your Yammer group every cycle you've defined in its configuration. If there are new ones, your Logic App will start processing them and you will be notified in Slack!

Wrapping up

As you can see, you can very easily use Azure API & Logic Apps to create small IFTTT-like flows. Nevertheless, you can also build more full-blown integration scenarios by using the more advanced BizTalk API Apps!

If you want, you can even expand this demo and add support for multiple Yammer groups. To do so you'll need to open the Code View and copy additional triggers into the JSON file (thank you Sam Vanhoutte for the tip on how to create multiple triggers).

Keep in mind that the name of the Slack bot doing the posting is currently hardcoded; unfortunately the Yammer app doesn't expose the name of the group, so this is something you'll have to work around.

Can't get enough of this? You can build your own API App or read Sam Vanhoutte's initial thoughts on Azure App Services!

Thanks for reading,

Tom Kerkhove

Categories: Azure
Tags: Logic Apps
written by: Tom Kerkhove

Posted on Tuesday, March 24, 2015 6:01 PM

by Sam Vanhoutte

In this post, I'm sharing some of my thoughts about the brand-new Azure App Service, which was announced by Scott Guthrie and Bill Staples.

Today, Scott Guthrie and Bill Staples announced an interesting new Azure service: Azure App Service. Actually, it's a set of services combined under one umbrella, allowing customers to build rich business-oriented applications. Azure App Service is now the new home for:

  • Azure Web Apps (previously called Azure Websites)
  • Azure Mobile Apps (previously Mobile Services)
  • Azure Logic Apps (the new 'workflow' style apps)
  • Azure API Apps (previously announced as Microservices)

It goes without saying that Logic Apps and API Apps will be the most important for integration people. Azure Microservices were first announced in public at the Integrate 2014 event, and it's clear that integration is at the core of App Services, which should make us happy.

Codit has been part of the private preview program 

Codit has been actively involved in private preview programs and we want to congratulate the various teams on the excellent job they have done. They have really been listening to the feedback and have made incredible improvements over the past months. While everyone knows there is still a lot to do, it seems they are ready to take more feedback, as everything is public now.

My personal advice would be to look at it with an open mind, knowing that a lot of things will be totally different from what we've been doing over the past 10-15 years (with BizTalk). I'm sure a lot of things will (have to) happen in order to make mission-critical, loosely coupled integration solutions run on App Services. But I am confident they will happen.

Is this different from what was said at Integrate 2014?

As Integrate 2014 was solely focused on BizTalk Services, the other things (such as Web Sites and Media apps) were not mentioned. But most of the things we saw and heard back then have now made it to the public preview.

  • Azure Microservices are now called API Apps and are really just web APIs in your favorite language, enriched with Swagger metadata and version control. These API Apps can be published to a gallery (a public gallery might come later on) or directly to a resource group in your subscription.
  • The Workflows (they used to be called Flow Apps) are now called Logic Apps.  These will allow us to combine various API apps from the gallery in order to instrument and orchestrate logical applications (or workflows).

Important concepts

I tried to list the most important concepts below.

All of the components are built on top of Azure Websites.  This way, they can benefit from the existing out of the box capabilities there:

  • Hybrid connectivity: Hybrid Connections or Azure Virtual Networking.  Both of these are available for any API app you want to write.  And the choice is up to the user of your API app!
  • Auto-scaling: do you want to scale your specific API app automatically?  That's perfectly possible now.  If you have a transformation service deployed and the end of month message load needs to be handled, all should be fine!
  • New pricing model (more pay per use, compared to BizTalk Services)
  • And many more: speed of deployment, the new portal, etc.

API Apps really form the core of this platform. They are RESTful APIs, with Swagger metadata that is used to model and link the workflows (you can flow values from one API App to another in Logic Apps).

API Apps can be published to the 'nuget-based' gallery, or directly to a resource group in your subscription. Once publishing to the public gallery becomes possible over time, other users will be able to leverage your API App in their own applications and Logic Apps by provisioning an instance of that package into their own subscription. That means that all the cost and deployment hassle lies with the user of your API App.

Where I hope for improvements

As I mentioned, this is a first version of a very new service.  A lot of people have been working on this and the service will still be shaped over the coming months.  It seems the teams are taking feedback seriously and that's definitely a positive thing.  This is the feedback I posted on uservoice.  If you agree, don't hesitate to go and vote for these ideas!

  • Please think about ALM.  Doing everything in the portal (including rules, mapping, etc) is nice, but for real enterprise scenarios, we need version and source control. I really would love to see a Visual Studio designer experience for more complex workflows as well. The portal is nice for self-service and easy workflows, but it takes some time and is limited in its nature, compared to pro-dev experience in Visual Studio.
    Vote here
  • Separate configuration from flow or business logic.
    If we now have a look at the JSON file that makes up a Logic App, we can see that throughout the entire file, references are added to the actual runtime deployment of the various API Apps. We also see values for the various connectors in the JSON structure. It would really help (when deploying one flow to various staging slots) to separate configuration and runtime values from the actual flow logic.
    Vote here
  • Management
    Now it is extremely difficult to see the various "triggers" and to stop/start them. With BizTalk, we have receive locations that we can all see in one view and can stop/start (the same goes for send ports). Now all of that is encapsulated in the Logic App, and it would really be a good thing to provide more "management views". As an example, we have customers with more than 1000 receive endpoints. I want to get them in an easy-to-handle, searchable overview.
    Vote here
  • The usability in the browser has increased a lot, but I still believe it would make sense to make the cards or shapes smaller (or collapsible). This way, we'll get a better overview of the end-to-end flow, and that will be needed in the typical complex workflows we build today (including exception handling, etc.).
    Vote here

More posts will definitely follow in the coming weeks, so please stay tuned!

Categories: Azure
Tags: Logic Apps
written by: Sam Vanhoutte

Posted on Friday, March 3, 2017 2:41 PM

by Toon Vanhoutte

A good architect and a great developer have one thing in common: they are lazy! They design the solution in such a way that they can reuse common components within their solution as much as possible. This applies to any technology, and so it does for Logic Apps. Logic Apps provides many ways to benefit from reusability.

This blog post focuses on consuming existing Logic Apps from within other Logic Apps, also known as nested workflows. It's a nice feature, but as usual: be aware of the caveats!

Scenario

Let's take the following scenario as a base to start from. Logic App 1 is exposed as a web service. It consumes Logic App 2, which in turn calls Logic App 3. Logic Apps 2 and 3 are considered reusable building blocks of the solution design. As an example, Logic App 3 puts a request message on a particular queue. Below you can find the outcome of a successful run that finishes within an acceptable timeframe for a web service.

Exception Scenario

If you stick to the above design, you'll discover unpleasant behavior when you need to cope with failure. Building cloud-based solutions means dealing with failure in your design, even in this basic scenario. Let's simulate an exception in Logic App 3, by trying to put a message on a non-existent queue. As a result, Logic App 1 fails after 6 minutes of processing!

I expected a long delay and potentially a timeout, but those 6 minutes were a real surprise to me. The reason for this behavior is the default retry policies that are applied to Logic Apps. I consulted the documentation and that explains everything. Logic App 1 was fired once. Logic App 2 was retried 4 times, which results in 5 failed instances. The third workflow even got executed 25 (5x5) times.

The retry interval is specified in the ISO 8601 format. Its default value is 20 seconds, which is also the minimum value. The maximum value is 1 hour. The default retry count is 4, which is also the maximum retry count. If the retry policy definition is not specified, a fixed strategy is used with the default retry count and interval values. To disable the retry policy, set its type to None.

Optimize the retry policies

Time to overwrite those default retry policies. For Logic App 1, I do not want any retry in case Logic App 2 fails. This is achieved by updating the code view:
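
A sketch of what that could look like on the action in Logic App 1 that calls Logic App 2 (the action name and the workflow resource ID are placeholders):

"Call_LogicApp_2": {
    "type": "Workflow",
    "inputs": {
        "host": {
            "triggerName": "manual",
            "workflow": {
                "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/LogicApp2"
            }
        },
        "retryPolicy": {
            "type": "none"
        }
    },
    "runAfter": {}
}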

In Logic App 2, I configure the retry policy to retry once after 20 seconds:
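
Again a sketch; only the retryPolicy section differs, using the ISO 8601 interval mentioned above:

"retryPolicy": {
    "type": "fixed",
    "count": 1,
    "interval": "PT20S"
}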

The result is acceptable from a timing perspective:

On the other hand, the exception message we receive is completely meaningless.  Check out this post to learn more about exception handling in such a situation.

Implement fire and forget

In the previous examples, we invoked the underlying Logic App in a synchronous way: call the Logic App and only continue once the Logic App has completed its processing. For those with a BizTalk background: this is comparable with the Call Orchestration shape. As Logic Apps gives you complete freedom on where to put the Response action in your workflow, you can also go for a fire-and-forget pattern, comparable with the Start Orchestration shape. This can be achieved by placing the Response action right after the Request trigger. This way, these reusable Logic Apps execute independently of their calling process.
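
A rough sketch of the nested workflow's actions section with the Response placed first; "Do_the_actual_work" is just a placeholder Compose action and the 202 status code is my own choice here:

"actions": {
    "Response": {
        "type": "Response",
        "inputs": {
            "statusCode": 202
        },
        "runAfter": {}
    },
    "Do_the_actual_work": {
        "type": "Compose",
        "inputs": "@triggerBody()",
        "runAfter": {
            "Response": [ "Succeeded" ]
        }
    }
}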

This eventual consistency can have an impact on the way user applications are built, and it also requires good operational monitoring in case asynchronous processes fail. Note in the example below that the consuming application is not aware that Logic App 3 failed.

Update: Recently I discovered that it's even possible to leave out the Response action within the nested workflows. Just make sure to update the consuming Logic App action with the following setting: "operationOptions": "DisableAsyncPattern". This is even more fire-and-forget in style and will improve performance a little.
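
On the consuming side, that option sits at the top level of the nested workflow action; a sketch with a placeholder resource ID:

"Call_LogicApp_3": {
    "type": "Workflow",
    "inputs": {
        "host": {
            "triggerName": "manual",
            "workflow": {
                "id": "<resource ID of Logic App 3>"
            }
        }
    },
    "operationOptions": "DisableAsyncPattern",
    "runAfter": {}
}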

This solution reduces processing dependencies between the reusable Logic Apps. Unfortunately, the design is still not bullet-proof. Under a high load, throttling could occur in the underlying Logic Apps, which could result in time-outs when calling the nested workflows. A more robust design is to put a Service Bus queue in between. On the other hand, this increases the complexity of development, maintenance and operations. It's important to assess this potential throttling issue within the context of your business case. Is it really worth the effort? It depends on so many factors…

Monitoring

As a final topic, I want to demonstrate that the nested workflows all share a common identifier. The parent workflow has a specific ID for its run instance.

This ID appears in every involved Logic App run execution, in the form of a Correlation ID. This ID can be used to link / correlate the Logic App instances with each other.

This ID is handed over to the underlying workflow via the x-ms-client-tracking-id HTTP header.
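
If you call the nested workflow with a plain HTTP action instead of the built-in workflow action, you can pass the correlation along yourself via that same header; a sketch with a placeholder callback URL:

"HTTP_Call_Nested_Workflow": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "<callback URL of the nested Logic App's Request trigger>",
        "headers": {
            "x-ms-client-tracking-id": "@{workflow().run.name}"
        },
        "body": "@triggerBody()"
    },
    "runAfter": {}
}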

Feedback to the product team

It's fantastic that you get full control over the retry policies. The minimum retry interval of 20 seconds seems quite long to me if you need to deal with a synchronous web service scenario. I also found a nice suggestion to include an exponential back-off retry mechanism. Implementing circuit breakers would also be nice to have!

The monitoring experience for retried instances could be improved. In the Azure portal, they just show up as individual instances. There's no easy way to find out that they are all related to each other. It would be a great feature if all runs with the same Correlation ID were grouped together in the default view. Like it? Vote here!

Conclusion

Nested workflows in Logic Apps are a very powerful way to reuse functionality in a user-friendly manner. Think about the location of the Response action within the underlying Logic App, as this greatly impacts the runtime dependencies. Implement fire and forget if your business scenario allows it, and consider a queuing system in case you need a scalable solution that must handle a high load.

Thanks for reading!
Toon

Categories: Azure
written by: Toon Vanhoutte

Posted on Monday, February 20, 2017 3:41 PM

by Toon Vanhoutte

Lately, I tried to connect to a Service Bus queue with limited permissions (Listen only). I encountered an issue that I want to share with you, so it can save you some time!

 

When you manage a Service Bus namespace, it's important to think about security. The recommended way to deal with it is by leveraging its Shared Access Signature (SAS) authentication and authorization mechanism. You can configure SAS policies on your complete Service Bus namespace or on individual queues and topics. Use what best meets your needs!

On the 'coditblog' queue, I created a ReadOnly shared access policy that only contains the Listen claim.  This policy was intended to be used by a Logic App that only needs to read messages from the queue. 

After creating the policy, I copied the primary connection string.
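
For reference, a queue-scoped connection string copied from the portal looks roughly like this (namespace and key are placeholders); note the EntityPath segment, which the error message below refers to:

Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=ReadOnly;SharedAccessKey=<primary-key>;EntityPath=coditblog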

Then I created a Logic App from scratch by adding the Service Bus trigger 'When a message is received in a queue (auto-complete)'. A connection was created by providing a connection name and the copied connection string.

When trying to select the queue name, I got the following exception:
Could not retrieve values. ConnectionString should not include EntityPath.

I double-checked my connection string several times and tried multiple variations, but without any success. After some investigation, it turned out that the connector requires the Manage claim to navigate through the list of available queues. A misleading exception message...

Luckily we are not blocked by this! Just choose 'Enter custom value', type the queue name and you're good to go!

The Logic App starts successfully when a message arrives on the queue!

Hope this can save you some troubleshooting time!
Toon

 

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte