Codit Blog

Posted on Monday, September 18, 2017 12:45 PM

by Toon Vanhoutte

Azure's serverless PaaS offering consists of Azure Functions and Logic Apps. If you consult the documentation, you'll find that there is considerable overlap between the two. For many people, it's not clear what technology to use in which scenario. In this blog post, I discuss the main differences between these two event-driven Azure services and provide some guidance to help you make the right decision.

Comparison

Developer experience

A popular comparison states that Azure Functions is code being triggered by an event, whereas Logic Apps is a workflow triggered by an event. This is reflected in the developer experience. Azure Functions are completely written in code, which currently supports JavaScript, C#, F#, Node.js, Python, PHP, batch, bash and PowerShell. In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow definition language in the code view. Each developer of course has a personal preference. Logic Apps is much simpler to use, but this can sometimes cause limitations in complex scenarios. Azure Functions gives a lot more flexibility and responsibility to the developer.
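
To give an idea of that workflow definition language, here is a minimal sketch of a workflow definition with an HTTP request trigger and a response action (the body text is illustrative):

    {
      "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
      "contentVersion": "1.0.0.0",
      "triggers": {
        "manual": {
          "type": "Request",
          "kind": "Http"
        }
      },
      "actions": {
        "Response": {
          "type": "Response",
          "inputs": {
            "statusCode": 200,
            "body": "Hello from Logic Apps"
          },
          "runAfter": {}
        }
      },
      "outputs": {}
    }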

Connectivity

Logic Apps connects to an enormous variety of cloud / on-premises applications, ranging from Azure and Microsoft services over SaaS applications and social media to LOB systems. You can find the impressive list of connectors here. Each connector comes with an API connection, which stores the required credentials in a secure way. These API connections can be reused from within multiple Logic Apps, which is great! Azure Functions has the concept of triggers, input bindings and output bindings. Most of these bindings connect your Azure Functions to other Azure services, such as Event Hubs, Storage, DocumentDb, etc… Consult the complete list here. The HTTP binding is probably the most popular one, as it allows the creation of serverless API's. At the moment, there are no signs that Azure Functions aims to support as many bindings as Logic Apps offers connectors.
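
A sketch of that binding model in C#: an HTTP trigger exposes the function as an API, while a queue output binding writes to Azure Storage (function and queue names are illustrative):

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;

    public static class SubmitOrder
    {
        [FunctionName("SubmitOrder")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
            [Queue("orders")] IAsyncCollector<string> orderQueue)
        {
            // Forward the raw request body to the storage queue.
            string body = await req.Content.ReadAsStringAsync();
            await orderQueue.AddAsync(body);
            return req.CreateResponse(HttpStatusCode.Accepted);
        }
    }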

Exception handling

Cloud solutions need to deal with transient fault handling. Logic Apps provides out-of-the-box functionality that allows you to configure automatic retries on every action. In case this doesn't solve the problem, the workflow gets a failed status and can be resubmitted after human intervention. This guarantees an at-least-once execution model, which is pretty reliable! In Azure Functions, you have the typical try/catch options available. If you want to enable retries, you need to do the plumbing yourself, by introducing for example Polly. The way you can handle exceptions in the output binding depends on the language used and the type of output binding. This doesn't always give you the desired outcome. There are no resume / resubmit capabilities, unless you develop them yourself!
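
A minimal sketch of such plumbing with Polly, assuming a function that forwards a payload to a hypothetical downstream API:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Polly;

    public static class OrderForwarder
    {
        private static readonly HttpClient Client = new HttpClient();

        public static async Task ForwardAsync(string payload)
        {
            // Retry up to 3 times on transient HTTP failures,
            // with exponential back-off (2s, 4s, 8s).
            var retryPolicy = Policy
                .Handle<HttpRequestException>()
                .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

            await retryPolicy.ExecuteAsync(() =>
                Client.PostAsync("https://example.org/api/orders", new StringContent(payload)));
        }
    }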

State

Until recently, Azure Functions always needed to be stateless and preferably idempotent. With the announcement of Azure Durable Functions, Microsoft brings state and long-running capabilities to Azure Functions, by leveraging the Durable Task Framework. This new framework allows sequential and parallel execution of several Functions, supports long-running tasks with pre-defined timeouts and provides stateful actors without the need for external storage. The state is automatically stored in Azure Storage queues, tables and blobs, which is disaster-proof. I am looking forward to seeing how this will evolve. These long-running / stateful processes are inherently available in Logic Apps, except for the stateful actor model.
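
A sketch of a Durable Functions orchestrator that chains two activities (function and activity names are illustrative):

    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;

    public static class ProcessOrderOrchestrator
    {
        [FunctionName("ProcessOrderOrchestrator")]
        public static async Task<string> Run(
            [OrchestrationTrigger] DurableOrchestrationContext context)
        {
            // Each activity result is checkpointed to Azure Storage, so the
            // orchestration survives restarts and replays deterministically.
            var validated = await context.CallActivityAsync<string>("ValidateOrder", context.GetInput<string>());
            return await context.CallActivityAsync<string>("BookOrder", validated);
        }
    }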

Networking

Hybrid integration is a reality nowadays. Cloud services must be able to connect to on-premises resources in a secure and high-performing way. Azure Logic Apps performs this task via the On-premises Data Gateway, which needs to be installed on premises. Behind the scenes, it uses Azure Service Bus Relay to connect to the cloud in a firewall-friendly way, through encrypted channels. When using Azure Functions within an App Service Plan, you have more convenient hybrid connectivity options that reside on the network level. App Service Plans offer support for many networking options like Hybrid Connections, VNET Integration and App Service Environment. Via these options, you can integrate Azure Functions with your local network through a Site-to-Site VPN or ExpressRoute.

Deployment

Azure Resource Manager templates are the way to deploy resources across the Microsoft Azure platform. Fortunately, both Azure Functions and Logic Apps have built-in support for ARM deployments, through for example Visual Studio Release Management. Next to this, Azure Functions allows easy setup of continuous deployment triggered from sources like BitBucket, Dropbox, Git, GitHub, OneDrive and VSTS. This is ideal in case multiple and frequent contributions need to be consolidated and tested. Additionally, Azure Functions now has deployment slots in preview. These allow you to deploy and test a vNext first, before you swap that tested deployment slot with the current version in production.

Runtime

Logic Apps runs only in the cloud, as it has a dependency on Microsoft-managed connectors. As a consequence, you cannot debug, test or run Logic Apps locally. Azure Functions can easily be developed and debugged on your local workstation, which is a big plus for developer productivity. Via the Azure Functions Runtime (still in preview) you are able to deploy them on premises in Windows Containers, with SQL Server as a storage layer. Azure Functions can also run on Azure Stack, and it has been announced as part of Azure IoT Edge to execute on small devices. This hosting flexibility is a big asset in phased migration scenarios towards the cloud.

Monitoring

Per Logic App, you have a nice overview of the previous runs and their corresponding outcomes. You can filter this history based on a time period and the resulting run status. The monitoring view of a workflow run is the same as the designer view, which makes it very intuitive. For each action, you can see the status and all inputs/outputs. With one button click, you can enable integration with OMS, where you can search on tracked properties. It's on the roadmap to have a user-friendly, cross-Logic Apps dashboard on top of this OMS integration. Each Azure Function comes with a Monitor tab, where you can see the execution history. There is also a live event stream that shows near real-time processing statistics in nice graphs. On top of that, there's full integration with Application Insights, where you can take advantage of the powerful Analytics queries.
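
As a sketch, an Analytics query of this shape lists invocation counts and average durations per function over the last day:

    requests
    | where timestamp > ago(24h)
    | summarize invocations = count(), avgDurationMs = avg(duration) by name
    | order by invocations desc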

Pricing Model

Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. It's important to be aware that you also need to pay for polling triggers, which can be a hidden cost. If you want to benefit from the capabilities of the Integration Account, you should be aware that this comes with a fixed monthly bill. With Azure Functions, you have two pricing options. You can opt for the fixed cost of an App Service Plan, where you reserve compute power on which you can run Azure Functions, but also Web, Mobile and API Apps. The second option is completely serverless, with a consumption plan based on resource consumption (memory consumption × execution time, measured in GB-s) and the number of executions. Don't forget that the underlying Azure Storage layer also comes with a rather small cost.

Security

Each particular binding or connector comes with its own security. In this section, I focus on the security of Logic Apps and Azure Functions exposed as an API. In order to access a Logic App with the HTTP trigger, the client must include a Shared Access Signature in the URL. The signature is generated via a secret key that can be regenerated at any time. There is also the ability to restrict access based on incoming IP addresses. To add more authorization logic, you can put Azure API Management in front of it. Azure Functions has a similar concept of API keys. The API key can be shared for the whole Function App (host key) or you can create a specific one for your Function. If you run your Azure Function in an App Service Plan, you can leverage its codeless authentication functionality with Active Directory, Google, Facebook, etc… Real authorization requires a small code change. Azure Functions Proxies can be a lightweight alternative to full-blown API Management, to add security on top of your HTTP-triggered Functions.
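
For illustration, the two URL shapes look roughly like this (hosts, IDs and secrets are placeholders):

    # Logic Apps HTTP trigger, secured with a Shared Access Signature:
    https://prod-00.westeurope.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<signature>

    # Azure Functions HTTP trigger, secured with a function or host key:
    https://<function-app>.azurewebsites.net/api/<function-name>?code=<api-key>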

Conclusion

Based on the comparison above, you'll notice that a lot of factors are involved when deciding between the two technologies. First of all, it's important to see which technology supports the connectivity that you require. Do you want to write it yourself, or do you want to leverage out-of-the-box bindings / connectors? Next to that, my general guidance is as follows:

When dealing with synchronous request/response calls that execute more complex logic, Azure Functions is the preferred option. Logic Apps is better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing. Logic Apps can be perfectly extended with Azure Functions to execute stateless tasks that cannot be fulfilled by the out-of-the-box Logic Apps capabilities.

Web API's are often composed of both sync and async operations. If you follow the guidance stated above, you might end up with an API that uses both Azure Functions and Logic Apps. This is where Azure Functions Proxies has its value, as it can expose these separate microservices as a unified API. This will be discussed in another blog post.


Stay tuned for more!

Categories: Azure
written by: Toon Vanhoutte

Posted on Wednesday, August 16, 2017 11:05 AM

by Toon Vanhoutte

Accidental removal of important Azure resources is something to be avoided at all times, certainly when it involves stateful Azure services that host very important information. Common examples of such Azure services are Azure Storage Accounts and Azure Key Vaults. This blog post describes three steps to minimize the risk of such unwanted removals; however, additional measures might be needed, depending on your use case.

Prevent accidental removal

Role Based Access Control

Azure services offer very fine-grained access control configuration. You should take advantage of it in order to govern access to your Azure environment. Resources in your production environment should only be deployed by automated processes (e.g. VSTS). Assign the least required privileges to the developers, operators and administrators within your subscription. Read-only access in production should be the default setting. In case of incidents, you can always temporarily elevate permissions if needed. Read more about this subject here.
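
As a sketch, assigning read-only access with the Azure CLI (the user and scope are placeholders):

    # Grant a developer the built-in Reader role on the production resource group:
    az role assignment create \
      --assignee developer@contoso.com \
      --role Reader \
      --scope /subscriptions/<subscription-id>/resourceGroups/<prod-resource-group>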

Naming Conventions

It's a good practice to include the environment within the name of your Azure resources. Via this simple naming convention, you create awareness of which environment one is working in. Combined with this naming convention, it's also advised to have separate subscriptions for prod and non-prod resources.

Resource Locks

An additional measure you can take is applying resource locks on important resources. A resource lock supports two different lock levels:

  • CanNotDelete means authorized users can still read and modify a resource, but they can't delete the resource.
  • ReadOnly means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

These locks can only be created or removed by a Resource Owner or a User Access Administrator. This prevents, for example, Contributors from deleting the resource by accident.

Locks can be deployed through PowerShell, Azure CLI, REST API and ARM templates. This is an example of how to deploy a lock through an ARM template:
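
A minimal sketch, applying a CanNotDelete lock to a storage account (resource names are placeholders):

    {
      "type": "Microsoft.Storage/storageAccounts/providers/locks",
      "apiVersion": "2016-09-01",
      "name": "[concat(parameters('storageAccountName'), '/Microsoft.Authorization/storageAccountLock')]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
      ],
      "properties": {
        "level": "CanNotDelete",
        "notes": "Prevents accidental deletion of this storage account."
      }
    }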

Be aware that locks can also be configured on subscription and resource group level. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent.

Conclusion

The Microsoft Azure platform gives you all the access control tools you need, but it's your (or your service provider's) responsibility to use them the right way. One should be very cautious when giving / getting full control on a production environment, because we're all human and we all make mistakes.

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Tuesday, August 8, 2017 12:35 PM

by Toon Vanhoutte

Should we create a record or update it? What constraints define whether a record already exists? These are typical questions we need to ask ourselves when analyzing a new interface. This blog post focuses on how we can deal with such update / insert (upsert) decisions in Logic Apps. Three popular Logic Apps connectors are investigated: the Common Data Service, File System and SQL Server connectors.

Common Data Service Connector

This blog post by Tim Dutcher, Solutions Architect, was the trigger for writing about this subject. It describes a way to determine whether a record already exists in CDS, by using the "Get Record" action and deciding based on the returned HTTP code. I like the approach, but it has the downside that it's not 100% bulletproof. An HTTP code other than 200 doesn't always mean you received a 404 Not Found.

My suggested approach is to use the "Get List of Records" action, while defining an OData filter query (e.g. FullName eq '@{triggerBody()['AccountName']}'). In the condition, check whether the result array of the query is empty or not: @equals(empty(body('Get_list_of_records')['value']), false). Based on the outcome of the condition, update or create a record.

File System Connector

The "Create file" action has no option to overwrite a file if it exists already. In such a scenario, the exception "A file with name 'Test.txt' already exists in the path 'Out' your file system, use the update operation if you want to replace it" is thrown.

To overcome this, we can use a similar approach as described above. Because the "List files in folder" action does not offer a filter option, we need to do this with an explicit filter action. Afterwards, we can check again if the resulting array is empty or not: @equals(empty(body('Filter_array')), false). Based on the outcome of the condition, update or create the file.

You can also achieve this in a quick and dirty way. It's not bulletproof and not clean, but perfect in case you want to create fast demos or test cases. The idea is to first try the "Create file" action and configure the next "Update file" action to run only if the previous action failed. Use it at your own risk :-)
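
In code view, that run-after configuration looks roughly like this (action names are illustrative; other properties omitted):

    "Update_file": {
        "type": "ApiConnection",
        "runAfter": {
            "Create_file": [ "Failed" ]
        }
    }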

SQL Server Connector

A similar approach with the "Get rows" action could also do the job here. However, if you manage the SQL database yourself, I suggest creating a stored procedure. This stored procedure can take care of the IF-ELSE decision server side, which makes it idempotent.
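
A sketch of such a stored procedure, assuming a hypothetical Customer table keyed on AccountName:

    CREATE PROCEDURE dbo.UpsertCustomer
        @AccountName NVARCHAR(100),
        @FullName NVARCHAR(200)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- The IF-ELSE decision happens server side, in a single round trip.
        IF EXISTS (SELECT 1 FROM dbo.Customer WHERE AccountName = @AccountName)
            UPDATE dbo.Customer
            SET FullName = @FullName
            WHERE AccountName = @AccountName;
        ELSE
            INSERT INTO dbo.Customer (AccountName, FullName)
            VALUES (@AccountName, @FullName);
    END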

This results in an easier, cheaper and less chatty solution.

Conclusion

Create/update decisions are closely related to idempotent receivers. Truly idempotent endpoints deal with this logic server side. Unfortunately, there are not many of those endpoints out there. If you manage the endpoints yourself, it's up to you to make them idempotent!

In case the Logic App needs to make the IF-ELSE decision, you get chattier integrations. To avoid reconfiguring such decisions over and over again, it's advised to create a generic Logic App that does it for you and to consume it as a nested workflow. I would love to see this out-of-the-box in many connectors.

Thanks for reading!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Friday, July 28, 2017 4:59 PM

by Toon Vanhoutte

Recently I was asked for an intervention on an IoT project. Sensor data was sent via Azure IoT Hub towards Azure Stream Analytics. Events detected in Azure Stream Analytics resulted in a Service Bus message that had to be handled by a Logic App. However, the Logic App failed to parse the JSON message taken from the Service Bus queue. Let's have a more detailed look at the issue!

Explanation

The scenario: sensor data flows via Azure IoT Hub into Azure Stream Analytics, and detected events end up as messages on a Service Bus queue that triggers a Logic App.

We encountered an issue in Logic Apps the moment we tried to parse the message into a JSON object. After some investigation, we realized that the Service Bus message wasn't actually a JSON message: it was preceded by string serialization "overhead".

Thanks to our favourite search engine, we came across this blog that nicely explains the cause of the issue. The problem is situated at the sender of the message: the issue is caused because the BrokeredMessage is created with a string object:
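
A sketch of that sender-side pattern (queue name and payload are illustrative):

    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Messaging;

    public static async Task SendEventAsync(string connectionString, string jsonString)
    {
        // Passing a string to the BrokeredMessage constructor serializes the
        // body with a DataContract serializer, which prepends the "overhead".
        var queueClient = QueueClient.CreateFromConnectionString(connectionString, "sensor-events");
        await queueClient.SendAsync(new BrokeredMessage(jsonString));
    }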

If you control the sender side code, you can resolve the problem by passing a stream object instead.
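
A sketch of the corrected sender, writing the raw UTF-8 bytes as a stream:

    using System.IO;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Messaging;

    public static async Task SendEventAsync(QueueClient queueClient, string jsonString)
    {
        // The stream body is written as-is, so the receiver gets plain JSON.
        var stream = new MemoryStream(Encoding.UTF8.GetBytes(jsonString));
        var message = new BrokeredMessage(stream, ownsStream: true)
        {
            ContentType = "application/json"
        };
        await queueClient.SendAsync(message);
    }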

Solution

Unfortunately, we cannot change the way Azure Stream Analytics behaves, so we need to deal with it at the receiver side. I've found several blogs and forum answers suggesting to clean up the "serialization garbage" with an Azure Function. Although this is a valuable solution, I always tend to avoid additional components if they are not really needed. Introducing Azure Functions comes with additional cost, storage, deployment complexity, maintenance, etc…

As this is actually pure string manipulation, I had a look at the available string functions in the Logic Apps Workflow Definition Language. The following expression removes the unwanted "serialization overhead":
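
A sketch of such an expression, assuming the Service Bus trigger delivers the body base64-encoded in ContentData; it keeps everything from the first '{' to the last '}' and parses the result as JSON:

    @json(substring(
        base64ToString(triggerBody()?['ContentData']),
        indexOf(base64ToString(triggerBody()?['ContentData']), '{'),
        add(sub(
            lastIndexOf(base64ToString(triggerBody()?['ContentData']), '}'),
            indexOf(base64ToString(triggerBody()?['ContentData']), '{')), 1)
    ))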

If you use this in combination with the Parse JSON action, you have a user-friendly way to extract data from the JSON in the next steps of the Logic App. In the sample below, I just used the Terminate action for testing purposes. You can now easily use SensorId, Temperature, etc…

Conclusion

It's a pity that Azure Stream Analytics doesn't behave as expected when sending messages to Azure Service Bus. Luckily, we were able to fix it easily in the Logic App. Before reaching out to Azure Functions as an extensibility option, it's advised to have an in-depth look at the available Logic Apps Workflow Definition Language functions.

Hope this was a timesaver!
Cheers,
Toon

Categories: Azure
written by: Toon Vanhoutte

Posted on Thursday, July 20, 2017 6:26 PM

by Toon Vanhoutte

At Integrate, Jon Fancey announced the out-of-the-box batching feature in Logic Apps! Early this morning, I noticed by accident that this feature has already been released in West Europe. This blog contains a short preview of the batching functionality. There will definitely be a follow-up with more details and scenarios!

Batching requires two processes:

  • Batch ingestion: the process responsible for queueing messages into a specific batch
  • Batch release: the process responsible for dequeuing the messages from a specific batch, when certain criteria are met (time, number of messages, external trigger…)

Batch Release

In Logic Apps, you must start with the batch release Logic App, as you will need to reference it from the batch ingestion workflow. This avoids sending messages into a batch that does not exist! This is how the batch release trigger looks in code view:
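
(A sketch; the batch name and release criteria are illustrative.)

    "triggers": {
        "Batch_messages": {
            "type": "Batch",
            "inputs": {
                "mode": "Inline",
                "configurations": {
                    "InvoiceBatch": {
                        "releaseCriteria": {
                            "messageCount": 5
                        }
                    }
                }
            }
        }
    }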

You need to provide:

  • Batch Name: the name of your batch
  • Message Count: specify the number of messages required in the batch to release it

More release criteria will definitely be supported in the future.

Batch Ingestion

Now you can inject messages into the batch. To do so, I created a simple request / response Logic App that contains the Send messages to batch action. First you need to specify the previously created Logic App that is responsible for the batch release.

Once you've done this, you can specify all required info; a code-view sketch follows the list below.

You need to provide:

  • Batch Name: the name of the batch. This will be validated at runtime!
  • Message Content: the content of the message to be batched.
  • Partition Name: specify a "sub-batch" within the batch. In my scenario, all invoices for one particular customer will be batched together. If empty, the partition will be DEFAULT.
  • MessageId: a message identifier. If empty, a GUID will be generated.
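
In code view, the resulting action is roughly of this shape (a sketch; the workflow ID and names are placeholders):

    "Send_messages_to_batch": {
        "type": "SendToBatch",
        "inputs": {
            "batchName": "InvoiceBatch",
            "content": "@triggerBody()",
            "partitionName": "@triggerBody()?['CustomerId']",
            "host": {
                "triggerName": "Batch_messages",
                "workflow": {
                    "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<batch-release-logic-app>"
                }
            }
        },
        "runAfter": {}
    }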

The result

I triggered the batch ingestion Logic App many times. This queues messages within the batch.

Each time 5 messages belonging to the same partition are available in the batch, the batch release Logic App gets fired.

The output looks like this:

Conclusion

Very happy to see this has been added to the product, as batching is still required nowadays. I thought this would have been part of the Integration Account; cool to see there is no dependency on that. The batch release process is not using a polling trigger, so this also saves you some additional costs.

I'll get in touch with the product group for some feedback, but this looks already very promising!

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte