
Codit Blog

Posted on Thursday, September 28, 2017 12:46 PM

by Stijn Moreels

“Property-Based Testing”, ever heard of it? It’s a very popular topic in the functional community. The opposite of this approach is the well-known “Example-Based Testing”. People think in examples, which is why that approach is so popular, also outside the functional community. It is the way we have always written tests:
“Given a static, constant example; the output should be this”
But is that the best we can do?


Property-Based Testing is about generalizing the input so we can make statements about the output, without specifying exactly what the input or output should be; only what they should look like.

I’m not going to give you a full introduction, because there are already so many good resources about this topic (also in different languages).

But what I will do, is give you an introduction to FsCheck in a C# environment. FsCheck is written in F# but has a C#-friendly API. I’m going to use the FsCheck.Xunit package for this blog post.


For a full introduction to FsCheck itself, I highly recommend the FsCheck documentation, which explains the framework well. Although it gives a good idea of how the framework is built, I find it hard to find examples of how it can be used concretely; especially if you’re using the xUnit variant.

Several blog posts are using F# to make properties with FsCheck, but with C# the posts are rather rare…

Fact to Property

Let’s start from the xUnit example they present in their documentation:
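The original snippet isn’t reproduced here; it is roughly along these lines (the ‘Add’ method and the test name are my own stand-ins, not the exact code from the FsCheck docs):

```csharp
using Xunit;

public class CalculatorTests
{
    static int Add(int x, int y) => x + y;

    // Example-Based: one hard-coded input, one asserted output.
    [Fact]
    public void AddIsCommutative_Example()
    {
        Assert.Equal(Add(1, 2), Add(2, 1));
    }
}
```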

If you know xUnit, you know that ‘Fact’ is the way xUnit marks methods as test-methods and the static class ‘Assert’ is used to assert on the result.

Now, I’ll give you the same example but written with some FsCheck properties:
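A property-based version of the same test could look like this sketch (again with my own stand-in names):

```csharp
using FsCheck;
using FsCheck.Xunit;

public class CalculatorProperties
{
    static int Add(int x, int y) => x + y;

    // Property-Based: FsCheck generates the inputs for us.
    [Property]
    public Property AddIsCommutative(int x, int y)
    {
        return (Add(x, y) == Add(y, x)).ToProperty();
    }
}
```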

What are the differences?

  • The ‘Fact’ attribute is changed to the ‘Property’ attribute
  • The return type is ‘Property’ instead of ‘void’
  • The ‘Assert’ class isn’t used; instead, the condition is returned and transformed into a ‘Property’ by the ‘ToProperty()’ call
  • The inputs of the method under test aren’t hard-coded anymore

This last difference is probably the most important one.
I highly recommend you read the resources if you haven’t heard about Property-Based Testing before, because I won’t list all its benefits here. I hope you see that by using this approach, I can’t maliciously fake the actual implementation anymore, while in the first example I could have done exactly that.

We’ve added two parameters to the test method, which FsCheck will fill in for us with random values. These will include negative, zero and positive values, all within the range of the Int32 data type; all valid integers, so to speak. By default, FsCheck runs 100 tests with random values for the inputs of the test.

FsCheck has several extension methods on boolean values, like the one above. Let’s look at some more.

Conditional & Lazy Properties

Sometimes, we want to restrict the input to make sure we always end up with a valid output. A simple example is mathematical division: you can’t divide by zero, so to get a valid result we must make sure the divisor isn’t zero.
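A sketch of such a Conditional (and Lazy) property for division; the ‘Divide’ helper and all names are my assumptions, and the exact FsCheck extension-method signatures may differ per version:

```csharp
using System;
using FsCheck;
using FsCheck.Xunit;

public class DivisionProperties
{
    static int Divide(int x, int y) => x / y;

    [Property]
    public Property DivideIsInverseOfMultiplication(int x, int y)
    {
        // The delegate delays the call to 'Divide' (Lazy Property);
        // 'When' restricts the inputs to non-zero divisors (Conditional Property).
        Func<bool> divide = () => Divide(x * y, y) == x;
        return divide.When(y != 0);
    }
}
```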

What’s different?

  • We added the ‘When()’ call to specify that we can’t divide by zero (this makes sure we don’t have to call ‘ToProperty()’ again)
  • We extracted the method, which we wanted to test, in its own delegate. Note that FsCheck has extension methods on any delegate that returns a boolean.

That is a good example of a Conditional Property; but why do we need to extract the call to ‘Divide’? Because otherwise FsCheck would evaluate it immediately (even with ‘y’ being zero), which would result in a ‘DivideByZeroException’; and FsCheck treats any thrown exception as a test failure. That’s why.

By extracting this, we’re telling FsCheck that we’re only interested in the result IF the condition holds. In our case: ‘y’ must not be zero.
That’s convenient!

With this simple example, we’ve shown how to express conditions in our properties to make sure we always stay within a given set of inputs, and how to create Lazy Properties, which only evaluate the test if the condition holds. The latter is also useful when the actual test takes some time and we don’t want to lose time evaluating a test whose result isn’t of interest to us.

Exception Properties

In functional programming, I try not to use exceptions in my code; but in imperative languages, exceptions are the way to express that something went wrong. We also write tests that trigger the exceptions we throw, by giving invalid inputs.

The xUnit package also has some methods on the Assert class called “Throws”, “ThrowsAny”, ... How can we express this in FsCheck?
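A hedged sketch of such an exception property; note the lower-case ‘throws’, which is the F#-defined function showing through to the C# caller:

```csharp
using System;
using FsCheck;
using FsCheck.Xunit;

public class DivisionExceptionProperties
{
    static int Divide(int x, int y) => x / y;

    [Property]
    public Property DivideByZeroThrows(int x)
    {
        // 'throws' is defined in F#, hence the lower-case method name.
        // The Lazy<int> prevents the division from being evaluated eagerly.
        return Prop.throws<DivideByZeroException, int>(
            new Lazy<int>(() => Divide(x, 0)));
    }
}
```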

The documentation says that this isn’t actually supported in C# (you can see it in the lower-case method name); but writing it this way works.

Observed Properties

The closest alternative xUnit offers for this feature is the ‘userMessage’ you can pass to the ‘Assert’ class: a string sent along with the assertion so we can later see which assertion failed.

FsCheck takes this a step further.

Trivial Properties

FsCheck has a way to count the cases for which a condition is met. In our previous example, can we count how many generated values are negative values?
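A sketch of such a Trivial property, marking every run with a negative ‘x’ as trivial (names are assumptions; exact extension-method names may vary per FsCheck version):

```csharp
using System;
using FsCheck;
using FsCheck.Xunit;

public class TrivialProperties
{
    static int Divide(int x, int y) => x / y;

    [Property]
    public Property DivideIsInverse_CountNegatives(int x, int y)
    {
        Func<bool> divide = () => Divide(x * y, y) == x;
        // 'Trivial' counts the test cases for which the flag is true.
        return divide.When(y != 0).Trivial(x < 0);
    }
}
```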

In our test output, we can see that the positive and negative values are almost split in half:

Ok, passed 100 tests (47% trivial).

Try to run them again and see how this test output changes.

Classified Properties

Sometimes, we want to check more than one condition on our input, and maybe add a custom message for each category of input. To me, this is the closest thing to the ‘userMessage’ of ‘Assert’.

FsCheck has a way to express this by classifying properties.
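A sketch with two ‘Classify’ calls matching the categories in the test output (names and thresholds are assumptions):

```csharp
using System;
using FsCheck;
using FsCheck.Xunit;

public class ClassifiedProperties
{
    static int Divide(int x, int y) => x / y;

    [Property]
    public Property DivideIsInverse_Classified(int x, int y)
    {
        Func<bool> divide = () => Divide(x * y, y) == x;
        // Each 'Classify' tags the test case when its condition holds.
        return divide.When(y != 0)
                     .Classify(x < 1000, "Smaller than '1000'")
                     .Classify(x > 10, "Bigger than '10'");
    }
}
```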

In our output, we’re now seeing:

Ok, passed 100 tests.
63% Smaller than '1000'.
37% Smaller than '1000', Bigger than '10'.

See how the categories can also be combined and are shown to the user in a friendly way.

Collected Properties

We’ve seen some examples of how we can define categories for our test inputs by specifying conditions on them and giving them a name. But sometimes we’re just interested in the actual input value itself and how it varies during the test run.
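A sketch using ‘Collect’ to report the sum of the two generated values (the label text mirrors the test output quoted in this post; names are assumptions):

```csharp
using System;
using FsCheck;
using FsCheck.Xunit;

public class CollectedProperties
{
    static int Add(int x, int y) => x + y;

    [Property]
    public Property AddIsCommutative_Collected(int x, int y)
    {
        // 'Collect' records the given value for each test case and
        // reports the distribution afterwards.
        return (Add(x, y) == Add(y, x))
            .ToProperty()
            .Collect("Values together: " + (x + y));
    }
}
```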

This will result in this test output:

Ok, passed 100 tests.
8% "Values together: 0".
5% "Values together: 8".
5% "Values together: 1".
4% "Values together: 3".
4% "Values together: -12".
3% "Values together: 38".
3% "Values together: 2".
3% "Values together: -4".
3% "Values together: -14".
3% "Values together: -1".
2% "Values together: 9".
2% "Values together: 7".
2% "Values together: 5".
2% "Values together: 32".
2% "Values together: 21".

This way, we can clearly see how the test inputs change over the test run.

Combined Observation Properties

As a final observation property, we can also combine several of the previous observed properties into one property that combines all the results:
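A sketch that chains ‘Collect’ and ‘Classify’ in a single property (names are assumptions):

```csharp
using System;
using FsCheck;
using FsCheck.Xunit;

public class ObservedProperties
{
    static int Add(int x, int y) => x + y;

    [Property]
    public Property AddIsCommutative_Observed(int x, int y)
    {
        // Observations compose: each test case gets the collected value
        // plus every classification whose condition holds.
        return (Add(x, y) == Add(y, x))
            .ToProperty()
            .Collect("Values together: " + (x + y))
            .Classify(x < 1000, "Smaller than '1000'")
            .Classify(x > 10, "Bigger than '10'");
    }
}
```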

This will result in this test output:

Ok, passed 100 tests.
7% "Values together: 3", Smaller than '1000'.
5% "Values together: 2", Smaller than '1000'.
5% "Values together: 0", Smaller than '1000'.
4% "Values together: 13", Smaller than '1000'.
4% "Values together: 1", Smaller than '1000'.
3% "Values together: -8", Smaller than '1000'.
3% "Values together: -4", Smaller than '1000'.
3% "Values together: -15", Smaller than '1000'.
3% "Values together: -12", Smaller than '1000'.
2% "Values together: 9", Smaller than '1000'.
2% "Values together: 8", Smaller than '1000'.
2% "Values together: 7", Smaller than '1000'.
2% "Values together: 27", Smaller than '1000', Bigger than '10'.
2% "Values together: 22", Smaller than '1000'.
2% "Values together: 1", Smaller than '1000', Bigger than '10'.
2% "Values together: -56", Smaller than '1000'.
2% "Values together: -3", Smaller than '1000'.
2% "Values together: -11", Smaller than '1000'.
2% "Values together: -10", Smaller than '1000'.

Combined Properties

The previous properties all had one thing in common: they test a single condition. What if we want to test multiple conditions? And how do we distinguish each property from one another?

Sometimes, we want to test two conditions, combining them in an ‘AND’ expression. FsCheck has an extension method for this as well:
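A sketch of two labeled conditions combined with ‘And()’; the ‘Add’ helper is my stand-in, and the ‘NonNegativeInt’ arguments mirror what the post describes below (exact method names may differ per FsCheck version):

```csharp
using FsCheck;
using FsCheck.Xunit;

public class CombinedProperties
{
    static int Add(int x, int y) => x + y;

    [Property]
    public Property AdditionProperties(NonNegativeInt x, NonNegativeInt y)
    {
        int a = x.Get, b = y.Get;
        // Each condition gets its own 'Label', so a failure report
        // tells us exactly which of the two properties broke.
        return (Add(a, b) == Add(b, a)).Label("Addition is commutative")
            .And((Add(a, 0) == a).Label("Zero is the right identity"));
    }
}
```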

We can also add a ‘Label’ which is the same as the ‘usermessage’ in the ‘Assert’ class in the xUnit package: it pops up when the condition isn’t met.

By using this approach, we always know which property has failed.

Note that I now use a ‘NonNegativeInt’ type instead of a regular int. This type is also part of the FsCheck framework and makes sure I always get a non-negative integer without writing a Conditional Property. As you have seen, FsCheck will try any value of the parameter type; so if I instead added a condition to my property stating that I want a non-negative integer, roughly half of the generated values would be rejected. By using ‘NonNegativeInt’, I’m sure that I still get my 100 test runs without skewing the input.

Of course, we can also combine our properties in an ‘OR’ expression with the extension method ‘Or()’.
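A minimal (deliberately trivial) sketch of ‘Or()’:

```csharp
using FsCheck;
using FsCheck.Xunit;

public class OrProperties
{
    [Property]
    public Property IntIsZeroOrNonZero(int x)
    {
        // Passes when at least one of the two conditions holds.
        return (x == 0).Label("Value is zero")
            .Or((x != 0).Label("Value is non-zero"));
    }
}
```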


We’ve already seen an example of these in the previous properties, where I used the ‘NonNegativeInt’ type. FsCheck has several types you can use to restrict your input. Some interesting ones:

  • PositiveInt represents an integer bigger than zero
  • NonZeroInt represents an integer that isn’t zero
  • NonNegativeInt represents an integer that isn’t below zero
  • IntWithMinMax represents an integer that can contain the int.MinValue and int.MaxValue values
  • NonNull<T> wraps a type to prevent null being generated
  • NonEmptyArray<T> represents an array that isn’t empty
  • NonEmptySet<T> represents a set that isn’t empty
  • NonEmptyString represents a string that isn’t empty
  • StringNoNulls represents a string without null characters (‘\000’)
  • NormalFloat represents a float that isn’t infinite or NaN
  • Interval represents an integer interval
  • IPv4Address represents an IPv4 address
  • IPv6Address represents an IPv6 address

And many more... Don’t hesitate to come up with your own generic types that you can contribute to FsCheck!

We can also have FsCheck generate our own domain models, both valid and invalid ones; but that would lead us to another topic: generators.


In this post, we’ve seen how Property-Based Testing isn’t just a functional concept but an idea we can use in any language. FsCheck is inspired by the QuickCheck library in Haskell; there’s also ScalaCheck for Scala, JSVerify for JavaScript, and equivalents for Java, Clojure and many other languages.

I’m not saying you should abandon all your Example-Based Tests. Just as I stated at the beginning of this post: people think in examples. So, I think the combination of Example-Based Tests and Property-Based Tests is the sweet spot. With examples you show the next person concrete ways to use your API, and with properties you ensure the implementation is correct, boundary conditions included.

Thanks for reading!

Categories: Technology
Tags: Code Quality
written by: Stijn Moreels

Posted on Wednesday, September 27, 2017 11:49 AM

by Toon Vanhoutte

When using the Microsoft Azure stack, an important aspect is to use the right service for the job. In this way, you might end up with an API that is composed of several Azure services such as Blob Storage, Logic Apps and Azure Functions. In order to publish such an API in a unified way, you can leverage Azure Functions Proxies.


In this fictitious scenario, I'm building a Pizza Delivery API. It's composed of the following operations:

  • GET Products is still in development, so I want to return a mocked response.
  • GET Product returns the product information from Blob Storage.
  • POST Order invokes a Logic App that creates the order in an async fashion.
  • GET Order calls an Azure Function that retrieves the specific order information.
  • PUT Order asynchronously updates an order via a Logic App.
  • DELETE Order removes the order from within a Logic App.


In this blog, I'll show you how to build this API, without writing a single line of code.



This section just demonstrates the basic principles to start creating your first Azure Function Proxy.

  • Azure Function Proxies is still in preview. Therefore, you need to explicitly enable it within the Settings tab.


  • Now, you can create a new proxy, by using the "+" sign. You need to specify a name, a route template, the allowed HTTP methods and a backend URI. Click Create to save your changes.


  • When the proxy is created, you immediately get the proxy URL on which you can invoke the Azure Function Proxy. Via the Advanced editor link, you can define more options in a JSON format.


  • The advanced editor is a JSON editor of the proxies.json file, that gives you more configuration options.


After this introduction, we can start the implementation of the Pizza Delivery API.


This operation won't have a backend service, because it's still in development. We will mock a response, so the API can already be used for development purposes.

  • Create a proxy GetProductsViaMocking. Provide the following details:
    Route template: api/products
    Allowed HTTP methods: GET
    Backend URL: none, we will use mocking

  • In the Advanced editor, specify the following responseOverrides. This sets the Content-Type to application/json and the response body to a fixed list of pizzas.
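The responseOverrides look roughly like this in proxies.json (the pizza list itself is a made-up example; the override keys follow the Azure Function Proxies schema):

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "GetProductsViaMocking": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "api/products"
      },
      "responseOverrides": {
        "response.statusCode": "200",
        "response.headers.Content-Type": "application/json",
        "response.body": "[{\"id\": 1, \"name\": \"Margherita\"}, {\"id\": 2, \"name\": \"Pepperoni\"}, {\"id\": 3, \"name\": \"Quattro Formaggi\"}]"
      }
    }
  }
}
```

Note the escaped quotes in the response body; this is exactly the mocking annoyance raised in the feedback section at the end of this post.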

  • If you test this GET operation, you'll receive the mocked list of three pizzas!

Blob Storage

The GET Product operation will return a JSON file from Blob Storage in a dynamic fashion. Blob Storage is an easy and cheap way to return static or less frequently changing content.

  • Create a blob container, named "pizza". Set its access level to private, so it's not publicly accessible by default. For every available pizza, add a JSON file that holds the product information.


  • Create an Access Policy on container level. The policy provides read access, without any expiration. You can easily do this via the Azure Storage Explorer.


  • Create a Shared Access Signature from this policy, via the Azure Storage Explorer. This gives you a URL (containing the SAS token), to access the blobs from the "pizza" container.

  • To keep the SAS token manageable, I prefer to add the SAS query string to the Application Settings of our Function App. Create an Application Setting BLOB_SAS_QUERYSTRING.

  • Create a proxy GetProductViaBlobStorage. Provide the following details:
    Route template: api/product/{productId}
    Allowed HTTP methods: GET
    Backend URL:{productId}.json?%BLOB_SAS_QUERYSTRING%

Note that the route template parameter 'productId' is reused in the Backend URL to get to the correct JSON file.
Note that the application settings can be retrieved via this %BLOB_SAS_QUERYSTRING% syntax.

  • In the Advanced editor, specify the following responseOverrides. This sets the Content-Type to application/json.

  • If you test this GET operation, you get the product description of the corresponding product id.

Logic Apps

The asynchronous order operations will be tackled by a Logic App that puts the commands on a ServiceBus queue. Further downstream processing is not covered in this post.

  • Create a Logic App as shown below. Note that the Request trigger contains an {operation} parameter in the relative path. The request command is sent to an orderqueue, with the {operation} as a message property.


  • Get the request URL from the Logic App. This also contains quite a long query string that includes the API version and the SAS token. Similar to the previous step, insert the query string into an Application Setting named LOGICAPPS_SAS_QUERYSTRING.


  • Create a proxy PostOrderViaLogicApps. Provide the following details:
    Route template: api/order
    Allowed HTTP methods: POST
    Backend URL:

Note that the create operation is passed via the Backend URL.

  • If you test this POST operation, you'll notice that the message is accepted.


  • In the Logic Apps Run History, you should see that a new Logic App instance got fired.


  • The message is now placed on the Service Bus queue. The operation has been set to create.


  • Also create proxies for the PUT and DELETE operations, in a similar way.


Azure Functions

  • I've created an Azure Function that just returns a dummy order. The returned order ID is taken from the request.
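The function itself isn’t shown here; a sketch of what such a dummy-order function could look like in a 2017-era run.csx (the field names and the route binding for orderId are assumptions):

```csharp
// run.csx — returns a dummy order; the order id is taken from the route.
using System.Net;
using System.Net.Http;

public static HttpResponseMessage Run(HttpRequestMessage req, string orderId)
{
    // Hard-coded order details; only the id reflects the actual request.
    var dummyOrder = new
    {
        Id = orderId,
        Product = "Pizza Margherita",
        Quantity = 1,
        Status = "Delivered"
    };
    return req.CreateResponse(HttpStatusCode.OK, dummyOrder);
}
```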

  • Deploy this Azure Function in a Function App and retrieve the URL. The URL contains a query string with the required access code. Add this query string to the Application Settings, named FUNCTIONS_CODE_QUERYSTRING.


  • Create a proxy GetOrderViaAzureFunctions. Provide the following details:
    Route template: api/order/{orderId}
    Allowed HTTP methods: GET

Note that the route template parameter 'orderId' is reused in the Backend URL to pass it on to the Azure Function.

  • If you test this GET operation, you'll notice that a dummy order is returned, with the requested Id.


Feedback to the product team

After playing around with Azure Function Proxies, I have the following suggestions for the product team. They are listed in a prioritized order (most wanted feature requests first):

  • The mocking experience is not ideal when working with JSON objects, as you need to constantly escape quote characters. It would be great if this could be made more user-friendly. As an alternative, you can reach out to blob storage, but this additional step decreases developer productivity. 
  • It would be nice if there is a way to remove all or some HTTP headers from the response. Each Azure service comes with its own set of HTTP response headers and I don't always want them to be returned to the client application. This would be great from both a security and consistency perspective. 
  • Accessing parts of the request or response body would open up a lot of new opportunities. As an example, I've tried to put the message immediately on an Azure Storage Queue, without any code in between. However, this was not feasible, because I needed to wrap the original message content inside this XML template:
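The template in question is the XML envelope that the Azure Storage Queue REST API expects around each message:

```xml
<QueueMessage>
    <MessageText>original message content goes here</MessageText>
</QueueMessage>
```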


  • Currently there is no way to perform different transformations, based on - for example - the backend response status code. This could become handy in certain scenarios.


Azure Function Proxies can easily be used to create an API that consists of multiple HTTP-based (micro)services. Practically every service can be put behind this light-weight application proxy, as long as authentication is performed through the URL (e.g. SAS token, API key). This avoids the client application having to maintain a different URL per operation it wants to invoke. The service is very easy to use and requires no coding at all. A great addition to our integration toolbelt!


Categories: Azure
Tags: Functions
written by: Toon Vanhoutte

Posted on Tuesday, September 26, 2017 9:28 PM

by Stijn Moreels

Using F# and Property-Based Testing to solve the Coin Change Kata really helped me to get more insight in the Property-Based Testing technique; I hope it will help you also.


One way to become an expert at something is to practice, and programming katas are a very good way to keep practicing your programming skills.

In this post, I will solve the Coin Change Kata with F# and use Property-Based Testing (with FsCheck) to drive the design.
For me, this was a lesson in writing properties and not so much solving the kata. It was a fun exercise. I thought it would be useful to share my code with you.
If you don’t have any experience with Property-Based Testing or F#, I recommend you first read up on those topics.

Coin Change Kata


Ok, there are several different descriptions of this Kata; so, I’ll show you what I want to accomplish first.

“Given an amount and a series of Coin Values, give me the best possible solution that requires the least amount of Coins and the remaining value if there is any”

So, by looking at the definition of the Coin Kata; I need two inputs and two outputs:


The first thing I did, before describing my properties, was defining the signature of my function. My first mistake was thinking I could use integers all over the place. Something like this:

int list -> int -> int * int list

But we can’t have negative coin values, so I started by making a Coin type. The Kata mentions several coin denominations, so I chose to use the same ones:
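The type isn’t reproduced in this extract; based on the coin values mentioned later in the post (One, 25, 50), it would look something like this (the exact set of denominations is an assumption):

```fsharp
// Coin denominations as a discriminated union: a Coin can only ever
// be one of these values, so negative or arbitrary values are impossible.
type Coin =
    | One
    | Five
    | Ten
    | TwentyFive
    | Fifty
```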

Note that, by describing the coin values in this way, I have restricted the input values of the coins. This makes illegal states unrepresentable (a phrase from Yaron Minsky).

And so my signature, after type inference, is the following:

Coin list -> int -> int * Coin list

Strictly speaking this is not the right signature, because we’re still accepting negative amounts; I’ll leave fixing that as an exercise for you.


First Property: Ice Breaker

So, let’s start coding. The first property should be some kind of Ice Breaker property. I came up with the following:

“Change amount with nothing to change gives back the initial amount”

This is the property for when we don’t have any coin values: we just get back the same amount as the remaining value. Note that I use ‘byte’ as the input type so I’m sure I get a positive value; the maximum byte value is enough for this demonstration.
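The property could be sketched like this (‘changeCoins’ is my assumed name for the function under test):

```fsharp
open FsCheck.Xunit

[<Property>]
let ``Change amount with nothing to change gives back the initial amount`` (amount : byte) =
    // byte guarantees a non-negative input; widen it to int for the call
    let amount = int amount
    changeCoins [] amount = (amount, [])
```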

We can easily implement this:
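A minimal implementation that satisfies the ice-breaker property (assuming the Coin type and the changeCoins name from above):

```fsharp
// With no coins available, all of the amount remains un-changed.
let changeCoins (_ : Coin list) (amount : int) : int * Coin list =
    (amount, [])
```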

We can play the Devil’s Advocate and intentionally use a fake implementation, for example:
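For instance, something like this sketch, which is nonsense for any non-empty coin list yet still satisfies the property, because the property only exercises the empty-coins case:

```fsharp
let changeCoins (coins : Coin list) (amount : int) : int * Coin list =
    match coins with
    | [] -> (amount, [])   // the only case the first property exercises
    | _  -> (42, coins)    // deliberately wrong, yet undetected so far
```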

Which will still pass.

Second Property: Boundaries

The next property I wrote was the other way around: what if I haven’t got any amount to change?

“Change zero amount result in original coin values”

Note that we let FsCheck generate the random list of Coins for us. We don’t care which coins we’re about to use for the change, and that’s exactly why we can let FsCheck generate some.
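My reading of this property (an assumption, since the original snippet is missing): changing a zero amount requires no coins at all, whatever coin values are available.

```fsharp
open FsCheck.Xunit

[<Property>]
let ``Change zero amount result in original coin values`` (coins : Coin list) =
    // Nothing to change: zero remaining, no coins used for change.
    changeCoins coins 0 = (0, [])
```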

I think this is a good implementation example of how Test Ignorance can be accomplished with Property-Based Testing.

And our implementation:
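Ignoring the coins still satisfies both properties, as this sketch shows:

```fsharp
// Changing 0 yields (0, []); no coins yields (amount, []).
let changeCoins (_ : Coin list) (amount : int) : int * Coin list =
    (amount, [])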

Now we can no longer fake the returned coin list, so we’re back to the first implementation, which makes both properties pass.

Third Property: Find the Constants

I’m not quite sure this is a proper example of finding a constant, because with some effort you could state this property for other values as well, and it’s possibly also covered by a later property. Still, it was the next property I wanted to write, because it drives me further toward the actual implementation of the function, and it is a constant in this implementation.

“Change is always One if there’s no other coin values”

We can implement this property (and respect the others) with this implementation:
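A sketch of that intermediate step (still naive; the later properties will refine it):

```fsharp
let changeCoins (coins : Coin list) (amount : int) : int * Coin list =
    match coins with
    | [] -> (amount, [])
    // With coins available, hand out the amount entirely in 'One' coins.
    // Note: List.replicate 0 One = [], so the zero-amount property still holds.
    | _  -> (0, List.replicate amount One)
```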

When I have only ‘One’ coins, the change for a random amount is always a list of ‘One’ coins with the same length as the initial to-be-changed amount. I can of course play the Devil’s Advocate and change the remaining amount to 42, for example (because 42 is the answer to life):

And so, we can strengthen our property to also assert on the remaining amount:

Thanks to FsCheck this is hardly an issue. I added some Labels (from FsCheck) to clearly state in the output WHAT failed in the property. This is a good thing for Defect Localization.

Also note that playing the Devil’s Advocate makes sure that I write the right implementation and that my properties state this in the strictest possible way.

Fourth Property: Some Things Never Change

For the fourth property, I thought even further about the result and came up with this property. The constant that I found was that whatever I change into coins, the initial to-be-changed amount should always be the sum of the remaining change and the changed coins.

“Sum of changed coins and remaining amount is always the initial to-be-changed amount”

This property needs a non-empty list of coins, because otherwise we would be re-testing the already written property for empty coin lists. This is no issue for FsCheck either: with Conditional Properties we can easily express it with the List.length coins <> 0 ==> lazy expression.

This makes sure that the rest of the property only gets evaluated, and therefore verified, if the condition is met.

The rest of the property maps all the coins to their values, sums them, and adds the remaining amount. All of this together should equal the initial amount.

This is the first time I need the actual value of coins, so I made a function for this:
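A sketch of that helper, mapping each case of the Coin type (sketched earlier; the denominations are assumptions) to its numeric value:

```fsharp
let valueOfCoin coin =
    match coin with
    | One        -> 1
    | Five       -> 5
    | Ten        -> 10
    | TwentyFive -> 25
    | Fifty      -> 50
```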

How do we know how many coins of a given value go into an amount? That’s a division of the amount by that coin value. We have several coin values, so we must also do the division for the other coin values; for that we need what’s left after the division, which is a modulo of the amount by that coin value.

We need to do this for all the different coins we have.

Does this pattern sound familiar?

We have an initial value, we need to loop over a list, and do something with a given value that can be passed to the next loop iteration.

In an imperative language, we would state this as a for loop:
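A mutable, loop-based sketch of that idea (not the final implementation; it reuses the valueOfCoin helper defined above):

```fsharp
let changeCoinsImperative (coins : Coin list) (amount : int) =
    let mutable remaining = amount
    let mutable change = []
    for coin in coins do
        let value = valueOfCoin coin
        // take as many of this coin as fit, keep the rest for the next coin
        change <- change @ List.replicate (remaining / value) coin
        remaining <- remaining % value
    (remaining, change)
```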

Something like this (sort of).

But, we’re in a functional language now; so, what’s the alternative? Fold!

Here is some implementation using fold:
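A sketch of the fold-based version (assuming the valueOfCoin helper from earlier; note how the folder function and the initial value get their own bindings instead of being inlined):

```fsharp
let changeCoins (coins : Coin list) (amount : int) : int * Coin list =
    // State per step: (remaining amount, coins handed out so far)
    let folder (remaining, change) coin =
        let value = valueOfCoin coin
        (remaining % value, change @ List.replicate (remaining / value) coin)
    let initial = (amount, [])
    List.fold folder initial coins
```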

One of the things I like to do is to specify the different arguments on its own instead of inlining them (in this case into the List.fold function). I think this increases Readability and shows more the Code’s Intent: that the core and return value is the result of a List.fold operation.

This reminds me of the Formatting Guidelines I described in a previous post that the return value of a method should be placed on a different line to increase readability and highlights “The Plot” of the method.

This is very similar; we want to show what we’re doing as “The Plot” of the function by specifying the argument functions on separate lines.

Also note that we can reuse the valueOfCoin function that we needed in our Property. People not familiar with TDD and the Test-First mindset sometimes complain when a test is the only place where some functionality is used; but if you use TDD, the test is the first client that uses that functionality!

Fifth Property: Final Observation

We’re almost there; there’s just one last thing we didn’t do right in our implementation. The Kata stated that we must find “the best possible solution” for the amount in change. We now have an implementation that finds “some” solution but not the “best” solution. Why? Because we don’t use the order in which the different coin values are passed in; we just loop over them. We need the best solution for the least amount of coins.

How do we get from “some” solution to the “best” solution? Well, we need to check first with the highest coin values and then gradually to the least coin value.

How do we specify this in a Property? I must admit that it did not come to me very fast, so I think this was a good exercise in Property-Based Testing for me. This was the Property I came up with:

“Non-One Coin value is always part of the change when the amount is that Coin value”

Why do we need a non-One Coin value? Why do we need a non-empty Coin list? Because otherwise we would be testing an already specified property.

That’s why we use the Conditional expression: (nonOneCoin <> One && List.length coins <> 0) ==> lazy.

Now, the other part of the Property. We need to check if the given random list of coins (with a non-one Coin in it) result that the non-one Coin is part of the change we get if the amount to-be-changed is the value of the non-one Coin.

That seems reasonable. If I want to change the value 50 into coins and I have the Coin value 50, I want that coin as the return value: that’s the solution with the least amount of coins. I don’t care if I have Coins of 50 and 25, for example; the order of the different Coin values doesn’t matter, just give me the change with the least amount of coins.

Note that we first use the Gen.shuffle function to shuffle the random list of coins together with the non-One Coin. After that, we’re sure we have a list that contains a non-One coin. If I specified this condition inside the Conditional expression of FsCheck instead, a lot of test cases would be skipped because the condition wouldn’t be met; by putting the condition on a single Coin value, I get far more test cases.

The chance of getting a Coin that isn’t One is much higher than the chance of getting a list that contains a non-One coin. And not only that: stating my non-One Coin value like this also expresses the Code’s Intent of my Property better.

We finally pipe into the snd function that gives us the second element of the tuple so we can use it in our assertion to check if the nonOneCoin value exists in the resulted list of coins.

How do we implement this?
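A sketch of the final version: identical to the fold-based one, except that the coins are first sorted from highest to lowest value (again assuming the valueOfCoin helper from earlier):

```fsharp
let changeCoins (coins : Coin list) (amount : int) : int * Coin list =
    let folder (remaining, change) coin =
        let value = valueOfCoin coin
        (remaining % value, change @ List.replicate (remaining / value) coin)
    // Highest denominations first gives the least amount of coins.
    let sorted = List.sortByDescending valueOfCoin coins
    List.fold folder (amount, []) sorted
```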

We sort the Coins by their Coin value. Note how we again can use the already defined valueOfCoin function.


As I said before: this wasn’t exactly an exercise in solving the Coin Change Kata but rather in specifying Properties to drive this implementation. I noticed that I must think on a higher level about the implementation instead of hard-coding the test values.

I don’t know which values FsCheck will provide me and that’s OK; I don’t need to know that. I just need to constrain the inputs so that I can predict the output without specifying exactly what that output should look like. Just specifying some Properties about the output.

Hopefully you found this a nice read and have enjoyed the way we write Properties in this example. Maybe now you’re inspired to write Properties for your own implementations. The full code can be found at my GitHub.

FsCheck can also be used from a C# environment instead of F#, so you don’t have to be an F# expert to write Properties. It’s a way of looking at tests: constraining the inputs so we can predict the outputs.

Thank you.

Categories: Technology
Tags: F#
written by: Stijn Moreels

Posted on Monday, September 25, 2017 12:00 PM

by Stijn Moreels

How can we use the F# Agent to set up a pipeline which can be used in Concurrent Applications? We need not only a working version but also a declarative version which we can easily extend and maintain.


In the context of Concurrent Applications, there are several architectural systems that describe the communication of the system. The Actor Model or an Agent-Based Architecture, is one of those systems we can use to write robust Concurrent Applications.

One of the challenges in Concurrent Programming is that several computations want to communicate with each other and share a state safely.

F# has implemented this model (or at least a part of it) with the Mailbox Processor, or its shorter name: Agent. See Thomas Petricek's blog post for more information.

Agent Pipeline


The way agents communicate with each other is by itself a lot harder to comprehend than a sequential or even a parallel system. Agents communicate by sending messages. An agent can alter its own Private State, but no “external” state; agents share state by sending messages to other agents.

This isolation of the agents in Active Objects can increase the system complexity. In this post, I will talk about a brain-dump exercise of mine to create an Agent Pipeline. I think a pipeline is an easier way to reason about agents, because we keep that “sequential” feeling.


In F# this is rather easy to express, so let’s try it!

F# Agent Template

First, let’s define the agent we are going to use. Our agent must receive a message, but must also send some kind of response, so the next agent can process it. The agent itself is an asynchronous computation, so the result of the message will be wrapped inside an Async type.

Our signature of calling our pipeline agent should thereby be something like this:

'a -> Async<'a>

Ok, let's define some basic prerequisites and our basic agent template. We’ll use a basic string as our message, just for example purposes.
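A minimal sketch of those prerequisites (the type names `Agent` and `Message` are my own choice, not necessarily the original post's):

```fsharp
// Shorter alias for the F# Mailbox Processor.
type Agent<'a> = MailboxProcessor<'a>

// Our message: the string payload, together with the channel
// on which the reply must be sent back.
type Message = string * AsyncReplyChannel<string>
```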

Now that we have our basic setup, we can start looking at the body of our agent. We must receive a message and send back a reply. This can be done with the basic method Receive(), which will return a tuple with the message itself and the channel we have to send our reply to. This call to “receive” will block the loop until the next message arrives. Agents run on a single logical thread, and all messages sent to agents are queued.

The body can be defined like this (print function to simulate the processing):
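A sketch of such an agent body (the print-based processing and the `pipe` helper are illustrative):

```fsharp
// An agent that receives a (message, reply channel) tuple, simulates
// some processing by printing, and replies with the message itself.
let agent name =
    MailboxProcessor<string * AsyncReplyChannel<string>>.Start (fun inbox ->
        let rec loop () = async {
            // Blocks until the next queued message arrives.
            let! (message, channel) = inbox.Receive ()
            printfn "%s: %s" name message
            channel.Reply message
            return! loop () }
        loop ())

// Posting to the agent gives us exactly the shape we want:
// string -> Async<string>.
let pipe (a : MailboxProcessor<_>) message =
    a.PostAndAsyncReply (fun channel -> message, channel)
```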

F# Async Binding

Ok, now that we have our basic agent, we can look at how to bind agents together.

Just like the previous diagram, I would like to express in code how messages are “piped” to other agents. When we think of piping, we can think of two approaches: Applicative (<*>) and Monadic (>>=). Since we need the result of the previous call in our next call, I’m going to use the Monadic style (>>=).

We see by looking at the signature that we must bind two separate worlds: the world of strings and the world of Async. Only looking at the signature makes me want to write some bind functions. So first, before we go any further, let's define some helper functions for our Async world:
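The two helpers could look like this (a sketch; `ret` avoids clashing with the `return` keyword):

```fsharp
module Async =
    // 'return' for the Async world: 'a -> Async<'a>
    let ret x = async.Return x
    // 'bind': ('a -> Async<'b>) -> Async<'a> -> Async<'b>
    let bind f x = async.Bind (x, f)

// Infix monadic bind, so messages can be "piped" from function to function.
let (>>=) x f = Async.bind f x
```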

These two functions should be enough to define our pipeline. Look at the signature of our bind:

('a -> Async<'b>) -> Async<'a> -> Async<'b>

This is just what we want in our agent signature. Now, I’m going to create some agents to simulate a pipeline:
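Putting it together, a self-contained sketch of such a pipeline (the stage names `receive`, `handle` and `send` are mine, not the original post's):

```fsharp
let (>>=) x f = async.Bind (x, f)

// A pipeline agent: prints the message it receives and replies with it.
let agent name =
    MailboxProcessor<string * AsyncReplyChannel<string>>.Start (fun inbox ->
        let rec loop () = async {
            let! (message, channel) = inbox.Receive ()
            printfn "%s: %s" name message
            channel.Reply message
            return! loop () }
        loop ())

let pipe (a : MailboxProcessor<_>) message =
    a.PostAndAsyncReply (fun channel -> message, channel)

// Three stages, each of type string -> Async<string>.
let receive = pipe (agent "receive")
let handle  = pipe (agent "handle")
let send    = pipe (agent "send")

// The message flows through the agents just like in the diagram.
let pipeline message =
    async.Return message >>= receive >>= handle >>= send

pipeline "hello" |> Async.RunSynchronously |> printfn "result: %s"
```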

Note that the pipeline of the agents is almost exactly like we designed it in our diagram. This is one of the many reasons I like F# so much: much more than in C#, you can declaratively express exactly how you see the problem. C#'s async/await is in fact inspired by F#'s Asynchronous Workflows.

Or, if you prefer, a Kleisli style (which I like to use sometimes). This makes sure that we don’t have to wrap the initial message in an Async:
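A sketch of that Kleisli-style composition, with plain Async functions standing in for the agent stages:

```fsharp
// Kleisli composition ("fish" operator) for the Async world:
// ('a -> Async<'b>) -> ('b -> Async<'c>) -> ('a -> Async<'c>)
let (>=>) f g x = async.Bind (f x, g)

// Stand-ins for the agent stages, each of type string -> Async<string>.
let receive m = async { printfn "receive: %s" m; return m }
let handle  m = async { printfn "handle: %s" m; return m }
let send    m = async { printfn "send: %s" m; return m }

// The stages compose directly; the message itself is a plain string.
let pipeline = receive >=> handle >=> send

pipeline "hello" |> Async.RunSynchronously |> ignore
```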


“Functional programming is the most practical way to write concurrent programs. Trying to write concurrent programs in imperative languages isn’t only difficult, it leads to bugs that are difficult to discover, reproduce, and fix”

- Riccardo Terrell (Functional Concurrency)

This was just a brain-dump exercise for myself in combining Monadic Binds and Agents. What I really learned is to look at the signature itself. Much more than in Object-Oriented languages, the signature isn’t a lie: it tells you exactly what is going on. By looking at the signature alone, you can already make a good guess of what the function will look like.

Functional Programming is still a bit strange at first if you come from an Object-Oriented world; but trust me, it’s worth learning. In a future where Asynchrony, Parallelism and Concurrency are considered “mainstream” topics, Functional Programming will become less and less of a niche.

Categories: Technology
Tags: F#
written by: Stijn Moreels

Posted on Monday, September 18, 2017 12:45 PM

Toon Vanhoutte by Toon Vanhoutte

Azure's serverless PaaS offering consists of Azure Functions and Logic Apps. If you consult the documentation, you'll find out that there is quite some overlap between the two. For many people, it's not clear what technology to use in what scenario. In this blog post, I discuss the main differences between these two event-driven Azure services and I provide some guidance to help you to make the right decision.


Developer experience

A popular comparison states that Azure Functions is code being triggered by an event, whereas Logic Apps is a workflow triggered by an event. This is reflected in the developer experience. Azure Functions are completely written in code, which currently supports JavaScript, C#, F#, Node.js, Python, PHP, batch, bash and PowerShell. In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow definition language in the code view. Each developer has of course his/her personal preference. Logic Apps is much simpler to use, but that simplicity can cause limitations in complex scenarios. Azure Functions gives a lot more flexibility and responsibility to the developer.


Connectivity

Logic Apps connects to an enormous variety of cloud / on-premise applications, going from Azure and Microsoft services over SaaS applications and social media to LOB systems. You can find the impressive list of connectors here. Each connector comes with an API connection, that stores the required credentials in a secure way. These API connections can be reused from within multiple Logic Apps, which is great! Azure Functions have the concept of triggers, input and output bindings. Most of these bindings connect your Azure Functions to other Azure services, such as Event Hubs, Storage, DocumentDb, etc… Consult the complete list here. The HTTP binding is probably the most popular one, as it allows the creation of serverless API's. At the moment, there are no signs that Azure Functions aims to support as many bindings as Logic Apps offers connectors.

Exception handling

Cloud solutions need to deal with transient fault handling. Logic Apps provides out-of-the-box functionality that allows you to configure automatic retries on every action. In case this doesn't solve the problem, the workflow gets a failed status and can be resubmitted after human intervention. This guarantees an at-least-once execution model, which is pretty reliable! In Azure Functions, you have the typical try/catch options available. If you want to enable retries, you need to do the plumbing yourself, by introducing for example Polly. The way you can handle exceptions in the output binding depends on the language used and the type of output binding. This doesn't always give you the desired outcome. There are no resume / resubmit capabilities, unless you develop them yourself!


Long-running processes

Until recently, Azure Functions always needed to be stateless and preferably idempotent. With the announcement of Azure Durable Functions, Microsoft brings state and long-running capabilities to Azure Functions, by leveraging the Durable Task Framework. This new framework allows sequential and parallel execution of several Functions, supports long-running tasks with pre-defined timeouts and provides stateful actors without the need for external storage. The state is automatically stored in Azure Storage queues, tables and blobs, which is disaster proof. I am looking forward to seeing how this will evolve. These long-running / stateful processes are inherently available in Logic Apps, except for the stateful actor model.


Networking

Hybrid integration is reality nowadays. Cloud services must be able to connect to on-premises resources in a secure and high-performing way. Azure Logic Apps performs this task via the On Premises Data Gateway, that needs to be installed on premises. Behind the scenes, it uses Azure Service Bus Relay to connect to the cloud in a firewall friendly way, through encrypted channels. When using Azure Functions within an App Service Plan, you have more convenient hybrid connectivity options that reside on the network level. App Service Plans offer support for many networking options like Hybrid Connections, VNET Integration and App Service Environment. Via these options, you can integrate Azure Functions with your local network through a Site-to-Site VPN or ExpressRoute.


Deployment

Azure Resource Manager templates are the way to deploy resources across the Microsoft Azure platform. Fortunately, both Azure Functions and Logic Apps have built-in support for ARM deployments, through for example Visual Studio Release Management. Next to this, Azure Functions allows easy setup of continuous deployments triggered from sources like BitBucket, Dropbox, Git, GitHub, OneDrive and VSTS. This is ideal in case multiple and frequent contributions need to be consolidated and tested. Additionally, Azure Functions now has deployment slots in preview. This allows deploying and testing a vNext first, before you swap that tested deployment slot with the current version in production.


Hosting

Logic Apps run only in the cloud, as it has a dependency on Microsoft-managed connectors. As a consequence, you cannot debug, test or run Logic Apps locally. Azure Functions can be easily developed and debugged on your local workstation, which is a big plus to increase developer productivity. Via the Azure Functions Runtime (still in preview) you are able to deploy them on premises in Windows Containers, with SQL Server as a storage layer. Azure Functions can also run on Azure Stack, and it has been announced as part of Azure IoT Edge, to execute on small devices. This hosting flexibility is a big asset in phased migration scenarios towards the cloud.


Monitoring

Per Logic App, you have a nice overview of the previous runs and their corresponding outcome. You can filter this history, based on a time period and the resulting run status. The monitoring view of a workflow run is the same as the designer view, which makes it very intuitive. For each action, you can see the status and all inputs/outputs. With one button click, you can enable integration with OMS, where you can search on tracked properties. It's on the roadmap to have a user-friendly, cross Logic Apps dashboard on top of this OMS integration. Each Azure Function comes with a Monitor tab, where you can see the execution history. There is also a live event stream that shows near real-time processing statistics in nice graphs. On top of that, there's full integration with Application Insights, where you can take advantage of the powerful Analytics queries.

Pricing Model

Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. It's important to be aware that you also need to pay for polling triggers, which can be a hidden cost. If you want to benefit from the capabilities of the Integration Account, you should be aware that this comes with a fixed monthly bill. With Azure Functions, you have two pricing options. You can opt for the fixed cost of an App Service Plan, where you reserve compute power on which you can run Azure Functions, but also Web, Mobile and API Apps. The second option is completely serverless: a consumption plan based on resource consumption (memory/s) and the number of executions. Don’t forget that the Azure Storage layer also comes with a rather small cost.


Security

Each particular binding or connector comes with its own security. In this section, I focus on the security of Logic Apps and Azure Functions exposed as an API. In order to access a Logic App with the HTTP trigger, the client must include a Shared Access Signature in the URL. The signature is generated via a secret key that can be regenerated at any time. There is also the ability to restrict access, based on incoming IP addresses. To add more authorization logic, you can put Azure API Management in front of it. Azure Functions has a similar concept of API keys. The API key can be shared for the whole Function App (host key) or you can create a specific one for your Function. If you run your Azure Function in an App Service Plan, you can leverage its codeless authentication functionality with Active Directory, Google, Facebook, etc… Real authorization requires a small code change. Azure Function Proxies can be a lightweight alternative to full-blown API Management, to add security on top of your HTTP triggered Functions.


Conclusion

Based on the comparison above, you'll notice that a lot of factors are involved when deciding between the two technologies. At first, it's important to see what technology supports the connectivity that you require. Do you want to write it yourself, or do you want to leverage out-of-the-box bindings / connectors? Next to that, my general guidance is as follows:

When dealing with synchronous request/response calls, that execute more complex logic, Azure Functions is the preferred option. Logic Apps is better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing. When using Logic Apps, they can be perfectly extended with Azure Functions to execute stateless tasks that cannot be fulfilled by the out-of-the-box Logic Apps capabilities.

Web API's are often composed of both sync and async operations. If you follow the guidance stated above, you might end up with an API that uses both Azure Functions and Logic Apps. This is where Azure Functions Proxies has its value, as it can expose these separate microservices as a unified API. This will be discussed in another blog post.

Stay tuned for more!

Categories: Azure
written by: Toon Vanhoutte