Codit Blog

Posted on Friday, October 13, 2017 12:05 AM

CONNECT 2017 was a two-day event organized by Codit, filled with integration concepts and the latest trends in Internet of Things and Azure technologies. Read the recap here.


CONNECT 2017 focused on Digital Transformation, with international speakers from Microsoft, the business and the community. The full-day event was organized in Utrecht and Ghent and inspired participants to strengthen their integration strategy and prepare for the next steps towards a fully connected company.

This blog post captures the key takeaways and some of the lessons learned during both days.

[NL] Opening keynote - Ernst-Jan Stigter, Microsoft Netherlands

Ernst-Jan started off by stating that we can all agree the cloud is here to stay, and that the next step is to accelerate by applying Digital Transformation. Microsoft's vision on Digital Transformation focuses on bringing people, data and processes together to create value for your customers and keep your competitive advantage. In his keynote, Ernst-Jan explained the challenges and opportunities this Digital Transformation offers.

Microsoft's Digital Transformation framework focuses on four pillars: Empower employees, Engage customers, Optimize operations and Transform products, where the latter is an outcome of the first three. Digital Transformation is enabled by the modern workplace, business applications, applications & infrastructure, data and AI.

Ernst-Jan continued by laying out Microsoft's strategy towards IoT. By collecting, ingesting and analyzing data, and acting upon it, customers can be smarter than ever before and build solutions that were unthinkable in the past. He shared IoT use cases at Dutch Railways, the City of Breda, Rijkswaterstaat and Q-Park to illustrate this.

[BE] Opening keynote - Michael Beal, Microsoft BeLux

Michael started by explaining what Digital Transformation means and Microsoft's vision on the subject. Microsoft is focusing on empowering people and building trust in technology.

Michael continued his talk with the vision of Microsoft on the Intelligent cloud combined with an intelligent edge. To wrap up, Michael talked about how Microsoft thinks about IoT and how Microsoft is focusing on simplifying IoT.

Democratizing IoT by allowing everyone to access the benefits of IoT and providing the foundation of Digital Transformation is one of the core missions of Microsoft in the near future.

A great inspiring talk to start the day with in Belgium.

[NL/BE] Hybrid Integration and the power of Azure Services - Jon Fancey, Microsoft Corp

Jon Fancey is a Principal Product Manager at Microsoft and is responsible for the BizTalk Server, Logic Apps and Azure API Management products.

He shared his vision on integration and the continuous tension between forces and trends in the market. He explained that companies need to manage change effectively to be able to adapt in a quickly changing environment.

Azure enables organizations to innovate their businesses. To deal with digital disruptions (rapidly evolving technology), Digital Transformation is required. Jon went through the evolution of inter-organizational communication technologies, from EDI, RPC, SOAP and REST to Swagger/Open API.
Logic Apps now has more than 160 connectors available for all types of needs: B2B, PaaS, SaaS, and so on. This number is continually growing, and if needed you can build your own connector and use it in your Logic Apps.

Today, Azure Integration Services consists of BizTalk Server, Logic Apps, API Management, Service Bus and Azure Functions. Each of these components can be leveraged in several scenarios and, when combined, they open up countless opportunities. Jon talked about serverless integration; key advantages are reduced DevOps effort, reduced time-to-market and per-action billing.

[NL] Mature IoT Solutions with Azure - Sam Vanhoutte, Codit

In this session Sam Vanhoutte, CTO of Codit, explained how businesses can leverage IoT solutions to establish innovation and agility.

He first showed some cases of enterprises that are using IoT today to create innovative solutions with relatively little effort, while keeping the total cost of ownership low. He showed how a large transport company combined Sigfox (an IoT connection service), geofencing and the Nebulus IoT gateway to track "black circuit" movements of containers. Sam also showed how a large manufacturer of food processing machines uses IoT to connect existing machines to gather data for remote monitoring and predictive maintenance, even though these machines communicate with legacy protocols.

Next, Sam reflected on the pitfalls of IoT projects and how to address them. He stressed the importance of executive buy-in: solutions will rarely make it to production if this is lacking. Sam also advised using the existing installed base of enterprises to decrease the time to market and add value fast, which can be achieved by adding an IoT gateway. You also need to think about how to process all the data these devices generate, and add some filtering and aggregation before storage costs become too high. Finally, Sam stressed the importance of securing and patching the devices.

One last thing to keep in mind is to spend your money and time wisely in an IoT project: use the bits and pieces from the cloud platform that are already there and focus on value generators. In the last part of the presentation, Sam showed how the Nebulus gateway takes care of the heavy lifting of connecting devices and how it can jumpstart a company's journey into its first IoT project.

[BE] Cloud Integration: What's in it for you? - Toon Vanhoutte & Massimo Crippa, Codit

During this session Toon Vanhoutte (Lead Architect) and Massimo Crippa (API Management Domain Lead) gave us more information about different integration scenarios.

Massimo started by showing the different scenarios of yesterday, today and tomorrow. In the past, everything ran on-premises. Nowadays we have a hybrid landscape, which brings the huge advantage of connectivity (for example the ease of use of Logic Apps), an integrated Azure environment, velocity (e.g. the continuous releases for Logic Apps) and networking (VNET integration).

Toon introduced Cloud Integration, which has the following advantages: serverless technology, a migration path, consumption-based pricing and ALM (continuous integration & delivery). The shift towards the cloud can start with IaaS (Infrastructure as a Service), whose main advantages are availability, security and lower costs. But why should we choose Hybrid Integration? It offers flexibility and agility towards your customers, and it is future-proof. Serverless integration reduces the total cost of ownership: you have less DevOps effort and you can instantly scale your setup, with huge business value.

Massimo told us that security is very important, covering governance, firewalls, identity and access rules. Another topic is monitoring, in its many different forms.

The Codit approach in moving forward is a mix between on-premises (BizTalk, Sentinet, SQL Server) and an Azure infrastructure.

[NL] The Microsoft Integration Platform - Steef-Jan Wiggers, Codit

The presentation of Steef-Jan started with an overview of the application landscape of yesterday's, today's and tomorrow's organizations. Previously, all applications, which were mostly server products, ran in on-premises data centers. Today, the majority of enterprises have a hybrid application landscape: the core applications still run on-premises, but they are already using some SaaS applications in the cloud. Tomorrow, cloud-based applications will take over our businesses.

The integration landscape is currently undergoing a shift from on-premises to hybrid to the cloud. On-premises integration is based on BizTalk Server, and on Sentinet for API Management. BizTalk is used for running mission-critical production workloads and Sentinet for virtualizing APIs with minimal latency. Both have been made cloud-ready: adapters for Logic Apps (on-premises data gateway) and Service Bus (queues, topics and relays) have been added to BizTalk, while Sentinet integrates with Azure Service Bus and puts more focus on REST, OAuth and OpenID. In hybrid integration, Logic Apps is used for connecting to the cloud, as is API Management.
You have the advantage of continuous releases, moving faster and adapting faster to change. For networking you can use VNET integration and relays. Cloud integration has the advantage of serverless integration (no server installation & patching, inherent high availability, …).
The pricing is consumption-based: pay per executed action.

Different paths are available to move from on-premises to cloud: "it should be a natural evolution and not a revolution".
One way is IaaS integration, for obtaining better availability for your server infrastructure; IaaS also improves security and lowers costs. Hybrid integration gives you flexibility in your application landscape: it is agile towards the business and you can release faster, and a hybrid setup ensures you are set for the future. Serverless integration reduces the effort you put into operations tremendously: no more server patching, backups… The costs are lower and you gain the ability to scale much faster as well.

The Codit Approach

If you look at the hybrid integration platform, you can distinguish several blocks. On-premises has the known integration technologies. In Azure you find the standard compute and storage options. Connectivity enables smooth integration between on-premises and the cloud. Messaging solutions like Service Bus and Event Grid allow decoupling of applications. For integration, Logic Apps is used to orchestrate all integrations, which can be extended via Azure Functions and API Apps. Integration with Azure API Management ensures governance and security, using Azure AD and Azure Key Vault. Administration and operations are done using VSTS Release Management, to roll out the solutions throughout the DTAP street in a consistent manner. A role-based monitoring experience is offered by App Insights for developers, OMS for operations and Power BI reports for business users.

Codit wants you to be fully connected: Integration is the backbone of your Digital Transformation. Now more than ever.

[NL] How the Azure ecosystem is instrumental to your IoT solution - Glenn Colpaert, Codit

IoT is here to stay, so we'd better get ready for it. In the future everything will be connected, even cows. Glenn kicked off his session by giving a good overview of all main IoT pillars, ranging from data storage & analytics to edge computing and connectivity and device management. Of course, those are not the only things to take into account. Security is often forgotten, or "applied on top of it" later on. But security should be designed from the ground up. Microsoft's goal is to simplify IoT from several perspectives: Security, Device Management, Insights, Edge. Microsoft Azure provides a whole ecosystem of services that can assist you with this:

  • Azure IoT Hub provides a gateway between the edge and the cloud, with Service Assisted Communication built in by default
  • Azure Stream Analytics performs near-real-time stream processing
  • Service Fabric or Azure Functions host your custom business logic
  • Azure Logic Apps enables business connectivity for building a hybrid story
  • Azure Time Series Insights enables real-time streaming insights
  • Visual Studio Team Services lets you set up DevOps pipelines

However, when you want to get your feet wet, Azure IoT Central & Solutions are a very easy way to start. Start small and play around before spending a big budget on custom development. Using a Raspberry Pi simulator, Glenn showed how easy it is to send telemetry to Azure IoT Hub, and how you can visualize all the telemetry with Azure Time Series Insights without writing a single line of code. The key takeaways from this session are:

  • Data - value is created by making sense of your data
  • Insights - connect insights back to the business
  • Security - start thinking about security from day zero
  • Edge - IoT Edge is there for low-latency scenarios
  • Evolve - learn by experience with new deployments

If you are interested in learning more about data storage & analytics, we highly recommend reading Zoiner Tejada's Mastering Azure Analytics.

[NL/BE] Event-Driven Serverless Architecture - the next big thing in the cloud - Clemens Vasters, Microsoft Corp

Clemens started the session by explaining the "Serverless" concept, which frees you entirely from any infrastructure pain points. You don't have to worry about patching, scaling and all the other infrastructure tasks that you normally have in a hosted environment. It lets you focus solely on your apps & data. Very nice! Clemens taught us that there are different PaaS options for hosting your services, each having its own use cases and advantages.

Managed Cluster

Applications are being deployed on a cluster that handles the placement, replication, ownership consensus and management of stateful resources. This option is used to host complex, stateful, highly reliable and always-on services.

Managed Middleware

Applications are deployed on sets of independent "stateless" middleware servers, like web servers or pure compute hosts. These applications may be "always-on" or "start on demand" and typically maintain a shared cached state and resources.

Managed Functions

Function implementations can be triggered by a configured condition (event-driven) and are short-lived. There is a high level of abstraction of the infrastructure where your function implementations are running.

Next to that, you have different deployment models you can use to host your services. The classic "monolith" approach divides the functional tiers over designated role servers (like a web server, database server, …). The disadvantage of this model is that you need to scale your application by cloning the service on multiple servers or containers. The more modern approach is the "microservice" approach, where you separate functionality into smaller services and host them as a cluster. Each service can be scaled out independently by creating instances across servers or containers. It's an autonomous unit that manages a certain part of a system and can be built and deployed independently.

[BE] Maturing IoT Solutions with Microsoft Azure - Sam Vanhoutte & Glenn Colpaert, Codit

Sam and Glenn kicked off their session talking about the IoT End-to-End Value chain. A typical IoT solution chain is comprised of the following layers:

  • Devices are the basis for the IoT solution because they connect to the cloud backend.
  • The Edge brings the intelligence layer closer to the devices to reduce the latency.
  • Gateways are used to help devices to connect with the cloud.
  • The Ingestion layer is the entry into the IoT backend and is typically the part that must be able to scale out to handle a lot of parallel incoming data streams from the (thousands of) devices.
  • The Automation layer is where business rules, alerting and anomaly detection typically take place.
  • The Data layer is where analytics and machine learning typically take place and where all the stored data gets turned into insights and information.
  • Report and Act is all about turning insights into action, where business events get integrated with the backend systems, or where insights get exposed in reports, apps or open data.

At Codit, we have built a solution, the Nebulus IoT Gateway, that helps companies jumpstart the IoT connectivity phase and generate value as quickly as possible. The Gateway is a software-based IoT solution that instantly connects your devices to (y)our cloud. The gateway provides all required functionality to cope with connectivity issues, cloud-based configuration management and security challenges.

As integration experts, we at Codit can help you simplify this IoT Journey. Our IoT consultants can guide you through the full IoT Service offering and evolve your PoC to a real production scenario.

The session ended with the following conclusion:

[NL/BE] Closing keynote - Richard Seroter, Pivotal

The theory of constraints tells you the way to improve performance is to find and handle bottlenecks. This also applies to Integration and the software delivery of the solution. It does not matter how fast your development team is working if it takes forever to deploy the solution. Without making changes, your cloud-native efforts go to waste.

Richard went on comparing traditional integration with cloud-native integration, showing the move is also a change in mindset.

A cloud-native solution is composable: it is built by chaining together independent blocks, allowing targeted updates without the need for downtime. This is part of the always-on nature of the integration: a cloud-native solution assumes failure and is built for it. Another aspect is that the solution is built for scale: it scales with demand, and the different components do this separately. Making the solution usable for 'citizen integrators' by developing for self-service will reduce the need for big teams of integration specialists. The integration project should be done with modern resources and connectors in mind, allowing for more endpoints and data streams. The software lifecycle will be automated: the integration can no longer be managed and monitored by people. Your software is managed by your software.


Thank you for reading our blog post; feel free to comment or give us feedback in person. You can find the presentations of both days on the following links:

This blogpost was prepared by:

Glenn Colpaert - Nils Gruson - René Bik - Jacqueline Portier - Filiep Maes - Tom Kerkhove - Dennis Defrancq - Christophe De Vriese - Korneel Vanhie - Falco Lannoo

Categories: Community

Posted on Thursday, October 12, 2017 11:35 PM

by Toon Vanhoutte

After my first blog in this series about Azure Function Proxies, I received several questions related to API management. People were curious how to position Azure Function Proxies compared to Azure API Management. It should be clear that Azure Function Proxies has some very limited API management functionality, but it comes nowhere near the capabilities of Azure API Management! Comparing the feature set of these two Azure services doesn't make sense, as Azure API Management supersedes Azure Function Proxies on all levels. Don't forget why Azure Function Proxies was introduced: it's to unify several separate functions into an API, not to provide full-blown APIM.  Let's just touch upon the functionalities that they have more or less in common!

Common Functionalities


Transformation

Azure Function Proxies has limited transformation capabilities on three levels: rewriting the URI, modifying the HTTP headers and changing the HTTP body. The transformation options are very basic and focused on just creating a unified API. Azure API Management, on the other hand, has an impressive range of transformation capabilities.
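To make the Function Proxies side concrete, here is a hedged sketch of what such overrides look like in a proxies.json file (the proxy name, route, backend URL and header values below are all invented for illustration):

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "getOrder": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/orders/{id}"
      },
      "backendUri": "https://contoso-backend.example.com/legacy/orders/{id}",
      "requestOverrides": {
        "backend.request.headers.x-functions-key": "%BACKEND_KEY%"
      },
      "responseOverrides": {
        "response.headers.Content-Type": "application/json"
      }
    }
  }
}
```

The route/backendUri pair rewrites the URI, while requestOverrides and responseOverrides modify headers (and can also modify the body); the %BACKEND_KEY% syntax references an application setting, so secrets stay out of the file.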

These are the main transformation policies:

Next to these policies, you have the opportunity to write policy expressions that inject .NET C# code into your processing pipeline, to make it even more intelligent.
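As an illustration (not taken from the original post; all names and values are invented), an inbound policy section might combine a built-in transformation policy with a policy expression written in C#:

```xml
<!-- Illustrative sketch: route template and header name are invented. -->
<inbound>
    <base />
    <!-- Built-in transformation policy -->
    <rewrite-uri template="/orders/{id}" />
    <!-- Policy expression: inject C# code into the processing pipeline -->
    <set-header name="x-request-id" exists-action="override">
        <value>@(Guid.NewGuid().ToString())</value>
    </set-header>
</inbound>
```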


Security

Azure Function Proxies supports any kind of backend security that can be accomplished through static keys or tokens in the URL or HTTP headers. On the frontend, Azure Function Proxies offers out-of-the-box authentication enforcement by several providers: Azure Active Directory, Facebook, Google, Twitter & Microsoft. Azure API Management has many options to secure the frontend and backend API, going from IP restrictions to inbound throttling, from client certificates to full OAuth2 support.

These are the main access restriction policies:

  • Check HTTP header - Enforces the existence and/or value of an HTTP header.
  • Limit call rate by subscription - Prevents API usage spikes by limiting call rate, on a per subscription basis.
  • Limit call rate by key - Prevents API usage spikes by limiting call rate, on a per key basis.
  • Restrict caller IPs - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
  • Set usage quota by subscription - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
  • Set usage quota by key - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
  • Validate JWT - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.
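As a hedged sketch (the endpoint, limits and key expression are invented), two of these policies could be combined in an inbound section like this:

```xml
<inbound>
    <base />
    <!-- Limit call rate by key: 10 calls per 60 seconds, per subscription -->
    <rate-limit-by-key calls="10" renewal-period="60"
                       counter-key="@(context.Subscription.Id)" />
    <!-- Validate a JWT extracted from the Authorization header -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://login.microsoftonline.com/common/.well-known/openid-configuration" />
    </validate-jwt>
</inbound>
```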

These are the main authentication policies:

Hybrid Connectivity

Azure Function Proxies can leverage the App Service networking capabilities, if they are deployed within an App Service Plan. This gives three powerful hybrid network integration options: hybrid connections, VNET integration or App Service Environment. Azure API Management, premium tier, allows your API proxy to be part of a Virtual Network. This provides access to all resources within the VNET, which can be extended to on-premises through a Site-to-Site VPN or ExpressRoute. On this level, both services offer quite similar functionality.


Scope

The scope of Azure Function Proxies is really at the application level. It creates one single uniform API that typically consists of multiple heterogeneous backend operations. Azure API Management has more of an organizational reach and typically governs (large parts of) the APIs available within an organization. Both services can also be combined. The much broader scope of API Management also results in a much richer feature set: e.g. the publisher portal to manage APIs, the developer portal with samples for quick starts, advanced security options, the enormous range of runtime policies, a great versioning experience, etc.

Use cases

These are some use cases where Azure Function Proxies has already proven very beneficial:

  • Create a single API that consists of multiple Azure Functions and / or Logic Apps
  • Create a pass-through proxy to access on-premises APIs, without any coding
  • Generate a nicer URL for AS2 endpoints that are hosted in Azure Logic Apps
  • Generate a simple URL for Logic Apps endpoints, that works better for QR codes
  • Add explicit versioning in the URL of Azure Functions and / or Logic Apps


Azure Function Proxies really has added value in the modern world of APIs, which often consist of multiple heterogeneous (micro)service operations. It offers very basic runtime API management capabilities that reside at the application level.


Categories: Azure
Tags: Functions
written by: Toon Vanhoutte

Posted on Thursday, October 5, 2017 11:43 PM

by Stijn Moreels

This post expands on that subject: how I changed my way of writing code, and how I became a functional guide.


One of the things Functional Programmers and enthusiasts come across very often when working in a “mainstream” development environment is that they must work together with non-functional programmers.

My opinion is that people should not try to convince others of their opinion, but instead only show how they do certain things and let the audience decide what they feel most comfortable with.

That’s exactly what happened to me: I worked on a project and, because I used functional approaches, people asked me to explain some of the concepts that I introduced to the rest of the team.


Instead of directly talking about Higher-Order Functions, Currying, Monads, Catamorphisms, Endomorphisms, … I decided that I wanted to start with the simplest thing I think was possible to change.


In functional programming, we try to compose all the functions together to create new functions. In imperative languages, we use variables that we can send to the next function instead of sending the result of the first function directly. Piping and Composition are those building blocks.

Note that Composition is the very root of all software development. It always feels to me that functional programming is using this all the way: Everything is an expression, and so, everything can be composed.

My first change was to add an extension method called ‘PipeTo’ in the C# project:
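The original snippet was published as an image, so here is a minimal sketch of what such an extension could look like (the exact signature is my assumption of the post's intent):

```csharp
using System;

public static class PipeExtensions
{
    // Sends 'value' into 'func', so the value can be written before the method:
    // value.PipeTo(Method) instead of Method(value).
    public static TResult PipeTo<T, TResult>(this T value, Func<T, TResult> func)
        => func(value);
}

public static class Program
{
    public static void Main()
    {
        // "abc" flows left-to-right through the pipeline.
        int length = "abc".PipeTo(s => s.ToUpper()).PipeTo(s => s.Length);
        Console.WriteLine(length); // 3
    }
}
```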

What piping really means to me is just the order in which you express a value and a function. With piping, you can change that order. Normally we would type the method and then the argument that we want to send to that method.

What piping allows me to do, is to first write the value and then the method.

This simple extension allowed me to write quite powerful expressions:

This is a dummy example of how you can use this approach. Note that I use FsCheck as a testing framework and that my test is actually expressed in a single line.

I see two major benefits about this approach:

  1. First, when I use this piping method, I don’t have to declare intermediate variables that sometimes only clutter the actual functionality you want to write. Together with the variables, in C# we must also express the types (if we don’t use ‘var’ everywhere); so it struck me that I wasted time reading types instead of reading method names.
  2. Second, instead of assigning the result of a method to a variable, we can immediately send it to the next method. This allows us to write in the same order as the data flow, like we would express it with intermediate variables.

To go a little deeper into the second benefit, this is what it looks like with intermediate variables:
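The original code was an image as well; as a hypothetical stand-in (the data and names below are invented for illustration), each step gets its own intermediate variable so the data flows from top to bottom:

```csharp
using System;
using System.Linq;

public static class Program
{
    public static void Main()
    {
        // Each intermediate variable holds one step of the flow.
        string sentence = "the quick brown fox";
        string[] words = sentence.Split(' ');
        int longWords = words.Count(w => w.Length > 3);
        Console.WriteLine(longWords); // 2
    }
}
```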

We need these intermediate variables to have the data flow from top to bottom (like you read a book). To get rid of the intermediate variables without ‘PipeTo’, we could inline them:
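A hypothetical, self-contained sketch of such inlining (method names invented for illustration) — note how the expression now reads inside-out:

```csharp
using System;

public static class Program
{
    public static string[] SplitIntoWords(string s) => s.Split(' ');

    public static int CountLongWords(string[] words)
    {
        int count = 0;
        foreach (string w in words)
            if (w.Length > 3) count++;
        return count;
    }

    public static void Main()
    {
        // Inlined: no intermediate variables, but the reader must parse the
        // expression inside-out (SplitIntoWords runs first, yet appears last).
        Console.WriteLine(CountLongWords(SplitIntoWords("the quick brown fox"))); // 2
    }
}
```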

But I hope you find this less readable; that’s why we would extract separate variables, for the same reason we can use ‘PipeTo’: to have the data flow from top to bottom while keeping readability.


In F#, we use the ‘Option’ type to indicate that a value might be missing. In Haskell we use ‘Maybe’. By default, in C# and other imperative languages, we use ‘null’ to indicate that there’s a value missing.

I’m not going to explain fully why, because that would lead us too far. There are many posts and books that explain this. Even the inventor of the ‘null’ reference, Tony Hoare, called it his ‘Billion Dollar Mistake’.

So, we use another type to indicate this missing value. So what?

Well, this is very powerful and a lot more robust, because now you know exactly where a value is present and where not. C# (for example) doesn’t have any such thing for reference types, but it has a weaker counterpart called ‘Nullable<>’ for value types.

My second change was to implement some basic functionality of the Maybe Monad.

The ‘Map’ in F# is the ‘Select’ in C#,
the ‘Filter’ in F# is the ‘Where’ in C#.
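The implementation itself was shown as an image; a minimal sketch of a ‘Maybe’ type exposing ‘Bind’, ‘Select’ and ‘Where’ could look like this (everything beyond those member names is my assumption):

```csharp
using System;

public sealed class Maybe<T>
{
    private readonly T _value;
    public bool IsPresent { get; }

    private Maybe() { }
    private Maybe(T value) { _value = value; IsPresent = true; }

    public static Maybe<T> Just(T value) => new Maybe<T>(value);
    public static readonly Maybe<T> Nothing = new Maybe<T>();

    // Runs 'f' only when a value is present; otherwise stays Nothing.
    public Maybe<TResult> Bind<TResult>(Func<T, Maybe<TResult>> f)
        => IsPresent ? f(_value) : Maybe<TResult>.Nothing;

    // 'Select' is the C# name for F#'s 'Map'.
    public Maybe<TResult> Select<TResult>(Func<T, TResult> f)
        => IsPresent ? Maybe<TResult>.Just(f(_value)) : Maybe<TResult>.Nothing;

    // 'Where' is the C# name for F#'s 'Filter'.
    public Maybe<T> Where(Predicate<T> predicate)
        => IsPresent && predicate(_value) ? this : Nothing;

    public T GetOrElse(T fallback) => IsPresent ? _value : fallback;
}

public static class Demo
{
    public static void Main()
    {
        int result = Maybe<int>.Just(5).Select(x => x * 2).Where(x => x > 5).GetOrElse(0);
        Console.WriteLine(result); // 10
    }
}
```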

With this simple implementation, I’ve created some functionality that we can use to start implementing missing values with the ‘Maybe’ type.

The first thing I explained about this type is the binding functionality. When you show people the endless ‘if’ structures that would all check ‘IsPresent’ on this type, you can show that this is exactly what ‘Bind’ does:

Normally these ‘if’ structures would check for ‘null’. If we use our already-known refactoring practices, we see that there’s duplication. The thing that’s variable is the action that must be executed when there’s a value. This is exactly what the ‘Bind’ method gets, so we could rewrite it like this:
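As a hedged, self-contained sketch (the lookup functions and the tiny ‘Maybe’ below are invented so the example compiles on its own), the rewrite replaces every ‘IsPresent’ check with one ‘Bind’ per step:

```csharp
using System;

public sealed class Maybe<T>
{
    private readonly T _value;
    public bool IsPresent { get; }
    private Maybe(T value, bool present) { _value = value; IsPresent = present; }
    public static Maybe<T> Just(T v) => new Maybe<T>(v, true);
    public static Maybe<T> Nothing() => new Maybe<T>(default(T), false);
    public Maybe<R> Bind<R>(Func<T, Maybe<R>> f) => IsPresent ? f(_value) : Maybe<R>.Nothing();
    public T GetOrElse(T fallback) => IsPresent ? _value : fallback;
}

public static class Program
{
    // Hypothetical steps that each may fail to produce a value.
    public static Maybe<string> FindUser(int id)
        => id == 1 ? Maybe<string>.Just("stijn") : Maybe<string>.Nothing();

    public static Maybe<string> FindEmail(string user)
        => Maybe<string>.Just(user + "@example.com");

    public static void Main()
    {
        // Without Bind: a nested 'if (IsPresent)' check at every step.
        // With Bind: only the variable part (what to do with the value) remains.
        string email = FindUser(1).Bind(FindEmail).GetOrElse("unknown");
        Console.WriteLine(email); // stijn@example.com
    }
}
```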

The other two methods, ‘Where’ and ‘Select’, will be familiar to you if you know LINQ. It’s strange to see that experienced C# developers know the functionality of LINQ but aren’t yet using the concepts behind LINQ in their own designs. LINQ is functional programming.

The ‘Select’ takes the value from the ‘Maybe’ instance (if there is one) and executes a method that accepts this value. The result of the ‘Select’ is then a new ‘Maybe’ instance with the result of the just-executed method.

The ‘Where’ takes a predicate and returns a ‘Maybe’ instance only if a value is present and the predicate holds for the value inside the ‘Maybe’.

This type itself isn’t functional, but what we do with it is; that’s why I think it’s also a good first step into functional programming in C#.


I showed some examples of how we can achieve a more functional approach in an object-oriented language like C#. We can extend this idea and come up with even more extensions:

The first ones are probably the simplest. We define a foreach extension that we can use to run through a list of items and execute a (side-effect/dead-end) function on each one. We also define a ‘Tee’ that we can use to send a ‘dead-end’ function inside a pipeline: we don’t have to stop after our method returns ‘void’; we can just continue with the original value.

I also added a ‘Use’ extension to pipe a disposable resource and a ‘Compose’ extension to compose two functions together into a single function.
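Since those snippets were images too, here is a minimal sketch of what these extensions could look like (the signatures are my assumptions based on the descriptions above):

```csharp
using System;
using System.Collections.Generic;

public static class FunctionalExtensions
{
    // Runs a dead-end (side-effect) action on every item in the sequence.
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (T item in source) action(item);
    }

    // Runs a dead-end action inside a pipeline, then continues with the value.
    public static T Tee<T>(this T value, Action<T> action)
    {
        action(value);
        return value;
    }

    // Pipes a disposable resource into a function and disposes it afterwards.
    public static TResult Use<T, TResult>(this T resource, Func<T, TResult> f)
        where T : IDisposable
    {
        using (resource) return f(resource);
    }

    // Composes two functions into one: first.Compose(second) == x => second(first(x)).
    public static Func<T, TResult> Compose<T, TMiddle, TResult>(
        this Func<T, TMiddle> first, Func<TMiddle, TResult> second)
        => x => second(first(x));
}

public static class Demo
{
    public static void Main()
    {
        Func<int, int> addOne = x => x + 1;
        Func<int, int> twice = x => x * 2;
        Console.WriteLine(addOne.Compose(twice)(3)); // 8
        new[] { 1, 2, 3 }.ForEach(Console.WriteLine);
    }
}
```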

Now, I think it would be a good exercise in functional programming to come up with some changes in your code that use these extensions!


Instead of directly writing the software in a different language, with different keywords, syntax, practices, … I discovered that people are more comfortable if I use functional approaches first in the language they're familiar with.

This way, you can clearly see in the same language syntax the “Before” and “After” part.

Remember: don’t try to convince other people of your opinion. Everyone has a different view and different ways he or she works and feels comfortable. The only thing you can do is show how you work and how you see things, without blaming other languages or people because they use a different approach.

Just like everything else, try to be open-minded!

Categories: Technology
written by: Stijn Moreels

Posted on Wednesday, October 4, 2017 1:52 PM

by Toon Vanhoutte

By creating a uniform API on top of several heterogeneous service operations, we also simplify the security model for the API consumer.

After the configuration we've done in part 1, we've hidden the complexity of maintaining 4 SAS tokens and 1 function code client-side. Be aware that, at the moment, an Azure Function Proxy is not secured by default. In some cases this might be the desired behaviour; in other scenarios we would like to restrict access to the API. Let's have a look at how we can achieve the latter!

Enforce Authentication

You can leverage the default App Service authentication feature, that forces clients to get authenticated against one of these providers: Azure Active Directory, Facebook, Google, Twitter & Microsoft. This can be done without any code changes. This only covers authentication. When authorization is required, some minimal code changes are needed.

Suggestions for product team

  • Common security measures like IP restrictions and configurable rate limits to protect against DoS attacks would be great. There is already a feature request on UserVoice.

  • Leveraging the standard Azure Function keys or host keys would also be a simple way to authorize the API endpoint. You can easily set up rotating keys to improve security. Apparently this is on the radar, but there's no ETA defined yet!


Categories: Azure
Tags: Functions
written by: Toon Vanhoutte

Posted on Tuesday, October 3, 2017 9:28 AM

by Stijn Moreels

In this post, we will look at how F#'s Active Patterns feature can help build a clear, declarative solution for the validation of domain models. By using a Partial Pattern for each business rule, we can clearly see how the input is restricted to verify each rule.


The reason I wrote this post was to learn more about F# Active Patterns, and how I can use this for certain, specific problems. They say this feature is a real “killer” feature of the F# environment, so I found it a good exercise to think about how I can use this in my daily practice.

Scott Wlaschin has an amazing blog post series where he writes about this topic. He shows how we regularly miss the true definition of the domain and how we can fix this with the simplicity of F#.

My blog post builds upon that idea and looks at how we can validate our models with the same simplicity.

Domain Modeling

When thinking about the modeling of the domain, F# has a very nice way to express this. Throughout this post, I will be using F# for my modeling and for the validation of my model. Sometimes I will show you what the alternative would look like in an Object-Oriented Language like C#.


Ok, let’s define our model. We want to define a “Book” in our domain. A book in our domain has several items which define “a book”; but for the sake of this exercise, we’ll keep it very short:
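The original post embeds the model as a code snippet; a minimal sketch of such a naive model (the field names are illustrative) could look like this:

```fsharp
// Naive model: every field is a raw primitive type,
// so nothing stops negative pages or a nonsense ISBN.
type Book =
    { ISBN13 : string
      Pages : int
      Author : string }
```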

Just like Scott Wlaschin has asked the question, I'll ask it again: “What’s wrong with this design?”.

Several things. As a Security Expert, you could say that we could have a problem if someone enters negative pages, or special characters for the ISBN or the Author.
As a Domain Expert, you could say that this model doesn’t actually represent the domain.


Let’s start with a simple one: we can’t have negative pages; so, let’s define a new type for this. Note that we have declared it “private”, so we can’t call this new type directly via its Value Constructor. Because we have made it private, we need another function that will create this type for us. When we enter a negative number, we can’t create the type. That sounds like an Option to me:
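As a sketch (the exact snippet is embedded in the original post), the private type and its Constructor Function might look like this:

```fsharp
// Private Value Constructor: only the create function below
// can produce a PositiveInt.
type PositiveInt = private PositiveInt of int

module PositiveInt =
    // Returns Some for positive input, None for anything else.
    let create i =
        if i > 0 then Some (PositiveInt i) else None
```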

FYI: at this point, Scott's talk stops, because the talk is about the domain itself and not about how we can refactor the validation of the models.

Now we can start with the refactoring to Active Patterns. Because this is a simple type, I think you can’t see the immediate benefit of this approach; so, hang on. We use the Partial Pattern approach for these Business Rules because we can’t wrap all possible values in a single pattern.

The Partial Pattern approach needs a return type of unit option. We can use the Some branch to signal that the input matches the pattern, and the None branch to signal that the input doesn’t match.
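A sketch of such a Partial Pattern and the refactored Constructor Function (the type is repeated here so the snippet stands alone):

```fsharp
// Partial Active Pattern with return type 'unit option':
// Some () means the input matches, None means it doesn't.
let (|Positive|_|) i =
    if i > 0 then Some () else None

type PositiveInt = private PositiveInt of int

module PositiveInt =
    // The Business Rule is now expressed as a pattern in the match.
    let create = function
        | Positive as i -> Some (PositiveInt i)
        | _ -> None
```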

One can argue about the over-engineering of this approach; but personally, I find this far simpler than inlined Guard Clauses in the Match Expression.

Our book now looks like this:
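With the Pages field upgraded, a sketch of the record (the PositiveInt type is repeated for completeness):

```fsharp
type PositiveInt = private PositiveInt of int  // as defined earlier

// 'Pages' can now only hold a validated PositiveInt.
type Book =
    { ISBN13 : string
      Pages : PositiveInt
      Author : string }
```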


Next up is the author’s name. It’s reasonable to think that the name will be no longer than 50 characters.

We can specify all these rules in our model the same way as we did with the pages:

Notice that we now have two branches that cover our type. By extracting the rules into Partial Patterns, we have made it clear, in our Constructor Function, that we need a string that isn’t “null” or empty and is a maximum of 50 characters long.
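A sketch of the two Partial Patterns and the String50 Constructor Function (pattern names are illustrative):

```fsharp
open System

// Rule 1: the string may not be null or empty.
let (|NullOrEmpty|_|) str =
    if String.IsNullOrEmpty str then Some () else None

// Rule 2: the string may be at most 50 characters long.
let (|Max50|_|) (str : string) =
    if str.Length <= 50 then Some () else None

type String50 = private String50 of string

module String50 =
    let create = function
        | NullOrEmpty -> None
        | Max50 as str -> Some (String50 str)
        | _ -> None
```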

Now, how would we specify this in C#? Because we do not have an option type by default, only the weaker Nullable&lt;T&gt; type, we normally use exceptions.

Note that, in the F# version, we can reuse the pre-conditions for the empty string and the length across our application, while in C# we must redefine them for every class (unless, of course, we extract this functionality into some “utility” classes).

Now, our book type looks like this:
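A sketch of the record at this stage (the validated types are repeated for completeness):

```fsharp
type PositiveInt = private PositiveInt of int  // as defined earlier
type String50 = private String50 of string     // as defined earlier

// Both Pages and Author are now validated domain types.
type Book =
    { ISBN13 : string
      Pages : PositiveInt
      Author : String50 }
```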


The last type is the most interesting and the reason why I would use Active Patterns for my Domain Model Validation.

If we have a more complex type, for example an ISBN13 number, how would we model that? First of all, let’s specify some requirements:

  • The number must have a length of 13
  • The last digit is a checksum

The checksum is calculated by evaluating the following steps:

  1. Take the first 12 digits
  2. Multiply the digits at the even positions by 3
  3. Take the sum of all the results
  4. Take the result modulo 10
  5. Subtract the result from 10 if it isn’t zero
  6. The final result must equal the 13th digit

I came up with this:
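The original snippet is embedded in the post; a sketch following the steps above, using the function-taking length pattern discussed below, could look like this:

```fsharp
type ISBN13 = private ISBN13 of string

// Length pattern that takes a predicate, so it can express =, <=, ...
let (|StringLength|_|) f (str : string) =
    if f str.Length then Some () else None

let (|AllDigits|_|) (str : string) =
    if str |> Seq.forall System.Char.IsDigit then Some () else None

// Checksum rule, following the six steps above.
let (|ValidChecksum|_|) (str : string) =
    let digits = [ for c in str -> int c - int '0' ]
    let check =
        digits
        |> List.take 12
        |> List.mapi (fun i d -> if (i + 1) % 2 = 0 then d * 3 else d)
        |> List.sum
        |> fun sum -> (10 - sum % 10) % 10
    if check = List.item 12 digits then Some () else None

module ISBN13 =
    let create = function
        | StringLength ((=) 13) & AllDigits & ValidChecksum as str ->
            Some (ISBN13 str)
        | _ -> None
```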

What I like about this is how declarative the checksum calculation is, and the fact that you can see immediately which rules we have in our ISBN validation.

Note that I changed the Active Pattern for the length of the string by passing in a function; this way I can reuse it for my String50 type and for this one, AND you can see more clearly what exactly we're trying to validate with the string's length (greater than, equal to, ...).

Now, I wanted to check this with C#. To achieve the same level of simplicity, we would extract each rule into its own method:

If we extract each rule into a method, I think we get that same simplicity. But we should pass some arguments with the rules, not just for reusability in other models but for readability as well.

Take for example the Regular Expression rule. It’s much simpler to just pass the pattern with the rule than to come up with some name for the method (or Active Pattern) that would convey what you’re trying to verify.

Note that the C# version isn’t done yet and must be refactored, since there’s a lot going on which can’t be comprehended as quickly as the F# version (but that's just my opinion).

Before you say anything about LINQ: I explicitly used an imperative approach, because otherwise we would be using functional programming again. When I compare functional with imperative, I always try to push both styles to the extreme, so I can quickly see what the actual difference is.


Of course, to be complete, let’s write some properties for our newly created types. I found that not every type is that obvious to write properties for, so it might be a good exercise for you as well.

PositiveInt Properties

First, let us look at the positive integer type. This was the simplest type to model and is also the simplest type to test. I came up with these two properties for the two branches:
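A hedged sketch of the two properties, assuming FsCheck with its xUnit integration (FsCheck.Xunit) is referenced; the PositiveInt definition is repeated so the snippet stands alone:

```fsharp
open FsCheck
open FsCheck.Xunit

type PositiveInt = private PositiveInt of int
module PositiveInt =
    let create i = if i > 0 then Some (PositiveInt i) else None

// One property per branch of the pattern.
[<Property>]
let ``positive numbers are wrapped in Some`` (i : int) =
    i > 0 ==> (PositiveInt.create i |> Option.isSome)

[<Property>]
let ``non-positive numbers result in None`` (i : int) =
    i <= 0 ==> (PositiveInt.create i |> Option.isNone)
```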

String50 Properties

The next type must have a length of at most 50 characters to be valid. The following properties came to mind:
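A sketch of those properties, again assuming FsCheck and repeating the String50 definition so the snippet stands alone (FsCheck's NonEmptyString wrapper is used to generate non-empty inputs):

```fsharp
open FsCheck
open FsCheck.Xunit

type String50 = private String50 of string
module String50 =
    let create (str : string) =
        if System.String.IsNullOrEmpty str || str.Length > 50
        then None else Some (String50 str)

[<Property>]
let ``non-empty strings of at most 50 chars are wrapped in Some`` (NonEmptyString str) =
    str.Length <= 50 ==> (String50.create str |> Option.isSome)

[<Property>]
let ``null, empty or too long strings result in None`` (str : string) =
    (System.String.IsNullOrEmpty str || str.Length > 50)
    ==> (String50.create str |> Option.isNone)
```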

ISBN13 Properties

Now, the last type is probably the most interesting. We must generate valid ISBN numbers to check if the checksum acts properly. I came up with a Test Oracle as another way to express the checksum, so I could filter with this expression to generate valid ISBN13 numbers:
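A sketch of such an oracle and a generator built on it (FsCheck's Gen module assumed; names are illustrative):

```fsharp
open FsCheck

// Test Oracle: an independent implementation of the checksum steps.
let checkDigit (digits : int list) =
    let sum =
        digits
        |> List.take 12
        |> List.mapi (fun i d -> if (i + 1) % 2 = 0 then d * 3 else d)
        |> List.sum
    (10 - sum % 10) % 10

// Generate 12 random digits and append the check digit the oracle
// demands, yielding only valid ISBN13 strings.
let validIsbn13 : Gen<string> =
    Gen.listOfLength 12 (Gen.choose (0, 9))
    |> Gen.map (fun ds ->
        ds @ [ checkDigit ds ]
        |> List.map string
        |> String.concat "")
```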

I love the way FsCheck allows me to write such properties with so little effort. Now I have a way to generate random, valid ISBN13 numbers. Notice that I didn't check the other Active Pattern branch; perhaps that is a good exercise for you? All other cases should result in None.

Small side note: the assertion is now valid (accessible) because I wrote the types and properties in the same file. When this isn't the case, we could test for any type (with wildcards) wrapped inside a Some case, instead of actually creating an ISBN13 or any other type. That way, we could change the values for that type without changing our test. For the sake of this exercise, I thought it was clearer to assert the type this way.

Love to hear your opinion!


In this post, we looked at how F#'s feature Active Patterns can help build a clear, declarative solution for the Validation of Domain Models. By using a Partial Pattern for each Business Rule, we can clearly see how the input is restricted to verify each rule.

In an object-oriented approach, you would normally create a class to wrap the domain model and specify the business rules inside the constructor, while in functional programming this can be done by privatizing the Value Constructor and creating a new Constructor Function which uses Active Patterns to specify each business rule.

Thanks for reading!

Categories: Technology
Tags: F#
written by: Stijn Moreels