
Codit Blog

Posted on Tuesday, September 26, 2017 9:28 PM

by Stijn Moreels

Using F# and Property-Based Testing to solve the Coin Change Kata really helped me gain more insight into the Property-Based Testing technique; I hope it will help you too.

Introduction

One way to become an expert in something is to practice, and Programming Katas are a very good way to keep practicing your programming skills.

In this post, I will solve the Coin Change Kata with F# and use Property-Based Testing (with FsCheck) to drive the design.
For me, this was a lesson in writing properties and not so much solving the kata. It was a fun exercise. I thought it would be useful to share my code with you.
If you don’t have any experience with Property-Based Testing or F#, I recommend looking into those topics first.

Coin Change Kata

Description

Ok, there are several different descriptions of this Kata; so, I’ll show you what I want to accomplish first.

“Given an amount and a series of Coin Values, give me the best possible solution that requires the least amount of Coins and the remaining value if there is any”

So, looking at the definition of the Coin Kata, I need two inputs and two outputs:

Signature

The first thing I did before describing my properties was defining the signature of my function. My first mistake was thinking I could use integers all over the place. Something like this:

int list -> int -> int * int list

But we can’t have negative coin values, so I started by making a coin type. The Kata uses several coin values, so I chose the same:
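The original coin type isn’t shown here; a minimal sketch of such a type could look like this (the concrete denominations are my assumption, adjust them to your kata variant):

```fsharp
/// Coin values restricted by the type itself: no negative or invalid coins possible.
/// The exact set of denominations is an assumption.
type Coin = One | Five | Ten | TwentyFive | Fifty
```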

Note that, by describing the coin values in this way, I have restricted the input values of the coins. This Makes Illegal States Unrepresentable (a phrase from Yaron Minsky).

And so my signature, after type inference, is the following:

Coin list -> int -> int * Coin list

Strictly speaking this is not the right signature, because we can still pass in negative amounts; I will leave fixing that as an exercise for you.

Properties

First Property: Ice Breaker

So, let’s start coding. The first property should be some kind of Ice Breaker property. I came up with the following:

“Change amount with nothing to change gives back the initial amount”

This is the property for when we do not have any coin values, so we just get back the same amount as the remaining value. Note that I use ‘byte’ as the input value to make sure I have a positive value; the maximum byte value is enough for this demonstration.
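A sketch of what this ice-breaker property could look like (the `change` function name and its stub are assumptions of mine, using the assumed Coin type):

```fsharp
open FsCheck.Xunit

type Coin = One | Five | Ten | TwentyFive | Fifty   // assumed denominations

// stub implementation at this stage of the kata
let change (coins : Coin list) (amount : int) : int * Coin list =
    amount, []

[<Property>]
let ``Change amount with nothing to change gives back the initial amount`` (amount : byte) =
    let amount = int amount
    change [] amount = (amount, [])
```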

We can easily implement this:

We can play the Devil’s Advocate and intentionally use a fake implementation, for example:

Which will still pass.

Second Property: Boundaries

The next property I wrote was the other way around: what if I haven’t got any amount to change?

“Change zero amount results in original coin values”

Note that we have FsCheck generate the random list of Coins for us. We don’t care which coins we’re about to use for the change, and that’s why we can let FsCheck generate some for us.
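A sketch of this boundary property, assuming that changing a zero amount yields no remaining value and no coins (the exact expected tuple is my reading of the property; `change` and `Coin` are as described earlier in the post):

```fsharp
// FsCheck generates the random list of coins; we only state the property.
[<Property>]
let ``Change zero amount results in original coin values`` (coins : Coin list) =
    change coins 0 = (0, [])
```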

I think this is a good implementation example of how Test Ignorance can be accomplished with Property-Based Testing.

And our implementation:

Now we can’t fill the change list with anything yet, so we’re back at the first implementation, which makes both properties pass.

Third Property: Find the Constants

I’m not quite sure this is a proper example of finding constants, because with some effort you could state this property for other values as well, and it’s possibly also covered by a later property. Still, this was the next property I wanted to write, because it drives me further toward the actual implementation of the function, and it is a constant in this implementation.

“Change is always One if there’s no other coin values”

We can implement this property (and respect the others) with this implementation:
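A sketch of a stage implementation that respects the earlier properties (using the assumed Coin type and change signature):

```fsharp
// respects the earlier properties: no coins -> the amount stays as remaining value;
// otherwise (only 'One' coins so far) -> the change is 'amount' times One
let change (coins : Coin list) (amount : int) : int * Coin list =
    match coins with
    | [] -> amount, []
    | _  -> 0, List.replicate amount One
```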

When I have only ‘One’ coins, the change for a random amount is always a list of ‘One’ coins with the same length as the initial to-be-changed amount. I can of course play the Devil’s Advocate and change the remaining amount to 42, for example (because 42 is the answer to life):

And so, we can strengthen our property to also assert on the remaining amount:
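The strengthened property might look like this, using FsCheck’s label operator (|@) and the .&. combinator (the exact body is my sketch):

```fsharp
open FsCheck
open FsCheck.Xunit

[<Property>]
let ``Change is always One if there's no other coin values`` (amount : byte) =
    let amount = int amount
    let remaining, changed = change [One] amount
    (changed = List.replicate amount One |@ "change is a list of only One coins")
    .&. (remaining = 0 |@ "remaining amount is zero")
```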

Thanks to FsCheck, such a fake implementation is spotted immediately. I added some Labels (from FsCheck) to clearly state in the output WHAT failed in the property. This is a good thing for Defect Localization.

Also note that playing the Devil’s Advocate makes sure that I end up with the right implementation and that my properties state this in the strictest way.

Fourth Property: Some Things Never Change

For the fourth property, I thought even further about the result and came up with this property. The constant that I found was that whatever I change into coins, the initial to-be-changed amount should always be the sum of the remaining change and the changed coins.

“Sum of changed coins and remaining amount is always the initial to-be-changed amount”

What this property needs is a non-empty list of coins, because otherwise we would be re-testing the already written property for empty coins. This is no issue for FsCheck either: with Conditional Properties we can easily express this with the List.length coins <> 0 ==> lazy expression.

This makes sure that the rest of the property only gets evaluated, and so verified, if the condition is met.

The rest of the property maps all the coins to their values, sums them, and adds the remaining amount. All this together should equal the initial amount.
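Put together, the property could be sketched like this (valueOfCoin maps a Coin to its integer value; the exact body is my assumption):

```fsharp
open FsCheck
open FsCheck.Xunit

[<Property>]
let ``Sum of changed coins and remaining amount is always the initial to-be-changed amount``
    (coins : Coin list) (amount : byte) =
    let amount = int amount
    List.length coins <> 0 ==> lazy (
        let remaining, changed = change coins amount
        remaining + List.sumBy valueOfCoin changed = amount)
```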

This is the first time I need to get the actual value of coins, so I made a function for this:
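That function simply pattern-matches each coin to its value (using the denominations assumed earlier):

```fsharp
let valueOfCoin coin =
    match coin with
    | One        -> 1
    | Five       -> 5
    | Ten        -> 10
    | TwentyFive -> 25
    | Fifty      -> 50
```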

How do we know how many coins fit in a given amount? That’s the division of the amount by that coin value. We have several coin values, so we must also divide by the other coin values; for that we need what remains after the division. That’s the amount modulo that coin value.

We need to do this for all the different coins we have.

Does this pattern sound familiar?

We have an initial value, we need to loop over a list, and do something with a given value that can be passed to the next loop iteration.

In an imperative language, we would state that as a for-loop:

Something like this (sort of).
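In F# syntax, such an imperative version would look roughly like this (mutable variables included on purpose; valueOfCoin is the mapping function from above):

```fsharp
// imperative sketch: loop over the coins, carrying the remaining value along
let changeImperative (coins : Coin list) (amount : int) =
    let mutable remaining = amount
    let mutable changed = []
    for coin in coins do
        let value = valueOfCoin coin
        changed <- changed @ List.replicate (remaining / value) coin
        remaining <- remaining % value
    remaining, changed
```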

But, we’re in a functional language now; so, what’s the alternative? Fold!

Here is some implementation using fold:
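A sketch of the fold-based implementation, with the folder function and the initial value named separately rather than inlined:

```fsharp
let change (coins : Coin list) (amount : int) =
    // folder: take as many coins of this value as fit, keep the rest as remaining
    let changeCoin (remaining, changed) coin =
        let value = valueOfCoin coin
        remaining % value, changed @ List.replicate (remaining / value) coin
    let initial = amount, []
    List.fold changeCoin initial coins
```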

One of the things I like to do is to specify the different arguments on their own instead of inlining them (in this case into the List.fold function). I think this increases Readability and better shows the Code’s Intent: that the core return value is the result of a List.fold operation.

This reminds me of the Formatting Guidelines I described in a previous post: that the return value of a method should be placed on a separate line to increase readability and to highlight “The Plot” of the method.

This is very similar: we want to show what we’re doing as “The Plot” of the function by specifying the argument functions separately.

Also note that we can use the function valueOfCoin that we needed in our Property. People not familiar with TDD and the Test-First mindset sometimes say that they don’t like it when the test is the only place where some functionality is used; but if you use TDD, the test is the first client of that functionality!

Fifth Property: Final Observation

We’re almost there; there’s just one last thing we didn’t do right in our implementation. The Kata stated that we must find “the best possible solution” for the amount in change. We now have an implementation that finds “some” solution but not the “best” solution. Why? Because we don’t use the order in which the different coin values are passed in; we just loop over them. We need the best solution for the least amount of coins.

How do we get from “some” solution to the “best” solution? Well, we need to check first with the highest coin values and then gradually to the least coin value.

How do we specify this in a Property? I must admit that it did not come to me very fast, so I think this was a good exercise in Property-Based Testing for me. This was the Property I came up with:

“Non-One Coin value is always part of the change when the amount is that Coin value”

Why do we need a non-One Coin value? Why do we need a non-empty Coin list? Because otherwise we would be testing an already specified property.

That’s why we use the Conditional expression: (nonOneCoin <> One && List.length coins <> 0) ==> lazy.

Now, the other part of the Property. We need to check that, given a random list of coins with a non-One Coin in it, the non-One Coin is part of the change when the amount to be changed is the value of that Coin.

That seems reasonable. If I want to change the value 50 into coins and I have the Coin value 50, I want that as the return value. That is the solution with the least amount of coins. It doesn’t matter if I have Coins of 50 and 25, for example; the order of the different Coin values doesn’t matter, just give me the change with the least amount of coins.

Note that we first use the Gen.shuffle function to shuffle the random list of coins together with the non-One Coin. After that, we’re sure that we have a list with a non-One Coin in it. If I specified this condition inside the Conditional expression of FsCheck, a lot of test cases would be skipped because the condition wouldn’t be met. If I set the condition on a single Coin value, I get a lot more test cases.

The chance of generating a single Coin that isn’t One is much higher than the chance of generating a list that contains a non-One Coin. But not only that: stating the non-One Coin value like this also expresses the Code’s Intent of my Property better.

We finally pipe into the snd function, which gives us the second element of the tuple, so we can use it in our assertion to check whether the nonOneCoin value exists in the resulting list of coins.
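The whole property could be sketched like this; wiring Gen.shuffle in via Arb.fromGen and Prop.forAll is my assumption of how the original did it:

```fsharp
open FsCheck
open FsCheck.Xunit

[<Property>]
let ``Non-One Coin value is always part of the change when the amount is that Coin value``
    (nonOneCoin : Coin) (coins : Coin list) =
    (nonOneCoin <> One && List.length coins <> 0) ==> lazy (
        // shuffle the non-One coin into the random list, then change exactly its value
        Prop.forAll (Arb.fromGen (Gen.shuffle (nonOneCoin :: coins))) (fun shuffled ->
            change (List.ofArray shuffled) (valueOfCoin nonOneCoin)
            |> snd
            |> List.contains nonOneCoin))
```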

How do we implement this?

We sort the Coins by their Coin value. Note how we again can use the already defined valueOfCoin function.
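The final implementation only adds the sort, so the highest coin values are tried first (a sketch, reusing the fold from before):

```fsharp
let change (coins : Coin list) (amount : int) =
    let changeCoin (remaining, changed) coin =
        let value = valueOfCoin coin
        remaining % value, changed @ List.replicate (remaining / value) coin
    let initial = amount, []
    coins
    |> List.sortByDescending valueOfCoin
    |> List.fold changeCoin initial
```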

Conclusion

As I said before: this wasn’t exactly an exercise in solving the Coin Change Kata but rather in specifying Properties to drive this implementation. I noticed that I must think on a higher level about the implementation instead of hard-coding the test values.

I don’t know which values FsCheck will provide me and that’s OK; I don’t need to know that. I just need to constrain the inputs so that I can predict the output without specifying exactly what that output should look like. Just specifying some Properties about the output.

Hopefully you found this a nice read and have enjoyed the way we write Properties in this example. Maybe now you’re inspired to write Properties for your own implementations. The full code can be found at my GitHub.

FsCheck can also be used from a C# environment instead of F#, so you don’t have to be an F# expert to write Properties. It’s a way of looking at tests: how we constrain inputs so we can predict outputs.

Thank you.

Categories: Technology
Tags: F#
written by: Stijn Moreels

Posted on Monday, September 25, 2017 12:00 PM

by Stijn Moreels

How can we use the F# Agent to set up a pipeline which can be used in Concurrent Applications? We need not only a working version but also a declarative version which we can easily extend and maintain.

Introduction

In the context of Concurrent Applications, there are several architectural systems that describe the communication of the system. The Actor Model or an Agent-Based Architecture, is one of those systems we can use to write robust Concurrent Applications.

One of the challenges in Concurrent Programming is that several computations want to communicate with each other and share a state safely.

F# has implemented this model (or at least a part of it) with the MailboxProcessor, or its shorter alias: Agent. See Tomas Petricek's blog post for more information.

Agent Pipeline

Introduction

The way agents communicate with each other is by itself harder to comprehend than a sequential or even a parallel system. Agents communicate by sending messages. An agent can alter its own Private State, but no “external” state; state is communicated by sending messages to other agents.

This Isolation of the agents in Active Objects can increase the system complexity. In this post, I will walk through a brain-dump exercise of mine to create an Agent Pipeline. I think a pipeline is an easier mental model for agents, because we keep that feeling of “sequential”.

 

In F# this is rather easy to express, so let’s try it!

F# Agent Template

First, let’s define the agent we are going to use. Our agent must receive a message, but must also send some kind of response, so the next agent can process it. The agent itself is an asynchronous computation, so the result of the message will be wrapped inside an Async type.

Our signature of calling our pipeline agent should thereby be something like this:

'a -> Async<'a>

Ok, let’s define some basic prerequisites and our basic agent template. We’ll use a plain string as our message, just for example purposes.
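A minimal set of prerequisites might look like this (the Agent alias is a common F# convention; the message type pairs the payload with a reply channel):

```fsharp
// common alias for the MailboxProcessor
type Agent<'a> = MailboxProcessor<'a>

// our message: the string payload plus a channel to send the reply back on
type Message = string * AsyncReplyChannel<string>
```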

Now that we have our basic setup, we can start looking at the body of our agent. We must receive a message and send back a reply. This can be done with the basic method Receive(), which returns a tuple with the message itself and the channel we have to send our reply to. This call to “receive” will block the loop till the next message arrives. Agents run on a single logical thread and all messages sent to agents are queued.

The body can be defined like this (print function to simulate the processing):
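A sketch of such an agent body, plus a helper with the 'a -> Async<'a> shape for sending a message (the names createAgent and send are mine):

```fsharp
let createAgent name =
    MailboxProcessor<string * AsyncReplyChannel<string>>.Start (fun inbox ->
        let rec loop () = async {
            // blocks until the next message arrives
            let! (message, channel) = inbox.Receive ()
            printfn "%s: processing '%s'" name message
            channel.Reply message
            return! loop () }
        loop ())

// string -> Async<string>, matching the 'a -> Async<'a> signature
let send (agent : MailboxProcessor<string * AsyncReplyChannel<string>>) message =
    agent.PostAndAsyncReply (fun channel -> message, channel)
```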

F# Async Binding

Ok, now we have our basic agent; we can look at how we can bind agents together.

Just like the previous diagram; I would like to express in code how messages are “piped” to other agents. When we think of piping, we can think of two approaches: Applicative (<*>) and Monadic (>>=). Since we need the result of the previous call in our next call, I’m going to use the Monadic style (>>=).

We see by looking at the signature that we must bind two separate worlds: the world of strings and the world of Async. Looking at the signature alone makes me want to write some bind functions; so first, before we go any further, let’s define some helper functions for our Async world:
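A sketch of those helpers: a return and a bind for the Async world, plus a >>= operator built on top of bind:

```fsharp
module Async =
    // 'a -> Async<'a>
    let retn x = async { return x }

    // ('a -> Async<'b>) -> Async<'a> -> Async<'b>
    let bind f m = async {
        let! x = m
        return! f x }

let (>>=) m f = Async.bind f m
```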

These two functions should be enough to define our pipeline. Look at the signature of our bind:

('a -> Async<'b>) -> Async<'a> -> Async<'b>

This is just what we want in our agent signature. Now, I’m going to create some agents to simulate a pipeline:
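The pipeline could then be sketched like this (the agent names are made up; createAgent and send are the hypothetical helpers around the agent template described earlier, and >>= is the monadic bind for Async):

```fsharp
// three agents simulating the stages of the pipeline (names are assumptions)
let translation = createAgent "translation"
let enrichment  = createAgent "enrichment"
let shipping    = createAgent "shipping"

// lift the message into the Async world, then pipe it through the agents
async { return "hello pipeline" }
>>= send translation
>>= send enrichment
>>= send shipping
|> Async.RunSynchronously
```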

Note that the pipeline of the agents is almost exactly like we designed in our diagram. This is one of the many reasons I like F# so much: much more than in C#, you can express declaratively exactly how you see the problem. (C#’s async/await is inspired by F# Asynchronous Workflows.)

Or, if you like, a Kleisli style (which I like to use sometimes). This makes sure that we don’t have to lift the message into Async ourselves:
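A sketch of the Kleisli variant (again assuming the send helper and >>= from before): composing the agent calls first, then feeding in the plain string.

```fsharp
// Kleisli composition: ('a -> Async<'b>) -> ('b -> Async<'c>) -> ('a -> Async<'c>)
let (>=>) f g x = f x >>= g

// the message goes in as a plain string, no Async wrapping needed
let pipeline = send translation >=> send enrichment >=> send shipping
"hello pipeline" |> pipeline |> Async.RunSynchronously
```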

Conclusion

“Functional programming is the most practical way to write concurrent programs. Trying to write concurrent programs in imperative languages isn’t only difficult, it leads to bugs that are difficult to discover, reproduce, and fix”

- Riccardo Terrell (Functional Concurrency)

This is just a brain-dump of an exercise for myself in training Monadic Binds and Agents, and how to combine them. What I really learned is to look at the signature itself. Much more than in Object-Oriented languages, the signature isn’t a lie and tells you exactly what’s going on. Just by looking at the signature, you can make a good guess at what the function will look like.

Functional Programming is still a bit strange at first if you come from an Object-Oriented world, but trust me, it’s worth the learning. In a future where Asynchronous, Parallel and Concurrent topics are considered “mainstream”, Functional Programming will only become more mainstream itself.

Categories: Technology
Tags: F#
written by: Stijn Moreels

Posted on Monday, September 18, 2017 12:45 PM

by Toon Vanhoutte

Azure's serverless PaaS offering consists of Azure Functions and Logic Apps. If you consult the documentation, you'll find out that there is quite some overlap between the two. For many people, it's not clear what technology to use in what scenario. In this blog post, I discuss the main differences between these two event-driven Azure services and I provide some guidance to help you to make the right decision.

Comparison

Developer experience

A popular comparison states that Azure Functions is code being triggered by an event, whereas Logic Apps is a workflow triggered by an event. This is reflected in the developer experience. Azure Functions are completely written in code, which currently supports JavaScript, C#, F#, Node.js, Python, PHP, batch, bash and PowerShell. In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow definition language in the code view. Each developer has of course his/her personal preference. Logic Apps is much simpler to use, but this can sometimes cause limitations in complex scenarios. Azure Functions gives a lot more flexibility and responsibility to the developer.

Connectivity

Logic Apps connects to an enormous variety of cloud / on-premise applications, going from Azure and Microsoft services over SaaS applications and social media to LOB systems. You can find the impressive list of connectors here. Each connector comes with an API connection, which stores the required credentials in a secure way. These API connections can be reused from within multiple Logic Apps, which is great! Azure Functions has the concept of triggers, input and output bindings. Most of these bindings connect your Azure Functions to other Azure services, such as Event Hubs, Storage, DocumentDb, etc… Consult the complete list here. The HTTP binding is probably the most popular one, as it allows the creation of serverless API's. At the moment, there are no signs that Azure Functions aims to support as many bindings as Logic Apps offers connectors.

Exception handling

Cloud solutions need to deal with transient fault handling. Logic Apps provides out-of-the-box functionality that allows you to configure automatic retries on every action. In case this doesn't solve the problem, the workflow gets a failed status and can be resubmitted after human intervention. This guarantees an at-least-once execution model, which is pretty reliable! In Azure Functions, you have the typical try/catch options available. If you want to enable retries, you need to do the plumbing yourself, by introducing for example Polly. The way you can handle exceptions in the output binding, depends on the used language and type of output binding. This doesn't always give you the desired outcome. No resume / resubmit capabilities, except if you develop them yourself!

State

Until recently, Azure Functions always needed to be stateless and preferably idempotent. With the announcement of Azure Durable Functions, Microsoft brings state and long-running capabilities to Azure Functions, by leveraging the Durable Task Framework. This new framework allows sequential and parallel execution of several Functions, it supports long-running tasks with pre-defined timeouts and provides stateful actors without the need for external storage. The state is automatically stored in Azure Storage queues, tables and blobs, which is disaster proof. I am looking forward to seeing how this will evolve. These long-running / stateful processes are inherently available in Logic Apps, except for the stateful actor model.

Networking

Hybrid integration is reality nowadays. Cloud services must be able to connect to on-premises resources in a secure and high performing way. Azure Logic Apps performs this task via the On Premises Data Gateway, that needs to be installed on premises. Behind the scenes, it uses Azure Service Bus Relay to connect to the cloud in a firewall friendly way, through encrypted channels. When using Azure Functions within an App Service Plan, you have more convenient hybrid connectivity options that reside on the network level. App Service Plans offer support for many networking options like Hybrid Connections, VNET Integration and App Service Environment. Via these options, you can integrate Azure Functions with your local network through a Site-to-Site VPN or ExpressRoute.

Deployment

Azure Resource Manager templates are the way to deploy resources across the Microsoft Azure platform. Fortunately, both Azure Functions and Logic Apps have built-in support for ARM deployments, through for example Visual Studio Release Management. Next to this, Azure Functions allows easy setup of continuous deployments triggered from sources like BitBucket, Dropbox, Git, GitHub, OneDrive and VSTS. This is ideal in case multiple and frequent contributions need to be consolidated and tested. Additionally, Azure Functions now has deployment slots in preview. This allows deploying and testing a vNext first, before you swap that tested deployment slot with the current version in production.

Runtime

Logic Apps run only in the cloud, as it has a dependency on Microsoft-managed connectors. As a consequence, you cannot debug, test or run Logic Apps locally. Azure Functions can be easily developed and debugged on your local workstation, which is a big plus to increase developer productivity. Via the Azure Functions Runtime (still in preview) you are able to deploy them on premises in Windows Containers, with SQL Server as a storage layer. Azure Functions is also supported to run on Azure Stack and it has been announced as part of Azure IoT Edge to execute on small devices. This hosting flexibility is a big asset in phased migration scenarios towards the cloud.

Monitoring

Per Logic App, you have a nice overview of the previous runs and their corresponding outcome. You can filter this history, based on a time period and the resulting run status. The monitoring view of a workflow run is the same as the designer view, which makes it very intuitive. For each action, you can see the status and all inputs/outputs. With one button click, you can enable integration with OMS, where you can search on tracked properties. It's on the roadmap to have a user-friendly and cross Logic Apps dashboard on top of this OMS integration. Each Azure Function comes with a Monitor tab, where you can see the execution history. There is also a live event stream that shows the almost real-time processing statistics in nice graphs. On top of that, there's full integration with Application Insights, where you can take advantage of the powerful Analytics queries.

Pricing Model

Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. It's important to be aware that you also need to pay for polling triggers, which can be a hidden cost. If you want to benefit from the capabilities of the Integration Account, you should be aware that this comes with a fixed monthly bill. With Azure Functions, you have two pricing options. You can opt for the fixed cost of an App Service Plan. In that option you reserve compute power on which you can run Azure Functions, but also Web, Mobile and API Apps. The second option is completely serverless, with a consumption plan based on resource consumption (memory/s) and number of executions. Don’t forget that the Azure Storage layer also comes with a rather small cost.

Security

Each particular binding or connector comes with its own security. In this section, I focus on the security of Logic Apps and Azure Functions exposed as an API. In order to access a Logic App with the HTTP trigger, the client must include a Shared Access Signature in the URL. The signature is generated via a secret key that can be regenerated at any time. There is also the ability to restrict access, based on incoming IP addresses. To add more authorization logic, you can put Azure API Management in front of it. Azure Functions has a similar concept of API keys. The API key can be shared for the whole Function App (host key) or you can create a specific one for your Function. If you run your Azure Function in an App Service Plan, you can leverage its codeless authentication functionality with Active Directory, Google, Facebook, etc… Real authorization requires a small code change. Azure Function Proxies can be a light-weight alternative to full-blown API Management, to add security on top of your HTTP triggered Functions.

Conclusion

Based on the comparison above, you'll notice that a lot of factors are involved when deciding between the two technologies. At first, it's important to see which technology supports the connectivity that you require. Do you want to write it yourself, or do you want to leverage out-of-the-box bindings / connectors? Next to that, my general guidance is as follows:

When dealing with synchronous request/response calls, that execute more complex logic, Azure Functions is the preferred option. Logic Apps is better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing. When using Logic Apps, they can be perfectly extended with Azure Functions to execute stateless tasks that cannot be fulfilled by the out-of-the-box Logic Apps capabilities.

Web API's are often composed of both sync and async operations. If you follow the guidance stated above, you might end up with an API that uses both Azure Functions and Logic Apps. This is where Azure Functions Proxies has its value, as it can expose these separate microservices as a unified API. This will be discussed in another blog post.


Stay tuned for more!

Categories: Azure
written by: Toon Vanhoutte

Posted on Monday, September 18, 2017 12:31 PM

by Stijn Moreels

How can Functional Programming help us to ignore even more in our tests?

Introduction

In this series of Test Infected, I will show you how we can increase the Test Ignorance of our tests by applying Functional approaches to our Imperative code.
If you don’t quite understand what I mean by “ignorance”, I recommend my previous post about the topic. In this post, we will go on a journey of increasing the Code’s Intent by increasing the Ignorance in a Functional way.

Functional Ignorance

Fixture

The fixture-phase of your test can become very large, several previous posts have already proved this.
How can functional programming help?
Well, let’s assume you want to setup an object with some properties, you would:

  • Declare a new variable
  • Initialize the variable with a newly created instance of the type of the variable
  • Assign the needed properties to setup the fixture

Note that in our test we’re most interested in the last item; so how can we make sure that part is the most visible?

Following example shows what I mean:

We would like to test something with the subject property of the message, but note that this is not the first thing which catches your eye (especially if we use the object-initializer syntax). We must also initialize something in a context.

We could, of course, extract the creation functionality with a Parameterized Creation Method and extract the insertion functionality that accepts a message instance.

But note that we do not use the message elsewhere in the test. We could extract the whole functionality and just accept the subject name, but we will have to use an explicit method name to make clear that we will insert a message in the context AND will assign the given subject name to that inserted message. What if we want to test something else? Another explicit method?

What I sometimes do is extract only the assigning functionality like this:
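The idea, sketched here in F# with hypothetical Message/Context types (the original sample is not shown): the extracted method does all the ignored creation and insertion plumbing, and the test passes in only the assignment it cares about.

```fsharp
type Message () =
    member val Subject = "" with get, set

type Context () =
    member val Messages = ResizeArray<Message> ()

// all the 'ignored' creation/insertion plumbing lives here
let insertMessage (assign : Message -> unit) (context : Context) =
    let message = Message ()
    assign message
    context.Messages.Add message
    context

// the test only states what matters: the subject
let context = Context () |> insertMessage (fun m -> m.Subject <- "my subject")
```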

We don’t use the name of the method to state our intentions, we use our code.

In the extracted method, we can do whatever necessary to create an ignored message. If we do need another way to create a message initially, we can always create a new method that only inserts the incoming message and call this from our functional method.

It would be nice if we had immutable values and could use something like F# Copy-and-Update Record Expressions.

Exercise

Often, when you test different code branches through an external SUT endpoint, the creation of the SUT doesn’t change; only the info you send to the endpoint does. Since the SUT setup does not change across several tests, we could say it is not what’s important to the test case; the changing values are.

When you come across such a scenario, you can use the approach I will describe in here.

The idea is to split the exercise logic from the SUT creation. If you have different endpoints you want to test for the same SUT fixture, you can even extend this approach by letting the client code decide what endpoint to call.

Following example shows two test cases where the SUT creation is the same:

Note that we have the same pattern: (1) create SUT, (2) exercise SUT. Compare with the following code where the SUT is being exercised differently.

We ignore the unnecessary info by Functional Thinking:

We can extend this idea by letting the client choose the return value. This is rather useful if we want to test the SUT with the same Fixture but with different member calls:
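A sketch of that extension (the SUT type and its members are hypothetical): the caller both exercises the fixed fixture and decides which value comes back.

```fsharp
// hypothetical SUT with a fixture that never changes across tests
type Sut (fixture : string) =
    member _.Exercise input = sprintf "%s-%s" fixture input
    member _.SomethingElse () = fixture.Length

// encapsulate what varies: the call itself is a function the test supplies
let withSut (call : Sut -> 'a) : 'a =
    let sut = Sut "fixed fixture"          // unchanged SUT creation
    call sut

// same fixture, different member calls and different return types
let resultA = withSut (fun sut -> sut.Exercise "input")
let resultB = withSut (fun sut -> sut.SomethingElse ())
```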

I use this approach in almost every Class Test I write. The idea is simple: Encapsulate what varies. Only now we think in Functions rather than Objects. Functions can be treated as Objects!

Verification

The last topic I will discuss in a Functional approach is the Result Verification phase of the Four-Phase Test.

Whenever I apply techniques in this phase, I come back to the same principle and ask myself the same question: “What is really important? What interests me the most?”

In the Result Verification phase, this is the Assertion itself. WHAT do you assert in the test to make it a Self-Evaluating Test? What makes the test succeed or fail?
That’s what’s important; all the other clutter should be removed.

A good example (I think) is when I needed to write some assertion code to Spy on a datastore. When the SUT was exercised, I needed to check whether there was any change in the database and whether this corresponded with my expectations.
Of course, I needed some logic to call the datastore, retrieve the entities, assert on the entities, and Tear Down some datastore-related items. But the test only cares whether the update happened or not.

As you can see, the assertion itself is baked-in into the called method and we must rename the method to a more declarative name in order for the test reader to know what we’re asserting on.

Now, as you can see in the next example, I extracted the assertion, so the test itself can state what the assertion should be.
Also note that when I extract this part, I can reuse this Higher-Order Function in any test that needs to verify the datastore, which is exactly what I did:
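A sketch of such a Higher-Order Function (the datastore plumbing is stubbed here; all names are mine): the helper hides retrieval and Tear Down, and the test supplies only the assertion.

```fsharp
type Entity = { Id : int; Updated : bool }

// stubs standing in for the real datastore plumbing
let queryDatastore () = [ { Id = 1; Updated = true } ]
let tearDownDatastore () = ()

// the Higher-Order Function: retrieval and tear-down are hidden,
// the assertion itself is supplied by the test
let assertOnDatastore (assertion : Entity list -> bool) =
    let entities = queryDatastore ()
    let result = assertion entities
    tearDownDatastore ()
    result

// the test states only what makes it pass or fail
let updated = assertOnDatastore (List.exists (fun e -> e.Updated))
```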

Conclusion

Test Ignorance can be interpreted in many ways, this post explored some basic concepts of how Functional Programming can help us to write more Declarative Tests. By extracting not only hard-coded values, but hard-coded functions, we can make complex behavior by composing smaller functions.

Functional Programming isn’t fully mainstream (yet), but by introducing Functional concepts into Imperative languages (lambda functions, pattern matching, inline functions, pipelines, higher-order functions, …) we can maybe convince the Imperative programmer to at least try the Functional way of thinking.

Categories: Technology
Tags: Code Quality
written by: Stijn Moreels

Posted on Monday, September 11, 2017 3:13 PM

by Pim Simons

With the introduction of BizTalk 2016 it is now possible to use SHA-2 certificates when signing a message. As this is not as straightforward as I expected it to be, I’ve decided to share my experiences with setting up SHA-2 in this blogpost.

For one of our customers we migrated all their interfaces from BizTalk 2006 R2 to BizTalk 2016. During testing of the new BizTalk 2016 environment we found that the signature for the AS2 messages being sent out was not working correctly. While there was no exception in BizTalk, the external party, that was receiving the messages, was unable to verify the signature of the messages. The messages from the old BizTalk 2006 R2 environment were all verified and processed successfully. Obviously we started checking if all of the certificates and party settings were setup correctly in the new BizTalk 2016 environment. We found those to be correct and continued to search for the cause of this issue.

We ended up finding a difference when comparing the signing algorithms. The old BizTalk 2006 R2 environment was using SHA1, while the new BizTalk 2016 machine was using SHA256. Having found this clue, we figured that the fix would be easy: just change the signing algorithm on the AS2 agreement. However, this is where we ran into some problems. It turns out there really isn’t anywhere to configure this on the AS2 agreement. As shown in the picture below, it is possible to specify that the message should be signed, but it is not possible to specify a signing algorithm.

 
The documentation does not specify where to supply the signing algorithm. But after walking through all of the settings of the AS2 agreement again, I noticed that the signing algorithm for the MDN was set to SHA256 and not SHA1. While it is greyed out and, at least according to the screen, only used for MDN’s, we decided to change it anyway and see if this could be the issue.


 
I enabled ‘Request MDN’ and ‘Request signed MDN’ after which I could change the signing algorithm to SHA1. Finally, I disabled ‘Request MDN’ and ‘Request signed MDN’ again since we are not using the MDN.


This finally solved our issue with the signing of the message as now the SHA1 algorithm was used to sign the message!

In conclusion, it is possible to specify the signing algorithm for outgoing messages, but it is not where you would expect it to be. If you interpret the screens of the AS2 party agreement you would think that the signing algorithm can only be specified for MDN’s as it is greyed out by default.

Hopefully the choice of signing algorithm will be easier after a bugfix or in the next release of BizTalk.  


Categories: BizTalk
written by: Pim Simons