Codit Blog

Posted on Thursday, July 7, 2016 11:58 AM

by Luis Delgado

Discover how to unit test your Node.js Azure Functions to increase code quality and productivity, using these code samples.

Writing unit and integration tests for Azure Functions is critical to the development experience, since their execution relies on context variables that are supplied by the runtime and beyond your control. Furthermore, there is currently no local development or debugging experience available for Azure Functions. Testing whether your functions behave properly, in the context of their runtime, is therefore essential to catch defects and increase your productivity.

Because Node.js is dynamically typed, I want to share a quick trick on how to mimic the Azure Functions runtime context in order to test your functions. I did not find any documentation from Microsoft related to unit testing Node.js Azure Functions, so feel free to comment on the approach I propose here.

As an example, we are going to make a function that posts an observation every minute to Azure IoT Hub:

deviceSimulator/index.js
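
The original gist is not reproduced in this copy of the post, but a minimal sketch of such a function, assuming a timer trigger and the azure-iot-device SDK (the connection string setting, device ID and payload are placeholders), could look like this:

var clientFromConnectionString = require('azure-iot-device-amqp').clientFromConnectionString;
var Message = require('azure-iot-device').Message;

// The 1-minute schedule itself lives in function.json, not in the code.
module.exports = function (context, myTimer) {
    // The device connection string would normally come from app settings.
    var client = clientFromConnectionString(process.env.DEVICE_CONNECTION_STRING);

    client.open(function (err) {
        if (err) {
            context.log('Could not connect: ' + err.message);
            return context.done(err);
        }

        // Build a (fake) observation and post it to IoT Hub.
        var observation = JSON.stringify({
            deviceId: 'simulator-01',
            temperature: 20 + Math.random() * 5,
            timestamp: new Date().toISOString()
        });

        client.sendEvent(new Message(observation), function (sendErr) {
            context.done(sendErr);
        });
    });
};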

Now we want to write a unit/integration test for this function.

deviceSimulator/test.js
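
The original test is likewise not included here; a minimal sketch, assuming mocha as the test runner (getContextObject and the assertions are illustrative):

var assert = require('assert');
var deviceSimulator = require('./index');

// Returns an object that mimics the context object supplied by the
// Azure Functions runtime.
function getContextObject() {
    return {
        log: function () { console.log.apply(console, arguments); },
        bindings: {},
        done: function () { }
    };
}

describe('deviceSimulator', function () {
    this.timeout(10000); // the call to IoT Hub can take a few seconds

    it('calls context.done() without an error', function (done) {
        var context = getContextObject();

        // Override context.done() to perform the assertions and finish the test.
        context.done = function (err) {
            assert.ifError(err);
            done();
        };

        deviceSimulator(context, { isPastDue: false }); // the timer payload is a stub
    });
});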

The function getContextObject simply returns an object that mimics the context object expected by the Azure Functions runtime. The test imports your function from index.js, creates the mock-up context object and feeds it to your function for execution. Finally, within your test, you override the context.done() function to do the assertions you need and call done().

Is this the proper way to test Azure Functions on Node.js? I will let the Functions Product Group comment on that :). However, this method works for me.

The other alternative is to create your internal functions in separate files, which you can test in the traditional way you would test JS code, and import those files in your index.js file. The problem I see with that approach is that, if your internal functions call the context object, your tests will probably fail because of it.

Comments, feedback or suggestions? Submit an issue to the repository or write them below.

Categories: Azure
written by: Luis Delgado

Posted on Friday, July 1, 2016 1:42 PM

by Pieter Vandenheede

In this post you can find the slide deck I presented at BTUG.be on June 30th, recapping the Integrate 2016 conference, the updates in Azure Logic Apps and BizTalk Server 2016 CTP2.

Yesterday I had a great time at BTUG.be, presenting my session on Integrate 2016.

I presented the new changes in BizTalk Server 2016 CTP2, covered the upcoming changes in RTM and the new schema update of Azure Logic Apps, together with the new features available in the public preview of the Enterprise Integration Pack.
Thanks again to our company, Codit, for providing me the opportunity to be there!

As promised there, please find my slide deck below via SlideShare:

Feel free to comment if you have any questions.

The second speaker yesterday was Eldert Grootenboer. He gave a great talk on IoT, gateways, sensors and ... boats! Keep an eye on his blog, since he promised more IoT posts are coming up.

As always, it was nice to talk to the people present. A big thank you to them, especially since it was not the most convenient time to attend: just before the holiday period, and with a lot of traffic jams around Antwerp yesterday evening. We do have a good community out there!

Enjoy the slide deck!

Pieter

Categories: Community
written by: Pieter Vandenheede

Posted on Wednesday, June 22, 2016 8:58 AM

by Maxim Braekman

Have you ever set up communication with a web service and had to find a way to keep track of information that was available in the request, but no longer present in the response? Continue reading, since this post could help you sort out that problem.

Setting up communication with a web service can always turn out to be a bit tricky. You need to take care of the certificate settings, if required, configure the bindings to use the correct protocol and security, and so on. But once all of these settings are correct and you start testing, you might notice, now and again, depending on the scenario, that you are losing some useful data across the two-way communication: some of the data that was available in the request no longer appears to be in the response.
In such a case one could opt to use an orchestration, although, performance-wise, that is not the best solution.

An alternative way of storing those values is to create a static class which stores the data based on the InterchangeID of the message. Since this static class needs to be told what data to track, two custom pipeline components are needed. Why two components? You'll need one to pass the data from the message into the static class and another to retrieve those values from the class and put them back onto the response message.

Yes, this could also be done by merging the two components into a single one and using a property to indicate the direction, but for the sake of this post we will be using two separate components, just to keep everything clear.

Imagine a large system, such as AX, which contains a whole bunch of data about several orders, but needs to retrieve some additional information from another system before processing those orders. Since these requests could be handled asynchronously from AX's point of view, the source system will need some kind of ID to match the response to the initial request. In this case the request being sent towards BizTalk will contain an order ID, request ID or any other form of identification, just to make sure each response is matched to the correct request.

Okay, so now this request, containing the ID, has arrived in BizTalk. But since the destination system has no need for an external system's ID, no XML node will be provided for it, nor will it be returned within the response. In such a situation the issue becomes a "BizTalk problem", to be resolved by the integration developer/architect.

This is when the aforementioned static class comes in handy. Since the actual call to the destination system is a synchronous action and there is no need for an orchestration to perform any additional actions, we can simply use the custom pipeline components to store and retrieve the original ID assigned by the source system.

The custom pipeline components

The static class might look like the example below, which allows BizTalk to save a complete list of context properties for a specific InterchangeID.
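
The original class is not included in this copy of the post; a minimal sketch, using a ConcurrentDictionary to keep access thread-safe when multiple messages are in flight (class and method names are illustrative):

using System.Collections.Concurrent;
using System.Collections.Generic;

public static class ContextCache
{
    // One bag of context properties (name -> value) per InterchangeID.
    private static readonly ConcurrentDictionary<string, Dictionary<string, object>> cache =
        new ConcurrentDictionary<string, Dictionary<string, object>>();

    public static void Save(string interchangeId, Dictionary<string, object> properties)
    {
        cache[interchangeId] = properties;
    }

    public static Dictionary<string, object> Retrieve(string interchangeId)
    {
        // The entry is removed while it is retrieved, so the cache cannot keep growing.
        Dictionary<string, object> properties;
        return cache.TryRemove(interchangeId, out properties) ? properties : null;
    }
}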

Next, you’ll be needing the pipeline components to actually access this class and allow for the data to be saved and restored when needed.

This post will not be zooming into the code of these pipeline components, but below is the general explanation of what these components are supposed to do.

SaveContextOverCallRequest

The first custom pipeline component will retrieve the InterchangeID from the context of the incoming message and use it as a unique ID to save a specified list of properties to the static class. This list can be scaled down by setting a specific namespace, which is used to filter the available context properties. This makes sure only the properties from the specified namespace are saved, preventing an overload of data being stored in memory.
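
The post does not zoom into the actual code, but roughly, the Execute method of such a component could look as follows (only Execute is shown, the usual pipeline component plumbing is omitted, and the filter namespace is a placeholder):

using System.Collections.Generic;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Namespace used to filter which context properties get saved (placeholder).
private const string PropertyNamespace = "https://Codit.Blog.Properties";

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    string interchangeId = (string)pInMsg.Context.Read(
        "InterchangeID", "http://schemas.microsoft.com/BizTalk/2003/system-properties");

    // Collect the context properties that live in the filter namespace.
    var properties = new Dictionary<string, object>();
    for (int i = 0; i < pInMsg.Context.CountProperties; i++)
    {
        string name, ns;
        object value = pInMsg.Context.ReadAt(i, out name, out ns);
        if (ns == PropertyNamespace)
        {
            properties[name] = value;
        }
    }

    ContextCache.Save(interchangeId, properties);
    return pInMsg;
}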

SaveContextOverCallResponse

The second custom pipeline component will again retrieve the InterchangeID from the context of the incoming message, but this time it will use this value to retrieve the list of context properties from the static class. Once the properties have been collected, there is no need for the static class to keep track of these values any longer, so it can remove them from its dictionary.
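
Again as a rough sketch, the Execute method of the response component might look like this (same assumptions as above):

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    string interchangeId = (string)pInMsg.Context.Read(
        "InterchangeID", "http://schemas.microsoft.com/BizTalk/2003/system-properties");

    // Fetch the saved bag of properties; the cache drops the entry at the same time.
    var properties = ContextCache.Retrieve(interchangeId);
    if (properties != null)
    {
        foreach (KeyValuePair<string, object> property in properties)
        {
            // Promote, so the values can be used for routing and further processing.
            pInMsg.Context.Promote(property.Key, PropertyNamespace, property.Value);
        }
    }

    return pInMsg;
}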

Using the components

Once these components have been created, they will have to be added to the send/receive pipeline, depending on the type of component.

The send pipeline will contain the ‘SaveContextOverCallRequest’-component to make sure the required properties are being saved. The custom pipeline component should be the last component of this pipeline, since you want to make sure all of the property promotion is finished before the properties are being saved into the static class.

The receive pipeline will contain the ‘SaveContextOverCallResponse’-component, as this will restore the saved properties to the context. This should be the first component in this pipeline, because we want the saved properties returned to the context of the message as soon as possible, to make sure these values are accessible for any further processing. Be aware that whether or not you are able to put this component first will largely depend on your situation and transport protocol.

Example

To show the functionality of these components, a simple test-case has been set up, in which a request-message is being picked up from a file-location, a connection is made with a service and the response is sent back to a different folder. To give you an idea of the complete flow, the tracking data has been added here.

The request that will be used in this sample is a pretty simple xml-message, which can be seen below:

<ns0:GetData xmlns:ns0="http://Codit.Blog.Stub">
  <ns0:request>
    <ns0:RequestDate>2016-06-15</ns0:RequestDate>
    <ns0:RequestNumber>0002</ns0:RequestNumber>
    <ns0:CustomerID>0001</ns0:CustomerID>
    <ns0:Value>Codit offices</ns0:Value>
  </ns0:request>
</ns0:GetData>

As you can see, this request contains both a request ID and a customer ID, which are the two ‘important’ values in this test case. To make sure these properties are available in the context of the message, we made sure they are promoted by the XML Disassembler, since the fields are indicated as promoted in the schema. Once the flow is triggered, we can have a look at the context properties and notice that the two values have been promoted.

The initial raw response that comes back from the service, i.e. the message before any pipeline processing has been performed, no longer contains these context properties, nor does it contain these values in the body.

<GetDataResponse xmlns="http://Codit.Blog.Stub">
  <GetDataResult xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
    <ResponseDate>2016-06-15T20:06:02.8208975+01:00</ResponseDate>
    <Value>Ghent, Paris, Lisbon, Zurich, Hampshire</Value>
  </GetDataResult>
</GetDataResponse>

However, if we have another look at the context properties after the receive pipeline has done its job, we notice the properties are back in place and can be used for further processing/routing/....

Conclusion

Whenever you need to save a couple of values across a call, this is an alternative to building an orchestration just to keep track of those values.

If you are building a flow that needs to save a huge amount of data, you could of course build a solution that saves this data to disk/SQL/... instead, but that is up to you.

Categories: BizTalk
written by: Maxim Braekman

Posted on Tuesday, May 31, 2016 8:36 PM

by Pieter Vandenheede

For an automated BizTalk deployment, I needed to automatically add WCF behaviorExtensions to the machine.config using some C# code.

I've been playing around with the BizTalk Deployment Framework lately, and for one particular BizTalk application I need to add 4 custom behaviorExtensions. I had some very specific logic that I needed to put into WCF message inspectors. When you think about automating the installation of a BizTalk application, you don't want to be manually adding the behaviorExtensions to the machine.config. So I set out to add these via a C# application. It seems this was not as trivial as I thought it would be. First things first, we need to be able to retrieve the location of the machine.config file:
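
The original snippet is not reproduced in this copy of the post; a minimal sketch, using RuntimeEnvironment from System.Runtime.InteropServices (the variable name is mine):

using System.Runtime.InteropServices;

// Path of the machine.config for the runtime this process is running under.
string machineConfigPath = RuntimeEnvironment.SystemConfigurationFile;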

The above code will give you the path of the machine.config file, depending on which runtime (x86 or x64) you are running the code under. When running as x86, you will get the following path:

"C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config"

When running as x64, you will get the following path:

"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config"

Once the path has been found, we need to open the file and position ourselves at the correct location in the file (system.serviceModel/extensions):
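
A minimal sketch, continuing with the machineConfigPath from above and assuming the System.Configuration API:

using System.Configuration;

// Open the machine.config we just located and grab the extensions section.
var fileMap = new ConfigurationFileMap(machineConfigPath);
Configuration machineConfig = ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
ConfigurationSection section = machineConfig.GetSection("system.serviceModel/extensions");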

Now this is the point where I initially got stuck: I had no idea I had to cast the section to a System.ServiceModel.Configuration.ExtensionsSection. Doing so allows you to add your behaviorExtension to the config file, as such:
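
Something along these lines; the extension name and the assembly-qualified type are placeholders for your own message inspector:

using System.ServiceModel.Configuration;

var extensions = (ExtensionsSection)section;

// Register the behavior extension by name and assembly-qualified type name.
extensions.BehaviorExtensions.Add(new ExtensionElement(
    "myMessageInspector",
    "MyCompany.MyMessageInspectorBehavior, MyCompany.Inspectors, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef"));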

Don't forget to set ForceSave, as without it the update does not seem to get written. All together, this gives you the following code:
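
Putting it all together, a console application along these lines should do the job (a sketch with placeholder names, rather than the exact code from the original post):

using System;
using System.Configuration;
using System.Runtime.InteropServices;
using System.ServiceModel.Configuration;

class Program
{
    static void Main(string[] args)
    {
        // machine.config of the runtime this process runs under (x86 or x64).
        string machineConfigPath = RuntimeEnvironment.SystemConfigurationFile;

        AddBehaviorExtension(
            machineConfigPath,
            "myMessageInspector",
            "MyCompany.MyMessageInspectorBehavior, MyCompany.Inspectors, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef");
    }

    static void AddBehaviorExtension(string machineConfigPath, string name, string type)
    {
        var fileMap = new ConfigurationFileMap(machineConfigPath);
        Configuration machineConfig = ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
        var extensions = (ExtensionsSection)machineConfig.GetSection("system.serviceModel/extensions");

        // Don't add the same extension twice.
        if (extensions.BehaviorExtensions.ContainsKey(name))
            return;

        extensions.BehaviorExtensions.Add(new ExtensionElement(name, type));

        // Without ForceSave the change does not get written to disk.
        extensions.SectionInformation.ForceSave = true;
        machineConfig.Save(ConfigurationSaveMode.Modified);
    }
}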

If, like me, you want to adapt both the x86 and the x64 machine.config files, just replace "Framework" in the x86 machine.config path with "Framework64" before doing the same. I made myself a simple console application which allows me to call it while running the MSI for my BizTalk application. Just make sure the MSI runs as an administrator!

Categories: Technology
written by: Pieter Vandenheede

Posted on Saturday, May 14, 2016 1:54 PM

by Joachim De Roissart, Michel Pauwels, Pieter Vandenheede and Robert Maes

The Codit recap of the third and last day at Integrate 2016.

Day 3 is already here and, looking at the agenda, the main focus today is IoT and Event Hubs.

Unfortunately we underestimated the traffic today and spent quite a long time getting to the ExCeL, missing the first one and a half sessions. Luckily we were able to ask around and follow Twitter while stuck in traffic.

Azure IaaS for BizTalkers

Stephen W. Thomas walked us through the possibilities of Azure IaaS (Infrastructure as a Service) nowadays. The offering is getting bigger every day, and having lots of choice means the right choice tends to be hard to make: 58 types of virtual machines make one wonder which is the right one. Luckily Stephen was able to guide us through the differences, identifying which ones are the better choice for optimal performance, depending on your MSDN access level. The choices you make have an immediate effect on the monthly cost, so be aware!

Unified tracking on premise and in the cloud

M.R. Ashwin Prabhu was the next presenter. He took off with a very nice demo, showing how to bring on-premise BAM tracking data into a Power BI dashboard, something a lot of people might have already thought about, but never got around to doing. We all know the BAM portal in BizTalk Server has not been updated since 2004 and it urgently needs a refresh. Customers are spoiled nowadays and, with reason, many of them don't like the looks of the BAM portal and get discouraged from using it.

Moving from BAM data to Logic Apps tracked data is not far-fetched, so another demo followed, demonstrating just that: combining BAM data and Logic Apps tracked data into one solution, with a very refreshing tracking portal in Power BI, using the following architecture:

(Image by Olli Jääskeläinen)

IoT Cases

IoT is not the next big thing; it already is the big thing, right now!

Steef-Jan Wiggers explained how, a few years ago, he built a solution to process sensor data from windmills. Nowadays Azure components like Event Hubs (the unsung hero of the day), Azure Stream Analytics, Azure Storage, Azure Machine Learning and Power BI can make anything happen!

Kent Weare proved this in his presentation. He explained how, at his company, they implemented a hybrid architecture to move sensor data to Azure instead of to on-premise historians. Historians are capable of handling huge amounts of data and events, something which can also be done with Azure Event Hubs and Stream Analytics. Handling the output of this data was managed by Logic Apps, Azure Service Bus and partly by an on-premise BizTalk Server. The decision to move to Azure was mostly based on the options around scalability and disaster recovery.
In short, a very nice presentation, making Azure services like Event Hubs, Stream Analytics, Service Bus, Logic Apps and BizTalk Server click together!

After the break, Mikael Häkansson continued on the IoT train, showing his experience and expertise in two more demos. First he managed to do some live programming and debugging on stage, demonstrating how easy it can be to read sensor data from an IoT device, in this case a Raspberry Pi 3, and he talked about managing devices, including pushing application, firmware and software updates.
In a last demo he showed us how he made an access control device using a camera with facial recognition software and microservicebus.com.

The last sessions

Next up was Tomasso Groenendijk, talking about his beloved Azure API Management. He showed off its capabilities, especially around security (delegation and SSL), custom domain names and such.

A 3-day conference takes its toll: very interesting sessions, early mornings, long days and short nights. Just as our attention span was being pushed to its limits (nothing related to the speakers, of course), Nick Hauenstein managed to spice things up, almost as much as the food we tasted the last couple of days ;-)

He managed to give us, without a doubt, the most animated session of the conference, as demonstrated by this picture alone:

(Picture by BizTalk360)

A lightning-fast and interactive session, demonstrating his fictional company creating bobbleheads. He demonstrated a setup of Azure Logic Apps and Service Bus acting and behaving like a BizTalk Server solution: correlation, long-running processes, most of the things we know from BizTalk Server.

A really nice demo and session, worth watching the video for!

Because we needed to close up our commercial stand and had a strict deadline for our Eurostar departure, with heavy traffic in London, we unfortunately missed Tom Canter's session. We'll make sure to check out the video when the recordings are released in a few weeks.

The conclusion

Integrate 2016 was packed with great sessions, enchanting speakers and great people from all over the world. We had a really great time meeting people from different countries and companies. The mood was relaxed, even when talking to competitors! Props to the organizers of the event and the people working to make things go smoothly at the Platinum Suites of the London ExCeL. It was great seeing all of these people again, and hearing about everyone's progress since last year's BizTalk Summit 2015.

We take with us the realisation that Azure is hotter than ever. Microsoft is still picking up its pace and new features get added weekly. Integration has never been so complex, but nowadays Microsoft has one or even more answers to each question. With such a diverse toolset, both on premise and in the cloud, it does remain hard to keep ahead: a lot of time is spent keeping up to date with Microsoft's ever-changing landscape.

We hope you liked our recaps of Integrate 2016; we sure spent a lot of time on them. Let us know what you liked or missed, and we'll make sure to keep it in mind next year.

Thank you for reading!

Brecht, Robert, Joachim, Michel and Pieter