
Codit Blog

Posted on 12 March 2018 18:12

by Tom Kerkhove and Sam Vanhoutte

Recently a few of our colleagues participated in The Barrel Challenge, a 3,500-kilometer relay tour across Europe, riding in a 30-year-old connected car.

We wouldn't be Codit if we didn't integrate the car with some sensors to track its data in real time. The sensors installed in the classic Ford Escort collected data on noise, temperature, speed, humidity and location, based on the telemetry that the device in the car was sending to Microsoft Azure via our Nebulus IoT Gateway.

Missed everything about the rally? Don't worry - Find more info here or read about it on the Microsoft Internet of Things blog.

Connecting our car to the cloud

The sensors were connected to a Raspberry Pi 3 running Windows 10 IoT Core, using the GrovePi starter kit. This starter kit contains a set of sensors and actuators that can easily be read by leveraging the .NET SDK provided by the Grove team.

In order to connect the device with the Azure backend, Nebulus™ IoT Gateway was used. This is our own software gateway that can be centrally managed, configured and monitored. The gateway is built on .NET Core and can run on Azure IoT Edge.

The GPS signals were read from the serial port, while the other sensors (temperature, sound, humidity…) were read via the GPIO pins through the GrovePi SDK.

The gateway was configured with buffering (as connectivity was not always guaranteed in tunnels or rural areas), so that all buffered data was transmitted on reconnect.

Connectivity was provided over 4G, through a Mi-Fi device.

Real-time route & position

The most important part of the whole project: having a real-time map to see the current position of the car and show sensor data.

There aren't many options to handle this: you can go with low-level WebSockets or use a third-party real-time library, but we chose SignalR given that we are most familiar with the Microsoft stack.

The setup is fairly easy - you add the NuGet packages, set up a hub class and implement the client library. We decided to go for the latest version, which runs on .NET Core. The best thing about this new version is that there's a TypeScript library, and yes, it works with Angular 5! To connect SignalR to our application we wrapped it in a service which we named "TrackerService".
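
To give an idea of what that wrapper looks like, here is a minimal sketch assuming the @aspnet/signalr TypeScript client; the hub URL and the 'telemetryReceived' event name are made up for illustration, not the actual names we used.

    // Minimal sketch of a TrackerService wrapping the SignalR TypeScript client.
    import { Injectable } from '@angular/core';
    import { HubConnection, HubConnectionBuilder } from '@aspnet/signalr';

    @Injectable()
    export class TrackerService {
      private connection: HubConnection;

      // Opens the connection and invokes the given handler for every telemetry message.
      connect(onTelemetry: (payload: any) => void): Promise<void> {
        this.connection = new HubConnectionBuilder()
          .withUrl('/hubs/tracker')          // hypothetical hub endpoint
          .build();

        this.connection.on('telemetryReceived', onTelemetry);
        return this.connection.start();
      }
    }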

All of this data also has to be managed on the client. This is done with NgRx, a Redux implementation for Angular with RxJS support. This means the components don't get data directly from the TrackerService, nor does the service push any data to the components. Instead, the TrackerService just dispatches an action with the payload received from SignalR; the action is then handled by a reducer, which updates the state. The components subscribe to the state and receive all the changes. The advantage is that you can switch to `OnPush` change detection in all of the components, which results in a performance boost.
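
As a rough sketch of that flow (the action name, payload shape and state shape are assumptions for illustration, not our exact implementation):

    // Sketch of the NgRx flow: the TrackerService only dispatches an action,
    // a reducer updates the state, and components select from the store.
    import { Action } from '@ngrx/store';

    export const TELEMETRY_RECEIVED = '[Tracker] Telemetry Received';

    export class TelemetryReceived implements Action {
      readonly type = TELEMETRY_RECEIVED;
      constructor(public payload: { lat: number; lng: number; speed: number }) {}
    }

    export interface TrackerState {
      positions: { lat: number; lng: number; speed: number }[];
    }

    const initialState: TrackerState = { positions: [] };

    // The reducer is the only place where the state changes.
    export function trackerReducer(state = initialState, action: Action): TrackerState {
      switch (action.type) {
        case TELEMETRY_RECEIVED:
          return { ...state, positions: [...state.positions, (action as TelemetryReceived).payload] };
        default:
          return state;
      }
    }

In the TrackerService the SignalR callback then simply calls `store.dispatch(new TelemetryReceived(payload))`, and the components read the data with `store.select`, which is what makes `OnPush` change detection possible.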

The map

For the map we initially looked at Azure Location Based Services, but it didn't support the features we needed, such as custom markers, at least not when we started the project. This made us choose Leaflet, which is free and has a lot of interesting features. First of all, it was very easy to show the total route by just passing an array of GPS coordinates into a polyline. The best part of Leaflet was how easy it is to calculate the total distance of a route: just reduce the GPS array and call the distanceTo method on the previous and current coordinates, and you get an estimated distance. No need to call an extra API!
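
In code that boils down to something like the sketch below, where the map and the array of coordinates are assumed to exist already:

    // Sketch: draw the route and estimate its total length with Leaflet.
    import * as L from 'leaflet';

    declare const map: L.Map;              // the Leaflet map, created elsewhere
    declare const coordinates: L.LatLng[]; // the GPS coordinates of the route

    // Show the total route by passing the array of coordinates to a polyline.
    const route = L.polyline(coordinates).addTo(map);

    // Estimate the total distance (in meters) by reducing the array and calling
    // distanceTo on each pair of previous/current coordinates.
    const totalDistance = coordinates.reduce(
      (sum, current, index, all) =>
        index === 0 ? 0 : sum + all[index - 1].distanceTo(current),
      0);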

Updating the Leaflet data is just a matter of subscribing to the NgRx store, appending the real-time data to the current `polyline` and updating the position of the car marker.
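
A sketch of that wiring, reusing the assumed state shape, polyline and marker from the previous snippets:

    import * as L from 'leaflet';
    import { Store } from '@ngrx/store';

    // Sketch: subscribe to the store and push every new position onto the map.
    function trackCar(store: Store<any>, route: L.Polyline, carMarker: L.Marker): void {
      store.select((state: any) => state.tracker.positions).subscribe((positions: any[]) => {
        const latest = positions[positions.length - 1];
        if (!latest) { return; }

        const point = L.latLng(latest.lat, latest.lng);
        route.addLatLng(point);      // extend the drawn route
        carMarker.setLatLng(point);  // move the car marker to the newest position
      });
    }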

Creating aggregates in near-real-time

In order to visualize how our team was doing, we decided to create aggregates every 15 minutes and every hour for a variety of metrics, such as speed and altitude. We based these aggregates on the device telemetry that was sent to Azure IoT Hub. Since we were already using routes on the IoT Hub, we added a new one that includes all events to be consumed by our aggregation layer.

To perform these aggregations it was a no-brainer to go with Azure Stream Analytics, given that it can handle the ingestion throughput and natively supports aggregates through windowing, more specifically a Tumbling Window.

By using named temporary result sets (defined with a WITH clause) we were able to capture the aggregate results once and output them to each of the required sinks. This keeps our script simple, while still producing the same results without duplicating the business logic.

And that's how we've built the whole scenario - Here are all the components we used in a high-level overview:


Want to have a look? You can find all our code on GitHub.

Thanks for reading,

Jan, Sam & Tom

Posted on 7 March 2018 18:36

by Toon Vanhoutte

Recently, I discovered a new tab for Logic Apps resources in the Azure portal, named Workflow Settings. Workflow Settings is a very generic name, but it's good to know that it includes additional access control configuration through inbound IP restrictions. Two types of restrictions are possible: on the runtime and on the run history. Let's have a closer look!

Runtime restrictions

You can configure IP restrictions on your Logic App triggers:

  • Any IP: the default setting that does not provide any additional security
  • Only other Logic Apps: this should be the default setting for Logic Apps that are used as reusable components
  • Specific IP ranges: this should be configured for externally exposed Logic Apps, if possible

When trying to access the Logic App trigger from an unauthorized IP address, you get a 401 Unauthorized.

"The client IP address 'XXX.XXX.XXX.XXX' is not in the allowed caller IP address ranges specified in the workflow access control configuration."

Run history restrictions

You can also restrict calls to the run history inputs and outputs. When no IP addresses are provided, there's no restriction. From the moment you provide IP ranges, they behave as a whitelist of allowed addresses.

When trying to access the Logic App run details from an unauthorized IP address, you can still see the visual representation of the Logic App run. However, you're not able to consult any further details.



Another small, but handy security improvement to Logic Apps. It's important to be aware of these capabilities and to apply them wisely.


Categories: Azure
written by: Toon Vanhoutte

Posted on 20 February 2018 17:26

by Toon Vanhoutte

During the past weeks, I have received quite a few questions about the behaviour of variables inside a non-sequential for-each loop. It does not always behave as one might expect. This post describes a potential issue you might encounter, explains the reason why and provides a solution.

The problem

Let's consider the following scenario:

  • The Logic App receives a request that contains an array
  • It iterates with a parallel for-each loop through the array:
    - A filename with a @guid() is generated
    - A result array of these file names is composed
  • The Logic App returns that array as a response

When running the for-each loop in parallel mode, there is unexpected behaviour: some of the filenames contain the same GUID value.

The explanation

The variable is initialized at a global level and not within the for-each loop. This means that multiple parallel for-each executions are working against the same instance of the global variable. Depending on race conditions, this might lead to incorrect functioning of your Logic App.

The solution

The issue can simply be resolved by using the Compose action instead of variables. The Compose action is instantiated within each individual for-each iteration, so there's no possibility that other for-each iterations interfere.



Don't use the Set Variable action within a non-sequential for-each loop! Use the Compose action instead!


Categories: Azure
written by: Toon Vanhoutte

Posted on 19 February 2018 11:00

by Nicolas Cybulski

A best-practice guide for when you're faced with uncertain results during a project due to a lack of information.

Kicking the can down the road

I guess it has happened to the best of us. You're on a project, whether it is during sales, analysis or even development, and your customer utters the sentence: "Well, since we don't have all the information yet, let's take this vague path, and we'll see how it turns out." The customer has just kicked the can down the road.

This innocent sentence is a potential trap since you’re now dealing with a “known unknown”, or to put it in other words: “A Risk”.

When you're in a similar situation, your first reflex should be to ask the following three questions:

  • WHEN will we evaluate the solution? (Planning)
  • WHAT will be the conditions against which to evaluate the solution? (Scope)
  • HOW will we react if the solution does NOT pan out like we’ve expected? (Action)

Planning (WHEN)

So, the first question you need to ask is "WHEN". It is important that a fixed point in time is agreed with the customer, to ensure that the proposed solution is actually evaluated.

It is up to the customer and the project manager to decide how much risk they want to take. Are they prepared to wait until the entire solution is developed before evaluating? In this case, the possibility exists that the solution is not fit for use and everything needs to be redone.

A better way to go (especially when the proposed solution is uncertain or vague) is to cut it into smaller iterations and evaluate early. This way, problems can be caught in an early stage and corrective actions can be performed.

Scope (WHAT)

Now that we've set out one or more moments of evaluation, it is important to define exactly WHAT we're going to evaluate. Since the customer (or even your own team) has decided to take an uncertain approach to a vaguely described scope (e.g. we need to generate reporting, yet we don't know exactly which tool to use or how the data should be represented), it is important that everybody is on the same page as to which criteria need to be fulfilled.

This evaluation process is linked to planning. The later you evaluate the solution, the more precise the scope should be, since there is little or no way of correcting the approach afterwards.

Once again, the greater the uncertainty of the scope, the shorter the iterations should be. But even short iterations need fixed criteria and deliverables up front; these criteria define the scope of the iteration.

Action (HOW)

Last, but most definitely not least, the customer needs to be informed about potential actions, should the result of the evaluation turn out to be unsatisfactory. Usually the "we'll see" sentiment originates in the inability (or unwillingness) to make long-term decisions.

However, kicking the can down the road is never a good strategy when dealing with projects. Sooner or later these potential time bombs detonate, usually in a late stage of the project when budgets and deadlines are tight.

So, it is of the utmost importance that the customer is made aware that the suggested solution might not work out. A worst-case scenario needs to be set up in advance, and follow-up actions (extra budget, a change in deadlines, dropping the feature altogether, …) need to be communicated.


So, in conclusion: when working on a project, it is important not to fall into the trap of pushing risks down the road because information is missing.

Either delay the development until more information is available, or adapt your iterations to the vagueness of the proposed solution. Plan your evaluation moments, define the scope of each iteration and communicate a plan "B" should the worst come to happen.


Hope you enjoyed my writing!
Please don't hesitate to contact me if you think I totally missed or hit the mark.

Posted on 7 February 2018 13:20

by Pim Simons

During a migration from BizTalk 2006 R2 to BizTalk 2016 we ran into an issue with the "ESB Remove Namespace" pipeline component. This component is available out of the box when the ESB Toolkit is installed and is used to remove all namespaces from a message.

After successfully completing the migration and putting the new environment into production, the customer also needed to process a new message. As with the other messages received in their BizTalk environment, all of the namespaces had to be removed from the message and a new one added. For this, the customer had used the "ESB Remove Namespace" and "ESB Add Namespace" pipeline components, and this setup had been successfully migrated to BizTalk 2016. For more information on these pipeline components, see:

However, when the new message was received by BizTalk, the following error occurred:

Reason: The XML Validator failed to validate.
Details: The 'nil' attribute is not declared.

It turned out that the XSD of the new message has a nillable field, and in the message we received the field was indeed marked as nil. The "ESB Remove Namespace" pipeline component removed the xsi namespace and prefix, which caused the "XML Validator" pipeline component to fail!

Changing the message content or the XSD was not an option, since the XSD was created and maintained by the external party that was sending us the message. We ultimately ended up recreating the "ESB Remove Namespace" pipeline component as a custom pipeline component and modifying the code.

The "ESB Remove Namespace" pipeline component contains code to process the attributes, which includes this snippet:

    // Copy the attribute, dropping its prefix and namespace (namespace declarations are skipped)
    if (string.Compare(inReader.Name, "xmlns", StringComparison.OrdinalIgnoreCase) != 0 &&
        !inReader.Name.StartsWith("xmlns:", StringComparison.OrdinalIgnoreCase))
        writer.WriteStartAttribute("", inReader.LocalName, "");
We replaced this with:
    // Preserve the xsi:nil attribute with its prefix and XML Schema instance namespace,
    // so the XML Validator still recognises it
    if (inReader.LocalName.ToLower() == "nil" &&
        inReader.NamespaceURI.ToLower() == "http://www.w3.org/2001/xmlschema-instance")
        writer.WriteStartAttribute(inReader.Prefix, inReader.LocalName, inReader.NamespaceURI);
    else if (string.Compare(inReader.Name, "xmlns", StringComparison.OrdinalIgnoreCase) != 0 &&
             !inReader.Name.StartsWith("xmlns:", StringComparison.OrdinalIgnoreCase))
        writer.WriteStartAttribute("", inReader.LocalName, "");
Now when we receive a message containing a node that is marked as nil, the custom “ESB Remove Namespace” pipeline component handles it correctly and the message is processed.

We could not find any information on MSDN about the "ESB Remove Namespace" pipeline component not supporting nillable nodes, and I find it strange that the component does not support this. To me this seems like a bug.

Categories: BizTalk
written by: Pim Simons