
Codit Blog

Posted on Friday, April 17, 2015 5:21 PM

Maxim Braekman by Maxim Braekman

On April 3rd, we had the honor of taking part in the world premiere of the IoT Dev Camp organized by Microsoft at their offices in Zaventem, Belgium. Our host for the day was Jan Tielens (@jantielens), who guided us through demos and labs using both cloud services and electronics.

In general, it might sound easy to hook up a range of devices to a proper interface, but a lot of things have to be taken into account when setting this up. Some of the things you need to keep in mind are device registration, security and keeping connectivity settings up-to-date.

One way to secure the communication between the devices and a cloud service, without having to configure that security for every single device, is to use a device gateway.
This gateway takes care of all the communication, and the corresponding security, between the devices and the cloud service. This allows you to easily add new devices without adapting the existing interface.


The goal of this session was to create a solution in which sensors register data that is sent to and processed by cloud services. Before we could actually start tinkering with devices and sensors ourselves, we got a nice presentation, including some demos, on how to configure and use Azure services such as Event Hubs, Stream Analytics and Mobile Services.

Event Hubs

Event Hubs is an ideal service for collecting all the data coming from several devices. It enables the collection of event streams at high throughput, from a diverse set of devices and services. As it is a pub-sub ingestion service, the data can be passed on to several other services for further processing or analysis.

Stream Analytics

Thanks to Stream Analytics, the data retrieved from the sensors can be analyzed in real time, showing the most recent, up-to-date information on, for example, a dashboard.

As Sam Vanhoutte already gave an extensive description of Stream Analytics in this blog post, I will not dive into this subject.

Mobile Services

Using Azure Mobile Services, you can quickly create a service to process and show any type of data on a website, a mobile device or any other application you might be creating.

This session did not go into the details of creating mobile services with custom methods. Mobile Services was only used as an example to show that its backend database can store the output of Stream Analytics. In a real-life solution this would allow you to make all of the data collected from several sensors publicly available.


There are several types of devices that can be used as a base to start setting up an IoT interface. Some of these boards are briefly described below.


Arduino

The Arduino, which is an open-source device, is probably the most popular one currently available. The biggest benefit of this device is the large community, which allows you to easily find the information or samples required to get you going.

The downside of this device is its low specs. With only 32 KB of flash memory, it has a limited set of capabilities. Security-wise, for instance, it is not possible to communicate with services over the HTTPS protocol; it is, however, capable of sending data over HTTP using an Ethernet (UTP) shield.



Netduino

Another device, which is quite similar to the Arduino, is the Netduino, as it is based on the former's platform. This board has better specs than the Arduino but, because of those specs, consumes more power.

The big difference, however, is that it allows you to run the .NET Micro Framework, enabling you to develop in .NET languages.

Then again, the downside of this board is that the community is not as big, meaning you will have to figure out more of the details yourself.


.NET Gadgeteer

One of the other available development boards is the Microsoft .NET Gadgeteer, which also enables you to use the .NET Micro Framework and allows you to use a range of "plug-and-play" sensors.

The specs of this device are better than both of the previous boards, meaning it has a lot more capabilities, but then again it does not have a large community helping you out.


Raspberry Pi

Of all these boards, the Raspberry Pi is the one with the highest specs, even allowing you to run a full operating system on the device. Typically this device runs a version of Linux, but as was announced some time ago, Microsoft will publish a free version of Windows 10 that will run on the board!

The huge benefit of this board is the capability of using pretty much any programming language for development, since any required framework can be installed.

However, before you can start using the Raspberry Pi, you will need to obtain a copy of an OS, which has to be 'burned' onto a micro-SD card that acts as the board's hard drive.



Once the presentation and demos were finished, we all got the chance, during a ‘hackathon’, to attempt to set up a fully working flow, starting from sensors and ending up in a cloud service.

Overall, this session gave us a nice overview of the capabilities of IoT in real-life situations.

To round up the session, Microsoft came up with a nice surprise. As this was the world premiere, all of the attendees received a Raspberry Pi 2, preparing us for integrating IoT using Windows 10.

Thank you, Microsoft!




Categories: Azure
Tags: Event Hubs, IoT
written by: Maxim Braekman

Posted on Monday, May 26, 2014 11:23 PM

Tom Kerkhove by Tom Kerkhove

Henry Houdmont by Henry Houdmont

Glenn Colpaert by Glenn Colpaert

Maxim Braekman by Maxim Braekman

Today was the first day of the brand new Belgian conference Techorama. Some of the Codit attendees are taking a look back to the first day of the conference.


Today was the first day of a brand new Belgian conference called Techorama.
After TechDays called it quits, some community techies decided to join forces and organize a community-driven alternative.

They have the ambition to make Techorama THE annual tech conference for the developer community in Belgium.

Some Codit people attended this great event, and we have a set of reviews with some take-aways ready for those who were not able to attend this great initiative!

Keynote - 'The Real Deal' by Bruno Segers

To be honest, the keynote was my least favorite session, for the simple reason that it was not a technical one. However, Bruno was able to tell his story in a very amusing way.
He pointed out that we, as developers, know the risk of exposing our data on the internet, but that the majority of users do not!

Innovation is really great, but we should not pay the price with our identity and data; we need better laws that protect users.

Big companies are selling our data - Never forget this.

- Tom Kerkhove

What’s new in Windows Phone 8.1? by Gitte Vermeiren

In this session Gitte gave us an overview on what’s new in Windows Phone 8.1 and what’s in it for us as developers. This session was inspired by the BUILD session of a couple of weeks ago.

Gitte talked about the convergence story Microsoft is bringing to make apps and applications easily available/convertible across devices. Write it once, use it everywhere!

This session was full of code examples and tons of tips and tricks on how to use the new WP 8.1 SDK and how to build better cross-device apps.

- Glenn Colpaert

Lean ALM: making software delivery flow and learning from software innovators by Dave West

This session was all about the following best practices:


Successful teams have a few things in common:

  • They have smart people with a clear mission
  • They have a mandate to change whatever is necessary to improve their productivity
  • They automate the hell out of everything


Allow teams to plan, to do, to learn … with the right tools and practices at the team level, and roll that feedback back into more traditional planning and operational processes.


Encourage teams to put in place a transparent, visual process in real time that searches for the truth whilst ensuring that team measures roll-up into a broader view of the delivery.


E-mail = Evil

Enable high-performance teams to focus on the currencies of collaboration within the context of the work they are doing, even when some team members are not in the scrum team using the same tools and practices.

- Henry Houdmont

Intro to the Google cloud by Lynn Langit

Okay, so most .NET developers know about Microsoft Azure, right? But what about the Google Cloud? The actual difference was about to be explained to us by Lynn Langit, a former Microsoft employee. Of course the overall concept is quite similar to the known cloud platforms, such as Microsoft Azure. So you do have Google equivalents of the Azure VMs or worker roles, called the Compute and App Engines, but the main idea behind the Google cloud is high performance at low cost.

Instead of allowing the customer to set a limit on the scalability of the cloud engine, Google allows you to set a limit on the maximum price. Therefore the engine will scale automatically according to the amount of resources needed, limited to the total costs.

The idea of maximum performance at low cost can also be found in BigQuery, the equivalent of SQL Azure storage. For this type of storage, you do not pay for the amount of data you collect, but for the queries you execute.

In short, the Google cloud is definitely worth taking a look at.

- Maxim Braekman

Service Virtualization & API management on the Microsoft platform by Sam Vanhoutte

Imagine a landscape of webservices with their own authentication, monitoring,... how do we keep track of all this?
With webservice virtualization you can control all these features in one place by exposing the physical webservices over a virtualized one.

Sam introduced me to Windows Azure API Management, which was recently announced as a public preview!
It enables you to easily virtualize REST APIs in a controlled way, using a publisher portal where you can create new APIs with operations, products and policies, which developers can then sign up for through the developer portal.

Next to the new Microsoft Azure API Management in the cloud, Sam also talked about an on-premise alternative called Sentinet, a product that Codit is an exclusive reseller of.
Once again, with this tool we can virtualize physical webservices to the consumer with our own access control, load balancing (round-robin/fail-over/...), etc.
It even has a test section where you can tell the virtualized service to send out test messages. Imagine the physical service is not built yet, but you need to test the consuming application against it.
This is no longer a problem!

Both the API Management & Sentinet were new to me, but Sam was able to explain them very easily and show me the big benefits of both platforms and illustrate how easy they are to use.

- Tom Kerkhove

(If you want to know more about API Management, read Massimo's post here)

Zone out, check in, move on by Mark Seeman

Most programmers desire to be ‘in the zone’ as much as possible; they see it as a prerequisite to being productive. However, the reality is often one of interruptions.
As it turns out, being in the zone is a drug, and as you build up tolerance, getting the next ‘high’ becomes more and more difficult. This may explain why programmers move on to management or other pastures as they get older.

However, it’s possible to stay productive as a programmer, even in the face of frequent interruptions.
Forget the zone, and learn to work in small increments of time; this is where a distributed version control system can help greatly.

To be more productive, we need to stay focused, but how do we stay focused? As it turns out, we stay focused when we get (immediate) feedback. Mark believes that unit tests provide us the feedback we need to stay focused on the code we're writing.

A tip for avoiding interruptions: the headphone rule (as long as the headphones are on, nobody may interrupt me).
Problem: we spend too much time reading code instead of writing code, because when we get interrupted we need to start over and find out where we were.


  • Write less code
  • Build modules instead of monolithic systems
  • Work from home
  • Use a distributed version control system (Git, Mercurial) and work with branches
  • Check in every five minutes (compilable code)
  • Integrate often: get in, get out quickly to avoid working concurrently with a colleague on the same files and causing merge conflicts
  • Use feature toggles to be able to check in and ship incomplete code

One other way to keep focused is by keeping your code small (modular) and clear, so when interrupted it takes less time to pick up the thread again later.
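The feature-toggle tip from the list above is easy to sketch: incomplete code ships dark behind a flag, so a branch can be integrated and even deployed long before the feature is done. A minimal illustration (the toggle and function names are invented, not from the talk):

```typescript
// Minimal feature-toggle sketch: incomplete code is checked in and shipped,
// but stays dark until its flag is flipped.
type Toggles = Record<string, boolean>;

const toggles: Toggles = {
  newCheckoutFlow: false, // under development: integrated, not yet exposed
  betaSearch: true,
};

function isEnabled(toggles: Toggles, feature: string): boolean {
  return toggles[feature] === true;
}

function renderSearch(toggles: Toggles): string {
  // The unfinished path stays unreachable until the toggle is enabled.
  return isEnabled(toggles, 'betaSearch') ? 'beta-search-ui' : 'classic-search-ui';
}
```

In practice the toggle values would come from configuration rather than a hard-coded object, so a feature can be switched without a redeploy.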

- Wouter Seye & Henry Houdmont

Windows Azure Web Jobs – the new way to run your workloads in the Cloud by Magnus Mårtensson

Magnus, who is very active online, started his session by setting a kitchen timer to one hour, because once he starts talking cloud he's unstoppable.
His talk today was about "Azure Web Jobs", which recently got released as a public preview.

You can think of Azure Web Jobs as background processes hosted under the Azure Website they are deployed to, sharing the same advantages.
Magnus gave us some very interactive demos showing all the features and possibilities of Azure Web Jobs. It's amazing how easily you can set this up.
Web Jobs uses Azure storage behind the scenes, but as a developer you literally need zero knowledge of storage coding when using Azure Web Jobs.
Web Jobs supports many languages like C#, Python, PHP,…
Be sure to check out this new feature as this is a very powerful and cool addition to the Microsoft Azure Platform.

- Glenn Colpaert

Dude, where's my data? by Mark Rendle

Which type of (cloud) storage should be used for which kind of application or situation? Mark Rendle enlightened us about some of the possibilities, such as several types of relational databases, NoSQL, queues and messages. Although there are many options to choose from, not all are suitable for every type of application you might build.

Some of the best practices for choosing a type of storage that were given during this session can be found below.

  • Store as little data in SQL as possible; the more data you store, the harder it becomes to manage.
  • If you do need storage, use the simplest storage that works; do not make it more complicated. If the data can be stored in a text file, stick to the text file.
  • Sufficiently test your choice in Azure! Do not test only on local hardware or the emulator; it is never exactly the same as Azure.

Last, but definitely not least:  Experiment, learn and keep an open mind. Do not always stick to the familiar, known option.

- Maxim Braekman

Managing your Cloud Services + Continuous Integration solutions by Kurt Claeys

Imagine you have a cloud service running in production but how do we manage & monitor that? This was the main topic in this talk by Kurt.
He showed us how we can auto-scale our cloud service based on the number of messages in the queue, what the Microsoft Service Management API has to offer, the SQL database management views, how to restore/backup SQL databases to blob storage, etc.

Kurt also introduced us to the several ways to deploy your cloud service, and how you can use continuous integration with Visual Studio Online linked to your cloud service.
This will automatically deploy a new version to the staging phase every time we check in our code and the build passes!

Very interesting topics that will ease the development & afterlife of a cloud service!

- Tom Kerkhove

#IoT: Privacy and security considerations by Yves Goeleven

We all know that we are at the dawn of what is called the Internet of Things, and we are already bumping into the first serious issues. How about our privacy and securing the use of millions of sensors?
Yves opened his session by stating that everything is new for everyone, and that he has some ideas about those issues.

We will need to fight this on two fronts, physical and virtual: we need to secure ourselves against physical tampering and virtual tampering, and think about our data, etc.
For example, small devices have little memory & CPU; how will we encrypt data, and where will we do this? Are we communicating directly with the cloud or are we using gateways?

While I'm not really active in the IoT world, this discussion session was still very interesting to me, and I really liked Yves' approach with the gateway principle, where devices communicate with one gateway per location, after which the gateway communicates with one backend. It was also good to hear Yves' vision on this topic and which very important issues need to be tackled as soon as possible! I know that this next big thing is happening and we need to take our precautions.

I think we can say that Yves is becoming an IoT guru in the Belgian community and I'm very curious about his next talk!

- Tom Kerkhove


That was it for day one, stay tuned for more Techorama action tomorrow!!


Thanks for reading,

Henry, Glenn, Maxim, Wouter & Tom

Posted on Monday, March 12, 2018 6:12 PM

Tom Kerkhove by Tom Kerkhove

Sam Vanhoutte by Sam Vanhoutte

Jan Tilburgh by Jan Tilburgh

Recently a few colleagues of ours participated in The Barrel Challenge, a 3,500-kilometer relay tour across Europe, riding in a 30-year-old connected car.

We wouldn't be Codit if we didn't integrate the car with some sensors to track their data in real-time. The installed sensors in the classic Ford Escort car collected data on noise, temperature, speed, humidity and location. This was based on the telemetry that the device in the car was sending to Microsoft Azure via our Nebulus IoT Gateway.

Missed everything about the rally? Don't worry - Find more info here or read about it on the Microsoft Internet of Things blog.

Connecting our car to the cloud

The car was connected with the sensors, using the GrovePi starter kit, to a Raspberry Pi 3 running Windows 10 IoT Core. This starter kit contains a set of sensors and actuators that can easily be read by leveraging the .NET SDK provided by the Grove team.

In order to connect the device with the Azure backend, Nebulus™ IoT Gateway was used.  This is our own software gateway that can be centrally managed, configured and monitored. The gateway is built in .NET core and can run in Azure IoT Edge.

The GPS signals were read from the Serial port, while the other sensors (temperature, sound, humidity…) were read, using the GPIO pins through the GrovePi SDK.

The gateway was configured, using buffering (as connectivity was not always guaranteed in tunnels or rural areas), so that all data was transmitted on reconnect.
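That buffer-and-flush behaviour can be sketched as a small queue that holds telemetry while the connection is down and transmits everything on reconnect. This is a simplified illustration of the idea, not the actual Nebulus gateway code:

```typescript
// Simplified sketch of buffering for flaky connectivity (tunnels, rural
// areas): readings queue up while offline and are all sent on reconnect,
// so nothing is lost.
interface Telemetry {
  timestamp: number;
  values: Record<string, number>; // e.g. temperature, humidity, speed
}

class BufferingSender {
  private buffer: Telemetry[] = [];
  public sent: Telemetry[] = []; // stands in for the real transmit call
  private online = false;

  send(msg: Telemetry): void {
    this.buffer.push(msg);
    if (this.online) this.flush();
  }

  setOnline(online: boolean): void {
    this.online = online;
    if (online) this.flush(); // transmit everything buffered while offline
  }

  private flush(): void {
    while (this.buffer.length > 0) {
      this.sent.push(this.buffer.shift()!);
    }
  }
}
```

A real gateway would also persist the buffer to disk and cap its size, but the ordering guarantee (oldest readings transmitted first on reconnect) is the essential part.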

Connectivity happened through 4G, used by a Mi-Fi device.

Real-time route & position

The most important part of the whole project: having a real-time map to see the current position of the car and show sensor data.

There aren't many options to handle this: you can go low-level with websockets or use an existing library, and we chose SignalR given we are most familiar with the Microsoft stack.

The setup is fairly easy: you add the NuGet packages, set up a hub class and implement the client library. We decided to go for the latest version, which runs on .NET Core. But the best thing about this new version is that there's a TypeScript library, and yes, it does work with Angular 5! To connect SignalR to our application we wrapped it in a service which we gave the name "TrackerService".

All this data also had to be managed on the client, and this part is done with NgRx, a Redux clone for Angular with RxJS support! This means the components don't directly get data from the TrackerService, nor does the service push any data to the components. The TrackerService just dispatches an action with the payload received from SignalR; the action is then handled by a reducer, which updates the state. The components subscribe to the state and receive all the changes. The advantage of this is that you can switch to `OnPush` change detection in all of the components, which results in a performance boost.
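That dispatch-reduce-subscribe flow can be sketched roughly like this (the action, state and field names are invented for illustration; in the real app the TrackerService dispatches this action from its SignalR callback):

```typescript
// Sketch of the client-side data flow: an action carries the SignalR
// payload; a pure reducer folds it into immutable state; components
// subscribe to the state and never talk to the service directly.
interface Position { lat: number; lng: number; }
interface TrackerState { route: Position[]; }

type TrackerAction = { type: 'POSITION_RECEIVED'; payload: Position };

const initialState: TrackerState = { route: [] };

function trackerReducer(
  state: TrackerState = initialState,
  action: TrackerAction
): TrackerState {
  switch (action.type) {
    case 'POSITION_RECEIVED':
      // Return a new state object so OnPush change detection sees the update.
      return { route: [...state.route, action.payload] };
    default:
      return state;
  }
}
```

Because the reducer always returns a fresh object, `OnPush` components only re-render when the state reference actually changes.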

The map

For the map we initially looked at Azure Location Based Services, but it didn't support the features we needed, such as custom markers, at least not when we started the project. This made us choose Leaflet, which is free and has a lot of interesting features. First of all, it was very easy to show the total route by just passing an array of GPS coordinates into a polyline function. The best part of Leaflet was that it was super easy to calculate the total distance of a route: just reduce the GPS array list and call the distanceTo-method with the previous and current coordinates and you'll get an estimated distance. No need to call an extra API!

Updating Leaflet data is just a matter of subscribing to the NgRx store, appending the real-time data to the current `polyline` and updating the position of the car marker.
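Leaflet's `distanceTo` computes a great-circle distance between two coordinates, so the reduce-over-the-route trick can be sketched in standalone TypeScript like this (an approximation using the haversine formula, not Leaflet itself):

```typescript
// Standalone sketch of the "reduce over the GPS array" distance trick:
// sum the great-circle distance between each consecutive coordinate pair.
interface LatLng { lat: number; lng: number; }

const EARTH_RADIUS_M = 6371000;

function haversineMeters(a: LatLng, b: LatLng): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.pow(Math.sin(dLat / 2), 2) +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.pow(Math.sin(dLng / 2), 2);
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

function totalRouteDistance(route: LatLng[]): number {
  // Same shape as the Leaflet version: reduce over the list, calling the
  // distance function with the previous and current coordinate.
  return route.reduce(
    (acc, cur, i) => (i === 0 ? 0 : acc + haversineMeters(route[i - 1], cur)),
    0
  );
}
```

With Leaflet itself the body of the reduce would simply be `acc + L.latLng(route[i - 1]).distanceTo(cur)`.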

Creating aggregates in near-real-time

In order to visualize how our team was doing, we decided to create aggregates every 15 minutes and every hour for a variety of metrics like speed and altitude. We based these aggregates on the device telemetry that was sent to Azure IoT Hub. Since we were already using routes, we added a new one that includes all events to be consumed by our aggregation layer.

To perform these aggregates it was a no-brainer to go with Azure Stream Analytics, given it can handle the ingestion throughput and it natively supports aggregates by using windowing, more specifically a Tumbling Window.

By using named temporal result sets we were able to capture the aggregate results in a result set and output it to the sinks that are required. This allows us to keep our script simple, but still output the same results without duplicating the business logic.
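The shape of such a script, with one named temporal result set feeding multiple outputs, might look roughly like this (a sketch; the field, input and output names are illustrative, not our actual job):

```sql
-- Sketch: 15-minute tumbling-window aggregates captured once in a named
-- result set, then written to two sinks without duplicating the logic.
WITH Aggregates AS (
    SELECT
        deviceId,
        AVG(speed)       AS avgSpeed,
        MAX(altitude)    AS maxAltitude,
        System.Timestamp AS windowEnd
    FROM TelemetryInput
    GROUP BY deviceId, TumblingWindow(minute, 15)
)
SELECT * INTO DashboardOutput FROM Aggregates;
SELECT * INTO ArchiveOutput FROM Aggregates
```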

And that's how we've built the whole scenario - Here are all the components we used in a high-level overview:


Want to have a look? You can find all our code on GitHub.

Thanks for reading,

Jan, Sam & Tom

Posted on Wednesday, January 21, 2015 2:19 PM

Sam Vanhoutte by Sam Vanhoutte

Azure Stream Analytics is a very interesting service on the Azure platform that allows developers and data analysts to get interesting information or events out of a stream of incoming events. The service is part of the typical IoT value chain and is positioned in the transformation part. This post describes how to get started.

The following picture is taken from a Microsoft presentation on IoT and shows Stream Analytics, connected with Event Hubs.


This post explains how you can start using Azure Stream Analytics and covers some troubleshooting tips that should save you some time.  My scenario will explain how you can send data through the Azure Event Hubs and how you can apply the standing queries of Azure Stream Analytics to get some added value out of the incoming stream.

In my scenarios and this post, I will solely focus on Azure Event Hubs as the input for my jobs.  You can use blobs as well, but I believe Event Hubs is the closest match with the integration world.

Typical scenarios for Complex Event Processing

I tried to describe some of the typical scenarios where Stream Analytics can be used. At a high level, you can say that any scenario that needs to get intelligence out of incoming streams by correlating events should be a good fit for ASA (the abbreviation of Azure Stream Analytics).

Noise reduction

When talking with customers about the frequency of incoming events (for example sensor data), we often see a tendency to send data at a higher frequency than is mostly needed. While this has some advantages (better accuracy), it is not always necessary. The higher the frequency of ingested events, the higher the ingestion costs (potential extra data transfer costs on the device side and a higher number of incoming events). Typically the ingestion costs are not that high (when using Azure Event Hubs, for example, you pay around 2 cents per million events), but the long-term storage cost is always higher (as the data typically grows each month and you pay for the same data every month again).

That's why it is good to reduce the incoming high-frequency stream into an aggregated (averages/sums) output stream towards long-term storage. You still benefit from the higher accuracy, but you save on the expensive cold storage.

Enrich events with reference data

It is possible to join the incoming data with reference tables/data. This allows you to build in decision logic. For example, you could have a table with all the devices and their region or customer, and use that extra information to join with the telemetry information (using the DeviceId) and aggregate per region, customer, etc.

Leverage multiple outputs (pub-sub style)

When using Azure Event Hubs, it is perfectly possible to create multiple ASA jobs that share the same Event Hub as their input. This allows you to get multiple results, calculations or events out of the incoming stream. For example, one job could extract data for archiving, while another job just looks for anomalies or event detection.

Detect special events, anomalies or trends

The canonical example used in complex event processing is the toll booth or traffic speed scenario. Every car passing a toll booth or traffic camera results in a new event. But the added value is in detecting special cases in the stream of events.

  • Maybe an event needs to be triggered automatically if a car's travel time from one booth to another is shorter than the allowed time, because it is speeding. This event can then be processed by the backend.
  • The same stream can also be used to detect traffic jams, if the average travel time between booths increases in a window of time. Here it is important to take average values, in order to filter out the extremes of cars that are very slow, or cars that speed.
  • And at the same time, license plates can be matched against reference data of suspected cars: cars that are sought because they are stolen, or because they are driven by crime suspects.

Getting started with Azure Stream Analytics

We will start with a very basic scenario, where we will send events to an Event Hub and we will execute standing queries against that stream, using ASA. This is a quick step-by-step guide to create an Analysis job.

Sending data to Azure Event Hubs

In order to create an event hub, there are a lot of blog posts and samples that explain this.

The sending of data is probably the most important part, as it defines the data structure that will be processed by the ASA job.  Therefore, it is important that the event that is sent to the Event Hub is using supported structures.  At the end of this post, I have some troubleshooting tips and some limitations that I encountered.

The following code snippet sends my telemetry-event to the Event Hub, serializing it as JSON.

As you can see, we define a TelemetryInfo object that we send multiple times (in parallel threads) to the Event Hub. We just have 10 devices here to simulate data entry. These JSON-serialized objects get UTF8-encoded and are sent in the EventData object of the Event Hub SDK.
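The essence of that sender can be sketched as follows. This is an illustrative TypeScript sketch of the JSON-serialize step only (the original snippet is C#, and the TelemetryInfo field names follow the ones listed later in this post); the Event Hub SDK then UTF8-encodes each payload into an EventData object:

```typescript
// Sketch of the sender's core step: build one telemetry event per simulated
// device and JSON-serialize it. The JSON structure here is exactly what the
// ASA query will later see as fields.
interface TelemetryInfo {
  DeviceId: string;
  Type: string;
  MainValue: number;
  ErrorReading: number; // 0/1 rather than a boolean: see the troubleshooting section
}

function toEventPayload(info: TelemetryInfo): string {
  return JSON.stringify(info);
}

// Simulate 10 devices, as in the described sample.
const payloads: string[] = [];
for (let device = 0; device < 10; device++) {
  payloads.push(
    toEventPayload({
      DeviceId: `device-${device}`,
      Type: 'temperature',
      MainValue: 20 + Math.random() * 5,
      ErrorReading: 0,
    })
  );
}
```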

Creating the Analysis job

In order to create the job, you have to make sure the subscription is enabled for Azure Stream Analytics. To do this, log on with the subscription owner and click on preview features to enable ASA.

Log on to the Azure portal and click the ASA icon. There you can create a new job. The job has to have a name, a region and a monitoring account (there is one monitoring account for every region).

Once this is done, it is required to specify one or more inputs, the actual query and the output to which the job has to write.

Create the inputs

Select the inputs tab in your job.

Create a new input and define it as a Data Stream (constant inflow of data). Select Event Hub in the second window of the wizard, as that's the input we want to work with. In the third part of the wizard, we have to select the event hub we are sending our events to. Ideally this is in the same subscription.

In the last part, we have to specify the encoding and serialization. We will use JSON with UTF8 encoding.

Create the output

One job can have only one output at this moment, which is unfortunate. If you would also love to see this changed, don't hesitate to vote on my suggestion on UserVoice.

In this step, we will create a blob output that takes our stream events and outputs it in a CSV file. 

For that, we click on the Output tab and create a new one.  To start, we select Blob storage.  As you can see, we can also output our data to SQL Azure, or to another Event Hub.  In the wizard, we have to provide our blob settings and the serialization/encoding, as you can see in the following screenshots.

Creating the query

The scenario we are now implementing is really the "hello world" example of Stream Analytics. We just take all the incoming data from the event hub and output it to blob storage.

Therefore, we just create the following query: SELECT * FROM TelemetryStream, where TelemetryStream is the name of our input.

Starting and testing the job

Now that we have everything configured, we can start the job. If everything goes well, we can run the producer (see above) and we should see the data getting written to our blob container, based on the properties we specified. This allows us to easily get data and run a basic test from which we can start refining our query to the level we need. If you have issues, please refer to the troubleshooting section of this post.

Testing queries

The query window also allows users to test their queries against sample files, which is a big improvement introduced in the past few months. The only challenge is getting a good sample file. For that, I mostly change the output of my job to JSON/UTF8 and apply the SELECT * query again, resulting in a perfectly good test file. I then take that JSON file and upload it in the test wizard. If the data is valid, you can easily test the query and see its output at the bottom of the query screen.

Writing Analytics queries

As you can see in the source code (and in the output on our blob), the data we are sending contains the following fields:

DeviceId, ErrorReading, MainValue, Type, EventProcessedUtcTime, PartitionId, EventEnqueuedUtcTime.  The last 3 fields are added by ASA and can be used in the query as well.

Now we will dive a little deeper into the query language of ASA, which really feels like plain old SQL.

Time-based aggregation

One of the most common aggregation methods needed (explained in the noise reduction scenario) is the grouping of events by a window of time. ASA provides 3 types of windows, of which the tumbling window (fixed time intervals) is the most common. The details of these windows are well explained in the documentation. The other windowing methods are hopping (overlapping time windows) and sliding (flexible) windows.

If we now want to aggregate the data from our sample, we can easily update the query to aggregate the sensor data per minute interval, by writing the following query:
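As a sketch (field names taken from the event structure described above; adjust the interval to your needs):

```sql
SELECT
    DeviceId,
    AVG(MainValue) AS AverageValue,
    COUNT(*) AS EventCount
FROM TelemetryStream
GROUP BY DeviceId, TumblingWindow(minute, 1)
```

TumblingWindow(minute, 1) groups the events into fixed, non-overlapping one-minute intervals; swapping it for HoppingWindow or SlidingWindow changes the windowing behavior without touching the rest of the query.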

The documentation outlines good examples and other functions.

Using reference data

It is also possible to add reference data to your query, so that this data can be used to enrich the query or to apply extra filters or decisions.  To do so, add a new input of type 'Reference data' to the query and browse to an existing blob (I always use CSV files for this).  In this sample, I am uploading the SensorList.csv that you can view on our github account.

Now you can use this data to make SQL-like joins and enrich your query.  I am using the SensorList.csv that is part of the Gist I created for this blog:

This allows me to write the following join query, which adds the name of the sensor to the output on my blob.
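As a sketch, assuming the reference data input is named SensorReference and the CSV contains DeviceId and SensorName columns (names hypothetical; adapt them to your own file):

```sql
SELECT
    T.DeviceId,
    R.SensorName,
    T.MainValue,
    T.Type
FROM TelemetryStream T
JOIN SensorReference R
    ON T.DeviceId = R.DeviceId
```

Joins against reference data inputs do not need the time-bound (DATEDIFF) condition that stream-to-stream joins require, since the reference blob is treated as static data.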

Troubleshooting and diagnostics

I ran into some issues in the past days, trying to get things to work.  I hope this post helps others avoid the same issues.  I also want to thank the ASA team for the help they gave in fixing them.

For logging information, you should fall back on the Operation Logs of Azure, which you can reach through the dashboard of your ASA job.  But (it's a preview, don't forget that!) there were some errors that were not visible in the logs and that required support from the team.  I'll try to list some of the limitations I encountered here.

Data Conversion Errors

When you are getting Data Conversion errors in the log, chances are that you are sending an unsupported data type in your event.  I had sent booleans and arrays in my events and they were causing issues.

I created the ToJson() method in my sender (see above), where I am JSON-serializing a dynamic object with only the allowed types.  This still allows my local application to work with all properties; I just make sure to remove or change the incorrect data types before sending them (by using the dynamic object).

However, the intention is that the values of unsupported fields will be set to NULL in an update that is expected shortly.  That way, you can still process the events, even if they contain invalid data types.

There is also the possibility to declare the data structure and data types in the query.  That way, you describe the data types of the different fields (without creating anything specific).

The following query is an example of that:
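As a sketch, using our telemetry fields (supported types include bigint, float, nvarchar(max) and datetime):

```sql
CREATE TABLE TelemetryStream (
    DeviceId nvarchar(max),
    ErrorReading nvarchar(max),
    MainValue float,
    Type nvarchar(max)
);
```

The CREATE TABLE statement does not actually create anything; it only tells ASA which types to expect for each field of the TelemetryStream input.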

The details of the issue and the resolution can be found on the MSDN forum.

Next blog post

In a next blog post, I'm planning to explore the following things:

  • Output data to SQL Azure
  • ASA - Event Hub specifics
  • Dynamic reference data
  • Combining multiple input streams


Categories: Azure
written by: Sam Vanhoutte

Posted on Monday, June 8, 2015 8:31 AM

Filiep Maes by Filiep Maes

Toon Vanhoutte by Toon Vanhoutte

Pieter Vandenheede by Pieter Vandenheede

On June 4th, BTUG organized its first "Integration Day". At Codit we were happy to be present, and in this blog post you will find a recap of the sessions presented that day.

The BTUG Integration Day took place in the Moonbeat studio in Mechelen, a nice venue for the roughly 35 eager participants. There were 9 sessions in total, and the day kicked off with a nice keynote session from Jan Tielens.


Speaker: Jan Tielens

Jan started off with a small recap of where integration comes from and where it is headed: from monolithic designs to API apps, web apps and logic apps. He proceeded with a demo on provisioning API apps and logic apps, showing how to retrieve tweets with a certain #hashtag by using a recurrence trigger and the new Logic Apps.

The demo didn't go exactly as planned due to the network acting up, but it involved retrieving tweets from Twitter and sending them to a local Raspberry Pi printer. Later that day, it seemed to have worked just fine:

Jan continued his keynote by talking about the capabilities of API apps, the Swagger integration and the concept of hybrid integration: integration between different cloud services, or the combination of cloud and on-premises.


BizTalk Server Deep Dive Tips & Tricks for Developers and Admins

Speaker: Sandro Pereira


After the keynote, Sandro took the stage to have a session on BizTalk tips and tricks for both BizTalk administrators and developers. 

The first part was focused on BizTalk administration.

The most important tips:

  • Configure the default BizTalk backup job, to include custom databases
  • Take advantage of PowerShell to automate the tracking configuration settings
  • Automatically clean up custom message archiving folders

The second part was more developer oriented. Interesting tricks:

  • Use default or custom pipeline components in a generic way
  • Request/Response patterns are supported in a messaging only scenario
  • Via a registry change, the Business Rules Engine supports consuming static methods

It's good to see that most of the tips and tricks are already part of the Codit BizTalk Best Practices, our technical 'bible' to ensure quality within BizTalk integration projects!


Demystifying Logic Apps

Speaker: Ashwin Prabhu

Ashwin started by giving an overview of the history of Logic Apps; even though it hasn't been around for long, there are some interesting key points.

Microsoft first announced Windows Azure in 2008; in 2011 Service Bus was introduced, which was the first integration (baby) step.

In 2013 BizTalk Services was announced, but after some architectural changes it was reworked to fit into the new ecosystem (App Services). The main reason for this is that Microsoft wants to provide a democratic ecosystem in which we, as developers, can (re)use functionality from each other (e.g. mapping functionality).
These different building blocks (Logic Apps, API apps, Web Apps, Mobile Apps) provide us an easy way to use different functionality without facing a steep learning curve.

During the demo, Ashwin created a logic app with two different connectors, a SQL connector and a File connector: SQL Server was queried and some data was picked up and sent to the file adapter.

What can we expect for Logic Apps in the future?

  • Integration patterns (convoys, long-running processes, auto delivery)
  • Large messaging handling
  • Azure services on premise.
  • Build-in designer for Visual Studio.
  • Bug fixes; what's important is that you provide your feedback, as Microsoft is ready to listen! (Tip: if you are using connectors at this moment and you don't want to be bothered with updates, you can disable auto-update in the portal.)
  • Better error messages

During the question round, Ashwin got the question whether Logic Apps were created to take over from BizTalk Server. BizTalk Server on-premises is here to stay, but things are moving! For example, a start-up may be better served with cloud services, so it can focus on its functionality instead of infrastructure.

Microsoft's purpose is to provide an alternative in the cloud, but both worlds can exist next to each other.


5 Advanced techniques in BizTalk360 to Solve Operational Problems

Speaker: Saravana Kumar

Just before lunch, Saravana took the lead and presented how BizTalk 360 can help you solve daily operational problems.

BizTalk 360 has 50 features focused on operations & monitoring.

Saravana's session was hands-on, containing 5 different interesting demos.

1.  Visualize your message flow

  • However complex they are, you can visualize the BizTalk flows with zero code changes.
  • The admin console is difficult to understand and very technical.

2. Automate your operations

  • A lot of time is lost daily on monotonous tasks.
  • Data monitoring / MessageBox monitoring (In our opinion the BizTalk flows should handle these tasks as much as possible leaving no suspended messages/manual intervention).

3. Import/Export, Auto correct monitoring configuration

  • Import/Export: moving monitoring configuration from Test to Production.
  • Autocorrect: receive location down, gets automatically started by BizTalk 360.

4. Manage any internet connected BizTalk environment remotely

  • In a secure way
  • No complex VPN connection necessary
  • Handy for operations teams that need to be able to connect to the BizTalk environment 24/7: BizTalk 360 uses Service Bus Relay

5. Understand Throttling

  • Understanding throttling is a complex process and requires a BizTalk expert to understand the problem.
  • BizTalk 360 can be used to easily understand what the real problem is on the environment.

Next to BizTalk 360 there are different monitoring tools on the market (Codit Integration Dashboard, System Center Operation Manager, AIMS, BizTalk Health Monitor) each having their advantages.


BAM Experiences in Large Scale Deployments

Speaker: Marius W Storsten

AIMS Innovation has, up until now, used BAM as a core component of their BizTalk monitoring solution. Marius shared AIMS' experiences on using BAM in a monitoring setup: how it works, the good and the bad, performance, bugs, tips, tricks and challenges.

Marius tried to make it an interactive session, which is very nice, but I don't think he counted on a Belgian audience :)
Luckily some Dutch guys were quicker to answer.

It is AIMS' experience that the choice for BAM has not been the best one, and Marius showed us this by referencing some of their experiences and discoveries around BAM. One of them was a dependency between bttdeploy.exe and the Tracking Profile Editor (TPE), meaning that bttdeploy.exe depended on TPE and not the other way around.

Marius concluded with some recommendations on using BAM:

There is also a small, but nice blog post up on their website about this as well:


Governance and API Management for BizTalk Server- what’s the value and how to implement it?

Speaker: Andrew Slivker

In a world that consists of services and APIs that are business-critical for companies, we need governance and management of them.

What is governance about?

  • Manage services
  • Share metadata (SOAP, Swagger, ... )
  • Document services
  • Publish & register services
  • ... 

The management of the services consists of security (authentication & authorization), monitoring, SLAs, etc.

Sentinet manages SOA and API services and applications deployed on-premises, in the cloud, or in hybrid environments.  To give us the possibility to govern and manage our services, Sentinet uses the gateway concept: publish internal services to partners, provide the possibility to use multiple protocols, add monitoring, and so on, all without changing the internal functionality of your services.

During the demo, Andrew showcased the Nevatech stack and the Sentinet solution. In the demo, an internal net.tcp service hosted by BizTalk was consumed by clients through a virtual service hosted by Sentinet, via both SOAP and REST, without any development.


JSON and REST in BizTalk

Speaker: Steef Jan Wiggers


Steef Jan brought us a session about JSON and REST.
In a new hybrid world, integration is key and will become more important than ever. There are several systems, like SAP, that are not going to the cloud in the near future.  BizTalk Server 2013 R2 offers capabilities to fulfill the demand for a hybrid type of integration solution, using the JSON encoder/decoder.

The session was mostly based on DEMO's where we also connected with the API of

You can find this demo on the technet wikis as well:


Azure Event Hubs, StreamAnalytics and Power BI

Speaker: Sam Vanhoutte

In his IoT demo, Sam showed how to use Azure Event Hubs, Stream Analytics and Power BI.

There are a lot of similarities between BizTalk integration and IoT: it is all about connecting data.

A typical IoT event stream looks like:

  • Data generators (sensors)
  • Field gateways: used as a bridge between the cloud and the data generators
  • Event hubs: Used to collect data on a large scale
  • Stream Analytics: digest the data
  • Data analytics: Power BI

Event Hubs is a highly scalable publish-subscribe event ingestor that can take in millions of events per second, so that you can process and analyze the massive amounts of data produced by your connected devices and applications.  In his demo, Sam showed how to set up an event hub and how it works using throughput units.

After collecting data, you can use Stream Analytics for real-time analytics. Stream Analytics provides out-of-the-box integration with Event Hubs to ingest millions of events per second. It is based on SQL syntax. Sam gave a demo of how Stream Analytics works.

Power BI is about visualizing data for the end user instead of using tables; a (free) Power BI dashboard is available. Currently, it has limited capabilities:

  • Data collection
  • Support for Power Maps
  • Pin reports, relationship between different views

Sam ended with an overall demo about traffic control. His demo generated speed information, sent the data to the event hub, used Stream Analytics to sort the data and finally showed the information in Power BI.



We had a blast at the Integration Day and hope to be present again next year! A big thank you to the organization, the speakers and the sponsors of this event. We (as Codit) are proud to be a part of it!