
Codit Blog

Posted on Friday, October 9, 2015 4:20 PM

by Tom Kerkhove

Last week Microsoft made a lot of announcements regarding Microsoft Azure during AzureCon.

In this blog post I'll discuss a few of the ones that I personally found interesting.

Azure has changed a lot since it was released and keeping track of all the changes is (almost) impossible.

In one of the AzureCon sessions I found a nice overview of the Azure Landscape summarizing almost all of the services that are available at the moment.

Unfortunately this landscape is already out of date and is missing some of the services, such as Azure Data Lake (private preview), IoT Hub (public preview), IoT Suite (public preview) and others. That said, it still gives a nice overview of what the platform is offering as of today.

Today I'll walk you through some of the announcements made at AzureCon last week and what you can expect from the services.

Azure IoT Hub

One of the most important aspects of IoT, if not THE most important, is security. We need to know who is sending data to us and where they are physically located, and we need to be able to revoke access, push software updates, and more, all at scale. Building this is hard and requires a big investment.

Because of this, Microsoft released Azure IoT Hub in public preview, a managed service that enables secure device-to-cloud ingestion & cloud-to-device messaging between millions of devices and the cloud. By using bi-directional communication you are able to send commands from the cloud to your devices.

With IoT Hub comes a device registry, allowing you to store metadata for each device and use a per-device authentication model. When you suspect that a device has gone rogue, you simply revoke it in the IoT Hub.
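
To give you an idea, here is a minimal sketch of what registering and revoking a device could look like with the Azure IoT service SDK for .NET (Microsoft.Azure.Devices). The connection string and device id are placeholders, not values from a real hub.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class DeviceRegistrySample
{
    // Placeholder connection string with registry read/write permissions.
    const string ConnectionString =
        "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>";

    public static async Task RegisterAndRevokeAsync()
    {
        var registryManager = RegistryManager.CreateFromConnectionString(ConnectionString);

        // Register a device in the registry; IoT Hub generates its symmetric keys.
        Device device = await registryManager.AddDeviceAsync(new Device("my-device-01"));
        Console.WriteLine($"Device key: {device.Authentication.SymmetricKey.PrimaryKey}");

        // Suspect the device has gone rogue? Disable it so it can no longer connect.
        device.Status = DeviceStatus.Disabled;
        await registryManager.UpdateDeviceAsync(device);
    }
}
```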

Devices can communicate with IoT Hub by using HTTP 1.1 or AMQP 1.0. For those who are new to AMQP, I recommend watching Clemens Vasters' "AMQP 1.0" video series.
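
On the device side, sending telemetry over AMQP could look roughly like this with the device SDK (Microsoft.Azure.Devices.Client); the connection string and payload are again just illustrative.

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class TelemetrySender
{
    // Placeholder per-device connection string, as stored in the device registry.
    const string DeviceConnectionString =
        "HostName=<your-hub>.azure-devices.net;DeviceId=my-device-01;SharedAccessKey=<device-key>";

    public static async Task SendTemperatureAsync(double temperature)
    {
        // Connect over AMQP 1.0; TransportType.Http1 would work as well.
        var deviceClient = DeviceClient.CreateFromConnectionString(DeviceConnectionString, TransportType.Amqp);

        // Send a simple JSON payload as a device-to-cloud message.
        var payload = $"{{\"temperature\": {temperature}}}";
        await deviceClient.SendEventAsync(new Message(Encoding.UTF8.GetBytes(payload)));
    }
}
```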

Interested in more?

  • Learn how you can manage your devices using IoT Hub & IoT Suite in this article
  • Learn how to connect your IoT devices with the Azure IoT libraries here
  • Read Clemens Vasters' whitepaper on Service Assisted Communication
  • Learn how IoT Hub is different from Event Hubs in this article
  • Learn how you can support additional protocols in this article
  • Get an overview of Azure IoT Hub here

Azure IoT Suite

Next to Azure IoT Hub, Microsoft also released Azure IoT Suite in public preview. This suite is an abstraction layer on top of existing services such as IoT Hub, Stream Analytics, DocumentDB, Event Hubs and others, allowing you to focus on your scenario rather than the implementation.

Based on the preconfigured solution you choose, the service will generate all the artifacts in a matter of minutes and you're ready to go. Once this is completed, we can change the configuration, scaling, etc. as we've always done through the Azure portal.
An example of such a preconfigured solution is remote monitoring of your devices in the field.

The suite comes with a management portal with a dashboard that gives you an overview of the telemetry history per device, a map with your devices, a history of alerts and some gauges showing the humidity. Of course, this is different for each solution you choose to provision.

There is also integrated device management (on top of IoT Hub) but personally I'm glad to see built-in support for rules & actions. This allows us to add business logic to the solution without writing any code!

Microsoft Azure Certified for IoT

As part of this announcement Microsoft also announced the Microsoft Azure Certified for IoT program for devices that are tested and verified to work with the Azure IoT Suite.

We believe that IoT Suite is a very good way to quickly generate a reference implementation that can then be customized for the customer. This makes it ideal for prototyping, demos and standard solutions.

Another great thing to note is that all the preconfigured solutions from Microsoft are available on GitHub, allowing you to customize whatever you want, the management portal for example. You can find the Remote Monitoring example here.

To take it a step further, it would be great to have the ability to save our reference architecture as a template and re-provision it later on or share it with our peers.

You can now get started with Azure IoT Suite and provision a solution for yourself here.

Interested in more?

  • Watch an introductory video on Azure IoT Suite here
  • Read more about the Microsoft Azure Certified for IoT program here

Azure Container Service

Almost one year ago Docker & Microsoft announced their partnership to drive the adoption of distributed applications with containerisation. Since then Microsoft has been working on a Docker Engine for Windows Server and contributing to the Docker ecosystem, and containerisation has become the next big thing. Works on your machine? Ship your machine!

During AzureCon Microsoft announced the Azure Container Service, a service to easily create & manage clusters of hosts running Docker, Apache Mesos, Mesosphere Marathon & Docker Swarm. For this Microsoft partnered with Mesosphere, a company building on top of the Apache Mesos project.

While the service is still in an early stage, you can already deploy a quickstart ARM template that creates a Mesos cluster with Marathon & Swarm for you. Later this year the Azure Container Service itself will become available, which will make this even easier. The service will be in charge of creating and managing the Azure infrastructure, while Docker will stay in charge of running the app code.

Interested in more?

  • Learn more about Docker, Azure Container Service & Windows Server containers here
  • Read more on Mesosphere and how Mesos powers the service here
  • Follow the Docker & Microsoft partnership here
  • Learn more about Mesosphere here
  • Read more on containers here
  • Read the announcement here

Azure Compute Pre-Purchase Plan

As of the 1st of December you will be able to pre-purchase your compute power with the Compute Pre-Purchase Plan. This allows you to reserve capacity for predictable workloads, such as development VMs during business hours, and save up to 63% compared to what you pay today! This will be available in every region.

From my understanding this is an offering similar to AWS EC2 Reserved Instances; here's what Amazon is offering.

Azure Security Center

Over the past couple of months we've seen new services to increase the security of your solutions in Azure, one example being Azure Key Vault. Haven't heard about it? Read more about it here!

During AzureCon Microsoft added an additional service for building more secure solutions: Azure Security Center. Security Center provides a unified dashboard with security information on top of your existing Azure resources. The goal is to gain insight into which resources are vulnerable and to detect events that previously went unnoticed. Next to that, the service also heavily analyses all the data and uses Machine Learning to improve the detection system.

An example of this is somebody trying to brute-force your VMs: Azure Security Center then tries to determine where that user is located and creates awareness around it.

Based on policies you define, the service will also give recommendations to improve the security of your resources. The example given was a policy stating that every Web App should have a firewall configured; this allows the service to detect Web Apps without a firewall and recommend a fix.

While the service isn't publicly available yet, you can already request an invite here. Public preview is scheduled for later this year.

Interested in more?

  • Read more on what the Azure Security Center offers here
  • Learn more about the Azure Security Center in this video
  • Learn more about security & compliances in Azure here
  • Learn more about Encryption and key management with Azure Key Vault in this video

New SAS capabilities for Azure Storage

After adding support for client-side encryption with Azure Key Vault, the Azure Storage team has extended their security capabilities with the addition of three features for Shared Access Signatures:

  • Account-level SAS tokens - You can now create SAS tokens at the account level as an alternative to storage account keys. This allows you to give a person or application access to manage your account without exposing your account keys. Currently only Blob & File access is supported; Queues & Tables are coming in the next two months (see the sketch below this list).
  • IP Restrictions - Specify a single IP address or a range of IP addresses from which requests with your SAS token are allowed; all others will be blocked.
  • Protocol - Restrict account & service-level SAS tokens to HTTPS only.
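
As a rough illustration, creating such an account-level SAS token with an IP restriction and HTTPS-only access could look like this with the .NET storage client library; the permissions, IP range and expiry below are arbitrary examples, not recommendations.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;

class AccountSasSample
{
    public static string CreateAccountSas(CloudStorageAccount account)
    {
        var policy = new SharedAccessAccountPolicy
        {
            // Blob & File only, matching the services currently supported for account SAS.
            Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
            ResourceTypes = SharedAccessAccountResourceTypes.Container | SharedAccessAccountResourceTypes.Object,
            Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.Write,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(8),

            // Only accept requests coming from this (arbitrary) IP range...
            IPAddressOrRange = new IPAddressOrRange("203.0.113.0", "203.0.113.255"),
            // ...and only over HTTPS.
            Protocols = SharedAccessProtocol.HttpsOnly
        };

        return account.GetSharedAccessSignature(policy);
    }
}
```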

For more information on the new SAS capabilities or other Azure Storage announcements, read the announcement or read about using Shared Access Signatures (SAS) for Azure Storage here.

General availability

During AzureCon several services were announced to become generally available in the near future.
Here are some of them:

  • Azure HDInsight on Linux
  • App Service Environment
  • Azure File Storage
  • Mobile Engagement
  • Azure Backup

They also announced that the regions Central India (Pune), South India (Chennai) & West India (Mumbai) are now available for everyone. Here is the full list of all the supported locations.

Conclusion

There were a ton of announcements, of which these were only a few. If you want to read all of them, I suggest you go to the official Microsoft Azure blog.

All the sessions of AzureCon are available on-demand on Channel9.

Thanks for reading,

Tom.

Categories: Community
Tags: Containers, IoT
written by: Tom Kerkhove

Posted on Tuesday, May 12, 2015 3:44 PM

by Maxim Braekman

by Sam Neirinck

by Tom Kerkhove

The second edition of Techorama, hosted at Utopolis Mechelen, provided a large range of interesting sessions covering all kinds of topics. Read more about some of the sessions from the first day in this post.

The second edition of Techorama again promises to be an interesting event, grouping experts in all kinds of technologies to share their knowledge and experiences. Split over two days, there are over 70 sessions. A short summary of some of the sessions from this first day can be found below.

Keynote by Hadi Hariri

The honor of kicking off the entire event went to Hadi Hariri, who gave an inspiring presentation about the constant chase of developers and tech companies for the mythical "silver bullet". In other words, developers keep looking for the ultimate framework that allows them to build any kind of great application. Because of this constant chase, new frameworks keep popping up and people keep moving from framework to framework, only to discover that this brand-new technology is not perfect either. Since every framework will have its limitations, the goal of finding this silver bullet remains unreachable.

But with any type of project, the most important idea to keep in mind is to think before you act. Don't just start developing your apps using the newest technology, but consider what would be the best choice for your specific situation.

Keeping in mind that several frameworks, technologies and tools are the subject of sessions later on, this keynote started the event off in a very fitting way.

SELECT VALUE FROM DATASTREAM by Alan Smith

For those who were present at the first edition of Techorama, here is a first familiar face: Alan Smith is back, this time giving insight into the usage of Azure Stream Analytics by collecting the telemetry data from, yes, the racing game he loves using in any demo :)

By sending all of the telemetry data to an Event Hub, Alan was able to process this data through Stream Analytics to get the average speed, gear, best lap time and so on, but also to figure out whether anyone was cheating. Stream Analytics makes it possible to query the data in any sort of way, allowing you to look for strange or abnormal values and therefore find cheaters.

As Sam Vanhoutte already gave an extensive description of Stream Analytics in this blog post, I will not be diving into this subject, but the demo given by Alan made sure that all of the possibilities were very well illustrated.

All in all, yet again an interesting and entertaining presentation.

Messaging patterns by Mike Wood

After his talk last year I was looking forward to seeing Mike Wood in action again! He gave a good session on using a messaging approach in your project, what problems it can fix for you, but also what the downsides are.

During the session Mike walked us through some of the concepts & patterns used in messaging.
Here are some examples of those discussed:

  • Handle your poison messages; you don't want to waste resources trying to process these messages over and over and block everything else. It's a best practice to send them to a separate queue, i.e. a dead-letter queue, so you can keep processing the other messages.
  • Support Test messages when possible. This allows you to test a specific behavior on a production system while not changing the live-data.
  • Trace your messages so you can visualize the flow of a message. This can help you determine what happens to your messages and where the culprit is when it was lost.
  • Don't lose your messages! By tracing your messages you can follow the trail a message takes. When using Service Bus Topics, it's possible that the topic swallows your message if there is no matching subscription. One option to handle this is to create a catch-all subscription (see the sketch after this list).
  • Provide functional transparency in a way that you know what the average processing time is for a specific action, so you can pinpoint issues and provide alerting on this.
  • Use idempotent processing or provide compensating logic as an alternative. If your processing is not idempotent, you should provide an alternative flow that allows you to roll back the state.
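
To make the catch-all and poison-message ideas a bit more tangible, here is a small Service Bus sketch. The topic and subscription names are hypothetical and the actual processing is left out; it illustrates the pattern rather than production code.

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class CatchAllSample
{
    public static void EnsureCatchAllSubscription(string connectionString)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // A subscription with a "1=1" filter matches every message sent to the topic,
        // so nothing is silently swallowed when no other subscription matches.
        if (!namespaceManager.SubscriptionExists("orders", "catch-all"))
        {
            namespaceManager.CreateSubscription("orders", "catch-all", new SqlFilter("1=1"));
        }
    }

    public static void ProcessMessages(string connectionString)
    {
        var client = SubscriptionClient.CreateFromConnectionString(connectionString, "orders", "catch-all");
        var options = new OnMessageOptions { AutoComplete = false };

        client.OnMessage(message =>
        {
            try
            {
                // ... process the message ...
                message.Complete();
            }
            catch (Exception ex)
            {
                // Poison message: move it to the dead-letter queue instead of blocking the rest.
                message.DeadLetter("ProcessingFailed", ex.Message);
            }
        }, options);
    }
}
```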

In general, messaging can help you improve scalability & flexibility by decoupling your solution, but this comes with the downside of increased complexity. Also, processing in sequence or using ordering is not easy.

Although I have some experience with messaging, it was still a nice session where he gave some additional tips on how you can trace your messages better or pinpoint issues by using average processing times as a reference.

Great speaker & great content!

Docker and why it is relevant for developers by Rainer Stropek

One of the benefits of going to a conference is to learn about technologies you would otherwise not pick up easily. It's also an opportunity to learn about speakers unknown to you. Since I didn't know Rainer and only had a very (very) high-level knowledge of Docker, this seemed like a good session.

Docker is a platform that facilitates building, shipping and running your application, anywhere. It uses a concept called container virtualization. This sits a level above virtual machines: the container reuses the host operating system. The benefit is that deployment using Docker is much faster than spinning up a new VM (it can take as little as a second).

What you deploy with Docker is a Docker image. An image is not a monolithic entity. You can build upon existing images (which can be found on Docker Hub), and only the modifications you make end up in your image; the base image is referenced in the Dockerfile.

Once you set up your Docker image, you can easily deploy it to another environment and be sure it's set up identically to your development machine.

All of this and more was covered in Rainer’s session. At the end an ASP.NET 5 application was deployed with Docker, on an Ubuntu machine.

What about Windows, one might ask? Docker uses Linux-specific kernel features, which means you'd need to run Docker inside a lightweight Linux virtual machine.
However, with the recent announcements of Windows Server Containers and Hyper-V Containers, I think it'll be very interesting to see how Microsoft incorporates the container model in both their cloud and on-prem solutions.

The slides of this excellent talk can already be found on his blog.

That was it for day one, stay tuned for more Techorama action tomorrow!!

Thanks for reading,

Tom, Sam & Maxim

Categories: Community
Tags: Containers

Posted on Monday, March 12, 2018 6:12 PM

by Tom Kerkhove

by Sam Vanhoutte

by Jan Tilburgh

Recently a few of our colleagues participated in The Barrel Challenge, a 3,500-kilometer relay tour across Europe, riding in a 30-year-old connected car.

We wouldn't be Codit if we didn't integrate the car with some sensors to track its data in real time. The sensors installed in the classic Ford Escort collected data on noise, temperature, speed, humidity and location. This was based on the telemetry that the device in the car was sending to Microsoft Azure via our Nebulus IoT Gateway.

Missed everything about the rally? Don't worry - Find more info here or read about it on the Microsoft Internet of Things blog.

Connecting our car to the cloud

The car's sensors were connected, using the GrovePi starter kit, to a Raspberry Pi 3 running Windows 10 IoT Core. This starter kit contains a set of sensors and actuators that can easily be read, leveraging the .NET SDK that is provided by the Grove team.

In order to connect the device with the Azure backend, Nebulus™ IoT Gateway was used. This is our own software gateway that can be centrally managed, configured and monitored. The gateway is built on .NET Core and can run in Azure IoT Edge.

The GPS signals were read from the serial port, while the other sensors (temperature, sound, humidity…) were read using the GPIO pins through the GrovePi SDK.

The gateway was configured with buffering (as connectivity was not always guaranteed in tunnels or rural areas), so that all data was transmitted on reconnect.
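
Nebulus IoT Gateway handles this for us, but the idea behind buffering can be sketched in a few lines of C#: queue messages locally and drain the queue whenever connectivity is back. The class below is a simplified, hypothetical illustration, not the actual gateway code.

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class BufferingSender
{
    private readonly DeviceClient _client;
    private readonly ConcurrentQueue<string> _buffer = new ConcurrentQueue<string>();

    public BufferingSender(DeviceClient client)
    {
        _client = client;
    }

    public async Task SendAsync(string telemetryJson)
    {
        // Always buffer first, then try to drain the queue in order.
        _buffer.Enqueue(telemetryJson);

        while (_buffer.TryPeek(out var next))
        {
            try
            {
                await _client.SendEventAsync(new Message(Encoding.UTF8.GetBytes(next)));
                _buffer.TryDequeue(out _);
            }
            catch (Exception)
            {
                // No connectivity (tunnel, rural area, ...): keep everything buffered and retry on the next call.
                break;
            }
        }
    }
}
```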

Connectivity was provided over 4G, using a Mi-Fi device.

Real-time route & position

The most important part of the whole project: having a real-time map to see the current position of the car and show sensor data.

There aren't many options to handle this: you can go with low-level WebSockets or use something like socket.io, but we chose SignalR given we are most familiar with the Microsoft stack.

The setup is fairly easy: you add the NuGet packages, set up a hub class and implement the client library. We decided to go for the latest version, which runs on .NET Core. The best thing about this new version is that there's a TypeScript library, and yes, it does work with Angular 5! To connect SignalR to our application we wrapped it in a service which we named "TrackerService".
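
Server-side, the hub itself can stay very small. Below is a minimal sketch of what such a hub could look like in ASP.NET Core SignalR; the hub, method and event names are assumptions for illustration, not necessarily the ones we used.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients (our Angular TrackerService) connect to this hub and listen for "positionUpdated".
public class TrackerHub : Hub
{
    public Task BroadcastPosition(double latitude, double longitude, double speed)
    {
        // Push the latest position and sensor reading to every connected client.
        return Clients.All.SendAsync("positionUpdated", latitude, longitude, speed);
    }
}
```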

All this data also had to be managed on the client, so this part is done with NgRx, a Redux clone for Angular with RxJS support! What this means is that the components don't get data directly from the TrackerService, nor does the service push any data to the components. Instead, the TrackerService just dispatches an action with the payload received from SignalR; the action is then handled by a reducer, which updates the state. The components subscribe to the state and receive all the changes. The advantage of this is that you can switch to `OnPush` change detection in all of the components, which results in a performance boost.

The map

For the map we initially looked at Azure Location Based Services, but it didn't support the features we needed, such as custom markers, at least not when we started the project. This made us choose Leaflet, which is free and has a lot of interesting features. First of all, it was very easy to show the total route by just passing an array of GPS coordinates into a polyline function. The best part of Leaflet was that it was super easy to calculate the total distance of a route: just reduce the GPS array and call the distanceTo method on the previous and current coordinates, and you'll get an estimated distance. No need to call an extra API!

Updating Leaflet data is just a matter of subscribing to the NgRx store, appending the real-time data to the current `polyline` and updating the position of the car marker.

Creating aggregates in near-real-time

In order to visualize how our team was doing, we decided to create aggregates per 15 minutes and per hour for a variety of metrics like speed and altitude. We based these aggregates on the device telemetry that was sent to Azure IoT Hub. Since we were already using Routes, we added a new one that includes all the events to be consumed by our aggregation layer.

To perform these aggregations it was a no-brainer to go with Azure Stream Analytics, given it can handle the ingestion throughput and it natively supports aggregates by using windowing, more specifically a Tumbling Window.

By using named temporal result sets we were able to capture the aggregate results in a result set and output them to the required sinks. This allows us to keep our script simple, but still output the same results without duplicating the business logic.

And that's how we've built the whole scenario - Here are all the components we used in a high-level overview:

Want to have a look? You can find all our code on GitHub.

Thanks for reading,

Jan, Sam & Tom