
Codit Blog

Posted on Monday, October 23, 2017 11:18 AM

by Glenn Colpaert


Simplifying IoT, one Azure service at a time!

The Internet of Things (IoT) isn't a technology revolution, it is a business revolution enabled by technology. By 2020, there will be 26 billion connected 'things', and IoT will represent a $12 trillion market. These connected 'things' will range from consumer-driven IoT, such as wearables and home automation, to intelligent industrial scenarios like smart buildings and intelligent machine infrastructures.

In this blog post, I will go deeper into detail on why IoT is more than just collecting some data from devices, and explain why it's important to engage the business in your IoT solution next to your perfectly built architecture. I will talk about some of the more complex things you need to think about when building and designing your solution. Some of them might be scary or sound very complex to tackle, but remember that some of these solutions are just one Azure service away...

A simple view on IoT

A simplified overview of an IoT project or solution can be boiled down to the following 4 key components.
An IoT project always comes down to securely connecting your devices to the cloud and flowing your local data streams into it. Once your device data is stored in the cloud, you can start creating insights from it. Based on those insights, you can bring business intelligence to the business and allow them to act on events raised by that data and trigger additional workflows.
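
In Azure terms, the first steps of that pipeline are remarkably small. Here is a minimal sketch of securely connecting a device and flowing telemetry into the cloud with the azure-iot-device Python SDK; the connection string and payload fields are placeholders, not from the original post:

```python
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; each device gets its own identity in IoT Hub.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# One device-to-cloud telemetry message; storage, analytics and dashboards
# all hang off the IoT Hub ingestion point downstream.
telemetry = {"temperature": 21.4, "timestamp": time.time()}
client.send_message(Message(json.dumps(telemetry)))

client.shutdown()
```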

IoT projects can be complex!

However, when taking a closer look at IoT projects, there is more to them than the above 4 key components, especially when moving from a POC setup to a full-blown, production-ready solution with potentially thousands of devices in the field. As IoT is a business-driven revolution, the most important action is to get the business involved from the very start, as they are the key drivers of your IoT project. The risk of not involving the business is that you get stuck in POC limbo and your IoT solution never sees the light of day. Once you get the business on board, things get easier... or not. Some of the most important technical questions and decisions are listed below; each of them is just a small part of your entire solution.

How to connect things that are hard to connect?

Getting your IP-enabled devices connected to the cloud is one thing, but how will you connect your existing devices, the ones that don't speak IP, to the cloud? What if your devices are not capable of change, or the risk of changing them is too high? Or what if your devices aren't even allowed to talk to the cloud for security reasons? When this is the case, you might need to look at other ways of connecting your devices to the cloud, for example by introducing a gateway that acts as a 'bridge' between your devices and the cloud platform.
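
A hedged sketch of such a gateway: it polls a legacy device over some local protocol (the read_local_sensor() helper and broker details below are assumptions for illustration) and republishes the readings to the cloud over MQTT:

```python
import json
import time

import paho.mqtt.client as mqtt


def read_local_sensor() -> dict:
    """Stand-in for a legacy-protocol read (serial, Modbus, OPC, ...)."""
    return {"deviceId": "legacy-01", "temperature": 21.4}


client = mqtt.Client()
client.connect("broker.example.com", 1883)  # assumed cloud-facing broker
client.loop_start()

while True:
    reading = read_local_sensor()
    reading["receivedInGateway"] = time.time()  # gateway-side timestamp
    client.publish("factory/legacy-01/telemetry", json.dumps(reading))
    time.sleep(10)
```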

Device Management/Lifecycle

Once your devices are connected, there are still some open questions and challenges to tackle before processing your data. How will you securely identify and enroll your devices onto your IoT platform? How will you scale that enrollment to many devices? Next to enrollment, there is also the question of configuring and managing your devices. When looking at device management and lifecycles, there are a couple of common management patterns, like reboots, configuration updates or even software updates.
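
On Azure, the Device Provisioning Service is aimed at exactly this enrollment-at-scale question. A sketch with the azure-iot-device SDK, where the ID scope, registration ID and key are placeholders:

```python
from azure.iot.device import ProvisioningDeviceClient

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="device-001",  # placeholder identity
    id_scope="<your-id-scope>",
    symmetric_key="<device-key>",
)

result = provisioning_client.register()
if result.status == "assigned":
    # DPS tells the device which IoT Hub it belongs to, so no per-device
    # connection strings have to be baked into the firmware.
    state = result.registration_state
    print(state.assigned_hub, state.device_id)
```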

Data Storage/Visualization

Another key component within an IoT solution is data. Data is key to getting the insights the business is looking for. Without a proper data storage/visualization strategy you're in for some trouble; think fast IO and high scale. When it comes to storing your data, there is no silver bullet. It really depends on the use case and what the ultimate goal is. The key action is to pick your storage based on the actions you will perform on the stored data. Some storage is a perfect input for your analytics tier but might not be a good option when it's just about archiving the data for later use.
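
As one example of matching storage to purpose, here is a sketch of a 'cold path' that archives raw telemetry to cheap blob storage for later use, using the azure-storage-blob SDK; the connection string, container and naming scheme are assumptions:

```python
import json
from datetime import datetime, timezone

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

now = datetime.now(timezone.utc)
blob = service.get_blob_client(
    container="telemetry-archive",
    blob=f"{now:%Y/%m/%d}/device-001-{now:%H%M%S}.json",  # partitioned by date
)
# Cheap, append-only archive; an analytics tier would want a queryable store instead.
blob.upload_blob(json.dumps({"temperature": 21.4, "ts": now.isoformat()}))
```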

Analytics

As already mentioned in this blog, data is key inside your IoT solution. The real value of your IoT project is making sense of your data and getting insights from it. Once you have captured those insights, it is key to connect them back to the business and evolve your business by learning from them.

Edge Computing

When doing IoT projects, you're not always in the position of having fully connected sites or factories. There might be a limit on communication bandwidth, or even limited internet connectivity. What if you would like your devices to send only aggregated data of the last minute to the cloud? What if you would like to keep all your data close to your device and only send fault data to the cloud? If this is the case, you need to introduce edge computing into your IoT solution. Edge computing allows you to perform buffering, analytics, machine learning and even execute custom code on your device without the need for a permanent internet connection.
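
A minimal sketch of that 'aggregate at the edge' idea; the transport is left abstract (send_to_cloud() is a placeholder for IoT Hub, MQTT or whatever uplink the solution uses):

```python
import random
import time


def read_sensor() -> float:
    return 20.0 + random.random() * 5  # stand-in for a real sensor read


def send_to_cloud(payload: dict) -> None:
    print("uplink:", payload)  # placeholder for the real uplink


buffer = []
window_start = time.time()

while True:
    buffer.append(read_sensor())
    if time.time() - window_start >= 60:  # one-minute window
        send_to_cloud({
            "count": len(buffer),
            "min": min(buffer),
            "max": max(buffer),
            "avg": sum(buffer) / len(buffer),
        })
        buffer, window_start = [], time.time()
    time.sleep(1)  # sample once per second, send one message per minute
```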

Security

Let's not go into detail on this one. Start implementing it from day zero, as this is the most important part of your IoT solution. Your end-to-end value chain must be secured. Never cut the budget for your security strategy and implementation when doing IoT projects.

Simplifying IoT

Congratulations, you've survived the scary part... Thanks to the Azure cloud, some of the above challenges are just a couple of button clicks away. The goal of Azure and Microsoft is to make it easier to build, secure and provision scalable solutions from device to cloud. The list of recent IoT innovations on the Azure platform is endless, with a major focus on some of the key challenges every IoT project faces: security, device management, insights and edge computing.
The future is bright, time to do some IoT!!
Cheers, Glenn
Categories: Azure
Tags: IoT
written by: Glenn Colpaert

Posted on Monday, April 17, 2017 3:18 PM

by Luis Delgado

Dates are always important, but in the context of IoT projects they are even more relevant. The reason is that IoT clients are mostly human-less terminals: machines with no understanding of time. For example, if a client application shows the end user a wrong date, the user will sooner or later spot the problem and correct it. Machines will never identify a date as being incorrect, so the problem can become endemic to your solution and go unnoticed for a long time.

Having incorrect dates will screw up your data. Not knowing the point in time at which a data observation was recorded will render any historical and time-series analysis useless. Hence, we at Codit spend significant time making sure that the definition, serialization and interpretation of time is correct from the very beginning of the IoT value chain. The following are some basic principles for achieving this.

Add a gateway timestamp to all data observations

In general, we assume that data observations generated by machines will be accompanied by a timestamp generated by the originating machine. This is generally true. However, we have noted that the clocks of machines cannot be trusted. This is because, in general, operators of equipment place little importance on the correctness of a machine's internal clock. Typically, machines do not need precise clocks to deliver the function they were designed for. We have seen machines in the field transmit dates with the wrong time offset, the wrong day, and even the wrong year. Furthermore, most machines are not connected to networks outside their operational environment, meaning they have no access to an NTP server to reliably synchronize their clocks.

If you connect your machines to the Internet through a field gateway, we highly recommend adding a receivedInGateway timestamp upon receiving a data point at the gateway. Gateways have to be connected to the Internet, so they have access to NTP servers and can generally provide reliable DateTime timestamps.

A gateway timestamp can even allow you to rescue high-resolution observations that are plagued by a machine with an incorrect clock. Suppose, for example, that you get the following data in your cloud backend:
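
For illustration, assume two observations like these (made-up values, chosen to match the sub-second fractions discussed below):

machineTimestamp              receivedInGateway
2016-01-03T09:21:12.128Z      2017-04-15T11:40:00.256Z
2016-01-03T09:21:13.124Z      2017-04-15T11:40:01.301Z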

You can see that the originating machine's clock is wrong. You can also see that the datetime stamps are being sent with sub-second precision. You cannot trust the sub-second precision of the "receivedInGateway" value because of network latency. However, you can safely assume the sub-second precision at the machine is correct, and you can use the gateway's timestamp to correct the wrong datetimes for high-precision analysis (in this case, the .128 and .124 sub-second measurements).
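
A small sketch of that rescue in Python, using the illustrative values above: take the wall-clock time (to the second) from the gateway and the sub-second fraction from the machine:

```python
from datetime import datetime

machine_ts = datetime.fromisoformat("2016-01-03T09:21:12.128+00:00")
gateway_ts = datetime.fromisoformat("2017-04-15T11:40:00.256+00:00")

# Trust the gateway for the date and time-of-day, the machine for the fraction.
corrected = gateway_ts.replace(microsecond=machine_ts.microsecond)
print(corrected.isoformat())  # 2017-04-15T11:40:00.128000+00:00
```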

Enforce a consistent DateTime serialization format

Dates can become very complicated very quickly. Take a look at the following datetime representations:

  • 2017-04-15T11:40:00Z: follows the ISO 8601 serialization format
  • Sat Apr 15 2017 13:40:00 GMT+0200 (W. Europe Daylight Time): a typical way dates are serialized on the web
  • 04/15/2017 11:40:00: date serialization in American culture
  • 15/04/2017 13:40:00GMT+0200: a day-first local date with a GMT offset appended

All of these dates represent the same point in time. However, if you get a mixture of these representations in your data set, your data scientists will probably spend a significant number of hours cleaning up the datetime mess inside your data set.

We recommend that our customers standardize their datetime representations on the ISO 8601 standard:

YYYY-MM-DDTHH:mm:ss.sssZ

This is probably the only datetime format the web has adopted as a de facto standard, and it is even documented in the ECMAScript specification (the Date Time String Format in ECMA-262).

Note the "Z" at the end of the string. We recommend customer to always transmit their dates in Zulu time. This is because analytics is done easier when you can assume that all time points belong to the same time offset. If that were not the case, your data team will have to write routines to normalize the dates in the data set. Furthermore, Zulu time does not suffer from time jumping scenarios for geographies that switch summer time on and off during the year.

(By the way, for those of you wondering, Zulu time, GMT and UTC time are, for practical purposes, the same thing. Also, none of them observe daylight saving changes).

At the very least, if they don’t want to use UTC time, we ask customers to add a correct time offset to their timestamps:

2017-04-15T13:40:00+02:00

However, in the field, we typically find timestamps with no time offset, like this:

2017-04-15T13:40:00

The problem with datetimes without a time offset is that, by definition, they have to be interpreted as local time. This is relatively easy to manage when working on a client/server application, where you can use the local system time (PC or server). However, since a lot of IoT is related to analytics, it will be close to impossible to determine the correct point in time of a data observation whose timestamp does not include a time offset.
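
A short illustration of the problem in Python: an offset-less timestamp parses as a "naive" datetime that carries no time zone at all, so only outside knowledge can place it on the timeline:

```python
from datetime import datetime

naive = datetime.fromisoformat("2017-04-15T13:40:00")
aware = datetime.fromisoformat("2017-04-15T13:40:00+02:00")

print(naive.tzinfo)        # None - local time, but local to where?
print(aware.isoformat())   # an unambiguous point in time
```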

Make sure that your toolset supports the DateTime serialization format

This might sound trivial, but you do sometimes find quirky implementations of ISO 8601 among software vendors. For instance, as of this writing, Microsoft Azure SQL Server only partially supports ISO 8601 as a serialization format for DateTime2 types: this applies only to the ISO 8601 literal format, while the compact format is not supported. So if you depend on SQL Server for your analytics and storage, make sure you don't standardize on the ISO 8601 compact form.

Conclusion

Dates are easy for humans to interpret, but they can be quite complex to deal with in computer systems. Don't let the triviality of dates (from a human perspective) fool you into underestimating the importance of defining proper, standardized DateTime practices. In summary:

  • Machine clocks cannot be trusted. If you are using a field gateway, make sure you add a gateway timestamp.
  • Standardize on a commonly understood datetime serialization format, such as ISO 8601.
  • Make sure your date serialization includes a time offset.
  • Prefer to work with Zulu/UTC/GMT times instead of local times.
  • Ensure your end-to-end tooling supports the datetime serialization format you have selected.
Categories: Technology
Tags: IoT
written by: Luis Delgado

Posted on Monday, March 21, 2016 11:48 AM

by Luis Delgado

At Codit, we help customers envision, design and implement solutions focused on Internet of Things (IoT) initiatives. As part of this work, I've realized that organizations investing in IoT initiatives typically walk through a path, which I will call the "IoT Maturity Levels".

 

Maturity levels are important because they provide organizations with a goal-oriented path, so they can measure progress and celebrate small successes on their way to a greater goal. Experience shows that it is best for organizations to progress through consecutive maturity levels rather than to try to swallow a super-complex project at once. Violent, ambitious jumps in maturity typically fail due to organizational change resistance, immature operational procedures, and deficient governance practices. It is better to have a firm grasp of one maturity level before venturing into the next.

Here are the 4 maturity levels of IoT, and what they mean for organizations:

Level 1: Data Generation and Ingestion

What is it about: In level 1, organizations begin projects to generate and collect IoT data. This involves coupling their services or products with devices that capture data and gateways to transmit that data, implementing data ingestion pipelines to absorb that data, and storing that data for later use. 

What it means: at this point, companies are finally in a position to generate and collect data. Data generation is the key building block of IoT, and the first maturity level is aimed at getting your hands on data. Typically, the hardest parts are the devices themselves: how to securely capture and transmit the data, how to manage those devices in the field, how to solve connectivity issues, and how to build a pipeline that scales to serve many devices.

Level 2: First Analytics

What is it about: once armed with data, companies will typically try to derive some value out of it. These are initially ad-hoc, exploratory efforts. Some companies might already have a developed concept of how they will use the data, while others will need to embark on exploring the data to find useful surprises. For example, data analysts and scientists will start connecting to the data with mainstream tools like Excel and Power BI and start exploring.

What it means: Companies might be able to start extracting value from the data generated. This will mostly be manual efforts done by functional experts or data analysts. At this stage, the organization starts to derive initial value from the data.

Level 3: Deep Learning

What is it about: the organization recognizes that the data is far larger and more valuable than manual analysis permits, and starts investing in technology that can automatically extract insights from the data. These are typically investments in deep learning, machine learning or streaming analytics. Whereas the value of the data in Level 2 was extracted through the manual work of highly skilled experts, the value of the data in Level 3 is extracted automatically by sophisticated algorithms, statistical models and stochastic process modeling.

What it means: the organization is able to scale the value of its data, as it is not dependent anymore on the manual work of data analysts. More data can be analyzed in many more different ways, in less time. The insights gained might be more profound, due to the sophistication of the analysis, which can be applied to gigantic data sets with ease.

Level 4: Autonomous Decision Making

What is it about: the deep learning and analytical models, along with their accuracy and reliability, are solid upon exiting Level 3. The organization is now in a position to trust these models to make automated decisions. In Level 3, the insights derived from deep learning are mostly used as input for pattern analysis, reporting dashboards and management decision-making. In Level 4, the output of deep learning is used to trigger autonomous operational actions.

What it means: in Level 4, the deep learning engine of the organization is integrated with its operational systems. The deep learning engine will trigger actions in the ERP (e.g. automatic orders to replenish inventory), LoB systems (remote control of field devices via intelligent bi-directional communication), the CRM (triggering personalized sales and marketing actions based on individual customer behavior), or any other system that interacts with customers, suppliers or internal staff. These actions will require no human intervention, or at least only minimal human supervision or approvals to be executed.

Do you need to go all the way up to Level 4?
Not necessarily. How far you need to invest in the maturity of your IoT stack depends on the business case for such an investment. The true impact of IoT, and the business value it might bring, is very hard to gauge at day 0. It is best to start with smaller steps: develop innovative business models, prototype them rapidly, and make smaller investments to explore whether an IoT-powered business model is viable. Make larger commitments only as follow-ups to previous successes. This allows you to fail fast with minimal pain if your proposed business model turns out to be wrong, to adapt the model as you learn through iterations, and to celebrate smaller successes with your team on your IoT journey.

Categories: Architecture
Tags: IoT
written by: Luis Delgado

Posted on Wednesday, May 13, 2015 7:28 PM

by Maxim Braekman

by Sam Neirinck

by Tom Kerkhove

The second edition of Techorama, hosted at Utopolis Mechelen, offered a wide range of interesting sessions covering all kinds of topics. Read more about some of the sessions from the second day in this post.

Just as promised in yesterday's post covering day 1, we are back with an overview of some of the sessions from the second day of Techorama.

Internet of things, success or failure by Stefan Daugaard Poulsen

Twitter: @cyberzeddk

One of the sessions to start off the second day of Techorama, was one about the internet of things, presented by Stefan. He made it clear from the very beginning, that this was not going to be a technical session about writing the code to run on devices, nor about the electronics themselves since Stefan, to put it in his own words, knows jack about it.

Companies are continuously attempting to invent new devices for all kinds of purposes, but are all of these devices actually useful? It's not all about inventing shiny new devices that look good; they should also take some aspects into account:

  • Does it solve a problem? Can it be used to actually make life easier or provide useful information?
  • Is it consumer-friendly? In other words, can it be shipped without a user manual and without raising questions?
  • Does it repeat history? There is no use in re-creating devices that clearly failed in the past.

Of course, one could ask a whole bunch of other questions before starting development or creating a Kickstarter project. But the questions above are vital in order to build a device that might turn into a success.

Although the Internet of Things is becoming widely popular and lots of companies are jumping onto the IoT-train, there are quite some challenges:

  • Privacy: what happens to the data that is being collected by the device?
  • Security: since most devices will be connected to a network, they must not become the culprit of security leaks.
  • Data processing: all of the sensors generate a huge load of data, which needs to be processed in an orderly way.
  • Data storage: all of the data that is being processed needs to be stored in a correct way. Do you actually need all of the data? How long do you need to keep it?
  • Futuristic thinking: the devices should be an enhancement of the current world, but with some limitations. It is not always possible to change how everything currently works without expensive modifications.
  • Battery life: there is no use in creating a device that needs to be charged every couple of hours.

Overall, people and companies should think before creating the next new thing, as it needs to be useful, non-intrusive, reliable and enhancing.

Embellishing APIs with Code Analyzers by Justin Rusbatch

Twitter: @jrusbatch

Visual Studio 2015 ships with the long-awaited Roslyn compiler platform. I can’t remember when exactly Microsoft started talking about Compiler as a Service, but it’s been a couple of years at least. However, it was worth the wait!

As is more and more common within Microsoft, the development of this platform happens in the open on GitHub. This means the compiler is no longer a black box, but a fully featured set of APIs which can be used to analyze code, among many other things.

Justin did a small demo on how easy it is to create and debug an analyzer using Visual Studio 2015 and the VS2015 SDK. It was a simple demo analyzer which would indicate that class names must be in upper case (I suggest not using this in your actual code).


I can already think of quite a few use cases for code analyzers. If we look at BizTalk development alone, one can imagine quite a few rules to create just for pipeline components.

  • The pipeline component class must have a GuidAttribute and ComponentCategoryAttribute (this prevents a few minutes of wondering why your pipeline component doesn't show up in the Toolbox).
  • Do in-depth code analysis to see if the Load and Save methods are implemented correctly.
  • Create warnings for working with non-streaming classes.

Additionally, each integration project has business-specific rules and coding/naming guidelines. Perhaps your guidelines require you to do a LogStartMethod() & LogEndMethod() in each and every method. Now you can create an analyzer which can verify this, and optionally break your build. This way you can ensure that all your guidelines are enforced, and you have great Visual Studio tooling as an additional benefit. You can even create quick fixes so it’s just a matter of clicking the light bulb and the log statements are inserted without you typing a thing.

All in all, it’s something I will definitely look into in the coming weeks.

Teamwork - Playing Well With Others by Mike Wood

Twitter: @mikewo

Those who've read yesterday's post already know I'm a fan of Mike as a speaker, but today's session was really inspiring!

The focus of the talk was how you as an individual can work well with others. The first step to achieving this is to stop thinking about yourself as an individual, and instead think of the team. Together you are one and believe in one big picture: your main objective. You need to get rid of your ego and work together as a team to achieve your goal and get across the hurdles that are holding you back from achieving it.

Here are some other interesting tips he gave us :

  • When communicating in a team, do it wisely - Don't start pointing fingers at each other when things fail, but work together to fix it as soon as possible. Talk in the "we" form when it's positive; otherwise talk in the "I" form, i.e. "I broke the build because of this or that". Avoid the "lottery effect", where only one person has knowledge about a certain topic; losing him/her means losing a lot of knowledge.

  • Great work can be rewarded with incentives, but do it the right way - Reward the team instead of one individual. For example, don't reward the salesman who sold the most; reward the team when they've reached a certain target. This will boost their team spirit instead of creating internal competition.

  • Understand failure and accept it - Nobody is perfect and everybody makes mistakes, so accept this. Making mistakes is inevitable, but make sure you learn from them.

  • Leadership - Not everyone wants to be a leader, so don't push people into this. A true leader knows how his team members work and feel, so he can take that into account. Provide them with guidance and the vision you are striving toward. Delegation is also key to success, but don't assign people tasks you would not want to do yourself.

  • Invest in your team members - Have trust in them and let them research things they're interested in.

These are just some of the examples Mike gave us that can really contribute to thinking as a team, working as a team and shipping great solutions as a team.

I'd like to end this blog post with a quote Mike mentioned during his talk.

"We will accomplish what we do together. We share our successes & we never let anyone of us fail alone."
- USC Covenant
 

 

This rounds up our 2-day adventure at Techorama. First of all, we want to thank everybody for reading our two blog posts, and of course a big thank you to the organizers of Techorama for creating such an amazing event!!

Thanks for reading,

Maxim, Sam & Tom

Categories: Community
Tags: IoT

Posted on Wednesday, May 31, 2017 3:48 PM

by Sam Vanhoutte

What can we learn from the WannaCry ransomware attack and the way we tackle Internet of Things (IoT) projects? That we had better invest enough resources to make, and keep, our smart devices safe.

I was at the airport of Seattle, returning from the Microsoft Build Conference, when I saw the outbreak of the WannaCry ransomware trending on Twitter. There was talk of hospitals that couldn’t operate anymore, government departments unable to function, public transport issues... All consequences of the virus that spread from computer to computer, looking for new victims. The consequences for many IoT scenarios around the world played through my mind. I also remembered the conversations I've had with partners and clients over the past years about investing time and money in the security and safe keeping of IoT devices.

The WannaCry story clearly demonstrated that various IT service companies bear a crushing responsibility. They should have kept computer systems up to date with a supported Windows version and the latest security updates. Very often, time, budget or change management is the reason why such updates did not happen. "If it's not broken, don't fix it." Such thinking left the back door to several critical systems wide open, which broke things a lot quicker than anyone assumed.

That's why, starting with Windows 10, Microsoft changed the default update policy: security and system updates are installed automatically, giving customers a Windows system that is up to date by default. However, pushing automatic updates is a major problem for most IoT systems available today.

IoT security with holes

Very often, devices - from smart scales and internet thermostats to even healthcare devices - are not equipped to receive security updates. The software often does not allow it, or the computing power of the device is too limited to deal with the update logic.

In most cases, the users of such a device don't think about the fact that their gadget (or, more dangerously, their health device) is actually a mini computer that may have a security issue. If security updates cannot be pushed by default through the manufacturer's IoT platform, you can assume that the device will never be updated during its entire lifecycle. To make matters worse, such devices often have a long lifespan. Thus, the encryption algorithms used today may no longer prove sufficient to keep sensitive data encrypted in the foreseeable future.

Companies should therefore always supply an update mechanism in their IoT solution. This makes the initial investment higher, but it also offers an undeniable advantage. For one thing, pushing updates can prevent your brand from getting negative exposure in the news as the result of a (serious) vulnerability. But you can also send new pieces of functionality to those devices. This keeps the devices relevant and enables you to offer new features to your customers.
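
What such an update channel can look like is sketched below using Azure IoT Hub device twins (azure-iot-device Python SDK): the backend sets a desired "firmware" property, and the device applies it and reports progress. The property names and the apply step are assumptions for illustration, not a prescribed API:

```python
from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
client.connect()

while True:
    # Block until the backend pushes new desired properties to this device.
    patch = client.receive_twin_desired_properties_patch()
    firmware = patch.get("firmware")
    if firmware:
        client.patch_twin_reported_properties({"firmware": {"status": "downloading"}})
        # download_and_apply(firmware["uri"])  # hypothetical install step
        client.patch_twin_reported_properties(
            {"firmware": {"status": "applied", "version": firmware.get("version")}}
        )
```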

By taking the responsibility for updating (and thus securing) such systems away from the end user, we create a much safer internet. Because no one wants his smart toaster (and its internet connection) used to enable drug trafficking, child pornography or terrorism.

 

Note: This article was first published via Computable on 30 May 2017 (in Dutch) 

Categories: Opinions
Tags: IoT
written by: Sam Vanhoutte