
Codit Blog

Posted on Tuesday, June 6, 2017 5:00 PM

Stijn Moreels by Stijn Moreels

Several things became clear to me when studying CI. One of these things is that everything is based on the principle of automation. The moment you start thinking “I can’t automate this” is the moment you should ask yourself whether that is really the case.

Introduction

Before I read the book on Continuous Integration by Paul Duvall, Stephen M. Matyas III, and Andrew Glover, I thought that CI meant that we just create a deployment pipeline in which we can easily automate the deployment of our software. That, and the fact that developers integrate continuously with each other.

I’m not saying that it’s a wrong definition, I’m saying that it might be too narrow for what it really is.

Thank you, Paul, Stephen, and Andrew, for the inspiring book and the motivation to write this post.

Automation

Several things became clear to me when studying CI. One of these things is that everything is based on the principle of automation. The moment you start thinking “I can’t automate this” is the moment you should ask yourself whether that is really the case.

CI is all about automation. We automate Compilation (different environments, configurations), Testing (unit, component, integration, acceptance…), Inspection (coverage, complexity, technical debt, maintainability index…), and Documentation (code architecture, user manual, schemas…).

We automate the build that runs all these steps, we automate the feedback we get from it, …

You can automate almost everything. One thing you can’t automate, for example, is Manual Testing; by definition, manual testing means that you let a human test your software and let that human decide what to test. You can, in fact, automate the environment in which this human must test the software, but not the testing itself (otherwise it wouldn’t be called “manual” testing).

That’s what most intrigued me when studying CI: the automation. It makes you think of all those manual steps you must take to get your work done. All those tiny little steps that don’t mean much by themselves, but together are a big waste.

If you always must build your software locally before committing, couldn’t we then just place the commit commands at the end of our build script?

Building

It’s kind of funny when people talk about “building” software. When some people say “I can’t build the software anymore”, they don’t always mean “build”; they mean “compile”. In the context of Continuous Integration, the “compile” step is only the first step of the pipeline, but to some people it’s the most important one. Many think of it as:

“If it compiles == it works”

When you check out some code and the Build fails (the build, not the compilation), that could mean several things: failed Unit Tests, missing Code Coverage, exceeded Cyclomatic Complexity, … but also a compilation failure.

In the next paragraphs, when I talk about a “build” I’m talking in the context of CI and don’t mean “compile”.

Continuous Building Software

Is your build automated?
Are your builds under 10 minutes?
Are you placing the tasks that are most likely to fail at the beginning of your build?
How often do you run your integration builds? Daily? Weekly? At every change (continuously)?

  • Every developer should have the ability to run (on demand) a Private Build on his or her machine.
  • Every project should have the ability to run (on demand, polled, event-driven) an Integration Build that includes slower tasks (integration/component tests, performance/load tests…).
  • Every project should have the ability to run (on demand, scheduled) a Release Build to create deployable software (typically at the end of the iteration), which must also include the acceptance tests.

There are tremendous build-script tools available to automate these kinds of things. NAnt, Psake, FAKE, Cake… are a few (I use FAKE).
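As an illustration of the ideas above, here is a minimal sketch of such a scripted build in Cake’s C# DSL; the task names and project paths are hypothetical, and the same structure can be expressed in FAKE, Psake or NAnt:

```csharp
// build.cake - a minimal sketch of a scripted build pipeline (hypothetical paths).
var target = Argument("target", "Private");

Task("Compile")
    .Does(() => DotNetCoreBuild("./src/MySolution.sln"));

// Fast, most-likely-to-fail tasks come first to keep feedback quick.
Task("UnitTests")
    .IsDependentOn("Compile")
    .Does(() => DotNetCoreTest("./src/UnitTests/UnitTests.csproj"));

Task("IntegrationTests")
    .IsDependentOn("UnitTests")
    .Does(() => DotNetCoreTest("./src/IntegrationTests/IntegrationTests.csproj"));

// Private Build: what every developer runs on demand on his or her machine.
Task("Private").IsDependentOn("UnitTests");

// Integration Build: includes the slower tasks.
Task("Integration").IsDependentOn("IntegrationTests");

RunTarget(target);
```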

Continuous Preventing Development/Testing

Are your tests automated?
Are you writing a test for every defect?
How many asserts per test? Limit to one?
Do you categorize your tests?

“Drive to fix the defect and prevent from reoccurring”

Many other posts discuss the Test-First and Test-Driven mindset and the reasoning behind it, so I will not discuss that here. What I will discuss is the reaction people have to a failing test in your build.

A failed build should trigger a “Stop the presses” event within the team. Everyone should be concerned about the failure and should help each other to make the build succeed again as quickly as possible. Fixing a failed build should be the responsibility of the team and not (only) of the person who broke the build.

But what do you do when the build failed? What reaction should you have?

First, expose the defect by writing a test, and write it as a test that passes. When that new test passes, you have proven the defect and can start fixing it. Note that we don’t write a failing test!

There are three reasons why you should write a test that passes for a defect (we’re using Test-Driven Development, right?):

  1. It’s difficult to write a failing test that uses its assertion correctly, because the assertion may never be exercised once the test no longer fails; you then don’t have a test that passes, but a test that just isn’t failing.
  2. You’re guessing what the fix should alter in behavior == assumption.
  3. If you have to fix the code being tested, you have a failing test that works but one that doesn’t verify the behavioral change.

To end the part about testing, let me be clear on something that many developers fail to grasp: the different kinds of software tests. I have encountered several definitions of these tests, so I merge them here for you. The most important part is that you cover all these aspects, not whether you choose to call them Acceptance Tests or Functional Tests:

  • Unit Tests: testing the smallest possible “units” of code with no external dependencies (including file system, database…), written by programmers - for programmers, specify the software at the lowest level…
    Michael Feathers has some Unit Test Rulz that specify whether a test can be seen as a Unit Test.
  • Component Tests encapsulate business rules (could include external dependencies), …
  • Integration Tests don’t encapsulate business rules (could include external dependencies), tests how components work together, Plumbing Tests, testing architectural structure, …
  • Acceptance Tests (or Functional Tests) written by business people, define the definition of “done”, purpose to give clarity, communication, and precision, test the software as the client expects it, (Given > When > Then structure), …
  • System Tests test the entire system, could sometimes overlap with the acceptance tests, test the system from a developer’s perspective…
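To make the distinction concrete, here is a small hypothetical xUnit sketch of a unit test that is categorized with a trait, so a build can select fast and slow tests separately (the class and trait names are my own):

```csharp
using System.Linq;
using Xunit;

public class OrderCalculator
{
    public int Total(int[] linePrices) => linePrices.Sum();
}

public class OrderCalculatorTests
{
    // No external dependencies (no file system, database, ...): a true unit test.
    [Fact]
    [Trait("Category", "Unit")]
    public void Total_SumsLinePrices()
    {
        Assert.Equal(30, new OrderCalculator().Total(new[] { 10, 20 }));
    }
}
```

Slower component or integration tests could then carry, for example, a [Trait("Category", "Integration")] attribute, so that only the Integration Build runs them.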

Continuous Inspection

Can you show the current amount of code complexity?
Performing automated design reviews?
Monitoring code duplication?
Current code coverage?
Produce inspection reports?

It probably won’t surprise you that Code Inspection is maybe not the most “sexy” part of software development (is Code Testing sexy?). But nonetheless, it’s a very important part of the build.

Try asking some projects what their current Code Coverage is. Maintainability Index? Technical Debt? Duplication? Complexity?

All those elements are easily automated, yet very few teams adopt this mindset of Continuous Inspection. The elements above are a good starting point.

Continuous Deployment

Can you rollback a release?
Are you labelling your builds?
Deploy software with a single command?
Deploy with different environments (configuration)?
How do you handle fixes after deployment?

At the end of the pipeline (in a Release Build), you could trigger the deployment of the project. Yes, you should include the Acceptance Tests in here because this is the last step before the actual deployment.

The deployment itself should be done with one “Push on the Button”; as simple as that. In Agile projects, the deployment of the software is already done at the very beginning of the project. This means that the software is placed at the known deployment target as quickly as possible.

That way the team gets feedback as quickly as possible about how the software acts in “the real world”.

Continuous Feedback

When you deploy, build, test, … something, wouldn’t you want to know as quickly as possible what happened? I certainly do.

One of the first things I always do when starting a project is check whether I (and the team) get the right notifications. As a developer, I want to know as quickly as possible when a build succeeds or fails. As an architect, you want to know what the current documentation of the code base is and what the code looks like in schemas; as a project manager, you may want to know whether the acceptance tests succeeded, so the client gets what he/she wants…

Each role has its own responsibilities and its own reasons for wanting feedback on things. You should be able to give them this feedback!

I use Catlight for my build feedback, work item tracking, release status... Perhaps this tool will also support pull request notifications in the future.

Some development teams have an actual big colorful lamp that indicates the current build status: Red = Failed, Green = Successful and Yellow = Investigating. Some lamps turn a darker red if the build stays in a “failed” state for too long.

Conclusion

Don’t call this a full CI summary, because it is certainly not. See it as a quick introduction to how CI can be implemented in a software project, with the high-level actions in place, and to what you can improve in your project automation process. My motto is that anything can be improved, and therefore, automated further.

I would also suggest you read the book I talked about and/or check the ThoughtWorks site for more information on recent developments in the CI community.

Start integrating your software to develop with less risk and higher quality. Make it so automated that you just have to “Push the Button” - the Integrate Button.

Categories: Technology
written by: Stijn Moreels

Posted on Friday, June 2, 2017 3:55 PM

Toon Vanhoutte by Toon Vanhoutte

Quite often I get questions on how to modify JSON collections or arrays within Logic Apps. Before reaching out to Azure Functions, as an extensibility option, I prefer to use the out-of-the-box available Logic Apps functionality. This blog post contains some tips and tricks about dealing with collections.

Remove and rename properties of items within a collection

This example explains how you can easily remove and rename properties within the repeated items of a collection. For this, we leverage the Data Operations - Select action. Below you can find a simple example.

Input collection

This is the sample input collection.

Select action

Configure the Data Operations - Select action, so only two properties remain, while being renamed. The output array will contain the same number of items, whilst the items will only include Item and OrderNumber as properties. This offers a great user experience for performing simple transformations on collections.
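In the workflow definition (code view), such a Select action looks roughly like the sketch below; the source property names ItemName and OrderNo are assumptions, since only the resulting Item and OrderNumber properties are mentioned above:

```json
"Select_order_lines": {
    "type": "Select",
    "inputs": {
        "from": "@triggerBody()",
        "select": {
            "Item": "@item()?['ItemName']",
            "OrderNumber": "@item()?['OrderNo']"
        }
    }
}
```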

Output Collection

This is the resulting output collection.

Reduce the number of items within a collection

This sample explains how you can easily reduce the number of items within a collection. The following expressions can be used to perform this job, without looking at the content of the item:

  • @first: returns the first item of the collection
  • @last: returns the last item of the collection
  • @take: returns the first x items of the collection
  • @skip: skips the first x items and returns the remainder of the collection

In case you want to reduce the number of items in a collection, based on their content, you can use the Data Operations - Filter action. Below you can find a simple example.

Input collection

This is the sample input collection.

Filter action

Configure the Data Operations - Filter action, so only the items that have a quantity lower than or equal to 100, are included in the output collection. This allows you to quickly take a subset of a collection, based on specific filter criteria. With the available Logic Apps expressions, you can write quite complex and powerful filter queries.
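In code view, the Filter array action is of type Query; a sketch of a configuration like the one described here could be (the Quantity property name is an assumption):

```json
"Filter_small_quantities": {
    "type": "Query",
    "inputs": {
        "from": "@triggerBody()",
        "where": "@lessOrEquals(item()?['Quantity'], 100)"
    }
}
```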

Output Collection

This is the resulting output collection.

Remove duplicate items from a collection

This section explains how you can easily remove duplicate items from a collection. You can use the @union expression to achieve this, although it's not originally designed for that. Below you can find a simple example.

Input collection

This is the sample input collection.

Compose action

Configure a Data Operations - Compose action that uses the @union expression. This expression merges the content of two input collections, but it also has a useful side effect: only one of each set of duplicate items is included in the output collection. So, as a trick, you can use @union(triggerBody(),triggerBody()) to remove duplicate items from a collection. I pass the input collection twice to the @union expression, because I didn't find out how to pass an empty collection.
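A sketch of such a Compose action in code view (the action name is arbitrary):

```json
"Remove_duplicates": {
    "type": "Compose",
    "inputs": "@union(triggerBody(), triggerBody())"
}
```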

If you don't want any of the duplicate items to be included in the output collection, you could use the reverse @intersection expression in a similar way.

Output Collection

This is the resulting output collection, using the @union expression.

Conclusion

Logic Apps already has several powerful mechanisms to modify collections in a user-friendly way. The product team is constantly adding enhancements to the framework, so I'm convinced that we will enjoy additional ways to handle arrays in the near future!

Do you also know a trick about handling Logic App collections? Don't hesitate to share it in the comments section below! Are you looking for more info on transforming JSON objects in Logic Apps? Definitely have a look over here!

Thank you
Toon

Categories: Azure
written by: Toon Vanhoutte

Posted on Thursday, June 1, 2017 3:30 PM

Stijn Moreels by Stijn Moreels

One of the mind-blowing development techniques that radically changed the programming world is the Test-Driven Development approach introduced by Kent Beck.

But... writing tests before we start coding? Who will do that?

Introduction

One of the mind-blowing development techniques that radically changed the programming world is Test-Driven Development.

Writing tests before we start coding? Who will do that?

I must admit that I personally wasn’t really convinced by the idea at first; maybe because I didn’t quite understand the reason we should write our tests first and the way we should do it. Can you have a bad Software Design with TDD? Can you break your Architecture with TDD? Yes! TDD is a discipline that you should be following, and like any discipline you must hold yourself to a certain number of requirements. At the end of the day, it’s YOUR task to follow this simple mindset.

In this article, I will talk about the Mindset introduced by Kent Beck when writing in a Test-Driven Development environment.

Too many developers don’t see the added value of this technique and/or don’t believe it works.

TDD works!

"Testing is not the point; the point is about Responsibility"
-Kent Beck

Benefits

Because so many of us don’t see the benefits of TDD, I thought it would make sense to specify them for you. Robert C. Martin has inspired me with this list of benefits.

Increased Certainty

One of the benefits is that you’re certain that it works. Users have more responsibility in a Test-Driven team, because they will write the Acceptance Tests (with help, of course), and they will define what the system must do. By doing so, you’re certain that what you write is what the customer wants.

The amount of uncertainty that builds up by writing code that isn’t exactly what the customer wants is called: The Uncertainty Principle. We must always eliminate this uncertainty.

By writing tests first, you can tell your manager and customer: “Yes, it will work; yes, it’s what you want”.

Defect Reduction

Before I wrote with a Test-First mindset, I always thought that my code was full of bugs and didn’t handle unexpected behavior.
Maybe it was because I was very certain of myself, but also because I wrote the tests after the code and was testing what I had just written, not what I wanted to test.

This increases the Fake Coverage of your code.

Increased Courage

So many developers are “afraid” to change something in their code base. They are afraid to break something. Why are they afraid? Because they don’t have tests!

“When programmers lose the fear of cleaning; they clean”
- Robert C. Martin

A professional developer doesn’t let his/her code rot; so, you must refactor with courage.

In-Sync Documentation

Tests are the lowest form of documentation of your code base; always 100% in sync with the current implementation in the production code.

Simple Design

TDD is an analysis/design technique and not necessarily a development technique. Tests force you to think about good design, certainly if you write them BEFORE you write the actual implementation. If you do so, you’re writing them on offense and not on defense (as you are when you write them afterwards).

Test-First also helps you think about the Simplest thing that could possibly work which automatically helps you to write simple structured designed code.

Test-First Mindset

When you’re introduced to the Test-First methodology, you often get Test Infected. The amount of stress it takes away from you is remarkable. You refactor your code more aggressively, without any fear that you might break something.

Test-Driven Development is based on the very simple idea of first writing your test, and only then writing your production code. People underestimate the part “first write your test”. When you’re writing your tests, you’re solving more problems than you think.

Where should I place this code? Who’s responsible for this logic? What names should I use for my methods, classes…? What result must I get from this? What isn’t valid data? What will my class interface look like? …

After trying to use TDD in my daily practice, I found myself always starting from the same sentence:

“I would like to have a … with … and …”

Such a simple idea changed my vision of development so radically, and I’m convinced that by using this technique, you’re writing simpler code because you always think about:

“What’s the simplest thing that could make this test work”

If you find that you can make a test pass with something that isn’t the right implementation, write another test that exposes the behavior you actually want to implement.

TDD is - in a way - a psychological methodology. What they say is true: you DO get addicted to that nice green bar that indicates that all your tests pass. You want that bar as green as possible, you want it always green, and you want it to run as fast as possible so you can quickly see that it’s green…

To be a Green-Bar-Addict is a nice thing.

Kent Beck Test-Patterns

It felt a little weird to just state all the patterns Kent Beck introduced. Maybe you should just read the book Test-Driven Development by Example; he’s a very nice writer and I learned a lot from the examples, patterns and ideas.

What I will do is give you some basic patterns that I will use later in the example, and some patterns that were very eye-opening for me the first time.

Fake It

When Kent talked about “What’s the simplest thing that could work”, I was thinking about my implementation but what he meant was “What’s the simplest thing that could work for this test”.

If you’re testing that 2 x 3 is 6, then when you implement it, you should Fake It and just return the 6.
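A tiny hypothetical xUnit sketch of Fake It (the Calculator class and its test are invented for illustration):

```csharp
using Xunit;

public static class Calculator
{
    // Fake It: hard-code the answer the single existing test expects.
    public static int Multiply(int x, int y) => 6;
}

public class CalculatorTests
{
    [Fact]
    public void TwoTimesThree_IsSix() => Assert.Equal(6, Calculator.Multiply(2, 3));
}
```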

Very weird at first, especially because the whole Fake It approach is based on duplication, the root of all software evil. Maybe that’s the reason experienced software engineers have problems with this approach.

But it’s a very powerful approach. Using this technique, you can quickly get the bar green (the testing bar). And the quicker you get that bar green, the better. And if that means you must fake something, then you should do that.

Triangulation

I found this technique very interesting. It really drives the abstraction of your design. When you find yourself not knowing what to do next, or how to go further with your refactoring, write another test to support new knowledge of the system and the start of new refactorings in your design. This is especially useful when you’re unsure what to do next.

If you’re testing that 2 x 3 is 6, then in a Triangulation approach you will first just return 6, and only change that when you add another test, for example that 2 x 2 is 4.
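Continuing the hypothetical sketch from above, a second test triangulates the fake away and drives the general implementation:

```csharp
using Xunit;

public static class Calculator
{
    // The hard-coded 6 no longer survives two tests; triangulation forces the general form.
    public static int Multiply(int x, int y) => x * y;
}

public class CalculatorTests
{
    [Fact]
    public void TwoTimesThree_IsSix() => Assert.Equal(6, Calculator.Multiply(2, 3));

    [Fact]
    public void TwoTimesTwo_IsFour() => Assert.Equal(4, Calculator.Multiply(2, 2));
}
```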

Obvious Implementation

Of course, when the implementation is so simple, so obvious… then you could always implement it directly after your test. But remember that this approach is only the second option after Fake It and Triangulation.

When you find yourself taking steps that are too big, you can always take smaller steps.

If you’re testing that 2 x 3 is 6, in an Obvious Implementation approach you will just write 2 x 3 right away.

By Example

I thought it would be useful to show you an example of the TDD workflow. Since everyone is so stoked about test-driving Fibonacci, I thought it would be fun to test-drive another integer sequence.

Let’s test-drive the Factorial Sequence!

What happens when we take the factorial of 4, for example? 4! = 4 x 3 x 2 x 1 = 24

Test

But let’s start with something super simple:

Always start with the same sentence: “I would like to have a…”. I would like to have a method called Factorial that I can send an integer to, and that will calculate the factorial of that integer for me.
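The original code screenshots are not included here, so the snippets that follow are reconstructions of what they might have looked like. As a super simple start, assuming xUnit and a starting case of 1:

```csharp
using Xunit;

public class FactorialTests
{
    [Fact]
    public void Factorial_OfOne_IsOne() => Assert.Equal(1, MathFunctions.Factorial(1));
}
```

Note that MathFunctions doesn’t exist yet, which is exactly the point of the next step.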

Now we have created a test before anything about factorial is implemented.

Compile

Now that we have the test, let’s start by making our code compile again.
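A minimal stub that makes everything compile (but not pass) could look like this:

```csharp
using System;

public static class MathFunctions
{
    public static int Factorial(int number)
    {
        // Just enough to compile; the test will still fail.
        throw new NotImplementedException();
    }
}
```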

Let’s test this:

Hooray! We have a failed test == progress!

Implement

First Steps

What’s the simplest thing we could write to make this test pass?
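Presumably the simplest possible thing, Fake It style:

```csharp
public static int Factorial(int number)
{
    // Fake It: the only test we have expects 1.
    return 1;
}
```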

Hooray! Our test passed, we can go home, right?

A Bit Harder

What’s next? Let’s check: what happens if we test for another value?
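A reconstruction of that second test (the value 2 is my assumption):

```csharp
// Added to FactorialTests.
[Fact]
public void Factorial_OfTwo_IsTwo() => Assert.Equal(2, MathFunctions.Factorial(2));
```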

I know, I know. Duplication, duplication, duplication. But we’re testing now, right? We’re not yet at the last step of the TDD mantra.

What is the simplest thing we could change to make this test pass?
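Presumably something like this:

```csharp
public static int Factorial(int number)
{
    if (number == 1)
    {
        return 1;
    }

    // Still faking: the new test expects 2.
    return 2;
}
```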

Yes, I’m feeling good right now. A nice green bar.

One Step Before Generalize

Let’s add just another test, a bit harder this time. But this duplication is starting to irritate me; you know the mantra: One-Two-Three-Refactor? This is the third time, so let’s start refactoring!

Ok, what’s the simplest thing?
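Assuming the new, harder test checked that Factorial(4) == 24 (the example from the introduction), the simplest change might have looked like this:

```csharp
public static int Factorial(int number)
{
    if (number == 1)
    {
        return 1;
    }
    if (number == 2)
    {
        return 2;
    }

    // Fake the newest, harder case.
    return 24;
}
```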

Generalize

Ok, we could add if/else-statements all day long, but I think it’s time for some generalization. Look at what we’ve been implementing. We write 24, but do we mean 24?

Remembering Factorial, we mean something else:
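Presumably the hard-coded 24 was rewritten along these lines:

```csharp
// 24 is really:
return 4 * 3 * 2 * 1;
```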

Everything still works, yeah. Now, we don’t actually mean 4 by that 4, do we? We actually mean the original number:
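Which presumably became:

```csharp
// ... and that 4 is really the original number:
return number * 3 * 2 * 1;
```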

And by 3, 2 and 1 we don’t actually mean 3, 2 and 1 either; we mean the original number minus one, each time. So actually, you could say that’s the factorial of 3, no?

Let’s try:
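A reconstruction of the recursive step:

```csharp
public static int Factorial(int number)
{
    if (number == 1)
    {
        return 1;
    }
    if (number == 2)
    {
        return 2;
    }

    // 3 x 2 x 1 is just the factorial of (number - 1).
    return number * Factorial(number - 1);
}
```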

Wow, still works. Wait, isn’t that if-statement for 2 redundant? 2 x 1! == 2, right?

Exploration

Now, the factorial of 0 is also 1. We haven’t tested that, have we? We have found a boundary condition!

Without a guard, this will result in endless recursion, because we will try to take the factorial of a negative number; and factorial is only defined for positive numbers (the formula with negative integers would result in a division by zero, blocking us from calculating a factorial value for these negative integers).

Again, simplest thing that could work?
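Presumably a guard for the new boundary condition, with the redundant check for 2 dropped:

```csharp
public static int Factorial(int number)
{
    if (number == 0)
    {
        return 1;
    }
    if (number == 1)
    {
        return 1;
    }

    return number * Factorial(number - 1);
}
```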

Now, the last step of TDD is always Remove Duplication, which in this case is the 1 that’s used two times. Let’s take care of that.

Hmm, someone may have noticed something. We could actually remove the other if-statement that checks for 1, if we adapt the check for 0. The recursive call will then return 1 for us:
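Which gives a final version along these lines:

```csharp
public static int Factorial(int number)
{
    if (number <= 0)
    {
        return 1;
    }

    // Factorial(1) now resolves to 1 * Factorial(0) == 1.
    return number * Factorial(number - 1);
}
```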

By doing this, we also have ruled out all the other negative numbers passed inside the method.

Conclusion

Why, oh why, are people so skeptical about Test-Driven Development? If you use it in your daily practice, you find yourself writing simpler and more robust code.

TDD is actually a Design Methodology and not a Development Methodology. The way you think about the design, the names, the structure… all of that is part of the design process of your project. The tests that you end up with are the added value of this approach; they make sure that you can refactor safely and are always certain of your software.

Start trying it today in your daily practice, so that you stop thinking “How will you implement it?” and instead think:

How will you test it?

 

Categories: Technology
written by: Stijn Moreels

Posted on Thursday, June 1, 2017 8:00 AM

Luis Delgado by Luis Delgado

During //Build, Microsoft surprised the market with the CosmosDb announcement. With the announcement, Microsoft made some very bold statements about the service, particularly about scalability and performance. I was particularly curious about the sub-10-millisecond write performance promised at the 99th percentile, which also included the indexing overhead on writes.

At Codit, we are regularly engaged in customer IoT projects. By nature, IoT projects generate vast amounts of data that need to be quickly ingested and stored. So with this in mind, I wanted to test how CosmosDb behaves when it needs to ingest large batches of historical observations.

The data set

To run the experiment, I used the historical flight data available here. The data model includes about 110 columns, out of which I picked the first 50. I got about 5 GB of data, comprising millions of these rows. Each row is about 1725 bytes in size.

The CosmosDb setup

Since the experiment data is tabular in nature, I created a Table API database with one collection called flights.

Next, I wrote a C# program that would read the csv files from the dataset (each one is about 200Mb) and created a TableEntity using the DynamicTableEntity approach. I then grouped all rows based on their airport’s IATA code, which I decided to use as the PartitionKey for this experiment. Finally, the program creates insert batches of length 100 for every group of rows belonging to the same airport code. The program then takes each of these insertion batches (remember, each batch has up to 100 rows to insert) and executes them sequentially.
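The full program is linked below under Resources; conceptually, the grouping and batching logic is close to this sketch using the classic table SDK types (the row model, property names and row key choice are simplified assumptions):

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical, simplified row model (the real program picks ~50 columns).
public class FlightRow
{
    public string OriginIata { get; set; }
    public string FlightDate { get; set; }
}

public static class FlightLoader
{
    // Groups rows per airport code (the PartitionKey) and inserts them in batches of up to 100.
    public static void Insert(CloudTable table, FlightRow[] rows)
    {
        foreach (var airport in rows.GroupBy(r => r.OriginIata))
        {
            var chunks = airport.Select((row, index) => new { row, index })
                                .GroupBy(x => x.index / 100, x => x.row);

            foreach (var chunk in chunks)
            {
                var batch = new TableBatchOperation();
                foreach (var row in chunk)
                {
                    var entity = new DynamicTableEntity(airport.Key, Guid.NewGuid().ToString());
                    entity.Properties["FlightDate"] = new EntityProperty(row.FlightDate);
                    batch.Insert(entity);
                }

                table.ExecuteBatch(batch); // executed sequentially in the original experiment
            }
        }
    }
}
```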

To reduce network latency, I provisioned an Azure VM in the same region as the CosmosDb (North Europe) and gave it a generous amount of RAM (12Gb) so it could load all the data set in memory (the program I wrote is not optimized to reduce memory consumption).

The result

  1. Overall, the program created 19005 insert batches, most of them holding 100 rows to insert.
  2. The program managed to execute 4–5 batches per second. This means it managed to insert about 400–500 rows/s. This is about 860KB/s throughput.
  3. The 96th percentile for batch insertion time (remember, we are talking about insertion of a batch that includes 100 rows, not about inserting individual rows) was 283ms.

Takeaway

With this experiment, I managed to hit 429 (request rate too large) errors very quickly. In the end, I needed to increase the throughput to 10'000 RU/s. But most importantly, I needed to change the indexing policy from consistent to lazy. With the amount of data I was trying to load, I was not able to avoid the 429 errors with a consistent indexing policy, not even when using 10'000 RU/s.

Write performance is excellent, as you can see from the result above. Even when using consistent indexing policy, the write performance of the batches was very high (when you don’t get throttled).

As is typical for a v1 product, there are some inconsistencies between the service documentation and the actual APIs. For example, the documentation states that CosmosDb is not limited to 100 rows per insert batch as Azure Table storage is. However, when building insertion batches with more than 100 entities, the program throws an exception: The maximum number of operations allowed in one batch has been exceeded, even though I am using the new WindowsAzure.Storage-PremiumTable library. I guess this new library has not been updated yet. I did not try to insert batches using the REST API directly (the documentation only talks about working with .NET; there is no REST reference for the Table API).

I think the key takeaway for me is that, when designing a large-scale system with CosmosDb, you will have to be extremely diligent in calculating how much capacity the system will need to offer, and its corresponding cost trade-off. Getting large write throughput on this database seems to be expensive. On the other hand, I ran the same loading job against the traditional Azure Table Storage service, and I never got throttled at all. In summary, I can derive the following learnings:

  1. The performance of individual writes of CosmosDb is excellent.
  2. Getting high write throughput with consistent indexing is significantly more expensive on CosmosDb than Table Storage. Table Storage will give you way more throughput for less money.
  3. You should carefully optimize your CosmosDb index. Although the database advertises that everything is indexed by default (which is a great engineering feat, no question about it), the overhead of indexing everything is going to cost you money, especially if your data model is large.
  4. CosmosDb Table API effectively supports secondary indexes, whereas Table Storage only supports PartitionKey-RowKey indexing. This is a big advantage for CosmosDb when you need to run more complex queries.

Resources

Here is the code I used to load the CSV files into CosmosDb. The code is loosely based on the Microsoft sample.

Here is the link to one of those CSV files.

Update

With some time on my hands, I refactored the code to execute the insert batches in parallel and do a more detailed write-throughput comparison with standard Azure Table Storage. In the refactoring, I also decided to pick only 20 columns of the data set and deserialize them into a proper class to get the correct data types (instead of having them all as strings). The conclusions don’t change, but I can now offer additional hard facts.
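A sketch of what that parallel execution could look like (my own guess at the approach, with a degree of parallelism of 10 to match the screenshots below):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

public static class ParallelFlightLoader
{
    // Executes the prepared insert batches 10 at a time instead of one by one.
    public static async Task InsertAsync(CloudTable table, IReadOnlyList<TableBatchOperation> batches)
    {
        foreach (var group in batches.Select((batch, index) => new { batch, index })
                                     .GroupBy(x => x.index / 10, x => x.batch))
        {
            await Task.WhenAll(group.Select(batch => table.ExecuteBatchAsync(batch)));
        }
    }
}
```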

In the following screenshots, you’ll see rows with three values: the first value is the timestamp of the row-insertion batch (remember, each batch contains up to 100 rows), the second value is the elapsed time for inserting the batch in milliseconds, and the last value is the total number of rows inserted.

Azure Table storage, 10 parallel batches of 100 rows per batch

 

CosmosDb, 10 parallel insert batches of 100 rows per batch, consistent indexing, 10KRU/s. Basically useless, no request succeeds

CosmosDb, 10 parallel batches of 100 rows per batch, lazy indexing, 5K RU/s

You can see that the write performance of CosmosDb is much better than that of Azure Table storage (just compare the batches with 1000 records; many of them hover around 100 ms). That is, when you don’t get throttled.

Categories: Azure
Tags: CosmosDb, Storage
written by: Luis Delgado

Posted on Wednesday, May 31, 2017 3:48 PM

Sam Vanhoutte by Sam Vanhoutte

What can we learn from the WannaCry ransomware attack and the way we tackle Internet of Things (IoT) projects? That we had better invest enough resources to make, and keep, our smart devices safe.

I was at the airport of Seattle, returning from the Microsoft Build Conference, when I saw the outbreak of the WannaCry ransomware trending on Twitter. There was talk of hospitals that couldn’t operate anymore, government departments unable to function, public transport issues... All consequences of the virus that spread from computer to computer, looking for new victims. The consequences for many IoT scenarios around the world played through my mind. I also remembered the conversations I've had with partners and clients over the past years about investing time and money in the security and safe keeping of IoT devices.

The WannaCry story clearly demonstrated that various IT service companies bore a crushing responsibility. They should have kept computer systems up to date with a supported Windows version and the latest security updates. Very often, time, budget or change management is the reason why such updates did not happen. “If it’s not broken, don’t fix it.” Such thinking left the back door to several critical systems wide open, which broke things a lot quicker than anyone assumed.

That's why, starting with Windows 10, Microsoft has changed the default 'update policy'. Security and system updates are automatically installed, giving customers a Windows system that is up to date by default. However, the pushing of automatic updates is a major problem with most IoT systems available today.

IoT security with holes

Very often, devices - from smart scales to internet thermostats to even healthcare devices - are not equipped to receive security updates. The software often does not allow it, or the computing power of the device is too limited to deal with the update logic.

In most cases, the users of such a device don’t stop to think about the fact that their gadget (or, more dangerously, their health device) is actually a mini computer that may have a security issue. If security updates cannot be pushed by default through the manufacturer’s IoT platform, you can assume that the device will never be updated during its entire lifecycle. To make matters worse, such devices often have a long lifespan. Thus, the encryption algorithms used today will no longer prove sufficient to keep sensitive data encrypted in the foreseeable future.

Companies should therefore always supply an update mechanism in their IoT solution. This makes the initial investment higher, but it also offers an undeniable advantage. For one thing, pushing updates can prevent your brand from getting negative exposure in the news as the result of a (serious) vulnerability. But you can also send new pieces of functionality to those devices. This keeps the devices relevant and enables you to offer new features to your customers.

By taking the responsibility for updating (and thus securing) such systems away from the end user, we create a much safer internet. Because no one wants his smart toaster (and its internet connection) used to enable drug trafficking, child pornography or terrorism.

 

Note: This article was first published via Computable on 30 May 2017 (in Dutch) 

Categories: Opinions
Tags: IoT
written by: Sam Vanhoutte