Codit Blog

Posted on Thursday, June 8, 2017 3:11 PM

by Toon Vanhoutte

This post contains 10 useful tips for designing enterprise integration solutions on top of Logic Apps. It's important to think upfront about reusability, reliability, security, error handling and maintenance.

Democratization of integration

Before we dive into the details, I want to provide some reasoning behind this post. With the rise of cloud technology, integration takes a more prominent role than ever before. In Microsoft's integration vision, democratization of integration is on top of the list.

Microsoft aims to take integration out of its niche market and offers it as an intuitive and easy-to-use service to everyone. The so-called Citizen Integrators are now capable of creating lightweight integrations without the steep learning curve that, for example, BizTalk Server requires. Such integrations are typically point-to-point, user-centric and have some accepted level of fault tolerance.

As an Integration Expert, you must be aware of this. Enterprise integration faces completely different requirements than lightweight citizen integration: loose coupling is required, no message loss is acceptable because the interfaces are mission critical, integrations must be optimized for operations personnel (monitoring and error handling), etc…

Keep this in mind when designing Logic App solutions for enterprise integration! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The advice below can give you a jump start in designing reliable interfaces within Logic Apps!

Design enterprise integration solutions

1. Decouple protocol and message processing

Once you've created a Logic App that receives a message via a specific transport protocol, it's extremely difficult to change the protocol afterwards. This is because the subsequent actions of your Logic App often have a hard dependency on your protocol trigger / action. The advice is to perform the protocol handling in one Logic App and hand the message over to another Logic App that performs the message processing. This decoupling allows you to change the receiving transport protocol in a flexible way, in case the requirements change or a certain protocol (e.g. SFTP) is not available in your DEV / TEST environment.

2. Establish reliable messaging

You must realize that every action you execute is performed over an underlying HTTP connection. By its nature, an HTTP request/response is not reliable: the service is not aware if the client disconnects during request processing. That's why receiving messages must always happen in two phases: first you mark the data as returned by the service; second you label the data as received by the client (in our case the Logic App). The Service Bus Peek-Lock pattern is a great example that provides such at-least-once reliability. Another example can be found here.
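To make the two-phase reception concrete outside of Logic Apps, here is a minimal C# sketch of the Service Bus Peek-Lock pattern, assuming the Azure.Messaging.ServiceBus package, a hypothetical queue named "orders" and a placeholder connection string. The message is only completed (the second phase) after processing succeeds; on failure, the lock is released and the message is redelivered.

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    public static class PeekLockSample
    {
        public static async Task Main()
        {
            // Placeholder connection string and queue name; replace with your own.
            await using var client = new ServiceBusClient("<service-bus-connection-string>");
            await using var receiver = client.CreateReceiver("orders",
                new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.PeekLock });

            // Phase 1: the message is handed to us, but it stays locked on the queue.
            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
            if (message == null)
                return;

            try
            {
                // Process the message here (e.g. hand it over to the processing Logic App).
                Console.WriteLine(message.Body.ToString());

                // Phase 2: confirm successful reception; only now is the message removed.
                await receiver.CompleteMessageAsync(message);
            }
            catch (Exception)
            {
                // Release the lock so the message becomes visible again (at-least-once delivery).
                await receiver.AbandonMessageAsync(message);
                throw;
            }
        }
    }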

3. Design for reuse

Real enterprise integration is composed of several common integration tasks such as: receive, decode, transform, debatch, batch, enrich, send, etc… In many cases, each task is performed by a combination of several Logic App actions. To avoid reconfiguring these tasks over and over again, you need to design the solution upfront to encourage reuse of these common integration tasks. You can, for example, use the Process Manager pattern, which orchestrates the message processing by reusing nested Logic Apps, or introduce the Routing Slip pattern to build integrations on top of generic Logic Apps. Reuse can also be achieved on the deployment side, by having some kind of templated deployments of reusable integration tasks.

4. Secure your Logic Apps

From a security perspective, you need to take into account both role-based access control (RBAC) to your Logic App resources and runtime security considerations. RBAC can be configured in the Access Control (IAM) tab of your Logic App or at the Resource Group level. Runtime security really depends on the triggers and actions you're using. As an example: Request endpoints are secured via a Shared Access Signature that must be part of the URL, and IP restrictions can be applied. Azure API Management is the way to go if you want to govern API security centrally, on a larger scale. It's a good practice to assign the minimum required privileges (e.g. read only) to your Logic Apps.

5. Think about idempotence

Logic Apps can be considered composite services, built on top of several APIs. APIs leverage the HTTP protocol, which can cause data consistency issues due to its nature. As described in this blog, there are multiple ways the client and server can get misaligned about the processing state. In such situations, clients will mostly retry automatically, which could result in the same data being processed twice on the server side. Idempotent service endpoints are required in such scenarios, to avoid duplicate data entries. Logic Apps connectors that provide upsert functionality are very helpful in these cases.
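As a hedged illustration of the idea (plain C#, not Logic Apps syntax), the sketch below shows why an upsert keyed on a business identifier makes processing idempotent: a retried message overwrites the same record instead of creating a duplicate. The Order record and OrderStore class are hypothetical.

    using System;
    using System.Collections.Generic;

    public record Order(string OrderNumber, decimal Amount);

    public class OrderStore
    {
        private readonly Dictionary<string, Order> _ordersByNumber = new();

        // Upsert: processing the same message twice leaves exactly one record behind.
        public void Upsert(Order order) => _ordersByNumber[order.OrderNumber] = order;

        public int Count => _ordersByNumber.Count;
    }

    public static class IdempotenceDemo
    {
        public static void Main()
        {
            var store = new OrderStore();
            var message = new Order("ORD-001", 42.50m);

            store.Upsert(message);
            store.Upsert(message); // simulated client retry after a lost HTTP response

            Console.WriteLine(store.Count); // 1, not 2
        }
    }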

6. Have a clear error handling strategy

With the rise of cloud technology, exception and error handling become even more important. You need to cope with failure when connecting to multiple on-premises systems and cloud services. With Logic Apps, retry policies are your first resort for building resilient integrations. You can configure a retry count and interval on every action; there's no support for exponential back-off or the circuit breaker pattern. In case the retry policy doesn't solve the issue, it's advised to return a clear error description within sync integrations and to ensure a resumable workflow within async integrations. Read here how you can design a good resume / resubmit strategy.

7. Ensure decent monitoring

Every IT solution benefits from good monitoring. It provides visibility and improves the operational experience for your support personnel. If you want to expose business properties within your monitoring, you can use Logic Apps custom outputs or tracked properties. These can be consumed via the Logic Apps Workflow Management API or via OMS Log Analytics. From an operational perspective, it's important to be aware that there is an out-of-the-box alerting mechanism that can send emails or trigger Logic Apps in case a run fails. Unfortunately, Logic Apps has no built-in support for Application Insights, but you can leverage extensibility (a custom API App or Azure Function) to achieve this. If your integration spans multiple Logic Apps, you must provide for correlation in your monitoring / tracing! Find more details about monitoring in Logic Apps here.

8. Use async wherever possible

Solid integrations are often characterized by asynchronous messaging. Unless the business requirements really demand a request/response pattern, try to implement your integrations asynchronously. This comes with the advantage that you introduce real decoupling, both from a design and a runtime perspective. Introducing a queueing system (e.g. Azure Service Bus) in fire-and-forget integrations results in highly scalable solutions that can handle an enormous amount of messages. Retry policies in Logic Apps must have different settings depending on whether you're dealing with an async or a sync integration. Read more about it here.

9. Don't forget your integration patterns

Whereas BizTalk Server forces you to design and develop against specific integration patterns, Logic Apps is more intuitive and easier to use. This could come with a potential downside: you forget about integration patterns, because they are not suggested by the service itself. As an integration expert, it's your duty to determine which integration patterns should be applied to your interfaces. Loose coupling is common for enterprise integration. You can, for example, introduce Azure Service Bus, which provides a publish/subscribe architecture. Its message size limitation can be worked around by leveraging the Claim Check pattern, with Azure Blob Storage. This is just one example of introducing enterprise integration patterns.

10. Apply application lifecycle management (ALM)

The move to a PaaS architecture should be done carefully and must be governed well, as described here. Developers should not have full access to the production resources within the Azure portal, because the change of one small setting can have an enormous impact. Therefore, it's very important to set up ALM to deploy your Logic App solutions throughout the DTAP street. This ensures uniformity and avoids human deployment errors. Check this video to get a head start on continuous integration for Logic Apps and read this blog on how to use Azure Key Vault to retrieve passwords within ARM deployments. Consider ALM an important aspect of your disaster recovery strategy!

Conclusion

Yes, we can! Logic Apps really is a fit for enterprise integration, if you know what you're doing! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The Logic Apps framework is a truly amazing and stable platform that brings a whole range of new opportunities to organizations. The way you use it should depend on the type of integration you are facing!

Interested in more?  Definitely check out this session about building loosely coupled integrations with Logic Apps!

Any questions or doubts? Do not hesitate to get in touch!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Tuesday, June 6, 2017 5:00 PM

by Stijn Moreels

Several things became clear to me when studying CI. One of those things is that everything is based on the principle of automation. The moment you start thinking “I can’t automate this”: that’s the moment you should ask yourself if that is really the case.

Introduction

Before I read the book about Continuous Integration by Paul Duvall, Stephen M. Matyas III, and Andrew Glover, I thought that CI meant that we just create a deployment pipeline in which we can easily automate the deployment of our software. That, and the fact that developers integrate continuously with each other.

I’m not saying that it’s a wrong definition, I’m saying that it might be too narrow for what it really is.

Thank you, Paul, Stephen, and Andrew, for the inspiring book and the motivation to write this post.

Automation

Several things became clear to me when studying CI. One of these things is that everything is based on the principle of automation. The moment you start thinking “I can’t automate this”: that’s the moment you should ask yourself if that is really the case.

CI is all about automation. We automate the Compilation (different environments, configurations), Testing (unit, component, integration, acceptance…), Inspection (coverage, complexity, technical debt, maintainability index…), and Documentation (code architecture, user manual, schemas…).

We automate the build that runs all these steps, we automate the feedback we get from it, …

You can automate almost everything. Things that you can’t automate are, for example, Manual Testing. The reason is that the definition of manual testing is that you let a human test your software. You let the human decide what to test. You can in fact automate the environment in which this human must test the software, but not the testing itself (otherwise it wouldn’t be called “manual” testing).

That’s what most intrigued me when studying CI - the automation. It makes you think of all those manual steps you must take to get your work done. All those tiny little steps that by itself aren’t meaning much but are a big waste if you see them all together.

If you always must build your software locally before committing, could we then just place the commit commands at the end of our build script?

Building

It’s kind of funny when people talk about “building” software. When some people say: “I can’t build the software anymore”; don’t always mean “build”; they mean “compile”. In the context of Continuous Integration, the “compile” step is only the first step of the pipeline but it’s sometimes the most important step to people. Many think of it as:

“If it compiles == it works”

When you check out some code and the Build fails (build, not compilation), that could mean several things: failed Unit Tests, missing Code Coverage, maximum Cyclomatic Complexity exceeded, … but also a compilation failure.

In the next paragraphs, when I talk about a “build” I’m talking in the context of CI and don’t mean “compile”.

Continuous Building Software

Is your build automated?
Are your builds under 10 minutes?
Are you placing the tasks that are most likely to fail at the beginning of your build?
How often do you run your integration builds? Daily? Weekly? At every change (continuously)?

  • Every developer should have the ability to run (on demand) a Private Build on his or her machine.
  • Every project should have the ability to run (on demand, polled, or event-driven) an Integration Build that includes slower tasks (integration/component tests, performance/load tests…).
  • Every project should have the ability to run (on demand or scheduled) a Release Build to create deployable software (typically at the end of the iteration), which must include the acceptance tests.

There are tremendous build scripting tools available to automate these kinds of things. NAnt, Psake, FAKE, Cake… are a few (I use FAKE).
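To give a hedged idea of what such a build script can look like, here is a minimal Cake (C#) sketch; the solution path, configuration and NUnit test assembly pattern are assumptions. The fast, failure-prone compile and unit test tasks run first, as suggested above.

    // build.cake
    #tool "nuget:?package=NUnit.ConsoleRunner"

    var target = Argument("target", "Default");

    Task("Compile")
        .Does(() =>
    {
        // Compile the (assumed) solution in Release configuration.
        MSBuild("./src/MySolution.sln", settings => settings.SetConfiguration("Release"));
    });

    Task("Test")
        .IsDependentOn("Compile")
        .Does(() =>
    {
        // Run the unit tests right after compilation: the tasks most likely to fail come early.
        NUnit3("./src/**/bin/Release/*.Tests.dll");
    });

    Task("Default")
        .IsDependentOn("Test");

    RunTarget(target);

An Integration or Release Build would chain additional tasks (inspection, acceptance tests, packaging, deployment) onto the same dependency graph.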

Continuous Preventing Development/Testing

Are your tests automated?
Are you writing a test for every defect?
How many asserts per test? Limit to one?
Do you categorize your tests?

“Drive to fix the defect and prevent from reoccurring”

Many other posts discuss the Test-First and Test-Driven mindset and the reasons behind it; so, I will not discuss this here. What I will discuss is the reaction people have to a failing test in your build.

A failed build should trigger a “Stop the presses” event within the team. Everyone should be concerned about the failure and should help each other to make the build succeed again as quickly as possible. Fixing a failed build should be the responsibility of the team and not (only) of the person that broke the build.

But what do you do when the build failed? What reaction should you have?

First, expose the defect by writing a test that passes. When that new test passes, you have proven the defect and can start fixing it. Note that we don’t write a failing test!

There are three reasons why you should write a test that passes for a defect (we’re using Test-Driven Development, right?):

  1. It’s difficult to write a failing test that uses the assertion correctly, because the assertion may not be added until the test no longer fails, which means you don’t have a test that passes but merely a test that’s not failing.
  2. You’re guessing what the fix should alter in behavior == assumption.
  3. If you have to fix the code being tested, you have a failing test that works, but one that doesn’t verify the behavioral change.

To end the part about testing, let me be clear on some points that many developers fail to grasp: the different kinds of software tests. I have encountered several definitions of these tests, so I merge them here for you. I think the most important part is that you cover all these kinds of aspects, not whether you choose to call them Acceptance Tests or Functional Tests (a small categorization sketch follows the list):

  • Unit Tests: test the smallest possible “units” of code with no external dependencies (including file system, database…), written by programmers - for programmers, specify the software at the lowest level…
    Michael Feathers has some Unit Test Rulz that specify whether a test can be seen as a Unit Test.
  • Component Tests: encapsulate business rules (could include external dependencies), …
  • Integration Tests: don’t encapsulate business rules (could include external dependencies), test how components work together, Plumbing Tests, testing architectural structure, …
  • Acceptance Tests (or Functional Tests): written by business people, define the definition of “done”, purpose to give clarity, communication, and precision, test the software as the client expects it (Given > When > Then structure), …
  • System Tests: test the entire system, could sometimes overlap with the acceptance tests, test the system from a developer perspective…
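To make the "do you categorize your tests?" question concrete, here is a small hedged sketch using xUnit traits in C#; the category names simply mirror the list above and the class under test is hypothetical. Categorizing like this lets the build run the fast suites first and the slower ones later.

    using Xunit;

    // Hypothetical class under test.
    public static class PriceCalculator
    {
        public static int Total(int unitPrice, int quantity) => unitPrice * quantity;
    }

    public class PriceCalculatorTests
    {
        [Fact]
        [Trait("Category", "Unit")]
        public void Calculates_total_without_external_dependencies()
        {
            // Unit test: smallest possible unit, no file system or database involved.
            Assert.Equal(30, PriceCalculator.Total(unitPrice: 10, quantity: 3));
        }

        [Fact]
        [Trait("Category", "Integration")]
        public void Components_work_together()
        {
            // Integration test: verifies the plumbing between components (details omitted here).
            Assert.True(true);
        }
    }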

Continuous Inspection

Can you show the current amount of code complexity?
Performing automated design reviews?
Monitoring code duplication?
Current code coverage?
Produce inspection reports?

It wouldn’t surprise you that Code Inspection is maybe not the most “sexy” part of software development (is Code Testing sexy?). But nonetheless it’s a very important part of the build.

Try asking some projects what their current Code Coverage is. Maintainability Index? Technical Debt? Duplication? Complexity?...

All those elements are so easily automated, but so few teams adopt this mindset of Continuous Inspection. These elements are a good starting point.

Continuous Deployment

Can you rollback a release?
Are you labelling your builds?
Deploy software with a single command?
Deploy with different environments (configuration)?
How do you handle fixes after deployment?

At the end of the pipeline (in a Release Build), you could trigger the deployment of the project. Yes, you should include the Acceptance Tests in here because this is the last step before the actual deployment.

The deployment itself should be done with one “Push on the Button”; as simple as that. In Agile projects, the deployment of the software is already done at the very beginning of the project. This means that the software is placed at the known deployment target as quickly as possible.

That way the team gets feedback as quickly as possible on how the software acts in “the real world”.

Continuous Feedback

When you deploy, build, test, … something, wouldn’t you want to know as quickly as possible what happened? I certainly do.

One of the first things I always do when starting a project is checking if I (and the team) get the right notifications. As a developer, I want to know as quickly as possible when a build succeeds/fails. As an architect, you want to know what the current documentation of the code base is and what the code looks like in schemas; as a project manager, you may want to know if the acceptance tests succeeded, so the client gets what he/she wants…

Each function has its own responsibilities and its own reason to want feedback on things. You should be able to give them this feedback!

I use Catlight for my build feedback, work item tracking, release status... Maybe this tool will also support pull request notifications in the future.

Some development teams have an actual big colorful lamp that indicates the current build status: Red = Failed, Green = Successful and Yellow = Investigating. Some lamps turn lighter/darker red if the build stays in a “failed” state for too long.

Conclusion

Don’t call this a full-CI summary because it is certainly not. See this as a quick introduction of how CI can be implemented in a software project with the high-level actions in place and what you can improve in your project automation process. My motto is that anything can be improved and so, be more automated.

I would also suggest you read the book I talked about and/or check the site of ThoughtWorks for more information on the recent developments in the CI community.

Start integrating your software to develop software with less risk and higher quality. Make it so automated that you just must “Push the Button”: The Integrate Button.

Categories: Technology
written by: Stijn Moreels

Posted on Friday, June 2, 2017 3:55 PM

by Toon Vanhoutte

Quite often I get questions on how to modify JSON collections or arrays within Logic Apps. Before reaching out to Azure Functions, as an extensibility option, I prefer to use the out-of-the-box available Logic Apps functionality. This blog post contains some tips and tricks about dealing with collections.

Remove and rename properties of items within a collection

This example explains how you can easily remove and rename properties within the repeated items of a collection. To do so, we leverage the Data Operations - Select action. Below you can find a simple example.

Input collection

This is the sample input collection.

Select action

Configure the Data Operations - Select action so that only two properties remain, while being renamed. The output array will contain the same number of items, whilst the items will only include Item and OrderNumber as properties. This offers a great user experience for performing simple transformations on collections.
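Designer aside, the effect of the Select action is simply a projection over the array. As a hedged illustration of the semantics (plain C# with LINQ, not Logic Apps syntax), the sketch below performs the same rename-and-reduce operation; the input property names are assumptions.

    using System;
    using System.Linq;

    public static class SelectSketch
    {
        // Hypothetical shape of the incoming items.
        public record InputItem(string ProductName, string Order, int Quantity);

        // Shape after the Select action: fewer properties, renamed.
        public record OutputItem(string Item, string OrderNumber);

        public static void Main()
        {
            var input = new[]
            {
                new InputItem("Whiteboard", "ORD-001", 2),
                new InputItem("Desk", "ORD-002", 1)
            };

            // Same idea as the Data Operations - Select action:
            // one output item per input item, keeping only the properties you map.
            var output = input.Select(i => new OutputItem(Item: i.ProductName, OrderNumber: i.Order));

            foreach (var item in output)
                Console.WriteLine($"{item.Item} - {item.OrderNumber}");
        }
    }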

Output Collection

This is the resulting output collection.

Reduce the number of items within a collection

This sample explains how you can easily reduce the number of items within a collection. The following expressions can be used to perform this job, without looking at the content of the items (a short sketch of their semantics follows the list):

  • @first: returns the first item of the collection
  • @last: returns the last item of the collection
  • @take: returns the first x items of the collection
  • @skip: removes the first x items and returns the remaining items of the collection
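As a hedged illustration of these semantics (plain C# with LINQ, not Logic Apps syntax), using a hypothetical collection of quantities:

    using System;
    using System.Linq;

    public static class ReduceSketch
    {
        public static void Main()
        {
            // Hypothetical input collection.
            int[] quantities = { 20, 150, 75, 300, 100 };

            var first = quantities.First();   // like @first: the first item (20)
            var last  = quantities.Last();    // like @last: the last item (100)
            var take2 = quantities.Take(2);   // like @take: the first 2 items (20, 150)
            var skip2 = quantities.Skip(2);   // like @skip: everything except the first 2 items (75, 300, 100)

            Console.WriteLine(string.Join(", ", skip2));
        }
    }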

In case you want to reduce the number of items in a collection, based on their content, you can use the Data Operations - Filter action. Below you can find a simple example.

Input collection

This is the sample input collection.

Filter action

Configure the Data Operations - Filter action so that only the items that have a quantity lower than or equal to 100 are included in the output collection. This allows you to quickly take a subset of a collection, based on specific filter criteria. With the available Logic Apps expressions, you can write quite complex and powerful filter queries.

Output Collection

This is the resulting output collection.

Remove duplicate items from a collection

This section explains how you can easily remove duplicate items from a collection. You can use the @union expression to achieve this, although it's not originally designed for that. Below you can find a simple example.

Input collection

This is the sample input collection.

Compose action

Configure a Data Operations - Compose action that uses the @union expression. This expression merges the content of two input collections, but it also has a side effect: only one of each set of duplicate items is included in the output collection. So, as a trick, you can use @union(triggerBody(),triggerBody()) to remove duplicate items from a collection. I pass the input collection twice to the @union expression, because I didn't find out how to pass an empty collection.
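The same trick is easy to picture outside Logic Apps: a set union of a collection with itself leaves each distinct item exactly once. The C# sketch below only illustrates the semantics; in the Logic App itself you keep using the @union expression as described above.

    using System;
    using System.Linq;

    public static class UnionSketch
    {
        public static void Main()
        {
            string[] orders = { "ORD-001", "ORD-002", "ORD-001", "ORD-003" };

            // Union of the collection with itself keeps each duplicate item only once,
            // just like the @union(triggerBody(), triggerBody()) trick in the Logic App.
            var deduplicated = orders.Union(orders);

            Console.WriteLine(string.Join(", ", deduplicated)); // ORD-001, ORD-002, ORD-003
        }
    }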

If you don't want any of the duplicate items to be included in the output collection, you could use the reverse @intersection expression in a similar way.

Output Collection

This is the resulting output collection, using the @union expression.

Conclusion

Logic Apps already has several powerful mechanisms to modify collections in a user-friendly way. The product team is constantly adding enhancements to the framework, so I'm convinced that we will enjoy additional ways to handle arrays in the near future!

Do you also know a trick about handling Logic App collections? Don't hesitate to share it in the comments section below! Are you looking for more info on transforming JSON objects in Logic Apps? Definitely have a look over here!

Thank you
Toon

Categories: Azure
written by: Toon Vanhoutte

Posted on Thursday, June 1, 2017 3:30 PM

by Stijn Moreels

One of the mind-blowing development techniques that radically changed the programming world is the Test-Driven Development approach introduced by Kent Beck.

But... writing tests before we start coding? Who will do that?

Introduction

One of the mind-blowing development techniques that radically changed the programming world is Test-Driven Development.

Writing tests before we start coding? Who will do that?

I must admit that I personally wasn’t really convinced by the idea; maybe because I didn’t quite understand the reason we should write our tests first and the way we should do it. Can you have a bad Software Design with TDD? Can you break your Architecture with TDD? Yes! TDD is a discipline that you should be following. And like any discipline, you must hold to a certain number of requirements. At the end of the day, it’s YOUR task to follow this simple mindset.

In this article, I will talk about the Mindset introduced by Kent Beck when writing in a Test-Driven Development environment.

Too many developers don’t see the added-value about this technique and/or don’t believe it works.

TDD works!

"Testing is not the point; the point is about Responsibility"
-Kent Beck

Benefits

Because so many of us don’t see the benefits of TDD, I thought it would make sense to specify them for you. Robert C. Martin has inspired me with this list of benefits.

Increased Certainty

One of the benefits is that you’re certain that it works. Users have more responsibility in a Test-Driven team, because they will write the Acceptance Tests (with help, of course), and they will define what the system must do. By doing so, you’re certain that what you write is what the customer wants.

The amount of uncertainty that builds up by writing code that isn’t exactly what the customer wants is called: The Uncertainty Principle. We must always eliminate this uncertainty.

By writing tests first; you can tell your manager and customer: “Yes, it will work; yes, it’s what you want”.

Defect Reduction

Before I wrote in a Test-First mindset, I always thought that my code was full of bugs and didn’t handle unexpected behavior.
Maybe it was because I’m very certain of myself; but also because I wrote the tests after the code and was testing what I just wrote, not what I wanted to test.

This increases the Fake Coverage of your code.

Increased Courage

So many developers are “afraid” to change something in their code base. They are afraid to break something. Why are they afraid? Because they don’t have tests!

“When programmers lose the fear of cleaning; they clean”
- Robert C. Martin

A professional developer doesn’t allow that his/her code rots; so, you must refactor with courage.

In-Sync Documentation

Tests are the lowest form of documentation of your code base; they are always 100% in sync with the current implementation in the production code.

Simple Design

TDD is an analysis/design technique and not necessarily a development technique. Tests force you to think about good design. Certainly if you write them BEFORE you write the actual implementation. If you do so, you’re writing them in offense and not in defense (which is what you do when you write them afterwards).

Test-First also helps you think about the Simplest thing that could possibly work, which automatically helps you to write simple, well-structured code.

Test-First Mindset

When you’re introduced into the Test-First methodology, people often get Test Infected. The amount of stress that’s taking from you is remarkable. You refactor more aggressively your code without any fear that you might break something.

Test-Driven Development is based on this very simple idea: first write your test, and only then write your production code. People underestimate the part “first write your test”. When you’re writing your tests, you’re solving more problems than you think.

Where should I place this code? Who’s responsible for this logic? What names should I use for my methods, classes…? What result must I get from this? What isn’t valid data? What will my class interface look like? …

After trying to use TDD in my daily practice, I found myself always asking the same question:

“I would like to have a … with … and …”

Such a simple idea changed my vision of development so radically, and I’m convinced that by using this technique, you’re writing simpler code because you always think about:

“What’s the simplest thing that could make this test work”

If you find that you can implement something that isn’t the right implementation, write another test to expose this behavior you want to implement.

TDD is - in a way - a psychological methodology. What they say is true: you DO get addicted to that nice green bar that indicates that your tests all pass. You want that color to be as green as possible, you want it always green, you want it to run as fast as possible so you can quickly see it’s green…

To be a Green-Bar-Addict is a nice thing.

Kent Beck Test-Patterns

It felt a little weird to just list all the patterns Kent Beck introduced. Maybe you should just read the book Test-Driven Development: By Example; he’s a very nice writer and I learned a lot from the examples, patterns and ideas.

What I will do is give you some basic patterns I will use later in the example, and some patterns that were very eye-opening for me the first time.

Fake It

When Kent talked about “What’s the simplest thing that could work”, I was thinking about my implementation, but what he meant was “What’s the simplest thing that could work for this test”.

If you’re testing that 2 x 3 is 6 than when you implement it, you should Fake It and just return the 6.

Very weird at first, especially because the whole Fake It approach is based on duplication, the root of all software evil. Maybe that’s the reason experienced software engineers have problems with this approach.

But it’s a very powerful approach. Using this technique, you can quickly get the bar green (testing bar). And the quicker you get that bar green, the better. And if that means you must fake something; then you should do that.

Triangulation

This technique I found very interesting. This approach really drives the abstraction of your design. When you find yourself not knowing what to do next, or how you should go further with your refactoring; write another test to support new knowledge of the system and the start of new refactorings in your design.

Especially when you’re unsure what to do next. 

If you’re testing that 2 x 3 is 6 than in a Triangulation approach you will first return 6 and only change that if you’re testing again but then for 2 x 2 is 4.

Obvious Implementation

Of course: when the implementation is so simple, so obvious, … then you could always implement it directly after your test. But remember that this approach is only the second option after Fake It and Triangulation.

When you find yourself taking steps that are too big, you can always take smaller steps.

If you’re testing that 2 x 3 is 6, in an Obvious Implementation approach you will just write 2 x 3 right away.

By Example

I thought it would be useful to show you an example of the TDD workflow. Since everyone is so stoked about test-driving Fibonacci, I thought it would be fun to test-drive another integer sequence.

Let’s test-drive the Factorial Sequence!

What happens when we take the factorial of 4, for example? 4! = 4 x 3 x 2 x 1 = 24

Test

But let’s start with something super simple:

Always start with the same sentence: “I would like to have a…”. I would like to have a method called Factorial to which I can send an integer and that will calculate the factorial of that integer for me.
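The original code screenshots are not included here, so what follows are hedged reconstructions in C#. Assuming an xUnit test project, a production class named MathUtils and a first test for the simplest value, the test could look like this (it won’t compile yet, which is exactly the point of the next step):

    using Xunit;

    public class FactorialTests
    {
        [Fact]
        public void Factorial_Of_One_Is_One()
        {
            // "I would like to have a method called Factorial..."
            int result = MathUtils.Factorial(1);

            Assert.Equal(1, result);
        }
    }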

Now we have created a test before anything about factorial is implemented.

Compile

Now that we have the test, let’s start by making our code compile again.
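A minimal, assumed version of the production stub that makes everything compile again, without any real behavior:

    using System;

    public static class MathUtils
    {
        public static int Factorial(int number)
        {
            // Just enough to compile; the test will still fail when we run it.
            throw new NotImplementedException();
        }
    }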

Let’s test this:

Hooray! We have a failed test == progress!

Implement

First Steps

What’s the simplest thing that we could write in order that this test will run?

Hooray! Our test passed, we can go home, right?

A Bit Harder

What’s next? Let’s check. What happens if we would test for another value?

I know, I know. Duplication, duplication, duplication. But we’re testing now, right? We’re not yet at the last step of the TDD mantra.

What is the simplest thing we could change to make this test pass?
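A sketch of that simplest change: one branch per known test value, still faking it.

    public static class MathUtils
    {
        public static int Factorial(int number)
        {
            if (number == 1)
                return 1;

            return 2;
        }
    }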

Yes, I’m feeling good right now. A nice green bar.

One Step Before Generalize

Let’s add just another test, a bit harder this time. But these duplication is starting to irritate me; you know the mantra: One-Two-Three-Refactor? This is the third time, so let’s start refactoring!

Ok, what’s the simplest thing?

Generalize

Ok, we could add if/else statements all day long, but I think it’s time for some generalization. Look at what we’ve been implementing. We write 24, but do we mean 24?

Remembering Factorial, we mean something else:

All still works, yeah. Now, we don’t actually mean 4 by that 4, do we? We actually mean the original number:

And we don’t actually mean 3, 2, and 1 by 3, 2 and 1, we actually mean the original number each time mins one. So, actually the Factorial of the 3! could you say, no?

Let’s try:

Wow, still works. Wait, isn’t that if-statement redundant? 2 x 1! == 2, right?

Exploration

Now, the factorial of 0 is also 1. We haven’t tested that, have we? We have found a boundary condition!

This will result in an endless loop, because we will try to take the factorial of a negative number; and factorial is only defined for positive numbers (the formula with negative integers would result in a division by zero, blocking us from calculating a factorial value for these negative integers).

Again, simplest thing that could work?

Now, the last step of TDD is always Remove Duplication which in this case is the 1 that’s used two times. Let’s take care of that:

Hmm, someone may have noticed something. We could actually remove the other if-statement that checks for 1 if we adapt the check for 0. That check will then return 1 for us in the recursive call:
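A reconstruction of that final, simplified version (assumed, but consistent with the reasoning above):

    public static class MathUtils
    {
        public static int Factorial(int number)
        {
            // Returns 1 for 0 and for any negative input, and ends the recursion;
            // Factorial(1) now works through the recursive call: 1 x Factorial(0) == 1.
            if (number <= 0)
                return 1;

            return number * Factorial(number - 1);
        }
    }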

By doing this, we have also ruled out all the other negative numbers passed into the method.

Conclusion

Why oh why are people so skeptical about Test-Driven Development? If you look at the way you use it in your daily practice, you find yourself writing simpler and more robust code.

TDD is actually a Design Methodology and not a Development Methodology. The way you think about the design, the names, the structure… all of that is part of the design process of your project. The tests that you end up with are the added value of this approach; they make sure that you can refactor safely and are always certain of your software.

Start trying it today in your daily practice, so you stop thinking about: How will you implement it? but rather:

How will you test it?

 

Categories: Technology
written by: Stijn Moreels

Posted on Wednesday, May 31, 2017 3:48 PM

by Sam Vanhoutte

What can we learn from the WannaCry ransomware attack and the way we tackle Internet of Things (IoT) projects? That we had better invest enough resources to make, and keep, our smart devices safe.

I was at the airport of Seattle, returning from the Microsoft Build Conference, when I saw the outbreak of the WannaCry ransomware trending on Twitter. There was talk of hospitals that couldn’t operate anymore, government departments unable to function, public transport issues... All consequences of the virus that spread from computer to computer, looking for new victims. The consequences for many IoT scenarios around the world played through my mind. I also remembered the conversations I've had with partners and clients over the past years about investing time and money in the security and safe keeping of IoT devices.

The WannaCry story clearly demonstrated that there was a crushing responsibility for various IT service companies. They should have kept computer systems up to date with a supported Windows version and the latest security updates. Very often, time, budget or change management is the reason why such updates did not happen. "If it’s not broken, don’t fix it." Such thinking left the back door to several critical systems wide open, which made things break a lot quicker than anyone assumed.

That's why, starting with Windows 10, Microsoft has changed the default 'update policy'. Security and system updates are automatically installed, giving customers a Windows system that is up to date by default. However, the pushing of automatic updates is a major problem with most IoT systems available today.

IoT security with holes

Very often, devices - from smart scales to internet thermostats to even healthcare devices - are not equipped to receive security updates. The software often does not allow it, or the computing power of the device is too limited to deal with the update logic.

In most cases, the users of such a device don’t think about the fact that their gadget (or more dangerously, their health device) is actually a mini computer that may have a security issue. If security updates cannot be pushed by default through the manufacturer’s IoT platform, you can assume that the device will never be updated during its entire lifecycle. To make matters worse, such devices often have a long lifespan. Thus, the encryption algorithms used today will no longer prove sufficient to keep sensitive data encrypted in the foreseeable future.

Companies should therefore always supply an update mechanism in their IoT solution. This makes the initial investment higher, but it also offers an undeniable advantage. For one thing, pushing updates can prevent your brand from getting negative exposure in the news as the result of a (serious) vulnerability. But you can also send new pieces of functionality to those devices. This keeps the devices relevant and enables you to offer new features to your customers.

By taking the responsibility for updating (and thus securing) such systems away from the end user, we create a much safer internet. Because no one wants his smart toaster (and its internet connection) used to enable drug trafficking, child pornography or terrorism.

 

Note: This article was first published via Computable on 30 May 2017 (in Dutch) 

Categories: Opinions
Tags: IoT
written by: Sam Vanhoutte