Codit Blog

Posted on Thursday, June 1, 2017 3:30 PM

by Stijn Moreels

One of the mind-blowing development techniques that radically changed the programming world is the Test-Driven Development approach introduced by Kent Beck.

But... writing tests before we start coding? Who will do that?

Introduction

One of the mind-blowing development techniques that radically changed the programming world is Test-Driven Development.

Writing tests before we start coding? Who will do that?

I must admit that I personally wasn’t really convinced by the idea; maybe because I didn’t quite understand the reason we should write our tests first, and the way we should do it. Can you have a bad Software Design with TDD? Can you break your Architecture with TDD? Yes! TDD is a discipline that you should be following, and like any discipline you must hold yourself to a certain number of requirements. At the end of the day, it’s YOUR task to follow this simple mindset.

In this article, I will talk about the Mindset introduced by Kent Beck when writing in a Test-Driven Development environment.

Too many developers don’t see the added value of this technique and/or don’t believe it works.

TDD works!

"Testing is not the point; the point is about Responsibility"
- Kent Beck

Benefits

Because so many of us don’t see the benefits of TDD, I thought it would make sense to specify them for you. Robert C. Martin has inspired me with this list of benefits.

Increased Certainty

One of the benefits is that you’re certain that it works. Users have more responsibility in a Test-Driven team, because they will write the Acceptance Tests (with help, of course) and they will define what the system must do. By doing so, you’re certain that what you write is what the customer wants.

The amount of uncertainty that builds up by writing code that isn’t exactly what the customer wants is called the Uncertainty Principle. We must always eliminate this uncertainty.

By writing tests first, you can tell your manager and customer: “Yes, it will work; yes, it’s what you want”.

Defect Reduction

Before I wrote in a Test-First mindset, I always thought that my code was full of bugs and didn’t handle unexpected behavior.
Maybe that was because I wasn’t very certain of my code; but also because I wrote the tests after the code and was testing what I had just written, not what I wanted to test.

This increases the Fake Coverage of your code.

Increased Courage

So many developers are “afraid” to change something in their code base. They are afraid to break something. Why are they afraid? Because they don’t have tests!

“When programmers lose the fear of cleaning, they clean”
- Robert C. Martin

A professional developer doesn’t allow his or her code to rot; so, you must refactor with courage.

In-Sync Documentation

Tests are the lowest form of documentation of your code base, always 100% in sync with the current implementation in the production code.

Simple Design

TDD is an analysis/design technique and not necessarily a development technique. Tests force you to think about good design; certainly if you write them BEFORE you write the actual implementation. If you do so, you’re writing them in offense and not in defense (as you are when you write them afterwards).

Test-First also helps you think about the Simplest Thing That Could Possibly Work, which automatically helps you write simply structured, well-designed code.

Test-First Mindset

When you’re introduced to the Test-First methodology, you often get Test Infected. The amount of stress it takes away from you is remarkable. You refactor your code more aggressively without any fear that you might break something.

Test-Driven Development is based on the very simple idea of first writing your test, and only then writing your production code. People underestimate the part “first write your test”. When you write your tests, you’re solving more problems than you think.

Where should I place this code? Who’s responsible for this logic? What names should I use for my methods, classes…? What result must I get from this? What isn’t valid data? What will my class interface look like? …

After trying to use TDD in my daily practice, I found myself always starting from the same sentence:

“I would like to have a … with … and …”

Such a simple idea changed my vision of development so radically, and I’m convinced that by using this technique you’re writing simpler code, because you always think about:

“What’s the simplest thing that could make this test work?”

If you find that you can implement something that isn’t the right implementation, write another test to expose the behavior you actually want.

TDD is - in a way - a psychological methodology. What they say is true: you DO get addicted to that nice green bar that indicates that all your tests pass. You want that color as green as possible, you want it always green, you want it to run as fast as possible so you can quickly see it’s green…

To be a Green-Bar-Addict is a nice thing.

Kent Beck Test-Patterns

It felt a little weird to just state all the patterns Kent Beck introduced. Maybe you should just read the book Test-Driven Development by Example; he’s a very nice writer and I learned a lot from the examples, patterns and ideas.

What I will do is give you some basic patterns I will use later in the example, and some patterns that were very eye-opening for me the first time.

Fake It

When Kent talked about “What’s the simplest thing that could work”, I was thinking about my implementation but what he meant was “What’s the simplest thing that could work for this test”.

If you’re testing that 2 x 3 is 6, then when you implement it, you should Fake It and just return the 6.
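As a rough sketch (assuming xUnit and a hypothetical Calculator class), Fake It could look like this:

    using Xunit;

    public class CalculatorTests
    {
        [Fact]
        public void Multiplies_Two_By_Three()
        {
            Assert.Equal(6, Calculator.Multiply(2, 3));
        }
    }

    public static class Calculator
    {
        // Fake It: return the constant the single test expects; the duplication is removed later.
        public static int Multiply(int x, int y) => 6;
    }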

Very weird at first, especially because the whole Fake It approach is based on duplication; the root of all software evil. Maybe that’s the reason experienced software engineers are having problems with this approach.

But it’s a very powerful approach. Using this technique, you can quickly get the (test) bar green. And the quicker you get that bar green, the better. And if that means you must fake something, then you should do that.

Triangulation

This technique I found very interesting. This approach really drives the abstraction of your design. When you find yourself not knowing what to do next, or how you should go further with your refactoring; write another test to support new knowledge of the system and the start of new refactorings in your design.

Especially when you’re unsure what to do next. 

If you’re testing that 2 x 3 is 6, then in a Triangulation approach you will first just return 6, and only generalize when you test again, but this time that 2 x 2 is 4.
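Sketched with the same hypothetical Calculator: the second example can no longer be satisfied by the faked constant, so the real abstraction appears.

    [Fact]
    public void Multiplies_Two_By_Two()
    {
        Assert.Equal(4, Calculator.Multiply(2, 2));
    }

    public static class Calculator
    {
        // Two different examples force us to generalize the faked 6 into real multiplication.
        public static int Multiply(int x, int y) => x * y;
    }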

Obvious Implementation

Of course: when the implementation is so simple, so obvious, … then you could always implement it directly after your test. But remember that this approach is only the second option, after Fake It and Triangulation.

When you find yourself taking steps that are too big, you can always take smaller steps.

If you’re testing that 2 x 3 is 6, in an Obvious Implementation approach you will just write 2 x 3 right away.

By Example

I thought it would be useful to show you an example of the TDD workflow. Since everyone is so stoked about test-driving Fibonacci, I thought it would be fun to test-drive another integer sequence.

Let’s test-drive the Factorial Sequence!

What happens when we factorial 4 for example? 4! = 4 x 3 x 2 x 1 = 24

Test

But let’s start with something super simple:

Always start with the same sentence: “I would like to have a…”. I would like to have a method called Factorial to which I can send an integer and which will calculate the factorial of that integer for me.
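A minimal sketch of what that first test could look like, assuming xUnit and a static Calculation class (the class and test names are mine, and I start with the factorial of 1):

    using Xunit;

    public class FactorialTests
    {
        [Fact]
        public void Factorial_Of_One_Is_One()
        {
            Assert.Equal(1, Calculation.Factorial(1));
        }
    }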

Now we have created a test before anything about factorial is implemented.

Compile

Now that we have the test, let’s make our code compile again.
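The simplest way to get it compiling is an empty implementation that clearly hasn’t been written yet, for example:

    using System;

    public static class Calculation
    {
        public static int Factorial(int number)
        {
            // No real implementation yet; we only want the code to compile.
            throw new NotImplementedException();
        }
    }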

Let’s test this:

Hooray! We have a failed test == progress!

Implement

First Steps

What’s the simplest thing that we could write to make this test run?
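Following Fake It, something like this is enough:

    public static int Factorial(int number)
    {
        // Fake It: return exactly what the single test expects.
        return 1;
    }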

Hooray! Our test passed, we can go home, right?

A Bit Harder

What’s next? Let’s check. What happens if we test for another value?
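Say we add a test for the factorial of 2 (again just a sketch):

    [Fact]
    public void Factorial_Of_Two_Is_Two()
    {
        Assert.Equal(2, Calculation.Factorial(2));
    }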

I know, I know. Duplication, duplication, duplication. But we’re in the testing step now, not yet in the last (refactoring) step of the TDD mantra.

What is the simplest thing we could change to make this test pass?
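Probably something along these lines:

    public static int Factorial(int number)
    {
        if (number == 1) return 1;
        return 2;
    }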

Yes, I’m feeling good right now. A nice green bar.

One Step Before Generalize

Let’s add just another test, a bit harder this time. But this duplication is starting to irritate me; you know the mantra: One-Two-Three-Refactor? This is the third time, so let’s start refactoring!
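One way to add the harder test (the factorial of 4 should be 24) and remove the duplication in the tests at the same time is a parameterized test; whether the original walkthrough used this exact form is my assumption.

    [Theory]
    [InlineData(1, 1)]
    [InlineData(2, 2)]
    [InlineData(4, 24)]
    public void Factorial_Returns_The_Expected_Value(int number, int expected)
    {
        Assert.Equal(expected, Calculation.Factorial(number));
    }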

Ok, what’s the simplest thing?
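Still faking it, something like:

    public static int Factorial(int number)
    {
        if (number == 1) return 1;
        if (number == 2) return 2;
        return 24;
    }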

Generalize

Ok, we could add if/else-statements all day long, but I think it’s time for some generalization. Look at what we’ve been implementing. We write 24, but do we mean 24?

Remembering Factorial, we mean something else:
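So the faked 24 could become:

    public static int Factorial(int number)
    {
        if (number == 1) return 1;
        if (number == 2) return 2;
        return 4 * 3 * 2 * 1;
    }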

Everything still works, yeah. Now, we don’t actually mean 4 by 4, do we? We actually mean the original number:
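Which would give us:

    public static int Factorial(int number)
    {
        if (number == 1) return 1;
        if (number == 2) return 2;
        return number * 3 * 2 * 1;
    }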

And we don’t actually mean 3, 2 and 1 by 3, 2 and 1; we actually mean the original number each time minus one. So actually, that’s the Factorial of 3 (3!), you could say, no?

Let’s try:
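The recursive version of the sketch:

    public static int Factorial(int number)
    {
        if (number == 1) return 1;
        if (number == 2) return 2;
        // 3 * 2 * 1 is just the factorial of (number - 1).
        return number * Factorial(number - 1);
    }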

Wow, still works. Wait, isn’t that if-statement for 2 redundant? 2 x 1! == 2, right?

Exploration

Now, the factorial of 0 is also 1. We haven’t tested that, have we? We have found a boundary condition!

With the current implementation this will result in endless recursion, because we will try to take the factorial of a negative number; and factorial is only defined for non-negative numbers (the formula for negative integers would result in a division by zero, blocking us from calculating a factorial value for those negative integers).

Again, simplest thing that could work?
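Assuming the redundant check for 2 has been removed in the meantime, adding the boundary condition could look like this:

    public static int Factorial(int number)
    {
        if (number == 0) return 1;
        if (number == 1) return 1;
        return number * Factorial(number - 1);
    }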

Now, the last step of TDD is always Remove Duplication, which in this case is the 1 that’s returned in two places. Someone may already have noticed it: we can take care of that and remove the if-statement checking for 1 entirely, if we adapt the check for 0, because the recursive call will then return 1 for us:
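Which could end up looking like this:

    public static int Factorial(int number)
    {
        // 0 (and any negative number) returns 1, so the recursion always terminates
        // and Factorial(1) is covered by the recursive call: 1 * Factorial(0) == 1.
        if (number <= 0) return 1;
        return number * Factorial(number - 1);
    }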

By doing this, we also have ruled out all the other negative numbers passed inside the method.

Conclusion

Why oh why are people so skeptical about Test-Driven Development? If you look at the way you use it in your daily practice, you’ll find yourself writing simpler and more robust code.

TDD is actually a Design Methodology and not a Development Methodology. The way you think about the design, the names, the structure… all of that is part of the design process of your project. The tests that you get are the added value of this approach; they make sure that you can refactor safely and are always certain of your software.

Start trying it today in your daily practice, so that you stop thinking about “How will you implement it?” but rather:

How will you test it?

 

Categories: Technology
written by: Stijn Moreels

Posted on Wednesday, May 31, 2017 3:48 PM

by Sam Vanhoutte

What can we learn from the WannaCry ransomware attack and the way we tackle Internet of Things (IoT) projects? That we had better invest enough resources to make, and keep, our smart devices safe.

I was at the airport of Seattle, returning from the Microsoft Build Conference, when I saw the outbreak of the WannaCry ransomware trending on Twitter. There was talk of hospitals that couldn’t operate anymore, government departments unable to function, public transport issues... All consequences of the virus that spread from computer to computer, looking for new victims. The consequences for many IoT scenarios around the world played through my mind. I also remembered the conversations I've had with partners and clients over the past years about investing time and money in the security and safe keeping of IoT devices.

The WannaCry story clearly demonstrated that there was a crushing responsibility for various IT service companies. They should have kept computer systems up to date with a supported Windows version and the latest security updates. Very often, time, budget or change management is the reason why such updates did not happen. "If it’s not broken, don’t fix it." Such thinking left the back door to several critical systems wide open, which broke things a lot quicker than anyone assumed.

That's why, starting with Windows 10, Microsoft has changed the default 'update policy'. Security and system updates are automatically installed, giving customers a Windows system that is up to date by default. However, pushing automatic updates remains a major problem for most IoT systems available today.

IoT security with holes

Very often, devices - from smart scales and internet thermostats to even healthcare devices – are not equipped to receive security updates. The software often does not allow it, or the computing power of the device is too limited to deal with the update logic.

In most cases, the users of such a device don’t think about the fact that their gadget (or more dangerously, their health device) is actually a mini computer that may have a security issue. If security updates cannot be pushed by default through the manufacturer’s IoT platform, you can assume that the device will never be updated during its entire lifecycle. To make matters worse, such devices often have a long lifespan. Thus, the encryption algorithms used today will no longer prove sufficient to keep sensitive data encrypted in the foreseeable future.

Companies should therefore always supply an update mechanism in their IoT solution. This makes the initial investment higher, but it also offers an undeniable advantage. For one thing, pushing updates can prevent your brand from getting negative exposure in the news as the result of a (serious) vulnerability. But you can also send new pieces of functionality to those devices. This keeps the devices relevant and enables you to offer new features to your customers.

By taking the responsibility for updating (and thus securing) such systems away from the end user, we create a much safer internet. Because no one wants his smart toaster (and its internet connection) used to enable drug trafficking, child pornography or terrorism.

 

Note: This article was first published via Computable on 30 May 2017 (in Dutch) 

Categories: Opinions
Tags: IoT
written by: Sam Vanhoutte

Posted on Wednesday, May 24, 2017 12:00 PM

by Toon Vanhoutte

This blog post covers several ways to optimize the monitoring experience of your Logic Apps. Logic Apps provide two ways to add functional data to your workflow runs: tracked properties and outputs. This functional data can be used to improve operational experience, to find a specific run based on business data and to serve as a base for reporting. Both options are discussed in depth and compared.

Introduction

Tracked Properties

As the documentation states: Tracked properties can be added onto actions in the workflow definition to track inputs or outputs in diagnostics data. This can be useful if you wish to track data like an "order ID" in your telemetry.

Tracked properties can be added in code view to your Logic Apps actions in this way:
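A sketch of what this could look like on a Compose action (the action and property names are illustrative, not taken from the original post):

    "Initialize_Order": {
        "type": "Compose",
        "inputs": "@triggerBody()",
        "runAfter": {},
        "trackedProperties": {
            "OrderId": "@{triggerBody()['OrderId']}",
            "Customer": "@{triggerBody()['Customer']}"
        }
    }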

Outputs

As the documentation states: Outputs specify information that can be returned from a workflow run. For example, if you have a specific status or value that you want to track for each run, you can include that data in the run outputs. The data appears in the Management REST API for that run, and in the management UI for that run in the Azure portal. You can also flow these outputs to other external systems like PowerBI for creating dashboards. Outputs are not used to respond to incoming requests on the Service REST API.

Outputs can be added in code view to your Logic Apps workflow in this way. Note that you can also specify their data type:
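A sketch of such outputs in the workflow definition (property names are illustrative):

    "outputs": {
        "OrderId": {
            "type": "String",
            "value": "@{triggerBody()['OrderId']}"
        },
        "Quantity": {
            "type": "Int",
            "value": "@int(triggerBody()['Quantity'])"
        }
    }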

Runtime Behaviour

Do runtime exceptions influence tracking?

What happens if my Logic App fails? Are the business properties still tracked or are they only available in a happy scenario? After some testing, I can conclude the following:

  • Tracked properties are only tracked if the appropriate action is executed.
  • Outputs are only tracked if the whole Logic App completes successfully.

When using tracked properties, my advice is to assign them to the first action (not supported on triggers) of your Logic App, so they are certainly tracked.

Do tracking exceptions influence runtime?

Another important aspect on tracking / monitoring is the fact that it should never influence the runtime behaviour. It's unacceptable that a Logic App run fails, because a specific data field to be tracked is missing. Let's have a look what happens in case we try to track a property that does not exist, e.g.: "Reference": "@{triggerBody()['xxx']}"

In both cases the Logic App ends in a failed state, however there are some differences:

Tracked Properties

  • The action that is configured with tracked properties fails.
  • There is a TrackedPropertiesEvaluationFailed exception.

Outputs

  • All actions completed successfully, despite the fact that the Logic App run failed.
  • In the Run Details, an exception is shown for the specific output. Depending on the scenario, I've encountered two exceptions:

    >  The template language expression 'triggerBody()['xxx']' cannot be evaluated because property 'xxx' doesn't exist, available properties are 'OrderId, Customer, Product, Quantity'.

    > The provided value for the workflow parameter 'Reference' at line '1' and column '57' is not valid.

Below you can find a screen capture of the workflow details.

This is not the desired behaviour. Luckily, we can easily avoid this. Follow this advice and you'll be fine:

  • For strings: Use the question mark operator to reference potential null properties of an object without a runtime error. In case of a null reference, an empty string will be tracked. Example: "Reference": "@{triggerBody()?['xxx']}"

  • For integers: Use the int function to convert the property to an integer, in combination with the coalesce function which returns the first non-null object in the arguments passed. Example: "Quantity": "@int(coalesce(triggerBody()?['xxx'], 0))"

Azure Portal

The first place where issues are investigated is the Azure Portal. So it would be very helpful to have these business properties displayed over there. Tracked properties are nowhere to be found in the Azure Portal, but fortunately the outputs are displayed nicely in the Run Details section. There is however no option to search for a specific Logic App run, based on these outputs.

Operations Management Suite

In this post, I won't cover the Logic Apps integration with Operations Management Suite (OMS) in depth, as there are already good resources available on the web. If you are new to this topic, be sure to check out this blog post and this webcast.

Important take-aways on Logic Apps integration with OMS:

  • Only tracked properties are available in OMS, there's no trace of outputs within OMS.
  • OMS can be used to easily search a specific run, based on business properties
    > The status field is the status of the particular action, not of the complete run
    > The resource runId can be used to find back the run details in the Azure Portal

  • OMS can be used to easily create reports on the tracked business properties
  • Think about data retention. The free pricing tier gives you data retention of 7 days.

Management API

Another way to search for particular Logic Apps runs, is by using the Azure Service Management API. This might come in handy if you want to develop your own dashboard on top of Logic Apps.
The documentation clearly describes the operations available to retrieve historical Logic Apps runs in a programmatic way. This can be done easily with the Microsoft.Azure.Management.Logic library. Outputs are easier to access than tracked properties, as they reside on the WorkflowRun level. The following code snippet can serve as a starting point:
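A minimal sketch of such a starting point (not the original snippet), assuming the Microsoft.Azure.Management.Logic NuGet package; the token, subscription, resource group and Logic App names are placeholders:

    using System;
    using Microsoft.Azure.Management.Logic;
    using Microsoft.Azure.Management.Logic.Models;
    using Microsoft.Rest;

    var credentials = new TokenCredentials("<access-token>");
    var client = new LogicManagementClient(credentials) { SubscriptionId = "<subscription-id>" };

    // Outputs reside on the WorkflowRun level, so they can be read directly from each run.
    foreach (WorkflowRun run in client.WorkflowRuns.List("<resource-group>", "<logic-app-name>"))
    {
        Console.WriteLine($"Run {run.Name}: {run.Status}");

        if (run.Outputs == null) continue;
        foreach (var output in run.Outputs)
        {
            Console.WriteLine($"  {output.Key} = {output.Value.Value}");
        }
    }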

Unfortunately, the filter operations that are supported are limited to the ones available in the Azure Portal.

This means you can only find specific Logic Apps runs - based on outputs or tracked properties - if you navigate through all of them, which is not feasible from a performance perspective.

Conclusion

Comparison

Below you can find a comparison table of the investigated features. My current advice is to use both of them, as you'll get the most in return. 

Feature           | Tracked Properties  | Outputs
Tracking Level    | Action              | Logic App Run
Runtime Behaviour | If action executes  | If Logic App succeeds
Azure Portal      | No integration      | Visible in run details
OMS               | Full integration    | No integration
Management API    | No search           | No search

Feedback to Product Team

Monitoring is one of the areas that need some improvement to provide everything required for operating and monitoring an integration environment. Here are my feature requests, in order of priority:

  • Ensure outputs are also monitored in case a workflow run fails. Vote here.
  • Provide OMS integration for outputs. Vote here.
  • Extend the search filter in the Management API. Vote here.
  • Provide a unique URL to directly redirect to a specific Logic App run.

Hope this post clarifies the monitoring capabilities of Logic Apps. 

Feel free to vote for the mentioned feature requests!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Friday, May 19, 2017 11:29 AM

by Toon Vanhoutte

Receiving data from a SQL table and processing it towards other back-end systems is a very common use case in integration. Logic Apps has all the required functionality in its toolbox to fulfill this integration need. This blog post explains how you can do this in a reliable fashion, in case you're dealing with mission critical interfaces where no data loss is accepted.

Scenario

Let's discuss the scenario briefly.  We need to consume data from the following table.  All orders with the status New must be processed!

The table can be created with the following SQL statement:
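A possible shape of that table (column names and types are illustrative; the PeekedOn timestamp is an assumption that becomes useful in the later attempts):

    CREATE TABLE [dbo].[Orders]
    (
        [Id]       INT IDENTITY(1,1) PRIMARY KEY,
        [Customer] NVARCHAR(50) NOT NULL,
        [Product]  NVARCHAR(50) NOT NULL,
        [Quantity] INT          NOT NULL,
        [Status]   NVARCHAR(20) NOT NULL DEFAULT 'New',
        [PeekedOn] DATETIME2    NULL
    )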

First Attempt

Solution

To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Note that it also returns the @@ROWCOUNT, as this will come in handy in the next steps.
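A sketch of such a stored procedure (the procedure name is mine):

    CREATE PROCEDURE [dbo].[GetNewOrder]
    AS
    BEGIN
        -- Select one New order and mark it as Processed in the same atomic statement.
        UPDATE TOP (1) [dbo].[Orders]
        SET [Status] = 'Processed'
        OUTPUT inserted.[Id], inserted.[Customer], inserted.[Product], inserted.[Quantity]
        WHERE [Status] = 'New';

        -- The Logic App uses this return code to know whether an order was retrieved (1) or not (0).
        RETURN @@ROWCOUNT;
    END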

The Logic App fires with a Recurrence trigger.  The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not.  In case an order is retrieved, its further processing can be performed, which will not be covered in this post.

Evaluation

If you have a BizTalk background, this is a similar approach on using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction as it persists the data in the MessageBox, whereas Logic Apps is completely built on API's that have no notion of MSDTC at all.

In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed, but it never reaches the Logic App. Depending on the returned error code, your Logic App will end up in a Failed state without clear description or the Logic App will retry automatically (for error codes 429 and 5xx). In both situations you're facing data loss, which is not acceptable for our scenario.

Second attempt

Solution

We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus Peek-Lock. Data is received in 2 phases:

  1. You mark the data as Peeked, which means it has been assigned to a receiving process
  2. You mark the data as Completed, which means it has been received by the receiving process

Next to these two explicit processing steps, there must be a background task which reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.

Let's create the first stored procedure that marks the order as Peeked.
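Sketched as follows (the name PeekNewOrder matches the expression used below; the PeekedOn column is my assumption):

    CREATE PROCEDURE [dbo].[PeekNewOrder]
    AS
    BEGIN
        -- Assign one New order to the receiving process by marking it as Peeked.
        UPDATE TOP (1) [dbo].[Orders]
        SET [Status] = 'Peeked', [PeekedOn] = GETUTCDATE()
        OUTPUT inserted.[Id], inserted.[Customer], inserted.[Product], inserted.[Quantity]
        WHERE [Status] = 'New';

        RETURN @@ROWCOUNT;
    END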

The second stored procedure accepts the OrderId and marks the order as Completed.
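For example (again a sketch, with an illustrative name):

    CREATE PROCEDURE [dbo].[CompleteOrder]
        @OrderId INT
    AS
    BEGIN
        -- The receiving process confirms it has safely received the order.
        UPDATE [dbo].[Orders]
        SET [Status] = 'Completed'
        WHERE [Id] = @OrderId;
    END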

The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have the Peeked status for more than 1 hour.
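A possible implementation of that background procedure:

    CREATE PROCEDURE [dbo].[ReleaseStalePeekedOrders]
    AS
    BEGIN
        -- Orders that stay Peeked for more than one hour are handed back for reprocessing.
        UPDATE [dbo].[Orders]
        SET [Status] = 'New', [PeekedOn] = NULL
        WHERE [Status] = 'Peeked'
          AND [PeekedOn] < DATEADD(HOUR, -1, GETUTCDATE());
    END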

Let's now consume the two stored procedures from within our Logic App.  First we Peek for a new order and, when we have received it, the order gets Completed.  The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']

The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.

Evaluation

Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be quite a cumbersome experience for operators. Why can't we just resume the Logic App in case of an issue?

Third Attempt

Solution

As explained over here, Logic Apps has an extremely powerful mechanism of resubmitting workflows. Because Logic Apps has - at the time of writing - no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I'm sure that I'll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App in two separate workflows.

The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.

The second Logic App gets invoked by the first one.  This Logic App completes the order first and afterwards performs the further processing.  In case something goes wrong, a resubmit of the second Logic App can be initiated.

Evaluation

Very happy with the result as:

  • The data is received from the SQL table in a reliable fashion
  • The data can be resumed in case further processing fails

Conclusion

Don't forget that every action is HTTP based, which can have an impact on reliability. Consider a two-phased approach for receiving data, in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume / resubmit scenarios. Triggers are better suited for resubmit than actions, so if there are triggers available: always use them!

This may sound like overkill to you, as these considerations will require some additional effort. My advice is to determine first whether your business scenario must cover such edge case failure situations. If yes, this post can be a starting point for your final solution design.

Liked this post? Feel free to share with others!
Toon

 

 

Categories: Azure
written by: Toon Vanhoutte

Posted on Monday, May 15, 2017 4:43 PM

by Sam Vanhoutte

Microsoft has just wrapped up its annual developer conference. In this blog, we will summarize the most important trends and announcements for you.

Sam Vanhoutte, CTO of Codit, was present at Microsoft Build 2017 and is reporting for us. Codit collaborates with Microsoft and their product teams, but we also place value in coming to these events and hearing about all the new announcements. The main focus for Build, by the way, is on developers. Therefore, we will not be talking about upcoming versions of Windows 10.

Unlike other years, Build was not held in San Francisco but instead has moved to Seattle. There, CEO Satya Nadella shared his vision for developers. He points out that there are a lot of opportunities, but that opportunities also come with responsibilities.

Nadella illustrated the evolution of the Internet with a few numbers. In 1992, the total internet volume was 100 gigabytes per day. Today, the world consumes 17.5 million times more, only now per second. Ninety percent of all data was generated over the past two years. Of course, this number will multiply because of the billions of connected devices and sensors that are going to be connected in the near future.


Microsoft in numbers

Microsoft also mentioned numbers about themselves. For instance: there are now 500 million Windows 10 devices. The commercial version of Office 365 has 100 million active users per month and Cortana has over 140 million users per month.

Azure Active Directory is currently being used by 12 million organizations. And over 90 percent of Fortune 500 companies use the Microsoft Cloud.

IoT

Microsoft is currently investing heavily in IoT, and notes that in the smart industry 'the edge' is going to be very important. ‘The edge’ means the periphery of the network, where sensors and devices generate their data (and sometimes process data) before it is moved to a cloud service.

This is also reflected in the product news: Azure IoT Edge is a combination of the intelligent cloud and the intelligent edge. The system enables cloud functionality to be executed at 'the edge'. This functionality includes gateways, machine learning, stream analytics and functions that can now be executed locally, but are managed and configured from the cloud.

The reason is that it is increasingly common for large amounts of data to be processed quickly. Sometimes this is better done locally. It must be noted that Microsoft is not the only one to move in that direction. Bosch also mentioned such local processing, quoting the term ‘fog computing’.

Codit is pleased with the expansion. Sam Vanhoutte said "This is also very important for Codit and it is a major evolution in Microsoft's IoT offering, closely involving us as Azure IoT Elite partner".

AI

In the area of artificial intelligence, Microsoft presented products like Microsoft Graph. The software links people, their activities and their devices together. Combined with machine learning and cognitive services, this enabled a demo in which alerts were generated when unauthorized persons used certain items on the work floor.

In addition, Microsoft also introduced several new cognitive services, many of them fully customizable. This allows you to get started with machine learning by doing things like ‘feeding’ images of certain objects to the system. Another demo showed live captioning during a PowerPoint presentation. This is something that IBM also demonstrated during their Watson conference last fall.

Azure

The fact that Microsoft is more than Windows and Office becomes clear as soon as Azure moves into the spotlight. For example, a new database service by the name of Azure CosmosDB was presented, able to store and present data quickly and reliably on a worldwide scale. Storage possibilities include NoSQL documents, graphs and key-value.

Also new are two managed databases: Managed MySQL and Managed PostgreSQL.

For containers, developers have the choice of using Docker, Kubernetes and Mesos on Azure. This is supplemented with Service Fabric for Windows or Linux. Also, live debugging is possible in a production environment without impacting users.

But there is more...

Finally, Microsoft announced the Azure mobile app, Visual Studio for Mac and a built-in script console in the Azure portal. These are all things that make the life of devops easier.

For developers, Microsoft announced XAML Standard 1.0. It should make it even easier to develop apps that run on Windows, iOS and Android. Microsoft also expanded on how to make apps more accessible to people with disabilities.

What does this mean for Belgium?

According to Sam Vanhoutte, Microsoft's cloud tools can help companies in their digital transformation. "It will be easier for many customers to achieve innovation in an inexpensive and fast-paced manner. Fail fast and pay as you grow." Also, the increased availability of AI can provide efficiency gains for local companies.

The fact that Microsoft is now offering ‘edge’ products and services is a new trend in the cloud landscape. However, according to Vanhoutte, it is not yet clear how all these new features are included in the pricing.

Codit also received kudos during the Build conference. Swiss Re, a Swiss insurance company and a customer of Codit, was asked to present a case in which travelers received reimbursement immediately after their flight had been delayed.

If you want to learn about all the announcements done at Build 2017, you can go to Channel 9, Microsoft's video service, and rerun the live streams.

Note: This article was first published via Data News (in Dutch).