Codit Blog

Posted on Wednesday, July 12, 2017 9:45 AM

Stijn Moreels by Stijn Moreels

In this part of the Test Infected series, I will talk about what makes code hard to test – both with a Test-First mindset and without.

Hard-to-Test Code

By “hard”, I mean anything that is awkward, sloppy, frustrating or annoying, … anything that makes you sigh. All of that can be categorized as “hard”.

TDD, or Test-Driven Development, is a lot more than just writing tests first; it’s also about Designing Software. You think about a lot of things while writing tests, and all those things together guard you from writing Hard-to-Test Code.

When I write tests, I want an easy way to exercise and verify the SUT (System Under Test). The less code I need to write to do that, the better. The clearer the test intent, the better. The easier the test, the better.

Obscured Fixture Setup

What do I mean by a Fixture in this paragraph? Anything that you need to do so you can exercise the SUT. It could mean that you first have to initialize the SUT with some valid arguments, that you must insert some Dummy Data into a datastore, or that you must call some methods of the SUT...

According to Kent Beck: anything that “Sets the Table” for the SUT.

This section is about the maintainability of the Test Fixture and how we can improve this.

Discovery

By a Complex Fixture Setup, I mean that I must write a lot of code to “set this table”. I must admit that I quickly label a fixture as “complex” – but that’s a good thing, I guess.

Look at the following snippet. It’s a good thing that we “spy” on the Client instead of actually sending a mail, but also note that your eyes are drawn to the strings in the Attachments and Header of the valid message instead of to the actual testing and verifying.

I don’t like complex, big, or hard-to-understand Fixtures. I want a clear view of what is tested and how. Nothing more, nothing less. Of course, I don’t know if you find this complex; maybe you don’t (because you wrote it). I just don’t like big methods, I guess.

The snippet has 16 lines: 3 comments, 2 blank lines, 2 braces, and 7 lines of Fixture Setup.
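The original snippet isn’t reproduced in this post, but a test along these lines illustrates the problem; SpyMailClient, MailMessage and MailAttachment are assumed names for the sketch, not the author’s actual code:

```csharp
// requires: using System.Collections.Generic; using Xunit;
[Fact]
public void Sends_Valid_Message()
{
    // Fixture Setup: most of the test is spent building a "valid" message.
    var attachments = new List<MailAttachment>
    {
        new MailAttachment("invoice-2017-001.pdf", "application/pdf"),
        new MailAttachment("terms-and-conditions.pdf", "application/pdf")
    };
    var message = new MailMessage(
        header: "X-Priority: 1",
        subject: "Monthly invoice",
        body: "Please find your invoice attached.",
        attachments: attachments);
    var client = new SpyMailClient();

    // Exercise
    client.Send(message);

    // Verify
    Assert.Single(client.SentMessages);
}
```

Notice how the strings pull the eye away from the exercise and verification, even in a sketch this small.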

Causes

You could think of many causes for a Complex Fixture Setup.

  • You could have a Tightly-Coupled system which forces you to create all those extra objects and initialize them with the right values.
  • Your test includes information which doesn’t really matter in the context of the test, thereby introducing a Polluted Test. This could happen in a Copy-Paste programming environment in which you just copy the Fixture of another test.
  • It could also happen if not enough research was done into Faking the Fixture, which would avoid unnecessary setup code.

Impact

Now, we have a Complex Fixture – so what?

Let’s look at the impact a Complex Fixture could have on your development. What if I want to test some unhappy paths for the Client? What if we want to test the creation of the Message with Constructor Tests? What if we want to test with a single Attachment instead of two?

All those tests would require a similar Fixture Setup.

If you have Cut-and-Paste developers in your team, you will end up with a lot of Test Duplication, which in turn results in a High Test Maintenance Cost.
Besides the duplication, it isn’t clear to me that we are testing a “valid message”. Does it have to do with the header value? With the attachments? With something else? …

What do I have to do to create a valid mail message? Does this message require attachments? By not having a clear Fixture Setup, we don’t have a clear Test Overview.

Possible Solution

The first thing you should do is eliminate all the unnecessary information from your test. If you don’t use it or need it – don’t show it.
I only want to see what’s really needed for the test to pass.

If, after that, you still have a big Fixture to set up – place it in Parameterized Creation Methods, so you only pass the values that are valuable for the test to the Creation Methods. This way you resolve the duplication across tests.
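A possible shape for such a Parameterized Creation Method, reusing the hypothetical names from the sketch above; only the values that matter to the test are passed in, everything else gets a sensible default:

```csharp
private static MailMessage CreateValidMessage(
    string header = "X-Priority: 1",
    IEnumerable<MailAttachment> attachments = null)
{
    // Defaults hide the details the test doesn't care about.
    return new MailMessage(
        header: header,
        subject: "Ignored subject",
        body: "Ignored body",
        attachments: attachments ?? new List<MailAttachment>());
}

[Fact]
public void Sends_Message_With_Single_Attachment()
{
    var message = CreateValidMessage(
        attachments: new[] { new MailAttachment("invoice.pdf", "application/pdf") });
    var client = new SpyMailClient();

    client.Send(message);

    Assert.Single(client.SentMessages);
}
```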

Also make sure that you don’t have any duplication in your Implicit Fixture (typically in some kind of “setup” method or constructor) for setting up a Datastore for example.

Missing Object Seam Enabling Point

What is an Object Seam? Michael C. Feathers introduced this concept and described it as: “A place where you can alter behavior without editing that place”.

Every Seam must have an Enabling Point, where the decision for one behavior or the other can be made. In Object-Oriented languages, we typically create this point by introducing Test Doubles, with which we implement another version of the dependency or of whatever other object we need to exercise the SUT.

Discovery

Not having an Enabling Point for the SUT makes our code Hard-to-Test. This can happen in many situations – especially when a particular design decision has been made and everything else must make room for it. (This sounds a bit like a Golden Hammer Smell to me.)

Please don’t look at the names, it’s only an example.

The following example shows how the Message Service doesn’t contain any Enabling Point. We are bound to use the file system if we want to test the Message Service. A possible solution could be to introduce a Fake Datastore (probably in-memory) and pass it to the Message Service.

Also note that we can’t even write a valid assertion. Yes, we could check whether the File Based Data Store has written something to disk. But I hope you can see that this isn’t the right way to assert the Message Service.

We would have to write code that asserts the functionality of the File Based Data Store and not that of the Message Service.
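A minimal sketch of the difference, with hypothetical names (IDataStore, FileBasedDataStore, InMemoryDataStore); the interface is the Enabling Point that lets a test swap the file system for an in-memory Fake:

```csharp
// Without an Enabling Point: the datastore is hard-wired into the service.
public class MessageService
{
    private readonly FileBasedDataStore _store = new FileBasedDataStore(@"C:\messages");

    public void Process(Message message) => _store.Save(message);
}

// With an Enabling Point: the dependency is injected behind an interface.
public interface IDataStore
{
    void Save(Message message);
}

public class TestableMessageService
{
    private readonly IDataStore _store;

    public TestableMessageService(IDataStore store) => _store = store;

    public void Process(Message message) => _store.Save(message);
}

// The Fake used in tests: a working datastore, just in memory.
public class InMemoryDataStore : IDataStore
{
    public List<Message> SavedMessages { get; } = new List<Message>();

    public void Save(Message message) => SavedMessages.Add(message);
}
```

Now the test can assert on SavedMessages instead of poking at the disk.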

Ok, that was a very “simple” example of how a SUT can miss an Enabling Point. Let’s make it a tiny bit harder.

The File Sender always uses XML serialization/deserialization if we want to write a file. We must always use the Message Helper (what kind of name is that?) if we want to write a file.

These Utility Classes are the result of thinking in a Procedural way, not in an OOP way. Because all these static classes are used in this class, we have no choice but to also test them if we want to test the File Sender. If we Unit Test this class, we aren’t testing a “unit”, we are testing 3 (!) classes.

Whenever I see the name “helper” appear, I immediately think that there is room for improvement in the design. For a start, please rename “helper” to a more appropriate name. Everything could be a helper for another class but that doesn’t mean we must name all our classes with the prefix “helper”.

Try to move those functionalities into the appropriate classes. Our Message could, for example, have a property called IsEmpty instead of outsourcing this functionality to a different class.
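For example, instead of a static MessageHelper.IsEmpty(message) call, the Message could answer the question itself (a sketch; the property names are assumptions):

```csharp
public class Message
{
    public string Body { get; set; }
    public List<Attachment> Attachments { get; } = new List<Attachment>();

    // The behavior lives where the data lives, so no static helper is needed.
    public bool IsEmpty =>
        string.IsNullOrWhiteSpace(Body) && Attachments.Count == 0;
}
```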

Functional Languages have the same need to inject Seams. Look at the following example of the same functionality:

If we want to change the Message Helper or the serialization, we must inject functions in our “Write to File” function. In this case, our Stub is just another function.
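The original example is written in F#; the same idea expressed with C# delegates (hypothetical signatures) makes the point just as well: the behavior is passed in as functions, and in a test the Stub is just another lambda.

```csharp
public static void WriteToFile(
    Message message,
    Func<Message, bool> isEmpty,          // replaces the static Message Helper
    Func<Message, string> serialize,      // replaces the static XML serializer
    Action<string> writeFile)             // replaces the direct file-system call
{
    if (isEmpty(message))
        return;

    writeFile(serialize(message));
}

// In a test, each injected function is a trivial stand-in:
// WriteToFile(message, isEmpty: _ => false, serialize: _ => "<message />", writeFile: _ => { });
```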

Again, don’t look at the names, or functionality – it’s just to make a point on the Enabling Point of our Object Seams.

Causes

You could think of many causes of a Missing Enabling Point:

  • The Dependencies are implemented right inside the SUT, which indicates Tight-Coupling (again); therefore, we cannot substitute our own version of the dependency.
  • Utility Classes are a result of Procedural Thinking in an OOP environment (yes, maybe there are some exceptions – maybe), which results in a Static Dependency inside the SUT. We cannot alter the behavior of these static classes in our tests.
  • The SUT may do a lot of work inside the constructor, which always needs to run if we want to exercise the SUT – thereby limiting our ability to place an Enabling Point.
  • A chain of method calls can also indicate this Tight-Coupling, only in a more subtle way. If we have a single dependency but have to call three methods deep before we have the right info, we have the same problem. It violates the "Don't Talk to Strangers" design principle.

Impact

By constantly missing an Enabling Point for your Seams, you are creating a design that isn’t reusable for other purposes.
Sometimes the reason behind the absence of Enabling Points lies in the way responsibilities are spread across the classes and not wrapped in the right ones.

Maybe I’m a bit allergic to Utility Classes.

Possible Solution

Placing an Enabling Point in our SUT: that should be our solution. We need a Control Point we can use to alter behavior, and an Observation Point we can use to verify the outcome.

Note that the observation can be direct or indirect, just like the control point (or Enabling Point) of our SUT. Use different kind of Test Doubles to achieve these goals.

We use a Test Stub to control the Indirect Inputs of the SUT; we use a Mock Object for verification of the Indirect Outputs.

Classes that keep information or behavior private can sometimes expose it through a subclass. We could create a Test-Specific Subclass and use it to exercise the SUT instead of the real one.
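A sketch of such a Test-Specific Subclass, with hypothetical names; note that it overrides the clock, not the behavior under test:

```csharp
public class ExpirationService
{
    protected virtual DateTime GetCurrentTime() => DateTime.UtcNow;

    public bool IsExpired(Message message) => message.ExpiresAt < GetCurrentTime();
}

// The subclass exposes an Enabling Point so the test can fix the time.
public class TestableExpirationService : ExpirationService
{
    public DateTime FixedTime { get; set; }

    protected override DateTime GetCurrentTime() => FixedTime;
}
```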

But be careful that you don’t override any behavior you are testing. That would lead to False Positive test cases and would introduce paths in your software that are never exercised in a test environment.

In Functional languages, everything is a function; so we could introduce a Stub Function for our injection of data, and a Mock and/or Stub Function for our response and verification… so that we have an Enabling Point for our SUT.

Over-Specified Software by Mocking

I already said it in previous posts and sections: you must be careful about Mocking and what you try to Mock. In another post, I mentioned the Mock Object as Test Double to assert on indirect outputs of the SUT. This can be useful if you can’t verify any other outside observable behavior or state of the SUT.

By using this Mock Object we can in fact verify the change of the SUT.

Discovery

Yes, we can verify the change of the SUT; but do we also have a maintainable Mock Object?
If we need to change a signature of the Mock Object, we need to carry that change through all the Mock Objects and their direct assertions to complete the change in signature.
If we mock out everything of the SUT and thereby Tightly-Couple our Test Double to the SUT, we have Over-Specified Software by Mocking too much.

Causes

Multiple situations can cause a SUT to become Tightly-Coupled to the DOC (Depend-On Component, in this case the Mock Object).

  • Writing Tests After the implementation can cause this situation. If you have developed a Hard-to-Test SUT, you may have encountered a SUT that can only be exercised/tested (and so, verified) by injecting a Mock Object and asserting on the indirect outputs of the SUT.
    Sometimes the assert-phase of these tests isn’t the whole picture we want to test but only a fragment; a side-effect. By asserting on this extra side-effect, we have Tightly-Coupled our test code to our SUT.
  • Testing Unnecessary Side-Effects can also cause Over-Specified Software. If we assert on things we don't necessarily need in our test, or on things that do not add any extra certainty to our test case, we should remove those assertions. Testing on “extra” items doesn’t result in more robust software but rather in Over-Specified Software.

Impact

So, let’s say you’re in such a situation; what’s the impact on your development work?
Just like with any software that is Tightly-Coupled, you pay a maintenance cost. If your software is tested in such a way that the slightest change in your SUT that doesn’t alter any behavior of the SUT results in a failed test, you could state that you have Over-Specified Software.
Any change you make becomes a hard one, which means developers will make fewer changes. Fewer changes/refactorings/cleanups… will result in lower quality software.

Possible Solution

People tend to forget the Dummy Object when they are writing tests. The purpose of the Dummy Object is to fulfil the SUT’s needs without actually doing anything for it. Passing “null” or empty strings are good examples, as are objects that are empty (and maybe throw exceptions when they are called, to ensure that they aren’t called during the exercise of the SUT).
Not everything needs to be a Mock Object. And just to be clear: a Mock isn’t a Stub!

You’ll be amazed how much unnecessary data you write in your tests once you start removing all those Mock Objects and replacing them with lighter objects like Dummy Objects.

Yes, Mock Objects are absolutely necessary for a complete developer toolset for testing; yes, sometimes Mock Objects are the only possible way to verify indirect outcomes of the SUT; yes, sometimes we need to assert on indirect output calls directly…
But certainly not always. Try using another Test Double first instead of reaching directly for a Mock Object, just like you’d use an Inline Fixture first before moving to a Shared Fixture.

Besides changing your Test Double, you could also look at WHAT you want to test; you may come up with some refactorings of your SUT to verify the right state or behavior. The best way to get code that is easy to test is to write your tests first, which immediately results in testable code.

Asynchronous Code

A small note about Asynchronous Code, because there's too much to cover in such a small section.

The problem with async code is that we don't always have the same context in which we can assert on the right outcome. Sometimes we use Polling to get the work done, for example. This will (of course) result in Slow Tests, but sometimes we don't have control over the running process.

In the book xUnit Test Patterns we've seen that we can use a Humble Object which extracts the async code, so we can make synchronous calls in our test. In my previous post, I talked about a Spy that used a Wait Handle to block the thread before we let the test succeed; this can also be a solution (if it's implemented right; timeout, ...).
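A rough sketch of that Wait Handle idea (SpyMessageHandler, IMessageHandler and AsyncProcessor are hypothetical names): the spy signals an event when the asynchronous callback arrives, and the test blocks on that event with a timeout so it can never hang forever.

```csharp
// requires: using System; using System.Threading; using Xunit;
public class SpyMessageHandler : IMessageHandler
{
    private readonly ManualResetEventSlim _received = new ManualResetEventSlim(false);

    public string LastMessage { get; private set; }

    public void Handle(string message)
    {
        LastMessage = message;
        _received.Set();
    }

    public bool WaitForMessage(TimeSpan timeout) => _received.Wait(timeout);
}

[Fact]
public void Handles_Message_Asynchronously()
{
    var spy = new SpyMessageHandler();
    var sut = new AsyncProcessor(spy);

    sut.Start();

    Assert.True(spy.WaitForMessage(TimeSpan.FromSeconds(5)), "no message was handled in time");
    Assert.Equal("processed", spy.LastMessage);
}
```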

The xUnit framework written for .NET (not the xUnit family; .NET xUnit != xUnit family) has support for async test methods, which makes sure that we can assert in the right task context.
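With that support the test method itself can be async, so the assert runs in the right context (MessageProcessor and ProcessResult are hypothetical names):

```csharp
[Fact]
public async Task Processes_Message_Asynchronously()
{
    var sut = new MessageProcessor();

    ProcessResult result = await sut.ProcessAsync(new Message());

    Assert.True(result.Succeeded);
}
```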

Conclusion

So many classes are already in play that are difficult to test; that’s why my motto is to not make this pile of classes any bigger and to write easy-to-read/easy-to-maintain code in a Test-Driven way, every day. Because every day can be the day you must change a feature, add/modify/remove functionality, or do anything else that includes change.

Tests are there to help, not to slow you down. In fact, by writing tests you work more productively, more efficiently, more safely, more robustly, …

So, don’t write any Hard-to-Test code but write code that grows incrementally from your tests.

Categories: Technology
Tags: Code Quality
written by: Stijn Moreels

Posted on Friday, July 7, 2017 12:00 PM

Stijn Moreels by Stijn Moreels

In this part of the Test Infected series, I will talk about Test Doubles. These elements are defined to be used as a “stand-in” for the components our SUT (System Under Test) depends on. These Test Doubles can be DOCs (Depend-On Components), but also other elements we need to inject to exercise the SUT.

Introduction

This is probably the first post in the Test Infected series. The term “test infected” was first used by Erich Gamma and Kent Beck in their article.

“We have been amazed at how much more fun programming is and how much more aggressive we are willing to be and how much less stress we feel when we are supported by tests.”

The term "Test-Driven Development" was something I heard in my first steps of programming. But it was when reading different books about the topic that I really understood what they meant.

The Clean Coder by Robert C. Martin talks about the courage and the level of certainty that come with Test-Driven Development. Test-Driven Development: By Example by Kent Beck taught me the mentality behind the practice, and Gerard Meszaros, with his book xUnit Test Patterns, showed me the practices that not only improved my daily development, but also my Test-First mindset. All these people have inspired me to learn more about Test-Driven Development and the Test-First Mindset. To see the relationships between different visions and to combine them the way I see it; that's the purpose of my Test Infected series.

In this part of the Test Infected series, I will talk about Test Doubles. These elements are defined to be used as a “stand-in” for the components our SUT (System Under Test) depends on. These Test Doubles can be DOCs (Depend-On Components), but also other elements we need to inject to exercise the SUT.

I find it not only interesting to examine the theoretical concept of a Test Double, but also how we can use it in our programming.

Types

No, a Stub isn’t a Mock; no, a Dummy isn’t a Fake. There are differences in the way we test our code: some doubles are used to test direct inputs, others indirect outputs. Each type has a clear boundary and a reason to use it.

But be careful: overuse of these Test Doubles leads to Over-Specified Software, in which the test is Tightly-Coupled to the Fixture Setup, which results in more refactoring work for your tests (sometimes more than for the production code itself).

Test Code must be as clear, simple and maintainable… as Production Code – maybe even more.

Dummy Object

We use a Dummy Object if we want to inject some information that will never be used. null (C#), None (Python), … are good examples; but even “ignored data” strings are valid Dummy Objects. If we’re talking about actual objects, we could throw exceptions when the methods of that object are called. This way we make sure that the object isn’t used.

We introduce these kinds of objects because the signature of the object under test requires some information. But if this information is not of interest to the test, we can introduce a Dummy Object so that the test reader only sees the information relevant to the test.

We must introduce custom Dummy Objects if the SUT doesn’t allow us to send null / None.
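A sketch of such a custom Dummy (IMailClient and MessageParser are hypothetical names): it satisfies the constructor signature, but shouts if the SUT ever actually uses it.

```csharp
public class DummyMailClient : IMailClient
{
    public void Send(MailMessage message) =>
        throw new InvalidOperationException("The Dummy mail client should never be called.");
}

// Usage: the test only exercises parsing, so the mail client is irrelevant.
// var sut = new MessageParser(new DummyMailClient());
```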

Test Stub

In the literature, I found two different types of Test Stubs. One returns or exposes some data that can be used to validate the actual outcome of the System Under Test (SUT); this is called a Responder. The other throws exceptions when the SUT interacts with the Stub (by calling methods, requesting data, …) so that the Unhappy Path gets tested; this is called a Saboteur.

But I encountered a possible third type, which I sometimes use in test cases. I like to call it a Sink, but it’s actually just a Null Object. This type of Stub would just act as a “sink”, which means that the Stub isn’t doing anything with the given data. You could use a Sink in situations where you must inject a “valid” object, but the test case doesn’t really care about what happens outside the SUT (in what cases does it?).

By introducing such an Anonymous Object, you let the test reader know that the object you send to the SUT is not of any value for the test.

This kind of “stubbing” can also be accomplished by introducing a Test-Specific Subclass and overriding members with empty, expected or invalid implementations to test all paths of the SUT.

The following example shows how the Basket gets calculated with a valid (Anonymous) and an invalid (Saboteur) product. I like to call valid items “filled” or “anonymous” to indicate that I don’t care what they contain or how they are structured. You can use “saboteur” to indicate that you have an “invalid” product in place that throws exceptions when it gets called.
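A sketch of what such a Basket test might look like; the names, and the assumption that the Basket is supposed to skip products that fail, are mine, not the author’s.

```csharp
public class AnonymousProduct : IProduct
{
    // The content doesn't matter for the test; it's just a "filled" product.
    public decimal Price => 10m;
}

public class SaboteurProduct : IProduct
{
    // The Saboteur throws as soon as the SUT touches it, to exercise the unhappy path.
    public decimal Price => throw new InvalidOperationException("product could not be loaded");
}

[Fact]
public void Basket_Skips_Products_That_Fail()
{
    var basket = new Basket(new AnonymousProduct(), new SaboteurProduct());

    decimal total = basket.CalculateTotal();

    Assert.Equal(10m, total);
}
```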

I don’t know why, but sometimes, especially in this case where you have several valid items and a single invalid one, the setup reminds me of a pattern called Poison Pill. This is used in situations where you want to stop an execution task from running by placing a “poison pill” in the flow.

This type of Stub isn’t a Dummy Object, because its methods, properties, etc. do get called. Also note that there’s a difference between a Saboteur and a Dummy Object that throws exceptions when called: the Saboteur is used to test all the paths of the SUT, whereas the Dummy Object guards against calls that aren’t supposed to happen (which result in a test failure).

You would be amazed what a Stub can do in your design. Some developers even use this stub later as the actual production implementation. This is for me the ultimate example of Incremental Design. You start by writing your tests and incrementally start writing classes that are dependencies of your SUT. These classes will eventually evolve in actual production code.

Now, here is an example of a Stub. The beauty of Functional Programming is that we can use Object Expressions. This means we can inline our Stub in our test.

Java also has a feature to define inline (anonymous) classes and override only the methods you exercise during the test run.

 

  • To decrease Test Duplication, we can define Pseudo Objects for our Stubs. This means we define a default implementation that throws exceptions for any called member (like a Dummy, for example). This allows us to override only those members we are interested in for our Stub.
  • During my first experience with Kent Beck's Test-Driven Development, I came across the Self-Shunt idea. This can actually be any Test Double, but I use it most of the time as a Stub. Here, we use the Test Class itself as the Test Double. Because we don't create an extra class, and we specify the return value explicitly, we have very clear Code Intent. Note that I only use this practice if the Test Double can't be reused somewhere else. Sometimes your Test Double starts as a Self-Shunt but grows into a full-blown Stub (see the sketch after this list).
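A Self-Shunt sketch (IMailClient, MailService and NotifyOrderShipped are hypothetical names): the test class itself implements the dependency, so the stand-in behavior sits right next to the assertion that uses it.

```csharp
public class MailServiceTests : IMailClient
{
    private MailMessage _sentMessage;

    // The test class acts as the Test Double for the mail client dependency.
    public void Send(MailMessage message) => _sentMessage = message;

    [Fact]
    public void Service_Sends_A_Message_When_An_Order_Ships()
    {
        var sut = new MailService(this);

        sut.NotifyOrderShipped(orderId: 1);

        Assert.NotNull(_sentMessage);
    }
}
```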

Test Spy

Ok, Stubs are cool – very cool. The Saboteur is especially useful to test the unhappy/rainy paths throughout the SUT. But there’s a downside to a pure stub: we cannot test the Indirect Output of our SUT.

That’s where the Test Spy comes in. With this Test Double, we can capture the output calls (the Indirect Output) of our SUT for later verification in our test.

Most of the time, this is interesting if the SUT doesn’t return anything useful that we can use to verify whether the test was successful. We could just write the ACT statement without an ASSERT statement, and the test would automatically fail if any exception is thrown during the exercise of the SUT.

But that’s not a very explicit assertion AND (more importantly), if there are any changes to the SUT, we cannot fully verify that the change in behavior doesn’t break our software.

When developing a logging framework (for example), you will run into a lot of these situations, because a log function usually doesn’t return anything (the log framework I came across didn’t). So, if we only get a void (in C#), how can we verify that our log message is written correctly?
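A sketch of such a Spy for a void-returning log call (ILogger and OrderProcessor are hypothetical names): the Spy records the Indirect Output so the assert-phase can inspect it afterwards.

```csharp
public class SpyLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();

    public void Log(string message) => Messages.Add(message);
}

[Fact]
public void Logs_A_Warning_For_An_Invalid_Order()
{
    var spy = new SpyLogger();
    var sut = new OrderProcessor(spy);

    sut.Process(order: null);

    Assert.Contains("order was null", spy.Messages);
}
```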

When working in an asynchronous environment, Test Spies can also be useful. Testing asynchronous code will always have some blocking mechanism in place if we want to test Indirect Outputs – so a Test Spy is the ideal solution.

By hiding the blocking mechanism, we have a clear test, and the test reader knows exactly what the purpose of the test is, what the SUT should do to make the test pass, and what the DOC (Depend-On Component) does in the background to make the right verification in our assert-phase.

All of this makes sure that we have a true positive test.

The time-out is (of course) context specific – try to limit it to the very minimum; 5 seconds is a very long time for a single unit test to pass, but not for an integration test.

Mock Object

If we want to verify Indirect Outputs right away and not at the end of the test run (the way a Test Spy verifies “was called” in the assert-phase), we can use a Mock Object.

There’s a subtle difference between a Mock Object and a Test Spy. A Spy will capture its observations so that they can be verified later (in the assert-phase), while a Mock will make the test fail the moment it encounters something that was not expected.

Of course, combinations can be made, but there’s something that I would like to warn you about the Mock Object. Actually, two somethings.

1) One must be careful about what he/she mocks and what he/she exercises in the SUT. If we mock too much or mock the wrong parts, how do we verify that our SUT will survive in the “real world”, where there aren’t any Mock Objects that return just the data the SUT expects?

2) One must be careful not to use Mock Objects for all of his/her tests. That results in Tight-Coupling between the test cases and the SUT. Especially when mocking frameworks are involved, it is easy to overuse them. Try to imagine that something must change in your SUT. Then walk down the path of how many Mock Objects you must change in order to get that change into your SUT.

Tests are there to help us, not to frustrate us.

These two warnings can also be applied to Test Stubs, for example if we specify too much information in our Indirect Input. The difference with a Mock is that we also validate the Indirect Output immediately, and not in our assert-phase. Tight-Coupling and overusing any pattern is a bad practice in my opinion. So, always start with the smallest: can you use a Dummy? Then a Stub? Maybe we can just Spy that? Ok, now we can use a Mock.

Look at the following example: we have a function that transforms a given input to an output, but only after we have asserted on the expected input. This is a good example of how we assert directly and give the expected output back to our SUT.
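A hand-rolled sketch of that idea (IMessageTransformer and MessagePipeline are hypothetical names): the Mock asserts on the expected input the moment it is called, and hands back the output the SUT needs to continue.

```csharp
public class MockTransformer : IMessageTransformer
{
    public string Transform(string input)
    {
        // Verification happens here, at call time, not in the assert-phase.
        Assert.Equal("<order id=\"1\" />", input);
        return "{ \"orderId\": 1 }";
    }
}

[Fact]
public void Pipeline_Transforms_The_Incoming_Order()
{
    var sut = new MessagePipeline(new MockTransformer());

    string result = sut.Handle("<order id=\"1\" />");

    Assert.Equal("{ \"orderId\": 1 }", result);
}
```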

Fake Object

The last Test Double I would like to discuss is the Fake Object. This Test Double doesn’t always need to be configured. An object that is a “fake” is actually a full implementation that implements the functionality in such a way that the test can use it during the test run.

A perfect example is the in-memory datastore. We implement the whole datastore operations, all within memory so we don’t need a full configured datastore in our tests.
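A sketch of such a Fake (IOrderRepository and Order are hypothetical names): a genuine, working implementation of the whole contract, just backed by a dictionary instead of a real datastore.

```csharp
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _orders = new Dictionary<int, Order>();

    public void Save(Order order) => _orders[order.Id] = order;

    public Order GetById(int id) => _orders.TryGetValue(id, out var order) ? order : null;

    public void Delete(int id) => _orders.Remove(id);
}
```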

Yes, of course you must test the datastore connection, but with a Fake Object in place you can limit the tests that connect to the “real” database to a minimum and run all the other tests with a “fake”.

Your first reaction for external components should be to check if you can fake the whole external connection. Tests that use in-memory storage rather than the file system, datastore, network connectivity… will run a lot faster – and therefore will be run a lot more by the developers.

This type of Test Double is different from the others in the sense that there is no verification in place. This type of object “just” replaces the whole implementation the SUT depends on.

Conclusion

Honestly, I think the reason I wrote this blog post is that I hear people always talking about “mocks” instead of using the right words. As Martin Fowler says in his blog post: “Mocks Aren’t Stubs”.

I know that in different environments people use different terms. A Pragmatic Programmer will use other words or the same for some Test Doubles than someone from the Extreme Programming background. But it is better that you state what you mean with the right terminology, than to call everything a “mock”.

What I also wanted to show is that a Test Double isn’t “bound” to Object-Oriented Programming or Functional Programming. A Test Double is a concept (coined by Gerard Meszaros) that alters the behavior of your SUT in such a way that the test can verify the expected outcome.

It’s a concept, and concepts can be used everywhere.

Categories: Technology
written by: Stijn Moreels

Posted on Monday, July 3, 2017 11:52 AM

Toon Vanhoutte by Toon Vanhoutte

Azure Service Bus is a very robust and powerful message broker. As with every Azure service, you need to be aware of its strengths and limitations. The most important limitation of Azure Service Bus is the message size: the Standard tier allows messages up to 256 kB, while within the Premium tier you hit the limit for messages larger than 1 MB. A way to overcome this limit is implementing the claim check pattern. This blog post explains how you can use this pattern within Logic Apps to send/receive large messages to/from Azure Service Bus queues.

Claim Check Pattern

The claim check pattern is described over here. The pattern aims to reduce the size of the message being exchanged, without sacrificing information content. In a nutshell, this is how it works:

  1. The sender uploads the payload in an external data store, to which the receiver has also access.
  2. The sender sends a message, that includes a reference to the uploaded payload, to the receiver.
  3. The receiver downloads the payload, using the reference extracted from the exchanged message.

A real-life example of this pattern is the way WeTransfer is used to email large data.

  1. The sender uploads the large data to the WeTransfer data store
  2. An email, including a download link, is sent to the receiver
  3. The receiver clicks the download link and receives the large data

Claim Check API App

Logic Apps and Azure Service Bus work perfectly together. If we can overcome the message size limit of 256 kB, a whole bunch of new scenarios opens up. Azure Blob Storage can perfectly take the role of external data store, and we can leverage its SAS tokens to give read access to the receiver.

As a proof of concept, I created a custom API app that provides this functionality. You can view and download the code here. This page also includes instructions on how to deploy and configure this API App. Are you new to creating API Apps for Logic Apps? Definitely check out this post that explains how to create a custom polling trigger and how you can leverage the cool TRex library.

This is how the API App implements the claim check pattern:

  1. The sending Logic App uploads the payload to blob storage and assigns a read-only SAS policy.
  2. The sending Logic App sends a message to a Service Bus queue, containing the blob URI (including SAS token) in the 'claimcheck-uri' header (a rough code sketch of these sender steps follows this list).
  3. The receiving Logic App receives the message from the queue and retrieves the blob via the URI provided in the 'claimcheck-uri' header.
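The actual API App code is linked above; purely as an illustration of the two sender steps, a rough sketch using the classic WindowsAzure.Storage and Microsoft.ServiceBus SDKs could look like this (the container and queue names are made up):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ClaimCheckSender
{
    public static async Task SendAsync(
        string storageConnectionString, string serviceBusConnectionString, string largePayload)
    {
        // 1. Upload the payload to blob storage and create a read-only SAS token.
        var storageAccount = CloudStorageAccount.Parse(storageConnectionString);
        var container = storageAccount.CreateCloudBlobClient().GetContainerReference("claimcheck");
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference(Guid.NewGuid().ToString());
        await blob.UploadTextAsync(largePayload);

        string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(7)
        });

        // 2. Send a small Service Bus message that only carries the claim check reference.
        var message = new BrokeredMessage();
        message.Properties["claimcheck-uri"] = blob.Uri + sasToken;

        var queueClient = QueueClient.CreateFromConnectionString(serviceBusConnectionString, "largemessages");
        await queueClient.SendAsync(message);
    }
}
```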

The custom API App contains several actions and triggers. The ones relevant for this post are Send Message to Queue and Receive Message from Queue.

Send Message to Queue

The user experience of this action is very similar to the default Service Bus action. However, under the hood, the claim check pattern is applied. The following parameters are available:

  • Content: Content of the message.
  • Content Type: Content type of the message content.
  • Queue Name: Name of the queue.
  • Properties: Message properties in JSON format (optional).
  • Scheduled Enqueue Time: UTC time in MM/dd/yyyy HH:mm:ss format (optional).

Receive Message from Queue

This polling trigger is used to receive messages from the queue. When there are still messages available in the queue, the trigger will fire continuously, until the queue is empty.

As an output, this trigger provides the message content (retrieved from blob storage), the content type and the message properties. The lock token must be used to explicitly complete the message in the Logic App, as this is required to ensure at-least-once delivery.

The API App also provides a variant on this trigger to retrieve multiple messages from the queue within one batch.

Conclusion

API Apps are very powerful extension points to Logic Apps. In this scenario, it helped us to overcome the Service Bus message size limitation of 256kB. By implementing the claim check pattern with Azure Blob Storage, we are now capable of exchanging payloads up to 50 MB, which is the current Logic Apps message size limit!

Hope you enjoyed this one!
Toon

 

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Wednesday, June 28, 2017 4:17 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 3 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Rethinking Integration - Nino Crudele

Nino Crudele was perfectly introduced as the "Brad Pitt" of integration. We will not comment on his looks, but rather focus on his ability to always bring something fresh and new to the stage!

Nino's message was that BizTalk Server has the ideal architecture for extensibility across all of its components. Nino described how he put a "Universal Framework" into each component of BizTalk. He did this to be able to improve the latency and throughput of certain BizTalk solutions, when needed and appropriate.

He also shared his view on how not every application is meant to fully exist in BizTalk Server alone. In certain situations BizTalk Server may only act as a proxy to something else. It's always important to choose the right technology for the job. As an integration expert it is important to keep up with technology and to know its capabilities, allowing for a best of breed solution in which each component fits a specific purpose e.g. Event Hubs, Redis, Service Bus, etc...

Nino did a good job delivering a very entertaining session and every attendee will forever remember "The Chicken Way".

Moving to Cloud-Native Integration - Richard Seroter

Richard Seroter presented the 2nd session of the day. He shared his views on moving to cloud-native thinking when building integration solutions. He started by comparing the traditional integration approach with the cloud-computing model we all know today. Throughout the session, Richard shared some interesting insights on how we should all consider a change in mindset and shift our solutions towards a cloud-native way of thinking.

“Built for scale, built for continuous change, built to tolerate failure”

Cloud-native solutions should be built “More Composable”. Think loose-coupling, building separate blocks that can be chained together in a dynamic fashion. This allows for targeted updates, without having to schedule downtime… so “More Always-On”. With a short demo, Richard showed how to build a loosely-coupled Logic App that consumed an Azure Function, which would be considered a dependency in the traditional sense. Then he deployed a change to the Azure Function - on-the-fly - to show us that this can be accomplished without scheduled downtime. Investing time into the design and architecture aspects of your solution pays off when it results in zero-downtime deployments.

Next, he talked about adding “More Scalability” and “More Self-Service”. The cloud computing model excels in ease of use and makes it possible for citizen developers or ad-hoc integrators to take part in creating these solutions. This eliminates the need for a big team of integration specialists, but rather encourages a shift towards embedding these specialists in cross-functional teams.

In a fantastic demo, he showed us a nice Java app that provides a self-service experience on top of BizTalk Server. Leveraging the power of the new Management API (shipped with Feature Pack 1 for BizTalk 2016 Enterprise), he deployed a functioning messaging scenario in just a few clicks, without the need of ANY technical BizTalk knowledge. Richard then continued by stating that we should all embrace the modern resources and connectors provided by the cloud platform. Extend on premises integration with “More Endpoints” by using, for example, Logic-Apps to connect BizTalk to the cloud.

The last part focused on “More Automation”, where he did not only talk about automated build and automated deployment, but also recommended creating environments via automation to achieve the highest possible levels of consistency. In another short demo, Richard showed us how he automatically provisioned a ServiceBus instance and all related Azure resources from the Cloud Foundry Service Broker CLI.

Be sure to check out the recording of this session! It has some valuable insights for everyone involved in cloud integration!

Overcoming Challenges When Taking Your Logic App into Production - Stephen W Thomas

The third session of the day was presented by Stephen W Thomas, who gave us some insights into the challenges he faced during his first Logic Apps implementation at a customer.

He split up his session in three phases, starting with the decisions that had to be taken. After a short overview of the EDI scenario he was facing and going over the available options that were considered for the implementation, it was clear that Logic Apps was the winner due to several reasons. The timeline was pretty strict, and doing custom .NET development would have taken 10 times longer than using Logic Apps. The initial investment for BizTalk, combined with the limited presence of BizTalk development skills, made Logic Apps the logical choice in this case. However, if you already use EDI in BizTalk, it probably makes sense to keep doing so, since your investment is already there.

In the second phase, he reflected on the lessons learned during the project. The architecture had to be designed with the rules of a serverless platform in mind. This included a two-weekly release cadence which could affect existing functionality, which in turn makes it important to check the release notes. Another thing to keep in mind is the (sometimes) unpredictable pricing: whereas every Action in Logic Apps costs money, in BizTalk you can just keep adding expression shapes without worrying about additional cost.

In the last phase, he left us with some tips and tricks that he gained through experience with Logic Apps. "Don't be afraid to use JSON". Almost every new feature is introduced in code view first, so take advantage of it by learning to work with it. It's also good to know that a For-Each loop in Logic Apps runs concurrently by default, but luckily this behaviour can be changed to Sequential (in the code view).

BizTalk Server Deep Dive into Feature Pack 1 - Tord Glad Nordahl

Tord had a few announcements to make which were appreciated by the audience:

  • The BizTalk connector for Logic Apps, which was in preview before today, is now generally available (GA).
  • Microsoft IT publicly released the BizTalk Server Migration Tool, which they use internally for their own BizTalk migrations. This tool should help in migrating your environment towards BizTalk Server 2016.

Tord discussed the BizTalk Server 2016 Feature Pack 1 next.

With the new ALM features, it's possible to deploy BizTalk solutions to multiple environments from any repository supported by Visual Studio Team Services. Just like the BizTalk Deployment Framework (BTDF), it is also possible to have one central binding file with variables being replaced automatically to fit your specific target environment.
 
The Management API included in Feature Pack 1 enables you to do almost anything that is possible in the BizTalk Management Console. You can create your own tools based on the API. For example: end users can be provided with their own view on the BizTalk environment. The API even supports both XML and JSON.
 
Feature Pack 1 also includes a new PowerBI template, which comes with the added Analytics. The template should give you a good indication on the health of your environment(s). The PowerBI template can be changed or extended with everything you can see on the BizTalk Management Console, according to your specific needs.

Tord also discussed that the BizTalk team is working on several new things already, but he could not announce anything new at the moment. We are all very anxious to hear what will come in the next Feature Pack!

BizTalk Server Fast & Loud - Sandro Pereira

Fast and loud: a session about BizTalk performance optimizations. The key takeaway is that you need to tune your BizTalk environments, beyond a default installation, if you want to achieve really high throughput and low latency. Sandro pointed out that performance tuning must be done on three levels: SQL Server, BizTalk Server and hardware.

SQL Server is the heart of your BizTalk installation and the performance heavily depends on its health. The most critical aspect is that you need to ensure that the SQL Agent jobs are up and running. The SQL Agent jobs keep your MessageBox healthy and prevent your DTA database from getting flooded. Treat the BizTalk databases as a black box: don't create your own maintenance plans, as they might jeopardize performance and you'll end up with unsupported databases. Besides that, he mentioned that you should avoid large databases and that it is always preferable to go with dedicated SQL resources for BizTalk.

Performance tuning on the BizTalk Server level is mostly done by tuning and configuring host instances. You should have a balanced strategy for assigning BizTalk artifacts to the appropriate hosts. A dedicated tracking host is a must-have in every BizTalk environment. Be aware that there are also configuration settings at host (instance) level, of which the polling interval setting provides the quickest performance win to reduce latency.

It's advised to take a look at all the surrounding hardware and software dependencies. Your network should provide high throughput, the virtualization layer must be optimized and disks should be separated and fast.

These recommendations are documented in the Codit best practices and it's also part of our BizTalk training offering.

BizTalk Health Check – What and How? - Saffieldin Ali

After all the technical and conceptual sessions, it is good to be reminded that existing BizTalk environments and solutions need to be monitored properly to assure a healthy BizTalk platform and to proactively maximize both reliability and performance. Identifying threats and issues lowers or even avoids downtime in case of a disaster.
 
Microsoft's Saffieldin Ali shared his own experience, including various quotes that he collected throughout the years.
 
When visiting and interviewing customers, Ali has a list of red flags which, without even examining the environments, indicate that BizTalk may not be as healthy as you would want it to be. Discovering that customers have their own procedures for backups, a lack of documentation of a BizTalk environment, or not having the latest updates installed can be a sign of bad configuration. Any of these can cause issues in the future, affect operations and disrupt business.
 
To detect these threats, Ali explained how you can use tools like BizTalk Health Monitor (BHM), Performance Analysis of Logs (PAL) and Microsoft Baseline Security Analyzer (MBSA). He also showed us that BHM has two modes: a monitoring mode, which should be used as a basic monitoring tool, and a reporting mode, which reports on the health of a BizTalk environment.

Incorporating the use of these tools in your maintenance plan is definitely a best practice every BizTalk user should know about!

The Hitchhiker's Guide to Hybrid Connectivity - Dan Toomey

In the first session after the afternoon break, Dan Toomey presented the different types of hybrid connectivity that allow us to easily set up secure connections between systems.

The network-based options are Azure Virtual Network (VNET) with integration for web and mobile apps, and VNET with API Management. The latter has all the advantages of APIM, but with an added layer of security. The non-network-based options are WCF Relay, Azure Relay Hybrid Connections and the On-Premises Data Gateway.

The concept of WCF Relay is based on a secured listener endpoint in the cloud, which is opened via an outbound connection from within a corporate network. Clients send messages via the listener's endpoint, without the receiving party having to make any changes to the corporate firewall.

WCF Relay, which has the advantage of being the cheapest option, works on the application layer, whereas Hybrid Connections (HC) work on the transport layer. HC rely on port forwarding and work cross-platform. They are set up in Azure (Service Bus) and connect to the HC Manager, which is installed on premises.

The On-Premises Data Gateway acts as a bridge between Azure PaaS and on premises resources, and works with connectors for Logic Apps, Power Apps, Flow & Power BI.

In the end, Dan went through some scenarios to illustrate which relay is the better fit for specific situations. Being a big fan of Hybrid Connections, he often picked the Hybrid Connection as the preferred solution.

Dan finally mentioned that he has a Pluralsight training that goes into this topic. Although a bit dated since it also discusses BizTalk Services, the other material is still relevant.

Unlocking Azure Hybrid Integration with BizTalk Server - Wagner Silveira

Why should we use BizTalk Server and Azure together? That is the question Wagner Silveira kicked off his talk with.

He then talked about the fact that, if you are working on a complex scenario, you may want to use BizTalk Server if there are multiple systems you wish to call on premises. If there are multiple cloud endpoints to interface with, you might want to base the solution on Azure components. The goal being to avoid creating a slingshot solution with multiple roundtrips between on premises and cloud.
Since most organizations still have on premises systems, they can use BizTalk Server to continually get value out of their investments, and to continue leveraging the experience which developers and support teams have acquired.

He went on to talk about the available options that are available to connect to Azure. Wagner gave an overview of these options, in which he discussed Service Bus, Azure WCF Relay, App Services, API Management and Logic Apps.
When discussing Service Bus for example, he talked about how Service Bus allows full content based routing and asynchronous messaging. The latter would allow you to overcome unreliable connectivity, allow for throttling into BizTalk Server and multicasting scenarios from BizTalk to multiple subscribers.

Next he spoke about WCF-Relay. He talked about some of the characteristics of this option, stating that it supports both inbound and outbound communication based on dynamic relay, which is optimized for XML and supports ACS and SAS Security. WCF-Relay also has REST-support, which can be used to expose REST-services as well. You can then use WCF-Relay to publish for either inbound or outbound communication. Outbound communication is generally allowed by default, inbound communication will require network changes. Finally, you can also define outbound headers to support custom authentication.

A couple of typical scenarios for inbound WCF Relay that Wagner gave as examples were: real-time communication, exposing legacy or bespoke systems, and minimizing the surface area (no "swiss cheese" firewall).
Examples of outbound scenarios are: leveraging public APIs and shifting compute to the cloud (for batch jobs, for example), which allows us to minimize the BizTalk infrastructure footprint.

Next up was the Logic Apps adapter for BizTalk Server. Scenarios for using this solution would include extending workflows into Azure (think of connecting BizTalk Server to SalesForce for example). Another example would be exposing on premise data to Logic Apps.
For flows from Logic Apps into BizTalk on the other hand, it allows for securing internal systems, pre-validating messages and leveraging on premises connectors to expose legacy/bespoke systems.

The main takeaway for this session is that you should get to know the tools available, understand the sweet spots and know what to avoid. Not only from a technology and functional point of view, but from a pricing perspective as well.

There are many ways to integrate… Mix, match, and experiment to find the balance!

From Zero to App in 45 minutes (using PowerApps + Flow) - Martin Abbott

It is hard to give an overview of the last session, by Martin Abbott about PowerApps, since Martin challenged the "demo gods" by making it a 40-minute demo with only 3 slides. A challenging but interesting session, where Martin created a PowerApps app using some entities in the Common Data Service. He then connected PowerApps to Microsoft Flow and created a custom connector to be consumed as well, demonstrating the power of the tools. As one of the "founding fathers" of the Global Integration Bootcamp, he also announced the date for the next #GIB2018 event: it will take place on March 24th, 2018.

 

Thank you for reading our blog post, feel free to comment with your feedback. Keep coming back, since there will be more blog posts to summarize the event and to give you some recommendations on what to watch when the videos are out.

 

This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community

Posted on Tuesday, June 27, 2017 8:25 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 2 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Microsoft IT: journey with Azure Logic Apps - Padma/Divya/Mayank Sharma

In this first session, Mayank Sharma and Divya Swarnkar talked us through Microsoft’s experience implementing their own integrations internally. We got a glimpse of their approach and the architecture of their solution.

Microsoft uses BizTalk Server and several Azure services like API Management, Azure Functions and Logic Apps, to support business processes internally.
They run several of their business processes on Microsoft technologies (the "eat your own dog food"-principle). Most of those business processes now run in Logic App workflows and Divya took the audience through some examples of the workflows and how they are composed.

Microsoft has built a generic architecture using Logic Apps and workflows. It is a great example of a decoupled workflow, which makes it very dynamic and extensible. It intensively uses the Integration Account artifact metadata feature.

They also explained how they achieve testing in production. They can, for example, route a percentage of traffic via a new route, and once they are comfortable with it, they switch over the remaining traffic. Divya did mention, however, that they will re-evaluate how to continue doing this in the future, now that the Logic Apps drafts feature has been announced.

For monitoring, Microsoft Operations Management Suite (MOMS) is used to provide a central, unified and consistent way to monitor the solution.

Divya gave some insights on their DR (disaster recovery) approach to achieve business continuity. They are using Logic Apps to keep their Integration Accounts in sync between active and passive regions. BizTalk server is still in use, but acts mostly as the proxy to multiple internal Line-of-Business applications. 

All in all, a session with some great first-hand experience, based on Microsoft using their own technology.
Microsoft IT will publish a white paper in July on this topic. A few Channel9 videos are also coming up, where they will share details about their implementation and experiences.

Azure Logic Apps - Advanced integration patterns - Jeff Hollan/Derek Li

Jeff Hollan and Derek Li are back again with yet another Logic Apps session. This time they are talking about the architecture behind Logic Apps. As usual, Jeff is keeping everyone awake with his viral enthusiasm!

A very nice session that explained that the Logic Apps architecture consists of 3 parts:

The Logic Apps Designer is a TypeScript/React app. This contained app can run anywhere e.g.: Visual Studio, Azure portal, etc... The Logic Apps Designer uses OpenAPI (Swagger) to render inputs and outputs and generate the workflow definition. The workflow definition can be defined as being the JSON source code of the Logic App.

Secondly, there is the Logic App Runtime, which reads the workflow definition and breaks it down into a composition of tasks, each with its own dependencies. These tasks are distributed by the workflow orchestrator to workers which are spread out over any number of (virtual) machines. Depending on the worker - and its dependencies - tasks run in parallel to each other. e.g. a ForEach action which loops a 100 times might be executed on 100 different machines.

This setup makes sure any of the tasks get executed AT LEAST ONCE. Using retry policies and controllers, the Logic App Runtime does not depend on any single (virtual) machine. This architecture allows a resilient runtime, but also means there are some limitations.

And last, but not least, we have the Logic Apps Connectors, connecting all the magic together.
These are hosted and run separately from the Logic App or its worker. They are supported by the teams responsible for the connector. e.g. the Service Bus team is responsible for the Service Bus connectors. Each of them has their own peculiarities and limits, all described in the Microsoft documentation.

Derek Li then presented an interesting demo showing how exceptions can be handled in a workflow using scopes and the "RunAfter" property, which can be used to execute different actions if an exception occurs. He also explained how retry policies can be configured to determine how many times an action should retry. Finally, Jeff gave an overview of the workflow expressions and wrapped up the session explaining how expressions are evaluated inside-out.

Enterprise Integration with Logic Apps - Jon Fancey

Jon Fancey, Principal Program Manager at Microsoft, took us on a swift ride through some advanced challenges when doing Enterprise Integration with Logic Apps.

He started the session with an overview and a demo where he showed how easy it is to create a receiver and sender Logic App to leverage the new batch functionality. He announced that, soon, the batching features will be expanded with Batch Flush, Time-based batch-release trigger options and EDI batching.

Next, he talked about Integration Accounts and all of its components and features. He elaborated on the advanced tracking and mapping capabilities.
Jon showed us a map that used XSLT parameters and inline C# code processing. He passed a transcoding table into the map as a parameter and used C# to do a lookup/replace of certain values, without having to call back to a database for each record/node. Jon announced that the mapping engine will be enriched with BOM handling and the ability to specify alternate output formats like HTML or text instead of XML only.

The most amazing part of the session was when he discussed the tracking and monitoring capabilities. It’s as simple as enabling Azure Diagnostics on your Integration Account to have all your tracking data pumped into OMS. It’s also possible to enable property tracking on your Logic Apps. The Operations Management Suite (OMS) centralizes all your tracking and monitoring data.

Jon also showed us an early preview of some amazing new features that are being worked on. OMS will provide a nice cross-Logic App monitoring experience. Some of the key features being:

  • Overview page with Logic App run summary
  • Drilldown into nested Logic-App runs
  • Multi-select for bulk download/resubmit of your Logic App flows.
  • New query engine that will use the powerful Application Insights query language!

We’re extremely happy and excited about the efforts made by the product team. The new features shown and discussed here prove that Microsoft truly listens to the demands of their customers and partners.

Bringing Logic Apps into DevOps with Visual Studio - Jeff Hollan/Kevin Lam

The last Microsoft session of Integrate 2017 was the second time Kevin Lam and Jeff Hollan got to shine together. The goal of their session was to enlighten us about how to use some of the tooling in Visual Studio for Logic Apps.

Kevin took to the stage first, starting with a small breakdown of the Visual Studio tools that are available:

  • The Logic Apps Designer is completely integrated in a Visual Studio "Resource Group Project".
  • You can use Cloud Explorer to view deployed Logic Apps
  • Tools to manage your XML and B2B artifacts are also available

The Visual Studio tools generate a Resource Group deployment template, which contains all resources required for deployment. These templates are used, behind the scenes, by the Azure Resource Manager (ARM). Apart from your Logic Apps, this also includes auto-generated parameters, API connections (to, for example, Dropbox, Facebook, ...) and Integration Accounts. This file can be checked in to Source Control, giving you the advantage of CI and CD if desired. The goal is to create the same experience in Visual Studio as in the Portal.

Jeff then started off by showing the Azure Resource Explorer. This is an ARM catalog of all the resources available in your Azure subscription.

Starting with ARM deployment templates might be a bit daunting at first, but by browsing through the Azure Quickstart Templates you can get the hang of it quickly. It's easy to create a single template and deploy that parameterized template to different environments. By using a few tricks, like Service Principals to automatically get OAuth tokens and the resourceId() function to get the resourceId of a freshly created resource, you are able to automate your deployment completely.
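
To show how one parameterized template serves several environments, the sketch below is a hypothetical environment-specific parameter file (all values invented); one such file per environment is the only thing that changes between deployments.

```python
# Sketch of an environment-specific parameter file for the template above
# (illustrative values), as a Python dict mirroring the JSON.
test_parameters = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "logicAppName": {"value": "ordersprocessing-test"},
        "dropboxConnectionName": {"value": "dropbox-test"},
    },
}
```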

What's there & what's coming in BizTalk360 & ServiceBus360 - Saravana Kumar

To the tune of "Rocky", Saravana Kumar entered the stage to talk about the latest updates regarding BizTalk360 and ServiceBus360.

He started by explaining the standard features of BizTalk360 around operations, monitoring and analytics.
Since May 2011, 48 releases of BizTalk360 have been published, each adding 4 or 5 new features.

The latest release includes:

  • BizTalk Server License Calculator
  • Folder Location Monitoring for FILE, FTP/FTPS, SFTP
  • Queue Monitoring for IBM MQ
  • Email Templates
  • Throttling Monitoring

Important to note: BizTalk360 supports more and more cloud integration products, like Service Bus and Logic Apps. The goal is to have a single user interface to configure monitoring and alerting.

Similar to BizTalk360, with ServiceBus360, Kovai wants to simplify the operations, monitoring and analytics for Azure Service Bus.

Give your Bots connectivity, with Azure Logic Apps - Kent Weare

Kent Weare kicked off by explaining that the evolution towards cloud computing not only results in lower costs and elastic scaling, but also provides a lot of opportunities for your business to scale. Take advantage of the rich Azure ecosystem by automating insights, applying Machine Learning or introducing bots. He used the example of an energy generation shop, where bots help to increase competitiveness and the productivity of the field technicians.

Our workforce is changing! Bring insights to users, not the other way around.

The Bot Framework is part of the Cognitive Services offering and can leverage its various vision, speech, language, knowledge and search features. Besides that, the Language Understanding Intelligent Service (LUIS) ensures your bot can smoothly interact with humans. LUIS is used to determine the intent of a user and to discover the entity on which the intent acts. This is done by creating a model that is used by the chat bot. After several iterations of training the model, you can really give your applications a human "face".
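
As a rough illustration of that intent/entity split, the sketch below shows the kind of result LUIS hands back to a bot and how the bot might branch on it. The query, intent and entity names are invented for illustration.

```python
# Illustrative sketch of a LUIS result: the top-scoring intent plus the entities
# the intent acts on (query, intent and entity names are made up).
luis_result = {
    "query": "book a field technician for turbine 7 tomorrow",
    "topScoringIntent": {"intent": "ScheduleTechnician", "score": 0.94},
    "entities": [
        {"entity": "turbine 7", "type": "Asset"},
        {"entity": "tomorrow", "type": "builtin.datetimeV2.date"},
    ],
}

# A bot would typically branch on the intent and pass the entities to a backend call,
# for example a Logic App exposed through Azure API Management.
if luis_result["topScoringIntent"]["score"] > 0.7:
    print("Intent:", luis_result["topScoringIntent"]["intent"])
```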

Kent showed us two impressive demos of leveraging the Bot Framework, in which both Microsoft Teams and Skype were used to interact with the end users. All backend requests went through Azure API Management, which invoked Logic Apps reaching out to multiple backend systems: SAP, ServiceNow, MOC, SQL and QuadrigaCX. Definitely check out this session when the videos are published!

Empowering the business using Logic Apps - Steef-Jan Wiggers

Previous sessions about Logic Apps mainly focused on the technical part and possibilities of Logic Apps.
Steef-Jan Wiggers took a step back and looked at the potential of Logic Apps from a customer perspective.

Logic Apps is becoming a worthy player in the iPaaS hemisphere. Microsoft started an entirely new product in 2015, which has since matured to its current state. Although it is still being improved on a weekly basis, it seems it is not yet considered a rock-solid integration platform.
Customers, and even Gartner in its Magic Quadrant, often make the mistake of comparing Logic Apps with the functionality we are used to from products like BizTalk Server. They are, however, totally different products. Logic Apps is still evolving and should be considered from a broader perspective, as it is intended to be used together with other Azure services.
As Logic Apps continues to mature, it is quickly becoming "enterprise integration"-ready.

Steef-Jan ended his session by telling us that Logic Apps is a flexible and easy way to deliver value at the speed of the business and will definitely become a central product in the iPaaS market.

Logic App continuous integration and deployment with Visual Studio Team Services - Johan Hedberg

In the last session before the afternoon break, Johan Hedberg outlined the scenario for a controlled build and release process for Logic Apps. He described a real-life use case, with 3 typical personas you encounter in many organizations. He stressed the importance of having a streamlined approach and a shared team culture/vision. With the available ARM templates and Visual Studio Team Services (VSTS), you have all the necessary tools to set up continuous integration (CI) and continuous deployment (CD).

The session was very hands-on and to the point. A build pipeline was shown that prepared the necessary artifacts for deployment. Afterwards, the release process kicked off, deploying a Logic App and an Azure Function, and adding maps and schemas to a shared Integration Account. Environment-specific parameter files ensured deployments tailored to each environment. VSTS can cover the complete ALM story for your Logic Apps, including multiple release triggers, environment variables and approval steps. This was a very useful talk and demo, because ALM and governance of your Azure applications are key if you want to deliver professional solutions.

Integration of Things. Why integration is key in IoT solutions? - Sam Vanhoutte

The penultimate session of the day was held by our very own CTO, Sam Vanhoutte. Sam focused his presentation on sharing some of the things Codit has learned and experienced while working on IoT projects.

He started by stressing the importance of connectivity within IoT projects: "Connectivity is key" and "integration matters". Sam summarized the different connectivity types: direct connectivity, cloud gateways and field gateways, and talked about the use cases and pitfalls of each.

Another important point of Sam's talk was the difference between IoT projects during a Proof of Concept (PoC) and an actual project implementation. During a PoC, it's all about showing functionality, but in a real implementation the focus shifts to robustness, security and connectivity.
Sam also covered the different responsibilities and activities regarding gateways. He talked about the Nebulus IoT gateway and his ideas and experiences with it.

But IoT is not only about the cloud: Sam shared some insights on Azure IoT Edge, Microsoft's solution in this space. Azure IoT Edge will be able to run within the device's own perimeter, but it is not available yet, not even in private preview. It can run on a variety of operating systems, like Windows or Linux, even on devices as small as a Raspberry Pi, or smaller. The session was concluded with the quote "Integration people make great IoT Solutions".

Be sure to check out our two IoT white-papers.

Also be sure to check out our IoT webinar, accessible via the Codit YouTube channel.

IoT - Common patterns and practices - Mikael Hakansson

Mikael Hakansson started the presentation by introducing IoT Hub, the Azure IoT Suite and what these represent in the integration world. Azure IoT Hub enables bi-directional connectivity between devices and the cloud for millions of devices, allowing communication in a variety of patterns and with reliable command & control.

A typical IoT solution consists of a cold path, which is based on persistent data, and a hot path, where the data is analyzed on the fly. About a year ago, the device twin concept was introduced in IoT Hub. A twin consists of tags, a desired state and a reported state, effectively maintaining device state information (metadata, configuration and conditions).
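
As a rough illustration, the sketch below shows the shape of a device twin with its tags, desired state and reported state; the device id and values are invented.

```python
# Sketch of a device twin (illustrative values): tags set from the solution back end,
# a desired state pushed from the cloud, and a reported state maintained by the device.
device_twin = {
    "deviceId": "thermostat-01",
    "tags": {"building": "B42", "floor": 3},        # metadata, set from the back end
    "properties": {
        "desired": {"targetTemperature": 21.5},     # what the cloud wants the device to do
        "reported": {                               # what the device says it is doing
            "currentTemperature": 19.8,
            "firmware": "1.0.3",
        },
    },
}
```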

Mikael Hakansson prepared some demos in which a thermometer and a thermostat were simulated. The demo began with a simulated thermometer whose changing temperature was sent to Power BI via IoT Hub and Stream Analytics. After that, an Azure Function was able to send notifications back to the device. To simulate the thermostat, a device twin with a desired state was used to control the temperature in the room.
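
To make the thermostat part concrete, here is a small self-contained sketch of the control logic: compare the desired temperature (which would come from the device twin) with a simulated reading and build the state the device would report back. Names and values are illustrative; no actual IoT Hub connection is made.

```python
# Self-contained sketch of the thermostat demo logic (no real IoT Hub connection).
import random


def read_thermometer() -> float:
    """Simulate a fluctuating room temperature reading."""
    return round(random.uniform(18.0, 24.0), 1)


def control_thermostat(desired: dict, current_temp: float) -> dict:
    """Return the reported-state patch a device would send back to IoT Hub."""
    target = desired.get("targetTemperature", 21.0)
    heating = current_temp < target
    return {"currentTemperature": current_temp, "heating": heating}


desired_state = {"targetTemperature": 21.5}  # would come from the device twin's desired properties
reading = read_thermometer()                 # would be telemetry sent to IoT Hub / Stream Analytics
print(control_thermostat(desired_state, reading))
```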

 

Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community