
Codit Blog

Posted on Friday, July 7, 2017 12:00 PM

by Stijn Moreels

This is probably the first post in the Test Infected series. The term “test infected” was first used by Erich Gamma and Kent Beck in their article.

“We have been amazed at how much more fun programming is and how much more aggressive we are willing to be and how much less stress we feel when we are supported by tests.”

The term "Test-Driven Development" was something I heard in my first steps of programming. But it was when reading different books about the topic that I really understood what they meant.

The Clean Coder by Robert C. Martin talks about the courage and the level of certainty that Test-Driven Development brings. Test-Driven Development: By Example by Kent Beck taught me the mentality behind the practice, and Gerard Meszaros, with his book xUnit Test Patterns, showed me the many practices that not only improved my daily development, but also my Test-First mindset. All these people have inspired me to learn more about Test-Driven Development and the Test-First mindset. Seeing the relationships between different visions and combining them the way I see it: that's the purpose of my Test Infected series.

In this part of the Test Infected series, I will talk about Test Doubles. These elements are used as "stand-ins" when exercising our SUT (System Under Test). Test Doubles can be DOCs (Depended-On Components), but also other elements we need to inject to exercise the SUT.

I find it interesting to examine not only the theoretical concept of a Test Double, but also how we can use it in our daily programming.


No, a Stub isn't a Mock; no, a Dummy isn't a Fake. There are differences in the way we test our code. Some Test Doubles control the (indirect) inputs of our SUT, others verify its indirect outputs. Each type has a clear boundary and a clear reason to use it.

But be careful: overusing these Test Doubles leads to Over-Specified Software, in which the test is Tightly Coupled to the Fixture Setup, which results in more refactoring work for your tests (sometimes more than for the production code itself).

Test Code must be as clear, simple and maintainable… as Production Code – maybe even more.

Dummy Object

We use a Dummy Object if we want to inject some information that will never be used. null (C#), None (Python), … are good examples; but even "ignored data" strings are valid Dummy Objects. If we're talking about actual objects, we could throw exceptions when the methods of that object are called. This way we make sure that the object isn't used.

We introduce these kinds of objects because the signature of the object under test requires some information. But if this information is not of interest to the test, we can introduce a Dummy Object so the test reader only sees the test information that matters.

We must introduce custom Dummy Objects if the SUT doesn't allow us to send null / None.
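To make this concrete, here is a minimal C# sketch of such a custom Dummy Object; the IMailClient interface, the UserRegistration SUT and the xUnit test are hypothetical names used purely for illustration.

```csharp
using System;
using Xunit;

// Hypothetical dependency required by the SUT's constructor.
public interface IMailClient
{
    void Send(string to, string subject, string body);
}

// Dummy Object: it satisfies the signature, but must never be used.
// Throwing guards against calls that aren't supposed to happen.
public sealed class DummyMailClient : IMailClient
{
    public void Send(string to, string subject, string body) =>
        throw new InvalidOperationException("Dummy was called: this test doesn't expect any mail to be sent.");
}

public class UserRegistrationTests
{
    [Fact]
    public void Registers_User_Without_Sending_Mail()
    {
        // The mail client is required by the signature but irrelevant for this test.
        var sut = new UserRegistration(new DummyMailClient());

        var result = sut.Register("ignored user name");

        Assert.True(result.Succeeded);
    }
}
```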

Test Stub

In the literature, I found two different types of Test Stubs. The first returns or exposes some data that can be used to validate the actual outcome of the System Under Test (SUT); this one is called a Responder. The second throws exceptions when the SUT interacts with the Stub (by calling methods, requesting data, …) so that the Unhappy Path is tested; this one is called a Saboteur.

But I encountered a possible third type, which I sometimes use in test cases. I like to call it a Sink, but it's actually just a Null Object. This type of Stub just acts as a "sink", which means that the Stub isn't doing anything with the given data. You could use a Sink in situations where you must, for example, inject a "valid" object, but the test case doesn't really care about what happens outside the SUT (in what cases does it?).

By introducing such an Anonymous Object, you let the test reader know that the object you send to the SUT is not of any value for the test.

This kind of "stubbing" can also be accomplished by introducing a Test-Specific Subclass and overriding members with empty, expected or invalid implementations to test all paths of the SUT.

The following example shows how the Basket gets calculated with a valid (Anonymous) and an invalid (Saboteur) product. I like to call valid items "filled" or "anonymous" to reference the fact that I don't care what they contain or how they are structured. You can use "saboteur" to indicate that you have an "invalid" product in place that throws exceptions when it gets called.
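A minimal C# sketch of such a setup; the Basket and IProduct types are hypothetical, and the sketch assumes a Basket that skips unavailable products when calculating the total.

```csharp
using System;
using Xunit;

public interface IProduct
{
    decimal Price { get; }
}

// "Filled"/"anonymous" product: valid, but we don't care what it contains.
public sealed class AnonymousProduct : IProduct
{
    public decimal Price => 10m;
}

// Saboteur: throws as soon as the SUT touches it, so the unhappy path is exercised.
public sealed class SaboteurProduct : IProduct
{
    public decimal Price => throw new InvalidOperationException("Product is unavailable.");
}

public class BasketTests
{
    [Fact]
    public void Calculates_Total_And_Skips_Unavailable_Products()
    {
        var sut = new Basket(new IProduct[]
        {
            new AnonymousProduct(),
            new AnonymousProduct(),
            new SaboteurProduct() // the single invalid item
        });

        decimal total = sut.CalculateTotal();

        Assert.Equal(20m, total);
    }
}
```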

I don't know why, but sometimes, especially in this case where you have several valid items and a single invalid one, the setup reminds me of a pattern called Poison Pill. This pattern is used in situations where you want to stop a running task by placing a "poison pill" in the flow.

This type of Stub isn't a Dummy Object, because its methods, properties, etc. are actually called. Also note that there's a difference between a Saboteur and a Dummy Object which throws exceptions when called: the Saboteur is used to test all the paths of the SUT, whereas the Dummy Object guards against calls that aren't supposed to happen (and results in a test failure when they do).

You would be amazed what a Stub can do for your design. Some developers even use this Stub later as the actual production implementation. This is, for me, the ultimate example of Incremental Design. You start by writing your tests and incrementally start writing classes that are dependencies of your SUT. These classes will eventually evolve into actual production code.

Now, here is an example of a Stub. The beauty of Functional Programming is that we can use Object Expressions. This means we can inline our Stub in our test.

Java has a similar feature: you can define anonymous (inline) classes and override only the methods you exercise during the test run.


  • To decrease Test Duplication, we can define Pseudo Objects for our Stubs. This means we define a default implementation that throws exceptions for any called member (like a Dummy, for example). This allows us to override only those members we are interested in for our Stub.
  • During my first experience with Kent Beck's Test-Driven Development, I came across the Self-Shunt idea. This can actually be any Test Double, but I use it most of the time as a Stub. Here, we use the Test Class itself as the Test Double. Because we don't create an extra class and we specify the return value explicitly, we have very clear Code Intent. Note that I only use this practice if the Test Double can't be reused somewhere else. Sometimes your Test Double starts as a Self-Shunt but grows into a full-blown Stub (a minimal sketch follows below).
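Here is that Self-Shunt sketch in C#; the IDiscountPolicy interface and the PriceCalculator SUT are hypothetical.

```csharp
using Xunit;

public interface IDiscountPolicy
{
    decimal DiscountFor(string customerId);
}

// Self-Shunt: the test class itself acts as the Test Double,
// so the configured return value sits right next to the assertion.
public class PriceCalculatorTests : IDiscountPolicy
{
    public decimal DiscountFor(string customerId) => 0.10m;

    [Fact]
    public void Applies_Discount_From_Policy()
    {
        var sut = new PriceCalculator(discountPolicy: this);

        decimal price = sut.FinalPrice(basePrice: 100m, customerId: "ignored");

        Assert.Equal(90m, price);
    }
}
```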

Test Spy

Ok, Stubs are cool – very cool. The Saboteur is especially useful to test the unhappy/rainy paths throughout the SUT. But there’s a downside to a pure stub: we cannot test the Indirect Output of our SUT.

That’s where the Test Spy comes in. With this Test Double, we can capture the output calls (the Indirect Output) of our SUT for later verification in our test.

Most of the time, this is interesting if the SUT doesn't return anything useful that we can use to verify whether the test was successful. We could just write the ACT statement and no ASSERT statement; the test would then only fail automatically if an exception is thrown while exercising the SUT.

But that is not a very explicit assertion AND (more importantly), if there are any changes to the SUT, we cannot fully verify that the change in behavior doesn't break our software.

When developing a logging framework (for example); you will have a lot of these situations because a log-function wouldn’t return anything (the log framework I came across didn’t). So, if we only get a void (in C#), how can we verify if our log message is written correctly?
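A minimal C# sketch of a Test Spy for exactly this situation; the ILogger abstraction and the OrderService SUT are hypothetical.

```csharp
using System.Collections.Generic;
using Xunit;

public interface ILogger
{
    void Log(string message);
}

// Test Spy: records the Indirect Output of the SUT so the assert-phase can inspect it.
public sealed class SpyLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();

    public void Log(string message) => Messages.Add(message);
}

public class OrderServiceTests
{
    [Fact]
    public void Logs_Warning_When_Ordered_Quantity_Exceeds_Stock()
    {
        var spy = new SpyLogger();
        var sut = new OrderService(spy);

        sut.Place(new Order(quantity: 1000));

        Assert.Contains(spy.Messages, m => m.Contains("exceeds stock"));
    }
}
```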

When working in an asynchronous environment; Test Spies also can be useful. Testing asynchronous code will always have some blocking system in place if we want to test Indirect Outputs – so a Test Spy is the ideal solution.

By hiding the blocking mechanism, we have a clear test: the test reader knows exactly what the purpose of the test is, what the SUT should do to make the test pass, and what the DOC (Depended-On Component) does in the background to make the right verification in our assert-phase.

All of this makes sure that we have a true positive test.

The time-out is (of course) context-specific – try to limit it to the very minimum; 5 seconds is a very long time for a single unit test to pass, but not for an integration test.
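A minimal C# sketch of such a spy, hiding the blocking mechanism behind a wait method with a time-out; the IMessageHandler interface and the MessageDispatcher SUT are hypothetical.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

public interface IMessageHandler
{
    Task HandleAsync(string message);
}

// Spy for asynchronous scenarios: the blocking mechanism stays inside the spy,
// so the test only expresses "wait until the message arrived, or time out".
public sealed class SpyMessageHandler : IMessageHandler
{
    private readonly ManualResetEventSlim _received = new ManualResetEventSlim(false);

    public string LastMessage { get; private set; }

    public Task HandleAsync(string message)
    {
        LastMessage = message;
        _received.Set();
        return Task.CompletedTask;
    }

    public bool WaitForMessage(TimeSpan timeout) => _received.Wait(timeout);
}

public class MessageDispatcherTests
{
    [Fact]
    public async Task Dispatcher_Forwards_Message_To_Handler()
    {
        var spy = new SpyMessageHandler();
        var sut = new MessageDispatcher(spy);

        await sut.DispatchAsync("hello");

        Assert.True(spy.WaitForMessage(TimeSpan.FromSeconds(5)), "message was not handled in time");
        Assert.Equal("hello", spy.LastMessage);
    }
}
```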

Mock Object

If we want to verify Indirect Outputs right away, and not at the end of the test run (like a Test Spy, whose "was called" checks live in the assert-phase), we can use a Mock Object.

There's a subtle difference between a Mock Object and a Test Spy. A Spy will capture its observations so that they can be verified later (in the assert-phase), while a Mock will make the test fail the moment it encounters something that was not expected.

Of course, combinations can be made, but there’s something that I would like to warn you about the Mock Object. Actually, two somethings.

1) One must be careful what he/she mocks and what he/she exercises in the SUT. If we mock too much, or mock the wrong parts, how do we verify that our SUT will survive in the "real world", where there aren't any Mock Objects that return just the data the SUT expects?

2) One must be careful not to use Mock Objects for all of his/her tests. That results in Tight Coupling between the test cases and the SUT. Especially when mocking frameworks are used, mocks tend to be overused. Try to imagine that something must change in your SUT, and then count how many Mock Objects you would have to change to get that change in.

Tests are there to help us, not to frustrate us.

These two warnings can also be applied to Test Stubs, for example when we specify too much information in our Indirect Input. The difference with a Mock is that we also validate the Indirect Output immediately, and not in our assert-phase. Tight Coupling and overusing any pattern is a bad practice in my opinion. So, always start with the smallest: can you use a Dummy? Then a Stub? Maybe we can just Spy on that? OK, now we can use a Mock.

Look at the following example: we have a function that transforms a given input to an output, but only after we asserted on the expected input. This is a good example of how we assert directly and give the expected output to our SUT.
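A minimal hand-rolled Mock in C# along those lines; the ITranslator interface and the TextPipeline SUT are hypothetical.

```csharp
using Xunit;

public interface ITranslator
{
    string Translate(string text);
}

// Hand-rolled Mock: it verifies the Indirect Output (the call and its argument)
// the moment it happens, instead of in the assert-phase.
public sealed class MockTranslator : ITranslator
{
    private readonly string _expectedInput;
    private readonly string _output;

    public MockTranslator(string expectedInput, string output)
    {
        _expectedInput = expectedInput;
        _output = output;
    }

    public string Translate(string text)
    {
        Assert.Equal(_expectedInput, text); // fails the test right here when unexpected
        return _output;
    }
}

public class TextPipelineTests
{
    [Fact]
    public void Sends_Uppercased_Text_To_Translator()
    {
        var mock = new MockTranslator(expectedInput: "HELLO", output: "HALLO");
        var sut = new TextPipeline(mock);

        string result = sut.Process("hello");

        Assert.Equal("HALLO", result);
    }
}
```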

Fake Object

The last Test Double I would like to discuss is the Fake Object. This Test Double doesn't always need to be configured. A "fake" is actually a full-implementation object that implements the functionality in such a way that the test can use it during the test run.

A perfect example is the in-memory datastore. We implement all the datastore operations in memory, so we don't need a fully configured datastore in our tests.
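A minimal C# sketch of such a Fake; the ICustomerRepository interface and the Customer type are hypothetical.

```csharp
using System;
using System.Collections.Generic;

public sealed class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    void Add(Customer customer);
    Customer GetById(Guid id);
}

// Fake Object: a working, full implementation of the repository, entirely in memory,
// so the tests don't need a configured datastore.
public sealed class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<Guid, Customer> _store = new Dictionary<Guid, Customer>();

    public void Add(Customer customer) => _store[customer.Id] = customer;

    public Customer GetById(Guid id) =>
        _store.TryGetValue(id, out var customer) ? customer : null;
}
```

Any test can new up this repository and exercise the SUT against it as if a real datastore were present.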

Yes, of course you must test the datastore connection, but with a Fake Object in place you can limit the tests that connect to the "real" database to a minimum and run all the other tests with a "fake".

Your first reaction for external components should be to check if you can fake the whole external connection. Tests that use in-memory storage rather than the file system, datastore, network connectivity… will run a lot faster – and therefore will be run a lot more by the developers.

This type of Test Double is different from the others, in a way that there is no verification in place. This type of object “just” replaces the whole implementation the SUT is dependent on.


Honestly, I think the reason why I wrote this blog post is that I hear people always talk about "mocks" instead of using the right words. Like Martin Fowler says in his blog post: "Mocks aren't Stubs".

I know that in different environments people use different terms. A Pragmatic Programmer may use other words for some Test Doubles (or the same words for different ones) than someone from the Extreme Programming background. But it is better to state what you mean with the right terminology than to call everything a "mock".

What I also wanted to show is that a Test Double isn't "bound" to Object-Oriented Programming or Functional Programming. A Test Double is a concept (the term was coined by Gerard Meszaros) that alters the behavior of your SUT in such a way that the test can verify the expected outcome.

It’s a concept, and concepts can be used everywhere.

Categories: Technology
written by: Stijn Moreels

Posted on Monday, July 3, 2017 11:52 AM

Toon Vanhoutte by Toon Vanhoutte

Azure Service Bus is a very robust and powerful message broker. As with every Azure service, you need to be aware of its strengths and limitations. The most important limitation of Azure Service Bus is the message size: the Standard tier allows messages up to 256kB, while the Premium tier allows messages up to 1 MB. A way to overcome this limit is implementing the claim check pattern. This blog post explains how you can use this pattern within Logic Apps to send/receive large messages to/from Azure Service Bus queues.

Claim Check Pattern

The claim check pattern is described over here. The pattern aims to reduce the size of the message being exchanged, without sacrificing information content. In a nutshell, this is how it works:

  1. The sender uploads the payload to an external data store, to which the receiver also has access.
  2. The sender sends a message, including a reference to the uploaded payload, to the receiver.
  3. The receiver downloads the payload, using the reference extracted from the exchanged message.

A real-life example of this pattern is the way WeTransfer is used to email large data.

  1. The sender uploads the large data to the WeTransfer data store
  2. An email, including a download link, is sent to the receiver
  3. The receiver clicks the download link and receives the large data

Claim Check API App

Logic Apps and Azure Service Bus work perfectly together. If we can overcome the message size limit of 256kB, a whole bunch of new scenarios opens up. Azure Blob Storage can perfectly take the role of external data store, and we can leverage its SAS tokens to give read access to the receiver.

As a proof of concept, I created a custom API App that provides this functionality. You can view and download the code here. This page also includes instructions on how to deploy and configure this API App. Are you new to creating API Apps for Logic Apps? Definitely check out this post that explains how to create a custom polling trigger and how you can leverage the cool TRex library.

This is how the API App implements the claim check pattern:

  1. The sending Logic App uploads the payload to blob storage and assigns a read-only SAS policy.
  2. The sending Logic App sends a message to a Service Bus queue, containing the blob URI (including SAS token) in the 'claimcheck-uri' header; a simplified sketch of this sender side follows below.
  3. The receiving Logic App receives the message from the queue and retrieves the blob via the URI provided in the 'claimcheck-uri' header.
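For illustration, here is a simplified C# sketch of that sender side, using the current Azure SDKs (Azure.Storage.Blobs and Azure.Messaging.ServiceBus). Only the 'claimcheck-uri' property name comes from the API App described above; the container name, SAS lifetime and method names are assumptions, and the actual API App implements this differently under the hood.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

public static class ClaimCheckSender
{
    public static async Task SendLargeMessageAsync(
        string storageConnectionString,   // must include the account key, so a SAS can be generated
        string serviceBusConnectionString,
        string queueName,
        Stream payload,
        string contentType)
    {
        // 1. Upload the payload to blob storage.
        var container = new BlobContainerClient(storageConnectionString, "claimcheck");
        await container.CreateIfNotExistsAsync();
        BlobClient blob = container.GetBlobClient(Guid.NewGuid().ToString());
        await blob.UploadAsync(payload);

        // 2. Generate a read-only SAS URI so the receiver can download the blob.
        Uri sasUri = blob.GenerateSasUri(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddDays(7));

        // 3. Send a small Service Bus message that only carries the claim check reference.
        await using var client = new ServiceBusClient(serviceBusConnectionString);
        ServiceBusSender sender = client.CreateSender(queueName);

        var message = new ServiceBusMessage { ContentType = contentType };
        message.ApplicationProperties["claimcheck-uri"] = sasUri.ToString();

        await sender.SendMessageAsync(message);
    }
}
```

The receiving side simply reads the 'claimcheck-uri' property and downloads the blob, which is what the Receive Message from Queue trigger does for you behind the scenes.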

The custom API App contains several actions and triggers. The ones relevant for this post are Send Message to Queue and Receive Message from Queue.

Send Message to Queue

The user experience of this action is very similar to the default Service Bus action. However, under the hood, the claim check pattern is applied. The following parameters are available:

  • Content: Content of the message.
  • Content Type: Content type of the message content.
  • Queue Name: Name of the queue.
  • Properties: Message properties in JSON format (optional).
  • Scheduled Enqueue Time: UTC time in MM/dd/yyyy HH:mm:ss format (optional).

Receive Message from Queue

This polling trigger is used to receive messages from the queue. When there are still messages available in the queue, the trigger will fire continuously, until the queue is empty.

As an output, this trigger provides the message content (retrieved from blob storage), the content type and the message properties. The lock token must be used to explicitly complete the message in the Logic App, as this is required to ensure at-least-once delivery.

The API App also provides a variant on this trigger to retrieve multiple messages from the queue within one batch.


API Apps are very powerful extension points to Logic Apps. In this scenario, it helped us to overcome the Service Bus message size limitation of 256kB. By implementing the claim check pattern with Azure Blob Storage, we are now capable of exchanging payloads up to 50 MB, which is the current Logic Apps message size limit!

Hope you enjoyed this one!


Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Wednesday, June 28, 2017 4:17 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 3 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Rethinking Integration - Nino Crudele

Nino Crudele was perfectly introduced as the "Brad Pitt" of integration. We will not comment on his looks, but rather focus on his ability to always bring something fresh and new to the stage!

Nino's message was that BizTalk Server has the ideal architecture for extensibility across all of its components. Nino described how he put a "Universal Framework" into each component of BizTalk. He did this to be able to improve the latency and throughput of certain BizTalk solutions, when needed and appropriate.

He also shared his view on how not every application is meant to fully exist in BizTalk Server alone. In certain situations BizTalk Server may only act as a proxy to something else. It's always important to choose the right technology for the job. As an integration expert it is important to keep up with technology and to know its capabilities, allowing for a best of breed solution in which each component fits a specific purpose e.g. Event Hubs, Redis, Service Bus, etc...

Nino did a good job delivering a very entertaining session and every attendee will forever remember "The Chicken Way".

Moving to Cloud-Native Integration - Richard Seroter

Richard Seroter presented the 2nd session of the day. He shared his views on moving to cloud-native thinking when building integration solutions. He started by comparing the traditional integration approach with the cloud-computing model we all know today. Throughout the session, Richard shared some interesting insights on how we should all consider a change in mindset and shift our solutions towards a cloud-native way of thinking.

“Built for scale, built for continuous change, built to tolerate failure”

Cloud-native solutions should be built "More Composable". Think loose-coupling, building separate blocks that can be chained together in a dynamic fashion. This allows for targeted updates, without having to schedule downtime… so "More Always-On". With a short demo, Richard showed how to build a loosely-coupled Logic App that consumed an Azure Function, which would be considered a dependency in the traditional sense. Then he deployed a change to the Azure Function - on the fly - to show us that this can be accomplished without scheduled downtime. Investing time into the design and architecture aspects of your solution pays off when it results in zero-downtime deployments.

Next, he talked about adding “More Scalability” and “More Self-Service”. The cloud computing model excels in ease of use and makes it possible for citizen developers or ad-hoc integrators to take part in creating these solutions. This eliminates the need for a big team of integration specialists, but rather encourages a shift towards embedding these specialists in cross-functional teams.

In a fantastic demo, he showed us a nice Java app that provides a self-service experience on top of BizTalk Server. Leveraging the power of the new Management API (shipped with Feature Pack 1 for BizTalk 2016 Enterprise), he deployed a functioning messaging scenario in just a few clicks, without the need of ANY technical BizTalk knowledge. Richard then continued by stating that we should all embrace the modern resources and connectors provided by the cloud platform. Extend on premises integration with “More Endpoints” by using, for example, Logic-Apps to connect BizTalk to the cloud.

The last part focused on “More Automation”, where he did not only talk about automated build and automated deployment, but also recommended creating environments via automation to achieve the highest possible levels of consistency. In another short demo, Richard showed us how he automatically provisioned a ServiceBus instance and all related Azure resources from the Cloud Foundry Service Broker CLI.

Be sure to check out the recording of this session! It has some valuable insights for everyone involved in cloud integration!

Overcoming Challenges When Taking Your Logic App into Production - Stephen W Thomas

The third session of the day was presented by Stephen W Thomas, who gave us some insights into the challenges he faced during his first Logic Apps implementation at a customer.

He split up his session in three phases, starting with the decisions that had to be taken. After a short overview of the EDI scenario he was facing and going over the available options that were considered for the implementation, it was clear that Logic Apps was the winner due to several reasons. The timeline was pretty strict, and doing custom .NET development would have taken 10 times longer than using Logic Apps. The initial investment for BizTalk, combined with the limited presence of BizTalk development skills, made Logic Apps the logical choice in this case. However, if you already use EDI in BizTalk, it probably makes sense to keep doing so, since your investment is already there.

In the second phase, he reflected on the lessons learned during the project. The architecture had to be designed with the rules of a serverless platform in mind. This includes a two-weekly release cadence that could affect existing functionality, which in turn makes it important to check the release notes. Another thing to keep in mind is the (sometimes) unpredictable pricing: where every Action in Logic Apps costs money, in BizTalk you can just keep adding expression shapes without worrying about additional cost.

In the last phase, he left us with some tips and tricks that he gained through experience with Logic Apps. "Don't be afraid to use JSON". Almost every new feature is introduced in code view first, so take advantage of it by learning to work with it. It's also good to know that a For-Each loop in Logic Apps runs concurrently by default, but luckily this behaviour can be changed to Sequential (in the code view).

BizTalk Server Deep Dive into Feature Pack 1 - Tord Glad Nordahl

Tord had a few announcements to make which were appreciated by the audience:

  • The BizTalk connector for Logic Apps, which was in preview before today, is now generally available (GA).
  • Microsoft IT publicly released the BizTalk Server Migration Tool, which they use internally for their own BizTalk migrations. This tool should help in migrating your environment towards BizTalk Server 2016.

Tord discussed the BizTalk Server 2016 Feature Pack 1 next.

With the new ALM features, it's possible to deploy BizTalk solutions to multiple environments from any repository supported by Visual Studio Team Services. Just like the BizTalk Deployment Framework (BTDF), it is also possible to have one central binding file with variables being replaced automatically to fit your specific target environment.
The Management API included in Feature Pack 1 enables you to do almost anything that is possible in the BizTalk Management Console. You can create your own tools based on the API. For example: end users can be provided with their own view on the BizTalk environment. The API even supports both XML and JSON.
Feature Pack 1 also includes a new PowerBI template, which comes with the added Analytics. The template should give you a good indication on the health of your environment(s). The PowerBI template can be changed or extended with everything you can see on the BizTalk Management Console, according to your specific needs.

Tord also discussed that the BizTalk team is working on several new things already, but he could not announce anything new at the moment. We are all very anxious to hear what will come in the next Feature Pack!

BizTalk Server Fast & Loud - Sandro Pereira

Fast and loud: a session about BizTalk performance optimizations. The key takeaway is that you need to tune your BizTalk environments, beyond a default installation, if you want to achieve really high throughput and low latency. Sandro pointed out that performance tuning must be done on three levels: SQL Server, BizTalk Server and hardware.

SQL Server is the heart of your BizTalk installation and the performance heavily depends on its health. The most critical aspect is that you need to ensure that the SQL Agent Jobs are up and running. The SQL agent jobs keep your MessageBox healthy and avoid that your DTA database gets flooded. Treat BizTalk databases as a black box: don't create your own maintenance plans, as they might jeopardize performance and you'll end up with unsupported databases. Besides that, he mentioned that you should avoid large databases and that it is always preferable to go with dedicated SQL resources for BizTalk.

Performance tuning on the BizTalk Server level is mostly done by tuning and configuring host instances. You should have a balanced strategy for assigning BizTalk artifacts to the appropriate hosts. A dedicated tracking host is a must-have in every BizTalk environment. Be aware that there are also configuration settings at host (instance) level, of which the polling interval setting provides the quickest performance win to reduce latency.

It's advised to take a look at all the surrounding hardware and software dependencies. Your network should provide high throughput, the virtualization layer must be optimized and disks should be separated and fast.

These recommendations are documented in the Codit best practices and it's also part of our BizTalk training offering.

BizTalk Health Check – What and How? - Saffieldin Ali

After all the technical and conceptual sessions, it is good to be reminded that existing BizTalk environments and solutions need to be monitored properly to assure a healthy BizTalk platform and to proactively maximize both reliability and performance. Identifying threats and issues lowers or even avoids downtime in case of a disaster.
Microsoft's Saffieldin Ali shared his own experience, including various quotes that he collected throughout the years.
When visiting and interviewing customers, Ali has a list of red flags which, without even examining the environments, indicate that BizTalk may not be as healthy as you would want it to be. Discovering that customers have their own procedures to do backups, a lack of documentation of a BizTalk environment or not having the latest updates installed can be a sign of bad configuration. Any of which can cause issues in the future, affect operations and disrupt business.
To detect these threats, Ali explained how you can use tools like BizTalk Health Monitor (BHM), Performance Analysis of Logs (PAL) and Microsoft Baseline Security Analyzer (MBSA). He also showed us that BHM has two modes: a monitoring mode, which should be used as a basic monitoring tool, and a reporting mode, which reports on the health of a BizTalk environment.

Incorporating the use of these tools in your maintenance plan is definitely a best practice every BizTalk user should know about!

The Hitchhiker's Guide to Hybrid Connectivity - Dan Toomey

In the first session after the afternoon break, Dan Toomey presented the different types of hybrid connectivity that allow us to easily set-up secure connections between systems. 

The network-based options are Azure Virtual Network (VNET), with integration for web and mobile apps, and VNET with API Management; the latter has all the advantages of APIM, but with an added layer of security. The non-network-based options are WCF Relay, Azure Relay Hybrid Connections and the On-Premises Data Gateway.

The concept of WCF Relay is based on a secured listener endpoint in the cloud, which is opened via an outbound connection from within a corporate network. Clients send messages via the listener endpoint, without the receiving party having to make any changes to the corporate firewall.

WCF Relay, which has the advantage of being the cheapest option, works on the application layer, whereas Hybrid Connections (HC) work on the transport layer. HC rely on port forwarding and work cross-platform. It is set-up in Azure (Service Bus) and connects to the HC Manager which is installed on premises.

The On-Premises Data Gateway acts as a bridge between Azure PaaS and on premises resources, and works with connectors for Logic Apps, Power Apps, Flow & Power BI.

In the end, Dan went through some scenarios to illustrate which relay is the better fit for specific situations. Being a big fan of Hybrid Connections, he often landed on the Hybrid Connection as the preferred solution.

Dan finally mentioned that he has a Pluralsight training that goes into this topic. Although a bit dated since it also discusses BizTalk Services, the other material is still relevant.

Unlocking Azure Hybrid Integration with BizTalk Server - Wagner Silveira

Why should we use BizTalk Server and Azure together? That is the question Wagner Silveira kicked off his talk with.

He then talked about the fact that, if you are working on a complex scenario, you may want to use BizTalk Server if there are multiple systems you wish to call on premises. If there are multiple cloud endpoints to interface with, you might want to base the solution on Azure components. The goal being to avoid creating a slingshot solution with multiple roundtrips between on premises and cloud.
Since most organizations still have on premises systems, they can use BizTalk Server to continually get value out of their investments, and to continue leveraging the experience which developers and support teams have acquired.

He went on to talk about the options available to connect to Azure. Wagner gave an overview of these options, in which he discussed Service Bus, Azure WCF Relay, App Services, API Management and Logic Apps.
When discussing Service Bus for example, he talked about how Service Bus allows full content based routing and asynchronous messaging. The latter would allow you to overcome unreliable connectivity, allow for throttling into BizTalk Server and multicasting scenarios from BizTalk to multiple subscribers.

Next he spoke about WCF-Relay. He talked about some of the characteristics of this option, stating that it supports both inbound and outbound communication based on dynamic relay, which is optimized for XML and supports ACS and SAS Security. WCF-Relay also has REST-support, which can be used to expose REST-services as well. You can then use WCF-Relay to publish for either inbound or outbound communication. Outbound communication is generally allowed by default, inbound communication will require network changes. Finally, you can also define outbound headers to support custom authentication.

A couple of typical scenarios for inbound WCF Relay that Wagner gave as examples were: real-time communication, exposing legacy or bespoke systems, and minimizing the surface area (no "Swiss cheese" firewall).
Examples of outbound scenarios are: leveraging public API’s and shifting compute to the cloud (for batch jobs for example), which allows us to minimize the BizTalk infrastructure footprint.

Next up was the Logic Apps adapter for BizTalk Server. Scenarios for using this solution would include extending workflows into Azure (think of connecting BizTalk Server to SalesForce for example). Another example would be exposing on premise data to Logic Apps.
For flows from Logic Apps into BizTalk on the other hand, it allows for securing internal systems, pre-validating messages and leveraging on premises connectors to expose legacy/bespoke systems.

The main takeaway for this session is that you should get to know the tools available, understand the sweet spots and know what to avoid. Not only from a technology and functional point of view, but from a pricing perspective as well.

There are many ways to integrate… Mix, match, and experiment to find the balance!

From Zero to App in 45 minutes (using PowerApps + Flow) - Martin Abbott

It is hard to give an overview of the last session by Martin Abbott about PowerApps, since Martin challenged the "demo gods" by making it a 40-minute demo with only 3 slides. A challenging, but interesting session where Martin created a PowerApps app, using some entities in the Common Data Service. He then connected PowerApps to Microsoft Flow and created a custom connector to be consumed as well, demonstrating the power of the tools. As one of the "founding fathers" of the Global Integration Bootcamp, he also announced the date for the next #GIB2018 event: it will take place on March 24th, 2018.


Thank you for reading our blog post, feel free to comment with your feedback. Keep coming back, since there will be more blog posts to summarize the event and to give you some recommendations on what to watch when the videos are out.


This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community

Posted on Tuesday, June 27, 2017 8:25 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 2 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Microsoft IT: journey with Azure Logic Apps - Padma/Divya/Mayank Sharma

In this first session, Mayank Sharma and Divya Swarnkar talked us through Microsoft’s experience implementing their own integrations internally. We got a glimpse of their approach and the architecture of their solution.

Microsoft uses BizTalk Server and several Azure services like API Management, Azure Functions and Logic Apps, to support business processes internally.
They run several of their business processes on Microsoft technologies (the "eat your own dog food"-principle). Most of those business processes now run in Logic App workflows and Divya took the audience through some examples of the workflows and how they are composed.

Microsoft has built a generic architecture using Logic Apps and workflows. It is a great example of a decoupled workflow, which makes it very dynamic and extensible. It intensively uses the Integration Account artifact metadata feature.

They also explained how they achieve testing in production. They can, for example, route a percentage of traffic via a new route, and once they are comfortable with it, they switch over the remaining traffic. She however mentioned that they will be re-evaluating how they will continue to do this in the future, now that the Logic Apps drafts feature was announced.

For monitoring, Microsoft Operations Management Suite (MOMS) is used to provide a central, unified and consistent way to monitor the solution.

Divya gave some insights on their DR (disaster recovery) approach to achieve business continuity. They are using Logic Apps to keep their Integration Accounts in sync between active and passive regions. BizTalk server is still in use, but acts mostly as the proxy to multiple internal Line-of-Business applications. 

All in all, a session with some great first-hand experience, based on Microsoft using their own technology.
Microsoft IT will publish a white paper in July on this topic. A few Channel9 videos are also coming up, where they will share details about their implementation and experiences.

Azure Logic Apps - Advanced integration patterns - Jeff Hollan/Derek Li

Jeff Hollan and Derek Li are back again with yet another Logic Apps session. This time they are talking about the architecture behind Logic Apps. As usual, Jeff is keeping everyone awake with his viral enthusiasm!

A very nice session that explained that the Logic Apps architecture consists of 3 parts:

The Logic Apps Designer is a TypeScript/React app. This contained app can run anywhere e.g.: Visual Studio, Azure portal, etc... The Logic Apps Designer uses OpenAPI (Swagger) to render inputs and outputs and generate the workflow definition. The workflow definition can be defined as being the JSON source code of the Logic App.

Secondly, there is the Logic App Runtime, which reads the workflow definition and breaks it down into a composition of tasks, each with its own dependencies. These tasks are distributed by the workflow orchestrator to workers which are spread out over any number of (virtual) machines. Depending on the worker - and its dependencies - tasks run in parallel to each other, e.g. a ForEach action which loops 100 times might be executed on 100 different machines.

This setup makes sure any of the tasks get executed AT LEAST ONCE. Using retry policies and controllers, the Logic App Runtime does not depend on any single (virtual) machine. This architecture allows a resilient runtime, but also means there are some limitations.

And last, but not least, we have the Logic Apps Connectors, connecting all the magic together.
These are hosted and run separately from the Logic App or its worker. They are supported by the teams responsible for the connector. e.g. the Service Bus team is responsible for the Service Bus connectors. Each of them has their own peculiarities and limits, all described in the Microsoft documentation.

Derek Li then presented an interesting demo showing how exceptions can be handled in a workflow using scopes and the "RunAfter" property, which can be used to execute different actions if an exception occurs. He also explained how retry policies can be configured to determine how many times an action should retry. Finally, Jeff gave an overview of the workflow expressions and wrapped up the session explaining how expressions are evaluated inside-out.

Enterprise Integration with Logic Apps - Jon Fancey

Jon Fancey, Principal Program Manager at Microsoft, took us on a swift ride through some advanced challenges when doing Enterprise Integration with Logic Apps.

He started the session with an overview and a demo where he showed how easy it is to create a receiver and sender Logic App to leverage the new batch functionality. He announced that, soon, the batching features will be expanded with Batch Flush, Time-based batch-release trigger options and EDI batching.

Next, he talked about Integration Accounts and all of its components and features. He elaborated on the advanced tracking and mapping capabilities.
Jon showed us a map that used XSLT parameters and inline C# code processing. He passed a transcoding table into the map as a parameter and used C# to do a lookup/replace of certain values, without having to call back to a database for each record/node. Jon announced that the mapping engine will be enriched with BOM handling and the ability to specify alternate output formats like HTML or text instead of XML only.

The most amazing part of the session was when he discussed the tracking and monitoring capabilities. It’s as simple as enabling Azure Diagnostics on your Integration Account to have all your tracking data pumped into OMS. It’s also possible to enable property tracking on your Logic Apps. The Operations Management Suite (OMS) centralizes all your tracking and monitoring data.

Jon also showed us an early preview of some amazing new features that are being worked on. OMS will provide a nice cross-Logic App monitoring experience. Some of the key features being:

  • Overview page with Logic App run summary
  • Drilldown into nested Logic-App runs
  • Multi-select for bulk download/resubmit of your Logic App flows.
  • New query engine that will use the powerful Application Insights query language!

We're extremely happy and excited about the efforts made by the product team. The new features shown and discussed here prove that Microsoft truly listens to the demands of their customers and partners.

Bringing Logic Apps into DevOps with Visual Studio - Jeff Hollan/Kevin Lam

The last Microsoft session of Integrate 2017 was the second time Kevin Lam and Jeff Hollan got to shine together. The goal of their session was to enlighten us about how to use some of the tooling in Visual Studio for Logic Apps.

Kevin took to the stage first, starting with a small breakdown of the Visual Studio tools that are available:

  • The Logic Apps Designer is completely integrated in a Visual Studio "Resource Group Project".
  • You can use Cloud Explorer to view deployed Logic Apps
  • Tools to manage your XML and B2B artifacts are also available

The Visual Studio tools generate a Resource Group deployment template, which contains all resources required for deployment. These templates are used, behind the scenes, by the Azure Resource Manager (ARM). Apart from your Logic Apps, this also includes auto-generated parameters, API connections (to, for example, Dropbox, Facebook, …) and Integration Accounts. This file can be checked in to Source Control, giving you the advantage of CI and CD if desired. The goal is to create the same experience in Visual Studio as in the Portal.

Jeff then started off by showing the Azure Resource Explorer. This is an ARM catalog of all the resources available in your Azure subscription.

Starting with ARM deployment templates might be a bit daunting at first, but by browsing through the Azure Quickstart Templates you can get the hang of it quickly. It's easy to create a single template and deploy that parameterized template to different environments. By using a few tricks, like Service Principals to automatically get OAuth tokens and the resourceId() function to get the resourceId of a freshly created resource, you are able to automate your deployment completely.

What's there & what's coming in BizTalk360 & ServiceBus360 - Saravana Kumar

On the tune of "Rocky", Saravana Kumar entered the stage to talk about the latest updates regarding BizTalk360 and ServiceBus360.

He started by explaining the standard features of BizTalk360 around operations, monitoring and analytics.
Since May 2011, 48 releases of BizTalk360 have been published, adding 4 or 5 new features per release.

The latest release includes:

  • BizTalk Server License Calculator
  • Folder Location Monitoring for FILE, FTP/FTPS, SFTP
  • Queue Monitoring for IBM MQ
  • Email Templates
  • Throttling Monitoring

Important to note: BizTalk360 supports more and more cloud integration products like Service Bus and Logic Apps. What they want to achieve is having a single user interface to configure monitoring and alerting.

Similar to BizTalk360, with ServiceBus360, Kovai wants to simplify the operations, monitoring and analytics for Azure Service Bus.

Give your Bots connectivity, with Azure Logic Apps - Kent Weare

Kent Weare kicked off by explaining that the evolution towards cloud computing does not only result in lower costs and elastic scaling, but it provides a lot of opportunities to allow your business to scale. Take advantage of the rich Azure ecosystem, by automating insights, applying Machine Learning or introducing bots. He used an example of an energy generation shop, where bots help to increase competitiveness and the productivity of the field technicians.

Our workforce is changing! Bring insights to users, not the other way around.

The BOT Framework is part of the Cognitive Services offering and can leverage its various vision, speech, language, knowledge and search features. Besides that, the Language Understanding Intelligence Service (LUIS) ensures your bot can smoothly interact with humans. LUIS is used to determine the intent of a user and to discover the entity on which the intent acts. This is done by creating a model, that is used by the chat bot. After several iterations of training the model, you can really give your applications a human "face".

Kent showed us two impressive demos with examples of leveraging the Bot Framework, in which both Microsoft Teams and Skype were used to interact with the end users. All backend requests went through Azure API Management, which invoked Logic Apps reaching out to multiple backend systems: SAP, ServiceNow, MOC, SQL and QuadrigaCX. Definitely check out this session, when the videos are published!

Empowering the business using Logic Apps - Steef-Jan Wiggers

Previous sessions about Logic Apps mainly focused on the technical part and possibilities of Logic Apps.
Steef-Jan Wiggers took a step back and looked at the potential of Logic Apps from a customer perspective.

Logic Apps is becoming a worthy player in the IPaaS hemisphere. Microsoft started an entirely new product in 2015, which has matured to its current state. Still being improved upon on a weekly basis, it seems it is not yet considered a rock-solid integration platform.
Customers, but even Gartner in their Magic Quadrant, often make the mistake of comparing Logic Apps with the functionality that we are used to, with products like BizTalk Server. They are however totally different products. Logic Apps is still evolving and should be considered within a broader perspective, as it is intended to be used together with other Azure services.
As Logic Apps continues to mature, it is quickly becoming "enterprise integration"-ready.

Steef-Jan ended his session by telling us that Logic Apps is a flexible and easy way to deliver value at the speed of the business and will definitely become a centralized product in the IPaaS market.

Logic App continuous integration and deployment with Visual Studio Team Services - Johan Hedberg

In the last session before the afternoon break, Johan Hedberg outlined the scenario for a controlled build and release process for Logic Apps. He described a real-life use case, with 3 typical personas you encounter in many organizations. He stressed the importance of having a streamlined approach and a shared team culture/vision. With the available ARM templates and Visual Studio Team Services (VSTS), you have all the necessary tools to set up continuous integration (CI) and continuous deployment (CD).

The session was very hands-on and to the point. A build pipeline was shown, that prepared the necessary artifacts for deployment. Afterwards, the release process kicked off, deploying a Logic App, an Azure Function and adding maps and schemas to a shared Integration Account. Environment specific parameter files ensured deployments that are tailored for each specific environment. VSTS can cover the complete ALM story for your Logic Apps, including multiple release triggers, environment variables and approval steps. This was a very useful talk and demo, because ALM and governance of your Azure application is key if you want to deliver professional solutions.

Integration of Things. Why integration is key in IoT solutions? - Sam Vanhoutte

The penultimate session of the day was held by our very own CTO Sam Vanhoutte. Sam focused his presentation on sharing some of the things Codit learned and experienced while working on IoT projects.

He started by stressing the importance of connectivity within IoT projects: "Connectivity is key" and "integration matters". Sam summarized the different connectivity types: direct connectivity, cloud gateways and field gateways and talked about each of their use cases and pitfalls.

Another important point of Sam's talk concerned the differences between IoT Proof of Concepts (PoC) and actual project implementations. During a PoC, it's all about showing functionality, but in a real project it is about focusing on robustness, security and connectivity.
Sam also addressed the different responsibilities and activities regarding gateways. He talked about the Nebulus IoT gateway and his ideas and experiences with it.

But IoT is not only about the cloud; Sam shared some insights on Azure IoT Edge as a Microsoft solution. Azure IoT Edge will be able to run within the device's own perimeter, but it is not available yet, not even in private preview. It can run on a variety of operating systems like Windows or Linux, even on devices as small as (or smaller than) a Raspberry Pi. The session was concluded with the quote "Integration people make great IoT Solutions".

Be sure to check out our two IoT white-papers:

Also be sure to check out our IoT webinar, accessible via the Codit YouTube channel.

IoT - Common patterns and practices - Mikael Hakansson

Mikael Hakansson started the presentation by introducing IoT Hub, Azure IoT Suite and what this represents in the integration world. The Azure IoT Hub enables bi-directional connectivity between devices and cloud, for millions of devices, allowing communication in a variety of patterns and with reliable command & control.

A typical IoT solution consists of a cold path, which is based on persistent data, and a hot path, where the data is analyzed on the fly. About a year ago, the device twin concept was introduced in IoT Hub. A twin consists of tags, a desired state and a reported state, so it really maintains device state information (metadata, configurations, and conditions).

Mikael Hakansson prepared some demos, where a thermometer and a thermostat were simulated. The demos began with a simulated thermometer with a changing temperature, while that information was being sent to Power BI, via IoT Hub and Stream Analytics. After that, an Azure Function was able to send back notifications to that device. To simulate the thermostat, a twin device with a desired state was used to control the temperature in the room. 


Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community

Posted on Monday, June 26, 2017 7:18 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 1 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.


Codit is back in London for Integrate 2017! This time with a record number of around 26 blue-shirted colleagues representing us. Obviously this makes sense now that Codit is bigger than ever, with offices in Belgium, France, The Netherlands, UK, Switzerland, Portugal and Malta. This blog post was put together by each and every one of our colleagues attending Integrate 2017.

Keynote: Microsoft Brings Intelligence to its Hybrid Integration Platform - Jim Harrer

What progress has Microsoft made in the Integration space (and their Hybrid Integration Platform) over the last year? How is Artificial Intelligence changing the way we think about enterprise application integration? Jim Harrer, Pro Integration Program Manager for Microsoft, kicks off with the keynote here at Integrate 2017. 

With a "year in review" slide, Jim reminded us how a lot of new Azure services are now in GA. Microsoft also confirmed, once again, that hybrid integration is the path forward for Microsoft. Integration nowadays is a "Better Together"-story. Hybrid integration bringing together BizTalk Server, Logic Apps, API Management, Service Bus, Azure Functions and … Artificial Intelligence.

Microsoft is moving at an incredible pace and isn't showing any signs of slowing down. Jim also spoke briefly about some of the great benefits which are now being seen since the Logic Apps, BizTalk, HIS and APIM fall under the same Pro-Integration team.  

Integration today is about making the impossible possible. Microsoft is working very hard to bring developers the necessary tooling and development experience to make it easier and faster to deliver complex integration solutions. It's about keeping up - AT THE SPEED OF BUSINESS - to increase value and to unlock "the impossible".

Jim made a very good point:

Your business has stopped asking if you can do this or that, because it's always been a story about delivering something which takes months or will cost millions of dollars. Nowadays, you have the tools to deliver solutions at a fraction of the cost and a fraction of the time. Integration specialists should now go and ask business what they can do for them to maximize added value to that business and make your business as efficient as possible.

Jim had fewer slides in favor of some short, teasing demos:

  • Jeff Hollan demonstrated how to use Logic Apps with the Cognitive Services Face API to build a kiosk application to on-board new members at a fictitious gym ("Contoso Fitness"), adding the ability to enter the gym without needing to bring a card or fob but simply by using face recognition when entering the building.
  • Jon Fancey showed off some great new batching features which are going to be released for Logic Apps soon.
  • Tord Glad Nordahl tackled the scenario where the gyms sell products like energy bars and protein powders and needs to track sales and stock at all the locations, to determine when new products need to be ordered. BizTalk was the technology behind the scenes, with some Azure Machine learning thrown in.

Watch out for new integration updates later in the week to be announced.

Innovating BizTalk Server to bring more capabilities to the Enterprise customer - Tord Glad Nordahl

In the second session of the day, Tord walked us through the BizTalk lifecycle and emphasized that the product team is still putting a lot of effort in improving the product and its capabilities. He talked about the recent release of the first feature pack for BizTalk Server 2016 and how it tackles some of the pain points gathered from customer feedback. FP1 is just a first step in enriching BizTalk, more and more functionalities will be added and further improved in the time to come.  

"BizTalk is NOT dead"

Tord emphasized how important it is to receive feedback from partners and end-users. He urged everyone to report all bugs and inconveniences using the UserVoice page, so we can all help shape the future of BizTalk Server.
The product team is working hard to release CU packs at a steady cadence, and plan on getting vNext of BizTalk ready before the end of 2018. 

No breaking news unfortunately (other than more features coming to the automated deployment that arrived in Feature Pack 1), but we're looking forward to Tord's in-depth session about FP1 coming Wednesday. If you can't wait to have a look at what FP1 can do, check out Toon's blog posts!

BTS2016 FP1: Scheduling Capabilities
BTS2016 FP1: Continuous Deployment
BTS2016 FP1: Management & Operational API
BTS2016 FP1: Continuous Deployment Walkthrough

Messaging yesterday, today, and tomorrow - Dan Rosanova

The third speaker of the day was Dan Rosanova, giving us an overview of the evolution of the Messaging landscape and its future.

He started with some staggering numbers: Azure Messaging is currently processing 23 TRILLION (23,000,000,000,000) messages per month. That is a giant increase from the 2.75 trillion per month reported last year (at Integrate).

In the past, picking a messaging system was comparable to choosing a partner to marry: you pick one you like and you're stuck with the whole package, peculiarities and all. It wasn't easy, and very expensive to change.

Messaging systems are now changing to more modular systems. From the giant pool of (Azure) offerings, you pick the services that best fit your entire solution. A single solution can now include multiple messaging products, depending on your (and their) specific use case.

"Event Hubs is the ideal service for telemetry ingestion from websites, apps and streams of big data."

Where Event Hubs used to be seen as an IoT service, this has now been repositioned as part of the Big Data stack. Although still on the edge with IoT.

The Microsoft messaging team has been very busy. Since last year they have delivered Hybrid Connections, a new Java client and an open-source .NET client, Premium Service Bus went GA in 19 regions, and a new portal was created. They're currently working on more encryption (encryption at rest and Bring Your Own Key) and security: Managed Service Identity and IP Filtering features are coming soon. So it looks to be a promising year!
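As an illustration of what the new open-source .NET client looks like in practice, here's a minimal sketch of sending a telemetry event with the Microsoft.Azure.EventHubs package; the connection string, event hub name and payload are placeholders, not something shown in the session.

    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.EventHubs;

    class TelemetrySender
    {
        // Placeholder connection string and event hub name.
        private const string ConnectionString = "<namespace-connection-string>";
        private const string EventHubName = "telemetry";

        public static async Task SendReadingAsync(string deviceId, double temperature)
        {
            // Build a client scoped to the event hub (entity path).
            var builder = new EventHubsConnectionStringBuilder(ConnectionString)
            {
                EntityPath = EventHubName
            };
            var client = EventHubClient.CreateFromConnectionString(builder.ToString());

            // Serialize a small telemetry payload and send it.
            var payload = $"{{\"deviceId\":\"{deviceId}\",\"temperature\":{temperature}}}";
            await client.SendAsync(new EventData(Encoding.UTF8.GetBytes(payload)));

            await client.CloseAsync();
        }
    }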

Dan introduced Geo-DR, a dual-region, active-passive disaster recovery capability coming this summer. The user decides when to trigger the fail-forward disaster recovery. However, this is meant purely as a disaster recovery solution and is NOT intended for high availability or other scenarios.

Finally, Dan remarked that messaging is under-appreciated and that his goal is transparent messaging: making messaging as simple as possible.

Azure Event Hubs: the world’s most widely used telemetry service - Shubha Vijayasarathy

"The Azure Event Hubs are based on three S's: Simple, stable and Scalable.

Shubha talked about Azure Event Hubs Capture, which replaces the existing Azure Event Hubs Archive service. With Event Hubs Capture there is no code or configuration overhead, and the separate data transfer reduces the service management hassle. It's possible to opt in or out at any time. Azure Event Hubs Capture will be GA on June 28th 2017; price changes will go into effect on August 1st 2017.

The next item was Event Hubs Auto-Inflate. With Auto-Inflate it's possible to auto-scale TUs (Throughput Units) to meet your usage needs. It also prevents throttling when data ingress and egress rates exceed the preconfigured TUs, which makes it ideal for handling burst workloads. Its downside is that it only scales up and doesn't scale back down again.
Dedicated Event Hubs are designed for massive-scale usage scenarios. They run on a completely dedicated platform, so there are no noisy neighbours sharing resources on Azure. Dedicated Event Hubs are sold in Capacity Units (CU) and support message sizes up to 1 MB.

Event Hubs Clusters will enable you to create your own clusters in less than 2 hours, with Azure Event Hubs Capture included. Message sizes go up to 1 MB and pricing starts at $5000. The idea is to start small and scale out as you go. Event Hubs Clusters is currently in private preview and will be available as public preview in all regions starting September 2017.

Coming soon

  • Geo-DR capability
  • Encryption at rest
  • Metrics in the new portal
  • Azure Data Lake Store (ADLS) support in public preview
  • Dedicated Event Hubs clusters in private preview

Azure Logic Apps - build cloud-scale integrations faster - Jeff Hollan / Kevin Lam

Jeff Hollan and Kevin Lam had a really entertaining session, which was perfect for avoiding the after-lunch dip!

Some great new connectors were announced and will be added in the near future, among them: Azure Table Storage, Oracle EBS, ServiceNow and SOAP. Besides the connectors that Microsoft will make available, the ability to create custom connectors, linked with custom API connections, sounds very promising! It's also great to hear that Logic Apps is now certified for Drummond AS2, ISO 27001, SOC (I, II, III), HIPAA and PCI DSS.

Quite a lot of interesting new features will be released soon:

  • Expression authoring and intellisense will improve the user experience, especially combined with detailed tracing of expression runtime executions.
  • Advanced scheduling capabilities will remove the need to reach out to Azure Scheduler.  
  • The development cycle will be enhanced by the ability to execute Logic Apps in draft, meaning they can be developed without being activated in production and promoted once they're ready.
  • The announced mock testing features will be a great addition to the framework.
  • Monitoring across Logic Apps through OMS, and the ability to resubmit from a failed action, will definitely make our cloud integrations a lot easier to manage!
  • And last, but not least: out-of-the-box batching functionality will be released next week!

Azure Functions - Serverless compute in the cloud - Jeff Hollan

Whereas Logic Apps executes workflows based on events, Azure Functions executes code on event triggers. They really complement each other. It's important to understand that both are serverless technologies, which comes with the following advantages: reduced DevOps, more focus on business logic and faster time to market.

The Azure Functions product team has invested a lot in improving the developer experience. It is now possible to create Azure Functions locally in Visual Studio 2017, which gives developers IntelliSense, the ability to test locally and the ability to write unit tests.
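To give a rough idea of what such a locally developed function looks like, here's a minimal sketch using the class-library programming model from the Visual Studio 2017 tooling; the function name, queue name and logic are hypothetical, not taken from the session.

    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class GymCheckInFunction
    {
        // Hypothetical queue-triggered function: runs whenever a message
        // lands on the "gym-checkins" storage queue.
        [FunctionName("ProcessCheckIn")]
        public static void Run(
            [QueueTrigger("gym-checkins")] string checkInMessage,
            TraceWriter log)
        {
            // Only the business logic lives here; triggering and scaling
            // are handled by the Functions runtime.
            log.Info($"Processing check-in: {checkInMessage}");
        }
    }

Because the function is a plain static method, its body can also be covered by ordinary unit tests.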

There's out-of-the-box Application Insights monitoring for Azure Functions, which provides real insight into how your Azure Functions are performing; very powerful views on that data are available by writing fairly simple queries. Jeff finished his session by emphasizing that Azure Functions can also run on IoT Edge. As data has "gravity", local processing of data is desirable in many scenarios, to reduce network dependencies, cost and bandwidth.

Integrating the last mile with Microsoft Flow - Derek Li

In the first session after the last break, Derek Li took us for a ride through Microsoft Flow, the solution to the "last mile" of integration challenges. Microsoft Flow helps non-developers work smarter by automating workflows across apps and services to provide value without code.

Derek explained why you should care about Flow, even if you're a developer and already familiar with Logic Apps: 

  • You can advise business users how they can solve some of their problems themselves using Flow, while you concentrate on more complex integrations.
  • You'll have more engaged customers and engaged customers are happy customers.
  • Integrations originally created in Flow can graduate to Logic Apps when they become popular or mission-critical, or when they need to scale.
  • With the ability to create custom connectors you can connect to your own services.

Some key differences between Flow and Logic Apps:

Flow                                 | Logic Apps
Citizen-developers                   | IT Professionals
Web & mobile interface               | Visual Studio or web interface
Access with Microsoft/O365 account   | Access with Azure Subscription
Ad-hoc                               | Source control
Deep SharePoint integration          | -
Approval portal                      | -

In short: use Flow to automate personal tasks and get notifications; use Logic Apps if someone must be woken up in the middle of the night to fix a broken (mission-critical) workflow.

To extend the reach of your custom connectors beyond your own tenant, you can publish your custom connector by performing the following steps:

  1. Develop the custom connector within your Flow tenant, using Swagger/Postman
  2. Test it using the custom connector test wizard
  3. Submit your connector to Microsoft for review and certification, so support can be provided for the custom connector
  4. Publish to Flow, PowerApps and Logic Apps

State of Azure API Management - Vladimir Vinogradsky

This session started with Vladimir pointing out the importance of APIs, as APIs are everywhere: IoT, Machine Learning, Software as a Service, cloud computing, blockchain... The need to tie all of these things together is what makes API Management a critical component in Azure: it abstracts complexity and thereby forms a base for digital transformation.

Discover, mediate and publish are the keywords in API Management. For instance: existing backend services can be discovered using the API Management development portal.

There is no strict versioning strategy imposed by API Management, as this depends on the specific organization. The reason is that there is a lot of discussion about API versioning, with questions such as:

  • Is versioning a requirement?
  • When is a new version required?
  • What defines a breaking change?
  • Where to place versioning information? And in what format?

Microsoft chose a fully featured approach to versioning that gives the user full control over whether or not to implement it (a client-side sketch follows the list below). The approach is based on the following principles:

  • Versioning is opt-in.
  • Choose the API versioning scheme that is appropriate for you.
  • Seamlessly create new API versions without impacting legacy versions.
  • Make developers aware of revisions and versions.
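To make the "where to place versioning information" question more concrete, here is a minimal client-side sketch of the three common placements (path segment, query string and header); the gateway URL, paths and version values are hypothetical.

    using System.Net.Http;
    using System.Threading.Tasks;

    class VersionedApiClient
    {
        private static readonly HttpClient Client = new HttpClient();

        public static async Task CallEachSchemeAsync()
        {
            // 1. Version in the URL path.
            await Client.GetAsync("https://contoso.azure-api.net/orders/v2/invoices");

            // 2. Version as a query string parameter.
            await Client.GetAsync("https://contoso.azure-api.net/orders/invoices?api-version=2017-06-01");

            // 3. Version in a request header.
            var request = new HttpRequestMessage(HttpMethod.Get,
                "https://contoso.azure-api.net/orders/invoices");
            request.Headers.Add("Api-Version", "2017-06-01");
            await Client.SendAsync(request);
        }
    }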

The session concluded with an overview of upcoming features for API Management.

Integrate heritage IBM systems using new cloud and on-premises connectors - Paul Larsen / Steve Melan

The last session of the day was all about integrating heritage IBM systems with Microsoft Azure technologies. It's interesting to know that lots of organizations (small, medium and large) still have some form of IBM system running.

Microsoft developed a brand-new MQSeries client: extremely lightweight, with no more IBM binaries to install and outstanding performance improvements (up to 4 times faster). Thanks to this, the existing integration capabilities with old-school mainframes can now run in the Azure cloud, e.g. as Logic Apps connectors. An impressive demo showcased cloud integration with legacy mainframe systems.

The story becomes even more compelling with the improvements that are on the roadmap!


Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blog post was prepared by:

Pieter Vandenheede (BE)
Jonathan Gurevich (NL)
Toon Vanhoutte (BE)
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)
Ricardo Marques (PT)
Paulo Mendonça (PT)

Categories: Community