
Codit Blog

Posted on Tuesday, June 27, 2017 8:25 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 2 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Microsoft IT: journey with Azure Logic Apps - Padma/Divya/Mayank Sharma

In this first session, Mayank Sharma and Divya Swarnkar talked us through Microsoft’s experience implementing their own integrations internally. We got a glimpse of their approach and the architecture of their solution.

Microsoft uses BizTalk Server and several Azure services like API Management, Azure Functions and Logic Apps, to support business processes internally.
They run several of their business processes on Microsoft technologies (the "eat your own dog food" principle). Most of those business processes now run in Logic Apps workflows, and Divya took the audience through some examples of the workflows and how they are composed.

Microsoft has built a generic architecture using Logic Apps and workflows. It is a great example of a decoupled workflow, which makes it very dynamic and extensible. It intensively uses the Integration Account artifact metadata feature.

They also explained how they achieve testing in production. They can, for example, route a percentage of traffic via a new route and, once they are comfortable with it, switch over the remaining traffic. She did mention, however, that they will re-evaluate this approach in the future, now that the Logic Apps drafts feature has been announced.

For monitoring, Microsoft Operations Management Suite (OMS) is used to provide a central, unified and consistent way to monitor the solution.

Divya gave some insights on their DR (disaster recovery) approach to achieve business continuity. They are using Logic Apps to keep their Integration Accounts in sync between active and passive regions. BizTalk Server is still in use, but acts mostly as a proxy to multiple internal line-of-business applications.

All in all, a session with some great first-hand experience, based on Microsoft using their own technology.
Microsoft IT will publish a white paper in July on this topic. A few Channel9 videos are also coming up, where they will share details about their implementation and experiences.

Azure Logic Apps - Advanced integration patterns - Jeff Hollan/Derek Li

Jeff Hollan and Derek Li are back again with yet another Logic Apps session. This time they are talking about the architecture behind Logic Apps. As usual, Jeff is keeping everyone awake with his infectious enthusiasm!

A very nice session that explained how the Logic Apps architecture consists of three parts:

First, there is the Logic Apps Designer, a TypeScript/React app. This self-contained app can run anywhere, e.g. in Visual Studio, in the Azure portal, etc. The Logic Apps Designer uses OpenAPI (Swagger) to render inputs and outputs and to generate the workflow definition. The workflow definition is essentially the JSON source code of the Logic App.

Secondly, there is the Logic App Runtime, which reads the workflow definition and breaks it down into a composition of tasks, each with its own dependencies. These tasks are distributed by the workflow orchestrator to workers, which are spread out over any number of (virtual) machines. Depending on the worker - and its dependencies - tasks can run in parallel to each other, e.g. a ForEach action which loops 100 times might be executed on 100 different machines.

This setup makes sure every task gets executed AT LEAST ONCE. Using retry policies and controllers, the Logic App Runtime does not depend on any single (virtual) machine. This architecture allows for a resilient runtime, but it also means there are some limitations.

And last, but not least, we have the Logic Apps Connectors, connecting all the magic together.
These are hosted and run separately from the Logic App and its workers. They are supported by the teams responsible for the connector, e.g. the Service Bus team is responsible for the Service Bus connectors. Each connector has its own peculiarities and limits, all described in the Microsoft documentation.

Derek Li then presented an interesting demo showing how exceptions can be handled in a workflow using scopes and the "RunAfter" property, which can be used to execute different actions if an exception occurs. He also explained how retry policies can be configured to determine how many times an action should retry. Finally, Jeff gave an overview of the workflow expressions and wrapped up the session explaining how expressions are evaluated inside-out.

Enterprise Integration with Logic Apps - Jon Fancey

Jon Fancey, Principal Program Manager at Microsoft, took us on a swift ride through some advanced challenges when doing Enterprise Integration with Logic Apps.

He started the session with an overview and a demo where he showed how easy it is to create a receiver and sender Logic App to leverage the new batch functionality. He announced that, soon, the batching features will be expanded with Batch Flush, Time-based batch-release trigger options and EDI batching.

Next, he talked about Integration Accounts and all of its components and features. He elaborated on the advanced tracking and mapping capabilities.
Jon showed us a map that used XSLT parameters and inline C# code processing. He passed a transcoding table into the map as a parameter and used C# to do a lookup/replace of certain values, without having to call back to a database for each record/node. Jon announced that the mapping engine will be enriched with BOM handling and the ability to specify alternate output formats like HTML or text, instead of XML only.

The most amazing part of the session was when he discussed the tracking and monitoring capabilities. It’s as simple as enabling Azure Diagnostics on your Integration Account to have all your tracking data pumped into OMS. It’s also possible to enable property tracking on your Logic Apps. The Operations Management Suite (OMS) centralizes all your tracking and monitoring data.

Jon also showed us an early preview of some amazing new features that are being worked on. OMS will provide a nice cross-Logic App monitoring experience. Some of the key features being:

  • Overview page with Logic App run summary
  • Drilldown into nested Logic-App runs
  • Multi-select for bulk download/resubmit of your Logic App flows.
  • New query engine that will use the powerful Application Insights query language!

We're extremely happy and excited about the efforts made by the product team. The new features shown and discussed here prove that Microsoft truly listens to the demands of its customers and partners.

Bringing Logic Apps into DevOps with Visual Studio - Jeff Hollan/Kevin Lam

The last Microsoft session of Integrate 2017 was the second time Kevin Lam and Jeff Hollan got to shine together. The goal of their session was to enlighten us about how to use some of the tooling in Visual Studio for Logic Apps.

Kevin took to the stage first, starting with a small breakdown of the Visual Studio tools that are available:

  • The Logic Apps Designer is completely integrated in a Visual Studio "Resource Group Project".
  • You can use Cloud Explorer to view deployed Logic Apps
  • Tools to manage your XML and B2B artifacts are also available

The Visual Studio tools generate a Resource Group deployment template, which contains all resources required for deployment. These templates are used, behind the scenes, by the Azure Resource Manager (ARM). Apart from your Logic Apps, this also includes auto-generated parameters, API connections (to, for example, Dropbox, Facebook, ...) and Integration Accounts. This file can be checked into source control, giving you the advantage of CI and CD if desired. The goal is to create the same experience in Visual Studio as in the Portal.

Jeff then started off by showing the Azure Resource Explorer. This is an ARM catalog of all the resources available in your Azure subscription.

Starting with ARM deployment templates might be a bit daunting at first, but by browsing through the Azure Quickstart Templates you can get the hang of it quickly. It's easy to create a single parameterized template and deploy it to different environments. By using a few tricks, like Service Principals to automatically get OAuth tokens and the resourceId() function to get the resource ID of a freshly created resource, you are able to automate your deployment completely.

What's there & what's coming in BizTalk360 & ServiceBus360 - Saravana Kumar

To the tune of "Rocky", Saravana Kumar entered the stage to talk about the latest updates regarding BizTalk360 and ServiceBus360.

He started by explaining the standard features of BizTalk360 around operations, monitoring and analytics.
Since May 2011, 48 releases of BizTalk360 have been published, adding 4 or 5 new features per release.

The latest release includes:

  • BizTalk Server License Calculator
  • Folder Location Monitoring for FILE, FTP/FTPS, SFTP
  • Queue Monitoring for IBM MQ
  • Email Templates
  • Throttling Monitoring

Important to note: BizTalk360 supports more and more cloud integration products like Service Bus and Logic Apps. What they want to achieve is having a single user interface to configure monitoring and alerting.

Similar to BizTalk360, with ServiceBus360, Kovai wants to simplify the operations, monitoring and analytics for Azure Service Bus.

Give your Bots connectivity, with Azure Logic Apps - Kent Weare

Kent Weare kicked off by explaining that the evolution towards cloud computing not only results in lower costs and elastic scaling, but also provides a lot of opportunities to help your business scale. Take advantage of the rich Azure ecosystem by automating insights, applying Machine Learning or introducing bots. He used an example of an energy generation shop, where bots help to increase competitiveness and the productivity of the field technicians.

Our workforce is changing! Bring insights to users, not the other way around.

The Bot Framework is part of the Cognitive Services offering and can leverage its various vision, speech, language, knowledge and search features. Besides that, the Language Understanding Intelligent Service (LUIS) ensures your bot can smoothly interact with humans. LUIS is used to determine the intent of a user and to discover the entity on which the intent acts. This is done by creating a model that is used by the chatbot. After several iterations of training the model, you can really give your applications a human "face".

Kent showed us two impressive demos with examples of leveraging the Bot Framework, in which both Microsoft Teams and Skype were used to interact with the end users. All backend requests went through Azure API Management, which invoked Logic Apps reaching out to multiple backend systems: SAP, ServiceNow, MOC, SQL and QuadrigaCX. Definitely check out this session, when the videos are published!

Empowering the business using Logic Apps - Steef-Jan Wiggers

Previous sessions about Logic Apps mainly focused on the technical part and possibilities of Logic Apps.
Steef-Jan Wiggers took a step back and looked at the potential of Logic Apps from a customer perspective.

Logic Apps is becoming a worthy player in the iPaaS sphere. Microsoft started an entirely new product in 2015, which has matured to its current state. While it is still being improved upon on a weekly basis, it seems it is not yet considered a rock-solid integration platform.
Customers, and even Gartner in their Magic Quadrant, often make the mistake of comparing Logic Apps with the functionality we are used to from products like BizTalk Server. They are, however, totally different products. Logic Apps is still evolving and should be considered within a broader perspective, as it is intended to be used together with other Azure services.
As Logic Apps continues to mature, it is quickly becoming "enterprise integration"-ready.

Steef-Jan ended his session by telling us that Logic Apps is a flexible and easy way to deliver value at the speed of the business and will definitely become a central product in the iPaaS market.

Logic App continuous integration and deployment with Visual Studio Team Services - Johan Hedberg

In the last session before the afternoon break, Johan Hedberg outlined the scenario for a controlled build and release process for Logic Apps. He described a real-life use case, with 3 typical personas you encounter in many organizations. He stressed the importance of having a streamlined approach and a shared team culture/vision. With the available ARM templates and Visual Studio Team Services (VSTS), you have all the necessary tools to set up continuous integration (CI) and continuous deployment (CD).

The session was very hands-on and to the point. A build pipeline was shown that prepared the necessary artifacts for deployment. Afterwards, the release process kicked off, deploying a Logic App and an Azure Function, and adding maps and schemas to a shared Integration Account. Environment-specific parameter files ensured deployments tailored to each environment. VSTS can cover the complete ALM story for your Logic Apps, including multiple release triggers, environment variables and approval steps. This was a very useful talk and demo, because ALM and governance of your Azure applications are key if you want to deliver professional solutions.

Integration of Things. Why integration is key in IoT solutions? - Sam Vanhoutte

The penultimate session of the day was held by our very own CTO, Sam Vanhoutte. Sam focused his presentation on sharing some of the things Codit learned and experienced while working on IoT projects.

He started by stressing the importance of connectivity within IoT projects: "Connectivity is key" and "integration matters". Sam summarized the different connectivity types: direct connectivity, cloud gateways and field gateways and talked about each of their use cases and pitfalls.

Another important point of Sam's talk concerned the differences between an IoT Proof of Concept (PoC) and an actual project implementation. During a PoC, it's all about showing functionality, but in an actual implementation the focus is on robustness, security and connectivity.
Sam also addressed the different responsibilities and activities regarding gateways. He talked about the Nebulus IoT gateway and his ideas and experiences with it.

But IoT is not only about the cloud. Sam shared some insights on Azure IoT Edge as a Microsoft solution. Azure IoT Edge will be able to run within the device's own perimeter, but it is not available yet, not even in private preview. It can run on a variety of operating systems, like Windows or Linux, even on devices as small as, or smaller than, a Raspberry Pi. The session was concluded with the quote "Integration people make great IoT solutions".

Be sure to check out our two IoT white papers.

Also be sure to check out our IoT webinar, accessible via the Codit YouTube channel.

IoT - Common patterns and practices - Mikael Hakansson

Mikael Hakansson started the presentation by introducing IoT Hub, Azure IoT Suite and what this represents in the integration world. The Azure IoT Hub enables bi-directional connectivity between devices and cloud, for millions of devices, allowing communication in a variety of patterns and with reliable command & control.

A typical IoT solution consists of a cold path, which is based on persistent data, and a hot path, where the data is analyzed on the fly. About a year ago, the device twin concept was introduced in IoT Hub. A twin consists of tags, a desired state and a reported state, so it really maintains device state information (metadata, configurations and conditions).

Mikael Hakansson prepared some demos, in which a thermometer and a thermostat were simulated. The demos began with a simulated thermometer with a changing temperature, while that information was being sent to Power BI via IoT Hub and Stream Analytics. After that, an Azure Function was able to send notifications back to that device. To simulate the thermostat, a device twin with a desired state was used to control the temperature in the room.

 

Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community

Posted on Monday, June 26, 2017 7:18 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 1 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Introduction

Codit is back in London for Integrate 2017! This time with a record number of around 26 blue-shirted colleagues representing us. Obviously this makes sense now that Codit is bigger than ever, with offices in Belgium, France, The Netherlands, the UK, Switzerland, Portugal and Malta. This blog post was put together by each and every one of our colleagues attending Integrate 2017.

Keynote: Microsoft Brings Intelligence to its Hybrid Integration Platform - Jim Harrer

What progress has Microsoft made in the integration space (and their Hybrid Integration Platform) over the last year? How is Artificial Intelligence changing the way we think about enterprise application integration? Jim Harrer, Pro Integration Program Manager at Microsoft, kicked off with the keynote here at Integrate 2017.

With a "year in review" slide, Jim reminded us how a lot of new Azure services are now in GA. Microsoft also confirmed, once again, that hybrid integration is the path forward for Microsoft. Integration nowadays is a "Better Together"-story. Hybrid integration bringing together BizTalk Server, Logic Apps, API Management, Service Bus, Azure Functions and … Artificial Intelligence.

Microsoft is moving at an incredible pace and isn't showing any signs of slowing down. Jim also spoke briefly about some of the great benefits which are now being seen since Logic Apps, BizTalk, HIS and APIM fall under the same Pro Integration team.

Integration today is about making the impossible possible. Microsoft is working very hard to bring developers the necessary tooling and development experience to make it easier and faster to deliver complex integration solutions. It's about keeping up - AT THE SPEED OF BUSINESS - to increase value and to unlock "the impossible".

Jim made a very good point:

Your business has stopped asking if you can do this or that, because it has always been a story of delivering something that takes months or costs millions of dollars. Nowadays, you have the tools to deliver solutions at a fraction of the cost and a fraction of the time. Integration specialists should now go and ask the business what they can do for it, to maximize added value and make the business as efficient as possible.

Jim had fewer slides in favor of some short, teasing demos:

  • Jeff Hollan demonstrated how to use Logic Apps with the Cognitive Services Face API to build a kiosk application to on-board new members at a fictitious gym ("Contoso Fitness"), adding the ability to enter the gym without needing to bring a card or fob but simply by using face recognition when entering the building.
  • Jon Fancey showed off some great new batching features which are going to be released for Logic Apps soon.
  • Tord Glad Nordahl tackled the scenario where the gyms sell products like energy bars and protein powders and need to track sales and stock at all locations, to determine when new products need to be ordered. BizTalk was the technology behind the scenes, with some Azure Machine Learning thrown in.

Watch out for new integration updates to be announced later in the week.

Innovating BizTalk Server to bring more capabilities to the Enterprise customer - Tord Glad Nordahl

In the second session of the day, Tord walked us through the BizTalk lifecycle and emphasized that the product team is still putting a lot of effort into improving the product and its capabilities. He talked about the recent release of the first Feature Pack for BizTalk Server 2016 and how it tackles some of the pain points gathered from customer feedback. FP1 is just a first step in enriching BizTalk; more functionality will be added and further improved in the time to come.

"BizTalk is NOT dead"

Tord emphasized how important it is to receive feedback from partners and end-users. He urged everyone to report all bugs and inconveniences via the UserVoice page, so we can all help shape the future of BizTalk Server.
The product team is working hard to release CU packs at a steady cadence, and plans on getting vNext of BizTalk ready before the end of 2018.

No breaking news unfortunately (other than more features coming to the new automated deployment that came with Feature Pack 1), but we're looking forward to Tord's in-depth session about FP1 coming Wednesday. If you can't wait to have a look at what FP1 can do, check out Toon's blog posts!

BTS2016 FP1: Scheduling Capabilities
BTS2016 FP1: Continuous Deployment
BTS2016 FP1: Management & Operational API
BTS2016 FP1: Continuous Deployment Walkthrough

Messaging yesterday, today, and tomorrow - Dan Rosanova

The third speaker of the day was Dan Rosanova, giving us an overview of the evolution of the Messaging landscape and its future.

He started with some staggering numbers: currently Azure Messaging is processing 23 trillion (23,000,000,000,000) messages per month, which is a giant increase from the 2.75 trillion per month at last year's Integrate.

In the past, picking a messaging system was comparable to choosing a partner to marry: you pick one you like and you're stuck with the whole package, peculiarities and all. It wasn't easy, and it was very expensive, to change.

Messaging systems are now changing to more modular systems. From the giant pool of (Azure) offerings, you pick the services that best fit your entire solution. A single solution can now include multiple messaging products, depending on your (and their) specific use case.

"Event Hubs is the ideal service for telemetry ingestion from websites, apps and streams of big data."

Where Event Hubs used to be seen as an IoT service, it has now been repositioned as part of the Big Data stack, although it still sits on the edge of IoT.

The Microsoft messaging team has been very busy. Since last year they have implemented the new Hybrid Connections and new Java and open-source .NET clients, Premium Service Bus went GA in 19 regions, and a new portal was created. They're currently working on more encryption (encryption at rest and Bring Your Own Key) and security: Managed Service Identity and IP Filtering features will be coming soon. So it looks to be a promising year!

Dan introduced Geo-DR, which is a dual-region active-passive disaster recovery tool coming this summer. The user decides when to trigger this fail-forward disaster recovery. However this is only meant as a disaster recovery solution, and is NOT intended for high-availability or other scenarios. 

Finally, Dan added a remark that messaging is under-appreciated and his goal is reaching transparent messaging by making messaging as simple as possible. 

Azure Event Hubs: the world’s most widely used telemetry service - Shubha Vijayasarathy

"The Azure Event Hubs are based on three S's: Simple, stable and Scalable.

Shubha talked about Azure Event Hubs Capture, which replaces the existing Azure Event Hubs Archive service. With Event Hubs Capture there is no overhead in code or configuration. The separate data transfer will reduce the service management hassle. It's possible to opt in or out at any time. Azure Event Hubs Capture will be GA on June 28th 2017; price changes will go into effect on August 1st 2017.

The next item was Event Hubs Auto-Inflate. With Auto-Inflate it's possible to auto-scale throughput units (TUs) to meet your usage needs. It also prevents throttling (when data ingress and egress rates exceed the preconfigured TUs). This is ideal for handling burst workloads. Its downside is that it only scales up and doesn't scale back down again.
 
Dedicated Event Hubs are designed for massive-scale usage scenarios. They run on a completely dedicated platform, so there are no noisy neighbours sharing resources on Azure. Dedicated Event Hubs are sold in Capacity Units (CUs). Message sizes go up to 1 MB.

Event Hubs Clusters will enable you to create your own clusters in less than 2 hours in which Azure Event Hubs Capture is also included. Message sizes go up to 1MB and pricing starts at $5000. The idea is to start small and scale out as you go. Event Hubs Clusters is currently in private preview and will be available as public preview starting September 2017 in all regions.

Coming soon

- Geo-DR capability
- Encryption at rest
- Metrics in the new portal
- ADLS for public preview
- Dedicated EH clusters for private preview

Azure Logic Apps - build cloud-scale integrations faster - Jeff Hollan / Kevin Lam

Jeff Hollan and Kevin Lam had a really entertaining session which was perfect to avoid an after-lunch-dip! 

Some great new connectors were announced, which will be added in the near future. Among them: Azure storage tables, Oracle EBS, ServiceNow and SOAP. Besides the connectors that Microsoft will make available, the ability to create custom connectors, linked with custom API connections, sounds very promising! It's great to hear that Logic Apps is now certified for Drummond AS2, ISO 27001, SOC (I, II, III), HIPAA and PCI DSS.

Quite a lot of interesting new features will be released soon:

  • Expression authoring and intellisense will improve the user experience, especially combined with detailed tracing of expression runtime executions.
  • Advanced scheduling capabilities will remove the need to reach out to Azure Scheduler.  
  • The development cycle will be enhanced by executing Logic Apps in draft, which means your Logic Apps can be developed without being activated in production, with the ability to promote them afterwards.
  • The announced mock testing features will be a great addition to the framework.
  • Monitoring across Logic Apps through OMS and resubmitting from a failed action, will definitely make our cloud integration a lot easier to manage!
  • And last, but not least: out-of-the-box batching functionality will be released next week!

Azure Functions - Serverless compute in the cloud - Jeff Hollan

Whereas Logic Apps executes workflows based on events, Azure Functions executes code on event triggers. They really complement each other. It's important to understand that both are serverless technologies, which comes with the following advantages: reduced DevOps, more focus on business logic and faster time to market.

The Azure Functions product team has made a lot of investments to improve the developer experience. It is now possible to create Azure Functions locally in Visual Studio 2017, which gives developers the ability to use intellisense to test locally and to write unit tests.

There's out-of-the-box Application Insights monitoring for Azure Functions. This provides real details on how your Azure Functions are performing. Very powerful insights into that data are available by writing fairly simple queries. Jeff finished his session by emphasizing that Azure Functions can also run on IoT Edge. As data has "gravity", some local processing of data is desired in many scenarios, to reduce network dependencies, cost and bandwidth.

Integrating the last mile with Microsoft Flow - Derek Li

In the first session after the last break, Derek Li took us for a ride through Microsoft Flow, the solution to the "last mile" of integration challenges. Microsoft Flow helps non-developers work smarter by automating workflows across apps and services to provide value without code.

Derek explained why you should care about Flow, even if you're a developer and already familiar with Logic Apps: 

  • You can advise business users how they can solve some of their problems themselves using Flow, while you concentrate on more complex integrations.
  • You'll have more engaged customers and engaged customers are happy customers.
  • Integrations originally created in Flow can graduate to Logic Apps when they become popular, mission-critical, or need to scale.
  • With the ability to create custom connectors you can connect to your own services.

Some key differences between Flow and Logic Apps:

Flow                                 | Logic Apps
Citizen developers                   | IT professionals
Web & mobile interface               | Visual Studio or web interface
Access with Microsoft/O365 account   | Access with Azure subscription
Ad-hoc                               | Source control
Deep SharePoint integration          | -
Approval portal                      | -

In short: Use Flow to automate personal tasks and get notifications, use Logic Apps if someone must be woken up in the middle of the night to fix a broken (mission-critical) workflow.

To extend the reach of your custom connectors beyond your own tenant subscription, you can publish your custom connector by performing the following steps:

  1. Develop the custom connector within your Flow tenant, using Swagger/Postman
  2. Test it using the custom connector test wizard
  3. Submit your connector to Microsoft for review and certification, to provide support for the custom connector
  4. Publish to Flow, Power Apps, and Logic Apps

State of Azure API Management - Vladimir Vinogradsky

This session started with Vladimir pointing out the importance of APIs, as APIs are everywhere: IoT, Machine Learning, Software as a Service, cloud computing, blockchain... The need to tie all of these things together is what makes API Management a critical component in Azure: abstracting complexity and thereby forming a base for digital transformation.

Discover, mediate and publish are the keywords in API Management. For instance: existing backend services can be discovered using the API management development portal.

There is no strict versioning strategy in API Management, as this depends on the specific organization. The reason for this is that there is a lot of discussion on the versioning of APIs, with questions such as:

  • Is versioning a requirement?
  • When is a new version required?
  • What defines a breaking change?
  • Where to place versioning information? And in what format?

Microsoft chose an approach to versioning that is fully featured and allows the user full control over whether or not to implement it. The approach is based on the following principles:

  • Versioning is opt-in.
  • Choose the API versioning scheme that is appropriate for you.
  • Seamlessly create new API versions without impacting legacy versions.
  • Make developers aware of revisions and versions.

The session concluded with an overview of upcoming features for API Management.

Integrate heritage IBM systems using new cloud and on-premises connectors - Paul Larsen / Steve Melan

The last session of the day was all about integrating heritage IBM systems with Microsoft Azure technologies. It's interesting to know that lots of organizations (small, medium and large) still have some form of IBM system running.

Microsoft developed a brand new MQSeries client: extremely lightweight, no more IBM binaries to be installed and outstanding performance improvements (up to 4 times faster). Thanks to this, the existing integration capabilities with old-school mainframes can now run in the Azure cloud, e.g. as Logic Apps connectors. An impressive demo was shown, showcasing cloud integration with legacy mainframe systems.

The story will become even more compelling with the improvements that are on their way!

 

Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Jonathan Gurevich (NL)
Toon Vanhoutte (BE)
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)
Ricardo Marques (PT)
Paulo Mendonça (PT)

Categories: Community

Posted on Friday, June 23, 2017 10:39 AM

Stijn Moreels by Stijn Moreels

As you may know from previous blog posts, I use FAKE as my build script tool to automate compiling, testing, inspection, documentation and many other things. FAKE already has a lot of functionality in place, but it didn't have any support for StyleCop. Until now.

StyleCop and F#

Why StyleCop?

StyleCop is a tool that analyzes your source files against the default Microsoft coding conventions, which describe how code should look.

It's not that I agree with every StyleCop rule, but there are more rules that I do agree with than not.

FAKE already had support for FxCop, but since ReSharper already has IntelliSense support for StyleCop, I found it reasonable to have my automated build (local and remote) depend on an analysis tool that I can use both in my development practice and in my automated build.

StyleCop Command Line

FAKE is written in F#, which means the whole .NET framework is available to us. Besides working with FAKE and some personal fun projects, I didn't have much experience with F#, so it was a fun challenge.

StyleCop already has some command line tools that you can use to analyze your source files, so in theory my F# StyleCop implementation could just use some of those tools and pass the right arguments to them.

F# Implementation

F# is a functional language with imperative features (such as mutable values, foreach…). My goal was to write a purely functional implementation of the command line tools that analyzed the source files.

I'm not going to run through every bit of the code; you can do that yourself via the link at the end of this post.

Before we get started, some practical numbers:

  • I wrote the implementation in 2-3 hours (so it certainly can be improved)
  • The single file contains around 100 lines of code

Just for fun, I checked the length of the C# implementation, and found that altogether it had 650 lines of code.

Imperative to Functional

For Each For Each

One of the things I had to implement was the possibility to analyze source, project and solution files. Source files can be analyzed directly; project files must first be decomposed into source files, and solution files must first be decomposed into project files.

When I looked at the C# implementation, you could see that they had implemented a foreach, in a foreach, in a foreach, to get the three different levels of lists.
Each source file must be wrapped in a StyleCop Project object so it can be analyzed, so you must indeed go through every project file and solution file to obtain all those underlying source files.

Functional programming has different approaches. What we want to do is: “create a StyleCop Project for every source file”. That was my initial idea. I don’t want to know where those files came from (project, solution). I came up with this solution:

Every StyleCop Project must have an identifier, which must be incremented for each source file it analyzes.
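A minimal sketch of the idea (not the actual FAKE code; toProject stands in for whatever call wraps a file in a StyleCop project):

```fsharp
// Sketch: wrap every source file in a project, threading an incrementing ID
// through a tail-recursive helper. 'toProject' is a placeholder for the call
// that creates the actual StyleCop project for a file.
let createProjectsFor (toProject : int -> string -> 'Project) (startId : int) (files : string list) =
    let rec loop id acc remaining =
        match remaining with
        | [] -> List.rev acc                      // every file handled: return the projects
        | file :: rest ->
            let project = toProject id file       // wrap this source file in a project
            loop (id + 1) (project :: acc) rest   // recurse on the tail with the next ID
    loop startId [] files
```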

In my whole file there are no foreach loops, but this function is called three times directly, and indirectly as well. So, you could say it's the core of the file.

The function takes a start ID, together with the files to run through, and creates StyleCop Project instances. It's a nice example of Tail Recursion in functional programming, where you let a function run through every element in a list (via the tail).
At the end, we combine all the created StyleCop Project instances and return that list.

Shorthanded Increment

In the C# implementation, they used the shorthanded increment (++) to assign the next integer as the Project ID. In the previous snippet, you see that the ID is also passed into the function and is increased internally. This way we can reuse this function, because we don't have to start from zero; we can start from any valid integer.

The source files can call this function directly, project files go through the source files, and the solution files go through the project files, but they all use this function: the Tail Recursion. At the end, we combine all the StyleCop Projects created from the source files, project files and solution files.

I could have created a counter function that has a Closure inside to count the IDs though:
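Something along these lines (again just a sketch, not the code in the repository):

```fsharp
// A counter with a closure: callers ask for the next ID instead of passing and
// incrementing it themselves. F# closures cannot capture mutable let-bindings,
// hence the ref cell.
let makeCounter (start : int) =
    let current = ref start
    fun () ->
        let id = current.Value
        current.Value <- current.Value + 1
        id

// usage: let nextId = makeCounter 0
//        nextId () // 0
//        nextId () // 1
```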

This would have reduced the arguments we must send with the function, and would remove the implicit reference to the project IDs and the project length.

Feel free to contribute!

C# Assemblies

The assemblies used in this file are also written in C#. So this is an example of how C# assemblies can be used in F# files without much effort. The downside is that a "list" in C# isn't the same as a "list" in F#, so some conversions are needed.
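For example (file names made up), converting the .NET list type coming from C# into an immutable F# list boils down to:

```fsharp
// A System.Collections.Generic.List<'T> coming from the C# assembly is not
// an F# list, so it has to be converted explicitly.
let fromCSharp = System.Collections.Generic.List<string>([ "Program.cs"; "Parser.cs" ])
let asFSharpList : string list = List.ofSeq fromCSharp
```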

Also, Currying or Partial Application in F# isn't possible with C# objects. If this were possible, I think my implementation would look just that bit more functional.

Conclusion

Personally, I was surprised that I could write a full implementation in just 2 - 3 hours. F# is a fun language to play with, but also to write solid declarative implementations quickly. I hope that I can use my functional skills more in actual production projects.

My interest in functional programming has increased by writing this implementation for StyleCop, and I expect that not only my functional programming skills will become better in the future, but also my object-oriented programming skills.

Thanks for reading and check the implementation on my GitHub.

Remember that I only worked on this for 2 - 3 hours; so contribute to the FAKE library if you have any suggestions, because EVERYTHING can be written better.

Categories: Technology
Tags: Code Quality, F#
written by: Stijn Moreels

Posted on Thursday, June 22, 2017 2:41 PM

Toon Vanhoutte by Toon Vanhoutte

In this blog post, I'll explain in depth the routing slip pattern and how you can leverage it within enterprise integration scenarios. As always I'll have a look at the benefits, but also the pitfalls will get some well-deserved attention.

The Pattern

Introduction

A routing slip is a configuration that specifies a sequence of processing steps (services). This routing slip must be attached to the message to be processed. Each service (processing step) is designed to receive the message, perform its functionality (based on the configuration) and invoke the next service. In that way, a message gets processed sequentially by multiple services, without the need for a coordinating component. The pattern is described in Enterprise Integration Patterns.

Some examples of this pattern are:

Routing Slip

Routing slips can be defined in any format; JSON or XML are quite popular. An example of a simple routing slip can be found below. The header contains the name of the routing slip and a counter that carries the current step number. Each service is represented by a routing step. A step has its own name to identify the service to be invoked and has its own key-value configuration pairs.
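As a minimal sketch of that structure (all names below are made up, and the same shape could just as well be expressed as a JSON or XML document):

```fsharp
// Illustrative routing slip: a header (name + current step counter) and a list
// of steps, each identifying a service and carrying its key-value configuration.
type RoutingStep =
    { Service : string
      Config  : Map<string, string> }

type RoutingSlip =
    { Name        : string
      CurrentStep : int
      Steps       : RoutingStep list }

let orderSlip =
    { Name = "InboundOrders"
      CurrentStep = 0
      Steps =
        [ { Service = "Decode";    Config = Map.ofList [ "format", "AS2" ] }
          { Service = "Transform"; Config = Map.ofList [ "map", "Order_To_SAP" ] }
          { Service = "Send";      Config = Map.ofList [ "destination", "SAP" ] } ] }
```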

Remark that this is just one way to represent a routing slip. Feel free to add your personal flavor…

Assign Routing Slip

There are multiple ways to assign a routing slip to a message. Let's have a look:

  • External: the source system already attaches the routing slip to the message
  • Static: when a message is received, a fixed routing slip is attached to it
  • Dynamic: when a message is received, a routing slip is attached, based on some business logic
  • Scheduled: the integration layer has routing slips scheduled that also contain a command to retrieve a message

Service

A service is considered a "step" within your routing slip. When defining a service, you need to design it to be generic. The logic executed within the service must be based on the configuration, if any is required. Ensure your service has a single responsibility and a clear boundary of its scope.

A service must consist of three steps:

  • Receive the message
  • Process the message, based on the routing slip configuration
  • Invoke the next service, based on the routing slip configuration

There are multiple ways to invoke services:

  • Synchronous: the next service is invoked without any persistence in between (e.g. in memory). This has the advantage that it will perform faster.
  • Asynchronous: the next service is invoked with persistence in between (e.g. a queue). This has the advantage that reliability increases, but performance degrades.

Think about the desired way to invoke services. If required, a combination of sync and async can be supported.
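Building on the sketch above, a generic service step could look something like this (the processing and dispatch functions are placeholders; in a real solution they would be technology-specific, e.g. a queue send for an asynchronous hand-over):

```fsharp
// Sketch of a generic service: receive the message, process it based on the
// configuration of the current step, then invoke the next service (or stop
// when the routing slip has been fully processed).
let executeStep (processMessage : Map<string, string> -> byte[] -> byte[])
                (dispatchNext   : RoutingSlip -> byte[] -> unit)
                (slip : RoutingSlip)
                (message : byte[]) =
    match List.tryItem slip.CurrentStep slip.Steps with
    | None -> ()                                          // no steps left: we're done
    | Some step ->
        let processed = processMessage step.Config message   // config-driven processing
        let updated = { slip with CurrentStep = slip.CurrentStep + 1 }
        dispatchNext updated processed                       // hand over to the next service
```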

Advantages

Encourages reuse

Integrations are composed of reusable and configurable building blocks. The routing slip pattern forces you to analyze, develop and operate in a streamlined manner. Reuse is heavily encouraged on different levels: the way analysis is performed, how patterns are implemented, the way releases are rolled out and how operational tasks are performed. One unified way of working, built on reusability.

Configuration based

Your integration is completely driven by the assigned routing slip. There are no hard-coded links between components. This allows you to change its behavior without the need of a re-deployment. This configuration also serves as a great source of documentation, as it explains exactly what message exchanges are running on your middleware and what they exactly do.

Faster release cycles

Once you have set up a solid routing slip framework, you can increase your release cadence. By leveraging your catalogue of reusable services, you heavily benefit from previous development efforts. The focus is only on the specifics of a new message exchange, which are mostly data bound (e.g. mapping). There's also a tremendous increase of agility, when it comes to small changes. Just update the routing slip configuration and it has an immediate effect on your production workload.

Technology independent

A routing slip is agnostic to the underlying technology stack. The way the routing slip is interpreted is, of course, specific to the technology used. This introduces ways to have a unified integration solution, even if it is composed of several different technologies. It also enables cross-technology message exchanges. As an example, you can have an order that is received via an AS2 Logic App, transformed and sent to an on-premises BizTalk Server that inserts it into the mainframe, all governed by a single routing slip configuration.

Provides visibility

A routing slip can introduce more visibility into the message exchanges, certainly from an operational perspective. If a message encounters an issue, operations personnel can immediately consult the routing slip to see where the message comes from, what steps have already been executed and where it is heading. This visibility can be improved by updating the routing slip with some extra historical information, such as the service start and end times. Why not even include a URL in the routing slip that points to a wiki page or knowledge base about that interface type?

Pitfalls

Not enough reusability

Not every integration project is well-suited to the routing slip pattern. During the analysis phase, it's important to identify the integration needs and to see if there are a lot of similarities between all message exchanges. When a high level of reusability is detected, the routing slip pattern might be a good fit. If all integrations are too heterogeneous, you'll introduce more overhead than benefits.

Too complex logic

A common pitfall is adding too much complexity to the routing slip. Try to stick as much as possible to a sequential series of steps (services) that are executed. Some conditional decision logic inside a routing slip might be acceptable, but define clear boundaries for such logic. Do not start writing your own workflow engine, with its own workflow language. Keep the routing slip logic clean and simple, to stick to the purpose of a routing slip.

Limited control

In case of maintenance of the surrounding systems, you often need to stop a message flow. Let's take the scenario where you face the following requirement: "Do not send orders to SAP for the coming 2 hours". One option is to stop a message exchange at its source, e.g. stop receiving messages from an SFTP server. In case this is not accepted, as these orders are also sent to other systems that should not be impacted, things get more complicated. You can stop the generic service that sends a message to SAP, but then you also stop sending other message types… Think about this upfront!

Hard deployments

A very common pain-point of a high level of reuse, is the impact of upgrading a generic service that is used all over the place. There are different ways to reduce the risks of such upgrades, of which automated system testing is an important one. Within the routing slip, you can specify explicitly the version of a service you want to invoke. In that way, you can upgrade services gradually to the latest version, without the risk of a big bang deploy. Define a clear upgrade policy, to avoid that too many different versions of a service are running side-by-side.

Monitoring

A message exchange is spread across multiple loosely coupled service instances, which could impose a monitoring challenge. Many technologies offer great monitoring insights for a single service instance, but lack an overall view across multiple service instances. Introducing a correlation ID into your routing slip, can highly improve the monitoring experience. This ID can be generated the moment you initialize a routing slip.

Conclusion

Routing slips are a very powerful mechanism to deliver unified and robust integrations in a fast way. The main key take-aways of this blog are:

  • Analyze in depth if you can benefit from the routing slip pattern
  • Limit the complexity that the routing slip resolves
  • Have explicit versioning of services inside the routing slip
  • Include a unique correlation ID into the routing slip
  • Add historical data to the routing slip

Hope this was a useful read!
Toon

 

Categories: Architecture
Tags: Design
written by: Toon Vanhoutte

Posted on Tuesday, June 20, 2017 11:24 PM

Pieter Vandenheede by Pieter Vandenheede

In this blog I'll try to explain a real-world example of a Logic App used to provide the short links to promote the blog posts appearing on our blog. Ready for the journey as I walk you through?

Introduction

At Codit, I manage the blog. We have some very passionate people on board who like to invest their time to get to the bottom of things and - also very important - share it with the world!
That small part of my job means I get to review blog posts on a technical level before publishing. It's always good to have an extra pair of eyes reading the post before publishing it to the public, so this definitely pays off!

An even smaller part of publishing blog posts is making sure they get enough coverage. Sharing them on Twitter, LinkedIn or even Facebook is part of the job for our devoted marketing department! And analytics around these shares on social media definitely come in handy! For that specific reason we use Bitly to shorten our URLs.
Every time a blog post got published, someone needed to add it manually to our Bitly account and send out an e-mail. This takes a small amount of time, but as you can imagine it accumulates quickly with the number of posts we have been generating lately!

Logic Apps to the rescue!

I was looking for an excuse to start playing with Logic Apps and they recently added Bitly as one of their Preview connectors, so I started digging!

First, let's try and list the requirements of our Logic App to-be:

Must-haves:

  • The Logic App should trigger automatically whenever a new blog post is published.
  • It should create a short link, specifically for usage on Twitter.
  • It also should create a short link, specifically for LinkedIn usage.
  • It should send out an e-mail with the short links.
  • I want the short URLs to appear in the Bitly dashboard, so we can track click-through-rate (CTR).
  • I want to keep Azure consumption to a minimum.

Nice-to-haves:

  • I want the Logic App to trigger immediately after publishing the blog post.
  • I want the e-mail to be sent out to me, the marketing department and the author of the post for (possibly) immediate usage on social media.
  • If I resubmit a Logic App run, I don't want new URLs (idempotency); I want to keep the ones already in the Bitly dashboard.
  • I want the e-mail to appear as if it was coming directly from me.

Logic App Trigger

I could easily fulfill one of the first requirements, since the Logic Apps RSS connector provides a very easy way to trigger a Logic App based on an RSS feed. Our Codit blog RSS feed seemed to do the trick perfectly!

Now it's all about timing the polling interval: if we poll every minute we get the e-mail faster, but we will spend more on Azure consumption since the Logic App gets triggered more often... I decided 30 minutes would probably be good enough.

Now I needed to try and get the URL for any new posts that were published. Luckily, the "links - Item" dynamic content provides me the perfect way of doing that. The Logic Apps designer conveniently detects this might be an array of links (in case two posts get published at once) and places this within a "For each" shape!

Now that I had the URL(s), all I needed to do was save the Logic App and wait until a blog post was published to test it. In the Logic App "Runs history" I was able to click through and see for myself that I got the links array nicely:

Seems there is only one item in the array for each blog post, which is perfect for our use-case!

Shortening the URL

For this part of the exercise I needed several things:

  • I actually need two URLs: one for Twitter and one for LinkedIn, so I need to call the Bitly connector twice!
  • Each link gets a little extra information in the query string called UTM codes. If you are unfamiliar with those, read up on UTM codes here. (In short: it adds extra visibility and tracking in Google Analytics).
    So I needed to concatenate the original URL with some static UTM string + one part which needed to be dynamic: the UTM campaign.

For that last part (the campaign): our CMS already cleans up the title of a blog post into the last part of the URL being published! This seems ideal for our use-case here.

However, due to a lack of knowledge of the Logic Apps syntax I got a bit frustrated and - at first - created an Azure Function to do just that (extract the interesting part from the URL).

I wasn't pleased with this, but at least I was able to get things running...
It did however mean I needed extra, unwanted, Azure resources:

  • Extra Azure storage account (to store the function in)
  • Azure App Service Plan to host the function in
  • An Azure function to do the trivial task of some string manipulation.

After some additional (but determined) trial and error late in the evening, I ended up doing the same in a Logic App Compose shape! Happy days!

Inputs: @split(item(), '/')[add(length(split(item(), '/')), -2)]

It takes the URL, splits it into an array based on the slash ('/'), and takes the part which is interesting for my use-case. See for yourself:
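Roughly what that expression does, sketched in F# (the URL below is just an illustration, assuming the published link ends with the post slug followed by a trailing slash):

```fsharp
// Split the URL on '/' and take the second-to-last part: the post slug.
let url = "https://www.codit.eu/blog/2017/06/20/some-post-title/"
let parts = url.Split('/')
let campaign = parts.[parts.Length - 2]   // "some-post-title"
```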

Now I still needed to concatenate all pieces of string together. The concat() function seems to be able to do the trick, but an even easier solution is to just use another Compose shape:

Concatenation comes naturally to the Compose shape!

Then I still needed to create the short links by calling the Bitly connector.

Let's send out an e-mail

Sending out e-mail using my Office 365 account is actually the easiest thing ever.

Conclusion

My first practical Logic App seems to be a hit! And probably saves us about half an hour of work every week. A few hours of Logic App "R&D" will definitely pay off in the long run!

Here's the overview of my complete Logic App.

Some remarks

During development, I came across what appear to me to be some limitations:

  • The author of the blog post is not in the output of the RSS connector, which is a pity! This would have allowed me to use his/her e-mail address directly or, if it was his/her name, to look-up the e-mail address using the Office 365 users connector!
  • I'm missing some kind of expression shape in Logic Apps!
    Coming from BizTalk Server where expression shapes containing a limited form of C# code are very handy in a BizTalk orchestration, this is something that should be included one way or the other (without the Azure function implementation).
    A few lines of code in there is awesome for dirty work like string manipulation for example.
  • It took me a while to get my head around Logic Apps syntax.
    It's not really explained in the documentation when or when not to use @function() or @{function()}. It's not that hard at all once you get the hang of it. Unfortunately it took me a lot of save errors and even some run-time errors (not covered at design time) to get to that point. Might be just me however...
  • I cannot rename API connections in my Azure Resource Group. Some generic names like 'rss', 'bitly' and 'office-365' are used. I can set some connection properties so they appear nicely in the Logic App however.
  • We have Office365 Multi-Factor Authentication enabled at our company. I can authorize the Office365 API connection, but this will only last for 30 days. I might need to change to an account without multi-factor authentication if I don't want to re-authorize every 30 days...

Let me know what you think in the comments! Is this the way to go?
Any alternative versions I could use? Any feedback is more than welcome.

In a next blog post I will take some of our Logic Apps best practices to heart and optimize the Logic App.

Have a nice day!
Pieter

Categories: Azure
written by: Pieter Vandenheede