
Codit Blog

Posted on Friday, April 17, 2015 5:21 PM

by Maxim Braekman

On April 3rd, we had the honor of taking part in the world premiere of the IoT Dev Camp organized by Microsoft at their offices in Zaventem, Belgium. Our host of the day was Jan Tielens (@jantielens), who guided us through demos and labs using both cloud services and electronics.

In general, it might sound easy to hook up a range of devices to a proper interface, but a lot of things have to be taken into account when setting this up. Some of the things you need to keep in mind are device registration, security and keeping connectivity settings up-to-date.

One way to secure the communication between the devices and a cloud service, without having to configure that security for every single device, is to use a device gateway.
This gateway takes care of all the communication, and the corresponding security, between the devices and the cloud service. This allows you to easily add new devices without adapting the existing interface.

 

The goal of this session was to create a solution in which sensors register data that is sent to and processed by cloud services. Before we could actually start tinkering with devices and sensors ourselves, we got a nice presentation, including some demos, on how to configure and use Azure services such as Event Hubs, Stream Analytics and Mobile Services.

Event hubs

Event Hubs are an ideal service for collecting all the data coming from several devices. They enable the collection of event streams at high throughput, from a diverse set of devices and services. As this is a pub-sub ingestion service, it can pass the data on to several other services for further processing or analysis.
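To make the ingestion model a bit more concrete, here is a minimal, hypothetical sketch in plain Python (deliberately not the actual Event Hubs SDK; the names `make_event` and `partition_for` are illustrative) of how a sensor reading becomes an event, and how a partition key such as the device id maps every event from the same device onto the same partition:

```python
import hashlib
import json

def make_event(device_id, temperature):
    """Serialize one sensor reading as the JSON body of an event."""
    return json.dumps({"deviceId": device_id, "temperature": temperature})

def partition_for(partition_key, partition_count=4):
    """Conceptual model of partition-key routing: a stable hash of the
    key maps it to one of the hub's partitions, so all events sharing
    a key (e.g. one device) keep their relative order."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % partition_count

event = make_event("sensor-01", 21.5)
print(partition_for("sensor-01"))  # same device, same partition, every time
```

In a real solution the event body would simply be handed to the Event Hubs client library together with the partition key; the hashing above only illustrates the routing behavior the service provides.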

Streaming analytics

Thanks to Stream Analytics, the data retrieved from the sensors can be analyzed in real time, showing the most recent, up-to-date information on, for example, a dashboard.

As Sam Vanhoutte already gave an extensive description of streaming analytics in this blog post, I will not be diving into this subject. 
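Still, to give a flavour of the topic: a Stream Analytics job is essentially a SQL-like query over the incoming event stream. A minimal, illustrative query (the input and output names below are placeholders, not taken from the session) that averages sensor readings per device over a 10-second window could look like this:

```sql
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp AS windowEnd
INTO
    [dashboard-output]
FROM
    [eventhub-input]
GROUP BY
    deviceId,
    TumblingWindow(second, 10)
```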

Mobile services

Using Azure Mobile services, you can quickly create a service which can be used to process and show any type of data on a website, mobile device or any other application you could be creating.

This session did not go into the details of creating mobile services with custom methods. This was only used as an example to show that the backend database can be used to store the output when using streaming analytics. In a real-life solution this would allow you to make all of the data, collected from several sensors, publicly available.

Devices

Several types of devices can be used as a base when setting up an IoT interface. Some of these boards are briefly described below.

Arduino

The Arduino, an open-source device, is probably the most popular one currently available. The biggest benefit of this device is the large community, which makes it easy to find the information or samples to get you going.

The downside of this device is its low specs. With only 32 KB of flash memory, it has a limited set of capabilities. Security-wise, for instance, it is not possible to communicate with services over HTTPS; it is, however, capable of sending data over HTTP using an Ethernet (UTP) shield.

More info can be found here: http://arduino.cc/

Netduino

Another device, quite similar to the Arduino, is the Netduino, as it is based on the former's platform. This board has better specs than the Arduino but, because of those specs, consumes more power.

The big difference, however, is that it allows you to run the .Net Micro Framework, enabling you to develop in .Net languages.

Then again, the downside of this board is that the community is not as big, meaning you will have to figure out more of the details yourself.

More info can be found here: http://www.netduino.com/

.Net Gadgeteer

One of the other available development-boards is the Microsoft .Net Gadgeteer, which also enables you to use the .Net Micro Framework and allows you to use a range of "plug-and-play" sensors.

The specs of this device are better than those of both previous boards, meaning it has a lot more capabilities, but again it does not have a large community to help you out.

More info can be found here: http://www.netmf.com/gadgeteer/

Raspberry Pi

Of all of these boards, the Raspberry Pi is the one with the highest specs, even allowing you to run an operating system on top of the device. Typically, this device is running a version of Linux, but as was announced some time ago, Microsoft will publish a free version of Windows 10 that will be able to run on top of the board!

The huge benefit of this board is the capability of using pretty much any programming language for development, since any required framework can be installed.

However, before you can start using the Raspberry Pi, you need to obtain a copy of an OS, which has to be 'burned' onto a micro-SD card that acts as the 'hard drive' of the board.

More info can be found here: http://www.raspberrypi.org/

Closure

Once the presentation and demos were finished, we all got the chance, during a ‘hackathon’, to attempt to set up a fully working flow, starting from sensors and ending up in a cloud service.

Overall, this session gave us a nice overview of the capabilities of IoT in real-life situations.

To round up the session, Microsoft came up with a nice surprise. As this was the world premiere, all of the attendees received a Raspberry Pi 2, preparing us for integrating IoT using Windows 10.

Thank you, Microsoft!

 

 

 

Categories: IoT Azure
written by: Maxim Braekman

Posted on Wednesday, April 15, 2015 12:40 PM

by Glenn Colpaert

by Henry Houdmont

by Pieter Vandenheede

The second day of the London BizTalk Summit 2015 is over and did not disappoint. In this blog post you can find our view on today's sessions and our conclusion.

Intro

The London BizTalk Summit 2015 is over and we’ve enjoyed every minute of it. Below you can find our view on today’s sessions. Be sure to check our day 1 recap of yesterday in case you missed it: London BizTalk Summit 2015 - Day 1 recap

The second day was more focused on developers - not admins ;-) - and started with a number of short back-to-back sessions.

Don’t hesitate to put questions or remarks in the comments part of this blog.

Hybrid Solutions with the current BizTalk Server 2013 R2 platform

Speaker: Steef-Jan Wiggers (https://twitter.com/SteefJan)

Steef-Jan had the honor to kick off the second day of the BizTalk Summit 2015.

In the fast-changing world of technology, we - as integration people - are constantly confronted with new challenges and opportunities. This forces us to modernize our view on integration.

The focus of Steef-Jan during his session was to demonstrate how to tackle some of these challenges with BizTalk Server 2013 R2.

He showed us how easy it is to consume REST services from BizTalk and how the BizTalk tools facilitate us to work with JSON files.

 

10x latency improvement – how to squeeze performance out of your BizTalk solution

Speaker: Johan Hedberg (https://twitter.com/johed)

In this session Johan analyzed a real-life case of an over-architected BizTalk solution that required optimization to improve latency and throughput. I've listed some of the improvements Johan touched on below:

  • Reduce MessageBox hops by nesting orchestrations
  • Consider levels/layers of reuse by using canonical processes and methods
  • Memory is cheap, so consider caching your data
  • Host Management is important, consider host separation, tweaking throttling settings, tweaking polling interval,…
  • Custom Performance Counters to quickly identify where the bottleneck of your application is
  • Optimize your Orchestration Persistence Points

Be aware: none of these improvements is a silver bullet. The most important thing is that you know your solution and your requirements, and act upon them.

DEV – TEST – TUNE - REPEAT

Johan will do a more detailed session on performance optimizations on Integration Mondays next week! (http://www.integrationusergroup.com/)

BizTalk Server tips and tricks for developers and admins

Speaker: Sandro Pereira (https://twitter.com/sandro_asp)

As BizTalk Developers and BizTalk Admins it’s important to maintain the health of the platform and to have tools and techniques to produce efficient integration solutions.

Sandro took us on a trip around some useful BizTalk Server tips, tricks and workarounds for both developers and administrators. Meanwhile, he took the opportunity to bash the BizTalk administrator Tord Glad Nordahl, a good friend of his, which led to some funny situations during his session.

One of the more interesting tips was a tool Sandro created to clean out the BizTalk MarkLog tables, as BizTalk does not provide an out-of-the-box solution for this. You can download and try this tool at the following location: https://gallery.technet.microsoft.com/BizTalk-Server-Cleaning-15a1b070

Due to a lack of time, Sandro could not cover every tip and trick, so be sure to check out all of this valuable information when the slides come online.

Power BI tool

Speaker: Tord Glad Nordahl (https://twitter.com/tordeman)

A session by a BizTalk admin for BizTalk admins, but Tord was mainly trying to convince the developers to make nice graphs using PowerBI instead of looking at raw data: an honorable, but difficult mission.

PowerBI has recently become generally available (GA) on all platforms. It's a cheap tool (10 USD per user per month) that allows you to manipulate data in an easy way and show nice dynamic graphs.

You can grab data from many different types of sources (SQL, CSV … ) to make reports. In a BizTalk scenario you could use the tracking database to show the business performance graphs about certain flows running in your BizTalk environment.

Basically it comes to this: grab whatever data you want from any source you want, merge, customize and create nice graphs to keep the business guys happy.

Microservices and the cloud-based future of integration

Speaker: Charles Young (https://twitter.com/cnayoung)

Next up was Charles Young, who gave us a very enthusiastic and passionate session on µServices and the evolution of architecture leading up to them. I felt Charles did a great job of explaining this and, looking at the crowd, he certainly shared his enthusiasm with them.

Charles talked from an architecture viewpoint and explained why moving from an ESB architecture to a µServices architecture can be a good idea. He explained the move from layered to hexagonal architecture and how µServices picked up on this idea.

With µServices it is important to make sure the layered architecture is maintained and boundaries are kept, so you avoid the pitfall of coupling your services and applications too tightly.

He explained how important it is to standardize the interfaces using SOAP or REST and how a lot of established and upcoming services are built around this principle of API's.

The aspirations of µServices are:

  • Simplicity: chop up complex things and make them easy to reuse.
  • Velocity: allows you to speed up development.
  • Evolution: allow quick change by gathering the building blocks to form an application.
  • Democratisation: allowing a mild learning curve and the ability to expose something to the public quickly.

From monolithic design to µServices: services are too chunky and need to be decomposed into finer-grained µServices.

It is important to organize your services around business capabilities from front-end to back-end, so one builds a nice stack from front-end to back-end. You also need to be able to deploy, host and version the µServices independently and try to use lightweight communication. Keeping it simple and fast.
Avoiding centralized governance and management will for example allow a cross-platform approach.
And as a last item: really try to focus on rapid re-use of the µServices, which comes back to the first point: fine-grained services accommodate this.

Later on in the session, Charles - still from a pure architectural standpoint - also talked about the limitations of the Microsoft stack, which made sense.

As a closer, a quote from Charles which made a lot of sense:

"It's not because µservices is the new buzzword that we should leave our brains at the door."

Migrating to microservices

Speaker: Jon Fancey (https://twitter.com/jonfancey) & Dan Probert (https://twitter.com/probertdaniel)

Jon started off with some slides to convince the audience to move their flows from on-premises to µServices in the cloud, with the following arguments:

  • In the cloud you have more flexibility with the “pay as you go” pricing models and the easy scaling possibilities.
  • iPaaS (integration Platform as a Service) allows you to have an environment that needs less management, and the µServices can be easily updated.

After that, he continued by explaining step-by-step the equivalents between the cloud components and the on premises building blocks:

  • A workflow can be considered as a Logic App
  • Maps are converted into BizTalk API App Transforms. A Microsoft tool exists to do this conversion, but XSLT is also supported; you can even host the map as-is in an API App, which allows you to use XSLT 2.0!
  • The Business Rules Engine becomes a Rules API App with a portal-based designer.
  • Trading Partner Management is not yet covered but MS is looking into developing a tool for this.
  • Pipelines become Logic Apps where the Pipeline Components are converted into API Apps.

Dan Probert then took over to announce their new initiative: The Migration Factory

They created a tool to migrate full on-premises applications to µServices cloud solutions!
By exporting the MSI and uploading it to their tool, you'll get a Logic App and API Apps with similar functionality, where adapters become connectors, the MessageBox becomes a Service Bus API App, etc.

As the tool doesn't cover everything yet, and probably won't be able to due to technical limitations, the website gives you a report listing the parts that will be converted and a to-do list of what you should convert yourself.

BAM does not exist in the cloud and tracking can be done using the infrastructure REST API.

More information can be found here: http://migrationfactoryholding.azurewebsites.net/

 

Azure API Management Part 1 and Part 2

Part 1 Speaker: Kent Weare (https://twitter.com/wearsy)

Part 2 Speaker: Tomasso Groenendijk (https://twitter.com/tlagroenendijk)

The first session in the afternoon was covered by Kent Weare.

He started off explaining the difference between an API and a WebAPI. The main difference is that a WebAPI is about HTTP: it is RESTful and (preferably) uses JSON (or XML) and Swagger.

Kent showed us how APIs are on the rise. A lot of new public APIs come out each day, which makes you wonder how many APIs are still internal: the public ones are just the tip of the iceberg. More and more APIs will appear due to the growth in mobile applications and services, IoT, big data, etc.

Kent then surprised us with a nice comparison of an API management solution to a bouncer or doorman, taking care of:

  • Authentication and authorization (API security)
  • Policy enforcement (Play by the rules)
  • Analytics (being able to see how many calls were made, and by whom)
  • Development Engagement (Allowing other systems/services to connect to your application/service makes it more usable and integration friendly).
  • Agility (being able to quickly adapt to the business)

Azure API Management started off when the race for API management began within the industry. Microsoft acquired APIphany and thereby ventured into API management land.

Kent then showed us a nice demo covering things like:

  • Creating and provisioning in the Azure portal (not the preview)
  • Defining operations
  • Defining policies
  • Test APIs from console
  • Showing analytics
  • The ease of enabling caching
  • Rate Limiting
  • Security

Next up was Tomasso Groenendijk, who had prepared a different demo than Kent's. His demo revolved around API Management in relation to the BizTalk ESB Toolkit patterns, the latter being one of his favorite BizTalk tools.

Earlier during the summit, Tomasso had asked - via Twitter - for participation from the audience, inviting them to sign up their APIs via a customized developer portal.

Unfortunately for Tomasso, his demos didn't go exactly as planned, which was a real shame because he had obviously spent a lot of time preparing them. He touched on Azure API Management, Azure Websites, Azure SQL Database and even BizTalk360.

He explained the agility of using an ESB pattern in combination with Azure API Management to quickly expose that ESB as an API. Itinerary-based services and routing help you adapt quickly to changing business needs.

Azure of Things

Speaker: Nino Crudele (https://twitter.com/ninocrudele)

Nino kicked off his session in his well-known style: crazy and full of enthusiasm.
He started with an overview of the history of integration technologies and concluded that, even though technologies have evolved a lot over the years, the most used technology is still FILE, as it is simple, flexible, adaptable, serializable, reliable, ...

On an architecture level, you have multiple options:

  • peer-to-peer (spaghetti integration)
  • central common transports/connectors layer between endpoints and the integration framework
  • Integration Framework contains Transport/connectors layer and endpoints use proxies of this transport/connectors layer to connect to it.

As Azure contains a lot of technologies, we could talk of an “Azure of Things” where the combined use of all those tools can bring us much more than the sum of the possibilities of each tool.

Following this small presentation, Nino presented a few demos with an architecture he created for event propagation using the Azure Event Hubs which he called JitGate (Just in Time Gate).

His demos demonstrated his framework, which is still in an early stage, but looks very promising and performant. Knowing Nino, he might blow us away during the coming year when he upgrades the framework further!

Conclusion

After two days packed with sessions and new information from talented speakers, we were all pretty tired and anxious to get home. We learned a lot of new things and have enough ideas to keep this blog going for at least another year!

We hope you enjoyed our small recap and are eagerly awaiting your feedback! What did you think we missed in our posts or do you have another point of view? Let us know!

 

Posted on Tuesday, April 14, 2015 4:30 PM

by Massimo Crippa

The Azure API Management Portal allows API Publishers to set policies to change the behavior of the underlying API by configuration.
The Policies act like a pipeline that executes a set of conditions or rules in a sequence. Policies contain configurable rules for authentication, validation, quota and IP level restriction, caching and more.
The Microsoft Product team is constantly adding features to the service, recently conditional policies have been added.

Policies overview

Policies can be applied at different scopes (Global, Product, API and Operation); some are allowed only on the inbound channel, others only on the outbound.

The table below summarizes the policies as of today, with their scope and applicability.

Policies defined at different scopes are pulled from the higher levels down to the lowest through the "<base/>" element and flattened into a single policy definition that executes the steps in sequence.

Here is the definition of the policies I set up for this blog post, and the effective result per scope.
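As the original screenshots are not reproduced here, the following is an illustrative (not the original) operation-scope definition. The "<base/>" element is where the policies of the enclosing scopes are pulled in before the operation's own step runs:

```xml
<policies>
    <inbound>
        <!-- Execute the Global, Product and API scope policies first -->
        <base />
        <!-- Then the operation-specific step -->
        <set-header name="x-operation-scope" exists-action="override">
            <value>demo</value>
        </set-header>
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```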

Conditional policies

On April 10th conditional policies have been rolled out:

  • Policy expressions. C#-based expressions that can be used as values in policy attributes and elements. 
  • Control flow policy. That's an if-then-else construct to conditionally apply policies based on the evaluation of logical expressions.
  • Set variable policy. Use this policy to store a computed value for later re-use within the scope of a single request.
  • Set backend service policy. To override the backend service URL.

Conditional policies bring flexibility to the API Management policy engine, enabling the definition of more complex rules: for example, evaluating headers in the inbound pipeline, saving them in the context and then taking decisions on the outbound channel based on those properties.

In the very basic example below I combined these 4 new policies in a single policy definition to route the request message to different endpoints depending on the millisecond component of the current time.
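The original definition is shown in the screenshots; as a rough sketch of the same idea (the backend URLs below are placeholders), the new policies combine like this: a policy expression computes the millisecond, set variable stores it, and control flow plus set backend service pick the endpoint:

```xml
<inbound>
    <base />
    <!-- Policy expression + set-variable: store the current millisecond -->
    <set-variable name="ms" value="@(DateTime.Now.Millisecond)" />
    <!-- Control flow: route to a different backend based on the value -->
    <choose>
        <when condition='@((int)context.Variables["ms"] % 2 == 0)'>
            <set-backend-service base-url="https://backend-a.example.com" />
        </when>
        <otherwise>
            <set-backend-service base-url="https://backend-b.example.com" />
        </otherwise>
    </choose>
</inbound>
```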

Test and analyze the trace

To test the policy definition I used the API Management built-in console and added the "ocp-apim-trace: true" header to the operation call to enable the API Inspector.

The trace location is returned in the ocp-apim-trace-location response header, which can be used to download the JSON data and inspect the execution.

Here I retrieved the JSON using Fiddler and then used an online JSON visualizer to easily inspect the trace, check the policy execution and read the debug messages.

 

With Azure API Management policies you gain the ability to protect, transform and modify the underlying behavior of the virtualized backend service.

The April service update and the conditional policies brought more flexibility and power to the policy engine.

Cheers

Massimo

 

 

Categories: Azure
written by: Massimo Crippa

Posted on Tuesday, April 14, 2015 3:16 AM

by Glenn Colpaert

by Henry Houdmont

by Pieter Vandenheede

The first day of the London BizTalk Summit 2015 is over and did not disappoint. In this blog post you can find our view on today's sessions.

Introduction

The first day of the London BizTalk Summit 2015 is over and did not disappoint. Below you can find our view on today's sessions.
Be sure to also check out our Day 2 recap for more sessions.

After an introduction by Saravana Kumar, the long-awaited London BizTalk Summit could kick off with 330 attendees from 161 companies, coming from 20 countries. An impressive set of numbers. It was a bumpy ride to get the show on the road, as Microsoft asked to change the date and thus the venue had to change, but in the end it all turned out great at the Platinum Suite of ExCeL London.

Josh Twist could not make it for the keynote so Dan Rosanova had the honor of starting the show while Karandeep Anand flew over from Seattle on a last-minute flight.  

Code based orchestrations in the cloud

Speaker: Dan Rosanova (Twitter)

Dan Rosanova started off the first session of the day. And what a session: diving into the code within the first 5 minutes might have taken some attendees by surprise on an early Monday morning. His session dealt with the Durable Task Framework, something he has been working on for quite a while since he left the BizTalk path.

Dan explained the objectives and strengths of the Durable Task Framework, which mainly revolve around durability, scalability and reliability.

He took us on a technical tour and went straight for the demos. Speaking to a BizTalk minded crowd he showed us the strengths like auditing, tracing, debugging and replay scenarios which was fun to see.

Too bad the screen was not always perfectly readable due to the console colors, but this might have been an issue with the projector, as the image was also shaking quite a bit on this first day.

 

Definitely a nice session, probably meant more as a teaser to get attendees to investigate the framework. Asked why it is any different from Workflow Foundation (WF), he said that they wanted to provide better and different tooling depending on the target audience, something WF apparently didn't really have.

Durable Task Framework is available as a NuGet package: http://www.nuget.org/packages/DurableTask

And is also available on GitHub: https://github.com/affandar/durabletask

Integrating cloud with existing IBM Systems 

Speaker: Paul Larsen

The recently released Azure App Services platform introduced a series of Azure Connectors to connect existing on-premise infrastructure and apps to the cloud.
In this session Paul gave us an overview of the new API Apps that enable the possibility to connect to existing on-premise IBM systems via the cloud.
Paul showed us real-life scenarios using different connectors, like DB2, Informix and MQ. All of these connectors allow you to create hybrid enterprise cloud solutions.

To wrap-up Paul gave us a quick look at the upcoming roadmap.
On the Azure Connectors side we can expect a TI Connector (CICS, IMS), a TI Service (host-initiated) and a DRDA Service (host-initiated) coming in the near future. On the on-premises side, the HIS V10 Rapid Deployment TAP (BizTalk Adapter for Informix) and Host Integration Server HIS V10 are also planned.

Keynote

Speaker: Karandeep Anand (Twitter)

Karandeep Anand, Partner Director of Program Management at Microsoft, did make it to the Summit in time to replace Josh Twist, who had become ill, and presented Microsoft's view on Azure Websites, BizTalk Services and Mobile Services, as well as the roadmap.

As these products are still quite young, a lot of missing features have already been discovered by developers, but more feedback is very welcome, as these products will evolve based on the feedback of the community.

One of the main pillars of these new services is the democratization of these services (mild learning curve and availability to the masses).

Karandeep elaborated on the strong focus on customer and developer feedback and data-driven development, and announced some important news:

  • On-premises Azure App Services can be expected (no date specified)!
  • A major version of BizTalk Server will be released in 2016 to align with the new Windows platform!

Inside Logic Apps

Speaker: Stephen Siciliano (Twitter)

In this very interesting deep-dive session, Stephen gave us an inside look at how you can use Azure Logic Apps to automate business processes without using code.

The target audience of Logic Apps is anyone who can use Azure, but not necessarily business users or consumers. It has, of course, all the Azure-native capabilities you expect, like auditing, role-based authentication, a deployment lifecycle with Azure Resource Manager and built-in API management; even on-premises support is planned with the next release of the Azure Pack.

Azure Resource Manager was the inspiration for the new Logic Apps service; it has the same underlying engine, but is more powerful. Whereas Azure Resource Manager focuses purely on the Azure platform, Logic Apps focus on supporting different types of resources.

In the demo Stephen used during his talk, an HTTP listener triggered a workflow using Twilio, Yammer and Dropbox. With this demo he showed us some of the more advanced features of the new App Platform, like loops, conditions, parameters, debugging capabilities, ...

BizTalk API Apps

Speaker: Prashant Kumar

Before moving on to the new Azure App Services for BizTalk, Prashant Kumar started with a recap of the existing Microsoft Azure BizTalk Services (MABS) and how these services lack features for parallel and conditional execution, long running workflows …

Following an overview of some features of the App Services for BizTalk, he finally showed a much anticipated demo to show these features at work using an EDIFACT scenario:
AS2 Connector > BizTalk EDIFACT > BizTalk Transform Service > HTTP

In the end he finished with a view on the BizTalk Rules in the Azure Portal. They want to decouple the business logic from the application code for two main reasons:
  • The business analysts having control over the business logic management (something which was already tried in BizTalk Server with BRE)
  • Changes to business logic go to production faster

This is done using custom vocabularies to make the rules management more accessible.

Connectors

Speaker: Sameer Chabungbam (Twitter)

In this hands-on session Sameer showed us how to build a custom connector for Azure App Services platform and how to make it work for Logic Apps. Once you’ve built a custom connector it can be consumed from everywhere and by any type of App.

Sameer showed us how to build a basic Azure Storage Queue Connector that can be used inside the Logic App Platform.
Next to the basic settings and options, Sameer showed us a demo of the more advanced settings and features.

In the upcoming weeks we will publish a post on this blog that will cover this session more in detail.
We will provide you with all the necessary information on how to create a custom connector from A-Z.

BizTalk 360

Speakers: Saravana Kumar (Twitter) and Nino Crudele (Twitter)

This session started off with Nino demoing his NoS add-in with its new features. He and Saravana announced that the NoS add-in has become a commercial product under the BizTalk360 hood, which will be available to the public soon.

More information on its key features can be found here: http://www.biztalk360.com/nos

The demo went on with Saravana Kumar taking the stage and further explaining the new features of the latest BizTalk360 release.

An Integration Platform to Support Vision 2025

Speakers: Michael Stephenson (Twitter) and Oliver Davy

Higher education has changed drastically in the past decades, but its IT systems haven't. The University of Northumbria is looking into the integration space and how it can use it to evolve over the next 20 years.

Currently, their small LOB (Line of Business) application has grown into a huge monolithic application that manages everything and has become too complex to move or change. Therefore the university is researching this, to come up with a new view and a new architecture with Michael's help.

Michael followed Oliver's speech by taking us on a tour of integration tools and technologies in the past, present and future, and shared his view on the integration future: a core platform using existing integration technology as well as new technology such as Azure Service Bus Queues, Azure Logic Apps, Azure App Service and Azure API Apps.

His view presents an architecture where you can keep the great monolithic system where you need it and integrate it with the newer technologies (e.g. microservices) to be more flexible.

Mike ended his session with a highly anticipated bang by integrating the game Minecraft with some of the University services in a very playful demo making it one of the most entertaining sessions of the day.

 

Top 14 Integration Challenges I've seen in the past 14 years

Speaker: Stephen W. Thomas (Twitter)

With day 1 of the conference already running late, I had the feeling that Stephen needed to rush his session a lot.
Quite a shame in my opinion, since he brings years of experience to the stage, which was now condensed into 20 minutes or so.

Stephen gave us a quick overview of the most common challenges and mistakes made within integration projects, which was nice.

 

Posted on Tuesday, March 24, 2015 6:01 PM

by Sam Vanhoutte

In this post, I'm sharing some of my thoughts about the fresh Azure App Service that was announced by Scott Guthrie and Bill Staples.

Today, Scott Guthrie and Bill Staples announced an interesting new Azure service: Azure App Service. It's actually a set of services, combined under one umbrella, allowing customers to build rich, business-oriented applications. Azure App Service is now the new home for:

  • Azure Web Apps (previously called Azure Websites)
  • Azure Mobile Apps (previously Mobile Services)
  • Azure Logic Apps (the new 'workflow' style apps)
  • Azure API Apps (previously announced as Microservices)

It goes without saying that the Logic Apps and API Apps will be the most important for integration people.  The Azure Microservices were first announced in public at the Integrate 2014 event and it's clear that integration is at the core of App Services, which should make us happy. 

Codit has been part of the private preview program 

Codit has been actively involved in private preview programs and we want to congratulate the various teams on the excellent job they have done.  They have really been listening to the feedback and made incredible improvements over the past months.  While everyone knows there is still a lot to do, it seems they are ready to take more feedback, as everything is public now.

My personal advice would be to look at it with an open mind, knowing that a lot of things will be totally different from what we've been doing over the past 10-15 years (with BizTalk).  I'm sure a lot of things will (have to) happen in order to run mission-critical, loosely coupled integration solutions on App Services.  But I am confident they will happen.

Is this different from what was said at Integrate 2014?

As Integrate 2014 was solely focused on the BizTalk Services, the other services (such as Web Sites and Media apps) were not mentioned.  But most of the things we saw and heard back then have now made it to the public preview. 

  • Azure Microservices are now called API apps and are really just web API's in your favorite language, enriched with Swagger metadata and version control.  These API apps can be published to a gallery (public gallery might come later on) or directly to a resource group in your subscription.
  • The Workflows (they used to be called Flow Apps) are now called Logic Apps.  These will allow us to combine various API apps from the gallery in order to instrument and orchestrate logical applications (or workflows).

Important concepts

I tried to list the most important concepts below.

All of the components are built on top of Azure Websites.  This way, they can benefit from the existing out of the box capabilities there:

  • Hybrid connectivity: Hybrid Connections or Azure Virtual Networking.  Both of these are available for any API app you want to write.  And the choice is up to the user of your API app!
  • Auto-scaling: do you want to scale your specific API app automatically?  That's perfectly possible now.  If you have a transformation service deployed and an end-of-month message load needs to be handled, all should be fine!
  • New pricing model (more pay per use, compared to BizTalk Services)
  • And many more: speed of deployment, the new portal, etc.

API Apps really form the core of this platform.  They are RESTful APIs, with Swagger metadata that is used to model and link the workflows (you can flow values from one API app to another in Logic Apps).

API Apps can be published to the 'nuget-based' gallery, or directly to a resource group in your subscription.  Once publishing to the public gallery becomes possible, other users will be able to leverage your API app in their own applications and Logic Apps by provisioning an instance of that package into their own subscription.  That means that all the cost and deployment hassle is on the user of your API app.
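To give an idea of the Swagger metadata involved: a minimal Swagger 2.0 document for a simple API app could look like the sketch below. The API title, path and operation name are invented for illustration; they are not part of any actual App Service gallery package.

```json
{
  "swagger": "2.0",
  "info": { "title": "TransformApi", "version": "1.0.0" },
  "paths": {
    "/transform": {
      "post": {
        "operationId": "TransformMessage",
        "consumes": ["application/json"],
        "produces": ["application/json"],
        "parameters": [
          {
            "name": "message",
            "in": "body",
            "required": true,
            "schema": { "type": "object" }
          }
        ],
        "responses": {
          "200": { "description": "The transformed message" }
        }
      }
    }
  }
}
```

It is exactly this kind of metadata (operation names, parameters, response shapes) that the Logic Apps designer uses to model and link the cards in a workflow.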

Where I hope for improvements

As I mentioned, this is a first version of a very new service.  A lot of people have been working on this and the service will still be shaped over the coming months.  It seems the teams are taking feedback seriously and that's definitely a positive thing.  This is the feedback I posted on uservoice.  If you agree, don't hesitate to go and vote for these ideas!

  • Please think about ALM.  Doing everything in the portal (including rules, mapping, etc) is nice, but for real enterprise scenarios, we need version and source control. I really would love to see a Visual Studio designer experience for more complex workflows as well. The portal is nice for self-service and easy workflows, but it takes some time and is limited in its nature, compared to pro-dev experience in Visual Studio.
    Vote here
  • Separate configuration from flow or business logic.
    If we look at the JSON file that makes up a Logic App, we can see references to the actual runtime deployment of the various API apps throughout the entire file. We also see values for the various connectors in the JSON structure. It would really help (when deploying one flow to various staging slots) to separate configuration and runtime values from the actual flow logic. 
    Vote here
  • Management
    Right now it is extremely difficult to see the various "triggers" and to stop/start them.  With BizTalk, we have receive locations that we can all see in one view and stop/start (the same goes for send ports).  Now all of that is encapsulated in the Logic App, and it would really be a good thing to provide more "management views".  As an example, we have customers with more than 1000 receive endpoints; I want to get them in an easy-to-handle, searchable overview.
    Vote here
  • The usability in the browser has improved a lot, but I still believe it would make sense to make the cards or shapes smaller (or collapsible).  This way, we get a better overview of the end-to-end flow, which will be needed in the typically complex workflows we build today (including exception handling, etc.) 
    Vote here
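To illustrate the "separate configuration from flow logic" point above: conceptually, a Logic App definition could reference named parameters instead of embedding runtime values next to the flow, along the lines of the hypothetical sketch below. The trigger/action names, connector types and parameter names are all invented for illustration, not taken from the actual preview schema.

```json
{
  "parameters": {
    "ftpHost": { "type": "string", "defaultValue": "ftp.acc.example.com" }
  },
  "triggers": {
    "pollFtp": {
      "type": "ApiApp",
      "inputs": {
        "host": "@parameters('ftpHost')",
        "folder": "/inbound"
      }
    }
  },
  "actions": {
    "transform": {
      "type": "ApiApp",
      "inputs": {
        "message": "@triggers('pollFtp').outputs.body"
      }
    }
  }
}
```

With the environment-specific values grouped in a parameters section, promoting the same flow from an acceptance slot to production would only require swapping the parameter values, not editing the flow definition itself.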

More posts will definitely follow in the coming weeks, so please stay tuned!

Categories: Azure BizTalk
written by: Sam Vanhoutte