Microsoft Azure BizTalk Microservices
Important update / note
Some things were left open for interpretation in the first days, which might have led to some misunderstandings around Azure Microservices. Therefore, I am adding some comments here to make things more clear:
- Neither Azure Microservices nor BizTalk Microservices is a product or an official service name. It is an architectural concept that is used in the ‘future app platform’.
- BizTalk Services remains supported and backed by the official SLA.
End of update
Today, Bill Staples mentioned a new Azure concept at the BizTalk Integration Summit in Redmond. Azure BizTalk Microservices is a new architecture that allows Azure customers to build composite applications using granular microservices.
The fact that this new service was first mentioned at a BizTalk event was no coincidence. In fact, the BizTalk Services team will be building on top of BizTalk Microservices for their new wave of cloud integration capabilities. It looks like the new workflow capabilities will be built on top of BizTalk Microservices, leveraging various microservices offered by the BizTalk team and integration partners.
Microsoft announced that this service will also be available through the Azure Pack, allowing customers to run it in the cloud of their choice. It was also mentioned that there will be a lot of support for migrating artifacts from BizTalk Server to this new platform.
More details will follow during the rest of this event. But it looks like the BizTalk team will be building on a broader, existing platform service. This will provide much more scalability, and it looks like it will be a more ‘true’ cloud model, allowing auto-scale and flexibility.
In this post, I give my vision on this change of direction, and I also take the chance to look at where opportunities can be found for integration partners (like Codit) and where I see challenges for building enterprise integration solutions on top of this new platform.
Honestly, I was surprised by this sudden change of direction when we first learned of it. We have been working very closely with the BizTalk team over the past years, and we still are. At last year’s BizTalk Summit, we learned that the product team was working to provide a workflow engine, Business Rules and BAM in the BizTalk Services offering. And now it seems that, instead of building an engine, they will build on a ‘shared engine’ that will be used by customers and other Azure services.
But, after some discussions and thinking things through, I believe it is for the best. I think there are a lot of advantages in having the BizTalk team build on a more widely adopted and supported engine, rather than building one from the ground up. This way the BizTalk team can leverage efforts from a larger team and can focus on their own added value: enterprise messaging patterns, transformations, adapters, hybrid connectivity, business rules and advanced workflow logic.
Looking at the various samples that were presented during the BPM talks, they seemed rather simple and focused on IFTTT-type scenarios. I can’t wait to get my hands on it and try to build a real-world workflow from one of our customers on top of this.
Microservices, the concepts
Martin Fowler (who can arguably be seen as the best-known advocate of the somewhat controversial microservice architectural style) defines the style as follows on his page (http://martinfowler.com/articles/microservices.html):
In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
If we look at the BizTalk Microservices as they have been presented today, we can indeed see that the various BizTalk Microservices will all run in their own scalable container (similar to Azure web sites) and that the communication engine seems to be following the lightweight HTTP approach. The deployment capabilities of Azure Resource Manager are definitely interesting in the light of the modeling and composite application definition.
Let’s start with some challenges I’m seeing with the new architecture. This is purely my point of view and I’m sure a lot of these problems have already been tackled or will be tackled in the coming releases. As always, we at Codit will be investing time and resources in testing these services and providing feedback to the teams.
I am curious to see what the latency will be for high-throughput integrations where a lot of ‘messaging only’ actions have to be done. With BizTalk Server, people have always optimized the performance of solutions by reducing the number of ‘MessageBox calls’ to the bare minimum. It seems that the equivalent of this will be the number of hops between microservices. At first sight this seems rather expensive, and it also seems to break concepts like message streaming and transactional behavior. So this is definitely a point of attention that I’m eager to learn more about.
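To make the ‘number of hops’ concern concrete, here is a back-of-the-envelope sketch. All numbers are purely hypothetical; nothing is known yet about the real per-hop overhead of the platform:

```python
# Back-of-the-envelope latency estimate for a chain of microservice hops.
# The per-hop overhead figure is hypothetical, for illustration only.

def chain_latency_ms(hops: int, per_hop_overhead_ms: float,
                     processing_ms_per_hop: float = 0.0) -> float:
    """Estimated end-to-end latency of a message crossing `hops` services."""
    return hops * (per_hop_overhead_ms + processing_ms_per_hop)

# A 'messaging only' flow: receive adapter -> pipeline -> map -> transmit
# adapter. With a MessageBox-style engine that is one persistence point;
# with one HTTP hop per microservice it is four network round trips.
print(chain_latency_ms(hops=4, per_hop_overhead_ms=25))  # granular: 4 hops
print(chain_latency_ms(hops=1, per_hop_overhead_ms=25))  # coarse: 1 hop
```

Even with a modest (assumed) 25 ms per hop, splitting every pipeline stage into its own service multiplies the overhead linearly with the number of hops.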
Complex enterprise integration messaging patterns
Looking at the engine, I am very interested to learn how (and if) the new architecture will be able to handle the following complex messaging concepts. Most of these were addressed at a high level, and it was claimed that they would be possible.
- Large messages. Neither HTTP nor Service Bus is suitable for processing larger messages. I hope that the engine allows processing these messages without having to ‘program for it’.
- Batching and debatching. A common integration scenario is to receive an interchange that has to be split into several messages. With BizTalk Server, it is possible to define whether you want to enable recoverable interchange processing. Curious to see if we’ll be able to configure this in the new architecture.
- In-order processing. Without a queuing system (like the MessageBox or Service Bus), I don’t see any options coming up for in-order (in-sequence) processing. Even though I try to avoid in-order processing at all times, because it takes away all scalability for those instances, in some scenarios it remains a requirement.
- Loose coupling through pub/sub. I really like the fact that it’s possible to model the end-to-end message flow in one designer, and this will help a lot in simplifying integration. However, I don’t yet see how to achieve similar capabilities for a loosely coupled, pub/sub-based integration.
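To illustrate two of the patterns in the list above, here is a minimal, purely conceptual in-memory sketch of debatching combined with pub/sub routing. This is not any actual BizTalk Microservices API; all names are made up for illustration:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Topic:
    """Minimal in-memory pub/sub broker illustrating loose coupling:
    publishers never reference subscribers directly."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, message_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[message_type].append(handler)

    def publish(self, message: dict) -> None:
        # Every subscriber for this message type receives its own copy.
        for handler in self._subscribers[message["type"]]:
            handler(message)

def debatch(interchange: dict) -> list:
    """Split one interchange into individual messages (debatching)."""
    return [{"type": interchange["type"], "body": item}
            for item in interchange["items"]]

broker = Topic()
received: list = []
broker.subscribe("order", received.append)

# One incoming interchange is debatched, and each part is published
# independently to whoever subscribed to the "order" message type.
for msg in debatch({"type": "order", "items": ["A", "B", "C"]}):
    broker.publish(msg)

print(len(received))  # 3
```

The point of the sketch is the decoupling: the receiving side only knows a message type, not the subscribers, which is exactly the capability I do not yet see in the single-designer, point-to-point flow model.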
During the presentations, it was not yet clear at what level of granularity the microservices would be mapped. Looking at the samples we got to see, it seems that every endpoint (receive/transmit adapter), transformation (map) and pipeline would result in a separate microservice.
While this would allow us to scale every service very easily, it will definitely lead to a lot of complexity in the deployment and management of our microservices. This is definitely something that has to be tackled.
Tools & Management
The workflow designer seems to be fully hosted in the Azure portal and is 100% web-based. While I believe this is definitely a very interesting concept for a lot of people, it might also reduce the ability to develop complex workflows, to integrate everything in a source control / ALM system and to get an enterprise level of staging/testing. Support for debugging and troubleshooting will also be key when workflows execute in the cloud.
I also hope there is a capability to manage per “type of microservice”. I want to be able to get an overview of all my receive endpoints, transmit endpoints, etc., across workflows.
There were no details available on pricing yet, but if customers are billed per deployed microservice, this granularity discussion will have a large impact on pricing. We have several customers with hundreds of adapter endpoints, hundreds of pipelines and sometimes more than a thousand mappings. Some of these mappings are only executed a few times per week, and I’m sure customers don’t want to be billed for that.
What is awesome
I always like to end on a positive note, because that’s how I currently feel about all of this. I am optimistic and I see a lot of opportunities coming up.
The partner ecosystem will definitely be leveraged by this service. The gallery has a prominent role in the entire setup and design of composite workflows. While the process of publishing microservices to the gallery is not clear yet, it seems that partner services will become very important, and that is the perfect way to make the platform grow. At Codit, we are definitely eager to provide a lot of our components through this gallery. It might also be interesting to publish microservices that are private to certain subscriptions; enterprises will definitely have the need for that.
No more dedicated big cloud machines, but small, scalable container services
One of the main disadvantages of BizTalk Services was that (in order to support extensibility and custom code) each BizTalk Service instance resulted in a set of dedicated virtual machines (compute) behind the scenes. And that setup resulted in a billing model that was far from the pay-as-you-go promise of the cloud.
Azure Microservices offers a high-density runtime that is built for scalability and that will probably allow a better consumption-based billing model.
As described above, the new platform is more granular and allows scaling on a much finer-grained level. This will allow customers to tune and scale one specific integration flow (auto-scale & on demand!), while saving money by downsizing the low-volume processes.
And not only does the platform become more scalable; the BizTalk team does too, as they will build on and leverage an engine that is built by another Azure team.
A big pain point with the current BizTalk Server setup is its deployment complexity. BizTalk also has a very specific implementation around ALM, configuration management and deployment. I see a lot of added value in the fact that these new logic apps are deployed in exactly the same way as other Azure applications, using the Azure Resource Manager model. But here, I’m also curious to see how the settings (in JSON templates?) will be maintained between different environments and stages of deployment.
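To sketch how environment-specific settings might be maintained, here is a minimal Azure Resource Manager template fragment using a parameter that a separate per-environment parameter file could override. The parameter name and values are hypothetical; nothing has been confirmed about how logic apps will use this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "type": "string",
      "allowedValues": [ "dev", "test", "prod" ],
      "defaultValue": "dev"
    }
  },
  "resources": [ ]
}
```

One template could then be promoted unchanged through dev, test and production, with only the parameter file differing per stage, which would answer my staging concern above.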
Extensibility seems to come through custom microservices. It will be possible to write your own microservice and publish it for use in logic apps or workflows. As all microservices run in their own fully isolated environment, bad code in one service won’t be able to impact another service. If only that had been the case with the OutOfMemoryExceptions we’ve all had with our custom adapters or pipelines.
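As a purely conceptual sketch of what such a custom microservice could look like (nothing here is the actual BizTalk Microservices SDK; the `transform` logic and the endpoint are hypothetical), here is a minimal HTTP service:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def transform(message: dict) -> dict:
    """The service's logic, kept as a pure function so it can be tested
    in isolation. Hypothetical example: uppercase the message body."""
    return {"body": message["body"].upper()}

class TransformHandler(BaseHTTPRequestHandler):
    """Exposes `transform` as an HTTP resource, the lightweight
    communication style the microservice model assumes."""
    def do_POST(self) -> None:
        length = int(self.headers["Content-Length"])
        message = json.loads(self.rfile.read(length))
        body = json.dumps(transform(message)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the service in its own isolated process (blocks forever):
#     HTTPServer(("", 8080), TransformHandler).serve_forever()
```

Because the process boundary is the isolation boundary, a crash or memory leak in this service would only take down its own container, which is exactly the improvement over in-process custom adapters and pipelines.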
I am happy to see the fresh energy in the BizTalk team and the bigger buy-in on the Azure platform for this service. Azure Microservices might be the next important step in building composite applications in the cloud. Using auto-scale, it might even evolve into Microsoft’s answer to the AWS Lambda platform. Let’s all hope that the platform will allow the BizTalk team, partners and customers to build complex integration processes on top of it.
These are exciting times to be an integration person!