
Codit Blog

Posted on Thursday, December 18, 2014 8:06 AM

by Sam Vanhoutte

This post describes how you can secure specific Azure Service Bus Relay endpoints with Shared Access Signatures and move away from the ACS shared secret credentials.

For a few months now, it has been clear that the Azure Service Bus team is moving away from ACS (while still supporting it!) as the main authentication mechanism.  This has been detailed in various blog posts and news items, and the intention of this blog post is not to repeat what has been said before.

Instead, we will focus on how it is possible to secure relay endpoints with Shared Access Signatures.  This is currently not well-documented.  This blog post describes how you can easily secure a relay service, using SAS.

Concept

Too often, we see the RootManageSharedAccessKey used in demo and sample code (mine sometimes included).  And earlier, a lot of those samples used the 'owner' SecretKey.  What's worse, I've seen these things in production.  The main reason is probably that creating specific access rights on messaging entities or on relay endpoints is too much work for people.  But ignoring security is never good, right?

This is especially important when deploying services in the field: in remote data centers, on computers and even on devices.  There it is crucial to ensure that the credentials only allow that specific service, device or client to perform the actions it is entitled to.  By doing so, it is also easy to revoke one of those clients if needed, without impacting the other existing connections.

Secure on path-level (Uri)

Just like we can enable authorization rules on messaging entities, such as queues or topics, we can also enable specific access rules for the hierarchical paths on relay endpoints.  When we deploy several of our cloud connectors for Integration Cloud (these are on-premises agents, exposing a relay endpoint), we always provision a unique path per connector endpoint, and we also make sure that the credentials that are deployed with the connector have the minimum rights to listen on that exact URI (path) and not on other locations.

Security validation with Relay Services

When using the WCF bindings for relay services, you specify service credentials and the Path where the service will be registered.  When a service wants to register on such an endpoint, the Azure Service Bus Relay service will verify if the following conditions are met:

  • Is the service registering with valid credentials? (authentication)
  • Is the service allowed to Listen on that specific Uri/Path with those credentials? (authorization)

The same method is used when a client calls a relay endpoint with specific credentials, except that now it is verified whether the client has the Send access rights.  The next table shows the various access rights.

Access rights

The following access rights are available to secure the hierarchy of a Service Bus namespace.  Their meaning differs between Relay Services and Messaging entities.

  Right  | Relay Services                               | Messaging entities
  -------|-----------------------------------------------|------------------------------------------------------------
  Listen | Right to open a service endpoint on the Uri   | Right to receive messages from the entities under that Uri
  Send   | Right to call a service endpoint on the Uri   | Right to send messages to the entities under that Uri
  Manage | Right to change settings of relay endpoints   | Right to create, change or delete entities under that Uri

Create SAS authorization rules 

In order to deploy a relay service in the wild, we want to use security credentials that have the minimum required Access Rights.  The following code sample shows how to do this.
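A minimal sketch of such a sample, using the NamespaceManager from the Microsoft.ServiceBus SDK (the relay path here is illustrative; the rule names match the ListenAccessKey and SendAccessKey discussed below):

    using System.Threading.Tasks;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    public static class RelayProvisioning
    {
        // Sketch: creates a relay endpoint on one specific path with two SAS rules.
        public static async Task CreateSecuredRelayAsync(string connectionString)
        {
            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            // The RelayDescription pins the endpoint to one path in the namespace hierarchy.
            var relayDescription = new RelayDescription("customers/connector1", RelayType.Http);

            // One rule for the service itself: it may only Listen on this path.
            relayDescription.Authorization.Add(new SharedAccessAuthorizationRule(
                "ListenAccessKey",
                SharedAccessAuthorizationRule.GenerateRandomKey(),
                new[] { AccessRights.Listen }));

            // One rule per client that needs to call the service: Send rights only.
            relayDescription.Authorization.Add(new SharedAccessAuthorizationRule(
                "SendAccessKey",
                SharedAccessAuthorizationRule.GenerateRandomKey(),
                new[] { AccessRights.Send }));

            await namespaceManager.CreateRelayAsync(relayDescription);
        }
    }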

What's important to notice here is that we explicitly define the RelayDescription for a specific Path and on that Path, we create one or more SharedAccessAuthorizationRules.  These can be used to give Listen, Send or Manage rights. 

In practice, there will be a SharedAccessAuthorizationRule for the service (ListenAccessKey) and one for every client that needs to call the service (in this case we created one: SendAccessKey).  This way we have full flexibility in revoking clients, and when the SAS key of the service gets compromised, the potential damage is restricted to that subpath of our namespace.

Using the keys in the web service or web clients

To use the keys in the web client or the web service, the following app/web.config settings can be specified.  There is one thing that is crucial here: the binding has to specify that it is not a dynamic binding!  If this is not done, you can get a System.ServiceModel.AddressAlreadyInUseException.  Therefore, use the isDynamic="false" setting.

Note: This took me a really long time to find out, and it was Dan Rosanova who pointed that one out to me.  Thanks for that!

The web service web.config
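A sketch of what this could look like, assuming a basicHttpRelayBinding (the service name, contract, namespace and path are placeholders; ListenAccessKey is the rule created above):

    <system.serviceModel>
      <bindings>
        <basicHttpRelayBinding>
          <!-- isDynamic="false" is crucial: it avoids the AddressAlreadyInUseException -->
          <binding name="sasRelayBinding" isDynamic="false" />
        </basicHttpRelayBinding>
      </bindings>
      <behaviors>
        <endpointBehaviors>
          <behavior name="sasListenBehavior">
            <transportClientEndpointBehavior>
              <tokenProvider>
                <sharedAccessSignature keyName="ListenAccessKey" key="[listen key]" />
              </tokenProvider>
            </transportClientEndpointBehavior>
          </behavior>
        </endpointBehaviors>
      </behaviors>
      <services>
        <service name="MyCompany.MyRelayService">
          <endpoint address="https://mynamespace.servicebus.windows.net/customers/connector1"
                    binding="basicHttpRelayBinding" bindingConfiguration="sasRelayBinding"
                    behaviorConfiguration="sasListenBehavior"
                    contract="MyCompany.IMyRelayService" />
        </service>
      </services>
    </system.serviceModel>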

The web client app.config
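And a matching sketch for the client side, now using the SendAccessKey rule (again, names and addresses are placeholders):

    <system.serviceModel>
      <bindings>
        <basicHttpRelayBinding>
          <binding name="sasRelayBinding" isDynamic="false" />
        </basicHttpRelayBinding>
      </bindings>
      <behaviors>
        <endpointBehaviors>
          <behavior name="sasSendBehavior">
            <transportClientEndpointBehavior>
              <tokenProvider>
                <sharedAccessSignature keyName="SendAccessKey" key="[send key]" />
              </tokenProvider>
            </transportClientEndpointBehavior>
          </behavior>
        </endpointBehaviors>
      </behaviors>
      <client>
        <endpoint name="relayClient"
                  address="https://mynamespace.servicebus.windows.net/customers/connector1"
                  binding="basicHttpRelayBinding" bindingConfiguration="sasRelayBinding"
                  behaviorConfiguration="sasSendBehavior"
                  contract="MyCompany.IMyRelayService" />
      </client>
    </system.serviceModel>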

These things are also available through the latest update of the ServiceBusExplorer tool, by Paolo Salvatori.

Posted on Saturday, December 6, 2014 12:19 AM

by Peter Borremans

Today Microsoft went into more detail on the Host Integration roadmap. Read all about it in this article.

Host Integration Server Roadmap

As integration specialists, Codit has delivered numerous integration projects involving host systems.

After the announcements during Integrate 2014, I was very interested to see how BizTalk, BizTalk Services and Host Integration will cope with these changes.

The Host Integration Team, represented by Paul Larsen, published a clear roadmap of how Host Integration Server will evolve.
The following Host Integration Server features will become available as Microservices:

  • Application integration with CICS, IMS and IBM i programs
  • DB2 and Informix databases
  • WebSphere MQ messaging (using the Microsoft client)

The Host Integration Team will also provide connectors to DB2 and Informix databases for use in Power BI for Office 365 (Power Query, Power Pivot).

Host Integration Server vNext will support Informix databases for both ADO.NET and BizTalk Server.

I was very pleased to see the clear and concrete roadmap and hope to see this from the other product teams as well.

Posted on Friday, December 5, 2014 1:43 PM

by Peter Borremans

A first glance and closer look at the Azure Microservices technology, the new integration platform within the Microsoft stack.

Azure Microservices – first glance

As my colleague Sam promised in his ‘initial thoughts’ blog post about Azure Microservices, we would come back to you with as much detail about Azure Microservices as we can right now.

The Azure Microservice technology will be leveraged as the core to build the new integration platform within the Microsoft stack. As this will be the technology underneath the integration platform, it deserves a closer look.

Let’s have a look at the Azure Microservices platform. It contains the following parts:

  • Hosting
  • Development
  • Gateway
  • Workflow engine
  • Gallery

Hosting

Azure Microservices will be hosted in Azure App Containers, which run as Azure Websites. The choice of Azure Websites as a hosting environment is not a coincidence: it is an enterprise-grade cloud platform that supports global scale and already runs millions of websites and Web APIs.

Each Microservice exists as an independently deployable unit of logic/functionality that is exposed via a RESTful API.

Development

The Azure Microservices technology is open to a wide range of developers. Microservices can be written in one of the following languages: .NET, Java, PHP, Python, Node.js. This makes it possible for developers to use the language they are most productive in.

Gateway

The gateway will handle calls between Microservices; Microservices will never call each other directly.
Having the gateway in between Microservice calls makes it possible to implement security, monitoring and governance in a centralized manner.

Workflow engine

The workflow engine will orchestrate the execution of the APIs or Microservices in a workflow. The workflow definition will be JSON based!
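The exact schema hasn't been shown yet, but purely as a hypothetical illustration, such a JSON workflow definition might look something like this (all names are invented):

    {
      "workflow": "OrderProcessing",
      "steps": [
        { "name": "ReceiveOrder",   "microservice": "http-receive",  "next": "TransformOrder" },
        { "name": "TransformOrder", "microservice": "xml-transform", "next": "SendToErp" },
        { "name": "SendToErp",      "microservice": "erp-connector" }
      ]
    }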

Each of the Microservices in a workflow will be monitored closely. The workflow engine will make it possible to monitor parameters like installed applications, number of calls to components, network traffic, detailed performance data, up-time and crashes.

Gallery

The gallery will contain Microservices developed by you, Microsoft or third-party organizations. Microservices you create can be kept private to your organization or made public to Azure users. The gallery will allow you to reuse existing functionality and be more productive when delivering complete workflows.

A Microservice author who publishes a Microservice to the gallery will also get feedback about its performance. Crash logs will be communicated to the Microservice author.

Integration platform on Microservices

To make integration possible on this technology, the integration concepts we use today will be implemented on the Microservices platform. Transformation, trading partner management, connectors, a rules engine, validation, batching (…) will all be made available as Microservices that can be plugged into your workflows.

Categories: Integration Cloud
written by: Peter Borremans

Posted on Wednesday, December 3, 2014 8:30 PM

by Sam Vanhoutte


Microsoft Azure BizTalk Microservices

Important update / note

Some things were left open for interpretation in the first days, and that might have led to some misunderstandings around Azure Microservices.  Therefore, I am adding some comments here to make things more clear:

  • Neither Azure Microservices nor BizTalk Microservices is a product or an official service name.  It is an architectural concept that is used in the 'future application platform'.
  • BizTalk Services remains supported and backed by the official SLA

end of update

Today, Bill Staples mentioned a new Azure concept at the BizTalk Integration Summit in Redmond.  Azure BizTalk Microservices is a new architecture that allows Azure customers to build composite applications, using granular Microservices.

The fact that this new service was first mentioned at a BizTalk event was no coincidence.  In fact, the BizTalk Services team will be building on top of BizTalk Microservices for their new wave of cloud integration capabilities.  It looks like the new workflow capabilities will be built on top of BizTalk Microservices, leveraging various microservices offered by the BizTalk team and integration partners.

Microsoft announced that this service will also be available through the Azure Pack, allowing customers to run this service in the cloud of their choice.  It was also mentioned that there will be a lot of support for migrating artifacts from BizTalk Server to this new platform.

More details will follow during the rest of this event.  But it looks like the BizTalk team will be building on a broader, existing platform service.  This will provide much more scalability, and it looks like it will be a more 'true' cloud model, allowing auto-scale and flexibility.

In this post, I give my vision on this change of direction, and I also take the chance to look at where opportunities can be found for integration partners (like Codit) and where I see challenges for building enterprise integration solutions on top of this new platform.

First thoughts

Honestly, I was surprised by this sudden change of direction when we first learned about it.  We have been working very closely with the BizTalk team over the past years, and we still are.  At last year's BizTalk Summit, we learned that the product team was working to provide a workflow engine, Business Rules and BAM in the BizTalk Services offering.  And now it seems that instead of building an engine, they will build on a 'shared engine' that will be used by customers and other Azure services.

But, after some discussions and thinking things through, I believe it is for the best.  I think there are a lot of advantages in having the BizTalk team build on a more widely adopted and supported engine, rather than building one from the ground up.  This way the BizTalk team can leverage the efforts of a larger team and can focus on their own added value: enterprise messaging patterns, transformations, adapters, hybrid connectivity, business rules and advanced workflow logic.

Looking at the various samples that were presented during the BPM talks, they seemed rather simple and focused on IFTTT-type scenarios.  I can't wait to get my hands on it and try to build a real-world workflow from one of our customers on top of this.

Microservices, the concepts

Martin Fowler (who can arguably be seen as the most prominent advocate of the somewhat controversial microservice architectural style) defines the style as follows (quote) on his page (http://martinfowler.com/articles/microservices.html):

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

If we look at the BizTalk Microservices as they have been presented today, we can indeed see that the various BizTalk Microservices will all run in their own scalable container (similar to Azure Websites) and that the communication engine seems to follow the lightweight HTTP approach.  The deployment capabilities of Azure Resource Manager are definitely interesting in the light of the modeling and composite application definition.

Challenges

Let's start with some challenges I'm seeing with the new architecture.  This is purely my point of view and I'm sure a lot of these problems have already been tackled or will be tackled in the coming releases.  As always, we at Codit will be investing time and resources in testing these services and providing feedback to the teams.

Messaging engine

I am curious to see what the latency will be for high-throughput integrations where a lot of 'messaging only' actions have to be done.  With BizTalk Server, people always optimized the performance of solutions by reducing the number of 'MessageBox calls' to the bare minimum.  It seems that the equivalent of this will be the number of hops between Microservices.  At first sight this seems rather expensive, and it also seems to break the concepts of message streaming and transactional behavior.  So this is definitely an attention point that I'm eager to learn more about.

Complex enterprise integration messaging patterns

Looking at the engine, I am very interested to learn how (and if) the new architecture will be able to handle the following complex messaging concepts.  Most of these were addressed at a high level, and it was claimed that they would be possible.

  • Large messages.  Neither HTTP nor Service Bus is suitable for processing larger messages.  I hope that the engine allows the processing of these messages without having to 'program for it'.
  • Batching and debatching.  A common integration scenario is to receive an interchange that has to be split into several messages.  With BizTalk Server, it is possible to define whether you want to enable recoverable interchange processing.  Curious to see if we'll be able to configure this in the new architecture.
  • In-order processing.  Without a queuing system (like the MessageBox or Service Bus), I don't see any options coming up for in-order (sequential) processing.  Even though I try to avoid in-order processing at all times, because it takes away all scalability for those instances, in some scenarios it remains a requirement.
  • Loose coupling through pub/sub.  I really like the fact that it's possible to model the end-to-end message flow in one designer; this will help a lot in simplifying integration.  However, I don't yet see how to achieve similar capabilities for a loosely coupled, pub/sub-based integration.

Granularity

During the presentations, it was not yet clear at what level of granularity the microservices would be mapped.  Looking at the samples we got to see, it makes me believe that every endpoint (receive/transmit adapter), transformation (map) and pipeline would result in a separate microservice.

While this would allow us to scale every service very easily, it will definitely lead to a lot of complexity in the deployment and management of our microservices.  This is definitely something that has to be tackled.

Tools & Management

The workflow designer seems to be fully hosted in the Azure portal and is 100% web-based.  While I believe this is definitely a very interesting concept for a lot of people, it might also reduce the ability to develop complex workflows, to integrate everything in a source control / ALM system and to get an enterprise level of staging and testing.  Support for debugging and troubleshooting will also be key when workflows execute in the cloud.

I also hope there is a capability to manage per "type of microservice": I want to be able to get an overview of all my receive endpoints, transmit endpoints, etc., across workflows.

Billing model

There were no details available on pricing yet, but if customers are billed per deployed microservice, this granularity discussion will have a large impact on the pricing.  We have several customers with hundreds of adapter endpoints, hundreds of pipelines and sometimes more than a thousand mappings.  Some of those mappings are only executed a few times per week, and I'm sure customers don't want to be billed for that.

What is awesome 

I always like to end on a positive note, because that's how I currently feel about all of this.  I feel optimistic and I see a lot of opportunities coming up.

Partner eco-system

The partner ecosystem will definitely be leveraged with this service.  The gallery has a prominent role in the entire setup and design of composite workflows.  While the process of publishing microservices to the gallery is not clear yet, it seems that partner services will become very important.  And that is the perfect way to make the platform grow.  At Codit we are definitely eager to provide a lot of our components through this gallery.  It might also be interesting to publish microservices that are private to certain subscriptions; enterprises will definitely have the need for that.

No more dedicated big cloud machines, but small, scalable container services

One of the main disadvantages of BizTalk Services was that (in order to support extensibility and custom code) each BizTalk Service instance resulted in a set of dedicated virtual machines (compute) behind the scenes.  And that setup resulted in a billing model that was far off the pay-as-you-go promise of the cloud.

Azure Microservices offers a high-density runtime that is built for scalability and that will probably allow a better consumption-based billing model.

Scalability

As described above, the new platform is more granular and allows scaling on a much finer-grained level.  This will allow customers to tune and scale one specific integration flow (autoscale & on demand!), while saving money by downsizing the low-volume processes.

And not only does the platform become more scalable; the BizTalk team becomes more scalable too, as they will build on and leverage an engine that is built by another Azure team.

Deployment model

A big pain point with the current BizTalk Server setup is the complexity of deployment.  BizTalk also has a very specific implementation around ALM, configuration management and deployment.  I see a lot of added value in the fact that these new logic apps are deployed in the exact same way as other Azure applications, using the Azure Resource Manager model.  But here too, I'm curious to see how the settings (in JSON templates?) will be maintained between different environments and stages of deployment.

Extensibility

Extensibility seems to come through custom microservices.  It will be possible to write your own microservice and publish it for use in Logic apps or workflows.  As all microservices run in their own fully isolated environment, bad code in one service won't be able to impact another service.  If only that had been the case with the OutOfMemoryExceptions we've all had with our custom adapters or pipelines.

Conclusions

I am happy to see the new wind in the BizTalk team and to see there's a bigger buy-in on the Azure platform for this service.  Azure Microservices might be the next important step in building composite applications in the cloud.  Using auto-scale, it might even evolve into Microsoft's answer to the AWS Lambda platform.  Let's all hope that the platform will allow the BizTalk team, partners and customers to build complex integration processes on top of it.

These are exciting times to be an integration person!

written by: Sam Vanhoutte

Posted on Monday, November 24, 2014 3:50 PM

by Massimo Crippa

A successful API is one that is easy to consume: designed to be easy to use and hard to misuse. To achieve that, documentation is crucial. Providing effective documentation helps drive API adoption and reduces the learning curve for the initial API intake.
Recently we have seen the emergence of new trends to describe and document APIs, such as Swagger, RAML and API Blueprint.

Swagger

Swagger is a specification which allows you to lay out, describe and then document your API. It's built around JSON to specify API metadata, structure and data models. This results in a language-agnostic and machine-readable interface which allows both humans and machines to smoothly understand the capabilities of your RESTful service. Since version 2.0, support for the YAML format has been introduced, which makes it even more human-readable.
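As an illustration, a minimal Swagger 2.0 definition in YAML could look like this (the API, path and response are invented for the example):

    swagger: "2.0"
    info:
      title: Characters API      # metadata about the API
      version: "1.0"
    basePath: /api
    paths:
      /characters:               # one resource with one operation
        get:
          summary: Returns the list of characters
          produces:
            - application/json
          responses:
            200:
              description: A list of character names
              schema:
                type: array
                items:
                  type: string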

Swagger has a growing developer community and cross-industry participation, with meaningful contributions that are leading to wide adoption. Besides the specification, Swagger has multiple implementations and a rich set of tools which enable you to generate interactive documentation (in sync with the application code) and client/server code, and potentially to build up a toolset that helps you run a successful API program.

One of those tools is the built-in swagger-ui, a dependency-free collection of HTML, JavaScript and CSS assets that takes the Swagger API specification as input to visualize and consume your RESTful API. It basically shows the list of APIs (description, version, base path); you can then drill down to the list of operations with their HTTP methods, then to the details of every operation (description, parameters and return types), and finally you can fill in the parameters, try out the API and analyze the response.

ASP.NET

I first tried to add the Swagger documentation to my very basic Web API. At the moment, Swashbuckle is the best option. Swashbuckle basically looks at the ApiExplorer and generates the Swagger JSON self-documentation. It has an embedded version of the swagger-ui, and it enables automatic Swagger spec generation by simply adding the NuGet package to your solution.
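To give an idea of what Swashbuckle picks up, here is a sketch of a trivial Web API controller (purely illustrative; the name echoes the Characters controller used later in this post). Its GET operation is discovered through the ApiExplorer and ends up in the generated spec:

    using System.Collections.Generic;
    using System.Web.Http;

    // Minimal ASP.NET Web API controller: with the default route,
    // Swashbuckle documents this as GET api/characters.
    public class CharactersController : ApiController
    {
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "Alpha", "Bravo", "Charlie" };
        }
    }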

The automatically generated specifications are available at {baseaddress}/swagger/api-docs, and the embedded UI at {baseaddress}/swagger.

The good news here is that Swagger will be integrated in the next version of ASP.NET / Visual Studio (not yet in the 2015 preview), as "confirmed" by Microsoft.
If you're interested in trying Swashbuckle, check out this blog post with a detailed step-by-step guide.

Azure API Management

Azure API Management allows you to create a new virtual API by importing a Swagger or WADL specification file, which is great. The procedure is straightforward and is described here: http://azure.microsoft.com/en-us/documentation/articles/api-management-howto-import-api/

From the Swagger UI of the Web API I created before, I got the JSON of the Characters controller by clicking on the "raw" link, and I imported it into API Management. This is the result, along with all the operations.

After adding the API to a new product and publishing it, the virtual interface is accessible at this address: https://coditapi.azure-api.net/bb/APIs/Characters?subscription-key={key}

Azure API Management already has a complete user interface to access the API documentation, and an effective built-in console which helps developers learn how to use the published APIs and speeds up testing. Unfortunately, at the moment the import procedure cannot be used to refresh (or create and swap) a previously created API. This means that the Swagger spec cannot be used to keep the virtual API in sync with the backend API while preserving the API Management settings like security and policies.

Sentinet

Sentinet doesn’t support Swagger out of the box, so at the moment a possible solution is to introduce a custom implementation leveraging the Sentinet extensibility.

I reached out to Andrew Slivker for a comment, and he confirmed that they’ve already planned to introduce Swagger support after the 4.0 release.  I’m thrilled to see the Swagger integration with Sentinet, so in the meanwhile I updated my Sentinet demo dashboard to embed the Swagger UI and to link to a Swagger specification attached to a virtual REST API I created in Sentinet (or to a public API).

Conclusion

Swagger is a governable, sharable and readable framework for describing, producing and consuming REST services. No matter which technology you use, no matter which language you prefer, it helps everyone, from the individual professional to the enterprise, with REST adoption. That's why everybody loves Swagger.

Cheers,

Massimo

Categories: .NET SOA Azure
written by: Massimo Crippa