
Codit Blog

Posted on Monday, November 24, 2014 3:50 PM

Massimo Crippa by Massimo Crippa

A successful API is one that is easy to consume, designed to be understood and used correctly, and that prevents misuse. To achieve that, documentation is crucial. Effective documentation helps drive API adoption and reduces the learning curve of the initial API intake.
Recently we have seen the emergence of new ways to describe and document APIs, such as Swagger, RAML and API Blueprint.

Swagger

Swagger is a specification which allows you to lay out, describe and then document your API. It's built around JSON to specify API metadata, structure and data models. This results in a language-agnostic and machine-readable interface which allows both humans and machines to smoothly understand the capabilities of your RESTful service. Since version 2.0, YAML support has been introduced, which makes the specification even more human readable.
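To give an idea of what such a specification looks like, here is a minimal, purely illustrative Swagger 2.0 document in JSON (the API name and path are hypothetical):

{
  "swagger": "2.0",
  "info": { "title": "Characters API", "version": "v1" },
  "basePath": "/api",
  "paths": {
    "/Characters": {
      "get": {
        "summary": "Returns all characters",
        "produces": [ "application/json" ],
        "responses": {
          "200": { "description": "OK" }
        }
      }
    }
  }
}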

Swagger has a growing developer community and cross-industry participation with meaningful contributions, which is leading to wide adoption. Besides the specification, Swagger has multiple implementations and a rich set of tools which enable you to generate interactive documentation (in sync with the application code) and client/server code, and potentially to build up a toolset that helps you run a successful API program.

One of those tools is the built-in swagger-ui, a dependency-free collection of HTML, JavaScript and CSS assets that takes the Swagger API specification as input to visualize and consume your RESTful API. It basically shows the list of APIs (description, version, base path); you can then drill down to the list of operations with their HTTP methods, and further into the details of every operation (description, parameters and return types). Finally, you can specify the parameters, try out the API and analyze the response.

ASP.NET

I first tried to add the Swagger documentation to my very basic Web API. At the moment Swashbuckle is the best option. Swashbuckle basically looks at the ApiExplorer and generates the Swagger JSON self-documentation. It has an embedded version of the swagger-ui and it enables automatic Swagger spec generation by simply adding the NuGet package to your solution.
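As a minimal sketch, assuming a plain ASP.NET Web API project with the Swashbuckle NuGet package installed (Install-Package Swashbuckle), a simple controller like the hypothetical one below is all ApiExplorer needs to feed the generated specification:

using System.Collections.Generic;
using System.Web.Http;

// Hypothetical Web API controller, used only to illustrate what Swashbuckle
// picks up through ApiExplorer: routes, HTTP methods, parameters and return
// types all end up in the generated Swagger JSON.
public class CharactersController : ApiController
{
    // GET api/Characters
    public IEnumerable<string> Get()
    {
        return new[] { "Luke", "Leia", "Han" };
    }

    // GET api/Characters/5
    public string Get(int id)
    {
        return "Luke";
    }
}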

The automatically generated specifications are available at {baseaddress}/swagger/api-docs and the embedded UI at {baseaddress}/swagger.

The good news here is that Swagger will be integrated in the next version of ASP.NET / Visual Studio (not yet in the 2015 preview), as “confirmed” by Microsoft.
If you're interested in trying Swashbuckle, check out this blog post with a detailed step-by-step guide.

Azure API Management

Azure API Management allows you to create a new virtual API by importing a Swagger or WADL specification file, which is great. The procedure is straightforward and is described here: http://azure.microsoft.com/en-us/documentation/articles/api-management-howto-import-api/

From the Swagger UI of the Web API I created before, I got the JSON of the Characters controller by clicking on the “raw” link and imported it into API Management. This is the result, along with all the operations.

After adding the API to a new product and publishing it, the virtual interface is accessible at https://coditapi.azure-api.net/bb/APIs/Characters?subscription-key={key}
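As a quick illustration, consuming the published virtual API could look like the sketch below; the subscription key is a placeholder issued per product subscription, not a real credential:

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch of calling the virtual API exposed by API Management.
class VirtualApiClientSample
{
    static async Task Main()
    {
        const string subscriptionKey = "{key}"; // replace with your own key
        var url = "https://coditapi.azure-api.net/bb/APIs/Characters"
                  + "?subscription-key=" + subscriptionKey;

        using (var client = new HttpClient())
        {
            HttpResponseMessage response = await client.GetAsync(url);
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}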

Azure API Management already has a complete user interface to access the API documentation and an effective built-in console which helps developers learn how to use the published APIs and speeds up testing. Unfortunately, at the moment the import procedure cannot be used to refresh (or create and swap) a previously created API. This means that the Swagger spec cannot be used to keep the virtual API in sync with the backend API while preserving API Management settings like security and policies.

Sentinet

Sentinet doesn’t support Swagger out of the box, so at the moment a possible solution is to introduce a custom implementation leveraging Sentinet's extensibility.

I reached out to Andrew Slivker for a comment, and he confirmed that they’ve already planned to introduce Swagger support after the 4.0 release. I’m thrilled to see the Swagger integration with Sentinet, so in the meantime I updated my Sentinet demo dashboard to embed the Swagger UI and to link it to a Swagger specification attached to a virtual REST API I created in Sentinet (or to a public API).

 

Conclusion

Swagger is a governable, sharable and readable framework for describing, producing and consuming REST services. No matter which technology you use and no matter which language you prefer, it helps everyone, from the individual professional to the enterprise, with REST adoption. That's why everybody loves Swagger.

 

Cheers,

Massimo

Categories: .NET SOA Azure
written by: Massimo Crippa

Posted on Tuesday, November 18, 2014 4:04 PM

Glenn Colpaert by Glenn Colpaert

This blog post describes some weird behavior we encountered while trying to promote an object to the context. We will take a look at the behavior during both receiving and sending of a message.

One of the requirements during our project was to promote some values to the context, and during development of the pipeline component we discovered the weird behavior described below.

The Setup

To promote properties to the context you of course need a property schema. We created the following property schema with one property called Customer.

Next to that, we have a basic ‘Customer’ class defined that will contain our customer object.

Last but not least we have a pipeline component that creates a Customer object and writes that object to the context.

Please note that we’ve intentionally passed the entire customer object as a parameter to the Write method, just like IntelliSense suggests.
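The snippet below is a minimal sketch of the relevant part of such a component; the Customer class, the property schema namespace and the component name are illustrative, and the full IComponent/IComponentUI plumbing is omitted:

using System;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Illustrative customer class; in the real scenario this is the 'Customer'
// class mentioned above.
[Serializable]
public class Customer
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public partial class CustomerPromotor
{
    private const string PropertyName = "Customer";
    private const string PropertyNamespace = "https://Codit.Samples.CustomerPropertySchema";

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        var customer = new Customer { Id = "001", Name = "Codit" };

        // Writing the object itself (not a base type) to the context is what
        // triggers the behavior described below.
        pInMsg.Context.Write(PropertyName, PropertyNamespace, customer);

        return pInMsg;
    }
}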

Next to that, we have the following Receive Location and Send Port created in our BizTalk environment.

Receive Pipeline

When we drop a file in our receive location with the above setup, with our CustomerPromotor pipeline component added to the receive pipeline, we notice the following behavior: our pipeline component executes and returns the message successfully.

But the message does not enter the BizTalk MessageBox and the file remains in the drop folder as shown below.

While the message remains in the inbound folder, the receive location keeps picking it up and keeps trying to process it, unsuccessfully. Important to stress is that this behavior does not trigger any warning or error in the event log, so we were left in the dark in this scenario.

Send Pipeline

So let’s see what kind of behavior will be triggered when we use our pipeline component in a send pipeline scenario.

Again our pipeline component executes and returns the message successfully.

But this time we do get a notification that something is wrong, and our message gets suspended with the following error:

In the event log we see the following error appearing:
A message sent to adapter "FILE" on send port "spCustomerRequestResponse" with URI
"C:\Users\gcolpaert\Desktop\Drop\Out\%MessageID%.xml" is suspended.
Error details: 80004005
MessageId: {6E32D903-E862-4037-9737-FF96D25CDB77}
InstanceID: {EFC3BFEA-8C44-47C5-BE24-4707A76EA1E6}

Conclusion

We discovered this behavior by accident while trying to promote an XML-serialized object to the context in an outbound scenario, but we had forgotten to implement the XML serialization.

The error we received from BizTalk was not clear, and we had to figure it out by debugging our pipeline component. We then took this scenario to our receive location and discovered the problem with the pick-up of the file.

We solved the problem by serializing our customer object to an XML string and adding that string to the context property, as we had planned in the first place. We know not many people will encounter this problem, as we mostly promote strings or other base types to the context. The main goal of this blog post was to intentionally trigger this behavior and get the information out there, because we could not find any information on this behavior and error.
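For reference, the fix boils down to something like the sketch below, reusing the illustrative Customer class and namespace from the earlier snippet: serialize the customer to an XML string first, then write that string (a base type) to the context instead of the object itself.

using System.IO;
using System.Xml.Serialization;
using Microsoft.BizTalk.Message.Interop;

public static class CustomerContextWriter
{
    private const string PropertyName = "Customer";
    private const string PropertyNamespace = "https://Codit.Samples.CustomerPropertySchema";

    public static void WriteCustomerAsXml(IBaseMessage message, Customer customer)
    {
        var serializer = new XmlSerializer(typeof(Customer));
        string customerXml;

        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, customer);
            customerXml = writer.ToString();
        }

        // The context now holds a plain string, which BizTalk can handle
        // without any of the issues described above.
        message.Context.Write(PropertyName, PropertyNamespace, customerXml);
    }
}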

Hope you enjoyed it!

Cheers,

Glenn Colpaert

Posted on Monday, November 3, 2014 3:31 PM

Massimo Crippa by Massimo Crippa

Sentinet by Nevatech is a lightweight and scalable middleware infrastructure that manages heterogeneous SOA and API services and applications deployed on-premises, in the cloud, or in hybrid environments.

This week I'm in Barcelona attending Microsoft TechEd, so I stopped by the Nevatech booth (booth 63) and had a chat with Andrew Slivker, the CTO of Nevatech.
I want to share some interesting questions that popped up during our conversation about Nevatech’s Sentinet software product.

Q: What makes Sentinet stand out from other SOA Management offerings?
A: There are actually quite a few differentiators. First of all, Sentinet is the only product in the SOA Governance and API Management space that is built on the full Microsoft stack. It is mostly beneficial to organizations that either standardize entirely on the Microsoft stack when it comes to building and operating their services and APIs, or use a mix of Microsoft and non-Microsoft service applications and want to integrate them to the fullest extent possible.

Sentinet is also a unified on-premises, cloud and hybrid solution. It is the same product that customers can operate on-premises within their private data centers, in the cloud, or distributed between the cloud and on-premises in a hybrid scenario. At the same time, Sentinet can manage customer services and APIs that can themselves be on-premises, in the cloud, or in hybrid environments.

Another differentiator is that Sentinet is highly extensible through standard and simple Microsoft .NET interfaces. Customers can easily add their own specific service behaviors with minimal development effort and integrate these behaviors into the product runtime and user interface.

Sentinet’s own user interface is another differentiator: it provides an easy-to-use, highly interactive and yet functionally powerful way to configure and operate complex SOA service and REST API solutions.

 

Q: In the virtualization context, could you shed some light on the Sentinet tracking mechanism and how big its impact is on the virtual service turnaround time?
A: Sentinet is designed to be lightweight and reliable. It processes messages with a minimal average overhead of 5 to 10 ms per message, and all monitoring and business message tracking information is delivered to the Sentinet Repository asynchronously, which means that business transactions are not slowed down by the Sentinet infrastructure.

 


Q: How does API lifecycle management work, and how is a modification applied at runtime?
A: Services are transitioned through multiple stages and modifications during their life cycle. Sentinet users apply these changes using the Sentinet Administrative Console, a rich browser-based application. All changes are tracked for auditing purposes. Changes that affect the actual runtime environments are delivered remotely to the Sentinet runtime components. For example, a Sentinet user can instruct a remote Sentinet Node runtime to open a new service endpoint and configure it with specific security policies, access control and monitoring attributes.

 


Q: How do you identify new product features? From customer feedback, from your R&D team, from a comparison with competitors, or something else?
A: All of the above, but customer feedback is the most important source of information for us. We also rely on the past experience of our own engineers and architects, who have built large-scale enterprise solutions that have processed billions of commercial transactions to date.

 


Q: The Sentinet 4.0 release is coming. Could you tell us something about the new features you're going to release?
A: There are multiple areas where we are extending Sentinet's capabilities. Some of them are full change auditing and change notifications, registering and managing services starting from data schemas (we call these schema-first services), graphical dependency tracking and impact analysis for services and applications, and a highly extensible, graphically designed message Processing Pipeline that will ship with a number of built-in message processing and transformation components. As always, we are making the message Processing Pipeline highly extensible through incredibly easy-to-use .NET interfaces.


Thanks, Andrew, for answering. We wish you and the Nevatech team all the best with the future releases.

For more information on Sentinet and the Codit offering around Sentinet, please visit our Sentinet pages.

 

Massimo

Categories: Sentinet
written by: Massimo Crippa

Posted on Friday, October 24, 2014 6:00 PM

Sam Vanhoutte by Sam Vanhoutte

On October 21, I presented on Azure Hybrid Connections in the Channel 9 studios in Redmond for the AzureConf conference, the biggest online Azure conference in the world. This blog post is the story of my day.

Tuesday was a great day for me. I had the chance to speak at AzureConf for the first time, and it was really great. AzureConf is the biggest Microsoft Azure online conference, and this was its 3rd edition. I’m really honored to be among this great line-up of speakers. The conference was streamed from the Channel 9 studios on the Microsoft campus in Redmond and had several thousands of viewers (stats might be shared later on). Scott Klein organized the conference this year and I really want to thank him and his team for the good organization and the chance we all got.

This blog post is the story of my day.

The preparation

Since I knew I had to present on Hybrid Connections, I immediately started planning for this talk. I had never given this talk before (I had presented Hybrid Connections as part of my session at www.itproceed.be), so a lot of preparation was needed. I used the existing TechEd presentation as input and guideline, but added more specific content and details to it in order to position Hybrid Connections and compare it with Service Bus Relay and Virtual Networking.

I also had some specific questions and things I wanted to get clarified, and for that I could count on the help and guidance of Santosh Chandwani (PM on the Hybrid Connections team). As always, I spent most of the time on my demo, for which I used our Codit Integration Dashboard, moved to the cloud while the data and back-end services remained on-premises. I also built a new mobile service and a universal app (my first time). And to end, I exposed a managed API through Azure API Management.

Preconf-day

The day before the conference, all speakers were invited for technical pre-checks in the Channel 9 studios. It was great to see the famous studio and all the equipment used there. You immediately felt that the atmosphere was really nice over there.

We got to know the nice crew of people there and had to test our laptops for screen projection, network connectivity and sound. That turned out to be very important, as both Mike Martin and I had some screen resolution issues. Scott also handed out our shirts, and we all went our own way to prepare for our talks the day after.

AzureConf day

October 21 started. After a final dry run of the demo, we drove to the studios at 6:45 AM. Tension was building, as we saw the twittersphere getting more active about the event. People from all over the world were tuning in for the keynote by Scott Guthrie.

We settled into the speaker room and all watched Scott Guthrie detailing a lot of nice announcements, which can be found on the Azure blog.

The live sessions

We watched the sessions from the other speakers, either from the speaker room or from the 'Channel 9 war room'. I believe the content of the sessions was very good and showed a good variety of the services that are available on the Microsoft Azure platform. The live sessions are available on Channel 9 as well, so if you have missed a session, go ahead and watch it online.

  • Michael Collier: Michael talked about the Resource Manager in Azure. Very interesting capabilities of a service that will definitely evolve over time.
  • Mike Martin: Mike had a nice session on one of the topics that is crucial for every company: backups. He showed how the Azure platform offers features & services for this.
  • Sam Vanhoutte: I guess that's me. Hybrid Connections, web sites, mobile services, Service Bus Relay & API Management. All in one session.
  • Seth Juarez: This was a great session on one of the newest services in Azure: Machine Learning. By combining humor and complex maths, he made the complex subject of machine learning much more 'digestible'.
  • Rick G Garibay: Rick gave a session that was very similar to the sessions I gave on IoT at the UKCSUG, Cloudburst and WAZUG: positioning Microsoft's investments around IoT and discussing Reykjavik and the concepts of cloud-assisted communications. Great to see that the complex demo worked. I can guarantee it's not easy.
  • Vishwas Lele: Vishwas showed tons of tools and concepts (of which I believe Docker and the Traffic Manager for SharePoint were really nice).
  • Chris Auld: Chris talked about DocumentDB, the new document database in Azure. A really good explanation and demonstration of the backend for MSN and OneNote.

Everything was recorded in the Channel 9 studios and here's a nice group picture of all live speakers with the Channel 9 crew.

The recorded sessions

And to add to the great live content, there are also a lot of recorded sessions available on Channel 9. I would encourage you all to have a look and download those sessions to watch whenever you have the time, as there's really great content out there.

It was a real honour and pleasure to be part of this group of great speakers. And with this, I would like to thank Scott Klein for having me over, the great crew of Channel 9, and all the speakers for the great time.

Sam

 

written by: Sam Vanhoutte

Posted on Wednesday, October 22, 2014 3:35 PM

Maxim Braekman by Maxim Braekman

Ever needed the actions performed by a pipeline component to differ depending on whether the main or the backup transport is being used? This post explains how you can implement a simple verification before actually performing any of the required changes.

On some occasions an interface requires the send ports to have a backup transport location configured, just in case the primary location is unreachable. These send ports could also be using a pipeline component which has to perform a couple of actions before allowing the file to be transmitted to its destination. Of course the configuration of these components can be modified per send port, but what if these actions also need to differ when the main location is unreachable and the backup transport is being used?

For the purpose of this post, let’s say we will be moving a file from one location to another and we need to configure a backup location, just in case. An additional requirement is that the filename needs to contain some values present in the message body, but needs to differ depending on whether the main or the backup location is used.
Before we start developing the custom pipeline component - needed because of the filename requirement - we need to find out how we can check which location is being used.

This can be done by setting up a receive port and a send port, of course with a backup transport location configured on the latter, and making sure the primary send location is unreachable so the backup transport kicks in. Looking at the tracking, this gives us something like the result shown below.

As you can see, the current configuration allows the send port to retry 3 times before the backup transport is used. Once an attempt is made to send to the backup location, the message is processed successfully.
Now let’s have a closer look at how we can identify which type of transport location is being used. Opening up the context properties of the first ‘transmission failure’ record shows the existence of a context property named ‘BackupEndpointInfo’.

As you can see, this property contains a GUID referencing the backup transport which can be used if the main location is unreachable. Now, what if we have a look at the context properties of the message when it is actually being sent over the backup transport?

The ‘BackupEndpointInfo’ property is still present, although it no longer contains any value, since a backup location cannot have another backup.
In order to have a complete overview of the existence/usage of this property, let’s create a send port which does not have a backup transport location configured and refers to a correct, reachable location. Send another message through this flow and look at the context properties of the message being processed by this new send port.

Note that these properties are sorted alphabetically and no ‘BackupEndpointInfo’ property is present.
So, this means the property is only available when a backup transport is configured and only contains a value while the send port is sending to the main transport location.

Implementation within custom pipeline component

Since we have figured out which property can be used to check the transport location being used, this info can be used to implement our custom pipeline component.
The code can be found below, but to summarize: you attempt to retrieve the BackupEndpointInfo property from the context. If the returned object is NULL, no backup transport has been configured. If the property is present, you can determine, based on its value, whether the backup transport is actually being used or not.
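The original code listing is not reproduced here, so the snippet below is a minimal sketch that reconstructs the check described above (the class and method names are illustrative):

using Microsoft.BizTalk.Message.Interop;

public static class BackupTransportHelper
{
    private const string SystemPropertiesNamespace =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    public static bool IsBackupTransportInUse(IBaseMessage message)
    {
        object backupEndpointInfo =
            message.Context.Read("BackupEndpointInfo", SystemPropertiesNamespace);

        if (backupEndpointInfo == null)
        {
            // No backup transport configured on this send port.
            return false;
        }

        // The property contains a GUID while sending to the main transport
        // and is empty when the backup transport is being used.
        return string.IsNullOrEmpty(backupEndpointInfo.ToString());
    }
}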

 

Conclusion

Whenever you need to find out whether a backup transport is configured and is being used, check the context for the existence of the context property ‘BackupEndpointInfo’ in the namespace ‘http://schemas.microsoft.com/BizTalk/2003/system-properties’.

 

Categories: BizTalk Pipelines
written by: Maxim Braekman