Codit Blog

Posted on Thursday, April 28, 2016 3:17 PM

Toon Vanhoutte by Toon Vanhoutte

Jonathan Maes by Jonathan Maes

A real-life example of how Redis caching significantly improved the performance of a large-scale BizTalk messaging platform.

With some colleagues at Codit, we're working on a huge messaging platform between organizations, built on top of Microsoft BizTalk Server. One of the key features we must deliver is reliable messaging, which is why we apply AS4 as a standardized messaging protocol. Read more about it here. We use the AS4 pull message exchange pattern to send the messages to the receiving organization. Within this pattern, the receiving party sends a request to the AS4 web service and the messaging platform returns the first available message from that organization's inbox.

Initial Setup

Store Messages

In order to support this pattern, the messages must be stored in a durable way. After some analysis and prototyping, we decided to use SQL Server for this message storage. With the FILESTREAM feature enabled, we are able to store the potentially large message payloads on disk within one SQL transaction.

(1) The messages are stored in the SQL Server inbox table, using a BizTalk send port configured with the WCF-SQL adapter. The message metadata is saved in the table itself, the message payload gets stored on disk within the same transaction via FILESTREAM.

Retrieve Messages

As the BizTalk web service that is responsible for returning the messages will be used in high-throughput scenarios, a design was created with only one pub/sub operation on the BizTalk MessageBox. This choice was made to reduce the web service latency and the load on the BizTalk database.

These are the two main steps:

(2) The request for a message is received and validated on the WCF receive port. The required properties are set to get the request published on the MessageBox and immediately returned to the send pipeline of the receive port. Read here how to achieve this.

(3) A database lookup with the extracted organization ID returns the message properties of the first available message. The message payload is streamed from disk into the send pipeline. This avoids publishing a potentially large message on the MessageBox. The message is returned to the receiving party this way. In case there's no message available in the inbox table, a warning is returned.

Potential Bottleneck

The pull pattern puts a lot of additional load on BizTalk, because many organizations (100+) will be pulling for new messages at regular intervals (e.g. every 2 seconds). Each pull request gets published on the BizTalk MessageBox, which causes extra overhead. As these pull requests often result in a warning indicating there's no message in the inbox, we need to find a way to avoid overwhelming BizTalk with such requests.

Need for Caching

After some analysis, it became clear that caching is the way to go. Within the cache, we can keep track of whether a certain organization has new messages in its inbox. In case there are no messages in the inbox, we need a way to bypass BizTalk and immediately return a warning. In case there are messages available in the organization's inbox, we just continue the normal processing described above. In order to select the right caching software, we listed the main requirements:

  • Distributed: there must be the ability to share the cache across multiple servers
  • Fast: the cache must provide fast response times to improve message throughput
  • Easy to use: preferably simple installation and configuration procedures
  • .NET compatible: we must be able to extend BizTalk to update and query the cache

It became clear that Redis meets our requirements perfectly:

  • Distributed: it’s an out-of-process cache with support for master-slave replication
  • Fast: it’s an in-memory cache, which ensures fast response times
  • Easy to use: simple “next-next-next” installation and easy configuration
  • .NET compatible: there's a great .NET library that is used on Stack Overflow

Implement Caching

To ease the implementation and to be able to reuse connections to the cache, we created our own RedisCacheClient. This client has two connection strings: one to the master (write operations) and one to the slave (read operations). You can find the full implementation on the Codit GitHub. The Redis cache is used in a key/value way: the key contains the OrganizationId, the value contains a Boolean that indicates whether there are messages in the inbox. The cache is implemented on three levels:

(A) In case a warning is returned that indicates there's no message in the inbox, the cache gets updated to reflect the fact that there is no message available for that particular OrganizationId. The key/value pair also gets a time-to-live assigned.

(B) In case a message is placed on the queue for a specific organization, the cache gets updated to reflect the fact that there are messages available for that particular OrganizationId. This ensures that the key/value pair is updated as new messages arrive. This is faster than waiting for the time-to-live to expire.

(C) When a new request arrives, it is intercepted by a custom WCF IOperationInvoker. Within this WCF extensibility, the cache is queried with the OrganizationId. In case there are messages in the inbox, the IOperationInvoker behaves as a pass-through component. In case the organization's inbox is empty, the IOperationInvoker bypasses the BizTalk engine and immediately returns the warning. This prevents the request from being published on the MessageBox. Below is the main part of the IOperationInvoker; make sure you check the complete implementation on GitHub.
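
As a rough sketch (assuming the StackExchange.Redis client and two hypothetical helpers, ExtractOrganizationId and CreateNoMessageWarning), such an invoker could look like this; the complete implementation on the Codit GitHub remains the reference:

using System;
using System.ServiceModel.Dispatcher;
using StackExchange.Redis;

// Simplified sketch of a cache-aware pass-through invoker, not the actual Codit implementation.
public class CacheAwareOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker innerInvoker;
    private readonly IDatabase cache; // read connection, e.g. pointing to the Redis slave

    public CacheAwareOperationInvoker(IOperationInvoker innerInvoker, IDatabase cache)
    {
        this.innerInvoker = innerInvoker;
        this.cache = cache;
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        // Hypothetical helper that extracts the OrganizationId from the incoming request.
        string organizationId = ExtractOrganizationId(inputs);

        // The value is a Boolean stored as a string; "False" means the inbox is known to be empty.
        RedisValue hasMessages = cache.StringGet(organizationId);
        if (hasMessages.HasValue && hasMessages == bool.FalseString)
        {
            // Bypass BizTalk: return the warning without publishing the request on the MessageBox.
            outputs = new object[0];
            return CreateNoMessageWarning(organizationId);
        }

        // Messages available (or cache miss): behave as a pass-through component.
        return innerInvoker.Invoke(instance, inputs, out outputs);
    }

    public object[] AllocateInputs() { return innerInvoker.AllocateInputs(); }

    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    {
        return innerInvoker.InvokeBegin(instance, inputs, callback, state);
    }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    {
        return innerInvoker.InvokeEnd(instance, out outputs, result);
    }

    public bool IsSynchronous
    {
        get { return innerInvoker.IsSynchronous; }
    }

    // Hypothetical helpers: in the real solution these parse the WCF message and build the warning response.
    private static string ExtractOrganizationId(object[] inputs) { throw new NotImplementedException(); }
    private static object CreateNoMessageWarning(string organizationId) { throw new NotImplementedException(); }
}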

Results

After implementing this caching solution, we have seen a significant performance increase of our overall solution. Without caching, response times for requests on empty inboxes averaged 1.3 seconds for 150 concurrent users. With caching, response times decreased to an average of 200 ms.

Lessons Learned

Thanks to the good results, we introduced the Redis cache for other functionality in our solution: caching configuration data, routing information and validation information. Along the way, we learned some lessons:

  • Redis is a key/value cache; change your mindset to get the most out of it.
  • Re-use connections to the cache, as setting up a connection is the most costly operation.
  • Avoid serialization of cached objects.

Thanks for reading!
Jonathan & Toon

Categories: BizTalk, Performance

Posted on Thursday, April 21, 2016 2:39 PM

Maxim Braekman by Maxim Braekman

BizTalk Server 2010 does not support the use of TLS 1.2 out of the box. Learn how to get it up and running anyway.

Setting up a connection from a BizTalk Server 2010 send port towards a service with transport security (https) using certificates is not always straightforward. But as long as you're attempting to use SSL 3.0 or TLS 1.0, it should, in most cases, not be rocket science.

However, when attempting to address a service that uses the TLS 1.2 security protocol, you might get the error shown below.

The adapter failed to transmit message going to send port " Sp_SendToService_WCF-Custom" with URL " https://some-service/Action". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.ServiceModel.Security.MessageSecurityException: The HTTP request was forbidden with client authentication scheme 'Anonymous'. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden. 
  at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult) 
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result) 
   --- End of inner exception stack trace ---

Server stack trace: 
   at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) 
   at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result) 
   at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result) 
   at System.ServiceModel.Channels.ServiceChannel.EndRequest(IAsyncResult result)

Exception rethrown at [0]: 
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) 
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) 
   at System.ServiceModel.Channels.IRequestChannel.EndRequest(IAsyncResult result) 
   at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)".

 

The cause of this issue is that .NET 4.0, which BizTalk Server 2010 on Windows Server 2008 (R2) runs on, does not support anything other than SSL 3.0 and TLS 1.0.

.NET Framework 4.5, however, does support TLS 1.1 and TLS 1.2, so it seems obvious that in order for this connection to work, this version of the .NET Framework needs to be installed.

Install .NET v4.5.2

In this case, we chose to install the .NET Framework v4.5.2, just to get all of the latest bits and bobs within the .NET Framework v4.5.

The installer of this version of the framework can, of course, be downloaded from the Microsoft-site:

https://www.microsoft.com/en-us/download/details.aspx?id=42642

The installation process is very straightforward: just follow the wizard right up to the point a server reboot is requested.

Update registry settings

Since the installation of the .NET Framework 4.5.2 by itself is not enough to make sure that BizTalk is actually able to use TLS 1.2, you need to make some changes in the registry.

Create the following keys and matching DWORDs.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
   "DisabledByDefault"=dword:00000000
   "Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
   "DisabledByDefault"=dword:00000000
   "Enabled"=dword:00000001

 

Now, set the .NET Framework 4.0 to use the latest version of the SecurityProtocol by creating the DWORDs below, for both 32-bit and 64-bit hosts.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
   "SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
   "SchUseStrongCrypto"=dword:00000001

 

Beware! While this will enable BizTalk to use TLS 1.2, it also makes this version the default SecurityProtocol. In order to use a different version, a custom endpoint behavior is required.

Once all of the registry keys have been created/modified, reboot the server in order for the changes to take effect.

Ready.. set.. (no) go!

Once all of the registry settings have been modified and the server has been rebooted, the next step is to test the connection. In my case this meant triggering the BizTalk flow in order to send a request towards the service.

Unfortunately, the send port raised a 'Transmission Failure' event, which clearly meant something was still off. First, I wanted to make sure that BizTalk was actually attempting to set up the connection using TLS 1.2.

To verify which protocol was being used, I opted for Wireshark. The next step was to start the Wireshark trace and trigger the BizTalk flow once more.

As can be seen in the screenshot below, BizTalk Server is actually using the TLS 1.2 security protocol at this point, thanks to the registry changes mentioned before.

Once you are sure BizTalk Server 2010 is using TLS 1.2 - even if you are still getting an exception - you no longer need to think about this part of the setup. The next step, however, is to troubleshoot the cause of the error.

Before diving into the troubleshooting, it might come in handy to get a little overview of the TLS handshake protocol used to set up the secure channel between the client and server. A schema can be found below, which explains the steps performed by both client and server.

Running into a ‘Could not create SSL/TLS secure channel’-exception

Rest assured, you are not the only one running into hiccups while setting up a communication channel based on TLS 1.2 with BizTalk Server. One of the possible errors you might get when testing the connection is the one below, which is not quite as elaborate as I would have wanted it to be:

The adapter failed to transmit message going to send port "Sp_SendToService_WCF-Custom" with URL "https://some-service/Action". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.ServiceModel.Security.SecurityNegotiationException: Could not establish secure channel for SSL/TLS with authority 'some-service’. ---> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
   --- End of inner exception stack trace ---

Server stack trace:
   at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.EndRequest(IAsyncResult result)

Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at System.ServiceModel.Channels.IRequestChannel.EndRequest(IAsyncResult result)
   at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)".

 

Okay, this indicates something went wrong during the creation of the communication channel between BizTalk Server and the target web service. But the cause of this error isn't quite as clear as you would want it to be. But hey, what developer doesn't like a good challenge, right?

Some of the possible solutions you might find while googling this problem are listed below, just in case you might be in one of these situations:

(FYI: the last one in the list solved the problem in our case.)

Your certificates are in the wrong store

Make sure that the root-certificate is in the actual ‘Trusted Root Certification Authorities’-store and the certificate to be used for this communication is in the ‘Personal’-store.

One possible way to check if these have been imported correctly is to open up the properties of the signing certificate (the one with the private key, which should be in the 'Personal' store) and verify that there is no error symbol showing, as in the screenshot below. If this seems OK, chances are good that the certificates are in the correct store.

Check the send port configuration

While you might be thinking about the more advanced stuff, it is easy to overlook the obvious cases. Just make sure that within the configuration of the WCF-Custom send port, you have set the Security-mode to ‘Transport’ and the ‘Transport/ClientCredentialType’ to ‘Certificate’. 

Explicitly setting the SecurityProtocol to be used, via custom endpoint behavior

One of the other possible solutions might be to explicitly set the SecurityProtocol to be used for this connection, by writing a custom endpoint behavior which does it for you.

While this shouldn’t be required, since the aforementioned registry-settings should’ve made sure that TLS1.2 is being used, the section below should be added into the custom endpoint behavior.

Note: this approach can also be used when a different SecurityProtocol is needed for other send ports.

A nice recap on how to create such a custom endpoint behavior can be found in this post by Mathieu Vermote, while the several ways of registering such a behavior can be found in Toon Vanhoutte's post over here.

Check the SSL Cipher Suites and their order

Now this is the one that did the trick for me. While this might not be the most obvious setting to think of, it plays an important role in the handshake process during the creation of the channel between BizTalk Server and the actual service.

While using Wireshark to troubleshoot the connectivity issues, we noticed that the negotiation between client and server went OK and BizTalk Server was actually using TLS 1.2. However, when the actual data transmission was supposed to start, the handshake failed and an 'Encrypted Alert' was returned by the server, followed by the closing of the connection.

When drilling down into the trace, you will notice that the client, BizTalk in our case, sends an entire list of possible cipher suites to be used for this connection, while the server responds by choosing one of them. In our case, the server just picked the first one and responded using that cipher suite.

After troubleshooting for some time, we were informed by the host of this service that it only supported these SSL cipher suites:

  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_128_GCM_SHA256
  • TLS_RSA_WITH_AES_256_CBC_SHA256
  • TLS_RSA_WITH_AES_128_CBC_SHA256

Comparing these values with those returned by the server, as seen in Wireshark, clearly indicated this might be the root cause of our problem.

Since we could see, based on the complete Wireshark trace, that the required cipher suites were available, we had to check whether the order of these cipher suites could be changed. The same check can be performed to add additional cipher suites to the list, in case a required one is missing.

To check this order, two approaches can be used:

Local Group Policy Editor

This editor can be opened by typing "gpedit.msc" in the "Run" dialog. Once the editor has opened, navigate to "SSL Configuration Settings" to find the "SSL Cipher Suite Order" property.

Opening up this property shows a little "Help" section indicating that if no specific order has been specified, the default order will be used.

However, when looking at this order, the suites required by the service were, in this case, actually at the top of the list. This indicates that this property is in fact not showing the actual order of the SSL cipher suites.

Third-party tool: IISCrypto40

As the order of the SSL cipher suites did not seem to match what we saw in Wireshark, the third-party tool 'IISCrypto40' was used to verify the order of the cipher suites.

The tool itself can be downloaded at this location: https://www.nartac.com/Products/IISCrypto

What we got to see with this tool verified the presumption that the order shown by the 'Local Group Policy Editor' was not the actual order used by BizTalk.

After modifying the list to make sure the SSL cipher suites required by the service are at the top, a restart of the server was required.

Once the server had rebooted, a new Wireshark trace was started and the BizTalk flow was triggered again. This time no transmission failure showed up; instead, a response was returned by the service, without any error!

Looking at the Wireshark-trace, we got to see the complete process of the working connection, as well as the correct cipher suites when drilling down into the details.

Conclusion

There are a couple of settings to modify and take into account, but once these are set, there is no need for a custom endpoint behavior, or any other code change for that matter, to successfully switch connections from TLS 1.0 to the more secure TLS 1.2.

Do keep in mind that some settings cannot be modified within BizTalk and require system-wide modifications. So before making these changes, check that they are not going to break your other flows.

Categories: .NET, BizTalk, Security, WCF
written by: Maxim Braekman

Posted on Thursday, April 7, 2016 5:27 PM

Tom Kerkhove by Tom Kerkhove

Last week Microsoft held its annual //BUILD/ developer conference in San Francisco, with a lot of announcements ranging from Windows 10 to Office to Azure and beyond.

Let's walk through some of the announcements that got me excited!

Announcing Azure Functions, Microsoft's AWS Lambda competitor

Azure Functions (Preview) is the long-awaited competitor to AWS Lambda, allowing you to run small pieces of code and only pay for what you use. It uses an event-driven model where you connect data sources - from the cloud or on-premises - and (re)act on certain events in an easy-to-use way.

You can either use Continuous Deployment & Integration or use the interactive portal to visualize the Triggers, Inputs & Outputs of your functions.

You can write functions in a variety of languages, going from Node.js & Python to Bash & PowerShell to C# and others; they even support pre-compiled executables. Here is a small example of code used in a Function.
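
As a rough illustration, a queue-triggered C# function (run.csx) looks something like this; the binding name myQueueItem is just an example and is defined in the accompanying function.json:

using System;

// run.csx - the queue payload and the TraceWriter logger are supplied by the Functions runtime.
public static void Run(string myQueueItem, TraceWriter log)
{
    // React to the incoming event; outputs can be bound in the same declarative way.
    log.Info($"C# queue trigger processed message: {myQueueItem}");
}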

People who have been working with Azure Web Jobs will see some similarities, but the differentiator here is that with Azure Functions you only pay for the compute you use, while with Azure Web Jobs you run on an App Plan that is billed per hour.

Azure Functions provides a variety of ways to trigger your functions: timer-based, webhooks, events from other services (e.g. a message in a Service Bus queue), and so on, allowing you to use them in a variety of scenarios.

From an integration/IoT perspective this is a very nice service that we can use in combination with other services. We could react to events in an on-premises SQL database and trigger processes in the cloud, or trigger a function from within a Logic App as a substep of a business process, etc.

Interested in knowing how it works under the hood? Check this //BUILD/ session!

Here is a nice comparison between Azure Functions & AWS Lambda by Tom Maiaroto

But keep in mind - This is only a preview!

Extended Device Management in Azure IoT Hub

Microsoft announced that Azure IoT Hub will have extended Device Management features in the near future enabling us to more easily manage our devices, perform health checks, organise devices into groups, and so on, by exposing several server-side APIs:

  • Device Registry Manager API
  • Device Groups API
  • Device Query API
  • Device Model API
  • Device Job API

What I personally like most is that I can now define the information model of devices & entities, taking the management a step further. In the past a device ID was only linked to access keys without any metadata - those days are over!

Announcing Azure IoT Gateway SDK

Following Microsoft's Internet of Your Things vision, they've announced the Azure IoT Gateway SDK, which helps developers & ISVs build flexible field gateways where they can implement edge intelligence to process data before it is even sent to the cloud. This allows us, for example, to encrypt our data before sending it over the wire to improve the security of our solutions.

This is really great because it allows us to save cost/time on the gateway part and focus on connecting our devices to the gateway or analysing & processing our data in the cloud!

Cortana Analytics Suite is no more, meet Cortana Intelligence Suite!

Last year Microsoft announced the Cortana Analytics Suite, the flagship for building intelligent applications in the cloud or on devices based on (big) data analytics.

At //BUILD/ Microsoft took it a step further and rebranded the Cortana Analytics Suite to Cortana Intelligence Suite!

Next to Cortana, the Cortana Intelligence Suite also has two additional "Intelligence" features/services:

  • Microsoft Bot Framework enables you to create your own intelligent agents, or bots, to use in your applications to make them feel more natural. After taking a quick look, it feels like the idea is to create a Web API that is deployed in Azure as an API App.
  • Project Oxford is now being offered as a service called Azure Cognitive Services (Preview). It is a collection of APIs-as-a-Service that enable you to make your applications more intelligent and contains the following APIs at the moment:
    • Language - Web Language Model, Text Analytics & Language Understanding Intelligent Service API
    • Vision - Face & Emotion API
    • Knowledge - Recommendation API
    • Speech - Speech API

Want to become intelligent yourself? Read more about Azure Cognitive Services here and here.

Azure Data Catalog is now Generally Available

Azure's enterprise-grade metadata catalog, Azure Data Catalog, is now Generally Available! Data Catalog stores, describes, indexes, and shows how to access any registered data asset. It enables collaboration on data within the corporation and makes data discovery super easy.

In the new pricing, the limitation on the maximum number of users in the Free plan is gone, and it goes without saying that you really need a catalog when you're working with several data sources, certainly in an IoT solution.

Read the official announcement by Julie Strauss here.

Announcing Azure Power BI Embedded

Microsoft introduced Azure Power BI Embedded, a service that allows you to use embedded interactive visuals in your apps & websites. This allows you to use Power BI Desktop to create reports without having to write any code for the visualization or reporting in your app.

However, it's not 100% clear to me how Power BI Embedded relates to Power BI - is the vision for Embedded to focus on the end user and save developer time, while Power BI is focused on internal usage & data engineers? To be continued...

Here is a small introduction on Azure Power BI Embedded and how you authenticate it against a back-end.

Announcing Azure Storage Service Encryption preview

All new Azure Storage accounts created with Azure Resource Manager now have the possibility to enable Azure Storage Service Encryption (preview). Azure Storage Service Encryption will encrypt all your Blob Storage data at rest using the AES-256 algorithm.

You don't need to do anything as this is a managed service where Microsoft will manage the complete process.

Read the full announcement here.

Partitioned collections in DocumentDb across the globe

The DocumentDB team has made several announcements about their service, so let's have a look!

For starters, they have a new pricing model that separates the billing for storage from throughput. Your indexed storage is billed per GB stored per hour, while throughput is based on the request units (RU) you've reserved per hour.

With the new Global Databases you can take it a step further and replicate your data from one region to several others, allowing you to move your data as close to the consumer as possible. This improves the availability of your application and offers a failover mechanism.

DocumentDB Global Databases is currently in public preview.

When creating a new DocumentDB collection, you now have the option to create a single-partition or a partitioned collection. A partitioned collection allows you to specify a partition key, enabling you to store up to 250 GB of data and reserve up to 250,000 request units per second, or even more by filing a support ticket.
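
As a rough illustration with the DocumentDB .NET SDK (the account endpoint, key, database, collection and partition key below are made up):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class PartitionedCollectionSample
{
    public static async Task CreateCollectionAsync()
    {
        // Illustrative endpoint and key.
        var client = new DocumentClient(new Uri("https://my-account.documents.azure.com:443/"), "<auth-key>");

        // The partition key determines how documents are spread across the underlying partitions.
        var collection = new DocumentCollection { Id = "telemetry" };
        collection.PartitionKey.Paths.Add("/deviceId");

        await client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri("iotdb"),
            collection,
            new RequestOptions { OfferThroughput = 10100 }); // reserved throughput in request units per second
    }
}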

Last but not least, DocumentDB now supports the MongoDB APIs & drivers, allowing you to use your existing MongoDB skills & tools to work with DocumentDB. Because of this, you can now use Parse in Azure with DocumentDB.

Here are some additional resources:

Service Fabric going Generally Available with preview for Windows Server & Linux support

Service Fabric is now Generally Available and ready to use in production on Azure! Using Service Fabric is free of charge; however, you'll need to pay for the compute, network & storage that you use.

For those who missed last year's announcement - Service Fabric is a microservice application platform that allows you to build reliable services & actors in a distributed way. The platform handles application updates/upgrades for you out of the box and is heavily used inside Microsoft, with internal customers such as Azure SQL Database, Azure DocumentDB, Intune, Cortana and Skype for Business.

Microsoft also announced the public preview of standalone Service Fabric on Windows Server, allowing you to use Service Fabric on-premises or in other clouds. Next to Windows Server, it will also be available on Linux, starting with a private preview.

Last but not least, the runtime has also been improved and the GA SDK is available. Remarkably, you can now also debug a cluster in Azure from within Visual Studio.

I bet you'd love to read more! Read more about these announcements & the improved development experience here, or learn more about Service Fabric here.

But wait, there is more!

Here are some small tips/reminders:

  • Azure App Service Advisor now monitors your App Plan, giving you recommendations on the resources, e.g. to scale out to provide more resources & keep running smoothly. This feature is enabled by default as of last week. Check out this Azure Friday episode if you want to learn more.
  • MyDriving is an Azure IoT & Mobile sample that uses Azure services to build a scalable, performant, highly available, and cross-platform IoT service and application. The sample comes with a guide of roughly 150 pages on how it was built. Read more here if you want to learn more about it.
  • A small reminder that Azure Managed Cache Service & Azure In-Role Cache will be retired on November 30, 2016.

Still want more? Don't forget to browse all the recordings of the event here.

Thanks for reading,

Tom.

written by: Tom Kerkhove

Posted on Thursday, March 31, 2016 3:50 PM

Tom Kerkhove by Tom Kerkhove

Recently I was working on a Service Fabric project where I was using Service Remoting to communicate from one service to another by using the ServiceProxy.

Unfortunately it caused an "Interface id -103369040 is not implemented by object Codit.MyOtherService" exception.

Here is how I fixed it.

While refactoring my Service Fabric services I got the following exception over and over again:

Interface id -103369040 is not implemented by object Codit.MyOtherService

The exception was caused when I was running the following code:
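
Roughly sketched (the operation name and URI variable are illustrative), the call looked something like this:

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Illustrative remoting contract; the real project defines its own IMyService and IMyOtherService.
public interface IMyService : IService
{
    Task MyNewOperationAsync();
}

public class MyCallingLogic
{
    public async Task CallOtherServiceAsync(string configuredServiceUri)
    {
        // configuredServiceUri is read from Settings.xml (see the line further below).
        IMyService proxy = ServiceProxy.Create<IMyService>(new Uri(configuredServiceUri));

        // This call threw "Interface id ... is not implemented by object Codit.MyOtherService".
        await proxy.MyNewOperationAsync();
    }
}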

During the refactoring I added an additional operation to IMyService, but apparently the runtime was looking for it in my IMyOtherService implementation. Odd!

During debugging I noticed that the problem was in the configuration of the service that was initiating the call through the proxy. The "culprit" lies in the following line:
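
Sketched out (with illustrative section and parameter names), the configuration lookup looked something like this:

// Inside the calling service; Context is the service context provided by the Service Fabric runtime.
var configPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject("Config");

string configuredServiceUri = configPackage.Settings
    .Sections["Remoting"]           // illustrative section name
    .Parameters["MyServiceUri"]     // illustrative parameter name
    .Value;                         // held "fabric:/Codit.Demo/MyOtherService" instead of "fabric:/Codit.Demo/MyService"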

Do you see it? Neither did I, because the problem was in the Settings.xml of the service: the configured URI I was using was fabric:/Codit.Demo/MyOtherService instead of fabric:/Codit.Demo/MyService. This caused the runtime to attempt to call a method on a service implementation that didn't implement IMyService, but implemented IMyOtherService instead.

While this seems like a stupid mistake - and it is - it took me a while to notice it. What I once again learned is that the key to success is in your logging: log enough to know what's going on, but don't overdo it.

In my case it's a good idea to add an ETW entry stating which configured endpoint I'm remoting to, so I can detect this kind of misconfiguration earlier in the future.

Thanks for reading,

Tom.

written by: Tom Kerkhove

Posted on Thursday, March 10, 2016 3:38 PM

Massimo Crippa by Massimo Crippa

With the latest Azure API Management service update, the Git integration has been rolled out. In this post we will see how we can seamlessly control the versions of the proxy configuration and move it between different environments.

Scenario

All the configuration applied to the proxy and the portal customizations is stored in a database in the provisioned tenant. Now, every APIM tenant can expose a public Git endpoint from which we can pull the configuration down to our local (or remote) Git repository.

Once we have our local version, we can apply the changes we need, push them back to the tenant repository and then deploy them to the APIM database.

The diagram below shows the steps of our scenario:

  1. Save (sync from the APIM repository to the tenant Git)
  2. Pull (from the tenant Git to the local repo)
  3. Apply changes (on our local repo)
  4. Push (to the tenant Git)
  5. Deploy (from the tenant Git to the APIM repository)

 

The Git integration is not enabled by default, so first we have to connect to the management portal, go to the Security area and enable the Git access. 

Save and Pull

The next step is to save the proxy configuration to the APIM tenant's Git repository. This operation can be done in two different ways:

  • Use the “Save Configuration to Repository” button on the APIM Admin Portal.
  • Call the “Save” operation of the API Management REST API (here's how to enable the REST API).

In both cases you have to specify the branch name where the configuration should be saved and whether or not to override changes from newer check-ins. This operation can take a couple of minutes.

Once completed, you can open a Git console and pull the configuration to create a working copy of the remote repository by using the clone command.

Before that, you need to get (in the Admin Portal) a temporary password to access the remote repository.

 

Then run the “git clone https://coditapi.scm.azure-api.net/” command and specify “apim” as the username, along with the temporary password we got in the previous step.

Below is the folder structure of the local repository. As you can see, the proxy configuration is exported (apis, policies, security groups and products) along with the developer portal customizations.

If, in the meantime, a new configuration has been saved in the APIM repo, we can pull it down with the "git pull" command.

Apply a change

Let's imagine we want to change the policy applied to the Echo API to extend the existing basic round robin algorithm.

The policy is applied at API scope so the file to be edited is policies/apis/Echo_API.xml 

This is the result of the "git diff" command after the change.

Now, to add the change to the Git staging area, use the “git add -A” command and then commit the changes with "git commit -m", as shown below.
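
The commands boil down to the following (the commit message is just an example):

git add -A
git commit -m "Extend the Echo API round robin policy"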

Now we’re ready to push our changes to the Azure API Management Git repo.

Push and deploy

Type “git push” to sync the changes with the repository on our API Management tenant.

The final step is to deploy our configuration from the tenant repository to the APIM proxy.

This operation can be done in two different ways:

  • Use the “Deploy Repository Configuration” button on the APIM Admin Portal.
  • Call the “Deploy” operation of the API Management REST API (here's how to enable the REST API).

For this step I'm going to invoke the Configuration REST API using Postman. Here are the details of my API call.

Method : POST
Address : http://{tenantname}.management.azure-api.net/configuration/deploy?api-version=2014-02-14-preview
Headers :
 + Content-Type > application/json
 + Authorization > SharedAccessSignature=....
Body :
    {"branch":"master"}

 

As a response I got a 202 (Accepted) and a Location header with the link to check the status of this transaction.

Use the operationResults operation to check the status (Failed, InProgress, Succeeded) of the deploy. It's a GET and again we must specify the Authorization header, as in the previous call.
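
For reference, such a status call looks along these lines (the address is the one returned in the Location header of the deploy response):

Method : GET
Address : the URL returned in the Location header of the deploy response
Headers :
 + Authorization > SharedAccessSignature=....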

If the deploy succeeded, the changes are immediately applied to the proxy. 

Restore a configuration

Now, imagine that you've applied a wrong configuration to the proxy and you want to restore a previous version from your local git repository. For example, these are the timestamps of my configurations:

  • On proxy: updated at 10:22 AM
  • On tenant repo: updated at 10:18 AM
  • On local repo: updated at 08:19 AM

I want to discard the 10:18 AM version and replace the 10:22 AM version with the 08:19 AM one. It's a four-step procedure.

A) The first thing to do is bring the tenant repo in sync with the proxy. This step is necessary to mark the proxy as synced. Without the sync, you will get this error as the result of the deploy operation: "Deployment operation failed due to invalid data: Snapshot operation is not safe.  Latest sync date: '2015-11-10T10:18:35.5278745'; latest configuration update date: '2015-11-10T10:22:01.7635694'"

B) Apply a modification to the local repo and commit it. This is necessary so the deploy procedure can recognize that there is something to be overwritten. 

C) Run the "git push -f origin master" command to overwrite the version in the tenant Git.

D) Deploy the configuration using the Admin Portal or via the REST API.

Conclusion

The Git integration is a feature that customers have been asking about for a while. Now you can create and manage different versions of your proxy configuration and move them between different environments.

Cheers,

Massimo

Categories: API Management, Azure
written by: Massimo Crippa