Codit Blog

Posted on Thursday, May 12, 2016 12:14 AM

by Joachim De Roissart, Michel Pauwels, Pieter Vandenheede and Robert Maes

The Codit recap of the first day at Integrate 2016.

Joined by our new UK colleagues today, Codit International was represented by no fewer than four countries at Integrate 2016. With a total of 18 people, we were the best-represented company at the event!

Bigger and better than last year's London Summit, so let's summarize the key takeaways from today.

BizTalk + Logic Apps = A hybrid Vision

Microsoft admitted this morning that last year's vision was too cloud-focused. Today they adjusted that vision to a hybrid one, moving from fully on-premises solutions to a hybrid approach, considering that most companies have only a fraction of their applications running in the cloud.

Their vision of hybrid integration becomes a marriage between Microsoft BizTalk Server and Azure Logic Apps: customers can now seamlessly connect traditional on-premises applications to cloud-native applications.


(Picture by Eldert Grootenboer)

What's new in BizTalk?

In addition to the features announced earlier, Microsoft revealed some extra news on the upcoming release of BizTalk Server 2016 RTM:

- A new Logic Apps adapter, which enables integration between SaaS and on-premises LOB applications.
- Some UI tweaks, which they labeled as a "Repaint", for a consistent look and feel. This, however, feels like it still needs some more polishing.

So, all in all, the tried and thoroughly tested existing BizTalk engine remains the same; no fancy new features were announced there.

The planned roadmap for BizTalk Server 2016 RTM remains the same: Q4 2016, with a CTP2 planned for Q3 2016 (summer).

What's new in Logic Apps?

There was a lot more focus on the Logic Apps updates today:

  • Logic Apps are coming to the Visual Studio IDE in the coming weeks!
  • It will be possible to reuse existing BizTalk Server .BTM mapping files in Logic Apps, with full support for BizTalk Server mapping functoids and custom code!
  • Continuous integration: Logic Apps can be added to source control, with versioning, etc.
  • A new BizTalk connector, which is the counterpart of the Logic Apps adapter for BizTalk Server.
  • Added extensibility, allowing you to run custom code.
  • New patterns: request/response, solicit/response and "fire and forget".
  • Some new concepts, such as scopes, which allow nesting of components with individual compensation blocks (e.g. exception handling on each step). Coming in the next few weeks.
  • Tracking and the possibility to debug. To help you pinpoint any Logic Apps instance, you can now track custom properties. Also new is chain tracking, which allows you to chain all tracking events in your instance.
  • Polling HTTP endpoints is now a possibility!
  • Logic Apps can run on a schedule.

Microsoft also introduced a new concept: the Enterprise Integration Pack with the Integration Account. This type of Logic Apps account receives the following capabilities:

  • Containers for centralized storage of schemas, transforms, partners and certificates: a precursor to an upcoming trading partner management addition in the portal.
  • A VETER pipeline concept: a Logic App can be designed to Validate, Extract, Transform, Enrich and Route, with the possibility to route to topics and add exception handling on any step.
  • X12 and AS2 support will be added in the future. EDIFACT will follow further down the road, together with TPM and party resolution.

Integrations made simple with Microsoft Flow

At first sight Microsoft Flow looks and feels like IFTTT or Zapier: a recipe-based tool to automate workflows across different apps and services. However, this service is based on Logic Apps and uses the Logic Apps designer.
For example: integrating Twitter with Dropbox in a couple of clicks.

It might look like just another clone, but Microsoft Flow is much more advanced:

  • More advanced flow design: more branches, more steps
  • More connectors
  • The ability to add your own custom APIs
  • Debugging the flow and triggering it when you need to
  • Flows can be shared with anyone

Microsoft Flow brings a lot to the table, but consider the security concerns when handling integrations in a corporate environment. When using Flow, take into account the potential sensitive nature of the data you are working with.

Flow is free at the moment, but will possibly be licensed on a per-user basis; according to Microsoft, in a cheap license model.

Conclusion

A busy first day brings a lot of news and exciting new features. We are all looking forward to the next day of #Integrate2016.

Thank you for reading.

Brecht, Robert, Joachim, Michel and Pieter

Posted on Monday, May 2, 2016 5:34 PM

by Korneel Vanhie

Due to the switch to the XslCompiledTransform class in BizTalk 2013, user exceptions thrown via XSL terminate are no longer visible in the application event log.

Recently we received a comment from one of the students in our XSLT Training.

As shown in one of the examples, he tried to throw a custom exception from an XSLT mapping and expected to find his error message in the application event log.

What he saw, however, was the following generic exception message:

The Messaging Engine failed while executing the inbound map for the message coming from source * with the Message * Details:"Exception has been thrown by the target of an invocation." 

Upon inquiring about the BizTalk version used (BizTalk 2013 R2), we had an inkling of what the issue might be and tried to reproduce it using the following XSLT:
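The snippet below is an illustrative reconstruction of such a map: an xsl:message with terminate="yes" is what turns a custom message into a user exception (the message text matches the one from the old engine shown further down):

   <?xml version="1.0" encoding="utf-8"?>
   <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
     <xsl:template match="/">
       <!-- terminate="yes" aborts the transform and raises a user exception -->
       <xsl:message terminate="yes">Error: Mapping Terminated from XSL</xsl:message>
     </xsl:template>
   </xsl:stylesheet>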

After executing this map from a Send or Receive port we found the same generic exception in the event log:

The Messaging Engine failed while executing the inbound map for the message coming from source * with the Message * Details:"Exception has been thrown by the target of an invocation." 

When debugging the application and catching the exception, we did find the custom user exception message in the inner exception.

To validate that this behavior was new in BizTalk 2013, we reverted to the old mapping engine as described in this blog post.

Sure enough, after failing another message, we got the following entry in the event log:

The Messaging Engine failed while executing the inbound map for the message coming from source URL: * with the Message Type *. Details:"Transform terminated: 'Error: Mapping Terminated from XSL"

Until recently we had to choose between throwing custom exceptions and using the compiled transform library, which is a shame: receiving readable exceptions from a mapping can greatly simplify troubleshooting a flow.

Luckily, starting from BizTalk 2013 R2 CU2, you can select the mapping engine on a per-transformation basis (https://support.microsoft.com/en-us/kb/3123752).

Categories: BizTalk
written by: Korneel Vanhie

Posted on Thursday, April 28, 2016 3:17 PM

by Toon Vanhoutte and Jonathan Maes

A real-life example of how Redis caching significantly improved the performance of a large-scale BizTalk messaging platform.

With some colleagues at Codit, we're working on a huge messaging platform between organizations, built on top of Microsoft BizTalk Server. One of the key features we must deliver is reliable messaging; therefore we apply AS4 as a standardized messaging protocol. Read more about it here. We use the AS4 pull message exchange pattern to send messages to the receiving organization: the receiving party sends a request to the AS4 web service and the messaging platform returns the first available message from the organization's inbox.

Initial Setup

Store Messages

In order to support this pattern, the messages must be stored in a durable way. After some analysis and prototyping, we decided to use SQL Server for this message storage. With the FILESTREAM feature enabled, we are able to store the potentially large message payloads on disk within one SQL transaction.

(1) The messages are stored in the SQL Server inbox table, using a BizTalk send port configured with the WCF-SQL adapter. The message metadata is saved in the table itself, the message payload gets stored on disk within the same transaction via FILESTREAM.

Retrieve Messages

As the BizTalk web service responsible for returning the messages will be used in high-throughput scenarios, a design was created with only one pub/sub to the BizTalk MessageBox. This choice was made to reduce the web service latency and the load on the BizTalk database.

These are the two main steps:

(2) The request for a message is received and validated on the WCF receive port. The required properties are set to get the request published on the MessageBox and immediately returned to the send pipeline of the receive port. Read here how to achieve this.

(3) A database lookup with the extracted organization ID returns the message properties of the first available message. The message payload is streamed from disk into the send pipeline, which avoids publishing a potentially large message on the MessageBox. The message is returned this way to the receiving party. In case there's no message available in the inbox table, a warning is returned.

Potential Bottleneck

The pull pattern puts a lot of additional load on BizTalk, because many organizations (100+) will be polling for new messages at regular intervals (e.g. every 2 seconds). Each pull request gets published on the BizTalk MessageBox, which causes extra overhead. As these pull requests will often result in a warning indicating there's no message in the inbox, we need to find a way to avoid overwhelming BizTalk with such requests.

Need for Caching

After some analysis, it became clear that caching is the way to go. Within the cache, we can keep track of whether a certain organization has new messages in its inbox or not. In case there are no messages in the inbox, we need a way to bypass BizTalk and immediately return a warning. In case there are messages available in the organization's inbox, we just continue the normal processing as described above. In order to select the right caching software, we listed the main requirements:

  • Distributed: there must be the ability to share the cache across multiple servers
  • Fast: the cache must provide fast response times to improve message throughput
  • Easy to use: preferably simple installation and configuration procedures
  • .NET compatible: we must be able to extend BizTalk to update and query the cache

It became clear that Redis meets our requirements perfectly:

  • Distributed: it’s an out-of-process cache with support for master-slave replication
  • Fast: it’s an in-memory cache, which ensures fast response times
  • Easy to use: simple “next-next-next” installation and easy configuration
  • .NET compatible: there's a great .NET library that is used on Stack Overflow

Implement Caching

To ease the implementation and to be able to reuse connections to the cache, we created our own RedisCacheClient. This client has two connection strings: one to the master (write operations) and one to the slave (read operations). You can find the full implementation on the Codit GitHub. The Redis cache is used in a key/value way: the key contains the OrganizationId, the value contains a Boolean that indicates whether there are messages in the inbox or not. Implementing the cache is done on three levels:

(A) In case a warning is returned indicating there's no message in the inbox, the cache gets updated to reflect that there is no message available for that particular OrganizationId. The key/value pair also gets a time-to-live assigned.

(B) In case a message is placed on the queue for a specific organization, the cache gets updated to reflect that there are messages available for that particular OrganizationId. This ensures the key/value pair is updated as new messages arrive, which is faster than waiting for the time-to-live to expire.

(C) When a new request arrives, it is intercepted by a custom WCF IOperationInvoker. Within this WCF extensibility point, the cache is queried with the OrganizationId. In case there are messages in the inbox, the IOperationInvoker behaves as a pass-through component. In case the inbox of the organization is empty, the IOperationInvoker bypasses the BizTalk engine and immediately returns the warning, which avoids the request being published on the MessageBox. Below is the main part of the IOperationInvoker; make sure you check the complete implementation on GitHub.
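Here is a simplified sketch of such an invoker (names like ExtractOrganizationId and CreateNoMessagesWarning are hypothetical placeholders; the RedisCacheClient is sketched at the end of this post):

   using System;
   using System.ServiceModel.Dispatcher;

   // Simplified sketch: wraps the original invoker and short-circuits requests
   // for organizations whose inbox is known to be empty.
   public class CacheOperationInvoker : IOperationInvoker
   {
       private readonly IOperationInvoker innerInvoker;
       private readonly RedisCacheClient cacheClient;

       public CacheOperationInvoker(IOperationInvoker innerInvoker, RedisCacheClient cacheClient)
       {
           this.innerInvoker = innerInvoker;
           this.cacheClient = cacheClient;
       }

       public bool IsSynchronous
       {
           get { return innerInvoker.IsSynchronous; }
       }

       public object[] AllocateInputs()
       {
           return innerInvoker.AllocateInputs();
       }

       public object Invoke(object instance, object[] inputs, out object[] outputs)
       {
           string organizationId = ExtractOrganizationId(inputs);

           // Messages available: behave as a pass-through component,
           // so the request gets published on the MessageBox as usual.
           if (cacheClient.HasMessages(organizationId))
           {
               return innerInvoker.Invoke(instance, inputs, out outputs);
           }

           // Empty inbox: bypass the BizTalk engine and return the warning immediately.
           outputs = new object[0];
           return CreateNoMessagesWarning(organizationId);
       }

       public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
       {
           return innerInvoker.InvokeBegin(instance, inputs, callback, state);
       }

       public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
       {
           return innerInvoker.InvokeEnd(instance, out outputs, result);
       }

       // Hypothetical helpers: extract the OrganizationId from the request
       // and build the warning response; the real logic is in the GitHub implementation.
       private static string ExtractOrganizationId(object[] inputs)
       {
           return inputs.Length > 0 ? inputs[0] as string : null;
       }

       private static object CreateNoMessagesWarning(string organizationId)
       {
           return null;
       }
   }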

Results

After implementing this caching solution, we saw a significant performance increase in our overall solution. Without caching, response times for requests on empty inboxes averaged 1.3 seconds for 150 concurrent users. With caching, response times decreased to an average of 200 ms.

Lessons Learned

Thanks to the good results, we introduced Redis caching for other functionality in our solution. We use it for caching configuration data, routing information and validation information. During the implementation, we learned some lessons:

  • Redis is a key/value cache; change your mindset to use it to the maximum.
  • Re-use connections to the cache, as creating them is the most costly operation (see the sketch below).
  • Avoid serialization of cached objects.
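To make that last point about connection reuse concrete, here is a minimal sketch along the lines of our RedisCacheClient, using the StackExchange.Redis library (the real implementation is on the Codit GitHub; names and details below are illustrative):

   using System;
   using StackExchange.Redis;

   // Illustrative sketch: one connection for writes (master), one for reads (slave).
   // ConnectionMultiplexer is expensive to create, so each one is created once and reused.
   public class RedisCacheClient
   {
       private static readonly Lazy<ConnectionMultiplexer> WriteConnection =
           new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("master-host:6379"));
       private static readonly Lazy<ConnectionMultiplexer> ReadConnection =
           new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("slave-host:6379"));

       // (B) A message arrived for the organization: flag its inbox as non-empty.
       public void MarkMessagesAvailable(string organizationId)
       {
           WriteConnection.Value.GetDatabase().StringSet(organizationId, true);
       }

       // (A) A warning was returned: flag the inbox as empty, with a time-to-live.
       public void MarkInboxEmpty(string organizationId, TimeSpan timeToLive)
       {
           WriteConnection.Value.GetDatabase().StringSet(organizationId, false, timeToLive);
       }

       // (C) Queried by the WCF IOperationInvoker; when the key has expired,
       // we assume messages might exist and let the request pass through.
       public bool HasMessages(string organizationId)
       {
           RedisValue value = ReadConnection.Value.GetDatabase().StringGet(organizationId);
           return !value.HasValue || (bool)value;
       }
   }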

Thanks for reading!
Jonathan & Toon

Categories: BizTalk

Posted on Thursday, April 21, 2016 2:39 PM

by Maxim Braekman

BizTalk Server 2010 does not support the use of TLS 1.2 out of the box. Learn how to get this up and running anyway.

Setting up a connection from a BizTalk Server 2010 send port towards a service with transport security (HTTPS) using certificates is not always straightforward. But as long as you're attempting to use SSL 3.0 or TLS 1.0, it should, in most cases, not be rocket science.

However, when attempting to address a service utilizing the security protocol TLS 1.2, you might get the error shown below.

The adapter failed to transmit message going to send port " Sp_SendToService_WCF-Custom" with URL " https://some-service/Action". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.ServiceModel.Security.MessageSecurityException: The HTTP request was forbidden with client authentication scheme 'Anonymous'. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden. 
  at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult) 
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result) 
   --- End of inner exception stack trace ---

Server stack trace: 
   at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) 
   at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result) 
   at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result) 
   at System.ServiceModel.Channels.ServiceChannel.EndRequest(IAsyncResult result)

Exception rethrown at [0]: 
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) 
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) 
   at System.ServiceModel.Channels.IRequestChannel.EndRequest(IAsyncResult result) 
   at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)".

The cause of this issue is that .NET 4.0, which BizTalk Server 2010 on Windows Server 2008 (R2) runs on, does not support anything beyond SSL 3.0 and TLS 1.0.

.NET Framework 4.5, however, does support TLS 1.1 and TLS 1.2, so it seems obvious that, in order for this connection to work, this version of the .NET Framework needs to be installed.

Install .NET v4.5.2

In this case, we chose to install the .NET Framework v4.5.2, just to get all of the latest bits and bobs within the .NET Framework v4.5.

The installer of this version of the framework can, of course, be downloaded from the Microsoft site:

https://www.microsoft.com/en-us/download/details.aspx?id=42642

The installation process is very straightforward: just follow the wizard right up to the point where a server reboot is requested.

Update registry settings

Since the installation of the .NET Framework 4.5.2 by itself is not enough to ensure that BizTalk can actually use TLS 1.2, you need to make some changes in the registry.

Create the following keys and matching DWORDs.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
   "DisabledByDefault"=dword:00000000
   "Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
   "DisabledByDefault"=dword:00000000
   "Enabled"=dword:00000001

Now, set the .NET Framework 4.0 to use the latest version of the SecurityProtocol by creating the DWORDs mentioned below, for both 32-bit and 64-bit hosts.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
   "SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
   "SchUseStrongCrypto"=dword:00000001

 

Beware! While this will enable BizTalk to use TLS 1.2, it also makes TLS 1.2 the default SecurityProtocol. In order to use a different version, a custom endpoint behavior would be required (see further below).

Once all of the registry keys have been created or modified, reboot the server for the changes to take effect.

Ready.. set.. (no) go!

Once all of the registry settings were modified and the server rebooted, the next step was to test the connection. In my case this meant triggering the BizTalk flow in order to send a request towards the service.

Unfortunately, the send port logged a 'Transmission Failure' event, which clearly meant something was still off. First, I wanted to make sure that BizTalk was actually attempting to set up the connection using TLS 1.2.

To verify which protocol was being used, I opted for Wireshark. The next step was to start a Wireshark trace and trigger the BizTalk flow once more.

As can be seen in the screenshot below, BizTalk Server is actually using security protocol TLS 1.2 at this point, thanks to the registry changes mentioned before.

Once you are sure BizTalk Server 2010 is using TLS 1.2 - even if you are still getting an exception - you no longer need to think about this part of the setup. The next step, however, is to troubleshoot the cause of the error.

Perhaps before diving into the exception handling, it might come in handy to get a little overview of the TLS handshake protocol used to set up the secure channel between client and server. A diagram can be found below, which explains the steps performed by both client and server.

Running into a ‘Could not create SSL/TLS secure channel’ exception

Rest assured, you are not the only one running into a few hiccups while setting up a communication channel based on TLS 1.2 with BizTalk Server. One of the possible errors you might get when actually testing the connection is the one below, which is not quite as elaborate as it could be, or at least not as elaborate as I would have wanted it to be:

The adapter failed to transmit message going to send port "Sp_SendToService_WCF-Custom" with URL "https://some-service/Action". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.ServiceModel.Security.SecurityNegotiationException: Could not establish secure channel for SSL/TLS with authority 'some-service’. ---> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
   --- End of inner exception stack trace ---

Server stack trace:
   at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.EndRequest(IAsyncResult result)

Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at System.ServiceModel.Channels.IRequestChannel.EndRequest(IAsyncResult result)
   at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)".

 

Okay, this is indicating something went wrong during the actual creation of the communication channel in between BizTalk Server and the target web-service. But as to what might be the cause to this error, isn’t quite as clear as you would want it to be. But hey, what developer doesn’t like a good challenge, right?

Some of the possible solutions you might find while googling this problem are listed below, just in case you find yourself in one of these situations.

(FYI: the last one in the list solved the problem in our case.)

Your certificates are in the wrong store

Make sure that the root certificate is in the actual ‘Trusted Root Certification Authorities’ store and that the certificate used for this communication is in the ‘Personal’ store.

One possible way to check whether these have been imported correctly is to open the properties of the signing certificate (the one with the private key, which should be in the ‘Personal’ store) and verify that no error symbol is showing, as in the screenshot below. If this looks OK, chances are good that the certificates are in the correct store.

Check the send port configuration

While you might be thinking about the more advanced stuff, it is easy to overlook the obvious. Just make sure that, within the configuration of the WCF-Custom send port, you have set the Security mode to ‘Transport’ and the ‘Transport/ClientCredentialType’ to ‘Certificate’.

Explicitly setting the SecurityProtocol via a custom endpoint behavior

One of the other possible solutions is to explicitly set the SecurityProtocol used for this connection, by writing a custom endpoint behavior that does it for you.

While this shouldn't be required, since the aforementioned registry settings should already ensure that TLS 1.2 is being used, the section below would be added to the custom endpoint behavior.

Note: this could also be used to set a different SecurityProtocol for other send ports.
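A minimal sketch of such a behavior (simplified and illustrative; see the posts referenced below for the full pattern) boils down to setting ServicePointManager.SecurityProtocol before the channel is used:

   using System.Net;
   using System.ServiceModel.Channels;
   using System.ServiceModel.Description;
   using System.ServiceModel.Dispatcher;

   // Illustrative endpoint behavior that forces the security protocol for outgoing calls.
   public class SecurityProtocolEndpointBehavior : IEndpointBehavior
   {
       public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
       {
           // Note: ServicePointManager.SecurityProtocol is a process-wide setting.
           ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
       }

       public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
       public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
       public void Validate(ServiceEndpoint endpoint) { }
   }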

A nice recap on how to create such a custom endpoint behavior can be found in this post by Mathieu Vermote, while the several ways of registering such a behavior can be found in Toon Vanhoutte's post over here.

Check the SSL Cipher Suites and their order

Now this is the one that did the trick for me. While it might not be the most obvious setting to think of, the cipher suite order plays an important role in the handshake performed while creating the channel between BizTalk Server and the actual service.

While using Wireshark to troubleshoot the connectivity issues, we noticed that the negotiation between client and server went OK and that BizTalk Server was actually using TLS 1.2. However, when the actual data transmission was supposed to start, the handshake failed and an ‘Encrypted Alert’ was returned by the server, followed by the closing of the connection.

When drilling down into the trace, you will notice that the client, BizTalk in our case, sends an entire list of possible cipher suites to be used for this connection, and the server responds by choosing one of them. In our case, the server just picked the first one and responded using that suite.

After troubleshooting for some time, the host of this service informed us that it only supports these SSL cipher suites:

  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_128_GCM_SHA256
  • TLS_RSA_WITH_AES_256_CBC_SHA256
  • TLS_RSA_WITH_AES_128_CBC_SHA256

Comparing these values with those returned by the server, as seen in Wireshark, clearly indicated this might be the root cause of our problem.

Since the complete Wireshark trace showed that the aforementioned, required cipher suites were available, we had to check whether the order of these cipher suites could be changed. The same check can be performed to add additional cipher suites to the list, in case a required one is missing.

Two approaches can be used to check this order:

Local Group Policy Editor

This editor can be opened by typing “gpedit.msc” in the “Run” dialog. Once the editor has popped up, navigate down to the “SSL Configuration Settings” to find the “SSL Cipher Suite Order” property.

Opening up this property's information shows you a little “Help” section indicating that if no specific order has been specified, the default order will be used.

However, when looking at this order, the suites required by the service were, in this case, actually at the top of the list. This indicates that this property does not in fact show the actual order of the SSL cipher suites.

Third-party tool: IISCrypto40

As the order of the SSL cipher suites did not seem to be correct, based on what we saw in Wireshark, the third-party tool ‘IISCrypto40’ was used to verify the order of the cipher suites.

The tool itself can be downloaded at this location: https://www.nartac.com/Products/IISCrypto

What we saw in this tool confirmed the presumption that the order shown by the ‘Local Group Policy Editor’ was not the actual order used by BizTalk.

After modifying the list to put the SSL cipher suites required by the service at the top, a restart of the server was required.

Once the server was rebooted, a new Wireshark trace was started and the BizTalk flow was triggered again. This time no transmission failure showed up; instead, a response was returned by the service, without any error!

Looking at the Wireshark trace, we saw the complete process of the working connection, as well as the correct cipher suites when drilling down into the details.

Conclusion

There are a couple of settings to modify and take into account, but once these are set, there is no need for any custom endpoint behavior, or any other code change for that matter, to successfully switch a connection from TLS 1.0 towards the more secure TLS 1.2.

Do keep in mind that some settings cannot be modified within BizTalk itself and require system-wide modifications. So before making these changes, check that they will not break any of your other flows.

Categories: BizTalk
written by: Maxim Braekman

Posted on Thursday, April 7, 2016 5:27 PM

by Tom Kerkhove

Last week Microsoft held its annual //BUILD/ developer conference in San Francisco, with a lot of announcements ranging from Windows 10, to Office, to Azure and beyond.

Let's walk through some of the announcements that got me excited!

Announcing Azure Functions, Microsoft's AWS Lambda competitor

Azure Functions (Preview) is the long-awaited competitor to AWS Lambda, allowing you to run small pieces of code while only paying for what you use. It uses an event-driven model where you connect data sources, from the cloud or on-premises, and (re)act on certain events in an easy-to-use way.

You can either choose to use Continuous Deployment & Integration, or use the interactive portal to visualize the Triggers, Inputs & Outputs of your functions.

You can write functions in a variety of languages, from Node.js & Python to Bash & PowerShell to C# and others; they even support pre-compiled executables. Here is a small example of code used in a Function.
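A queue-triggered C# function, for instance, looks something like this (an illustrative sketch, not the exact demo code; the queue binding lives in the accompanying function.json):

   using System;

   // Runs every time a message arrives on the configured Service Bus queue.
   public static void Run(string myQueueItem, TraceWriter log)
   {
       log.Info($"C# function processed message: {myQueueItem}");
   }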

People who have been working with Azure Web Jobs will see some similarities, but the differentiator here is that with Azure Functions you only pay for the compute you use, while with Azure Web Jobs you run on an App Plan that is billed per hour.

Azure Functions provides a variety of ways to trigger your functions: timer-based, webhooks, events from other services (e.g. a message in a Service Bus queue), etc., allowing you to use them in a variety of scenarios.

From an integration/IoT perspective this is a very nice service that we can use in combination with other services. We could react to events in an on-premises SQL database and trigger processes in the cloud, or trigger a function from within a Logic App as a substep of the business process, etc.

Interested in knowing how it works under the hood? Check this //BUILD/ session!

Here is a nice comparison between Azure Functions & AWS Lambda by Tom Maiaroto

But keep in mind: this is only a preview!

Extended Device Management in Azure IoT Hub

Microsoft announced that Azure IoT Hub will get extended Device Management features in the near future, enabling us to more easily manage our devices, perform health checks, organise devices into groups, and so on, by exposing several server-side APIs:

  • Device Registry Manager API
  • Device Groups API
  • Device Query API
  • Device Model API
  • Device Job API

What I personally like most is that I can now define the information model of devices & entities, taking the management a step further. In the past a device ID was only linked to access keys, without any metadata. Those days are over!

Announcing Azure IoT Gateway SDK

Following Microsoft's "Internet of Your Things" vision, they announced the Azure IoT Gateway SDK, which helps developers & ISVs build flexible field gateways that can implement edge intelligence to process data before it is even sent to the cloud. This allows us, for example, to encrypt our data before sending it over the wire, improving the security of our solutions.

This is really great because it allows us to save cost and time on the gateway part and focus on connecting our devices to the gateway, or on analysing & processing our data in the cloud!

Cortana Analytics Suite is no more, meet Cortana Intelligence Suite!

Last year Microsoft announced the Cortana Analytics Suite, the flagship for building intelligent applications in the cloud or on devices, based on (big) data analytics.

At //BUILD/ Microsoft took it a step further and rebranded the Cortana Analytics Suite to Cortana Intelligence Suite!

Next to Cortana, the Cortana Intelligence Suite also has two additional "intelligence" services:

  • Microsoft Bot Framework enables you to create your own intelligent agents, or bots, to use in your applications to make them feel more natural. After taking a quick look, it feels like the idea is to create a Web API that is deployed in Azure as an API App.
  • Project Oxford is now being offered as a service called Azure Cognitive Services (Preview). It is a collection of APIs-as-a-Service that enable you to make your applications more intelligent, and it contains the following APIs at the moment:
    • Language - Web Language Model, Text Analytics & Language Understanding Intelligent Service API
    • Vision - Face & Emotion API
    • Knowledge - Recommendation API
    • Speech - Speech API

Want to become intelligent yourself? Read more about Azure Cognitive Services here and here.

Azure Data Catalog is now Generally Available

Azure's enterprise-grade metadata catalog, Azure Data Catalog, is now Generally Available! Data Catalog stores, describes, indexes, and shows how to access any registered data asset. It enables collaboration on data within the corporation and makes data discovery super easy.

In the new pricing, the limitation on the maximum number of users in the Free plan is gone, and it goes without saying that you really need a catalog when you're working with several data sources, certainly in an IoT solution.

Read the official announcement by Julie Strauss here.

Announcing Azure Power BI Embedded

Microsoft introduced Azure Power BI Embedded, a service that allows you to embed interactive visuals in your apps & websites. This allows you to use Power BI Desktop to create reports without having to write any code for the visualization or reporting in your app.

However, it's not 100% clear to me how Power BI Embedded relates to Power BI. Is the vision for Embedded to focus on the end user and save developer time, while Power BI is focused on internal usage & data engineers? To be continued...

Here is a small introduction to Azure Power BI Embedded and how you authenticate it against a back-end.

Announcing Azure Storage Service Encryption preview

All new Azure Storage accounts using Azure Resource Manager now have the possibility to enable Azure Storage Service Encryption (preview), which will encrypt all your Blob Storage data at rest using the AES-256 algorithm.

You don't need to do anything as this is a managed service where Microsoft will manage the complete process.

Read the full announcement here.

Partitioned collections in DocumentDB across the globe

The DocumentDB team made several announcements about their service, so let's have a look!

For starters, they have a new pricing model that separates the billing for storage from throughput: your indexed storage is billed per GB stored per hour, while throughput is billed based on the request units (RU) you've reserved per hour.

With the new Global Databases you can take it a step further and replicate your data from one region to several others, allowing you to move your data as close to the consumer as possible. This improves the high availability of your application and offers a fail-over mechanism.

DocumentDB Global Databases is currently in public preview.

When creating a new DocumentDB collection, you now have the option to create a Single Partition or a Partitioned Collection. A partitioned collection allows you to specify a partition key, enabling you to store up to 250 GB of data and consume up to 250,000 request units per second, or even more by filing a support ticket.
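As a quick illustration of the concept (a sketch with hypothetical names, using the DocumentDB .NET SDK), creating a partitioned collection comes down to specifying a partition key and reserving throughput:

   using System.Collections.ObjectModel;
   using System.Threading.Tasks;
   using Microsoft.Azure.Documents;
   using Microsoft.Azure.Documents.Client;

   public static class PartitionedCollectionSetup
   {
       // Creates a partitioned collection with "/deviceId" as partition key.
       public static async Task CreateAsync(DocumentClient client)
       {
           var collection = new DocumentCollection
           {
               Id = "telemetry",
               PartitionKey = new PartitionKeyDefinition { Paths = new Collection<string> { "/deviceId" } }
           };

           await client.CreateDocumentCollectionAsync(
               UriFactory.CreateDatabaseUri("mydb"),
               collection,
               // Reserving more throughput than a single partition supports requires a partitioned collection.
               new RequestOptions { OfferThroughput = 10100 });
       }
   }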

Last, but not least: DocumentDB now supports the Apache MongoDB APIs & drivers, allowing you to use your existing MongoDB skills & tools to work with DocumentDB. Because of this, you can now use Parse in Azure with DocumentDB.

Service Fabric goes Generally Available, with previews for Windows Server & Linux support

Service Fabric is now Generally Available and ready to use in production on Azure! Using Service Fabric is free of charge; however, you'll need to pay for the compute, network & storage that you are using.

For those who missed last year's announcement: Service Fabric is a microservice application platform that allows you to build reliable services & actors in a distributed way. The platform handles application updates/upgrades for you out of the box and is heavily used inside Microsoft, with internal customers such as Azure SQL Database, Azure DocumentDB, Intune, Cortana and Skype for Business.

Microsoft also announced the public preview of standalone Service Fabric on Windows Server, allowing you to use Service Fabric on-premises or in other clouds. Next to Windows Server, it will also become available on Linux, starting with a private preview.

Last, but not least: the runtime has also been improved and the GA SDK is available. Remarkably, you can now also debug a cluster in Azure from within Visual Studio.

I bet you'd love to read more! Read more about these announcements & the improved development experience here, or learn more about Service Fabric here.

But wait, there is more!

Here are some small tips/reminders:

  • Azure App Service Advisor now monitors your App Plan, giving you recommendations on resources, e.g. to scale out to provide more resources & keep running smoothly. This feature is enabled by default as of last week. Check out this Azure Friday episode if you want to learn more.
  • MyDriving is an Azure IoT & Mobile sample that uses Azure services to build a scalable, performant, highly available, cross-platform IoT service and application. The sample comes with a roughly 150-page guide on how they built it. Read more here if you want to learn more about it.
  • A small reminder that Azure Managed Cache Service & Azure In-Role Cache will be retired on November 30, 2016.

Still want more? Don't forget to browse all the recordings of the event here.

Thanks for reading,

Tom.