
Codit Blog

Posted on Monday, December 4, 2017 8:51 PM

by Glenn Colpaert

The Internet of Things (IoT) is a business revolution enabled by technology. No longer just for early adopters, it offers tremendous business opportunities.

As already explained in this blog post, the path to building, securing and provisioning a scalable IoT solution from device to cloud can be complex. In most cases, evolving products with IoT requires some up-front investment and a whole new set of skills.

With Microsoft IoT Central, a new SaaS solution, Microsoft is helping to solve these challenges.

Meet Microsoft IoT Central

Microsoft IoT Central was first announced in April 2017. Since then, Microsoft has been working with partners and customers in a private preview to align business and user scenarios with the product functionality. Today, Microsoft IoT Central is available in public preview.

Microsoft IoT Central is a SaaS (Software-as-a-Service) offering that reduces the complexity of IoT solutions. It is fully managed and makes it easy to create IoT solutions by removing the management burden, operational costs and overhead of a typical IoT project.

A silver bullet for IoT?

There's more than one approach to building an IoT solution on the Microsoft Azure platform. With the announcement of Microsoft IoT Central, it's important to determine whether you need a PaaS or a SaaS offering.

SaaS solutions allow you to get started quickly with a pre-configured IoT solution, whereas PaaS solutions provide the building blocks for companies to construct customized IoT solutions.

The decision between PaaS and SaaS depends on your business, your expertise, and the amount of control and customization you need.

If you need more information, please check out the announcement blog posts by Microsoft.

I'll be further exploring this new Microsoft offering in the coming days and will keep you posted on my findings.

Cheers,

Glenn

Categories: Azure, Products
written by: Glenn Colpaert

Posted on Tuesday, November 28, 2017 7:45 AM

by Toon Vanhoutte

Very recently, I discovered a new feature in SQL Server 2016: it allows you to configure the Max Degree of Parallelism (MAXDOP) at the database level, instead of at the instance level. This is very important to take into account for BizTalk installations, as the BizTalk MessageBox database performs at its best when MAXDOP is set to 1.

Quoting the BizTalk documentation:

Max Degree of Parallelism is set to “1” during the configuration of BizTalk Server for the SQL Server instance(s) that host the BizTalk Server MessageBox database(s). This is a SQL Server instance-level setting. This setting should not be changed from this value of “1”. Changing this to anything other than 1 can have a significant negative impact on the BizTalk Server stored procedures and performance. If changing the parallelism setting for an instance of SQL Server will have an adverse effect on other database applications that are being executed on the SQL Server instance, you should create a separate instance of SQL Server dedicated to hosting the BizTalk Server databases.

Thanks to this new feature in SQL Server 2016, we can have the BizTalk MessageBox running with MAXDOP set to 1 on the same instance as databases that have MAXDOP set to 0. Unfortunately, the BizTalk configuration still sets the MAXDOP value to 1 at the instance level. Please vote for this UserVoice item if you agree this should be changed to the MessageBox database level!
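
As a quick reference, this is what the database-scoped setting looks like in T-SQL (a minimal sketch, assuming the default MessageBox database name BizTalkMsgBoxDb):

-- Requires SQL Server 2016 or later; run in the context of the database itself
USE [BizTalkMsgBoxDb];
GO
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1;
GO
-- Verify the database-scoped value
SELECT [name], [value] FROM sys.database_scoped_configurations WHERE [name] = 'MAXDOP';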

This gives us one less reason to install the BizTalk MessageBox database on a separate instance from the rest of the BizTalk databases. One argument for keeping the two-instance strategy is that you can perform better memory allocation and CPU affinity configuration at the SQL instance level.

Thanks to my colleague, Pieter Vandenheede, for his assistance on this one!
Toon

Categories: BizTalk
written by: Toon Vanhoutte

Posted on Thursday, November 23, 2017 1:07 PM

by Maxim Braekman and Jacqueline Portier

Starting with SQL Server 2016, SQL Server AlwaysOn Availability Groups support MSDTC, both on-premises and on Azure VMs. As a result, the SQL Server 2016 AlwaysOn feature is supported for BizTalk databases in on-premises or Azure IaaS scenarios.

In a High Availability scenario, however, you will want to cluster the master secret server as well. You can perfectly use the SQL Server cluster for that purpose. Microsoft has outlined the steps for clustering the SSO server in this article.

However, in Azure some additional steps are required to prevent you from running into the below error message:

The master secret server (cluster-resource-name) specified by the SSO database could not be found 0xC0002A0F Could not Contact the SSO Server %1.

In this article we will describe the steps to solve this problem. We assume the Always On cluster is already installed. If not, you can find directions here and here.

Add a generic service to the cluster

Navigate to the Failover Cluster Manager on one of the SQL servers and go to the Roles overview. Add an additional 'Generic Service' role to your SQL cluster and select the 'Enterprise Single Sign-On Service' from the list. Assign a name to the new role, for instance 'SSO'.

 

 

Once the role has been created, make sure to assign a static IP address by adjusting the properties of the 'IP Address' resource. In this case, '10.2.1.10' was used.
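
If you prefer scripting, the same role can be created with PowerShell (a minimal sketch, assuming the default SSO service name 'ENTSSO' and the IP address used above):

# Run on one of the SQL servers; creates the 'Generic Service' role with a static IP
Import-Module FailoverClusters
Add-ClusterGenericServiceRole -ServiceName "ENTSSO" -Name "SSO" -StaticAddress 10.2.1.10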

 

 

Add a new Frontend IP Pool to the load balancer

Navigate to the Internal Load Balancer in the Azure portal and create a new Frontend IP Pool for the static IP address you assigned to the SSO cluster resource. Make sure to select the same subnet as the one the SQL servers are located in.

Besides the IP address, an additional health probe needs to be created, as it will be used in the background to forward the requests. Like the previously created probes, create an additional probe referring to an unused port. In this case, port '60005' has been chosen.

Finally, create a new rule that maps port 135 to 135.

 

Make sure to also execute the PowerShell command below to set several cluster parameters for this SSO role, similar to what should have been done for the SQL roles. This time, fill in the IP address and name used for the SSO cluster.

As was the case for the SQL servers, this command only has to be executed on a single node within the cluster, as it connects the load balancer's health probe, as configured in the Azure portal, to this cluster role.
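
The command looks along these lines (a sketch; the resource and network names are assumptions, so check them with Get-ClusterResource and Get-ClusterNetwork):

# Run on a single cluster node
Import-Module FailoverClusters
$IPResourceName = "IP Address 10.2.1.10"   # IP Address resource of the SSO role (assumption)
$ClusterNetworkName = "Cluster Network 1"  # as reported by Get-ClusterNetwork (assumption)
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"    = "10.2.1.10"        # frontend IP configured on the load balancer
    "ProbePort"  = 60005              # health probe port configured in the Azure portal
    "SubnetMask" = "255.255.255.255"
    "Network"    = $ClusterNetworkName
    "EnableDhcp" = 0
}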

 

Add load balancer rules for each of the ports used by SSO

Because we need to create a load balancing rule for each of the ports used by SSO, we need to limit the range of ports that will be used. To do this, connect to the SQL servers and perform these steps on each of them:

Open Component Services, navigate to 'My Computer' and open its properties.

Go to the 'Default Protocols' tab, click 'Properties…' and add a range of ports that is located in the 'Intranet Range'.
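
The same setting can also be scripted: the 'Default Protocols' port ranges live under the Rpc\Internet registry key. A sketch, where the 5000-5050 range is only an example and a reboot is required afterwards:

rem Example range only; pick a range that does not clash with other applications
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d 5000-5050 /f
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y /f
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y /f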

 

Once the port range has been assigned, go back to the Azure portal to create the required rules matching these ports. Make sure to connect these rules to the SSO IP and SSO probe we created in the previous step.

This will result in a 'long' list of load balancing rules for all the used ports.

 

Creating the load balancer rules manually is a tedious task. Luckily, we can script it with PowerShell.
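
A sketch of such a script, using the AzureRM cmdlets current at the time of writing (the load balancer, frontend, backend pool and probe names, as well as the 5000-5050 port range, are assumptions matching the examples above):

# Fetch the existing load balancer and the SSO frontend/probe created earlier
$lb = Get-AzureRmLoadBalancer -Name "sql-ilb" -ResourceGroupName "rg-biztalk"
$frontend = Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "SSO-Frontend"
$backend  = Get-AzureRmLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "BackendPool"
$probe    = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "SSO-Probe"

# One load balancing rule per port in the DCOM range configured above;
# floating IP (Direct Server Return) matches the cluster IP pattern used for the SQL listener
5000..5050 | ForEach-Object {
    $lb = $lb | Add-AzureRmLoadBalancerRuleConfig -Name "SSO-Rule-$_" `
        -FrontendIpConfiguration $frontend -BackendAddressPool $backend -Probe $probe `
        -Protocol Tcp -FrontendPort $_ -BackendPort $_ -EnableFloatingIP
}

# Persist all rules in one go
$lb | Set-AzureRmLoadBalancer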

 

MSDTC

A similar action is required for MSDTC, as a load balancing rule is needed for all communication between the servers. However, for MSDTC you can specify a fixed TCP port instead of a range of ports.

To do this, execute the below command, which adds a registry entry to make sure that MSDTC sticks to port '20021', in this case:

reg add HKLM\SOFTWARE\Microsoft\MSDTC /v ServerTcpPort /t REG_DWORD /d 20021 /f

Note: a reboot of the servers might be required before this registry change becomes active.

While we are editing the registry anyway, let's keep at it: to ensure that COM+ is accessible from all servers, the below command needs to be executed on all SQL and BizTalk servers as well.

reg add HKLM\SOFTWARE\Microsoft\COM3 /v RemoteAccessEnabled /t REG_DWORD /d 1 /f

Note: a reboot of the servers might be required before this registry change becomes active.

Finally!

When all these configurations are in place, you should be able to go ahead and create a new redundant SSO system on your SQL Server AlwaysOn cluster.

 

Posted on Monday, November 20, 2017 6:22 PM

by Toon Vanhoutte

In a previous post, I explained how you can enable OMS monitoring in Logic Apps. In the meantime, the product team has added some new features to the OMS plugin, so let's have a look at what's new in our toolbox.

Mass Resubmit

This feature allows you to select multiple Logic App runs that you want to resubmit and resubmit them with a single click of the button in the upper right corner. This comes in very handy when you're operating a bigger Logic Apps integration environment.

Tracked Properties

Tracked properties allow you to log custom data fields to OMS. These tracked properties are now available for search, and the details can be viewed per Logic App run. This is a must-have feature to find messages based on business-related metadata, such as customer name, invoice number, order reference, etc.
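
For reference, tracked properties are defined per action in the Logic App's workflow definition; a minimal sketch, where the action name, fields and expressions are purely illustrative:

"Send_invoice": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://example.org/invoices",
    "body": "@triggerBody()"
  },
  "runAfter": {},
  "trackedProperties": {
    "customerName": "@action()['inputs']['body']['customerName']",
    "invoiceNumber": "@action()['inputs']['body']['invoiceNumber']"
  }
}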

Proposed improvements

At our customers, we enable OMS monitoring by default. It's free, if you can live with the 7-day retention and the 500 MB per day limit. While using it in real customer environments, we identified that there's still room for improvement to make this a solid monitoring solution. These are the most important suggestions for the product team:

Performance

  • On average, there's a 10-minute delay between the Logic App run execution and the logs being available in OMS. Although a delay is acceptable, 10 minutes is quite a long time span.
  • This delay is most disturbing when you are resubmitting messages: you perform a resubmit, but you can't immediately see the results of that action.

Error Handling

  • When working in an operations team, you have no visibility into which Logic App runs have already been resubmitted. This results in situations where some failed Logic Apps are resubmitted twice, without anyone knowing it.
  • Some failures can be handled through a manual intervention. It would be handy if you could mark these failures as handled, so everyone is aware that they can be ignored.

User Experience

  • Tracked properties are only visible when opening the detail view. It would be nice if you could add them as columns in the results pane.
  • The search on tracked properties is limited to one AND / OR combination. Maybe a more advanced free-text search at the top could provide a better user experience.
  • A click-through from the results pane to the Logic App run details view could improve the troubleshooting experience.

Conclusion

Happy to see continuous investments in the operational side of Logic Apps. As always, I'm looking at it with a critical mindset, to give constructive feedback and help steer the product in the best direction for our customers. It's great to see that the product team takes such feedback into account to continuously improve the product! Be aware that the OMS plugin is still in preview!

Toon

Categories: Azure
Tags: Logic Apps, OMS
written by: Toon Vanhoutte

Posted on Wednesday, November 15, 2017 5:51 PM

by Tom Kerkhove

Azure Key Vault is hard, but only because you need to understand and implement the authentication with Azure AD. Azure AD Managed Service Identity (MSI) now makes this a lot easier for you, so there is no reason anymore not to use Azure Key Vault.

As you might know, I'm a big fan of Azure Key Vault - it allows me to securely store secrets and cryptographic keys while still having granular control over who has access and what they can do.

Another benefit is that since all my secrets are centralized, it is easy to provide automatic rolling of authentication keys by simply updating the secrets during the process. If an application gets compromised or somebody has bad intentions, we can simply revoke their access and the secrets they have will no longer work.

If you want to learn more, you can read this article.

However, Azure Key Vault depends heavily on Azure AD for handling the authentication & authorization.

This means that in order to use Azure Key Vault, you not only need to understand how to use it, you also need to understand how AD works and what the authentication scheme is - and it ain't easy.

It is also hard to justify using Azure Key Vault as a secure store for all your secrets, because instead of storing some of your secrets in an Azure Key Vault, you now need to store your AD authentication information instead. This can be an authentication key or, preferably, a certificate that is installed on your compute node.

Some actually see this as increasing the exposure, which is true to a certain degree, because you are now basically storing the keys to the kingdom.

To conclude - Azure Key Vault itself is super easy to use, but the Azure AD part is not.

Introducing Azure AD Managed Service Identity

Azure AD Managed Service Identity (MSI) is a free turnkey solution that simplifies AD authentication by using the Azure resource that hosts your application as an authentication proxy, if you will.

When you enable MSI, it creates an Azure AD application for you behind the scenes that is used as a "proxy application" representing your specific Azure resource.

Once your application authenticates on the local authentication endpoint, it authenticates with Azure AD via its proxy application.

This means that instead of creating an Azure AD application and granting it access to your resource, in our case Key Vault, you only grant the proxy application access.

The best thing is - this is all abstracted for you, which makes things very easy. As a developer, you just need to turn on MSI, grant the application access and you're good to go.
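
Granting that access is a one-liner; a sketch with the AzureRM PowerShell cmdlet, where the vault name and principal id are placeholders:

# Hypothetical names; the principal id is shown on the resource after enabling MSI
$principalId = "<object id of the MSI service principal>"
Set-AzureRmKeyVaultAccessPolicy -VaultName "my-vault" -ObjectId $principalId -PermissionsToSecrets get,list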

This turnkey solution makes it super easy for developers to authenticate with Azure AD without knowing the details.

As Rahul explains in his post, you can use the AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication NuGet package and let the magic do the authentication for you:
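
A minimal sketch of that pattern (the vault URL and secret name are placeholders):

using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

// The token provider talks to the local MSI endpoint behind the scenes
var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

// "my-vault" and "my-secret" are placeholders for your own vault and secret
var secret = await keyVaultClient.GetSecretAsync("https://my-vault.vault.azure.net/secrets/my-secret");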

It would be even better if this were built into the KeyVaultClient in the future, so that it's easier to discover and can be turned on without any hassle.

Big step forward, but we're not there yet

While this is currently only in public preview, it's a big step forward in making authentication with AD dead simple, but we're not there yet.

  • AD Application Naming - One of the downsides is that it creates a new AD application with the same name as your Azure resource. This means that you cannot pick an existing application or give it a descriptive name, which can be a blocker if you're using naming conventions.
  • Support for limited resources - Currently, MSI is only supported for Azure VMs, App Services & Functions. More services are to come, but if you're hoping for Azure Cloud Services, this is unfortunately not going to happen. A full overview is available in the documentation.
  • Native support in the Key Vault client - As mentioned before, it would be great if the Azure Key Vault SDK supported MSI out of the box, without us having to do anything from a coding perspective or having to be aware of the Microsoft.Azure.Services.AppAuthentication package.
  • Feature Availability - It's still in preview, if you even care about that.

Conclusion

With the introduction of Managed Service Identity, there are no more reasons why you should not be using Azure Key Vault for your application. It makes things a lot easier, and you should aim to move all your secrets to Azure Key Vault.

It is great to see this evolution and have an easy way to do the authentication without making it complicated.

But Azure Key Vault is not the only AD-integrated service that works well with MSI; other services like Azure Data Lake & SQL support it as well. You can get a full overview here.

I am very thrilled about Azure AD Managed Service Identity and will certainly use it, but there are still some points for improvement.

Thanks for reading,

Tom

Categories: Azure, Technology
Tags: Key Vault
written by: Tom Kerkhove