
Codit Blog

Posted on Thursday, November 23, 2017 1:07 PM

by Maxim Braekman and Jacqueline Portier

Starting with SQL Server 2016, SQL Server AlwaysOn Availability Groups support MSDTC, both on-premises and on Azure VMs. As a result, the SQL Server 2016 AlwaysOn feature is supported for BizTalk databases, whether on-premises or in Azure IaaS scenarios.

In a High Availability scenario, however, you will want to cluster the master secret server as well. You can use the SQL Server cluster for that purpose. Microsoft has outlined the steps for clustering the SSO server in this article.

In Azure, however, some additional steps are required to prevent you from running into the following error message:

The master secret server (cluster-resource-name) specified by the SSO database could not be found 0xC0002A0F Could not Contact the SSO Server %1.

In this article we will describe the steps to solve this problem. We assume the Always On cluster is already installed. If not, you can find directions here and here.

Add a generic service to the cluster

Navigate to the Failover Cluster Manager on one of the SQL servers and go to the Roles overview. Add an additional 'Generic Service' role to your SQL cluster and select the 'Enterprise Single Sign-On Service' from the list. Assign a name to the new role, for instance 'SSO'.

 

 

Once the role has been created, make sure to assign a static IP address by adjusting the properties of the 'IP Address' resource. In this case '10.2.1.10' was used:

 

 

Add a new Frontend IP Pool to the load balancer

Navigate to the Internal Load Balancer in the Azure portal and create a new Frontend IP Pool for the static IP address you assigned to the SSO cluster resource. Make sure to select the same subnet the SQL servers are located in.

In addition to the IP address, a health probe needs to be created as well, as the load balancer uses it in the background to decide where to forward requests. As with the previously created probes, create an additional probe referring to an unused port. In this case, port '60005' was chosen.

Finally, create a new load balancing rule that maps port 135 to 135.
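For reference, the frontend IP configuration, health probe and port 135 rule can also be created with Azure PowerShell. This is a minimal sketch using the AzureRM module; the load balancer, resource group, virtual network, subnet and backend pool names ('sql-ilb', 'rg-biztalk', 'vnet-biztalk', 'sql-subnet', 'sql-backend') are assumptions to be replaced with your own:

```powershell
# Assumed names - replace with your own resources.
$lb     = Get-AzureRmLoadBalancer -Name "sql-ilb" -ResourceGroupName "rg-biztalk"
$vnet   = Get-AzureRmVirtualNetwork -Name "vnet-biztalk" -ResourceGroupName "rg-biztalk"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "sql-subnet" -VirtualNetwork $vnet

# Frontend IP for the SSO cluster resource, in the same subnet as the SQL servers
$lb | Add-AzureRmLoadBalancerFrontendIpConfig -Name "SSO-Frontend" `
        -PrivateIpAddress "10.2.1.10" -Subnet $subnet | Out-Null

# Health probe on an unused port (60005 in this example)
$lb | Add-AzureRmLoadBalancerProbeConfig -Name "SSO-Probe" -Protocol Tcp `
        -Port 60005 -IntervalInSeconds 5 -ProbeCount 2 | Out-Null

# Rule mapping port 135 to 135; floating IP is the usual setting for cluster IPs
$feip  = Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "SSO-Frontend"
$probe = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "SSO-Probe"
$pool  = Get-AzureRmLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "sql-backend"

$lb | Add-AzureRmLoadBalancerRuleConfig -Name "SSO-Rule-135" -Protocol Tcp `
        -FrontendPort 135 -BackendPort 135 -FrontendIpConfiguration $feip `
        -Probe $probe -BackendAddressPool $pool -EnableFloatingIP | Out-Null

# Persist the changes
$lb | Set-AzureRmLoadBalancer | Out-Null
```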

 

Make sure to also execute the PowerShell command that sets several cluster parameters for this SSO role, similar to what was done for the SQL roles. This time, fill in the IP address and name used for the SSO cluster.

As was the case for the SQL servers, this command only has to be executed on a single node within the cluster, as it connects the load balancer's health probe, as configured in the Azure portal, to this cluster role.
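The command in question follows the same pattern Microsoft documents for SQL availability group listeners behind an internal load balancer: it sets the probe port on the role's 'IP Address' resource. A sketch, assuming the resource is named 'IP Address 10.2.1.10' and using the probe port 60005 from the previous step; look the actual names up with Get-ClusterResource and Get-ClusterNetwork:

```powershell
# Run on a single cluster node. The resource and network names are
# examples - verify them with Get-ClusterResource / Get-ClusterNetwork.
$ipResourceName     = "IP Address 10.2.1.10"
$clusterNetworkName = (Get-ClusterNetwork)[0].Name

Get-ClusterResource $ipResourceName | Set-ClusterParameter -Multiple @{
    Address    = "10.2.1.10"        # static IP assigned to the SSO role
    ProbePort  = 60005              # health probe port configured in the portal
    SubnetMask = "255.255.255.255"
    Network    = $clusterNetworkName
    EnableDhcp = 0
}
```

Take the IP address resource offline and back online afterwards so the new parameters take effect.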

 

Add load balancer rules for each of the ports used by SSO

Because we need to create a load balancing rule for each of the ports used by SSO, we first need to limit the range of ports that will be used. To do this, connect to the SQL servers and perform these steps on each of them:

Open Component Services, navigate to 'My Computer' and open its properties.

Go to the 'Default Protocols' tab, click 'Properties…' and add a range of ports assigned to the 'Intranet Range'.
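The same configuration can be scripted. The DCOM port range set through the GUI above lives in the registry under HKLM\SOFTWARE\Microsoft\Rpc\Internet, so a sketch in the style of the other registry commands in this article could look like this; the range 60200-60400 is only an example, pick one that suits your environment and run the commands elevated on each server:

```shell
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d 60200-60400 /f
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y /f
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y /f
```

As with the other registry changes in this article, a reboot might be required before the new range becomes active.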

 

Once the port range has been assigned, go back to the Azure portal to create the required rules matching these ports. Make sure to connect these rules to the SSO IP and the SSO probe created in the previous step.

This results in a long list of load balancing rules, one for each of the ports used:

 

Creating the load balancer rules manually is a tedious task. Luckily, we can script it with PowerShell.
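A minimal sketch of such a script, assuming the frontend IP ('SSO-Frontend'), probe ('SSO-Probe'), backend pool ('sql-backend') and load balancer names from the previous steps, and the example port range of 60200-60400:

```powershell
# Assumed names - replace with your own resources and port range.
$lb    = Get-AzureRmLoadBalancer -Name "sql-ilb" -ResourceGroupName "rg-biztalk"
$feip  = Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "SSO-Frontend"
$probe = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "SSO-Probe"
$pool  = Get-AzureRmLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "sql-backend"

# One load balancing rule per DCOM port in the chosen range
60200..60400 | ForEach-Object {
    $lb | Add-AzureRmLoadBalancerRuleConfig -Name "SSO-Rule-$_" -Protocol Tcp `
            -FrontendPort $_ -BackendPort $_ -FrontendIpConfiguration $feip `
            -Probe $probe -BackendAddressPool $pool -EnableFloatingIP | Out-Null
}

# Persist all rules in a single call
$lb | Set-AzureRmLoadBalancer | Out-Null
```

Note that load balancers have a limit on the number of rules, which is another reason to keep the DCOM port range small.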

 

MSDTC

A similar action is required for MSDTC, as a load balancing rule is needed for all communication between the servers. For MSDTC, however, you can specify a fixed TCP port instead of a range of ports.

To do this, execute the command below to add a registry entry that pins MSDTC to a specific port, in this case '20021'.

reg add HKLM\SOFTWARE\Microsoft\MSDTC /v ServerTcpPort /t REG_DWORD /d 20021 /f

Note: Afterwards a reboot of the servers might be required before this change in the registry will become active.

While we are editing the registry anyway, let's keep at it: to ensure that COM+ is accessible from all servers, the below command needs to be executed on all SQL and BizTalk servers as well.

reg add HKLM\SOFTWARE\Microsoft\COM3 /v RemoteAccessEnabled /t REG_DWORD /d 1 /f

Note: Again, a reboot of the servers might be required before this registry change becomes active.

Finally!

When all these configurations are in place, you should be able to go ahead and create a new redundant SSO system on your SQL Server Always On cluster.

 

Posted on Monday, November 20, 2017 6:22 PM

by Toon Vanhoutte

In a previous post, I explained how you can enable OMS monitoring in Logic Apps. In the meantime, the product team has added some new features to the OMS plugin, so let's have a look at what has been added to our toolbox.

Mass Resubmit

This feature allows you to multi-select the Logic App runs you want to resubmit. In the upper right corner, you can resubmit them all with a single button click. This comes in very handy when you're operating a larger Logic Apps integration environment.

Tracked Properties

Tracked properties allow you to log custom data fields to OMS. These tracked properties are now searchable, and their details can be viewed per Logic App run. This is a must-have feature for finding messages based on business-related metadata, such as customer name, invoice number, order reference, etc.

Proposed improvements

At our customers, we enable OMS monitoring by default. It's free, if you can live with the 7-day retention and the 500 MB per day limit. While using it in real customer environments, we identified that there's still room for improvement to make this a solid monitoring solution. These are our most important suggestions for the product team:

Performance

  • On average, there's a 10-minute delay between the Logic App run execution and the logs being available in OMS. Although some delay is acceptable, 10 minutes is quite a long time span.
  • This delay is most disturbing when you are resubmitting messages: you perform a resubmit, but you can't see the results of that action right away.

Error Handling

  • When working in an operations team, you have no visibility into which Logic App runs have already been resubmitted. This leads to situations where some failed Logic Apps are resubmitted twice, without anyone knowing it.
  • Some failures can be handled through manual intervention. It would be handy if you could mark these failures as handled, so everyone is aware that they can be ignored.

User Experience

  • Tracked properties are only visible when opening the detail view. It would be nice if you could add them as columns in the result pane.
  • The search on tracked properties is limited to one AND/OR combination. A more advanced free-text search at the top could provide a better user experience.
  • A click-through from the results pane to the Logic App run details view could improve the troubleshooting experience.

Conclusion

Happy to see continuous investment in the operational side of Logic Apps. As always, I'm looking at it with a critical mindset, giving constructive feedback to help steer the product in the best direction for our customers. It's great to see the product team taking such feedback into account to continuously improve the product! Be aware that the OMS plugin is still in preview!

Toon

Categories: Azure
Tags: Logic Apps, OMS
written by: Toon Vanhoutte

Posted on Wednesday, November 15, 2017 5:51 PM

by Tom Kerkhove

Azure Key Vault is hard, but that's because you need to understand and implement the authentication with Azure AD. Azure AD Managed Service Identity (MSI) now makes this a lot easier for you. There is no longer any reason not to use Azure Key Vault.

As you might know, I'm a big fan of Azure Key Vault: it allows me to securely store secrets and cryptographic keys while still having granular control over who has access and what they can do.

Another benefit is that since all my secrets are centralized, it is easy to provide automatic rolling of authentication keys by simply updating the secrets during the process. If an application gets compromised or somebody has bad intentions, we can simply revoke their access and the secrets they have will no longer work.

If you want to learn more, you can read more in this article.

However, Azure Key Vault depends heavily on Azure AD for handling authentication and authorization.

This means that in order to use Azure Key Vault, you not only need to understand how to use it, you also need to understand how AD works and what the authentication scheme is - and it ain't easy.

It is also hard to justify Azure Key Vault as a secure store for all your secrets, because instead of storing some of your secrets in Key Vault, you now need to store your AD authentication information somewhere. This can be an authentication key or, preferably, a certificate installed on your compute node.

Some actually see this as making the exposure bigger, which is true to a certain degree, because you are now basically storing the keys to the kingdom.

To conclude - Azure Key Vault itself is super easy to use, but the Azure AD part is not.

Introducing Azure AD Managed Service Identity

Azure AD Managed Service Identity (MSI) is a free turnkey solution that simplifies AD authentication by using your Azure resource that is hosting your application as an authentication proxy, if you will.

When enabling MSI, it will create an Azure AD Application for you behind the scenes that will be used as a "proxy application" which represents your specific Azure resources.

Once your application authenticates on the local authentication endpoint, it will authenticate with Azure AD by its proxy application.

This means that instead of creating an Azure AD Application and granting it access to your resource, in our case Key Vault, you will instead only grant the proxy application access.

The best thing is - This is all abstracted for you which makes things very easy. You as a developer, just need to turn on MSI, grant the application access and you're good to go.

This turnkey solution makes it super easy for developers to authenticate with Azure AD without knowing the details.

As Rahul explains in his post, you can use the AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication NuGet package and let the magic do the authentication for you:
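The pattern boils down to something like this minimal C# sketch; the vault URL and secret name are placeholders:

```csharp
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

// MSI handles the Azure AD authentication behind the scenes:
// no client ID, key or certificate in your configuration.
var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

// Placeholder vault URL and secret name - replace with your own.
var secret = await keyVaultClient.GetSecretAsync(
    "https://my-vault.vault.azure.net/", "MySecret");
```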

It would be even better if this were built into the KeyVaultClient in the future, so that it's easier to discover and can be turned on without any hassle.

Big step forward, but we're not there yet

While this is currently only in public preview, it's a big step forward for making authentication with AD dead simple but we're not there yet.

  • AD Application Naming - One of the downsides is that it creates a new AD Application for you, with the same name as your Azure resource. This means that you are not able to pick an existing application or give it a descriptive name. This can be a blocker if you're using naming conventions.
  • Support for limited resources - Currently MSI is only supported for Azure VMs, App Services & Functions. There are more services to come but if you're hoping for Azure Cloud Services, this is not going to happen unfortunately. A full overview is available in the documentation.
  • Native support in Key Vault client - As mentioned before, it would be great if the Azure Key Vault SDK supported MSI out of the box, without us needing to do anything from a coding perspective or to be aware of the Microsoft.Azure.Services.AppAuthentication package.
  • Feature Availability - It's still in preview, if you even care about that

Conclusion

With the introduction of Managed Service Identity, there are no more reasons not to use Azure Key Vault for your applications. It makes things a lot easier, and you should aim to move all your secrets to Azure Key Vault.

It is great to see this evolution and have an easy way to do the authentication without making it complicated.

But Azure Key Vault is not the only AD-integrated service that works well with MSI; other services like Azure Data Lake & SQL support it as well. You can find a full overview here.

I am very thrilled about Azure AD Managed Service Identity and will certainly use this, but there are some points for improvement.

Thanks for reading,

Tom

Categories: Azure, Technology
Tags: Key Vault
written by: Tom Kerkhove

Posted on Thursday, November 9, 2017 2:44 PM

by Toon Vanhoutte

Recently I received some questions about deploying long running Logic Apps. Before providing an answer, I double-checked if my thoughts were correct.

Deployment statements

My answer contained the following statements:

  1. A new version of a Logic App can be deployed, when there are old versions running.
  2. A Logic App run completes in the (potentially old) version it was instantiated in.
  3. A Logic App gets resubmitted against the latest deployed version.

Deployment test

I quickly ran a test to verify whether these statements are true.

  • I created a long running Logic App with a delay of 1 minute and a Terminate action with Version 1 as the message.

  • I fired the Logic App and immediately saved a new version of it with Version 2 as the terminate message. The running instance continued and terminated with the message Version 1.

 

  • When I resubmitted this Logic App, a new instance was created from the latest deployed workflow definition. You can verify this by the Version 2 terminate message in the resubmitted run.

 

I hope these deployment clarifications were helpful!

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Tuesday, November 7, 2017 2:40 PM

by Sam Vanhoutte

The Internet of Things (IoT) is hot. And it should be! But one of the major problems is that IoT projects are overly focused on technology. At times I have been guilty of that myself. It appears that the gap between business and IT has reopened in this respect: the business does not understand enough about IT and its possibilities, and IT does not know enough about the business and what it needs.

IT is only a means. And IoT is not necessarily the solution. And I'm not just talking about IoT gadgets like the not-so-smart smartlocks, smart lighting, expensive juicers, connected refrigerators or other online, possibly automatically shopping, consumer equipment.

Even business-oriented and industrial IoT is often focused too much on technological capabilities rather than on business use. As a result, many IoT projects get stuck in the proof-of-concept (PoC) phase and never evolve into pilots and production acceptance. I think the only way to get business buy-in is through the creation of a clear business case.

Past the hype

This is easier said than done. The business case is often hard to predict. Pressure can be high, partly because IoT is now past its peak on the Gartner hype cycle: the top of the hype lies behind us and the descent into the 'trough of disillusionment' has set in. For those who are not easily discouraged by Gartner, there are still some genuine pitfalls.

In fact, the design of the PoC phase is one of these pitfalls. Many proofs of concept are set up with little or no business basis. This results in a discrepancy between the PoC and business reality. Test setups for IoT solutions often put too much emphasis on quick results.

Too much time and effort is spent on matters that are less important in practice. And, perhaps even worse, too little time and effort goes into things that matter much more. One example is the upcoming European data protection regulation, the GDPR.

Go for distinctiveness

A better approach to the PoC phase not only increases the chance of success, it also reduces costs because time is spent in more meaningful ways. This also requires insight into what has become a commodity nowadays, since IoT is an end-to-end value chain.

There is little credit to be gained from developing components like IoT hardware, network edge capabilities, connectivity, and data intake. It is too difficult for organizations to distinguish themselves here. Instead, they should focus on intelligent clouds, data analytics, reporting and action. The latter is what brings the desired business use.

IoT is only a concept; a means to innovation and acceleration. This means can have a goal, for example an unforeseen reduction of energy consumption.

Let us look at the example of a company that stores deep-frozen food. Keeping food frozen is very energy-intensive, but the freezing takes place within a specific temperature range; the low temperature does not have to be constant, and sometimes less freezing is acceptable. The company in question has an hourly-rate contract with an energy provider, which gives them the chance to use less energy at times when it is expensive and to freeze harder during cheaper hours.

On the way to greater benefits

Nevertheless, many current IoT applications involve no more than the automation of existing business processes and practices. But that is just the beginning. Next to smarter power consumption on an industrial scale, we can think of many new activities and even completely new business models.

Efficient monitoring allows for further optimization of business processes. This solves two problems at once: optimization requires data collection, and you can do more with more data. This extends to many departments within the organization, as they know the business very well.

Means for innovation

A good deployment of IoT can thus provide insights that allow other value-added services to be developed. This is completely in line with the shift from hardware sales to services. New services allow us to tap into other markets - through IoT, which is still a means and not the goal. IoT is hot, but no more (or less) than a good concept; a means to drive innovation and acceleration.

Note: This article was first published via Computable on 6 November 2017 (in Dutch) 

Categories: Opinions
written by: Sam Vanhoutte