
Codit Blog

Posted on Monday, November 20, 2017 6:22 PM

by Toon Vanhoutte

In a previous post, I explained how you can enable OMS monitoring in Logic Apps. In the meantime, the product team has added some new features to the OMS plugin, so let's have a look at what has been added to our toolbox.

Mass Resubmit

This feature allows you to multi-select the Logic App runs you want to resubmit and resubmit them all with a single button click in the upper right corner. This comes in very handy when you're operating a larger Logic Apps integration environment.

Tracked Properties

Tracked properties allow you to log custom data fields to OMS. These tracked properties are now available for search, and their details can be viewed per Logic App run. This is a must-have feature to find messages based on business-related metadata, such as customer name, invoice number, order reference, etc.
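For reference, tracked properties are defined per action in the workflow definition. A rough sketch (the action name, URI, and property names below are illustrative, not from an actual workflow) could look like this:

```json
"Send_invoice": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://example.com/invoices",
    "body": "@triggerBody()"
  },
  "runAfter": {},
  "trackedProperties": {
    "customerName": "@{action()['inputs']['body']['customerName']}",
    "invoiceNumber": "@{action()['inputs']['body']['invoiceNumber']}"
  }
}
```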

Proposed improvements

At our customers, we enable OMS monitoring by default. It's free, if you can live with the 7-day retention and the 500 MB per day limit. While using it in real customer environments, we identified that there's still room for improvement before this becomes a solid monitoring solution. These are our most important suggestions for the product team:

Performance

  • On average, there's a 10-minute delay between the Logic App run execution and the logs becoming available in OMS. Although some delay is acceptable, 10 minutes is quite a long time span.
  • This delay is most disturbing when you are resubmitting messages: you perform a resubmit, but you can't see the results of that action right away.

Error Handling

  • When working in an operations team, you have no visibility into which Logic App runs have already been resubmitted. This leads to situations where failed Logic Apps are resubmitted twice, without anyone knowing it.
  • Some failures can be handled through manual intervention. It would be handy if you could mark these failures as handled, so everyone is aware that they can be ignored.

User Experience

  • Tracked properties are only visible when opening the detail view. It would be nice if you could add them as columns in the results pane.
  • The search on tracked properties is limited to one AND / OR combination. A more advanced free-text search at the top could provide a better user experience.
  • A click-through from the results pane to the Logic App run details view could improve the troubleshooting experience.

Conclusion

I'm happy to see continuous investment in the operational side of Logic Apps. As always, I look at it with a critical mindset, to give constructive feedback and help steer the product in the best direction for our customers. It's great to see the product team taking such feedback into account to continuously improve the product! Be aware that the OMS plugin is still in preview!

Toon

Categories: Azure
Tags: Logic Apps, OMS
written by: Toon Vanhoutte

Posted on Wednesday, November 15, 2017 5:51 PM

by Tom Kerkhove

Azure Key Vault is hard, but only because you need to understand and implement the authentication with Azure AD. That's why Azure AD Managed Service Identity (MSI) now makes this a lot easier for you. There is no longer any reason not to use Azure Key Vault.

As you might know, I'm a big fan of Azure Key Vault - it allows me to securely store secrets and cryptographic keys while still having granular control over who has access and what they can do.

Another benefit is that since all my secrets are centralized, it is easy to roll authentication keys automatically by simply updating the secrets during the process. If an application gets compromised or somebody has bad intentions, we can simply revoke their access and the secrets they hold will no longer work.

If you want to learn more, you can read this article.

However, Azure Key Vault depends heavily on Azure AD for handling authentication and authorization.

This means that in order to use Azure Key Vault, you not only need to understand how to use it, you also need to understand how Azure AD works and what the authentication scheme is - and it ain't easy.

It is also hard to justify using Azure Key Vault as a secure store for all your secrets, because instead of storing some of your secrets in Azure Key Vault, you now need to store your AD authentication information somewhere. This can be an authentication key or, preferably, a certificate that is installed on your compute node.

Some actually see this as making the exposure bigger, which is true to a certain degree, because you are now basically storing the keys to the kingdom.

To conclude - Azure Key Vault itself is super easy to use, but the Azure AD part is not.

Introducing Azure AD Managed Service Identity

Azure AD Managed Service Identity (MSI) is a free turnkey solution that simplifies AD authentication by using your Azure resource that is hosting your application as an authentication proxy, if you will.

When you enable MSI, it creates an Azure AD Application for you behind the scenes that is used as a "proxy application" representing your specific Azure resource.

Once your application authenticates against the local authentication endpoint, it authenticates with Azure AD via its proxy application.

This means that instead of creating an Azure AD Application and granting it access to your resource, in our case Key Vault, you will instead only grant the proxy application access.

The best thing is that this is all abstracted away for you, which makes things very easy. You, as a developer, just need to turn on MSI, grant the application access, and you're good to go.

This turnkey solution makes it super easy for developers to authenticate with Azure AD without knowing the details.

As Rahul explains in his post, you can use the AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication NuGet package and let the magic do the authentication for you:
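The snippet itself isn't reproduced above, but the pattern from that package looks roughly like this (the vault URL and secret name are placeholders):

```csharp
using System;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

class Program
{
    static void Main()
    {
        // The token provider talks to the local MSI endpoint behind the scenes.
        var tokenProvider = new AzureServiceTokenProvider();

        // KeyVaultClient accepts the provider's Key Vault callback for authentication.
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // Placeholder secret identifier - replace with your own vault and secret.
        var secret = keyVaultClient
            .GetSecretAsync("https://my-vault.vault.azure.net/secrets/MySecret")
            .GetAwaiter().GetResult();

        Console.WriteLine(secret.Value);
    }
}
```

No explicit AD application, client id, or certificate is configured here; that is exactly what MSI takes off your plate.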

It would be even better if this were built into the KeyVaultClient in the future, so that it's easier to discover and can be turned on without any hassle.

Big step forward, but we're not there yet

While this is currently only in public preview, it's a big step forward in making authentication with AD dead simple - but we're not there yet.

  • AD Application Naming - One of the downsides is that it creates a new AD Application for you, with the same name as your Azure resource. This means that you are not able to pick an existing application or give it a descriptive name, which can be a blocker if you're using naming conventions.
  • Support for limited resources - Currently MSI is only supported for Azure VMs, App Services & Functions. More services are to come, but if you're hoping for Azure Cloud Services, this is unfortunately not going to happen. A full overview is available in the documentation.
  • Native support in Key Vault client - As mentioned before, it would be great if the Azure Key Vault SDK supported MSI out of the box, without us having to do anything from a coding perspective or even be aware of the Microsoft.Azure.Services.AppAuthentication package.
  • Feature Availability - It's still in preview, if you even care about that.

Conclusion

With the introduction of Managed Service Identity, there is no longer any reason not to use Azure Key Vault for your application. It makes things a lot easier, and you should aim to move all your secrets to Azure Key Vault.

It is great to see this evolution and have an easy way to do the authentication without making it complicated.

But Azure Key Vault is not the only AD-integrated service that works well with MSI; other services, like Azure Data Lake & SQL, support it as well. You can get a full overview here.

I am very thrilled about Azure AD Managed Service Identity and will certainly use this, but there are some points for improvement.

Thanks for reading,

Tom

Categories: Azure, Technology
Tags: Key Vault
written by: Tom Kerkhove

Posted on Thursday, November 9, 2017 2:44 PM

by Toon Vanhoutte

Recently I received some questions about deploying long-running Logic Apps. Before providing an answer, I double-checked whether my assumptions were correct.

Deployment statements

My answer contained the following statements:

  1. A new version of a Logic App can be deployed while old versions are still running.
  2. A Logic App run completes in the (potentially old) version it was instantiated with.
  3. A resubmitted Logic App runs against the latest deployed version.

Deployment test

I quickly ran a test to verify whether these statements are true.

  • I created a long-running Logic App with a delay of 1 minute and a Terminate action with Version 1 as the message.

  • I fired the Logic App and immediately saved a new version of it with Version 2 as the terminate message. The running Logic App instance continued and terminated with the message Version 1.

  • When I resubmitted this Logic App, it instantiated a new run from the latest deployed workflow definition. You can verify this by the Version 2 terminate message in the resubmitted Logic App.

I hope these deployment clarifications were helpful!

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Tuesday, November 7, 2017 2:40 PM

by Sam Vanhoutte

The Internet of Things (IoT) is hot. And it should be! But one of the major pitfalls is that IoT projects are overly focused on technology. At times I have been guilty of that myself. It appears that the gap between business and IT has reopened in this respect: the business does not understand enough about IT and its possibilities, and IT does not know enough about the business and what it needs.

IT is only a means. And IoT is not necessarily the solution. And I'm not just talking about IoT gadgets like the not-so-smart smartlocks, smart lighting, expensive juicers, connected refrigerators or other online, possibly automatically shopping, consumer equipment.

Even business-oriented and industrial IoT is often too focused on technological capabilities rather than on business use. As a result, many IoT projects get stuck in the proof-of-concept (PoC) phase and never evolve into pilots and production use. I think the only way to get business buy-in is through the creation of a clear business case.

Past the hype

This is easier said than done. The business case is often hard to predict. Pressure can be high, partly because IoT is now beyond its peak on the Gartner hype cycle. The top of the hype lies behind us and the descent into the 'trough of disillusionment' has set in. For those who are not easily discouraged by Gartner, there are still some genuine pitfalls.

In fact, the design of the PoC phase is one of these pitfalls. Many proofs of concept are set up with little or no business basis. This creates a discrepancy between the PoC and business reality. Test setups for IoT solutions often put too much emphasis on quick results.

Too much time and effort is spent on matters that are less important in practice. And, perhaps even worse, too little time and effort goes into things that are much more important in practice. One example is the upcoming European General Data Protection Regulation (GDPR).

Go for distinctiveness

A better approach to the PoC phase not only increases the chance of success, it also reduces costs because time is spent in more meaningful ways. This includes insight into what has become a commodity nowadays, keeping in mind that IoT is an end-to-end value chain.

There is little credit to be gained from developing components like IoT hardware, network edge capabilities, connectivity, and data intake. It is too difficult for organizations to distinguish themselves here. Instead, they should focus on intelligent clouds, data analytics, reporting and action. The latter is what brings the desired business use.

IoT is only a concept; a means to innovation and acceleration. This means can have a goal, for example an unforeseen reduction of energy consumption.

Let us look at the example of a company that stores deep-frozen food. Storing frozen food is very energy-intensive, but the freezing only has to stay within a specific temperature range: the low temperature does not have to be constant, and sometimes less freezing is acceptable. The company in question has an hourly-rate contract with its energy provider, which gives it the chance to use less energy when electricity is expensive, and to freeze harder during the cheaper hours.

On the way to greater benefits

Nevertheless, many current IoT applications involve no more than the automation of existing business processes and practices. But that is just the beginning. Next to smarter power consumption on an industrial scale, we can think of many new activities and even completely new business models.

Efficient monitoring allows for further optimization of business processes. This solves two problems at once, because optimization requires data collection, and you can do more with more data. This also involves many departments within the organization, as they know the business very well.

Means for innovation

A good deployment of IoT can thus provide insights that allow other value-added services to be developed. This is completely in line with the shift from hardware sales to services. New services allow us to tap into other markets - through IoT, which is still a means and not the goal. IoT is hot, but no more (or less) than a good concept; a means to drive innovation and acceleration.

Note: This article was first published via Computable on 6 November 2017 (in Dutch) 

Categories: Opinions
written by: Sam Vanhoutte

Posted on Wednesday, October 25, 2017 1:44 PM

by Stijn Moreels

One of the first questions people sometimes ask me about functional programming is the question about readability. It was a question I had myself when I started learning functional concepts.

Readability

Now, before moving on: the term “readability” is something very subjective, so it's not easy to find something that everyone agrees on. Yet we can define some common ground.

Single-Letter Values and Functions

The first thing I (and maybe many before and after me) discovered was that the naming conventions of an object-oriented language can't be applied wholesale in a functional environment.

Functional programmers have a habit of giving values and functions very short names, so short that they often consist of a single letter. In an object-oriented language, this is almost always a bad practice (except maybe for a for-loop index?).

So, why is this different in a functional environment?

The answer can be many things, I guess; one thing that comes to mind is that in a functional environment, you very often write functions that can be used for any type (generic). Naming such values and functions can be difficult. Should we name it “value”?

In functional languages, x is most of the time used for this “value”. Strangely enough, by using x I found the code a lot clearer. So, for values: x, y and z; for functions: f, g and h. (Note that these are the same letters we use in mathematics.)

When we talk about multiple values, we add a trailing 's', like xs.
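To make this concrete, here is a small F# sketch of my own (not from the original post) showing these conventions in action:

```fsharp
// f is a function, x is a single value, xs is a list of values.
let rec map f xs =
    match xs with
    | [] -> []
    | x :: rest -> f x :: map f rest

// map ((+) 1) [1; 2; 3] evaluates to [2; 3; 4]
```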

Ok, look for example at the following “bind” function for the Either monad (Railway Oriented Programming):
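The original code isn't reproduced here; an explicit version, assuming a ROP-style Result type, could look like this sketch:

```fsharp
type Result<'TSuccess, 'TFailure> =
    | Success of 'TSuccess
    | Failure of 'TFailure

// Explicit, imperative-style names for every value.
let bind (switchFunction : 'a -> Result<'b, 'c>) (twoTrackInput : Result<'a, 'c>) =
    match twoTrackInput with
    | Success value -> switchFunction value
    | Failure error -> Failure error
```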

We have written the values out explicitly, like we would do in an imperative scenario. Now look at the following:
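Again a sketch rather than the original screenshot, this time with single-letter names and the infix operator:

```fsharp
type Result<'a, 'b> = Success of 'a | Failure of 'b

// f is the function, x is the two-track value.
let bind f x =
    match x with
    | Success s -> f s
    | Failure e -> Failure e

// The infix operator flips bind's arguments: value first, function second.
let (>>=) x f = bind f x
```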

Especially in the infix operator, the most important part is that we clearly see that the arguments passed into the “bind” function are flipped. This was something we couldn't see immediately in the first example.

After a while, when you see an f somewhere; you automatically understand it’s a function, just like x is some “value”.

Now, I'm not going to state anything as fact; but in my personal opinion, the second example shows more of what's going on with less explicit names. We reach a higher level of readability by abstracting our names, which feels rather strange at first.

Partially Applied Functions

One of the powerful concepts in functional languages that I really miss in object-oriented languages (without, of course, custom-made functionality like extension methods in C#) is the concept of Partial Application. This concept describes the idea that if you send an argument to a function, you get back a function expecting the remaining arguments. This concept is very powerful because we can now decompose, for example, a three-argument function into a chain of three one-argument functions.

In practice, this concept can really help when declaring problems. In my previous post, I solved the Coin Change kata. In one of the properties, I needed to describe the overall core functionality. In the assertion, I needed to assert on the sum of all the coin values:
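The property itself isn't shown here, but a sketch of such an assertion with partially applied operators (the names and numbers are hypothetical, not from the kata solution) could be:

```fsharp
// Hypothetical values, just to make the pipeline concrete.
let coins = [ 25; 10; 5 ]
let remaining = 2
let expected = 42

// Partially applied operators: (+) remaining and (=) expected each
// return a function waiting for the remaining argument.
let isValid =
    coins
    |> List.sum        // 40
    |> (+) remaining   // 42
    |> (=) expected    // true
```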

The “( )” around the operators “+” and “=” makes sure I get a function back with the remaining argument. I could have written the assertion as the following expression:
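Written out with explicit anonymous functions instead (same hypothetical names as the pipeline sketch), that expression would look like:

```fsharp
let coins = [ 25; 10; 5 ]
let remaining = 2
let expected = 42

// Each step now introduces an explicitly named lambda argument.
let isValid =
    coins
    |> List.sum
    |> (fun sum -> sum + remaining)
    |> (fun total -> total = expected)
```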

And I can understand that, in the beginning, this may be more understandable for you than the previous example. But please note that, with this anonymous function explicitly specified, we have written a “lot” of code just for addition and equality verification.

I personally think that every functional programmer would refactor this to the first example. Not only can it be written in fewer characters, it also expresses better what we're trying to solve. In imperative languages, we typically assign a value to a variable, which is then used for another variable, … and before you know it you've created a pipeline. I like this concept very much: I don't have to assign each result to a value anymore, but can just pass it along to the next function.

“For the Change”
“We need to have each value”
“So we can sum all the values”
“And sum it with the remaining value”
“This should be the same as we expected”

Notice that we always place the verb at the front of the sentence. By partially applying one of the arguments, the action we're trying to express in code is now at the front of the line and not at the end.
Also note that, when we specify the function explicitly, you can't read the expression in the second example from top to bottom without moving your eyes to the right to see the addition or the equality verification; which is actually the most important part of the line.

This is also one of the reasons I like this form of programming.

Yes, I know that it takes some time to get used to this way of writing functions; but I can assure you. Once you have mastered this technique, you would want this in your favorite object-oriented language as well. (That's one of the reasons I implemented them in the form of Extension Methods).

Infix Operators

One thing that I can't find a common approach for yet is the feature of defining your own operators and where to use them. Infix operators can make your code a lot cleaner and more readable, but they can also harm your readability; that's probably why it's difficult to define a common approach.

There are already many operators available, and by defining your own operators that look similar, readers can guess what the operator does.

The (|>) pipe operator already exists, and by defining operators like (||>) or (|>>), we can guess that they have something to do with piping more than one argument, or with a combination of piping and composing.
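As an illustration, here is my own guess at what such an operator might do (this particular definition is illustrative, not a standard one):

```fsharp
// Pipe a value through two functions in sequence: x |>> (f, g) = g (f x).
let (|>>) x (f, g) = x |> f |> g

let result = 4 |>> ((+) 1, ( * ) 2)   // (4 + 1) * 2 = 10
```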

I haven't found a universal rule for this, but I guess it's something that must be used carefully. If we defined an operator for every function, the code would be less readable.

The (>>=) operator is used for binding, so it's reasonable to define it instead of writing “bind” over and over again, because we're actually more interested in WHAT you're trying to bind. The same can be said about the (<*>) operator for applicative application, or the (<!>, <$>) operators for mapping. When you see (<|>) you know it has something to do with conditional piping, since it pipes in two directions ("if then?"). Some operators are well known and so are probably never questionable to define.

FsCheck defines the (.&.) and (.|.) operators for the AND and OR of properties. We already know the boolean operators without the leading and trailing dots; that's why it's easy to guess what these infix operators do.

The tricky part is when we use too many operators. I would reserve them for cases where we're changing the data flow in such a way that we can reuse it somewhere else. In those cases it's probably a good approach to define an infix operator.

Conclusion

This small blog post was a small reminder for me of why I write fewer characters and still end up with more declarative code. It was strange to think about at first. Most of the time in object-oriented languages, when you talk about short names, short-handed operators, … you quickly end up with an “anti-pattern” or a bad practice, while in functional programming this is the right way to do it.

Both imperative and functional programmers are right, in my opinion. It's just the way each language allows us to write clear, clean, readable code, because that is really what we all want to do.

Categories: Technology
written by: Stijn Moreels