
Codit Blog

Posted on Wednesday 15 November 2017 17:51

by Tom Kerkhove

Azure Key Vault is hard, but that's because you need to understand and implement the authentication with Azure AD. Azure AD Managed Service Identity (MSI) now makes this a lot easier for you. There is no reason left not to use Azure Key Vault.

As you might know, I'm a big fan of Azure Key Vault - it allows me to securely store secrets and cryptographic keys while still having granular control over who has access and what they can do.

Another benefit is that, since all my secrets are centralized, it is easy to roll authentication keys automatically by simply updating the secrets in the process. If an application gets compromised or somebody has bad intentions, we can simply revoke their access and the secrets they hold will no longer work.

If you want to learn more, you can read this article.

However, Azure Key Vault depends heavily on Azure AD for handling authentication & authorization.

This means that in order to use Azure Key Vault, you not only need to understand how to use it, you also need to understand how Azure AD works and what the authentication scheme is - and it ain't easy.

It is also hard to justify using Azure Key Vault as a secure store for all your secrets when, instead of storing some of your secrets in the vault, you now need to store your Azure AD authentication information somewhere. This can be an authentication key or, preferably, a certificate installed on your compute node.

Some actually see this as making the exposure bigger, which is true to a certain degree, because you are now basically storing the keys to the kingdom.

To conclude - Azure Key Vault itself is super easy to use, but the Azure AD part is not.

Introducing Azure AD Managed Service Identity

Azure AD Managed Service Identity (MSI) is a free turnkey solution that simplifies Azure AD authentication by using the Azure resource that hosts your application as an authentication proxy, if you will.

When you enable MSI, it creates an Azure AD application for you behind the scenes that is used as a "proxy application" representing your specific Azure resource.

Once your application authenticates against the local authentication endpoint, it is authenticated with Azure AD via this proxy application.

This means that instead of creating an Azure AD application and granting it access to your resource - in our case Key Vault - you only grant the proxy application access.
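Under the hood, that local endpoint is plain HTTP. A rough sketch of calling it directly, assuming the App Service flavor of the preview (on VMs the instance metadata endpoint is used instead; the environment variables below are injected by App Service once MSI is enabled):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class MsiEndpointSketch
{
    static async Task Main()
    {
        // Injected by App Service when MSI is turned on.
        var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
        var secret = Environment.GetEnvironmentVariable("MSI_SECRET");

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get,
                $"{endpoint}?resource=https://vault.azure.net&api-version=2017-09-01");
            request.Headers.Add("Secret", secret);

            // The JSON response contains an "access_token" usable against Key Vault.
            var response = await client.SendAsync(request);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```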

The best thing is - this is all abstracted away for you, which makes things very easy. As a developer, you just need to turn on MSI, grant the proxy application access, and you're good to go.

This turnkey solution makes it super easy for developers to authenticate with Azure AD without knowing the details.

As Rahul explains in his post, you can use the AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication NuGet package and let the magic do the authentication for you:
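A minimal sketch of that pattern (the vault URL and secret name below are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

class KeyVaultMsiSketch
{
    static async Task Main()
    {
        // The token provider detects the MSI environment and fetches AD tokens for you.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // Placeholder vault URL and secret name.
        var secret = await keyVaultClient.GetSecretAsync(
            "https://<your-vault>.vault.azure.net/secrets/<secret-name>");
        Console.WriteLine(secret.Value);
    }
}
```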

It would be even better if this were built into the KeyVaultClient in the future, so that it's easier to discover and can be turned on without any hassle.

Big step forward, but we're not there yet

While this is currently only in public preview, it's a big step forward in making authentication with Azure AD dead simple - but we're not there yet.

  • AD Application Naming - One of the downsides is that it creates a new AD Application for you, with the same name as your Azure resource. This means that you are not able to pick an existing application or give it a descriptive name. This can be a blocker if you're using naming conventions.
  • Support for limited resources - Currently MSI is only supported for Azure VMs, App Services & Functions. There are more services to come but if you're hoping for Azure Cloud Services, this is not going to happen unfortunately. A full overview is available in the documentation.
  • Native support in Key Vault client - As mentioned before, it would be great if the Azure Key Vault SDK supported MSI out of the box, without us needing to do anything from a coding perspective or to be aware of the Microsoft.Azure.Services.AppAuthentication NuGet package.
  • Feature Availability - It's still in preview, if you even care about that

Conclusion

With the introduction of Managed Service Identity, there is no reason left not to use Azure Key Vault for your application. MSI makes things a lot easier, and you should aim to move all your secrets to Azure Key Vault.

It is great to see this evolution and to have an easy way to handle authentication without the complexity.

But Azure Key Vault is not the only AD-integrated service that works well with MSI; other services like Azure Data Lake & SQL support it as well. You can get a full overview here.

I am very thrilled about Azure AD Managed Service Identity and will certainly use this, but there are some points for improvement.

Thanks for reading,

Tom

Categories: Azure, Technology
Tags: Key Vault
written by: Tom Kerkhove

Posted on Thursday 9 November 2017 14:44

by Toon Vanhoutte

Recently I received some questions about deploying long-running Logic Apps. Before providing an answer, I double-checked that my assumptions were correct.

Deployment statements

My answer contained the following statements:

  1. A new version of a Logic App can be deployed, when there are old versions running.
  2. A Logic App completes in the (potentially old) version it was instantiated from.
  3. A Logic App gets resubmitted against the latest deployed version.

Deployment test

I quickly ran a test to verify these statements.

  • I created a long-running Logic App with a delay of 1 minute and a Terminate action with Version 1 as the message. (A sketch of such a workflow definition is shown after this list.)

  • I fired the Logic App and immediately saved a new version of it with Version 2 as the terminate message. The running instance continued and terminated with the message Version 1.

  • When I resubmitted this Logic App, it instantiated a new run from the latest deployed workflow definition. You can verify this by the Version 2 terminate message in the resubmitted Logic App.
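For reference, a hedged sketch of what such a workflow definition could look like (an HTTP Request trigger, a one-minute delay, then a Terminate action; the trigger and message are illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "manual": { "type": "Request", "kind": "Http" }
  },
  "actions": {
    "Delay": {
      "type": "Wait",
      "runAfter": {},
      "inputs": { "interval": { "count": 1, "unit": "Minute" } }
    },
    "Terminate": {
      "type": "Terminate",
      "runAfter": { "Delay": [ "Succeeded" ] },
      "inputs": {
        "runStatus": "Failed",
        "runError": { "message": "Version 1" }
      }
    }
  },
  "outputs": {}
}
```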

I hope these deployment clarifications were helpful!

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Tuesday 7 November 2017 14:40

by Sam Vanhoutte

The Internet of Things (IoT) is hot. And it should be! But one of the major pitfalls is that IoT projects are overly focused on technology. At times I have been guilty of that myself. It appears that the gap between business and IT has reopened in this respect: the business does not understand enough about IT and its possibilities, and IT does not know enough about the business and what it needs.

IT is only a means, and IoT is not necessarily the solution. And I'm not just talking about IoT gadgets like not-so-smart smart locks, smart lighting, expensive juicers, connected refrigerators, or other online (and possibly automatically shopping) consumer equipment.

Even business-oriented and industrial IoT is often too focused on technological capabilities rather than on business use. As a result, many IoT projects get stuck in the proof-of-concept (poc) phase and never evolve into pilots and practical acceptance. I think the only way to get business buy-in is through a clear business case.

Past the hype

This is easier said than done. The business case is often hard to predict. Pressure can be high, partly because IoT is now past its peak on the Gartner hype cycle: the top of the hype lies behind us and the slide into the 'trough of disillusionment' has set in. For those who are not easily discouraged by Gartner, there are still some genuine pitfalls.

In fact, the design of the poc phase is one of those pitfalls. Many proofs of concept are set up with little or no business grounding, which creates a discrepancy between the poc and business reality. Test setups for IoT solutions often put too much emphasis on quick results.

Too much time and effort is spent on matters that are less important in practice. And, perhaps even worse, too little time and effort goes into things that are much more important in practice. One example is the upcoming European General Data Protection Regulation (GDPR).

Go for distinctiveness

A better approach to the poc phase not only increases the chance of success, it also reduces costs because time is spent in more meaningful ways. This requires insight into what has become a commodity nowadays, and into the fact that IoT is an end-to-end value chain.

There is little credit to be gained from developing components like IoT hardware, network edge capabilities, connectivity, and data intake. It is too difficult for organizations to distinguish themselves there. Instead, they should focus on intelligent cloud services, data analytics, reporting, and action. The latter is what brings the desired business use.

IoT is only a concept; a means to innovation and acceleration. This means can serve a goal, for example an unexpected reduction of energy consumption.

Let us look at the example of a company that stores deep-frozen food. Storing frozen food is very energy-intensive, but the freezing takes place within a specific temperature range: the low temperature does not have to be constant, and sometimes less freezing is acceptable. The company in question has an hourly-rate contract with its energy provider, which gives it the chance to use less energy at times when it is expensive and to freeze harder during cheaper hours.

On the way to greater benefits

Nevertheless, many current IoT applications involve no more than the automation of existing business processes and practices. But that is just the beginning. Beyond smarter power consumption on an industrial scale, we can think of many new activities and even completely new business models.

Efficient monitoring allows for further optimization of business processes. This solves two problems at once: optimization requires data collection, and with more data you can do more. This also involves many departments within the organization, since they know the business very well.

Means for innovation

A good deployment of IoT can thus provide insights that allow other value-added services to be developed. This is completely in line with the shift from hardware sales to services. New services allow us to tap into other markets - through IoT, which is still a means and not the goal. IoT is hot, but no more (or less) than a good concept; a means to drive innovation and acceleration.

Note: This article was first published via Computable on 6 November 2017 (in Dutch) 

Categories: Opinions
written by: Sam Vanhoutte

Posted on Wednesday 25 October 2017 13:44

by Stijn Moreels

One of the first questions people ask me about functional programming is the question about readability. It was a question I had myself when I started learning functional concepts.

Readability

Now, before moving on: the term “readability” is very subjective, so it's not easy to find a definition everyone agrees on; yet we can establish some common ground.

Single-Letter Values and Functions

The first thing I (and maybe many before and after me) discovered was that the naming conventions of an object-oriented language can't simply be carried over to a functional environment.

Functional programmers have a habit of giving values and functions very short names - so short that they often consist of a single letter. In an object-oriented language this is almost always a bad practice (except maybe for a for-loop with an aggregated index?).

So, why is this different in a functional environment?

The answer can be many things, I guess; one that comes to mind is that in a functional environment you very often write functions that can be used for any type (generic). Naming such values and functions is difficult. Should we name the value “value”?

In functional languages, x is most of the time used for this “value”. Strangely, by using x I found the code a lot clearer. So: x, y, and z for values; f, g, and h for functions. (Note that these letters are the same as we use in mathematics.)

When we talk about multiple values, we add a trailing 's', like xs.

Ok, look for example at the following “bind” function for the Either monad (as used in railway-oriented programming, ROP):
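The original snippet was shown as an image; a hedged reconstruction in F# could look like this:

```fsharp
// The Either type for railway-oriented programming: a value is either on the
// success track or on the failure track.
type Either<'TSuccess, 'TFailure> =
    | Success of 'TSuccess
    | Failure of 'TFailure

// "bind" with explicit, imperative-style names.
let bind switchFunction twoTrackInput =
    match twoTrackInput with
    | Success s -> switchFunction s
    | Failure f -> Failure f
```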

Here we have written the values explicitly, like we would in an imperative scenario. Now look at the following:
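Again a hedged reconstruction, now with single-letter names and an infix operator (using the Either type from the previous sketch):

```fsharp
// The same function: f is "some function", x is "some value".
let bind f x =
    match x with
    | Success x -> f x
    | Failure err -> Failure err

// The infix version; note that the arguments are flipped, which makes it
// pipeline-friendly: value first, function second.
let (>>=) x f = bind f x
```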

Especially in the infix operator, the most important part is that we clearly see that the arguments passed into the “bind” function are flipped. This was something we couldn’t quite see immediately in the first example.

After a while, when you see an f somewhere, you automatically understand it’s a function, just like x is some “value”.

Now, I’m not going to state anything as fact; but in my personal opinion, the second example shows more of what’s going on, with less explicit names. We reach a higher level of readability by abstracting our names - rather strange at first.

Partially Applied Functions

One of the powerful concepts in functional languages that I really miss in object-oriented languages (without, of course, custom-made functionality like extension methods in C#) is partial application. This concept describes the idea that if you send an argument to a function, you get back a function expecting the remaining arguments. It is very powerful because we can decompose, for example, a three-argument function into functions of one, two, and three arguments respectively.
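A small illustration of the idea, with throwaway names:

```fsharp
// A three-argument function...
let add x y z = x + y + z

// ...decomposed by supplying arguments one at a time.
let addOne = add 1        // a function still expecting two arguments
let addOneTwo = add 1 2   // a function still expecting one argument
let result = addOneTwo 3  // 6
```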

In practice, this concept can really help when declaring problems. In my previous post, I solved the Coin Change Kata. In one of the properties, I needed to describe the overall core functionality. In the assertion, I needed to assert on the sum of all the coin values:
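The original snippet was an image; a hedged reconstruction of the assertion, where value, change, remaining, and expected are placeholder stand-ins for the kata's actual values:

```fsharp
// Placeholders standing in for the kata's data.
let value (coin: int) = coin   // a coin's value
let change = [ 25; 10; 5 ]     // coins handed out as change
let remaining = 2              // the amount that couldn't be changed
let expected = 42              // the original amount

// The assertion as a pipeline of partially applied functions.
let holds =
    change
    |> List.map value
    |> List.sum
    |> (+) remaining
    |> (=) expected
```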

The “( )” around the operators “+” and “=” make sure I get back a function that expects the remaining argument. I could have written the assertion as the following expression:
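Reusing the placeholder bindings from the previous sketch, the explicit version could look like this:

```fsharp
let holdsExplicit =
    change
    |> List.map (fun coin -> value coin)
    |> List.sum
    |> (fun sum -> sum + remaining)
    |> (fun total -> total = expected)
```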

And I can understand that, in the beginning, this may be more understandable to you than the previous example. But please note that, with the anonymous functions explicitly specified, we have written a “lot” of code just for addition and equality verification.

I personally think every functional programmer would refactor this to the first example - not only because it can be written in fewer characters, but because it expresses better what we’re trying to solve. In imperative languages, we typically assign a value to a variable, which is then used for another variable, … and without realizing it you’ve created a pipeline. I like this concept very much: I don’t have to assign each result to a value anymore, but can just pass it along to the next function.

“For the Change”
“We need to have each value”
“So we can sum all the values”
“And sum it with the remaining value”
“This should be the same as we expected”

Notice that we always put the verb at the front of the sentence. By partially applying one of the arguments, the action we’re trying to express is now at the front of the line instead of at the end.
Also note that, when we specify the functions explicitly, you can’t read the expression in the second example from top to bottom without moving your eyes to the right to see the addition or the equality verification - which is actually the most important part of the line.

This is also one of the reasons I like this form of programming.

Yes, I know it takes some time to get used to this way of writing functions; but I can assure you: once you have mastered this technique, you will want it in your favorite object-oriented language as well. (That's one of the reasons I implemented them as extension methods.)

Infix Operators

One thing I haven’t found a common approach for yet is defining your own operators, and where to use them. Infix operators can make your code a lot cleaner and more readable, but they can also harm readability; that’s probably why a common approach is hard to define.

Many operators are already available, and by defining our own operators that look similar, readers can guess what they do.

The (|>) pipe operator already exists, and by defining operators like (||>) or (|>>), we can guess that they have something to do with piping more than one argument, or with piping and composing.

I haven’t found a universal rule for this, but I guess it’s something that must be used carefully. If we defined an operator for every function, the code would become less readable.

The (>>=) operator is used for binding, so it’s reasonable to define it instead of writing “bind” over and over again, because we’re actually more interested in WHAT we’re trying to bind. The same can be said about the (<*>) operator for the applicative apply, or the (<!>, <$>) operators for mapping. When you see (<|>) you know it has something to do with choosing between two alternatives, since it points in two directions (“this, or else that?”). Some operators are so well known that defining them is probably never questionable.

FsCheck defines the (.&.) and (.|.) operators for the AND and OR of properties. We already know the boolean operators without the leading and trailing dots; that’s why it’s easy to guess what these infix operators do.

The tricky part is when we use too many operators. I would reserve them for cases where we’re changing the data flow in such a way that we can reuse it somewhere else. In those cases, it’s probably a good approach to define an infix operator.

Conclusion

This small blog post was a reminder for me of why I write fewer characters and still end up with more declarative code. It was strange to think about at first. In object-oriented languages, when you use very short names or shorthand operators, you quickly end up with an “anti-pattern” or a bad practice, while in functional programming this is the idiomatic way.

Both imperative and functional programmers are right, in my opinion. It’s just the way each language allows us to write clear, clean, readable code - because that is really what we want to do.

Categories: Technology
written by: Stijn Moreels

Posted on Monday 23 October 2017 11:18

by Glenn Colpaert

Simplifying IoT, one Azure service at a time!

The Internet of Things (IoT) isn't a technology revolution; it is a business revolution enabled by technology. By 2020, there will be 26 billion connected 'things', and IoT will represent a $12 trillion market. These connected 'things' will range from consumer-driven IoT such as wearables and home automation to intelligent industrial scenarios like smart buildings and intelligent machine infrastructures.

In this blog post, I will go deeper into why IoT is more than just collecting some data from devices, and explain why it's important to engage the business in your IoT solution next to your perfectly built architecture. I will talk about some of the more complex things you need to think about when building and designing your solution. Some of them might sound scary or very complex to tackle, but remember that some of these solutions are just one Azure service away...

A simple view on IoT

When creating a simplified overview of an IoT project or solution, it can be boiled down to four key components. An IoT project always comes down to securely connecting your devices to the cloud and flowing your local data streams into it. Once your device data is stored in the cloud, you can start creating insights on it. Based on those insights, you can bring business intelligence to the business and allow them to act upon actions or events raised on that data and trigger additional workflows.
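To make that first step concrete, here is a hedged sketch of connecting a device and sending telemetry with the Microsoft.Azure.Devices.Client NuGet package (the connection string is a placeholder):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class TelemetrySketch
{
    static async Task Main()
    {
        // Placeholder connection string for a device registered in IoT Hub.
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>",
            TransportType.Mqtt);

        // One telemetry message: a temperature reading as JSON.
        var payload = "{\"temperature\": -18.5}";
        var message = new Message(Encoding.UTF8.GetBytes(payload));

        await deviceClient.SendEventAsync(message);
        Console.WriteLine("Telemetry sent.");
    }
}
```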

IoT projects can be complex!

However, when taking a closer look at IoT projects, there is more to say than these four key components, especially when moving from a POC setup to a full-blown, production-ready solution with potentially thousands of devices in the field. As IoT is a business-driven revolution, the most important action is to involve the business from the very start, as they are the key drivers of your IoT project. The risk of not involving the business is that you get stuck in POC limbo and your IoT solution never sees the light of day. Once you get the business on board, things get easier... or not. Some of the most important technical questions and decisions are listed below; all of them are just a small part of your entire solution.

How to connect things that are hard to connect?

Getting your IP-enabled devices connected to the cloud is one thing, but how will you connect your existing devices that don't speak IP? What if your devices are not capable of change, or the risk of changing them is too high? Or what if your devices aren't even allowed to talk to the cloud for security reasons? When this is the case, you might need to look at other ways to connect your devices with the cloud, for example by introducing a gateway that acts as a 'bridge' between your devices and the cloud platform.

Device Management/lifecycle

Once your devices are connected, there are still some open questions and challenges to tackle before processing your data. How will you securely identify and enroll your devices onto your IoT platform? How will you scale that enrollment to many devices? Next to enrollment, there is also the question of configuring and managing your devices. When looking at device management and lifecycles, there are a couple of common management patterns: updates, reboots, configuration changes, and even software updates. A sketch of one such pattern follows below.
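As an illustration of the configuration pattern, a hedged sketch using the device twin from the Microsoft.Azure.Devices.Client NuGet package (the connection string is a placeholder):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Shared;

class DeviceTwinSketch
{
    static async Task Main()
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>",
            TransportType.Mqtt);

        // Invoked whenever the back end pushes a desired-property (configuration) change.
        await deviceClient.SetDesiredPropertyUpdateCallbackAsync(
            (TwinCollection desired, object userContext) =>
            {
                Console.WriteLine($"New desired properties: {desired.ToJson()}");
                return Task.CompletedTask;
            }, null);

        await Task.Delay(-1); // keep the device listening for updates
    }
}
```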

Data storage/Visualization

Another key component within an IoT solution is data. Data is key to getting the insights the business is looking for. Without a proper data storage and visualization strategy you're in for some trouble - think fast I/O and high scale. When it comes to storing your data, there is no silver bullet: it really depends on the use case and what the ultimate goal is. The key action is to pick your storage based on what you will do with the stored data. Some storage is a perfect input for your analytics tier but might not be a good option when it's just about archiving the data for later use.

Analytics

As already mentioned in this blog, data is key inside your IoT solution. The real value of your IoT project lies in making sense of your data and getting insights from it. Once you have captured those insights, it is key to connect them back to the business and evolve your business by learning from them.

Edge Computing

When doing IoT projects, you're not always in the position of having fully connected sites or factories. There might be limited communication bandwidth or even limited internet connectivity. What if you would like your devices to only send aggregated data of the last minute to the cloud? What if you would like to keep all your data close to your device and only send fault data to the cloud? If this is the case, you need to introduce edge computing into your IoT solution. Edge computing allows you to perform buffering, analytics, machine learning, and even execute custom code on your device, without the need for a permanent internet connection.

Security

Let's not go into detail on this one: start implementing it from day zero, as this is the most important part of your IoT solution. Your end-to-end value chain must be secured. Never cut budget on your security strategy and implementation when doing IoT projects.

Simplifying IoT

Congratulations, you've survived the scary part... Thanks to the Azure cloud, some of the above challenges are just a couple of button clicks away. The goal of Azure and Microsoft is to make it easier to build, secure, and provision scalable solutions from device to cloud. The list of recent IoT innovations on the Azure platform is endless, with a major focus on some of the key challenges every IoT project faces: security, device management, insights, and edge computing.
The future is bright, time to do some IoT!!
Cheers, Glenn
Categories: Azure
Tags: IoT
written by: Glenn Colpaert