
Codit Blog

Posted on Thursday, October 12, 2017 11:35 PM

Toon Vanhoutte by Toon Vanhoutte

After my first blog in this series about Azure Function Proxies, I received several questions related to API management. People were curious how to position Azure Function Proxies compared to Azure API Management. It should be clear that Azure Function Proxies offers some very limited API management functionality, but it comes nowhere near the capabilities of Azure API Management! Comparing the feature set of these two Azure services doesn't make sense, as Azure API Management supersedes Azure Function Proxies on all levels. Don't forget why Azure Function Proxies was introduced: to unify several separate functions into one API, not to provide full-blown API management. Let's just touch upon the functionalities that they have more or less in common!

Common Functionalities

Transformation

Azure Function Proxies has limited transformation capabilities on three levels: rewriting the URI, modifying the HTTP headers and changing the HTTP body. The options for transformation are very basic and focused on just creating a unified API. Azure API Management, on the other hand, has an impressive range of transformation capabilities.

These are the main transformation policies:

  • Convert JSON to XML - Converts a request or response body from JSON to XML.
  • Convert XML to JSON - Converts a request or response body from XML to JSON.
  • Find and replace string in body - Finds a substring in the request or response and replaces it with a different substring.
  • Mask URLs in content - Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway.
  • Set backend service - Changes the backend service for an incoming request.
  • Set body - Sets the message body for incoming and outgoing requests.
  • Set HTTP header - Assigns a value to an existing request/response header or adds a new one.
  • Set query string parameter - Adds, replaces the value of, or deletes a request query string parameter.
  • Rewrite URL - Converts a request URL from its public form to the form expected by the web service.
  • Transform XML using an XSLT - Applies an XSL transformation to XML in the request or response body.

Next to these policies, you have the opportunity to write policy expressions that inject .NET C# code into your processing pipeline, to make it even more intelligent.
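As an illustration of such a policy expression (the specific policy and header name are my own choice, not taken from the original post), the C# code between @( … ) is evaluated inside the processing pipeline:

```xml
<inbound>
    <base />
    <!-- The C# expression between @( ... ) runs at request time in the pipeline -->
    <set-header name="x-correlation-id" exists-action="skip">
        <value>@(Guid.NewGuid().ToString())</value>
    </set-header>
</inbound>
```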

Security

Azure Function Proxies supports any kind of backend security that can be accomplished through static keys / tokens in the URL or HTTP headers. Frontend-facing, Azure Function Proxies offers out-of-the-box authentication enforcement by several providers: Azure Active Directory, Facebook, Google, Twitter & Microsoft. Azure API Management has many options to secure the frontend and backend API, going from IP restrictions to inbound throttling, from client certificates to full OAuth2 support.

These are the main access restriction policies:

  • Check HTTP header - Enforces existence and/or value of a HTTP Header.
  • Limit call rate by subscription - Prevents API usage spikes by limiting call rate, on a per subscription basis.
  • Limit call rate by key - Prevents API usage spikes by limiting call rate, on a per key basis.
  • Restrict caller IPs - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
  • Set usage quota by subscription - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
  • Set usage quota by key - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
  • Validate JWT - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.

These are the main authentication policies:

  • Authenticate with Basic - Authenticates with a backend service using Basic authentication.
  • Authenticate with client certificate - Authenticates with a backend service using a client certificate.

Hybrid Connectivity

Azure Function Proxies can leverage the App Service networking capabilities, if they are deployed within an App Service Plan. This gives three powerful hybrid network integration options: hybrid connections, VNET integration or App Service Environment. Azure API Management, premium tier, allows your API proxy to be part of a Virtual Network. This provides access to all resources within the VNET, which can be extended to on-premises through a Site-to-Site VPN or ExpressRoute. On this level, both services offer quite similar functionality.

Scope

The scope of Azure Function Proxies is really at the application level. It creates one single uniform API that typically consists of multiple heterogeneous backend operations. Azure API Management has more of an organizational reach and typically governs (large parts of) the APIs available within an organization. The diagram below illustrates how they can be combined. The much broader scope of API Management also results in a much richer feature set: e.g. the publisher portal to manage APIs, the developer portal with samples for quick starts, advanced security options, the enormous range of runtime policies, a great versioning experience, etc.

Use cases

These are some use cases where Azure Function Proxies has already proven very beneficial:

  • Create a single API that consists of multiple Azure Functions and/or Logic Apps
  • Create a pass-through proxy to access on-premises APIs, without any coding
  • Generate a nicer URL for AS2 endpoints that are hosted in Azure Logic Apps
  • Generate a simple URL for Logic Apps endpoints, which works better for QR codes
  • Add explicit versioning in the URL of Azure Functions and/or Logic Apps

Conclusion

Azure Function Proxies really adds value in the modern world of APIs, which often consist of multiple heterogeneous (micro-)service operations. It offers very basic runtime API management capabilities that reside at the application level.

Cheers,
Toon

Categories: Azure
Tags: Functions
written by: Toon Vanhoutte

Posted on Thursday, October 5, 2017 11:43 PM

Stijn Moreels by Stijn Moreels

This post expands on that subject: how I changed my way of writing code and how I became a functional guide.

Introduction

One of the things Functional Programmers and Enthusiasts very often come across when working in a “mainstream” development environment is that they must work together with non-functional programmers.

My opinion is that people should not try to convince others of their opinion, but instead only show them how to do certain things and let the audience decide what they feel most comfortable with.

That’s exactly what happened to me: I worked on a project, and because I used functional approaches, people asked me to explain some concepts that I introduced to the rest of the team.

Guidance

Instead of directly talking about Higher-Order Functions, Currying, Monads, Catamorphisms, Endomorphisms, … I decided that I wanted to start with the simplest thing I think was possible to change.

PipeTo

In functional programming, we try to compose all the functions together to create new functions. In imperative languages, we use variables that we can send to the next function instead of sending the result of the first function directly. Piping and Composition are those building blocks.

Note that Composition is the very root of all software development. It always feels to me that functional programming is using this all the way: Everything is an expression, and so, everything can be composed.

My first change was to add an extension method called ‘PipeTo’ in the C# project:
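The original code sample is not included here; a minimal sketch of such an extension method could look like this:

```csharp
using System;

public static class FunctionalExtensions
{
    // Pipes a value into a function: 'value.PipeTo(f)' instead of 'f(value)'.
    public static TResult PipeTo<TSource, TResult>(
        this TSource source, Func<TSource, TResult> func) => func(source);
}
```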

What piping really means to me is just the order in which you express a value and a function. With piping, you can change that order. Normally we would type the method first and then the argument that we want to send to that method.

What piping allows me to do, is to first write the value and then the method.

This simple extension allowed me to write quite powerful expressions:
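The original expression is not shown here; as a hypothetical sketch (the property name and logic are my own, not the author's), a single-line FsCheck property using 'PipeTo' might look like:

```csharp
using System.Linq;
using FsCheck.Xunit;

public class PipeToProperties
{
    // One expression, no intermediate variables: the value flows left to right.
    [Property]
    public bool Reverse_Of_Reverse_Is_The_Original(int[] xs) =>
        xs.Reverse()
          .PipeTo(ys => ys.Reverse())
          .PipeTo(ys => ys.SequenceEqual(xs));
}
```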

This is a dummy example of how you can use this approach. Note that I use FsCheck as the testing framework and that my test is actually expressed in a single line.

I see two major benefits about this approach:

  1. First, when I use this piping method, I don’t have to express intermediate variables that sometimes only clutter the actual functionality you want to write. Together with the variables, in C# we must also express the types (if we don’t use ‘var’ everywhere); so, it struck me that I wasted time reading types instead of reading method names.
  2. Second, instead of assigning the result of a method to a variable, we can immediately send it to the next method. This allows us to write in the same order as the data flows, just like we would express it with intermediate variables.

To go a little deeper into the second benefit, this is what it looks like with the intermediate variables:
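The original sample is missing; a comparable sketch (the string-processing logic is hypothetical) with intermediate variables:

```csharp
using System.Collections.Generic;
using System.Linq;

string sentence = "piping makes the flow explicit";
// Each step gets its own named (and typed) variable:
string[] words = sentence.Split(' ');
IEnumerable<string> upper = words.Select(w => w.ToUpperInvariant());
string result = string.Join(" ", upper);
```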

We need these intermediate variables to have the data flow from top to bottom (like you read a book). To get rid of the intermediate variables without the ‘PipeTo’, we could inline them:
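Sketched with the same hypothetical example as above, inlining the variables gives:

```csharp
using System.Linq;

string sentence = "piping makes the flow explicit";
// Everything inlined: you now read the flow inside-out instead of top-to-bottom.
string result = string.Join(" ", sentence.Split(' ').Select(w => w.ToUpperInvariant()));
```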

But I hope you find this less readable; that’s why we would extract separate variables, for the same reason we can use the ‘PipeTo’: to have the data flow from top to bottom while keeping readability.

Maybe

In F#, we use the ‘Option’ type to indicate that a value might be missing. In Haskell we use ‘Maybe’. By default, in C# and other imperative languages, we use ‘null’ to indicate that there’s a value missing.

I’m not going to explain fully why, because that would lead us too far. There are many posts and books that explain this. Even the inventor of the null reference called it his ‘Billion Dollar Mistake’.

So, we use another type to indicate this missing value. So what?

Well, this is very powerful and a lot more robust, because now you know exactly where a value is present and where not. C# (for example) doesn’t have any such thing for reference types, but it has a weaker type called ‘Nullable<>’ for value types.

My second change was to implement some basic functionality of the Maybe Monad.
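The original implementation is not shown here; a hypothetical minimal sketch of such a ‘Maybe’ type could look like this:

```csharp
using System;

public sealed class Maybe<T>
{
    private readonly T _value;

    private Maybe() { }
    private Maybe(T value) { _value = value; IsPresent = true; }

    public bool IsPresent { get; }
    public T Value => IsPresent
        ? _value
        : throw new InvalidOperationException("No value present.");

    public static Maybe<T> Just(T value) => new Maybe<T>(value);
    public static Maybe<T> Nothing { get; } = new Maybe<T>();

    // 'Bind' runs the given Maybe-returning function only when a value is present.
    public Maybe<TResult> Bind<TResult>(Func<T, Maybe<TResult>> f) =>
        IsPresent ? f(_value) : Maybe<TResult>.Nothing;

    // 'Select' is the C# name for F#'s 'Map'.
    public Maybe<TResult> Select<TResult>(Func<T, TResult> f) =>
        IsPresent ? Maybe<TResult>.Just(f(_value)) : Maybe<TResult>.Nothing;

    // 'Where' is the C# name for F#'s 'Filter'.
    public Maybe<T> Where(Func<T, bool> predicate) =>
        IsPresent && predicate(_value) ? this : Nothing;
}
```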

The ‘Map’ in F# is the ‘Select’ in C#,
the ‘Filter’ in F# is the ‘Where’ in C#.

With this simple implementation, I’ve created some functionality that we can use to start implementing missing values with the ‘Maybe’ type.

The first thing I explained about this type is the binding functionality. When you show people the endless ‘if’ structures that would all check ‘IsPresent’ on this type, you can show that this is exactly what the ‘Bind’ does:
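The original snippet is not shown; a hypothetical sketch of those nested checks (the ‘GetCustomer’/‘GetAddress’/‘GetCity’ names are my own) looks like:

```csharp
// Every step checks 'IsPresent' before going on to the next one.
Maybe<Customer> maybeCustomer = GetCustomer(id);
if (maybeCustomer.IsPresent)
{
    Maybe<Address> maybeAddress = GetAddress(maybeCustomer.Value);
    if (maybeAddress.IsPresent)
    {
        Maybe<string> maybeCity = GetCity(maybeAddress.Value);
        // ...
    }
}
```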

Normally these ‘if’ structures would check for ‘null’. If we use our well-known refactoring practices, we see that there’s duplication. The thing that varies is the action that must be executed when there’s a value. This is exactly what the ‘Bind’ method receives, so we could rewrite it like this:
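Continuing the same hypothetical example (assuming each method returns a ‘Maybe’), the rewrite could look like:

```csharp
// The duplicated 'IsPresent' checks are captured once, inside 'Bind'.
Maybe<string> maybeCity =
    GetCustomer(id)
        .Bind(customer => GetAddress(customer))
        .Bind(address => GetCity(address));
```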

The other two methods, ‘Where’ and ‘Select’, should be familiar to you if you know LINQ. It’s strange to see that experienced C# developers know the functionality of LINQ but aren’t yet using the concepts behind LINQ in their own designs. LINQ is functional programming.

The ‘Select’ takes the value from the ‘Maybe’ instance (if there is one) and executes a method that accepts this value. The return value of ‘Select’ is then a new ‘Maybe’ instance with the result of the just-executed method.

The ‘Where’ takes a predicate and will return a ‘Maybe’ instance if the value is present and the predicate holds for the value inside the ‘Maybe’.

This type itself isn’t functional, but what we do with it is; that’s why I think it’s also a good first step into functional programming in C#.

Extensions

I showed some examples of how we can achieve a more functional approach in an object-oriented language like C#. We can extend this idea and come up with even more extensions:
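The original code is not included here; a hedged sketch of what such extensions could look like:

```csharp
using System;
using System.Collections.Generic;

public static class MoreFunctionalExtensions
{
    // Runs a side-effect (dead-end) function on each item of a sequence.
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (var item in source) action(item);
    }

    // Runs a dead-end function inside a pipeline, then passes the original value along.
    public static T Tee<T>(this T value, Action<T> action)
    {
        action(value);
        return value;
    }

    // Pipes a disposable resource into a function and disposes it afterwards.
    public static TResult Use<T, TResult>(this T resource, Func<T, TResult> func)
        where T : IDisposable
    {
        using (resource) return func(resource);
    }

    // Composes two functions into a single function.
    public static Func<T, TResult> Compose<T, TMiddle, TResult>(
        this Func<T, TMiddle> first, Func<TMiddle, TResult> second) =>
        x => second(first(x));
}
```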

The first ones are probably the simplest. We define a foreach loop that we can use to run through a list of items and execute a (side-effect/dead-end) function on each one. We also define a ‘Tee’ that we can use to send a ‘dead-end’ function inside a pipeline. We don’t have to stop after our method returns ‘void’; we can just continue with the original value.

I also added a ‘Use’ extension to pipe a disposable resource and a ‘Compose’ extension to compose two functions together into a single function.

Now I think it would be a good exercise in functional programming to come up with some changes to your code that use these extensions!

Conclusion

Instead of directly writing the software in a different language, with different keywords, syntax, practices, … I discovered that people are more comfortable if I use functional approaches first in the languages they're familiar with.

This way, you can clearly see in the same language syntax the “Before” and “After” part.

Remember, don’t try to convince other people of your opinion. Everyone has a different view and a different way of working and feeling comfortable. The only thing you can do is show how you work and how you see things, without blaming other languages or people because they use a different approach.

Just like everything else, try to be open-minded!

Categories: Technology
written by: Stijn Moreels

Posted on Wednesday, October 4, 2017 1:52 PM

Toon Vanhoutte by Toon Vanhoutte

By creating a uniform API on top of several heterogenous service operations, we also simplify the security model for the API consumer.

After the configuration we've done in part 1, we've hidden the complexity of maintaining 4 SAS tokens and 1 function code client-side. Be aware that, at the moment, the Azure Function Proxy is not secured by default. In some cases, this might be the desired behaviour; in other scenarios we would like to restrict access to the API. Let's have a look at how we can achieve the latter!

Enforce Authentication

You can leverage the default App Service authentication feature, that forces clients to get authenticated against one of these providers: Azure Active Directory, Facebook, Google, Twitter & Microsoft. This can be done without any code changes. This only covers authentication. When authorization is required, some minimal code changes are needed.

Suggestions for product team

  • Common security measures like IP restrictions and configurable rate limits to protect against DoS attacks would be great. There is already a feature request on UserVoice.

  • Leveraging the standard Azure Function keys or host keys would also be a simple way to authorize the API endpoint. You can easily set up rotating keys to improve security. Apparently this is on the radar, but there's no ETA defined yet!

Cheers,
Toon

Categories: Azure
Tags: Functions
written by: Toon Vanhoutte

Posted on Tuesday, October 3, 2017 9:28 AM

Stijn Moreels by Stijn Moreels

In this post, we will look at how F#'s feature Active Patterns can help build a clear, declarative solution for the Validation of Domain Models. By using a Partial Pattern for each Business Rule, we can clearly see how the input is restricted to verify each rule.

Introduction

The reason I wrote this post was to learn more about F# Active Patterns, and how I can use this for certain, specific problems. They say this feature is a real “killer” feature of the F# environment, so I found it a good exercise to think about how I can use this in my daily practice.

Scott Wlaschin has an amazing blog post series where he writes about this topic. He shows how we regularly miss the true definition of the domain and how we can fix this with the simplicity of F#.

My blog post builds upon that idea and looks how we can validate our models in the same simplicity.

Domain Modeling

When thinking about the modeling of the domain, F# has a very nice way to express this. Throughout this post, I will be using F# for my modeling and for the validation of my model. Sometimes I will show you what the alternative would look like in an Object-Oriented language like C#.

Book

Ok, let’s define our model. We want to define a “Book” in our domain. A book in our domain has several items which define “a book”; but for the sake of this exercise we’ll keep it very short:
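The original snippet is not included here; the naive model probably looked something like this (field names are assumptions):

```fsharp
// Naive model: every field is a primitive type.
type Book =
    { ISBN : string
      Author : string
      Pages : int }
```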

Just like Scott Wlaschin has asked the question, I'll ask it again: “What’s wrong with this design?”.

Several things. As a Security Expert, you could say that we could have a problem if someone enters negative pages, or special characters for the ISBN or the Author.
As a Domain Expert, you could say that this model doesn’t actually represent the domain.

PositiveInt

Let’s start with a simple one: we can’t have negative pages; so, let’s define a new type for this. Note that we have declared it “private” so we can’t call this new type directly via its Value Constructor. Because we have made it private, we need another function that will create this type for us. When we enter a negative number, we can’t create the type. That sounds like an Option to me:
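The original sample is missing; a minimal sketch of such a type with a Constructor Function could be:

```fsharp
// The Value Constructor is private, so 'PositiveInt 42' can't be called from outside.
type PositiveInt = private PositiveInt of int

// Constructor Function: only positive numbers yield a value.
let createPositiveInt x =
    if x > 0 then Some (PositiveInt x) else None
```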

FYI: at this point, Scott's talk stops, because the talk is about the domain itself and not about how to refactor the validation of the models.

Now we can start with the refactoring to Active Patterns. Because this is a simple type, I think you can’t see the immediate benefit of this approach; so, hang on. We use the Partial Pattern approach for these Business Rules because we can’t wrap all possible values in a single pattern.

The Partial Pattern approach needs a return type of unit option. We can use the Some branch to return the pattern itself and the None branch can be used to specify that the input doesn’t match this pattern.
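A sketch of what the refactored version could look like, with the Business Rule extracted into a Partial Pattern:

```fsharp
// 'Some ()' means the input matches the pattern; 'None' means it doesn't.
let (|Positive|_|) x =
    if x > 0 then Some () else None

type PositiveInt = private PositiveInt of int

// The Constructor Function now reads as a list of rules instead of guard clauses.
let createPositiveInt x =
    match x with
    | Positive -> Some (PositiveInt x)
    | _ -> None
```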

One can argue about the over-engineering of this approach; but personally, I find this a way more simplistic approach than the inlined Guard Clauses in the Match Expression.

Our book looks now like this:
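The updated record is not shown in this dump; presumably it swaps the primitive for the new type:

```fsharp
type Book =
    { ISBN : string
      Author : string
      Pages : PositiveInt }
```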

String50

Next up is the author’s name. It’s reasonable to think that the length of the name will be no longer than 50 characters.

We can specify all these rules in our model the same way as we did with the pages:
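The original sample is missing; a hedged sketch with two Partial Patterns combined via an AND pattern:

```fsharp
open System

let (|NotNullOrEmpty|_|) (s : string) =
    if not (String.IsNullOrEmpty s) then Some () else None

// Parameterized pattern: the maximum length is passed in as an argument.
let (|MaxLength|_|) max (s : string) =
    if s.Length <= max then Some () else None

type String50 = private String50 of string

let createString50 s =
    match s with
    | NotNullOrEmpty & MaxLength 50 -> Some (String50 s)
    | _ -> None
```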

Notice that we now have two branches that cover our type. By extracting the rules into Partial Patterns, we have made it clear, in our Constructor Function, that we need a string that isn’t “null” or empty and is a maximum of 50 characters long.

Now, how would we specify this in C#? Because we don’t have an option type by default, only a weaker Nullable<T> type, we normally use exceptions.
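A hypothetical C# equivalent (not the author's original code), enforcing the same rules with exceptions in the constructor:

```csharp
using System;

public class String50
{
    public string Value { get; }

    public String50(string value)
    {
        // The business rules live as guard clauses inside the constructor.
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Value cannot be null or empty.", nameof(value));
        if (value.Length > 50)
            throw new ArgumentException("Value cannot exceed 50 characters.", nameof(value));
        Value = value;
    }
}
```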

Note that in the F# version we can reuse the pre-conditions for the empty string and the length across our application, while in C# we must redefine them for every class (unless, of course, we extract this functionality into some “utility” classes).

Now, our book type looks like this:
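The updated record again isn't shown here; presumably:

```fsharp
type Book =
    { ISBN : string
      Author : String50
      Pages : PositiveInt }
```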

ISBN13

The last type is the most interesting and the reason why I would use Active Patterns for my Domain Model Validation.

If we have a more complex type, for example an ISBN13 number, how would we model that? First of all, let’s specify some requirements:

  • Number must have length of 13
  • Last number is checksum

The checksum is calculated by evaluating the following steps:

  1. Take the 12 first chars
  2. Multiply the even numbers in the sequence with 3
  3. Take the sum of all the results
  4. Modulo 10 the result
  5. Subtract the result from 10 if the outcome isn’t zero
  6. Final result must be the same as 13th number

I came up with this:
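The original code is missing from this dump; a hedged reconstruction following the steps above (pattern names are my own):

```fsharp
open System

// The length rule takes a function, so it can express =, >, <, ...
let (|Length|_|) check (s : string) =
    if check s.Length then Some () else None

let (|OnlyDigits|_|) (s : string) =
    if Seq.forall Char.IsDigit s then Some () else None

let (|ValidChecksum|_|) (s : string) =
    let digits = [ for c in s -> int c - int '0' ]
    if List.length digits <> 13 then None
    else
        let sum =
            digits
            |> List.take 12                                          // step 1
            |> List.mapi (fun i d -> if i % 2 = 1 then d * 3 else d) // step 2
            |> List.sum                                              // step 3
        let check = (10 - sum % 10) % 10                             // steps 4-5
        if check = List.item 12 digits then Some () else None        // step 6

type ISBN13 = private ISBN13 of string

let createISBN13 s =
    match s with
    | Length ((=) 13) & OnlyDigits & ValidChecksum -> Some (ISBN13 s)
    | _ -> None
```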

What I like about this is the declarative nature of the checksum calculation and the fact that you can immediately see what rules we have in our ISBN validation.

Note that I changed the Active Pattern for the length of the string by passing in a function; this way I can reuse it for my String50 type and for this one, AND you can see more clearly what exactly we're trying to validate with the string's length (greater than, equal to, ...).

Now, I wanted to check this in C#. To achieve the same level of simplicity, we would extract each rule into its own method:
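A hypothetical C# counterpart (not the original sample), with each rule in its own method and arguments passed along with the rules:

```csharp
using System;
using System.Linq;

public class ISBN13
{
    public string Value { get; }

    public ISBN13(string value)
    {
        if (!HasLength(value, 13))
            throw new ArgumentException("ISBN must be 13 characters long.", nameof(value));
        if (!HasOnlyDigits(value))
            throw new ArgumentException("ISBN must contain only digits.", nameof(value));
        if (!HasValidChecksum(value))
            throw new ArgumentException("ISBN checksum is invalid.", nameof(value));
        Value = value;
    }

    // The expected length is an argument, so the rule is reusable in other models.
    private static bool HasLength(string s, int length) => s.Length == length;

    private static bool HasOnlyDigits(string s) => s.All(char.IsDigit);

    private static bool HasValidChecksum(string s)
    {
        var sum = 0;
        for (var i = 0; i < 12; i++)
        {
            var digit = s[i] - '0';
            sum += i % 2 == 1 ? digit * 3 : digit; // even positions (1-based) times 3
        }
        var check = (10 - sum % 10) % 10;
        return check == s[12] - '0';
    }
}
```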

If we extract each rule into a method, I think we get that same simplicity. But we should send some arguments with the rules, not just for reusability in other models but for readability as well.

Take for example the Regular Expression rule. It’s much simpler to just send the pattern with the rule than to come up with some name for the method (or Active Pattern) that would satisfy what you’re trying to verify.

Note that the C# version isn’t done yet and must be refactored, since there’s a lot going on that can’t be comprehended as quickly as the F# version (but that's just my opinion).

Before you say anything about LINQ: I explicitly used an imperative approach, because otherwise we would be using functional programming again. When I compare functional with imperative, I always push both styles to the extreme, so I can quickly see what the actual difference is.

Testing

Of course, to be complete, let’s write some properties for our newly created types. I found that not every type is that obvious to write properties for, so it might be a good exercise for you as well.

PositiveInt Properties

First, let us look at the positive integer type. This was the simplest type to model and is also the simplest type to test. I came up with these two properties for the two branches:
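The original properties are not shown here; a hedged FsCheck sketch of what the two branches could look like:

```fsharp
open FsCheck
open FsCheck.Xunit

[<Property>]
let ``positive numbers create a PositiveInt`` () =
    Gen.choose (1, 10000)
    |> Arb.fromGen
    |> Prop.forAll <| fun x ->
        createPositiveInt x |> Option.isSome

[<Property>]
let ``non-positive numbers create nothing`` () =
    Gen.choose (-10000, 0)
    |> Arb.fromGen
    |> Prop.forAll <| fun x ->
        createPositiveInt x |> Option.isNone
```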

String50 Properties

The next type must have a length of 50 chars to be a valid type. Following properties came to mind:
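Again the original is missing; a hedged sketch generating strings of a controlled length:

```fsharp
open FsCheck
open FsCheck.Xunit

[<Property>]
let ``strings of at most 50 chars create a String50`` () =
    Gen.choose (1, 50)
    |> Gen.map (fun n -> String.replicate n "a")
    |> Arb.fromGen
    |> Prop.forAll <| fun s ->
        createString50 s |> Option.isSome

[<Property>]
let ``longer strings create nothing`` () =
    Gen.choose (51, 500)
    |> Gen.map (fun n -> String.replicate n "a")
    |> Arb.fromGen
    |> Prop.forAll <| fun s ->
        createString50 s |> Option.isNone
```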

ISBN13 Properties

Now, the last type is probably the most interesting. We must generate valid ISBN numbers to check if the checksum acts properly. I came up with a Test Oracle as another way to express the checksum, so I could filter with this expression to generate valid ISBN13 numbers:
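The original property is not included; a hedged sketch of such a Test Oracle and a generator filtered by it:

```fsharp
open FsCheck
open FsCheck.Xunit

// Test Oracle: an independent expression of the checksum rule.
let checksumOracle digits =
    let sum =
        digits
        |> List.take 12
        |> List.mapi (fun i d -> if i % 2 = 1 then d * 3 else d)
        |> List.sum
    (10 - sum % 10) % 10 = List.item 12 digits

[<Property>]
let ``valid ISBN13 numbers create an ISBN13`` () =
    Gen.choose (0, 9)
    |> Gen.listOfLength 13
    |> Gen.filter checksumOracle                       // keep only valid numbers
    |> Gen.map (List.map string >> String.concat "")   // digits -> "9781234567897"-style string
    |> Arb.fromGen
    |> Prop.forAll <| fun isbn ->
        createISBN13 isbn |> Option.isSome
```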

I love the way FsCheck allows me to write such properties with such little effort. Now I have a way to generate random, valid ISBN13 numbers. Notice that I didn't check the other Active Pattern branch, perhaps this is a good exercise for you? All other cases should result in None.

Small side note: the assertion is now valid (accessible) because I wrote the types and properties in the same file. When this isn't the case, we could test for any type (with wildcards) wrapped inside a Some case, instead of actually creating an ISBN13 or any other type. That way, we could change the values for that type without changing our test. For the sake of this exercise, I thought it was clearer to assert the type this way.

Love to hear your opinion!

Conclusion

In this post, we looked at how F#'s feature Active Patterns can help build a clear, declarative solution for the Validation of Domain Models. By using a Partial Pattern for each Business Rule, we can clearly see how the input is restricted to verify each rule.

In an object-oriented approach, you would normally create a class to wrap the domain model and specify the business rules inside the constructor, while in functional programming, this can be done by making the Value Constructor private and creating a new Constructor Function which uses Active Patterns to specify each business rule.

Thanks for reading!

Categories: Technology
Tags: F#
written by: Stijn Moreels

Posted on Monday, October 2, 2017 10:33 AM

Toon Vanhoutte by Toon Vanhoutte

Connecting cloud services to on-premises APIs can be quite challenging. Depending on your setup, there are multiple options available. The most enterprise-grade options reside at the network level: ExpressRoute and Site-to-Site VPN. Another option is leveraging Azure App Service Hybrid Connections, which gives you a very simple way to connect to on-premises resources at the TCP level, in a firewall-friendly and highly available manner. This blog will demonstrate how you can consume an on-premises API via Azure Function Proxies, without any coding at all.

You can find the documentation of Azure App Service Hybrid Connections over here.

Instructions

Follow these instructions to set up hybrid connectivity through Hybrid Connections:

  • In order to take advantage of hybrid connections, you must ensure that you create the Azure Function Proxy within an Azure App Service Hosting Plan.


  • Navigate to Platform features and click on Networking


Consumption plans do not support networking features, as they are instantiated at runtime.

  • Click on Configure your hybrid connection endpoints


  • Download the Hybrid Connection Manager

  • Start the installation by accepting the License Agreement.

  • The installation doesn't take long.

  • Click Finish to complete the installation.

  • Open the Hybrid Connection Manager UI desktop app.

  • At this moment, you should not see any hybrid connection. Ignore the 'mylaptop' connection in the screen capture below, as this is still a legacy BizTalk hybrid connection.

  • Back in the Azure Function Networking blade, click Add hybrid connection.

  • Choose Create new hybrid connection.

  • Give the connection a name, provide the endpoint host name and port. As Hybrid Connections leverages ServiceBus relay technology underneath, you need to provide a ServiceBus namespace.

  • In the local Hybrid Connection Manager, choose Configure another Hybrid Connection.

  • Select the previously created hybrid connection and click Save.

  • If all goes well, you should see the Connected status for the hybrid connection.

  • The Azure Portal should also display a Connected status.

  • You can now configure the Azure Function Proxy with an external / public URL that points to the local backend URL, which is now available through the Hybrid Connection Manager.
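As a sketch, such a proxy definition in proxies.json could look like this (the proxy name, host name and route are hypothetical):

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "onPremisesApi": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/{*path}"
      },
      "backendUri": "http://my-local-host:8080/api/{path}"
    }
  }
}
```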

Now you can access your local API from the external world! Be aware that there is currently no security measure applied. This will be covered in the next part of this blog post series.

High Availability

In case you need high availability, you can install another Hybrid Connection Manager that connects to the same hybrid connection. The documentation states the following:

Each HCM can support multiple hybrid connections. Also, any given hybrid connection can be supported by multiple HCMs. The default behaviour is to round robin traffic across the configured HCMs for any given endpoint. If you want high availability on your hybrid connections from your network, simply instantiate multiple HCMs on separate machines.

Enjoy this easy hybrid connectivity!
Toon


Categories: Azure
Tags: Functions
written by: Toon Vanhoutte