
Codit Blog

Posted on Wednesday, October 25, 2017 1:44 PM

by Stijn Moreels

One of the first questions people ask me about functional programming is about readability. It was a question I had myself when I started learning functional concepts.

Readability

Now, before moving on: the term “readability” is very subjective, so it’s not easy to find a definition everyone agrees on; still, we can establish some common ground.

Single-Letter Values and Functions

The first thing I (and maybe many before and after me) discovered was that the naming conventions of an object-oriented language can’t simply be carried over to a functional environment.

Functional programmers have a habit of giving values and functions very short names, often just a single letter. In an object-oriented language this habit is almost always a bad practice (except maybe for the index of a for-loop?).

So, why is this different in a functional environment?

The answer to this can be many things, I guess; one thing that comes to mind is that in a functional environment you very often write functions that can be used for any type (generic ones). Naming such values and functions can be difficult. Should we name it “value”?

In functional languages, x is most of the time used for this “value”. Strangely enough, by using x I found the code a lot clearer. So, for values: x, y and z; for functions: f, g and h. (Note that these are the same letters we use in mathematics.)

When we talk about multiple values, we add a trailing 's', like xs.

OK, look for example at the following “bind” function for the Either monad (as used in Railway Oriented Programming):
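
Roughly, an explicitly named version might look like this (a sketch only, with F#'s built-in Result type standing in for the Either):

```fsharp
// explicit, imperative-style names; Result plays the role of the Either
let bind (switchFunction : 'a -> Result<'b, 'error>) (twoTrackInput : Result<'a, 'error>) =
    match twoTrackInput with
    | Ok success    -> switchFunction success
    | Error failure -> Error failure
```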

We have written the values explicitly like we would do in an imperative scenario. Now look at the following:
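
Something more along these lines (again just a sketch):

```fsharp
// the same function, now with single-letter names
let bind f x =
    match x with
    | Ok v    -> f v
    | Error e -> Error e

// and as an infix operator; the arguments passed to bind are flipped
let (>>=) x f = bind f x
```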

Especially with the infix operator, the most important part is that we clearly see that the arguments passed in to the “bind” function are flipped. That was something we couldn’t quite see immediately in the first example.

After a while, when you see an f somewhere, you automatically understand it’s a function, just like x is some “value”.

Now, I’m not going to state this as a fact; but in my personal opinion, the second example shows more of what’s going on, with less explicit names. We reach a higher level of readability by abstracting our names, which feels rather strange at first.

Partially Applied Functions

One of the powerful concepts in functional languages that I really miss in object-oriented languages (without, of course, custom-made functionality like Extension Methods in C#) is the concept of Partial Application. The idea is that when you pass an argument to a function, you get back a function expecting the remaining arguments. This concept is very strong because a three-argument function can now be applied one argument at a time, each application returning a new function that takes the arguments that are left.
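
A quick sketch of that idea:

```fsharp
// a three-argument function...
let add a b c = a + b + c        // int -> int -> int -> int

// ...applied one argument at a time; every application returns a new function
let addTen    = add 10           // int -> int -> int
let addTwelve = addTen 2         // int -> int
let seventeen = addTwelve 5      // 17
```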

In practice, this concept can really help when declaring problems. In my previous post, I solved the Coin Change kata. In one of the properties, I needed to describe the overall core functionality. In the assertion, I needed to assert on the sum of all the coin values:
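
Something along these lines (a sketch only; changeFor, coinValue and remaining are placeholder names, not the kata’s actual functions):

```fsharp
// placeholder domain functions, only here to make the sketch compile
let changeFor (amount : int) : int list = []   // coins handed out as change
let coinValue (coin : int) = coin              // value of a single coin
let remaining (amount : int) = amount          // amount that couldn't be changed

// hypothetical shape of the assertion, using the partially applied (+) and (=)
let ``change plus remainder adds up to the amount`` amount =
    changeFor amount              // "For the Change"
    |> List.map coinValue         // "We need to have each value"
    |> List.sum                   // "So we can sum all the values"
    |> (+) (remaining amount)     // "And sum it with the remaining value"
    |> (=) amount                 // "This should be the same as we expected"
```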

The “( )” around the operators “+” and “=” make sure I get back a function that still expects the remaining argument. I could also have written the assertion as the following expression:
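
For instance (again a sketch, reusing the placeholder names from above):

```fsharp
// the same assertion with the anonymous functions written out explicitly
let ``change plus remainder adds up to the amount (explicit)`` amount =
    changeFor amount
    |> List.map (fun coin -> coinValue coin)
    |> List.sum
    |> fun total -> total + remaining amount
    |> fun result -> result = amount
```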

And I can understand that, in the beginning, this may be more understandable to you than the previous example. But please note that, with the anonymous functions written out explicitly, we have written a “lot” of code just for an addition and an equality check.

I personally think that every functional programmer would refactor this to the first example. Not only can it be written in fewer characters, it also expresses better what we’re trying to solve. In imperative languages, we typically assign a result to a variable, pass that variable into the next call, assign again, … and without realizing it you’ve created a pipeline. I like this concept very much. I don’t have to assign each intermediate result to a value anymore, I can just pass it along to the next function.

“For the Change”
“We need to have each value”
“So we can sum all the values”
“And sum it with the remaining value”
“This should be the same as we expected”

Notice that the verb always comes at the front of the sentence. By partially applying one of the arguments, the action we’re trying to express in code is now at the front of the line instead of at the end.
Also note that, when we specify the function explicitly, you can’t read the second example from top to bottom without moving your eyes to the right to find the addition or the equality check; which is actually the most important part of each line.

This is also one of the reasons I like this form of programming.

Yes, I know it takes some time to get used to this way of writing functions; but I can assure you: once you have mastered this technique, you will want it in your favorite object-oriented language as well. (That's one of the reasons I implemented them as Extension Methods.)

Infix Operators

One thing I haven’t found a common approach for yet is the feature of defining your own operators, and where to use them. Infix operators can make your code a lot cleaner and more readable; but they can also harm readability, which is probably why it’s difficult to define a common approach.

Many operators are already available, and by defining your own operators that look similar, readers can guess what the new operator does.

The (|>) pipe operator already exists, and by defining operators like (||>) or (|>>), we can guess that they have something to do with piping more than one argument, or with piping combined with composing.
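
For example ((||>) ships with FSharp.Core; the (|>>) below is just one hypothetical way to define such an operator):

```fsharp
// (||>) is built in: it pipes a pair of arguments into a two-parameter function
let eight = (10, 2) ||> (-)              // 8

// a hypothetical (|>>): pipe a value through one function, then through a second
let (|>>) x (f, g) = x |> f |> g
42 |>> (string, printfn "value: %s")     // prints "value: 42"
```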

I haven’t found a general rule for this, but I guess it’s something that must be used carefully. If we defined an operator for every function, the code would become less readable.

The (>>=) operator is used for binding, so it’s reasonable to define it instead of writing “bind” over and over again; we’re actually more interested in WHAT you’re trying to bind. The same can be said about the (<*>) operator for applicative apply, or the (<!>, <$>) operators for mapping. When you see (<|>) you know it has something to do with choosing between two alternatives, since it “pipes” in two directions (“if, then?”). Some operators are so well known that defining them is probably never questionable.
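
To make those names concrete, here is a sketch of how such operators are commonly defined, written here for the Option type (conventions differ between libraries):

```fsharp
// convention-based operator definitions, sketched for Option
let (>>=) m f = Option.bind f m                  // bind: m >>= f
let (<!>) f m = Option.map f m                   // map (written <$> in other languages)
let (<*>) mf m =                                 // applicative apply
    match mf, m with
    | Some f, Some x -> Some (f x)
    | _ -> None
let (<|>) a b =                                  // "or else": pick the first present value
    match a with
    | Some _ -> a
    | None   -> b

// usage: parse two numbers and add them
let tryParse (s: string) =
    match System.Int32.TryParse s with
    | true, v -> Some v
    | _ -> None

let sum = (+) <!> tryParse "1" <*> tryParse "2"  // Some 3
```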

FsCheck defines the (.&.) and (.|.) operators to express the AND and OR of properties. We already know the boolean operators without the leading and trailing dots, which is why it’s easy to guess what these infix operators do.
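
A property combining two checks might look something like this (assuming the FsCheck package is referenced):

```fsharp
open FsCheck

// two familiar arithmetic facts, combined with FsCheck's .&. (AND) operator
let additionBehavesWell (x: int) (y: int) =
    (x + y = y + x) .&. (x + 0 = x)   // commutativity AND identity

Check.Quick additionBehavesWell
```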

The tricky part is when we use too many operators. I would reserve them for cases where we’re changing the data flow in a way that we can reuse somewhere else. In those cases it’s probably a good approach to define an infix operator.

Conclusion

This small blog post was a reminder to myself of why I can write fewer characters and still end up with more declarative code. It felt strange to think about at first. In object-oriented languages, when you talk about shorter names, short-hand operators, … you quickly end up with an “anti-pattern” or a bad practice, while in functional programming this is the idiomatic way to do it.

Both imperative and functional programmers are right, in my opinion. It’s just about the way each language allows us to write clear, clean, readable code, because that is really what we all want to do.

Categories: Technology
written by: Stijn Moreels

Posted on Monday, October 23, 2017 11:18 AM

by Glenn Colpaert

In this blog post, I will explain why IoT is more than just collecting some data from devices, and why it's important to engage the business in your IoT solution, next to your perfectly built architecture.

Simplifying IoT, one Azure service at a time!

The Internet of Things (IoT) isn't a technology revolution, it is a business revolution enabled by technology. By 2020, there will be 26 billion connected 'things' and IoT will be good for a $12 trillion market. These connected 'things' will range from consumer-driven IoT, such as wearables and home automation, to intelligent industrial scenarios like smart buildings and intelligent machine infrastructures.

In this blog post, I will go deeper into detail on why IoT is more than just collecting some data from devices and explain why it's important to engage the business in your IoT solution, next to your perfectly built architecture. I will talk about some of the more complex things you need to think about when building and designing your solution. Some of them might be scary or sound very complex to tackle, but remember that some of these solutions are just one Azure service away...

A simple view on IoT

When creating a simplified overview of an IoT project or solution, it can be boiled down to the following 4 key components.
An IoT project always comes down to securely connecting your devices to the cloud and letting your local data streams flow into the cloud. Once your device data is stored in the cloud, you can start creating insights from it. Based on those insights, you can bring business intelligence to the business and allow them to act upon actions or events raised from that data and trigger additional workflows.

IoT projects can be complex!

However, when taking a closer look at IoT projects, there is more to it than the above 4 key components, especially when moving from a POC setup to a full-blown, production-ready solution with potentially thousands of devices in the field. As IoT is a business-driven revolution, the most important point is that the business needs to be involved from the very start, as they are the key drivers of your IoT project. The risk of not involving the business is that you get stuck in POC limbo and your IoT solution never sees the light of day. Once you get the business on board, things get easier... or not. Some of the most important technical questions and decisions are listed below; each of them is just a small part of your entire solution.

How to connect things that are hard to connect?

Getting your IP-enabled devices connected to the cloud is one thing, but how will you connect your existing devices, which don't speak IP, to the cloud? What if your devices are not capable of change, or the risk of changing them is too high? Or what if your devices aren't even allowed to talk to the cloud for security reasons? When this is the case, you might need to look at other ways to connect your devices to the cloud, for example by introducing a gateway that acts as a 'bridge' between your devices and the cloud platform.

Device Management/lifecycle

Once your devices are connected, there are still some open questions and challenges you need to tackle before processing your data. How will you securely identify and enroll your devices onto your IoT platform? How will you scale that enrollment to many devices? Next to enrollment, there is also the question of configuring and managing your devices. When looking at device management and lifecycles, there are a couple of management patterns, such as reboots, configuration updates or even software updates.

Data storage/Visualization

Another key component within an IoT solution is data. Data is key to getting the insights the business is looking for. Without a proper data storage and visualization strategy you're in for some trouble; think fast I/O and high scale. When it comes to storing your data, there is no silver bullet. It really depends on the use case and what the ultimate goal is. The key is to pick the storage based on what you will do with the stored data. Some storage is a perfect input for your analytics tiers but might not be a good option when it's just about archiving the data for later use.

Analytics

As already mentioned in this blog, data is key inside your IoT solution. The real value of your IoT project lies in making sense of your data and getting insights from it. Once you have captured those insights, it is key to connect them back to the business and evolve your business by learning from them.

Edge Computing

When doing IoT projects, you're not always in the position of having fully connected sites or factories. There might be a limit on communication bandwidth or even limited internet connectivity. What if you would like your devices to only send the last minute's aggregated data to the cloud? What if you would like to keep all your data close to your device and only send fault data to the cloud? If this is the case, you need to introduce edge computing into your IoT solution. Edge computing allows you to perform buffering, analytics, machine learning and even execute custom code on your device, without the need for a permanent internet connection.
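
As a trivial, runtime-agnostic illustration of that idea: buffer the readings locally and only forward one aggregate per minute.

```fsharp
open System

// conceptual sketch: reduce a local stream of readings to one aggregate per minute
let aggregatePerMinute (readings: (DateTimeOffset * float) list) =
    readings
    |> List.groupBy (fun (timestamp, _) ->
        timestamp.ToString("yyyy-MM-dd HH:mm"))        // one bucket per minute
    |> List.map (fun (minute, values) ->
        minute, values |> List.averageBy snd)          // ship only the average
```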

Security

Let's not go into detail on this one. Start implementing it from day zero as this is the most important part of your IoT Solution. Your end to end value chain must be secured. Never cut budget on your security strategy and implementation when doing IoT Projects.

Simplifying IoT

Congratulations, you've survived the scary part... Thanks to the Azure cloud, some of the above challenges are just a couple of button clicks away. The goal of Microsoft and Azure is to make it easier to build, secure and provision scalable solutions from device to cloud. The list of recent IoT innovations on the Azure platform is endless, with a major focus on some of the key challenges every IoT project faces: security, device management, insights and edge computing.
The future is bright, time to do some IoT!!
Cheers, Glenn
Categories: Azure
Tags: IoT
written by: Glenn Colpaert

Posted on Tuesday, October 17, 2017 12:53 PM

by Tom Kerkhove

Auto-scaling is not only a great way to optimize your costs, it is also a flexible way of doing asynchronous processing. We will look at how Azure Monitor Autoscale allows you to define auto-scaling rules, what the caveats are and what would be good additions to the service.

Building scalable systems is crucial for any cloud platform.

One way to achieve this is to decouple your frontend nodes from your backend processing by using the Competing Consumer pattern. This makes it possible to easily add more processing instances (scale out) when the workload grows, for example when messages are filling up the queue.
Automating things is always great, but it is crucial to be aware of what is going on in your platform. This is often forgotten, but it should be part of your monitoring as well.
Once everything is set up, you can save money by optimizing your resources based on your needs, instead of overprovisioning.

A question I have received a couple of times is - Great! But how do I do that?

Enter Azure Monitor Autoscale

Azure Monitor Autoscale enables you to define rules that will automatically scale your workloads based on specific metrics.

These metrics can be Service Bus Queues, Storage Queues, Application Insights, custom metrics and more. Currently, Azure Monitor Autoscale is limited to workloads running on Azure Cloud Services (Yes, you've read that right!), App Service Plans and/or Virtual Machine Scale Sets.

When more advanced auto-scaling rules are required, you can define multiple autoscale conditions. This allows you to vary your scaling based on day of the week, time of day or even date ranges.

This is really useful because it allows you to scale more aggressively over the weekend, when more people are buying products than during working hours. The date ranges are also interesting because you can define specific rules for a specific period, for example when you are launching a new marketing campaign and expect more traffic.

Configuring auto-scaling for an Azure Service Bus Queue

Sello is hosting a platform for selling items online and would like to improve its scalability. To achieve this, they want to start auto-scaling their worker role based on the message count of their Service Bus queue.

In order to configure it, we need to go to "Azure Monitor" and click on "Autoscale". There it will give you an overview of all resources that can be autoscaled and their current status:

As you can see, no auto-scaling is configured yet; we can easily add it by clicking on the specific role we'd like to autoscale.

When no auto-scaling is configured you can easily change the current instance count, or you can enable auto-scaling and define the profile that fits your needs.

Each auto-scaling condition has a name and contains a set of scaling rules that will trigger a scaling action. Next to that, it provides the capability to keep the instance count within certain limits.

When adding a scale rule, you select the metric you want to scale on and basically define the criteria that trigger the action you want to perform, be it scaling out or in.

A cooldown allows your platform to catch up after the previous scaling activity. It avoids adding even more instances while the previous scale action has actually already mitigated the load.

In this case, we're adding a rule to add 2 instances when the active message count is greater than 2000 with a cooldown of 15 minutes.
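
Conceptually, such a rule behaves roughly like the sketch below (an illustration of the behaviour only, not how Azure implements it):

```fsharp
open System

// conceptual model of a scale-out rule with a cooldown (illustration only)
type ScaleOutRule =
    { MetricThreshold : int        // e.g. active message count
      InstancesToAdd  : int
      Cooldown        : TimeSpan }

let shouldScaleOut rule (lastScaleAction: DateTimeOffset) (now: DateTimeOffset) metricValue =
    let coolingDown = now - lastScaleAction < rule.Cooldown
    if metricValue > rule.MetricThreshold && not coolingDown
    then Some rule.InstancesToAdd      // add instances
    else None                          // leave the instance count alone

// the rule from this example: +2 instances above 2000 active messages, 15 minute cooldown
let rule = { MetricThreshold = 2000; InstancesToAdd = 2; Cooldown = TimeSpan.FromMinutes 15.0 }
```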

Scaling out is great, scaling in is even better! Just follow the same principle; here we're removing 1 instance when the criteria are met.

Once everything is configured, your role will start auto-scaling and the configuration looks similar to this:

 

Creating awareness about auto-scaling

Woohoow, auto-scaling! Awesome!

Well, it's great, but we're not done yet. Be aware of how your platform is auto-scaling. By using the Run History you can get an overview of your recent scaling activities and learn from them. Creating scaling definitions is not an easy thing to do and they should be re-evaluated frequently.

As you can see below, we can handle the load without any problem but it can be improved by scaling down more aggressively.

A more proactive way of monitoring this is by using notifications, where you can either send email notifications or trigger an HTTP webhook when a scaling action is happening.

This is very handy when you want to create awareness about these actions - An easy way to achieve this is to create a Logic App that handles these events, similar to how I did this for Azure Alerts.

You can use one centralized handler for this or create dedicated handlers, based on your use-case. I personally prefer to use a centralized handler because it makes it easier to maintain if the handling is the same for all.

When we put everything together, this is a high-level overview of all the settings for auto-scaling.

If we were to add a new autoscale condition, we'd have to specify the period in which it is in effect, during which all other scaling conditions are basically ignored.

Caveats

Defining auto-scaling rules is not easy, and it comes with a few caveats:

Be careful what metric you are auto-scaling on and make sure that it's the correct one. Unfortunately, I've seen a case where we were stuck in an infinite scaling loop because we were auto-scaling our worker roles based on the Message Count of a Service Bus queue. However, Message Count includes not only the active messages but also the dead-lettered messages, which weren't going away. We ended up changing our auto-scaling metric to Active Message Count, which was what we were actually interested in here.

This brings me to monitoring your auto-scaling. This is important not only to detect issues, as I've just mentioned, but also to learn how your platform is scaling and to continuously improve your scaling criteria. It is something that needs to grow, since it is use-case specific.

Protect your budget and include instance limits in your auto-scaling conditions. This will protect you from burning through your resource budget in case something goes wrong, or when having to wait a little longer is not a problem.

Taking auto-scaling to the next level

Azure Monitor Autoscale is great as it is today, but there are a couple of features that would be nice to have:

  • Scaling Playbooks - Similar to Azure Alerts & Security Center's Security Playbooks, it would be great to have native integration with Azure Logic Apps which makes it not only easier but also encourages people to use a centralized workflow of handling these kinds of notifications. Next to that, it also makes it easier to link both resources together, instead of having to copy the URL of the HTTP connector in your Logic App.
  • Event-Driven Auto-scaling - The current auto-scaling is awesome and it provides a variety of metric sources. However, with the launch of Azure Event Grid, it would be great to see Azure Monitor Autoscale evolve to support an event-based approach as well:
    • Autoscale when certain events are being pushed by Azure Event Grid to react instead of polling a specific metric
    • Emit auto-scaling events when actions are being started or finalized. That would allow subscribers to react to that instead of triggering a webhook. This also provides more extensibility: instead of notifying only one webhook, we can basically open it up to everybody who is interested.

That said, I think having both a metric-based & eventing-based model would be the sweet spot as these support their own use-cases.

Conclusion

With Azure Monitor Autoscale it is really easy to define auto-scaling rules that handle all the scaling for you, but you need to be careful with it. Having a good monitoring approach is the key to success here.

Every powerful tool comes with a responsibility.

Thanks for reading,

Tom

Categories: Azure
written by: Tom Kerkhove

Posted on Friday, October 13, 2017 10:50 AM

by Tom Kerkhove

A few weeks ago, Microsoft held another edition of its Ignite conference in Orlando, FL.

After going through most of the announcements and digesting them I found that there were a couple of interesting ones in the security & data space.

Let's have a closer look.

Introducing Virtual Network Service Endpoints (Preview)

With the introduction of Virtual Network Service Endpoints (Preview) you can now protect your Azure resources by moving them inside a VNET and thus restricting access to that VNET or subnet itself.

Currently, this is only supported for Azure Storage & Azure SQL Database/Warehouse but the end goal is to provide this for all services.

By using VNET Service Endpoints you can now fully isolate your resources, because you can remove all access from the public internet, limiting the risk of exposure.

Isolated access has been a long-awaited feature, certainly for Azure Storage & Azure SQL Database, and I am excited and very happy that it's finally here!

Additional resources:

Introducing Azure Data Factory 2.0 (Preview)

This must be my favorite announcement - Azure Data Factory 2.0 (Preview), the next generation of data integration.

While Azure Data Factory 1.0 was limited to a data-slicing model only, it now supports different types of triggers such as webhooks.

With Azure Data Factory 2.0 comes the new Integration Runtime that provides you with the infrastructure to orchestrate data movement, activity dispatching & SSIS package execution, both in Azure & on-premises.

But that's not all, there is more - HTTP activity support, integration with Azure Monitor, integration with Azure Key Vault, and much more! We'll dive deeper into this announcement in a later article.

Additional resources:

Azure DDOS Protection Service (Preview)

Distributed Denial-of-Service attacks can be brutal and are unfortunately very easy to launch. Nowadays, you can find them on the internet as a managed offering, or even do it yourself just like Troy Hunt explains.

That's why Microsoft is announcing Azure DDOS Protection Service (Preview) that allows you to protect your Virtual Networks in order to secure your Azure resources even more.

However, Microsoft Azure already brings you DDOS protection out-of-the-box. The difference here is that Azure DDOS Protection Service takes this a step further and gives you more features & control.

Here is a nice comparison:

Azure DDOS Protection Service is a turn-key solution which makes it easy to use and is integrated into the Azure Portal. It gives you dedicated monitoring and allows you to define policies on your VNETs. By using machine learning it tries to create a baseline of your traffic pattern and identifies malicious traffic.

Last but not least, it also integrates with Azure Application Gateway allowing you to do L3 to L7 protection.

Additional resources:

Taking Azure Security Center to the next level

Another example of Microsoft's investment in security is their recent set of announcements for Azure Security Center. You can use it not only for cloud workloads but for on-premises workloads as well.

Define corporate security standards with Azure Policy (Limited Preview)

Azure Policy allows you to define corporate standards and enforce them on your Azure resources to make sure that the resources are compliant with your standards. They also come with some default rules, such as running at least SQL Server 12.0 and can be scoped to either a management group or resource group level.

By using initiative definitions, you can group one or more policy definitions into a set of requirements. An example could be an initiative that consolidates all SQL-database-related definitions.

To summarize, Azure Policy allows you to define security standards across multiple subscriptions and/or resource groups making it easier to manage your complete infrastructure.

It is currently in limited preview, but you can sign up for the preview in the Azure portal.

Introduction of Security Playbooks

With the addition of Security Playbooks you can now easily integrate certain playbooks in reaction to specific Security Center alerts.

It allows you to create & link an Azure Logic App which orchestrates the handling of the alert, tailored to your security needs.

Investigation Dashboard

Azure Security Center now provides a new visual, interactive investigation experience to analyze alerts and determine their root cause.

It visualizes all relevant information linked to a specific security incident, in this case an RDP brute force attack.

It makes it a lot easier to get the big picture of the potential cause, but also of the impact of the incident. By selecting certain nodes in the visualization, it provides you with more information about that specific segment. This enables you to drill deeper and get a better understanding of what is going on.

However, these are only a subset of the announcements; you can find all of them in this blog post.

Additional resources:

Introducing SQL Vulnerability Assessment (VA)

SQL Vulnerability Assessment (VA) is a new service that comes with Azure SQL Database and with SQL Server on-premises via SQL Server Management Studio (SSMS).

It allows you to discover, track and remediate potential database vulnerabilities. You can see it as a lite version of Azure Security Center focused on SQL databases, which lists all potential vulnerabilities after running a scan.

This is another example of Microsoft making security more approachable, even if you are not a security expert. After running a scan you will probably see some quick wins making your database more secure step by step.

Additional resources:

Summary

Microsoft made some great announcements at Ignite and this is only the beginning. There were a lot more of them, and I recommend reading more about them on the Azure blog or watching the Ignite sessions on-demand.

Personally, I recommend Mark Russinovich's interesting talk called "Inside Microsoft Azure datacenter hardware and software architecture", which walks you through how Azure datacenters work, their recent investments & achievements and what their future plans are.

Lately, the IT side of Azure has been coming closer to the developer side, where services such as Azure Networking are becoming easier to integrate with PaaS services such as Azure Storage & SQL DB. It looks like this is only the beginning, and we can expect more of these kinds of integrations, making it easier for both IT & devs to build more secure solutions.

Last but not least, don't forget that the Azure Roadmap gives a clear overview of what service is at what stage. Here you can see all services that are in preview for example.

Thanks for reading,

Tom Kerkhove.

Categories: Azure
written by: Tom Kerkhove

Posted on Friday, October 13, 2017 12:05 AM

CONNECT 2017 was a two-day event organized by Codit, filled with integration concepts and the latest trends in Internet of Things and Azure technologies. Read the recap here.

Introduction

CONNECT 2017 focused on Digital Transformation with international speakers from Microsoft, the business and the community. The full-day event was organized in Utrecht and Ghent and inspired participants to strengthen their integration strategy and prepare them for the next steps towards a fully connected company.

This blogpost will capture the key take-aways and some of the lessons learned during both days.

[NL] Opening keynote - Ernst-Jan Stigter, Microsoft Netherlands

Ernst-Jan started off with the fact that we can all agree the cloud is here to stay, and that the next step is to accelerate by applying Digital Transformation. Microsoft's vision on Digital Transformation focuses on bringing people, data and processes together to create value for your customers and keep your competitive advantage. In his keynote, Ernst-Jan explained the challenges and opportunities this Digital Transformation offers.

Microsoft's Digital Transformation framework focuses on 4 pillars: Empower employees, Engage customers, Optimize operations and Transform products where the latter one is an outcome of the first 3 pillars. Digital Transformation is enabled by the modern workplace, business applications, applications & infrastructure, data and AI.

Ernst-Jan continued by laying out Microsoft's strategy towards IoT. By collecting, ingesting and analyzing data, and acting upon that data, customers can be smarter than ever before and able to build solutions that were unthinkable in the past. He shared some IoT use-case examples from Dutch Railways, the City of Breda, Rijkswaterstaat and Q-Park to illustrate this.

[BE] Opening keynote - Michael Beal, Microsoft BeLux

Michael started by explaining what digital transformation means and Microsoft's vision on that subject. Microsoft is focusing on empowering people and building trust in technology.

Michael continued his talk with the vision of Microsoft on the Intelligent cloud combined with an intelligent edge. To wrap up, Michael talked about how Microsoft thinks about IoT and how Microsoft is focusing on simplifying IoT.

Democratizing IoT by allowing everyone to access the benefits of IoT and providing the foundation of Digital Transformation is one of the core missions of Microsoft in the near future.

A great inspiring talk to start the day with in Belgium.

[NL/BE] Hybrid Integration and the power of Azure Services - Jon Fancey, Microsoft Corp

Jon Fancey is a Principal Product Manager at Microsoft and is responsible for the BizTalk Server, Logic Apps and Azure API Management products.

He shares his vision on integration and the fact there is a continuous pressure between the forces and trends in the market. He explains that companies need to manage change effectively to be able to adapt in a quickly changing environment.

Azure enables organizations to innovate their businesses. To deal with digital disruption (rapidly evolving technology), Digital Transformation is required. Jon went through the evolution of inter-organizational communication technologies: from EDI, RPC, SOAP and REST to Swagger/OpenAPI.
Logic Apps now has 160+ connectors available for all types of needs: B2B, PaaS support, SaaS, etc. This number is continually growing, and if needed you can build your own connector and use it in your Logic Apps.

Today, Azure Integration Services consists of BizTalk Server, Logic Apps, API Management, Service Bus and Azure Functions. Each of these components can be leveraged in several scenarios and, when combined, they open up unlimited opportunities. Jon also talked about serverless integration; key advantages are reduced DevOps effort, reduced time-to-market and per-action billing.

[NL] Mature IoT Solutions with Azure - Sam Vanhoutte, Codit

In this session Sam Vanhoutte, CTO of Codit, explained to us how businesses can leverage IoT solutions to establish innovation and agility.

He first showed us some cases from enterprises that are using IoT today to create innovative solutions with relatively little effort, while keeping the total cost of ownership low. He showed us how a large transport company combined Sigfox (an IoT connectivity service), geofencing and the Nebulus IoT gateway to track "black circuit" movements of containers. Sam also showed us how a large manufacturer of food-processing machines uses IoT to connect existing machines and gather data for remote monitoring and predictive maintenance, even though these machines communicate with legacy protocols.

Next, Sam reflected on the pitfalls of IoT projects and how to address them. He stressed the importance of executive buy-in: solutions will rarely make it to production if this is lacking. Sam also advised to use the existing installed base of enterprises in order to decrease the time to market and add value fast. This can be achieved by adding an IoT gateway. You also need to think about how to process all the data these devices are generating, and add some filtering and aggregation before storage costs become too high. Sam then stressed the importance of security and of patching the devices.

One last thing to keep in mind is to spend your money and time wisely in an IoT project. Use the bits and pieces from the cloud platform that are already there and focus on value generators. In the last part of the presentation, Sam showed us how the Nebulus gateway takes care of the heavy lifting of connecting devices and how it can jumpstart a company's journey into its first IoT project.

[BE] Cloud Integration: What's in it for you? - Toon Vanhoutte & Massimo Crippa, Codit

During this session Toon Vanhoutte (Lead Architect) and Massimo Crippa (API Management Domain Lead) gave us more information about different integration scenarios.

Massimo started by showing us the different scenarios as they were yesterday, are today and will become tomorrow. In the past everything was on-premises. Nowadays we have a hybrid landscape, which brings the huge advantage of connectivity, for example through the ease of use of Logic Apps. There is also the integrated Azure environment, the velocity (e.g. the continuous releases for Logic Apps) and the network (VNET integration).

Toon introduced cloud integration, which has the following advantages: serverless technology, a migration path, consumption-based pricing and the use of ALM (application lifecycle management) with continuous integration & delivery. The shift towards the cloud can start with IaaS (Infrastructure as a Service). The main advantages of IaaS are availability, security and lower costs. But why should we choose hybrid integration? Flexibility and agility towards your customers, and it is future-proof. Serverless integration reduces the total cost of ownership: you have less DevOps effort and you can instantly scale your setup, with huge business value.

Massimo told us that security is very important and is achieved through governance, firewalls, identity and access rules. Another topic is monitoring; in the photo below you can see all of the different types of monitoring.

The Codit approach to moving forward is a mix between on-premises (BizTalk - Sentinet - SQL Server) and an Azure infrastructure.

[NL] The Microsoft Integration Platform - Steef-Jan Wiggers, Codit

The presentation of Steef-Jan started with an overview of the application landscape of yesterday's, today's and tomorrow's organizations. Previously, all applications, which were mostly server products, were running in on-premises data centers. Today, the majority of enterprises have a hybrid application landscape: the core applications are still running on-premises, but they are already using some SaaS applications in the cloud. Tomorrow, cloud-based applications will take over our businesses.

The integration landscape is currently undergoing a shift from on-premises to hybrid to the cloud. On-premises integration is based on BizTalk Server and Sentinet for API Management. BizTalk is used for running mission-critical production workloads and Sentinet for virtualizing APIs with minimal latency. Both have been made cloud-ready: adapters for Logic Apps (on-premises data gateway) and Service Bus (queues, topics and relays) have been added to BizTalk, while Sentinet has gained integration with Azure Service Bus and more focus on REST, OAuth and OpenID. In hybrid integration, Logic Apps is used for connecting to the cloud, and API Management as well.
You have the advantage of continuous releases, moving faster and adapting faster to change. For networking you can use VNETs and relays. Cloud integration has the advantage of serverless integration (no server installation & patching, inherent high availability, …).
The pricing is consumption based: pay per executed action.

Different paths are available to move from on-premises to the cloud: "it should be a natural evolution and not a revolution".
One way is IaaS integration, for obtaining better availability for your server infrastructure. IaaS improves security and lowers costs. Hybrid integration gives you flexibility in your application landscape: it is agile towards the business and you can release faster. A hybrid setup ensures you are set for the future. Serverless integration reduces the effort you put into operations tremendously: no more server patching, backups… The costs are lower and you have the advantage of being able to scale much faster as well.

The Codit Approach

If you look at the hybrid integration platform you can distinguish several blocks. On-premises has the known integration technologies. In Azure you find the standard compute and storage options. Connectivity enables smooth integration between on-premises and the cloud. Messaging solutions like Service Bus and Event Grid allow decoupling of applications. For integration, Logic Apps are used to orchestrate all integrations, which can be extended via Azure Functions and API Apps. Integration with Azure API Management ensures governance and security, using Azure AD and Azure Key Vault. Administration and operations are done by using VSTS Release Management to roll out the solutions throughout the DTAP street in a consistent manner. A role-based monitoring experience is offered by App Insights for developers, OMS for operations and Power BI reports for business users.

Codit wants you to be fully connected: Integration is the backbone of your Digital Transformation. Now more than ever.

[NL] How the Azure ecosystem is instrumental to your IoT solution - Glenn Colpaert, Codit

IoT is here to stay, so we'd better get ready for it. In the future everything will be connected, even cows. Glenn kicked off his session by giving a good overview of all the main IoT pillars, ranging from data storage & analytics to edge computing, connectivity and device management. Of course, those are not the only things to take into account. Security is often forgotten, or "applied on top" later on, but security should be designed in from the ground up. Microsoft's goal is to simplify IoT from several perspectives: security, device management, insights and edge. Microsoft Azure provides a whole ecosystem of services that can assist you with this:

  • Azure IoT Hub that provides a gateway between the edge and the cloud with Service Assisted Communications built-in by default
  • Perform near-real-time stream processing with Azure Stream Analytics
  • Write custom business logic with Service Fabric or Azure Functions
  • Enable business connectivity with Azure Logic Apps for building a hybrid story
  • Azure Time Series Insights enabling real-time streaming insights
  • Setup DevOps pipelines with Visual Studio Team Services

However, when you want to get your feet wet, Azure IoT Central & the Azure IoT solutions are an easy way to start. Start small and play around before spending a big budget on custom development. By using a Raspberry Pi simulator, Glenn showed how easy it is to send telemetry to Azure IoT Hub and how you can visualize all the telemetry without writing a single line of code with Azure Time Series Insights. The key take-aways from this session are:

  • Data: value is created by making sense of your data
  • Insights: connect insights back to the business
  • Security: start thinking about security from day zero
  • Edge: IoT Edge is there for low-latency scenarios
  • Evolve: learn by experience with new deployments

If you are interested in learning more about data storage & analytics, we highly recommend reading Zoiner Tejada's Mastering Azure Analytics.

[NL/BE] Event-Driven Serverless Architecture - the next big thing in the cloud - Clemens Vasters, Microsoft Corp

Clemens started the session by explaining the "serverless" concept, which frees you entirely from any infrastructure pain points. You don't have to worry about patching, scaling and all the other infrastructure tasks that you normally have in a hosted environment; it lets you focus solely on your apps & data. Very nice! Clemens taught us that there are different PaaS options for hosting your services, each having its own use cases and advantages.

Managed Cluster

Applications are being deployed on a cluster that handles the placement, replication, ownership consensus and management of stateful resources. This option is used to host complex, stateful, highly reliable and always-on services.

Managed Middleware

Applications are deployed on sets of independent, "stateless" middleware servers, like web servers or pure compute hosts. These applications may be "always-on" or "start on demand" and typically maintain shared cached state and resources.

Managed Functions

Function implementations can be triggered by a configured condition (event-driven) and are short-lived. There is a high level of abstraction of the infrastructure where your function implementations are running. Next to that, you have different deployment models you can use to host your services. The classic "monolith" approach divides the functional tiers over designated role servers (like a web server, database server, …). The disadvantage of this model is that you need to scale your application by cloning the service on multiple servers or containers. The more modern approach is the "microservice" approach, where you separate functionality into smaller services and host them as a cluster. Each service can be scaled out independently by creating instances across servers or containers. It's an autonomous unit that manages a certain part of a system and can be built and deployed independently.

[BE] Maturing IoT Solutions with Microsoft Azure - Sam Vanhoutte & Glenn Colpaert, Codit

Sam and Glenn kicked off their session talking about the IoT End-to-End Value chain. A typical IoT solution chain is comprised of the following layers:

  • Devices are the basis for the IoT solution because they connect to the cloud backend.
  • The Edge brings the intelligence layer closer to the devices to reduce the latency.
  • Gateways are used to help devices to connect with the cloud.
  • The Ingestion layer is the entry into the IoT backend and is typically the part that must be able to scale out to handle a lot of parallel incoming data streams from the (thousands of) devices.
  • The Automation layer is where business rules, alerting and anomaly detection typically take place.
  • The Data layer is where analytics and machine learning typically take place and where all the stored data gets turned into insights and information.
  • Report and Act is all about turning insights into action, where business events get integrated with the backend systems, or where insights get exposed in reports, apps or open data.

At Codit, we have built a solution, the Nebulus IoT Gateway, that helps companies jump-start the IoT connectivity phase and generate value as quickly as possible. The gateway is a software-based IoT solution that instantly connects your devices to (y)our cloud. It provides all the functionality required to cope with connectivity issues, cloud-based configuration management and security challenges.

As integration experts, we at Codit can help you simplify this IoT Journey. Our IoT consultants can guide you through the full IoT Service offering and evolve your PoC to a real production scenario.

The session ended with the following conclusion:

[NL/BE] Closing keynote - Richard Seroter, Pivotal

The theory of constraints tells you that the way to improve performance is to find and handle bottlenecks. This also applies to integration and the software delivery of the solution: it does not matter how fast your development team is working if it takes forever to deploy the solution. Without making changes there, your cloud-native efforts go to waste.

Richard went on comparing traditional integration with cloud-native integration, showing the move is also a change in mindset.

A cloud-native solution is composable: it is built by chaining together independent blocks, allowing targeted updates without the need for downtime. This is part of the always-on nature of the integration: a cloud-native solution assumes failure and is built for it. Another aspect is that the solution is built for scale: it scales with demand, and the different components do this separately. Making the solution usable for 'citizen integrators' by designing for self-service will reduce the need for big teams of integration specialists. The integration project should be done with modern resources and connectors in mind, allowing for more endpoints and data streams. The software lifecycle will be automated; the integration can no longer be managed and monitored by people: your software is managed by your software.

 

Thank you for reading our blog post; feel free to comment or give us feedback in person. You can find the presentations of both days at the following links:

This blogpost was prepared by:

Glenn Colpaert - Nils Gruson - René Bik - Jacqueline Portier - Filiep Maes - Tom Kerkhove - Dennis Defrancq - Christophe De Vriese - Korneel Vanhie - Falco Lannoo

Categories: Community