
Codit Blog

Posted on Friday, April 14, 2017 1:27 PM

by Tom Kerkhove

As you might have noticed, a few months ago Codit Belgium moved to a brand new office in Zuiderpoort, near the center of Ghent.

Because of that, we've built an internal visitor system running on Azure.
Keep on reading to learn all about it!

One of the centerpieces, and my favorite, is our Codit Wall of Employees:


For these new offices, Codit needed a visitor system that allows external people to check in, notifies employees that their visitor has arrived, etc. The biggest requirement was the ability to list all external people currently in the office, for scenarios such as a fire evacuation.

That's how Alfred came to life, our personal butler that assists you when you arrive in our office.

Thanks to our cloud-based visitor platform in Microsoft Azure, codenamed Santiago, Alfred is able to assist our visitors, but the platform also provides reporting on who is in the building, sends notifications, etc.

We started off with our very own Codit Hackathon - dedicated teams were working on features and got introduced to new technologies, while more experienced colleagues were teaching others how to achieve their goals.

Every Good Backend Needs A Good Frontend

For Alfred, we chose to use a Universal Windows Platform (UWP) app that is easy to use for our visitors. To prevent people from messing with our Surface, we even run it in kiosk mode.

Behind the scenes, Alfred just communicates with our backend via our internal API catalog served by Azure API Management (APIM going forward).

This makes sure that Alfred can easily authenticate towards Azure API Management via a subscription key, after which Azure APIM forwards the request to our physical API, authenticating with a certificate. This allows us to fully protect our physical API while consumers can still easily authenticate with Azure APIM.
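As a sketch, a call from Alfred through APIM might look like this (the host, operation and payload here are hypothetical; the subscription-key header is the standard APIM one):

```http
POST https://codit-visitors.azure-api.net/visitors/check-in HTTP/1.1
Ocp-Apim-Subscription-Key: <Alfred's subscription key>
Content-Type: application/json

{ "visitorName": "John Doe", "visitingEmployee": "tom-kerkhove" }
```

APIM validates the key, then calls the backend API over a channel secured with the client certificate, so the backend never needs to know about individual consumers.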

The API is the façade to our "platform" that allows visitors to check-in and check-out, send notifications upon check-in, provide a list of all offices and employees, etc. It is hosted as a Web App sharing the same App Service Plan on which our Lunch Order website is running to optimize costs.

We are using Swagger to document the API for a couple of reasons:

  1. It is crucial that we provide a self-explanatory API that enables developers to see what the API offers at a glance and what to expect. As of today, only Alfred is using it but if a colleague wants to build a new product on top of the API or needs to change the platform, everything should be clear.
  2. Using Swagger enables us to make the integration with Azure API Management easier as we can create Products by importing the Swagger.
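A minimal sketch of such a Swagger (2.0) document, with a hypothetical check-in operation, could look like:

```json
{
  "swagger": "2.0",
  "info": { "title": "Santiago Visitor API", "version": "1.0" },
  "paths": {
    "/visitors/check-in": {
      "post": {
        "summary": "Checks in a visitor and triggers notifications",
        "responses": { "201": { "description": "Visitor checked in" } }
      }
    }
  }
}
```

Importing a document like this into Azure API Management generates the API operations, which can then be bundled into a Product.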

Storing Company Metadata in Azure Document DB

The information about the company is provided by Azure Document DB, where we use a variety of documents that describe what offices we have, who is working at Codit, what their preferred notification configuration is, etc.

We are using a simple structure where each type of information that we store has a dedicated document of a specific type; these documents are linked to each other and grouped in one collection. By using only one collection we can keep all the relevant company metadata in one place and save costs, since Azure bills for RUs per collection.

As an example, we currently have an Employee document for myself, with a dedicated Notification Configuration document that describes the notifications I've configured. If I were to have notifications configured for both Slack and SMS messages, there would be two such documents stored.
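As an illustration, the linked documents could look roughly like this (the property names are hypothetical, only the linking pattern is the point):

```json
{ "id": "employee-tom", "type": "Employee", "name": "Tom Kerkhove", "office": "office-ghent" }
{ "id": "notif-config-1", "type": "NotificationConfiguration", "employeeId": "employee-tom", "channel": "Slack" }
{ "id": "notif-config-2", "type": "NotificationConfiguration", "employeeId": "employee-tom", "channel": "SMS" }
```

Each Notification Configuration document points back to its Employee document via an id, so all documents can live side by side in the single collection.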

This allows us to easily add and remove documents for each configured notification of a specific employee, instead of using one dedicated document per employee and updating specific sections of it, which would be more cumbersome.

As of today, this is all static information but in the future, we will provide a synchronization process between Azure Document DB and our Azure AD. This will remove the burden of keeping our metadata up-to-date so that when somebody joins or leaves Codit we don't have to manually update it.

Housekeeping For Our Visitors

For each new visitor that arrives we want to make their stay as comfortable as possible. To achieve this, we do some basic housekeeping now, but plan to extend this in the future.

Nowadays, when a visitor registers we persist an entry in Azure Table Storage for that day & visitor, so that our reporting knows who entered our office. After that we track a custom event in Azure Application Insights with some context about the visit and publish the event on an Azure Service Bus Topic. This allows us to be very flexible in how we process such an event; if somebody wants to extend the current setup they can just add a new subscription on the topic.
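As a sketch, the event published on the topic could carry context like this (hypothetical schema), so that a new subscription immediately has everything it needs to process a visit:

```json
{
  "visitorName": "John Doe",
  "visitingEmployee": "employee-tom",
  "office": "office-ghent",
  "checkedInAt": "2017-04-14T09:30:00Z"
}
```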

Currently we handle each new visitor with a Logic App that fetches the notification configuration for the employee the visitor has a meeting with and notifies them via all the configured channels we support; that can be SMS, email and/or Slack.

Managing The Platform

For every software product, it goes without saying that it should be easy to maintain and operate the platform once it is running. To achieve this, we use a combination of Azure Application Insights, Azure Monitor and Logic Apps.

Our platform telemetry is handled by Azure Application Insights, where we send specific traces, track requests, measure dependencies and log exceptions, if any. This enables us to have one central technical dashboard to operate the platform, where we can use the Analytics feature to dive deeper into issues. In the future we will even add Release Annotations to our release pipeline to easily detect performance impact on our system.

Each resource has a certain set of Azure Alerts configured in Azure Monitor that trigger a webhook hosted by an Azure Logic App instance. This consolidates all the event handling in one central place and provides us with the flexibility to handle events how we want, without having to change each alert's configuration.

Securing what matters

At Codit, building secure solutions is one of our biggest priorities, if not the biggest. To achieve this, we are using Azure Key Vault to store all our authentication keys, such as the Document DB key, Service Bus keys, etc., so that only authorized people and applications can access them, while keeping track of when and how frequently they are accessed.

Each secret is automatically regenerated using Azure Automation: every day we create new keys and store the new key in the secret. By doing this, the platform always uses the latest version and leaked keys quickly become invalid, allowing us to reduce the risk.

One might say that this platform is not a likely target for leaking information, but we've applied this pattern because, in the end, we store personal information about our employees and it is good practice to be as secure as possible. Applying this approach takes minimal effort, certainly if you do it early in the project.

Security is very important, make sure you think about it and secure what matters.

Shipping With Confidence

Although Alfred & Santiago are developed as a side project, it is still important that everything we build is production ready and that we have confidence that everything keeps working fine. To achieve this, we are using Visual Studio Team Services (VSTS), which hosts our Git repository. People can come in, work on features they like and create a pull request once they are ready. Each pull request is reviewed by at least one person and automatically built by VSTS to make sure that it builds and no tests are broken. Once everything is ready to go out the door, we can easily deploy to our environments using release pipelines.

This makes it easier for new colleagues to contribute and provides an easy way to deploy new features without having to perform manual steps.

This Is Only The Beginning

A team of colleagues was willing to spend some spare time to learn from each other, challenge each other and have constructive discussions to dig deeper into our thinking. That's what led to our first working version, ready as a foundation to which we can start adding new features, try new things and make Alfred more intelligent.

Besides having a visitor system that is up and running, we also have a platform available where people can consume the data to play around with, to test certain scenarios with representative data. This is great if you ask me, because then you don't need to worry about the demo data - just focus on the scenario!

To summarize, this is our current architecture, but I'm sure it is not final.

Personally, I think that a lot of cloud projects, if not all, will never be "done"; instead we should be looking for trends that tell us how we can improve, and keep on continuously improving the platform.

Don't worry about admitting your decision was not the best one - Learn, adapt, share.

Thanks for reading,

Tom Kerkhove

Categories: Technology
written by: Tom Kerkhove

Posted on Monday, October 17, 2016 11:11 AM

Deallocating Azure Service Fabric cluster nodes means you will be paying less for your cluster! In this blog post I will show you how to do this.

You probably used the Azure Portal to create your first Service Fabric cluster, but when you're done playing with it, it seems impossible to deallocate/stop/suspend the cluster so you are not paying for the cluster nodes. The Azure Portal appears to expose no obvious way to "disable" the cluster.

One way would be to delete all resources that make up the cluster or delete the whole Resource Group and recreate it when you want to use it again.

Imagine you have a test environment that only needs to run once a week. I can imagine you don't want to recreate it every week, but you also don't want to pay for it when you are not using it. Good news: there is a solution, and I'm here to tell you!

First deactivate the necessary nodes with Service Fabric Explorer; otherwise you might have problems getting your cluster back up and running.

Service Fabric cluster nodes reside within a Virtual Machine Scale Set. Through PowerShell, and even with the Azure portal, you can deallocate instances within the Scale Set:

Stop-AzureRmVmss -ResourceGroupName "resource group name" -VMScaleSetName "scale set name" -InstanceId #
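As a sketch, assuming the AzureRM PowerShell module and an authenticated session, you can also deallocate the whole set at once by omitting -InstanceId, and bring the nodes back later with the Start counterpart (resource group and scale set names are placeholders):

```powershell
# Deallocate all instances in the Scale Set at once (stops compute billing)
Stop-AzureRmVmss -ResourceGroupName "my-rg" -VMScaleSetName "my-vmss"

# Bring the nodes back when you need the cluster again
Start-AzureRmVmss -ResourceGroupName "my-rg" -VMScaleSetName "my-vmss"
```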

Now you can deallocate your Service Fabric cluster whenever you want!

Categories: Technology

Posted on Tuesday, May 31, 2016 8:36 PM

by Pieter Vandenheede

For a BizTalk automated deployment, I needed to automatically add machine.config WCF behaviorExtensions by using some C# code.

I've been playing around with the BizTalk Deployment Framework lately, and for one particular BizTalk application I needed to add 4 custom behaviorExtensions. I had some very specific logic that I needed to put into some WCF Message Inspectors. When you think about automating the installation of a BizTalk application, you don't want to be manually adding the behaviorExtensions to the machine.config. So I set out to add these via a C# application. It seems this was not as trivial as I thought it would be. First things first, we need to be able to retrieve the location of the machine.config file:
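A minimal sketch of this step, assuming the standard System.Configuration API, could be:

```csharp
using System;
using System.Configuration;

class Program
{
    static void Main()
    {
        // Opens the machine.config of the runtime (x86 or x64) we are running under
        Configuration machineConfig = ConfigurationManager.OpenMachineConfiguration();
        Console.WriteLine(machineConfig.FilePath);
    }
}
```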

The above code gives you the path of the machine.config file, depending on which runtime (x86 or x64) you are running the code under. When running as x86, you will get the following path:

"C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config"

When running as x64, you will get the following path:

"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config"

Once the path has been found, we need to open the file and position ourselves at the correct location in the file (system.serviceModel/extensions):
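A sketch of this step, again via System.Configuration, might be:

```csharp
using System.Configuration;

// Open the machine.config of the current runtime for editing
Configuration machineConfig = ConfigurationManager.OpenMachineConfiguration();

// Position ourselves at the WCF extensions section
ConfigurationSection section = machineConfig.GetSection("system.serviceModel/extensions");
```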

Now this is the point where I initially got stuck: I had no idea I had to cast the Section to a System.ServiceModel.Configuration.ExtensionsSection. Doing so allows you to add your behaviorExtension in the config file as such:
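A sketch of the cast and the add (the extension name and fully qualified type here are hypothetical placeholders for your own inspector):

```csharp
using System.Configuration;
using System.ServiceModel.Configuration;

Configuration machineConfig = ConfigurationManager.OpenMachineConfiguration();

// The generic ConfigurationSection must be cast to ExtensionsSection
// before the behavior extensions collection becomes accessible
var extensions = (ExtensionsSection)machineConfig.GetSection("system.serviceModel/extensions");

// Register the custom behavior extension by name and assembly-qualified type
extensions.BehaviorExtensions.Add(new ExtensionElement(
    "myMessageInspector",
    "MyCompany.MyMessageInspectorElement, MyCompany.Inspectors, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123abc123abc1"));
```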

Don't forget to set ForceSave, as - without it - the update doesn't seem to be written. All together, this gives you the following code:
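Put together, a sketch of the complete console application (with the same hypothetical extension name and type as placeholders) could look like this:

```csharp
using System;
using System.Configuration;
using System.ServiceModel.Configuration;

class AddBehaviorExtension
{
    static void Main()
    {
        // Open the machine.config of the runtime we are running under (x86 or x64)
        Configuration machineConfig = ConfigurationManager.OpenMachineConfiguration();
        Console.WriteLine("Editing: " + machineConfig.FilePath);

        // Navigate to system.serviceModel/extensions and cast to ExtensionsSection
        var extensions = (ExtensionsSection)machineConfig.GetSection("system.serviceModel/extensions");

        // Add the custom behavior extension (hypothetical name and type)
        extensions.BehaviorExtensions.Add(new ExtensionElement(
            "myMessageInspector",
            "MyCompany.MyMessageInspectorElement, MyCompany.Inspectors, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123abc123abc1"));

        // Without ForceSave the modified section is not written back to disk
        extensions.SectionInformation.ForceSave = true;
        machineConfig.Save(ConfigurationSaveMode.Modified);
    }
}
```

Remember this needs to run elevated, since machine.config lives under the Windows directory.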

If, like me, you want to adapt both the x86 AND the x64 machine.config files, just replace "Framework" with "Framework64" in the x86 machine.config path and repeat the same steps. I made myself a simple console application that I can call while running the MSI for my BizTalk application. Just make sure the MSI runs as an administrator!

Categories: Technology
written by: Pieter Vandenheede

Posted on Friday, February 5, 2016 12:50 PM

by Toon Vanhoutte

This blog post series aims to provide an introduction to AS4, a recent B2B messaging standard. It provides the basic insights on how AS4 establishes a reliable and secure message exchange between trading partners. This is a jump start to get to know the ins and outs of the standard, as the specifications are quite complex to understand without any prior introduction.

If you are active in B2B/B2C messaging, then AS4 is definitely coming your way! Make sure you are prepared.

AS2 Comparison

As AS4 is inspired by AS2, both are often considered for message exchange over the internet. On a high level they are very similar; however, there are some key differences that you should be aware of in order to make a well-founded decision on this matter. Let’s have a look!

Common Characteristics

These are the most important common characteristics of AS2 and AS4:

  • Payload agnostic: both messaging protocols are payload agnostic, so they support any kind of payload to be exchanged: XML, flat file, EDI, HL7, PDF, binary…
  • Payload compression: both AS2 and AS4 support compression of the exchanged messages, in order to reduce bandwidth. It is, however, done via totally different algorithms.
  • Signing and encryption: the two protocols support both signing and encryption of the exchanged payloads. It’s a trading partner agreement whether to apply it or not.
  • Non-repudiation: the biggest similarity is the way non-repudiation of origin and receipt are achieved. This is done by applying signing and using acknowledgement mechanisms.

Technology Differences

The common characteristics are established by using totally different technologies:

  • Message packaging: within AS2, the message packaging is purely MIME based. In AS4, this is governed by SOAP with Attachments, a combination of MIME and SOAP.
  • Security: AS2 applies security via the S/MIME specifications, whereas AS4’s security model is based on the well-known WS-Security standard.
  • Acknowledgements: in AS2 and AS4, acknowledgements are introduced to support reliable messaging and non-repudiation of receipt. In AS2 this is done by so-called Message Disposition Notifications (MDN), AS4 uses signed Receipts.

AS4 Differentiators

These are the main reasons why AS4 could be chosen, instead of AS2. If none of these features are applicable to your scenario, AS2 might be the best option as it is currently more known and widely adopted.

  • Support for multiple payloads: AS4 has full support for exchanging more than one business payload. Custom key/value properties for each payload are available.
  • Support for native web services: being built on top of SOAP with Attachments, AS4 offers native web service support. It’s a drawback that SwA is not supported out-of-the-box by .NET.
  • Support for pulling: this feature is very important if message exchange is required with a trading partner that cannot offer high availability or static addressing.
  • Support for lightweight client implementations: three conformance clauses are defined within AS4, so client applications do not need to support the full AS4 feature stack immediately.
  • Support for modern crypto algorithms: in case data protection is an important aspect, AS4 can offer more modern and less vulnerable crypto algorithms.
  • Support for more authentication types: AS4 supports username-password and X.509 authentication. There is also a conformance clause on SAML authentication within AS4.

Getting Started

Are you interested in learning more about AS4?

From an architecture / specifications perspective it’s good to have a look at the related OASIS standards. This includes the ebMS 3.0 Core Specification and the AS4 profile of ebMS 3.0. In case you are more interested in how an AS4 usage profile could be described between trading partners, the ENTSOG AS4 Usage Profile is a great example.

If you’re more developer oriented, it’s good to have a look at Holodeck B2B. It is Java-based open-source software that fully supports the AS4 profile. Some sample files give you a head start in creating your first AS4 messages. Unfortunately, SOAP with Attachments and AS4 are not supported out-of-the-box within Microsoft .NET.

Can we help you?

Codit is closely involved with AS4. It is represented in the OASIS ebXML Messaging Services TC by Toon Vanhoutte, as a contributing member. In this way, Codit keeps a close eye on the evolution of the AS4 messaging standard. Throughout several projects, Codit has gained an extended expertise in AS4. Do not hesitate to contact us if you need any assistance on architecture, analysis or development of your AS4 implementation.

Within our R&D department, we have developed a base .NET library with support for the main AS4 features. In the demo for Integration Monday, this library was plugged into the BizTalk Server engine in order to create AS4 compatible messages. At the time of writing, Codit is defining its AS4 offering. If you would like to be informed about the strategy of Codit when it comes to AS4, please reach out to us. We will be more than happy to exchange ideas.

Categories: Technology
Tags: AS4
written by: Toon Vanhoutte

Posted on Thursday, February 4, 2016 4:39 PM

by Toon Vanhoutte

This blog post series aims to provide an introduction to AS4, a recent B2B messaging standard. It provides the basic insights on how AS4 establishes a reliable and secure message exchange between trading partners. This is a jump start to get to know the ins and outs of the standard, as the specifications are quite complex to understand without any prior introduction.

Security is a crucial aspect of every messaging standard, and AS4 is no exception! This section describes the most common mechanisms to establish a secure AS4 communication.

Non-repudiation of origin

In order to guarantee message integrity, it’s important to have non-repudiation of origin. This means that the receiver is 100% sure that the message originates from the sending party and that the message was not modified during the exchange.

AS4 provides this evidence by applying the WS-Security signing feature on User Messages. This message signing is performed via asymmetric keys: the sender of the message signs with its own private key and the receiver authenticates the sender via the public key, which could be included in the message as a BinarySecurityToken.

According to the AS4 usage profile, the following parts of the ebMS message must be signed:

  • The ebMS Messaging Header
  • The SOAP Body
  • All SOAP Attachments
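Schematically, and heavily abbreviated (namespaces, ids and most attributes omitted; element content elided), a signed AS4 User Message then looks roughly like this:

```xml
<S:Envelope>
  <S:Header>
    <eb:Messaging>
      <eb:UserMessage>...</eb:UserMessage>        <!-- ebMS Messaging Header: signed -->
    </eb:Messaging>
    <wsse:Security>
      <wsse:BinarySecurityToken>...</wsse:BinarySecurityToken>  <!-- sender's certificate -->
      <ds:Signature>
        <ds:SignedInfo>
          <ds:Reference URI="#messaging">...</ds:Reference>     <!-- ebMS Messaging Header -->
          <ds:Reference URI="#body">...</ds:Reference>          <!-- SOAP Body -->
          <ds:Reference URI="cid:payload-1">...</ds:Reference>  <!-- one per SOAP Attachment -->
        </ds:SignedInfo>
        <ds:SignatureValue>...</ds:SignatureValue>
      </ds:Signature>
    </wsse:Security>
  </S:Header>
  <S:Body>...</S:Body>
</S:Envelope>
```

Each ds:Reference carries the hash of the part it points to, which is exactly what the receiving MSH verifies against the received payloads.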

The receiving MSH checks if the hash values within the signature correspond to the actual received payloads. If not, an ebMS Error is generated and sent to the sending MSH, according to the agreed P-Mode configuration.

Non-repudiation of receipt

Another important aspect is non-repudiation of receipt. This means that the sender is 100% sure that the receiver has received the message, without it being modified during the message exchange.

AS4 provides legal proof for this non-repudiation of receipt, by applying the WS-Security signing feature on Receipts. The exchanged Receipts are signed by the receiving MSH, which authenticates the originator of the receipt. The signature of Receipts must be applied on the ebMS Messaging Header.

In addition to this signature, the ebMS Messaging Header must include Non Repudiation Information within the Receipt element. This Non Repudiation Information contains the hashes of the signed payloads of the referenced User Message. This information is verified again at the sending MSH, in order to establish evidence for non-repudiation of receipt.

Data confidentiality

Data confidentiality can be ensured on both the communication and message level.

Encryption on the communication channel, which ensures data confidentiality, can be applied by using Transport Layer Security (TLS, the successor of SSL). Due to known vulnerabilities in older versions of this protocol, it is recommended to use TLS 1.1 or higher. TLS may be offloaded to specific infrastructure components (e.g. a proxy), so an AS4 MSH does not need to explicitly offer this functionality.

Data confidentiality on the message level could be achieved by leveraging the WS-Security encryption feature on User Messages. This message encryption is performed via asymmetric keys: the sender of the message encrypts with the public key of the receiver, so the message can only be decrypted by the receiver, who owns the private key. Encryption must be applied on:

  • The SOAP Body
  • All SOAP Attachments

In case the receiving MSH is unable to decrypt the message, an ebMS Error is generated and sent to the sending MSH, according to the agreed P-Mode configuration.

Categories: Technology
Tags: AS4
written by: Toon Vanhoutte