
Codit Blog

Posted on Friday, April 14, 2017 1:27 PM

by Tom Kerkhove

As you might have noticed, a few months ago Codit Belgium moved to a brand-new office in Zuiderpoort, near the center of Ghent.

Because of that, we've built an internal visitor system running on Azure.
Keep on reading to learn all about it!


One of the centerpieces, and my favorite, is our Codit Wall of Employees.


For these new offices, Codit needed a visitor system that allows external people to check in, notifies employees that their visitor has arrived, etc. The biggest requirement was the ability to list all external people currently in the office, for scenarios such as a fire evacuation.

That's how Alfred came to life, our personal butler that assists you when you arrive in our office.

Thanks to our cloudy visitor platform in Microsoft Azure, codenamed Santiago, Alfred is able to assist our visitors, but also to provide reporting on who is in the building, send notifications, etc.

We started off with our very own Codit Hackathon - dedicated teams worked on features and got introduced to new technologies, while more experienced colleagues taught others how to achieve their goal.

Every Good Backend Needs A Good Frontend

For Alfred, we chose a Universal Windows Platform (UWP) app that is easy to use for our visitors. To keep people from messing with our Surface, we even run it in kiosk mode.

Behind the scenes, Alfred just communicates with our backend via our internal API catalog served by Azure API Management (APIM going forward).

This makes sure that Alfred can easily authenticate with Azure API Management via a subscription key, after which Azure APIM forwards the request to our physical API, authenticating with a certificate. This allows us to fully protect our physical API while consumers can still easily authenticate with Azure APIM.
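As a sketch, a consumer call through APIM could look like this (the hostname, path and key are illustrative placeholders, not our real ones; the Ocp-Apim-Subscription-Key header is how APIM identifies a consumer):

    using System;
    using System.Net.Http;

    class AlfredApiSketch
    {
        static void Main()
        {
            using (var client = new HttpClient())
            {
                // APIM identifies and authenticates the consumer via its subscription key.
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<alfred-subscription-key>");

                // APIM forwards the call to the physical API, authenticating with a certificate.
                var response = client.GetAsync("https://example.azure-api.net/visitors/checked-in").Result;
                response.EnsureSuccessStatusCode();
                Console.WriteLine(response.Content.ReadAsStringAsync().Result);
            }
        }
    }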

The API is the façade to our "platform" that allows visitors to check in and check out, sends notifications upon check-in, provides a list of all offices and employees, etc. It is hosted as a Web App sharing the same App Service Plan as our Lunch Order website, to optimize costs.

We are using Swagger to document the API for a couple of reasons:

  1. It is crucial that we provide a self-explanatory API that enables developers to see what the API offers at a glance and what to expect. As of today, only Alfred is using it but if a colleague wants to build a new product on top of the API or needs to change the platform, everything should be clear.
  2. Using Swagger enables us to make the integration with Azure API Management easier as we can create Products by importing the Swagger.

Storing Company Metadata in Azure Document DB

The information about the company is provided by Azure Document DB, where we use a variety of documents that describe which offices we have, who works at Codit, what their preferred notification configuration is, etc.

We are using a simple structure where each type of information we store has a dedicated document of a specific type, linked to the others and grouped in one collection. By using only one collection, we can group all the relevant company metadata in one place and save costs, since Azure bills Request Units (RUs) per collection.

As an example, we currently have an Employee document for myself, with a dedicated Notification Configuration document that describes the notifications I've configured. If I were to have notifications configured for both Slack and SMS messages, there would be two such documents stored.
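As a sketch, the documents in that single collection could look like this (the field names are illustrative; only the document types and the type-discriminator idea come from the setup described above):

    { "id": "employee-tom", "type": "Employee",
      "name": "Tom Kerkhove", "officeId": "office-ghent" }

    { "id": "notification-config-tom-slack", "type": "NotificationConfiguration",
      "employeeId": "employee-tom", "channel": "Slack" }

    { "id": "notification-config-tom-sms", "type": "NotificationConfiguration",
      "employeeId": "employee-tom", "channel": "SMS" }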

This allows us to easily add and remove documents for each configured notification channel of a specific employee, instead of using one dedicated document per employee and updating specific sections of it, which would be more cumbersome.

As of today, this is all static information, but in the future we will provide a synchronization process between Azure Document DB and our Azure AD. This will remove the burden of keeping our metadata up to date, so that when somebody joins or leaves Codit we don't have to update anything manually.

Housekeeping For Our Visitors

For each new visitor that arrives we want to make their stay as comfortable as possible. To achieve this, we do some basic housekeeping now, but plan to extend this in the future.

Today, when a visitor is registered, we persist an entry in Azure Table Storage for that day and visitor, so that our reporting knows who entered our office. After that, we track a custom event in Azure Application Insights with some context about the visit and publish the event on an Azure Service Bus topic. This makes us very flexible in how we process such an event; if somebody wants to extend the current setup, they can just add a new subscription on the topic.
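A minimal sketch of that housekeeping flow, using the Azure SDKs of that era (the entity shape, table name, topic name and method names are illustrative assumptions):

    using System;
    using System.Collections.Generic;
    using Microsoft.ApplicationInsights;
    using Microsoft.ServiceBus.Messaging;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    public class VisitEntity : TableEntity
    {
        public VisitEntity() { }
        public VisitEntity(DateTime day, string visitorId)
        {
            PartitionKey = day.ToString("yyyy-MM-dd"); // one partition per day
            RowKey = visitorId;
        }
        public string VisitedEmployee { get; set; }
    }

    public class VisitorHousekeeping
    {
        public void OnVisitorRegistered(string visitorId, string employee,
            string storageConnectionString, string serviceBusConnectionString)
        {
            // 1. Persist the visit, so reporting knows who entered the office today.
            var table = CloudStorageAccount.Parse(storageConnectionString)
                .CreateCloudTableClient()
                .GetTableReference("Visits");
            table.Execute(TableOperation.InsertOrReplace(
                new VisitEntity(DateTime.UtcNow, visitorId) { VisitedEmployee = employee }));

            // 2. Track a custom event with some context about the visit.
            new TelemetryClient().TrackEvent("VisitorCheckedIn",
                new Dictionary<string, string> { { "Visitor", visitorId }, { "Employee", employee } });

            // 3. Publish the event on a Service Bus topic; extending the setup
            //    is just a matter of adding a new subscription.
            TopicClient.CreateFromConnectionString(serviceBusConnectionString, "visitor-registered")
                .Send(new BrokeredMessage(visitorId));
        }
    }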

Currently we handle each new visitor with a Logic App that fetches the notification configuration of the employee the visitor is meeting, and notifies them via all the configured channels we support; that can be SMS, email and/or Slack.

Managing The Platform

For every software product, it goes without saying that it should also be easy to maintain and operate the platform once it is running. To achieve this, we use a combination of Azure Application Insights, Azure Monitor and Logic Apps.

Our platform telemetry is handled by Azure Application Insights, where we send specific traces, track requests, measure dependencies and log exceptions, if any. This enables us to have one central technical dashboard for operating the platform, where we can use the Analytics feature to dive deeper into issues. In the future we will even add Release Annotations to our release pipeline to easily detect performance impact on our system.

Each resource has a certain set of Azure Alerts configured in Azure Monitor that trigger a webhook hosted by an Azure Logic App instance. This consolidates all the event handling in one central place and provides us with the flexibility to handle events how we want, without having to change each alert's configuration.

Securing what matters

At Codit, building secure solutions is one of our biggest priorities, if not the biggest. To achieve this, we use Azure Key Vault to store all our authentication keys, such as the Document DB key, Service Bus keys, etc., so that only authorized people and applications can access them, while keeping track of when and how frequently they are accessed.

Each secret is automatically regenerated using Azure Automation: every day we create new keys and store them in the corresponding secret. This way the platform always uses the latest version, and leaked keys quickly become invalid, allowing us to reduce the risk.

One might say that this platform is not a big risk for leaking information, but we've applied this pattern because, in the end, we store personal information about our employees and it is good practice to be as secure as possible. Applying this approach takes minimal effort, certainly if you do it early in the project.

Security is very important, make sure you think about it and secure what matters.

Shipping With Confidence

Although Alfred & Santiago are developed as a side project, it is still important that everything we build is production-ready and that we have confidence that everything keeps working fine. To achieve this, we use Visual Studio Team Services (VSTS), which hosts our Git repository. People can come in, work on features they like and create a pull request once they are ready. Each pull request is reviewed by at least one person and automatically built by VSTS to make sure that it builds and no tests are broken. Once everything is ready to go out the door, we can easily deploy to our environments using release pipelines.

This makes it easier for new colleagues to contribute and provides an easy way to deploy new features without having to perform manual steps.

This Is Only The Beginning

A team of colleagues was willing to spend some spare time to learn from each other, challenge each other and have constructive discussions to dig deeper into our thinking. That's what led to our first working version: a foundation to which we can start adding new features, trying new things and making Alfred more intelligent.

Besides having a visitor system that is up and running, we also have a platform where people can consume the data to play around with, to test certain scenarios with representative data. This is great if you ask me, because then you don't need to worry about demo data and can just focus on the scenario!

To summarize, this is our current architecture, but I'm sure it is not final.

Personally, I think that a lot of cloud projects, if not all, will never be "done". Instead we should be looking for trends that tell us how we can improve, and keep on continuously improving the platform.

Don't worry about admitting your decision was not the best one - Learn, adapt, share.

Thanks for reading,

Tom Kerkhove

Categories: Technology
written by: Tom Kerkhove

Posted on Thursday, April 6, 2017 4:16 PM

by Toon Vanhoutte

"Logic Apps is not a fit for real enterprise integration!". This is a statement I often heard throughout the last year. In this blog, I'll summarize the main reasons why people tend to make such statements. Even though their reasoning is completely true, I can't disagree more with their final conclusion.

Scenario

Let's get our hands dirty and try to develop the following old-school integration scenario: receive a flat file from disk, archive it, convert it to XML, transform the result and finally write the XML message again to disk.

Based on some documentation on the internet, I got the following scenario working in minutes!  The Logic App kicks off with a Recurrence trigger. It loops over all files in the input folder, reads their content and deletes them.  The flat file content gets archived.  Next, we decode it and perform a transformation towards the desired format.  The result gets written to the file system again, where we check whether we need to perform a Create file or an Update file action.
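In the code view, the trigger of such a Logic App looks roughly like this (the interval is illustrative):

    "triggers": {
        "Recurrence": {
            "type": "Recurrence",
            "recurrence": {
                "frequency": "Minute",
                "interval": 3
            }
        }
    }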

Issues

When this Logic App gets analysed by people with an enterprise integration background, you'll get the following remarks:

We cannot reuse this development effort in similar integrations!
As an example: sending a file will typically be a combination of multiple Logic App actions working together: inject values into the file name and decide whether an update or a create should happen. It's not desirable to redevelop this functionality repeatedly for every Logic App that requires it! This would cost us too much time and we would end up with different flavours of this functionality, which is not maintainable.

This workflow is too tightly coupled with its receiving transport protocol!
If we want to change the receiving transport protocol from FILE to FTP, we almost need to redevelop the Logic App from scratch, as the subsequent actions have a hard dependency on it. We are not even sure the FTP connector has the same actions available as the FILE connector. We also have integrations that use SFTP in production, but are tested with FTP in the other environments. With this design, we cannot switch easily between environments.

There is no option to resume this workflow if it fails during processing!
For our solution, the resubmit functionality of Logic Apps is useless. The Logic App is initiated by a simple Recurrence trigger. If we initiate a resubmit, it behaves exactly the same as if a new trigger had fired. If something goes wrong inside the Logic App, it will take a huge (manual) effort to get that message reprocessed. In enterprise integration, we're processing thousands of files, so this would be a cumbersome experience for our operations personnel.

This is spaghetti interfacing. There's no loose coupling or pub/sub involved!
This is real point-to-point interfacing! If we design our new integration platform this way, we will end up with unmanageable spaghetti interfacing. Production-proven integration patterns like loose coupling and publish/subscribe are nowhere to be found. A redeployment is required in case another backend system is interested in this dataset.

This workflow handles multiple files, which results in difficult troubleshooting!
This specific Logic App loops over every single file in the input folder. Logic Apps forEach statements are - by default - executed in parallel, so an issue with one file will not block the processing of the other files. In case there is a problem with one file, the Logic App will end up in a Failed state. To resolve the issue, an operator will need to scroll through each iteration until the faulted message is found. Again, a manual intervention will be required to resume this file.

Conclusion

Do I agree with the statements above? Of course I do, I wrote them myself! However, this exercise should not lead to the conclusion that Logic Apps can't handle the job! On the contrary, it is intended to create awareness about potential pitfalls when doing enterprise integration with Logic Apps. Understanding these pitfalls is the first step in designing Logic Apps solutions for robust enterprise integration!

Want to learn more? Check out my Integration Monday session on this matter!

Hope to see you there!
Toon

Categories: Azure
Tags: EAI, Logic Apps
written by: Toon Vanhoutte

Posted on Thursday, March 30, 2017 1:56 PM

by Stijn Moreels

When people talk about Code Style they refer to their Naming Conventions, Code Structure…
but what they sometimes forget are the Formatting Guidelines. Something that’s not taken seriously enough in my opinion, but it’s one of the guidelines that I like very much to talk about.

Introduction


By Formatting Guidelines, I mean the rules and guidelines your team has adopted to format the files of your code base. When I hear people question my vision on these guidelines, that’s the moment I’m certain that it is INDEED important to adopt Formatting Guidelines in your team.

The reason people tend to question this is that they don’t see the purpose of these guidelines and why they are important. I am convinced that Formatting, as a topic within Code Style, is important.

Formatting Basics

Formatting?

What do I mean by Formatting? Isn’t that just wasteful work? Doesn’t it change my code’s behavior?
The first question I will discuss in this paragraph; and NO, it isn’t wasteful work; and NO, it doesn’t change your code’s behavior.

Formatting means the structure of your code; just like adding an ENTER to start a new paragraph, or adding a trailing space after the period of your sentence.

Formatting is all about these little modifications to your code base. We developed a Formatting Guidelines document which captures our team’s guidelines for all the different ways we format our code. This document isn’t fixed, but is formed dynamically and changes weekly as more and more verified ideas for formatting our code are added.

Formatting is about changing your code in a uniform way to make your intentions clear. Only a non-professional would ignore formatting and not use it in his/her daily practice.

Formatting Principles

One of the most important principles in Extreme Programming (XP) is Humanity. Maybe a simple principle at first sight, but a very large portion of today’s literature ignores this simple fact:

“People develop software”
-Kent Beck

Humanity, together with Quality, are for me the MOST important principles when thinking about formatting in code style conventions. We as people develop software; we as people read this software daily; we as people change this software daily…

Wouldn’t it be more than normal that we create an environment where people feel safe to contribute, feel like they belong to the team’s efforts and growth, and are fully understood by everyone?

THAT’S what Humanity is all about. Formatting is about people. Formatting allows you to structure your code in a uniform way so that everyone can quickly spot what they are searching for. We, as developers, read more code than we write; so, with that in mind, we should focus on the way people read code instead of just on the way we write it.

When I first heard about this statement, I was even more convinced that code should be: Simple, Clean, and Structured.

Formatting Patterns

The following patterns are just some basic ideas that I’d like to share with you. They aren’t all the patterns there are, but I list them here to give you an idea, so you can define your own patterns and come up with new ideas of your own.

Separate Concepts

It’s magical what a single ENTER can do in code, but be careful where you use it. If you have a 10-line method and you place an ENTER here and there to increase Readability, that is a good thing, but maybe not the best solution.
Maybe you have the beginning of the Long Method Code Smell and have to do some refactoring with the Extract Method Refactoring.

What I like to discuss are the other cases: the cases where you have a method with a return value, for example. Look at the following code snippet:
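A minimal sketch of the idea (all names are illustrative fragments, apart from the "return new Attachment" statement discussed below):

    public Attachment CreateAttachment(Document document)
    {
        string contentType = GetContentType(document);
        string location = GetLocation(document);

        return new Attachment(location, contentType);
    }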

I could just have placed the “return new Attachment” statement with the two others, but instead I placed it on a separate line. Another example:
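Again a minimal sketch (illustrative, apart from the "UpdatePersonFromStore()" call):

    public void UpdatePerson(Person person)
    {
        if (person == null) throw new ArgumentNullException(nameof(person));

        UpdatePersonFromStore(person);
    }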

Here I have an ENTER after the Guard Clause, while I could just have placed the “UpdatePersonFromStore()” call a line up.

Both examples illustrate the Separate Concepts pattern, which is about the separation of different concepts. The ENTER is the Separation in this context, and the return/guard clause are the Concepts.

The “return” statement in the first example is the actual Plot of the method. This plot is more important than the retrieval of the content type or location. It’s also a totally different Concept, so we place it on a separate line.

The second example, with the if-statement, is about the time you spend reading. When an exception is thrown, we won’t have to look at any other statements. So, we place an ENTER after the exception statement.

These two examples are just a start for the practical implementation of the Separate Concepts pattern; but now you can find your own places to separate the concepts.

Conceptual Affinity

The second pattern I want to discuss is Conceptual Affinity: the relationship between elements of the same concept is what this pattern is about.

In practice, this means finding ways to place elements of the same concept as close to each other as possible.
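A sketch of what that could look like (the method bodies and class name are illustrative; only the method names come from the discussion below):

    public class Range
    {
        private int _high;
        private int _low;

        public void SetHigh(int value)
        {
            _high = value;
        }

        public void SetLow(int value)
        {
            _low = value;
        }

        public int Calculate()
        {
            return _high - _low;
        }
    }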

We could just have placed the “SetLow(int)” method below the “Calculate()” method, but we didn’t do that because:

  1. SetHigh and SetLow are methods with the same Signature
  2. SetHigh and SetLow are doing structurally the same action

For these two reasons, we can conclude that these two methods are more closely related to each other than to the "Calculate()" method.

A bit subtler is the following example. We could also have retrieved the one hour before we retrieved the current time; but to support the Concept of calculating the time one hour from now, we keep the retrieval logic in the same order as the formula.
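A sketch of that ordering (variable names are illustrative):

    DateTime now = DateTime.Now;
    TimeSpan oneHour = TimeSpan.FromHours(1);

    DateTime oneHourFromNow = now + oneHour;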

This way the concept of calculating is related to the concept of retrieval, which increases the Cleanliness and Simplicity of the code.

Try to switch the values and see for yourself that this way of writing is easier to follow.

Now it’s time to spot your own concepts and the best solution. The last example is, I think, on the edge of the Extract Method refactoring; if the calculation of the two values were to become more complex, that would be a good time to extract the two temporary variables into their own methods.

Newspaper Metaphor

The last pattern I will discuss with you is the Newspaper Metaphor (from the book Clean Code). This pattern is about the global structure of your code files: the location of each element and its proper position.

When you write a class, try to keep this pattern in mind. Just like a newspaper, you place the Header and important items at the top, and the Details and more concrete information at the bottom.

This will increase the readability of your code file, and you will spend less time searching through it.

You could use the following structure to define the order of the global elements:

  1. Fields
  2. Constructors
  3. Properties
  4. Public Methods

Within each element, you could also define a structure for the visibility, for example placing the Public Methods before the Private Methods.
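A hypothetical class skeleton following that order could look like this (all names are illustrative):

    public interface IVisitorStore
    {
        void Save(string visitor);
    }

    public class VisitorRegistry
    {
        // 1. Fields
        private readonly IVisitorStore _store;

        // 2. Constructors
        public VisitorRegistry(IVisitorStore store)
        {
            _store = store;
        }

        // 3. Properties
        public int Count { get; private set; }

        // 4. Public Methods, with the private details directly below them
        public void CheckIn(string visitor)
        {
            Persist(visitor);
            Count++;
        }

        private void Persist(string visitor)
        {
            _store.Save(visitor);
        }
    }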

Besides the global structure of elements, the Newspaper Metaphor is also useful for the flow of the code file. Just like a newspaper, you must make sure that you can read your file from top to bottom without jumping back up (too much). This is also called the Step-Down Rule.

Notice that this sequence is a standard convention from the Microsoft Guidelines, not an invented idea. Also, when writing in this structure, always think of the relationship between components:
if a public method delegates to several private methods, place those methods as close to the public method as possible. If you’re reading a headline in a newspaper and find it interesting, you want to read the details directly under the headline, not on a different page.

With this last pattern, you can start searching for the perfect article in your code and make it worth the read.

Conclusion

When people don’t find Formatting very interesting or don’t see its added value, they aren’t thinking in terms of people but in terms of tasks. They’re Task-Minded and not People-Minded.

I hope that with this short article about Formatting, you too will be or become People-Minded and use these people as your team’s strength.

There are a lot more patterns for Formatting that increase readability and support Humanity; but I’m sure that, with this as a starting point, you will find your own patterns that make YOU more comfortable. Try to make a Formatting Guidelines document for your team, so everyone feels comfortable about the whole Formatting practice and everyone’s daily development work is just that little bit more restful and less annoying.

My mistake was to write a long document with all the possible approaches I could think of; but not everyone is that disciplined in adopting these patterns, so try to limit yourself to a One-Page-Document (Example in Java) with maybe an example class that shows:

  • How do you sort fields, members?
  • How to write flow statements (return, break, continue, …)?
  • Where do you place enters to increase readability?

Codit is a company whose primary focus is People, so it’s only logical that we pay attention to this practice of Formatting and make sure that everyone feels comfortable reading code.

 

“People develop software”

Categories: Architecture
written by: Stijn Moreels

Posted on Tuesday, March 28, 2017 3:50 PM

by Glenn Colpaert

A couple of weeks ago we hosted a webinar on Logic Apps, BizTalk and the Enterprise Integration Pack. This blog post gives you a small recap and invites you to take the next steps with the Azure iPaaS platform.

Integration has traditionally been all about ESBs, message brokers and the exchange of messages between on-premises systems. Today, many companies wish to integrate beyond their firewall, typically with SaaS-based applications. This change is reflected in the uplift of API-based integration using lightweight protocols.

What we are really talking about here is modern integration, where Logic Apps, BizTalk and the Enterprise Integration Pack all play a major role. All of the above components allow you to deliver powerful hybrid connectivity with ease.

The webinar I did a couple of weeks ago gives you an overview of Logic Apps and the Enterprise Integration Pack. 'How does it work', 'How is it made' and 'How does it all fit together'? Just a couple of the questions you will find the answer to in the video below.

Next Steps

At Codit we are heavily investing in building best practices, approaches and lessons learned when it comes to Logic Apps. All these resources are collected on this blog, so I invite you to scroll through these sections for a more in-depth overview of all the possibilities of Logic Apps and Enterprise Integration in the cloud.

Community Resources

Thanks for reading!

Cheers,

Glenn

Categories: Azure
Tags: Azure, Logic Apps
written by: Glenn Colpaert

Posted on Tuesday, March 28, 2017 10:30 AM

by Toon Vanhoutte

Debatching is a common need in enterprise integration. This blog post covers several ways to achieve debatching in Logic Apps, for both JSON and XML messages. Monitoring and exception handling are covered as well.

SplitOn Command

Logic Apps offer the splitOn command, which can only be added to the trigger of a Logic App. In the splitOn command, you provide an expression that results in an array. For each item in that array, a new instance of the Logic App is fired.

Debatching JSON Messages

Logic Apps are completely built on APIs, so they natively support JSON messages. Let's have a look at how we can debatch the JSON message below by leveraging the splitOn command.
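A message along these lines (illustrative; three orders, so we expect three child messages):

    {
        "OrderBatch": {
            "Orders": [
                { "OrderId": "001", "Customer": "Contoso" },
                { "OrderId": "002", "Customer": "Fabrikam" },
                { "OrderId": "003", "Customer": "AdventureWorks" }
            ]
        }
    }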

Create a new Logic App and add the Request trigger.  In the code view, add the splitOn command to the trigger.  Specify the following expression: @triggerBody()['OrderBatch']['Orders']
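The trigger in the code view then looks roughly like this:

    "triggers": {
        "manual": {
            "type": "Request",
            "kind": "Http",
            "splitOn": "@triggerBody()['OrderBatch']['Orders']"
        }
    }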

Use Postman to send the JSON message to the HTTP trigger.  You'll notice that one input message triggers 3 workflow runs.  A very easy way to debatch a message!

Debatching XML Messages

In old-school integration, XML is still widespread. When dealing with flat file or EDI messages, they are also converted into XML. So, it's required to have this working for XML messages as well. Let's consider the following example.
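For example, a batch along these lines (illustrative, mirroring the JSON example):

    <ns0:OrderBatch xmlns:ns0="http://namespace">
        <ns0:Order>
            <ns0:OrderId>001</ns0:OrderId>
        </ns0:Order>
        <ns0:Order>
            <ns0:OrderId>002</ns0:OrderId>
        </ns0:Order>
        <ns0:Order>
            <ns0:OrderId>003</ns0:OrderId>
        </ns0:Order>
    </ns0:OrderBatch>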

Update the existing Logic App with the following expression for the splitOn command: @xpath(xml(triggerBody()), '//*[local-name()=\"Order\" and namespace-uri()=\"http://namespace\"]').  In order to visualize the result, add a Terminate shape that contains the trigger body as the message.

Trigger the workflow again.  The result is as expected and the namespaces are nicely preserved!

Exception Handling

The advantage of this approach is that every child message immediately starts processing independently from the others. If one message fails during further processing, it does not impact the others and exception handling can be done on the level of the child message. This is comparable to recoverable interchange processing in BizTalk Server. In this way, you can better make use of the resubmit functionality. Read more about it here.

Let's have a look at what happens if the xPath expression is invalid. The following exception is returned: The template language expression evaluation failed: 'The template language function 'xpath' parameters are invalid: the 'xpath' parameter must be a supported, well-formed XPath expression. Please see https://aka.ms/logicexpressions#xpath for usage details. This behavior is as desired.

What happens if the splitOn command does not find a match within the incoming trigger message? Just change the xPath for example to @xpath(xml(triggerBody()), '//*[local-name()=\"XXX\" and namespace-uri()=\"http://namespace\"]'). In this case, no workflow instance gets triggered. The trigger has the Succeeded status, but did not fire. The consumer of the Logic App receives an HTTP 202 Accepted, so assumes everything went fine.

This is important to bear in mind, as you might lose invalid messages in this way. The advice is to perform schema validation before consuming a nested Logic App with the splitOn trigger.

Monitoring

Within the standard overview blade, you cannot see that the three instances relate to each other. However, if you look into the Run Details, you notice that they share the same Correlation ID. It's good to see that in the backend, these workflow instances can be correlated. Let's hope that such functionality also makes it to the portal in a user-friendly way!  

For the time being, you can leverage the Logic Apps Management REST API to build your custom monitoring solution.

For Each Command

Another way to achieve debatching-like behavior is by leveraging the forEach command. It's very straightforward to use.

Debatching JSON Messages

Let's use the same JSON message as in the splitOn example. Add a forEach command to the Logic App and provide the same expression: @triggerBody()['OrderBatch']['Orders'].
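In the code view, such a loop looks roughly like this (the Compose action is an illustrative placeholder):

    "actions": {
        "For_each": {
            "type": "Foreach",
            "foreach": "@triggerBody()['OrderBatch']['Orders']",
            "actions": {
                "Compose": {
                    "type": "Compose",
                    "inputs": "@item()",
                    "runAfter": {}
                }
            },
            "runAfter": {}
        }
    }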

If we now send the JSON message to this Logic App, we get the following result. Note that the forEach results in 3 iterations, one for each child message.

Debatching XML Messages

Let's see if the same experience applies to XML messages. Modify the Logic App to perform the looping based on this expression: @xpath(xml(triggerBody()), '//*[local-name()=\"Order\" and namespace-uri()=\"http://namespace\"]')

Now use the XML message from the first example to trigger the Logic App. Again, the forEach includes 3 iterations.  Great!

Exception Handling

I want to see what happens if one child message fails processing. Therefore, I take the JSON Logic App and add the Parse JSON action that validates against the schema below. Note that all fields are required.
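A schema along these lines (the fields are illustrative, matching the sample message above):

    {
        "type": "object",
        "properties": {
            "OrderId": { "type": "string" },
            "Customer": { "type": "string" }
        },
        "required": [ "OrderId", "Customer" ]
    }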

Take the JSON message from the previous example and remove a required field from the second order. This will cause the Logic App to fail for the second child message, but to succeed for the first and third one.

Trigger the Logic App and investigate the run history. This is a great result! Each iteration is processed independently from the others. This is quite similar to the splitOn behavior; however, it's more difficult to use the resubmit function.

You must understand that, by default, the forEach branches are executed in parallel. You can modify this to sequential execution: dive into the code view and add "operationOptions" : "Sequential" to the forEach, as shown below.
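The forEach then looks like this in the code view (actions left out for brevity):

    "For_each": {
        "type": "Foreach",
        "foreach": "@triggerBody()['OrderBatch']['Orders']",
        "operationOptions": "Sequential",
        "actions": {},
        "runAfter": {}
    }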

Redo the test and you will see that this has no influence on the exception behavior. Every loop gets invoked, regardless of whether the previous one failed.

Monitoring

The monitoring experience is great! You can easily scroll through all iterations to see which iteration succeeded and which one failed. If one of the actions within a forEach fails, the Logic App gets the Failed status assigned.

What should we use?

In order to have a real debatching experience, I recommend using the splitOn command within enterprise integration scenarios. The fact that each child message immediately gets its own workflow instance assigned makes the exception handling strategy easier and operational interventions more straightforward.

Do not forget to perform schema validation first and only then invoke a nested workflow with the Request trigger, configured with the splitOn command. This will ensure that no invalid message disappears. Calling a nested workflow also offers the opportunity to pass the batch header information via the HTTP headers, so you can preserve header information in the child messages. Another way to achieve this is by executing a Transformation in the first Logic App that adds the header information to every child message.

The nested workflow cannot have a Response action, because it's decorated with a splitOn trigger.  If you want to invoke such a Logic App, you need to update the consuming Logic App action with the following expression: "operationOptions": "DisableAsyncPattern".

If we run the setup explained above, we get the following debatching experience, with header information preserved!

Conclusion

Logic Apps provide all the required functionality to debatch XML and JSON messages. As always, it's highly encouraged to investigate all options in depth and to conclude which approach suits your scenario best.

Thanks for reading!
Toon

Categories: Azure
written by: Toon Vanhoutte