
Codit Blog

Posted on Wednesday, December 21, 2016 11:23 AM

Tom Kerkhove by Tom Kerkhove

When maintaining dozens, or even hundreds, of Azure resources, it is crucial to keep track of what is going on and how they behave. To achieve this, Azure provides a set of functionalities and services to help you monitor them.

One of those functionalities is the ability to configure Azure Alerts on your assets. By creating an alert you define how and under what circumstances you want to be notified of a specific event, e.g. sending an email when the DTU consumption of your production database exceeds 90%.

Unfortunately, receiving emails for alerts can be annoying and is not a flexible approach for handling alerts. What if I want to automate a process when our database is overloaded? Should I parse my mailbox? No!

Luckily, you can configure your Azure Alerts to push to a webhook where you process the notifications, and Azure Logic Apps is a perfect fit for this! By configuring all your Azure Alerts to push their events to your Logic App, you decouple the processing from the notification medium and can easily change the way an event is handled.

Today I will show you how you can push all your Azure Alerts to a dedicated Slack channel, but you can use other Logic App Connectors to fit your needs as well!

Creating a basic Azure Alert Handler

For starters, we will create a new Logic App that will receive all the event notifications - in this example it is called azure-alert-handler.

Note: It is a best practice to host it as close to your resources as possible, so I provisioned it in West Europe.

Once it is provisioned, we can start by adding a new Request trigger connector. This trigger will expose a webhook that can be called on a dedicated URL. This URL is generated once you save the Logic App for the first time.

As you can see, you can also define the schema of the events that will be received, but more on that later.

Now that our Logic App is ready to receive events, we can configure our Azure Alerts. In this scenario we will create an alert on an API, but you could do this on almost any Azure resource. Here we will configure it to get a notification once there are more than 100 HTTP Server Errors in 5 minutes.

To achieve this, navigate to your Web App and search for "Alerts".

Click on "New Alert", define what the alert should monitor and specify the URL of the Request Trigger in our Logic App.

We are good to go! This means that whenever our Alert changes its state, it will push a notification to the webhook of our Logic App.

You can see all events coming into our Logic App by using the Trigger History and All Runs views of the Logic App itself.

When you select a Run you can view the details and thus what was sent. Based on this sample payload, I generated the schema with jsonschema.net and used that to define the schema of the webhook.
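
As an illustration, the payload pushed to the webhook looks roughly like this; the values below are placeholders, and the fields that matter for our scenario are the ones used later in this post (status, context.name, context.resourceName, context.resourceRegion and context.portalLink):

    {
      "status": "Activated",
      "context": {
        "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/alertrules/<alert-name>",
        "name": "HTTP Server Errors > 100",
        "description": "<alert-description>",
        "conditionType": "Metric",
        "resourceName": "my-web-app",
        "resourceType": "microsoft.web/sites",
        "resourceRegion": "West Europe",
        "portalLink": "https://portal.azure.com/<link-to-the-resource>"
      },
      "properties": {}
    }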

Didn't specify the schema? Don't worry, you can still add it later!

While this is great, I don't want to come back every 5 minutes to see whether or not there were new events.

Since we are using Slack internally, it is a good fit to consolidate all alerts in a dedicated channel so that we have everything in one place.

To push the notification to Slack, add a new Slack (Post Message) Action and authenticate with your Slack team.

Once you are authenticated, it is fairly simple to configure the connector - you need to tell it which channel you want to push messages to, what the message should look like, and other settings such as the name of the bot.

Here is an overview of all the settings of the Slack connector that you can use.

I used the @JSON function to parse the JSON input dynamically; later on we will have a look at how we can simplify this.

"Alert *'@{JSON(string(trigger().outputs.body)).context.name}'* is currently *@{JSON(string(trigger().outputs.body)).status}* for *@{JSON(string(trigger().outputs.body)).context.resourceName}* in @{JSON(string(trigger().outputs.body)).context.resourceRegion}_(@{JSON(string(trigger().outputs.body)).context.portalLink})_"

Once our updated Logic App is triggered you should start receiving messages in Slack.

Tip - You can also Resubmit a previous run; this allows you to take the original input and re-run the Logic App with that information.

Awesome! However, it tends to be a bit verbose because it mentions a lot of information, even when the alert has already been resolved. Nothing we can't fix with Logic Apps!

Sending specific messages based on the alert status

In Logic Apps you can add a Condition that allows you to execute certain logic if some criteria are met.

In our case we will create a more verbose message when a new Alert is 'Activated', while for other statuses we only want to give a brief update about that alert.
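
In the code view, such a condition boils down to a simple expression on the status field of the incoming payload - a minimal sketch, assuming the Request trigger schema described earlier:

    @equals(triggerBody()['status'], 'Activated')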

As you can see we are no longer parsing the JSON dynamically but rather using dynamic content, thanks to our Request Trigger Schema. This allows us to create more human-readable messages while omitting the complexity of the JSON input.

Once new alerts come in, the Logic App will now send customized messages based on the event!

Monitoring the Monitoring

The downside of centralizing the processing of something is that you create a single point of failure. If our Logic App is unable to process events, we won't see any Slack messages and will assume that everything is fine, while it certainly isn't. Because of that, we need to monitor the monitoring!

If you search for "Alerts" in your Logic App, you will notice that you can create alerts for it as well.
As you can see, there are no alerts configured by default, so we will add one.

In our case we want to be notified if a certain number of runs is failing. When that happens we want to receive an email. You could set up another webhook as well, but I think emails are a good fit here.

Wrapping up

Thanks to this simple Logic App I can now easily customize the processing of our Azure Alerts without having to change the Alerts themselves.

This approach also gives us more flexibility in how we process them - if we have to process database alerts differently, or want to replace Slack with SMS or another integration, it is just a matter of changing the flow.

But don't forget to monitor the monitoring!

Thanks for reading,

Tom Kerkhove.

PS - Thank you Glenn Colpaert for the jsonschema.net tip!

Categories: Azure
Tags: Logic Apps
written by: Tom Kerkhove

Posted on Wednesday, December 21, 2016 9:35 AM

Pieter Vandenheede by Pieter Vandenheede

On December 13th I spoke at BTUG.be XL about BizTalk 2016, the new features and the key aspects of its Always On support. In this post I share with you my slide deck.

Last week, I had a great time at BTUG.be, while presenting my session on BizTalk 2016.

I presented the new features in BizTalk Server 2016 RTM and a few takeaways from SQL Server 2016. More specifically, and in-depth, I covered SQL Server AlwaysOn support for BizTalk Server 2016 on-premises and in the Azure cloud, as well as an intro to the new Logic Apps adapter and how to install and connect it to your on-premises BizTalk Server.

As promised there, please find my slide deck below via SlideShare:

Contact me if you have any questions regarding the slides; I'd be happy to answer them.

The other speakers there were Glenn Colpaert (session about Azure Functions), Kristof Rennen (session on Building scalable and resilient solutions using messaging) and Nino Crudele (session on Holistic approaches to Integration).

As always, it was nice to talk to the people present. A big thank you to BTUG.be for having me again!

Enjoy the slide deck!

Pieter

Categories: Community
written by: Pieter Vandenheede

Posted on Thursday, December 1, 2016 2:05 PM

Massimo Crippa by Massimo Crippa

Don’t dump your internal data model on your clients. Work outside-in, design your API with the clients in mind. Build your server side API once and then tailor the API to different clients (Backend-For-Frontends pattern).

The nature of the mobile experience is often different from the desktop experience: different screen sizes and different functionalities. We normally display less data, and it's good practice to perform fewer calls to avoid draining the battery.
A common way to accommodate more than one type of device and UI is to add more functionality over time to a single compound API for multiple clients. At the end of the day this can result in a complex API that is hard to maintain.

The BFF pattern offers a possible solution to this problem: a dedicated backend API for every type of client. The pattern is growing in popularity, especially its implementation within API management gateways.

In this post, we will see how to leverage the power and flexibility of the Azure API Management policy engine to reduce the complexity of one of the downstream APIs and therefore make it more suitable for mobile clients.

Excel as data service

On August 3rd, Microsoft announced the general availability of the Microsoft Excel REST API for Office 365. This API opens new opportunities for developers to create new digital experiences using Excel as a backend service.

Carpe diem! Let's not miss the chance and use Excel as if it were one of the downstream services that power my brand new mobile application. To use Excel as a data service I first created a new Excel file in my Office 365 drive and created a table inside the worksheet to define the area where the data will be stored.

To write a single row to the Excel workbook we must:

  • refer to the workbook (specifying the user id and the workbook id)
  • create a session in order to get the workbook-session-id value
  • post/get the data, adding the “workbook-session-id” as an HTTP header (see the sketch below)
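
As a rough C# sketch of those steps against the Microsoft Graph endpoint (the user id, workbook id, table name and exact paths are placeholders and assumptions; check the Excel REST API documentation for the authoritative URLs):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class ExcelDataServiceSketch
    {
        static async Task Main()
        {
            // Placeholders - replace with a real user id, workbook (drive item) id, table name and token.
            var userId = "<user-id>";
            var workbookId = "<workbook-item-id>";
            var tableName = "Messages";
            var workbookUrl = $"https://graph.microsoft.com/v1.0/users/{userId}/drive/items/{workbookId}/workbook";

            var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<access-token>");

            // 1. Create a session and read the "id" value from the response.
            var sessionResponse = await client.PostAsync(
                $"{workbookUrl}/createSession",
                new StringContent("{ \"persistChanges\": true }", Encoding.UTF8, "application/json"));
            var sessionJson = await sessionResponse.Content.ReadAsStringAsync();
            var sessionId = JsonDocument.Parse(sessionJson).RootElement.GetProperty("id").GetString();

            // 2. Every subsequent call carries the workbook-session-id header.
            client.DefaultRequestHeaders.Add("workbook-session-id", sessionId);

            // 3. Post a single row to the table...
            await client.PostAsync(
                $"{workbookUrl}/tables/{tableName}/rows/add",
                new StringContent("{ \"values\": [[ \"Massimo\", \"Hello from the mobile app\" ]] }",
                    Encoding.UTF8, "application/json"));

            // 4. ...and read the rows back.
            Console.WriteLine(await client.GetStringAsync($"{workbookUrl}/tables/{tableName}/rows"));
        }
    }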

And what about the structure of the data to be sent? What about the response? The picture below shows a request/response example for getting the rows from the Excel table.

BFF (aka “experience APIs”)

The goal of this exercise is to create an API dedicated to the mobile experience: remove the complexity in the URL and HTTP headers, have simpler inbound/outbound data contracts and hide the details of the downstream service.

Here is where API Management comes into the picture: it allows the API publisher to change the behavior of the API through configuration policies, so that developers can iterate quickly on the client apps and innovation can happen at a faster pace.

An API has been added to the APIM gateway and three operations have been configured: init (to create a session), send message (to save a message to the Excel workbook) and get messages (to list all the sent messages).

Simplify the URL

The first step is to create the BFF mobile API and then add the rewrite-uri policy to expose a simpler URI on the gateway.
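
As an illustration (the template below and the named values for the user and workbook ids are assumptions, not the exact configuration used here):

    <inbound>
        <base />
        <!-- The gateway exposes a short operation URL and rewrites it to the long Excel REST API path. -->
        <rewrite-uri template="/users/{{excel-user-id}}/drive/items/{{excel-workbook-id}}/workbook/tables/Messages/rows" />
    </inbound>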

Remove HTTP header complexity

In this step we want to avoid having to inject the "workbook-session-id" header all of the time. The main idea is to create an init operation that calls "createSession" on the Excel REST API, reads the "id" value from the response and stores it as the workbook-session-id in the gateway cache.

To achieve that, let's use a combination of policies associated with the init operation; a sketch follows the list below.

  • set-body to specify the data that needs to be persisted on the Excel workbook
  • set-variable to read the "id" from the response and store it in the workbook-session-id variable
  • cache-store-value to store the workbook-session-id in the cache, using the JWT token as the cache key
  • set-body to return a custom 200 response
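
A minimal sketch of how those policies could be combined on the init operation; the cache key (the caller's Authorization header) and the cache duration are assumptions, not the exact policy from the original setup:

    <inbound>
        <base />
        <!-- Tell the Excel API to persist the changes made during this session. -->
        <set-body>{ "persistChanges": true }</set-body>
    </inbound>
    <outbound>
        <base />
        <!-- Read the "id" value from the createSession response. -->
        <set-variable name="workbook-session-id"
                      value="@((string)context.Response.Body.As<JObject>(preserveContent: true)["id"])" />
        <!-- Store it in the gateway cache, keyed per caller (JWT token). -->
        <cache-store-value duration="300"
                           key="@(context.Request.Headers.GetValueOrDefault("Authorization", "anonymous"))"
                           value="@((string)context.Variables["workbook-session-id"])" />
        <!-- Hide the downstream response and return a custom 200 response. -->
        <set-body>{ "result": "session created" }</set-body>
    </outbound>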

On the outbound, in case of a valid response, the session identifier is read via context.Response.Body.

The policy associated with the get messages operation retrieves the workbook-session-id from the cache, adds it to the session header and forwards the request to the downstream service.
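
And the counterpart on the get messages operation, again as a sketch with the same assumption about the cache key:

    <inbound>
        <base />
        <!-- Retrieve the cached session id and inject it as the session header. -->
        <cache-lookup-value key="@(context.Request.Headers.GetValueOrDefault("Authorization", "anonymous"))"
                            variable-name="workbook-session-id"
                            default-value="" />
        <set-header name="workbook-session-id" exists-action="override">
            <value>@((string)context.Variables["workbook-session-id"])</value>
        </set-header>
    </inbound>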

Simplify the data contract (message transformation)

The goal of this step is to have a data contract tailored to the client: simpler and more compact in terms of size.

The contract to send a message has been reduced to the minimum: a string. In the inbound policy the message is enriched with the name of the sender (taken from the JWT token) and a timestamp. The set-body policy is used to create the JSON object that is forwarded to the underlying API.
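
A possible shape for that inbound transformation; the claim name ("name") and the exact row contract expected by the Excel table are assumptions:

    <inbound>
        <base />
        <set-body>@{
            // The client sends a plain string; wrap it in the row structure the Excel table expects.
            var message = context.Request.Body.As<string>(preserveContent: true);
            var jwt = context.Request.Headers.GetValueOrDefault("Authorization", "").Replace("Bearer ", "").AsJwt();
            var sender = (jwt != null && jwt.Claims.ContainsKey("name")) ? jwt.Claims["name"][0] : "unknown";
            var row = new JArray(sender, message, DateTime.UtcNow.ToString("o"));
            return new JObject(new JProperty("values", new JArray { row })).ToString();
        }</set-body>
    </inbound>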

On the outbound channel the result set of the get messages operation is filtered to reduce the data transferred over the wire and mapped to a simpler JSON structure.

Hide the backend details

As a final step, some HTTP headers are deleted (product scope policy) to hide the details of the downstream service.

In Action

Conclusion

The BFF supports transformative design and moves the underlying system into a better, less-coupled state giving the dev teams the autonomy to iterate quickly on the client apps and deliver new digital experiences faster.

The tight coupling between the client and the API is therefore moved into the API Management layer, where we can benefit from capabilities like aggregation, transformation and the possibility to change the behavior of the API through configuration.

Cheers

Massimo

Categories: Azure
written by: Massimo Crippa

Posted on Tuesday, November 22, 2016 5:14 PM

Pim Simons by Pim Simons

Why do HTTP 400 status codes return a NACK, while HTTP 500 status codes return an ACK? And how to deal with this...
This post describes an interesting scenario we encountered at a customer and our considerations for dealing with it.

At one of our customers I had implemented the ReturnAddress messaging pattern (http://www.enterpriseintegrationpatterns.com/patterns/messaging/ReturnAddress.html), by using a generic BizTalk application which sends an asynchronous response message to a client application. The solution had been running successfully for some time, when we encountered a strange situation.

The BizTalk application uses a one-way WCF-Custom send port, using a wsHttpBinding, to send a message to the client application. Also, I had added the Delivery Notification functionality to make sure messages are delivered successfully.

It is important to realize that one-way send ports that use SOAP will receive a technical response containing an HTTP status code. If and when the send ports receive an HTTP status code in the 200 range, the Delivery Notification generates an ACK and BizTalk knows the message was successfully delivered.

So far, so good. The application had been through testing on the Test and Acceptance environments, had been deployed to the Production environment and had been running for several months without any problems. Until it appeared that some of the messages that were being sent using the generic BizTalk application were ‘not arriving’ at the client application. This would happen at random and, what was really strange, the logging in BizTalk showed that the message was successfully sent and BizTalk had received an ACK response as part of the Delivery Notification. Also there was no mention of an error in the event log of the BizTalk servers.

After some debugging we found the source of the problem. The message sent by BizTalk was successfully received by the client application; however, the client application encountered an error processing the message and returned the HTTP 500 status code. So now the question was: why is the Delivery Notification not generating a NACK response when an HTTP 500 status code is received? I had expected that any status code in the HTTP 400 or 500 range would result in a NACK.

This turned out not to be the case. While status codes in the HTTP 400 range will result in a NACK, the status codes in the HTTP 500 range will result in an ACK and BizTalk will view this message as successfully delivered at the client application. The logic behind this seems to be that the status codes in the HTTP 400 range indicate that the message was not received by the client application (hence the NACK) and the status codes in the HTTP 500 range indicate that the message was received by the client application, but that the client application encountered an exception. Since the message was delivered at the client application, BizTalk views this as a successful delivery and will generate an ACK as part of the Delivery Notification.

Unfortunately, there isn’t any documentation on MSDN on which status codes will return an ACK or NACK.

The documentation on the SOAP HTTP response states that “In case of a SOAP error while processing the request, the SOAP HTTP server MUST issue an HTTP 500 "Internal Server Error" response and include a SOAP message in the response containing a SOAP Fault element indicating the SOAP processing error”. For reference, see https://www.w3.org/TR/2000/NOTE-SOAP-20000508/#_Toc478383529.

Some discussion followed on the validity of catching the HTTP 500 error in BizTalk, since the message was successfully delivered and accepted by the client application. That means that, from a technical perspective, the responsibility would now lie at the client application to handle the error. From a functional responsibility perspective however, it was decided to find a way to catch the HTTP 500 error in BizTalk, as this would enable the customer's administrators to use the same resubmit functionality we had created by using a generic BizTalk error handling framework.

So I had to make sure the HTTP 500 status code was somehow caught, so that BizTalk would return a NACK which would result in the error handling catching the error. Fortunately, this can be achieved quite easily by implementing a WCF behavior on the one-way send port. The WCF behavior checks in the AfterReceiveReply message inspector if the reply is a fault message, and if so it will throw an exception using the fault description.
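
A minimal sketch of such a message inspector (the class name is illustrative, not the exact code used at the customer):

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    // Turns SOAP fault replies (e.g. HTTP 500 responses) into exceptions so that
    // Delivery Notification on the send port generates a NACK instead of an ACK.
    public class FaultAwareMessageInspector : IClientMessageInspector
    {
        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            return null; // Nothing to do on the way out.
        }

        public void AfterReceiveReply(ref Message reply, object correlationState)
        {
            if (reply != null && reply.IsFault)
            {
                // Work on a buffered copy so the message can still be read afterwards.
                MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
                reply = buffer.CreateMessage();

                MessageFault fault = MessageFault.CreateFault(buffer.CreateMessage(), 64 * 1024);
                throw new CommunicationException(
                    "Downstream service returned a fault: " + fault.Reason.GetMatchingTranslation().Text);
            }
        }
    }

The inspector would typically be hooked up through an IEndpointBehavior and a BehaviorExtensionElement so that it can be selected on the WCF-Custom send port.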

By implementing this WCF behavior on a one-way send port, BizTalk will generate a NACK when a response is received with an HTTP status code in the 400 or 500 range. Sometimes the default behavior surrounding technical responsibility doesn't align with the requirements and responsibilities from a functional point of view, and this may just offer a solution for you as well.

Categories: BizTalk
written by: Pim Simons

Posted on Monday, November 21, 2016 4:11 PM

Stijn Moreels by Stijn Moreels

What drives software design? Several mechanisms have been introduced over the years that explain the different approaches you can use or combine. The following article talks about just another technique: Responsibility-Driven Design, and how I use it in my daily development work.

1    INTRODUCTION

“You cannot escape the responsibility of tomorrow by evading it today”
-Abraham Lincoln

Have you taken responsibility in your work lately? I find it frustrating that some people don't take responsibility in their software design and just write software until it "works". We all need to take responsibility in our daily work; software design/development work is no exception.

What drives software design? Several mechanisms have been introduced over the years that explain the different approaches you can use or combine. Test-Driven Development talks about writing tests before writing production code, Data-Driven Development talks about defining processing strategies in function of your data, Domain-Driven Design talks about solving a domain problem by using the vocabulary of the Ubiquitous Language at a high level of abstraction, and Behavior-Driven Development is, in short, an extension of Test-Driven Development with concepts from Domain-Driven Design…

The following article talks about just another technique: Responsibility-Driven Design, and how I use it in my daily development work.

Rebecca Wirfs-Brock (the founder of RDD) talks about some constructs and stereotypes that define this approach and help you build a system that takes responsibility in the first place. I'm not going to give you a full description of her technique; instead I'm going to give you a very quick view of how it can be used to think about your daily design decisions, at both high-level and low-level software design.

A lot of programmers don't think they should be involved in designing the software: "That's work for the Application Architect". They're so wrong. This article is for developers, but also for project managers and architects.

"Software Development is all design"
—Eric Evans

Your goal should not be to find THE solution at the beginning of your design process; it should be your goal to have a flexible design that you can refactor constantly to have THE solution at the end of your design process.

“Good programmers know they rarely write good code the first time”
—Martin Fowler

"The problem with software projects isn't change, per se, because change is going to happen; the problem, rather, is the inability to cope with change when it comes."
—Kent Beck

2    DEFINING RESPONSIBILITIES

“Understanding responsibilities is key to good object-oriented design”
—Martin Fowler

Defining responsibilities is crucial to a good software design. Use Cases are a good starting point for defining responsibilities. These cases state information in "What if… Then… and How" chains. However, it isn't the task of use cases to define the coordination or control of the software or the design; those tasks you must define yourself.
 
Rebecca describes some Roles, different stereotypes of responsible implementations, to help you define software elements. A "Role" is a collection of related tasks that can be bundled into a single responsibility.

  • Service providers: designed to do things
  • Interfaces: translate requests and convert from one level of abstraction to another
  • Information holders: designed to know things
  • Controllers: designed to direct activities
  • Coordinators: designed to delegate work
  • Structurers: manage object relations or organize large numbers of similar objects

Some basic principles of responsibility are: doing, knowing and deciding. One element does something, another knows something and another decides what's next or what must be done. This can help in the process of defining responsibilities. Mixing more than one of these principles in a single element is not a good sign in your design.

When structuring the requirements, and use cases, try to find the work that must be done, the information that must be known, the coordination/control activities, possible solutions to structure these elements…

It’s a good practice to define these responsibilities in the Software Design Document for each significant element. Rebecca talks about CRC Cards which list all the possible information one element knows, what work it must do…

3    ASSIGNING RESPONSIBILITIES 

Data-Driven Design talks about a centrally controlled system in its approach to keep application logic in one place; this gets complex quickly though. Responsibility-Driven Design is all about delegated control: responsibilities are assigned to elements, and each element can therefore be reused very easily because it only fulfills its own responsibilities and not those of other elements in the design. Distributing too many responsibilities, however, can lead to weak objects and excessive collaboration/communication between objects.

When assigning responsibilities, there are a lot of Principles/Patterns that help me to reflect constantly on my design. Here are some I think about daily:

Keep information in one place:     
“Single Point of Truth Principle”: principle that states that each piece of information is stored exactly once.

Keep a responsibility small:     
“Law of Demeter - Principle of Least Knowledge”: each element should only have limited knowledge about itself and about other elements (see also Information Hiding and Information Expert).

Wrap related operations:     
“Whole Value Object” (for example): wrap related operations/information in an object on its own and give it a descriptive name.

Only use what you need:    
“Interface Segregation Principle”: no client should be forced to depend on methods it does not use; keep interfaces small and focused.

Aligned responsibility:    
“Single Responsibility Principle” (a class should have only one reason to change): each part of your software should have a single responsibility, and this responsibility should be entirely wrapped in that part.
 
And so, so many more…

4    CLASS RESPONSIBILITIES

When defining classes, the Single Responsibility Principle comes to mind. Each class should do only one predefined task/responsibility. A trick I use to keep myself aware of this responsibility is to write, in a comment above each class, what that class should do and only do.
 
If you find yourself writing words like "and", "or", "but", "except"… you're probably trying to do more than just one thing. It's also very useful, when adding new code, to check whether it still falls within the class's responsibility; if not, rethink your design to find the exact spot where your new code belongs. Also, try to state the responsibility in a single sentence.
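
For example (purely illustrative):

    // Responsibility: uploads a single attachment to the file system - and nothing else.
    public class FileAttachmentUploader
    {
    }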
 
Class names containing "manager", "info", "process"… are also an indication that the class is doing more than it should. It could also mean that you gave the class such a general name because a more specific one would not cover everything the class does.

In that way, you have a "nice" Anti-Pattern in place which I like to call the Responsibility-Hiding Anti-Pattern. Hiding responsibilities should (of course) be avoided; it only obscures the design and is a weak excuse for a bad design. A class named "ElementProcessor" is a perfect example: "What's happening in the Process part?". It feels as if the class contains some black magic and you can simply tell it: "Do the magic!". Use strong descriptions when defining responsibilities for your classes; if you can't, refactor the class until you can.

One of the best reasons to create classes is Information Hiding. If a subset of methods in a class uses a subset of its information, then that part could/should be refactored into a new class. This way we have not only hidden the information, we have also grouped the data and logic so they are on the same Rate of Change.

Can you spot the following misplaced responsibility?
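
An illustrative sketch of the kind of code meant here:

    using System.IO;

    public enum AttachmentType { Image, Text }

    public class Attachment
    {
        public string Name { get; set; }
        public AttachmentType Type { get; set; }
        public byte[] Content { get; set; }
    }

    public class FileAttachmentUploader
    {
        public void UploadAttachment(Attachment attachment)
        {
            // Misplaced responsibility: the uploader decides which extension the attachment gets.
            string extension = GetExtension(attachment.Type);
            string path = Path.Combine(@"C:\attachments", attachment.Name + extension);
            File.WriteAllBytes(path, attachment.Content);
        }

        private static string GetExtension(AttachmentType type)
        {
            return type == AttachmentType.Image ? ".png" : ".txt";
        }
    }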
 


Is it the job of the File Attachment Uploader class to know which extension the Attachment should get? What if I need an FTP Attachment Uploader?

For people who want to read more: assigning too many responsibilities is actually related to the Anti-Patterns from Brown. The Blob/God-Class Anti-Pattern (a Software Development Anti-Pattern) talks about a single class that holds a bunch of information, exposes some logic… a class that contains SO MANY RESPONSIBILITIES; and the Swiss Army Knife Anti-Pattern (a Software Architecture Anti-Pattern) talks about a complex interface (with multiple implementations) that has multiple RESPONSIBILITIES and is used as the solution to too many software problems (like a Swiss Army Knife).

5    FUNCTION RESPONSIBILITIES

I told you that I would go very deep; now I'm going to talk about just a function. What kind of responsibility does a function have, for example? For a start, the name of a function should always be a verb: something to do.

Naming is key in good communication, and it also follows another principle: the Principle of Least Surprise. Just by looking at a name you should have a clue about what is being done in that function/class/package… so you don't get surprised.

A function should (definitely) only do one thing. Robert C. Martin talks about ways to spot whether a function does more than one thing: if you can extract a part of the function and give it a meaningful name that doesn't merely restate the original function name, you're doing more than one thing, and so you have multiple responsibilities.

Looking back at the previous example (after refactoring the "GetExtension" method out): does "UploadAttachment" do one thing? No, it first derives the exact path from attachment-related information.
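
An illustrative sketch of that intermediate stage (assuming the extension now lives on the Attachment itself):

    using System.IO;

    public class FileAttachmentUploader
    {
        public void UploadAttachment(Attachment attachment)
        {
            // Still doing two things: deriving the location and writing the file.
            string location = GetAttachmentLocation(attachment);
            File.WriteAllBytes(location, attachment.Content);
        }

        private static string GetAttachmentLocation(Attachment attachment)
        {
            return Path.Combine(@"C:\attachments", attachment.Name + attachment.Extension);
        }
    }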


Of course, this example can still be refactored, and please do. Also, note that the extracted function is called "GetAttachmentLocation" and not "GetLocation", because we added the "Attachment" part. The function logically gets an attachment as an argument from the "UploadAttachment" function.

Always try to use descriptive names for your functions and say WHAT the function does, not HOW it does it. Name your function after its responsibility. If we named the function "GetLocation" there would be no logical explanation for why we pass an Attachment to it, because that isn't part of the function's responsibility.


After another refactoring session, we could see that there's maybe a missing concept: why doesn't the Attachment have a location as part of its information? Also, I don't like to see a function with one or multiple parameters AND a return value; it violates the Command-Query Separation Principle.
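
Another illustrative sketch of where that could lead (assuming the Attachment now carries its own Location):

    using System.IO;

    public class FileAttachmentUploader
    {
        public void UploadAttachment(Attachment attachment)
        {
            // Delegates only; each step sits one level of abstraction lower.
            AssignAttachmentLocation(attachment);
            SaveAttachmentToFileSystem(attachment);
        }

        private static void AssignAttachmentLocation(Attachment attachment)
        {
            attachment.Location = Path.Combine(@"C:\attachments", attachment.Name + attachment.Extension);
        }

        private static void SaveAttachmentToFileSystem(Attachment attachment)
        {
            File.WriteAllBytes(attachment.Location, attachment.Content);
        }
    }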


Note that we also extracted the function containing the file-stream-related logic. We did this so each function called from "UploadAttachment" is on the same level of abstraction. Each function should do what its name says it does, no surprises. And every time you go to the next implementation, you should get deeper, more concrete and less abstract. This way each function exists on one and only one layer of abstraction. "UploadAttachment" is on a higher layer of abstraction than "AssignAttachmentLocation" and "SaveAttachmentToFileSystem".

It's the responsibility of the "UploadAttachment" function to delegate to the two functions and not to try to do something concrete itself.

Now, we could still refactor this, because there isn't any exception handling, for example. Just like refactoring, software design is an iterative process. Please constantly refactor your code towards a better approach.

I rename my classes/functions/… daily, for example. I then look at the code from a distance and think: "Does this really explain what I want to say?", "Is there a better approach?", "Is it the responsibility of this variable to know this?", "Does this function only do what its responsibility says?" …

6    DOCUMENTATION RESPONSIBILITIES

One of the reasons documentation exists is to define responsibilities. Many developers don't see the purpose of writing software design documentation, because it's extra work and gets out of date fast, but also because it explains something they already know (or think they know). In this document, you describe what each element/package's responsibility is, why you named something the way you did…
 
Most of the time, I define a separate block of documentation for each layer in my design (see Layered Architecture). Each title starts with the same section: Purpose - what is the reason this package exists? After that, a Description section describes common information and the responsibility the layer has. Next is the UML Schema/Interaction Diagram section, where schemas are placed to give a technical description of how objects are connected (UML) and collaborate (Interaction) with each other.

Eric Evans states that we should name our layers not only by their technical terms, like Infrastructure Layer, Domain Layer, Application Layer, Presentation Layer… but also by their Domain Name. What role does this layer play in your software design?

So, to summarize:

  • Title
  • Purpose
  • Description
  • UML Schema
  • Interaction Diagram

Just like writing tests makes you think about dependencies and helps you rework your design towards a better approach, writing high-level software documentation helps me understand the purpose of each layer in the design. While typing purposes/descriptions/responsibilities… in my documentation, I sometimes stop and think: "Didn't I just write a piece of code that doesn't fall within this layer's responsibility?".

Interaction Diagrams are less popular than UML, but they describe the actual flow your design follows. You quickly find the spots in your diagrams where there are just too many arrows. This also helps you think about the coordination and control of your design. Please do not underestimate this kind of approach; it helped me to see the design as a whole.

If your team plans a weekly code freeze, this could be an ideal time to update the documentation and schemas. This not only keeps track of your changes and responsibilities, it also helps to introduce the design to new members of the team.

This kind of approach helps me to move elements through the layers to find their right home.

7    CONCLUSION

When writing every word of code, think about what you're doing. Is this the right place to put it? Is it my job to know that kind of information? Do I have the responsibility to do this? Why does this element have to make that decision? …

Every word, every line, every class, every package… has a responsibility and a purpose for existing. If you can't say in a strong description why this class knows something, why a function has an argument, why a package is named that way… then you should probably put on your refactoring hat.
 
@Codit we aren't satisfied just because our code works. The first obvious step in programming is making your code work; only then does the real work begin…

Think about what you're doing and take responsibility.

Categories: Architecture
Tags: Design
written by: Stijn Moreels