Codit Blog

Posted on Tuesday, February 28, 2017 4:28 PM

by Stijn Moreels

Introduction

My favorite topic in software development is code quality; and to be specific: Refactoring.

Refactoring, you ask? One of the very building blocks of software quality is refactoring. You simply can't deliver a high-quality software product without refactoring; period. Everyone is responsible for quality (everyone!); therefore, everyone is responsible for refactoring their software.

What do I mean by “Continuous Refactoring”? Related terms are Merciless Refactoring and The Boy Scout Rule. The main principle is that you constantly refactor, without mercy, to improve the design of your code. The Boy Scout Rule states that “you must leave the playground cleaner than you found it”.

Some teams are still cautious about this practice because they see the risk of change; what they don’t see is the risk of keeping it and the cost of working around it.

Always search for places that Smell; be critical about your work and call everything into question.

The current school of software design teaches the different Design Patterns, Principles, Practices, and other solutions to design problems. What is sometimes forgotten is that it is a lot more interesting and valuable to learn the problems instead of the solutions.

"See the Design Patterns in the context of Refactoring"
-Joshua Kerievsky, Refactoring to Patterns

“Refactoring allows us to change our mind”
-Martin Fowler, from Refactoring

“For each desired change, make the change easy, then make the easy change”
-Kent Beck

Values, Principles, and Practices

Before I explain anything further about Continuous Refactoring, let me walk you through some values, principles, and practices that are required to perform refactoring.

Refactoring is not just “rewriting code” or anything like that. Some people think they refactor by changing some code, going through every error until it compiles, and then believing that, because it compiles, they have done “refactoring”.

Refactoring is a process of well-structured steps, in each of which tests play a fundamental role.

“Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which is "too small to be worth doing"”
-Martin Fowler, Refactoring

In the line of Extreme Programming (XP), I will go through some values, principles, and practices that I hold high.

"Driving is not about getting the car going in the right direction.
Driving is about constantly paying attention, making a little correction this way, a little correction that way"
-Kent Beck (Extreme Programming, Embrace Change)

Values

Communication is one of the values (maybe THE value) in Extreme Programming that is important for effective cooperation in a team environment. Here as well, refactoring plays a huge part. It’s by performing refactoring that you create more understanding of your code base.

This is called Comprehension Refactoring, one of the Workflows of Refactoring. While reading the code base, you change a variable name, you move a function around, you inline a parameter…

By doing so, you increasingly learn your code base and become familiar with it.

Another value I would like to discuss is Simplicity. “What is the simplest thing that could possibly work?”. At first this may look like a “simple” value, but I think it’s one of the hardest ones. When is something simple? When you can understand it without too much thinking? When it’s obvious?

A lot of refactoring patterns (low-level and high-level) have Simplicity in mind. Always find a way to make your code simpler, more obvious, more explicit... so anyone can immediately understand what your intentions are.

Simplicity only works in a defined context: when something is simple for you, it may not be simple for your other team members, and is therefore not Simple Enough. That’s also one of the excuses people use to justify their bad code: choosing No Design as the alternative to Big-Design-Up-Front; instead, you must use Little-Design-Up-Front or Enough-Design-Up-Front.

They think Quick-And-Dirty is the same as Simple Design; but Quick-And-Dirty is an oxymoron: in the end it means slow, not quick. It’s by designing just enough and refactoring every day that you increase the Simplicity of your software.

“The problem with quick and dirty is that dirty remains long after quick has been forgotten”
-Steve McConnell, Software Project Survival Guide

Principles

One of the principles of XP I want to discuss here is Collective Ownership. The code base is owned by the team. It’s a very bad practice to have an environment where everyone is “responsible” only for their own piece of code. That creates barriers between developers and a tunnel vision in writing your software.

It’s by reviewing others’ work that you will improve your own code. The purpose is to create the simplest and most obvious software possible, and not just simple and obvious for yourself. The ONLY way this can work is that everyone (everyone!) writes and runs tests for their code.

Everyone on the team can improve any part of the system at any time.

This may feel uncomfortable at first, changing “someone else’s code”; but that’s the problem. There is no his/her code, only our code. That’s the solution.

Other principles of XP are Quality, Baby Steps, and Improvement, which make exactly the point I raised when talking about refactoring. People sometimes cut quality to increase their project speed (you know, the Quick-And-Dirty approach). Decreased quality will not increase the speed of your project; it’s only by increasing quality that your project will deliver faster. With lower quality you get a less predictable and later delivery.

Refactoring talks about making “small behavior-preserving transformations”, or Baby Steps, that you must take in your refactoring process. It may be very tempting to skip some steps, to not run your tests after a baby step, or to combine steps to create the illusion that you will be done sooner. The opposite is true. Always think of the smallest step you can take in the right direction: what is the smallest step you can take?

Renaming a variable? Moving a method? Extracting an interface? Pulling up a field?...

Practices

What do you need to do to perform a valid refactoring? I’ll show you some practices.

The one I will always put forward first is Automated Tests. Unit Tests should keep your code coverage at (virtually) 100% at all times. In XP and other methodologies, you also need to define Acceptance Tests that verify the user stories at system level.

Both types of tests are written first in a Test-First scenario. Refactoring is the third part that defines Test-Driven Development (TDD).

Test-First + Refactoring = Test-Driven-Development

Ideally, you write a failing unit test at the beginning of your implementation, and you write a failing acceptance test at the beginning of your agile iteration. Some write their tests after the implementation, in a Test-Least-Development approach; but I think every respectable developer thinks about the tests while developing. Whether you first write your tests and then the implementation, or write your tests directly after the implementation: in both cases you think about how your production code will be tested, and that’s the purpose of a Test-First approach; you’re only doing it backwards in a Test-Least-Development scenario.

So, I leave it up to you which approach you use, but I feel more comfortable when I write my tests first.

Test – Code – Refactor
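To make this concrete, here is a minimal sketch of one such cycle in C# with xUnit (the Order class and its test are invented for illustration):

    // 1. Test: write a failing unit test first (red).
    using System.Collections.Generic;
    using System.Linq;
    using Xunit;

    public class OrderTests
    {
        [Fact]
        public void Total_SumsTheLinePrices()
        {
            var order = new Order();
            order.AddLine(10m);
            order.AddLine(5m);
            Assert.Equal(15m, order.Total);
        }
    }

    // 2. Code: the simplest thing that could possibly work (green).
    public class Order
    {
        private readonly List<decimal> _prices = new List<decimal>();
        public void AddLine(decimal price) => _prices.Add(price);
        public decimal Total => _prices.Sum();
    }

    // 3. Refactor: improve names and structure in baby steps,
    //    re-running the test after every step.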

Another practice that you will absolutely need is Continuous Integration. Continuously integrating with your team members decreases unpredictability and the cost of integration. Develop your features in a separate branch, but always integrate within a couple of hours. Having multiple code streams without being able to point out a valuable reason for them is a bad practice: it creates an enormous amount of waste in your software.

Instead of continuously having multiple streams, fix your underlying design problem by refactoring.

How, Why, and When?

Now that I have given you a quick introduction, I’ll show you some basic concepts around Continuous Refactoring.

How Should You Refactor?

The easiest one is the How. How should you refactor? That’s rather simple: Martin Fowler and Joshua Kerievsky started the refactoring-pattern story by creating a starting point for what the mechanics are in a refactoring environment.

Martin talks about low-level refactorings like Move Method, Encapsulate Field, Replace Inheritance with Delegation, Decompose Conditional, Encapsulate Collection… while Joshua talks about composed and test-driven refactorings like Replace Implicit Tree with Composite, Unify Interfaces with Adapter, Encapsulate Composite with Builder…

A lot of these refactoring patterns are based on the Expand – Migrate – Contract pattern, which allows Parallel Change and backwards compatibility. You should always keep in mind that during every step, EVERYTHING MUST STILL WORK. The Expand – Migrate – Contract pattern tells you to always expand your code base first, instead of changing your code directly. Because you first add new code instead of adapting existing code, you create an environment of backwards-compatible components. Just like unit tests give you confidence, so does this pattern.

As an example, take the “simple” refactoring pattern Add Parameter from Martin. This pattern describes how to add a parameter to a method. Instead of just adding the parameter to the existing method, you first create a new method with the added parameter (Expand phase). After that, you change every call from the old method to the new one (Migrate phase). And last, you remove the old method so the new method is fully integrated (Contract phase).

After each phase you compile and test, and you do the same after each changed method call. The process described here for Add Parameter is how you refactor. (NOTE: C#, Java, Python… offer default-value parameters on methods, which would help with this refactoring. But I find it always useful to know more than a single solution for a problem.)
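To illustrate, here is a hedged C# sketch of those three phases (the PriceCalculator and its discount parameter are made up for the example):

    public class PriceCalculator
    {
        // Expand: keep the old method and let it delegate to a new
        // overload with the extra parameter. Every existing caller
        // still compiles and behaves exactly as before.
        public decimal Calculate(decimal subtotal) =>
            Calculate(subtotal, discount: 0m);

        public decimal Calculate(decimal subtotal, decimal discount) =>
            subtotal * (1m - discount);

        // Migrate: update the callers one by one to the new overload,
        // compiling and running the tests after each change.

        // Contract: when no caller uses Calculate(decimal) anymore,
        // delete it; the new method is now fully integrated.
    }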

What’s important is that you take Baby Steps in your refactoring process and slowly move towards, or away from, a current implementation choice by testing after every transformation step.

Why Should You Refactor?

A bit harder to explain (especially to non-technical people) is the Why. Why should you refactor? Try explaining to a customer or non-technical manager that you will change the software so it will have more quality and will be easier to maintain and extend, without changing or adding anything to the currently implemented functionality.

“If you aren’t going to change or add anything, could we just add the next feature?”

Business people think in features. Features added, removed, changed… but never refactored. The very definition of refactoring is that you are not going to change any outside observable behavior.

There is a way, though, to explain to those people what “refactoring” really means. It’s called Technical Debt (a synonym for Code Debt or Design Debt). Technical Debt describes the virtual debt that keeps growing, and that you must eventually repay, as long as you don’t actively decrease it. This is often called the Financial Metaphor, and it is a valuable way to explain to non-technical people what the cost of not-refactoring means.

The next time you must defend yourself against someone who doesn’t see the risk you’re taking by NOT refactoring, pull out your credit card and ask: “Would you skip the payment of your bills every month?”. Just like the financial bills you must pay to stay on track, you must spend time refactoring so you don’t end up in a situation where you can’t pay back your Technical (Code/Design) Debt.

Why should we refactor? Because we just can’t have THE perfect solution at the beginning of the project with Up-Front Design; so we must refactor daily to get closer to THE solution each time.

There are many reasons why you should refactor: changing, adding, or removing a feature becomes easier; people understand your design more quickly; bugs have more difficulty hiding; your code remains fresh (without smells); …

When Should You Refactor?

"Seeing the trouble spot is often the hardest and most uncertain part"
- Eric Evans, Domain-Driven Design

The When is probably the hardest one. It’s a very lame answer, but experience is the long way to gain knowledge of the when; and you only gain experience by learning from making right and wrong decisions.

Read books about refactoring (I’ve learned a lot from Martin Fowler, Joshua Kerievsky, Robert C. Martin a.k.a. Uncle Bob…), refactor continuously, and fail. That’s how you will start to learn when to refactor. Most design problems arise when your design is:

  • Duplicated, implicitly or explicitly
  • Unclear: it doesn’t show the code’s intent
  • Too complicated: over-engineered, not the simplest thing that could work

To spot places that need refactoring, you can start by learning the different types of Code Smells. These smells indicate that something is strange about the design, but hold on: smells are only indicators, not always the problem itself; they are merely signals that there may be a problem.

Some smells Martin introduced in his Refactoring book: Feature Envy, Shotgun Surgery, Primitive Obsession, Refused Bequest… By building a vocabulary of refactoring patterns and code smells, you can communicate more quickly with your team members about the problem indicators you’ve found and the solutions you propose.
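As a hypothetical C# example, here is the Feature Envy smell and the Move Method refactoring that resolves it:

    // Smell: Feature Envy - this method is more interested in
    // Customer's data than in anything of its own class.
    public class Invoice
    {
        public decimal DiscountFor(Customer customer) =>
            customer.YearsActive > 5 && customer.TotalSpent > 1000m ? 0.10m : 0m;
    }

    // After Move Method: the behavior lives next to the data it envied.
    public class Customer
    {
        public int YearsActive { get; set; }
        public decimal TotalSpent { get; set; }

        public decimal Discount =>
            YearsActive > 5 && TotalSpent > 1000m ? 0.10m : 0m;
    }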

You can also introduce a “Code Smell Day” (typically at the end of each iteration), in which each team member starts looking in the code base for places that Smell. Or you can introduce a Refactoring Day, in which you only refactor all the left-over TODO items that need to be refactored, so your debt is repaid.

That’s the technical part; the other part is about planning and time management. When I spot a candidate for refactoring, I always find it hard to place it on the scale of Reporting/Not-Reporting.

When the refactoring concerns the feature you’re implementing and it will benefit further progress, there’s no doubt: you MUST refactor. If, on the other hand, you find some smells in other parts of the system that have nothing to do with what you’re doing, it can be useful to have a Public Refactor List at hand to describe what you’ve found, and to discuss it at the Weekly Meeting of the iteration so it can become part of the next iteration.

OR you can refactor it right away and tell everyone on the team what you’re doing. Continuous Integration plays a big role in this one.

What I was saying about the Reporting/Not-Reporting part is about the scale of your refactoring. If you refactor a method by performing Rename Method on it, or remove a temporary variable by executing Inline Temp, it would be overkill to define a work item for each refactoring; it should be part of your development work. Just like you don’t specify a “Writing Unit Tests” task for each feature, you should not specify items for these low-level refactorings (or even composite or test-driven refactorings). It should be part of your work. Period.

But if the refactoring is a chain of multiple low-level, composite, or test-driven refactorings, then it smells more like a Design Refactoring, and it could be useful to add it to the global planning as a story.

I sometimes find it difficult to know when you should Report a refactoring and when Not. Should it be a Planned Refactoring? Should it be a Long-Term Refactoring?

The more technical Refactoring Workflows are already part of the Development Workflow, so it would be redundant to specify these in the planning.

Conclusion

Refactoring is nowadays widely adopted in teams as a natural thing, although there are still many misunderstandings about the topic and about the risks you and your team take by NOT Continuously Refactoring. It should be part of your development process, just like writing Unit and Acceptance Tests before implementing.

Practice your Continuous Refactoring skills daily and don’t be afraid to change your code daily. It’s a lot riskier to leave a dirty piece of code in place and make workarounds every day than to change it so everyone can feel more comfortable.

Always think of yourself as programming in clay, not in stone: with some water, you can always continuously change the structure of your clay, while water only destroys stone.

Refactoring removes Duplication; it increases Readability, Simplicity, and the Code’s Intent.

So, start refactoring your project continuously and without any mercy for any written piece of code. When this is your daily practice, you may call yourself a Merciless Refactorer.

Categories: Architecture
written by: Stijn Moreels

Posted on Tuesday, February 21, 2017 2:38 PM

by Pieter Vandenheede

Dive into BizTalk Server 2016 with this webinar describing its new features.

Let’s dive into the new BizTalk Server 2016 and summarize its enhancements and newest features. Learn how BizTalk 2016 allows a much easier integration towards the cloud. BizTalk 2016 also adds support for high availability, both on-premises and on Microsoft Azure, with SQL Server AlwaysOn. You will hear about the platform upgrades and various updates around BizTalk tooling, operations, and monitoring.

To conclude, your host Pieter will help you find an answer to the key question: whether or not you should upgrade to BizTalk 2016.

Watch the webinar below:

If you have any questions, don't hesitate to leave a comment below.

Cheers,
Pieter

Categories: BizTalk
written by: Pieter Vandenheede

Posted on Monday, February 20, 2017 3:41 PM

by Toon Vanhoutte

Lately, I tried to connect to a Service Bus queue with limited permissions (Listen only). I encountered an issue that I want to share with you, so it can save you some time!

 

When you manage a Service Bus namespace, it's important to think about security. The recommended way to deal with it is by leveraging the Shared Access Signature (SAS) authentication and authorization mechanism. You can configure SAS policies on your complete Service Bus namespace or on individual queues and topics. Use what best meets your expectations!

On the 'coditblog' queue, I created a ReadOnly shared access policy that only contains the Listen claim.  This policy was intended to be used by a Logic App that only needs to read messages from the queue. 

After creating the policy, I copied the primary connection string.
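Such a queue-scoped connection string looks roughly like this (all values are placeholders); note the EntityPath segment at the end, it will come back in a moment:

    Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=ReadOnly;SharedAccessKey=<your-key>;EntityPath=coditblog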

Then I created a Logic App from scratch, adding the Service Bus trigger 'When a message is received in a queue (auto-complete)'. A connection was created by providing a connection name and the copied connection string.

When trying to select the queue name, I got the following exception:
Could not retrieve values. ConnectionString should not include EntityPath.

I double-checked my connection string several times and tried multiple variations, but without any success. After some investigation, it turned out that the connector requires the Manage claim to navigate through the list of available queues. A misleading exception message...

Luckily we are not blocked by this! Just choose 'Enter custom value', type the queue name and you're good to go!

The Logic App starts successfully when a message arrives on the queue!

Hope this can save you some troubleshooting time!
Toon

 

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Thursday, February 16, 2017 5:20 PM

by Toon Vanhoutte

This blog post covers some aspects of exception handling when you expose nested workflows as a web service. When dealing with web services, it's important to ensure that no timeouts are triggered and that meaningful exception messages are returned to the client. Let's have a look at how this can be done.

Scenario


As a starting point, I create Logic App 1, which is exposed as a web service and invokes Logic App 2 in a synchronous way. The second Logic App just puts a request on a Service Bus queue. Once that action is completed, a response is returned and the first Logic App can return a success to the consuming application.

Configure retry policy

Because I want to avoid timeouts caused by automatic retries, I update the Call Logic App 2 action so it will not perform a retry. This is done in the code view. Depending on your use case, you could also limit the number of retries. The minimal time interval between retries is 20 seconds, which is also the default value. This is quite high for a web service scenario, as most clients time out after 60 seconds.
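As an indication, the nested workflow action in code view could look like the sketch below; the action is trimmed down and the resource id is a placeholder:

    "Call_Logic_App_2": {
        "type": "Workflow",
        "inputs": {
            "host": {
                "triggerName": "manual",
                "workflow": { "id": "<resource id of Logic App 2>" }
            },
            "retryPolicy": { "type": "none" }
        },
        "runAfter": {}
    }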

Run an exception scenario

To see what happens in case of an exception, I reconfigure Logic App 2 to send to a non-existing queue. In both Logic Apps, the Response action is skipped, because a Logic App stops processing in case of a failure.

The web service call results in a meaningless exception message for the consuming application:

Optimize for exceptions

Ensure a meaningful exception is returned

Let's first change Logic App 2, so it returns the real Service Bus exception in case of a failure. The recommended way is to add a Scope around the Send Message action. The details of the scope's outcome are provided via the @result function. More information is given here. Depending on the resulting outcome of the scope, a different response can be returned.

Note the Switch statement, based on the status of the scope. The @result function returns an array that provides a result for each action within the scope. The status of the first action is extracted via this expression: @result('scope')[0]['status'].
In the Failure Response, I return the resulting status code - @result('scope')[0]['outputs']['statusCode'] - and the exception message - @result('scope')[0]['outputs']['body']['message']

Changing Logic App 1 to meet the expectations is a lot easier.  Just ensure it returns the status code and message from the invoked Logic App.

Ensure the Logic App continues on failure

Another optimization we must perform is to configure the Response action and the Switch statement to also run in case the preceding action fails. This can be done within the code view:
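A sketch of what that looks like in code view, assuming the scope is named 'scope' (by default an action only runs after 'Succeeded'; the cases of the Switch are omitted here):

    "Switch": {
        "type": "Switch",
        "expression": "@result('scope')[0]['status']",
        "runAfter": {
            "scope": [ "Succeeded", "Failed" ]
        },
        "cases": {},
        "default": {}
    }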

Inspect the result

Exception scenario

If we trigger an exception scenario, the run detail of Logic App 2 looks good. The Scope has failed, which is detected by the Switch statement, and the Logic App returns the Failure Response.

The outcome of Logic App 1 also meets the expectations, as the response of Logic App 2 is returned to the consuming client.

The web service client also gets a nice exception message.

Success scenario

We focused completely on the exception scenario, so it's good to double-check that the happy path still behaves well. Just update Logic App 2 to send to an existing queue again. Logic App 2 sends a success response, which is great!

Logic App 1 shows only green statuses:

The web service client gets a confirmation that the message got queued!

Conclusion

The default settings for retry policies and exception handling might not always fit your scenario. Make sure you understand the Logic Apps capabilities on this matter and apply them according to your needs. Small prototyping exercises can serve as a basis for development guidelines and best practices.

Happy exception handling!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Thursday, February 9, 2017 4:00 PM

by Massimo Crippa

In Azure API Management, Groups are used to manage the visibility of products to developers, so developers can view and consume the APIs that are contained in the groups they belong to.

Suppose we have a custom group for developers affiliated with a specific business partner, and we want to allow those developers (who signed up with different identity providers) to access only the partner's relevant Products.

Let's combine Logic Apps, Azure API Management and ARM together to automate the user group association.

In short: no matter which identity provider (AAD, Google, Twitter, etc.) is used to sign up, when a user belongs to the @codit.eu domain, he or she should be added to the "Codit Dev Team" custom group.

The basic idea is to use a Logic App as a batch process to get the list of registered users, and then call a child Logic App to assign each developer to the proper custom group that manages the product visibility.

Logic Apps and Azure API Management

There are three ways to invoke an API Management endpoint from a Logic App:

  • API Management connector. The connector is pretty straightforward. You first select an APIM tenant, then the API and the operation to be called. Finally, the available headers and parameters are automatically displayed. By default, the APIM connector only shows the APIM tenants created in the same subscription as the Logic App.
  • Http + Swagger connector. This connector provides a user experience similar to the APIM connector. The shape of the API, with its parameters, is automatically integrated in the designer.
  • Http connector. It requires you to specify the HTTP verb, URL, Headers and Body to perform an HTTP call. Simple as that!

In this exercise, all the services to be integrated are located in different Azure subscriptions; therefore, I used only the Http and Http+Swagger connectors.

Manage 'em all

With the "Every API should be a managed API" mantra in mind and with the final goal to have a more information about which API is called and its performance we created a facade API for every HTTP call.

Here is the list of managed APIs:

  • Every call to the Azure Resource Manager (get users, get groups by user, add user to group)
  • Get the token to authorize the ARM call
  • Call the child Logic App

And here are the Logic App workflows that have been created.

Some other benefits we got from the virtualization: 

  • The use of a single authorization model between the Logic App and APIM, by providing an API Key via the "Ocp-Apim-Subscription-Key" header.
  • Balancing complexity and simplicity: the ARM authentication is delegated to the API Management layer.
  • The application of a consistent resource naming convention.

Azure API Management Policies

The policy engine is where the core power of Azure API Management lies. Let's go through the policies that have been configured for this exercise. 

Get the bearer token

A token API with a GET operation is used by the ARM facade API to get the bearer token that authorizes the call to the Azure Resource Manager endpoint. The policy associated with the "get-token" operation changes the HTTP request method and sets the body of the request to be sent to the AAD token endpoint, using the password flow.
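A sketch of what that inbound policy could look like; the named values between double curly braces are assumptions and would have to exist in your APIM tenant:

    <inbound>
        <base />
        <set-method>POST</set-method>
        <set-header name="Content-Type" exists-action="override">
            <value>application/x-www-form-urlencoded</value>
        </set-header>
        <set-body>grant_type=password&amp;resource=https://management.azure.com/&amp;client_id={{arm-client-id}}&amp;username={{arm-username}}&amp;password={{arm-password}}</set-body>
    </inbound>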

Call the ARM

This is the call to the ARM endpoint (get users, get groups by user, add user to group). The "send-request" policy is used to perform a call to the private token API and to store the response in the bearerToken context variable.

The "set-header" policy in combination with a policy expression is used to extract the token and to add it as a header to the request sent to the ARM endpoint.

This policy can be improved by adding policy expressions to store and retrieve the token from the cache. Here is an example.

Logic Apps facade API

Logic App workflows that expose an HTTP trigger can only be called using the POST verb, with the parameters passed in the body of the request.

The child workflow that takes care of assigning a user to a specific group has been virtualized via Azure API Management, to change the URL segments as in https://api.codit.eu/managedarm/users/{uid}/groups/{groupname} and to change the request method to PUT.
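A sketch of the facade operation's inbound policy: the operation is exposed as PUT on /users/{uid}/groups/{groupname}, and the policy rewrites the call to the POST that the HTTP trigger expects (the body property names are assumptions):

    <inbound>
        <base />
        <set-method>POST</set-method>
        <set-header name="Content-Type" exists-action="override">
            <value>application/json</value>
        </set-header>
        <set-body>@{
            return new JObject(
                new JProperty("userId", context.Request.MatchedParameters["uid"]),
                new JProperty("groupName", context.Request.MatchedParameters["groupname"])
            ).ToString();
        }</set-body>
    </inbound>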

Conclusion

Thanks to this simple Logic App and some APIM power, I can now be sure that every new colleague who signs up to our developer portal is automatically associated with the internal developer team, so that he/she can get access to a broader set of APIs.

A similar result can be achieved using the Azure B2B/B2C integration in combination with AAD security groups; but, at the time of writing, the APIM integration with AAD B2C has not been completed yet.

Another benefit of managed APIs is the visibility gained over the exposed assets and their performance: discover how an API is used, get information about the consumers, and spot the trends that most impact the business.

Cheers

Massimo

Categories: API Management
Tags: Azure
written by: Massimo Crippa