
Codit Blog

Posted on Friday, July 22, 2016 4:02 PM

by Brecht Vancauwenberghe

In this post I will show you how you can restart a Service Fabric service from code.

Currently I'm working on an Azure Service Fabric implementation. The official documentation contains a lot of valuable information, but it can take quite some time to find the right piece of code you are looking for.

Our Service Fabric framework contains dynamically created Stateless services; during service startup our “configuration service” returns cached configuration.

When service configuration updates occur, it is necessary to restart all instances of the service. You can achieve this by using the Observer pattern.

I was looking into how you can restart a service from code, but it took me quite some time to figure out how to achieve that. By using the following code you can restart a Stateless or Stateful service from within the service itself:
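The original snippet can be found via the GitHub link below; as an illustration, a minimal sketch of the approach - using the System.Fabric fault APIs, with a helper name that is purely made up for this post - could look like this:

using System;
using System.Fabric;
using System.Threading.Tasks;

// Illustrative helper (not the original Codit code): restarts the deployed code
// package that hosts the current replica or instance, using the fault APIs.
public static class ServiceRestarter
{
    public static async Task RestartSelfAsync(ServiceContext context)
    {
        var fabricClient = new FabricClient();

        // Select this very replica/instance of the running service.
        var partitionSelector = PartitionSelector.PartitionIdOf(
            context.ServiceName, context.PartitionId);
        var replicaSelector = ReplicaSelector.ReplicaIdOf(
            partitionSelector, context.ReplicaOrInstanceId);

        // Restart the code package hosting it and wait until the restart is verified.
        await fabricClient.FaultManager.RestartDeployedCodePackageAsync(
            new Uri(context.CodePackageActivationContext.ApplicationName),
            replicaSelector,
            CompletionMode.Verify);
    }
}

A Stateless service could, for example, call ServiceRestarter.RestartSelfAsync(Context) from the handler that reacts to a configuration update.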

You can find the original code on GitHub.

It is good to know that this only works on unsecured clusters - if you deploy this to a secured cluster, the service should run with elevated permissions.

Happy Service Fabric programming!

 

Categories: Azure

Posted on Friday, July 15, 2016 1:26 PM

by Tom Kerkhove

Today I will talk about how we are currently using code reviews to build better solutions and how it gives me more confidence in the quality that we are shipping.

Working in teams brings several challenges. One of them is the mixture of coding styles, which causes inconsistency across your project and makes the code hard to read and follow.

The bigger the team, the more important it is to transfer your knowledge about what you've worked on, so that the knowledge doesn't live with just one person - the classic bus factor problem.

Or have you ever been working on a new feature, fully confident that it's ready to ship, only to notice that you've forgotten to take caching into account? Or that you've forgotten to update the documentation (if any)?

Sound familiar?

Note - While some of these "pain points" can be tackled by using tools like Roslyn Analyzers or FxCop, I prefer a more human approach and discuss the why instead of the how.

Code reviews to the rescue

By using code reviews we can avoid these problems by collaborating before we ship the code - Let's first take a look at an example:
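As an illustration, consider a snippet along these lines (Order, MyMethod<T> and the boolean flag are assumptions made up for this example, not the original code):

using System;

public class OrderProcessor
{
    public void Process(Order o)
    {
        // 'o' is clearly an Order here, but what does it stand for 50 lines further down?
        o.ProcessedOn = DateTime.Now;   // local time: what happens when this runs in the U.S.?

        MyMethod<Order>(o, true);       // what does the boolean actually toggle?
    }

    private void MyMethod<T>(T item, bool flag)
    {
        // ... does something with 'item', depending on 'flag'
    }
}

public class Order
{
    public DateTime ProcessedOn { get; set; }
}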

While this code could perfectly process your order, there are some issues:

  • Variable o holds a certain amount of state, but what does it represent? Looking at the signature it is clearly an Order, but how do I know that at the bottom of a 50-line method?
  • Time zones, they are evil! What happens if this code runs in the U.S.?
  • MyMethod<T> takes a boolean, but what does it really do and how does the boolean come into play?
  • How does the caller know what the Process method does? Hopefully bill for the order? Also, it couldn't hurt to add additional documentation throughout the implementation in certain cases.

While performing a code review, the reviewer can raise these concerns with the reviewee and have a polite & constructive discussion, backed by a set of coding guidelines. By doing this, both parties get to know how the other person thinks about it and they learn from each other. They also learn to express why they did something a certain way, or what they have forgotten about.

Having a second pair of eyes on a certain topic can help a lot. Everybody has a different perspective, which helps make sure you don't overlook a certain aspect and can also lead to interesting discussions. This gives you a certain "Don't worry, I've got your back" feeling and forces you to think deeper about what you've written.

At the end of the review, the reviewee may have some feedback to process, after which the code gets the seal of approval and is ready to ship.

Next to the code quality, you also perform small knowledge transfers with each other. You will not remember everything, but when needed you will remember certain pieces that can help guide you to the potential bug or cause.

Last but not least is automated testing. It's a good thing to add the unit/behavior/scenario tests to your reviews as well, because then the reviewer gets an indication of what you are testing and what you are NOT testing. Do the tests make sense, or should the reviewee cover additional scenarios?

Challenges

Using code reviews is of course not a free lunch and it comes with its own difficulties.

The biggest challenge is that your team members need to be open to feedback and willing to incorporate it! If they are not up for it, you will just spend your valuable time only to notice that they are ignoring it. You, as a team, will need to decide whether or not you want to commit to code reviews.

Every review takes a decent amount of time, so incorporate that into your planning. The reviewer needs to go through it, discuss it with the reviewee, and then the reviewee needs to process the feedback. However, one might argue that it is better to take your time during development instead of having to spend twice that amount fixing bugs or trying to understand what's going on.

New to code reviews? Here are some tips!

After using code reviews for a while, I've learned a couple of things about how not to do it and what can be challenging. Here are some tips to help you avoid some pitfalls.

Review early & frequently - The earlier you review, the better. This avoids reviewing something that is considered ready while you've misunderstood some aspects of it or have re-invented the wheel.

Define code guidelines - Agree upon a list of coding guidelines with your team to back your reviews. By doing this you have a clear list of styles, paradigms and DOs & DON'Ts that the team should follow to unify your coding styles & practices. This makes reviewing a lot easier and gives clear guidance on how the code should look.

An example of this could be that each parameter should be checked for null and that the method should throw an ArgumentNullException when appropriate.
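A minimal illustration of such a guard clause (the OrderService and Order names are made up for the example):

using System;

public class OrderService
{
    public void Process(Order order)
    {
        // Guideline: every parameter is null-checked and fails fast with a clear exception.
        if (order == null)
        {
            throw new ArgumentNullException(nameof(order));
        }

        // ... process the order
    }
}

public class Order
{
}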

Add code reviews to your definition-of-done - By adding code reviews to your definition of done you are certain that each new feature or bug fix has passed at least two pairs of eyes and that multiple people agree on it.

By doing that, you also remove the burden of one person being responsible for one aspect, since it's the whole team that is responsible for it.

Don't review to bash, review to teach & improve - Finding a balance between being strict and agreeing with everything is hard. If you just bash, it will have a negative impact on the team collaboration and frustrations will arise. Be constructive & open.

Review in-person but have a look at the changes in advance - This allows you to form your own opinion instead of simply following the reviewee. It also avoids having to make decisions on the spot: you digest the changes first, so you can already think of the obvious aspects beforehand.

Challenge and be challenged - Ask questions about the topic to see if the reviewee has covered all the possible scenarios and learn about how they envision it. Discussions are a good thing, not a bad thing.

Learn from each other - Don't be afraid to say what you like and don't like, or to admit you don't know about something. Learn from others: understand why they did it that way rather than the way you had in mind.

Conclusion

While the internet has a wide variety of blogs talking about this "concept", I wanted to share my vision on it, since I'm a big fan of this practice and believe that it really improves quality. However, your success will depend on whether your colleagues and project management are willing to commit to it.

One thing is certain - I've used this on my current project and will keep on doing so in the future.

Thanks for reading,

Tom.

Categories: Architecture
written by: Tom Kerkhove

Posted on Thursday, July 7, 2016 11:58 AM

by Luis Delgado

Discover how to unit test your Node.js Azure Functions, to increase code quality and productivity, using these code samples.

Writing unit and integration tests for Azure Functions is super critical to the development experience, since their execution relies on context variables that are beyond your control and are supplied by the runtime. Furthermore, there is currently no local development or debugging experience available for Azure Functions. Therefore, testing whether your functions behave properly, in the context of their runtime, is extremely important to catch defects and increase your productivity.

Because Node.js is dynamically typed, I want to share a quick trick on how to mimic the Azure Functions runtime context in order to test your functions. I did not find any documentation from Microsoft related to unit testing Node.js Azure Functions, so feel free to comment on the approach I propose here.

As an example, we are going to make a function that posts an observation every minute to Azure IoT Hub:

deviceSimulator/index.js
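A minimal sketch of such a timer-triggered function - the azure-iot-device packages and the DEVICE_CONNECTION_STRING app setting are assumptions for the example; the original code may differ:

var Client = require('azure-iot-device').Client;
var Message = require('azure-iot-device').Message;
var Protocol = require('azure-iot-device-mqtt').Mqtt;

module.exports = function (context, myTimer) {
    var client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING, Protocol);

    // Build a fake observation for the simulated device.
    var observation = new Message(JSON.stringify({
        deviceId: 'simulated-device',
        temperature: 20 + Math.random() * 5,
        timestamp: new Date().toISOString()
    }));

    client.open(function (err) {
        if (err) {
            context.log('Could not connect to IoT Hub: ' + err.message);
            return context.done(err);
        }

        client.sendEvent(observation, function (sendErr) {
            context.log(sendErr ? 'Send failed: ' + sendErr.message : 'Observation sent');
            context.done(sendErr);
        });
    });
};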

Now we want to write a unit/integration test for this function.

deviceSimulator/test.js
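A mocha-style sketch of what such a test could look like (the getContextObject helper and the assertions are illustrative):

var assert = require('assert');
var deviceSimulator = require('./index');

// Builds an object that mimics the context object the Azure Functions runtime
// normally supplies to a Node.js function.
function getContextObject(done) {
    return {
        log: function () {},   // swallow log output during the test
        bindings: {},          // output bindings the function may write to
        done: done             // the function signals completion through context.done()
    };
}

describe('deviceSimulator', function () {
    it('completes without an error', function (done) {
        var context = getContextObject(function (err) {
            // context.done(err) is overridden here to run the assertions.
            assert.ifError(err);
            done();
        });

        // Feed the mocked context (and an empty timer payload) to the function.
        deviceSimulator(context, {});
    });
});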

The function getContextObject simply returns an object that mimics the context object expected by the Azure Functions runtime. The test will simply import your function from index.js, create the mock-up context object and feed it to your function for execution. Finally, within your test, you can override the context.done() function to do the assertions you need, and then call done().

Is this the proper way to test Azure Functions on Node.js? I will let the Functions Product Group comment on that :). However, this method works for me.

The other alternative you have is to put your internal functions in other files that you can test separately in the traditional way you would test JS code, and import those files in your index.js file. The problem I see with that approach is that, if your internal functions make calls to the context object, your tests will probably fail because of it.

Comments, feedback or suggestions? Submit an issue to the repository or write them below.

Categories: Azure
written by: Luis Delgado

Posted on Friday, July 1, 2016 1:42 PM

by Pieter Vandenheede

In this post you can find the slide deck I presented at BTUG.be on June 30th: a recap of the Integrate 2016 conference, the updates in Azure Logic Apps and BizTalk Server 2016 CTP2.

Yesterday I had a great time at btug.be, while presenting my session on Integrate 2016.

I presented the new changes in BizTalk Server 2016 CTP2, covered the upcoming changes in RTM and the new schema update of Azure Logic Apps, together with the new features available in the public preview of the Enterprise Integration Pack.
Thanks again to our company, Codit, for providing me the opportunity to be there!

As promised there, please find my slide deck below via SlideShare:

Feel free to comment if you have any questions.

The second speaker yesterday was Eldert Grootenboer. He had a great talk on IoT, gateways, sensors and ... boats! Keep an eye out on his blog, since he promised some more IoT posts coming up.

As always, it was nice to talk to the people present. A big thank you to them! Especially since it was not such a great time to attend: just before the holiday period, and with a lot of traffic jams around Antwerp yesterday evening. We do have a good community out there!

Enjoy the slide deck!

Pieter

Categories: Community
written by: Pieter Vandenheede

Posted on Wednesday, June 22, 2016 8:58 AM

by Maxim Braekman

Have you set up communication with a web service before and had to find a way to keep track of some information that was available in the request, but is no longer present in the response? Continue reading, since this post could help you sort out this problem.

Setting up communication with a web service can always turn out to be a bit tricky. You need to take care of configuring the certificate settings (if required), configuring the bindings to use the correct protocol and security, and so on. But once all of these settings are correct and you start testing, you might notice - now and again, depending on the scenario - that you are losing some useful data across the two-way communication, since some of the data which was available in the request no longer appears to be in the response.
In such a case, one could opt to use an orchestration, although this is not the best solution performance-wise.

An alternative way of storing those values is by creating a static class which stores the data based on the Interchange ID of the message. Since this static class needs to be told what data it has to track, two custom pipeline components are needed. Why two components? You’ll need one to pass the data from the message into the static class and another to retrieve those values from the class and put them back onto the response message.

Yes, this can also be done by merging the components into a single one and using a property to indicate the direction, but for the sake of this post we will be using two separate components, just to keep everything clear.

Imagine a large system, such as AX, which contains a whole bunch of data about several orders, but needs to retrieve some additional information - from another system - before processing these orders. Since these requests could be handled asynchronously from the AX point of view, the source system will need some kind of ID to match each response to the initial request. In this case the request that is being sent towards BizTalk will contain an order ID, request ID or any other form of identification, just to make sure each response is matched to the correct request.

Okay, so now this request, containing the ID, has arrived in BizTalk. But since the destination system has no need for any ID from an external system, no XML node will be provided for this ID, nor will it be returned within the response. In such a situation this issue becomes a “BizTalk problem”, to be resolved by the integration developer/architect.

This is when the use of the aforementioned static class comes in handy. Since the actual call to the destination system is a synchronous action and there is no need for an orchestration to perform any additional actions, we can simply use the custom pipeline components to store and retrieve the original ID assigned by the source system.

The custom pipeline components

The static class might look like the example below, which allows BizTalk to save a complete list of context properties for a specific InterchangeID.
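A minimal sketch of such a class - the type and member names are illustrative, not the original implementation - could be:

using System.Collections.Concurrent;
using System.Collections.Generic;

// Keeps a list of context properties per InterchangeID, so they can be
// restored onto the response of the same call.
public static class ContextCache
{
    private static readonly ConcurrentDictionary<string, List<ContextProperty>> Properties =
        new ConcurrentDictionary<string, List<ContextProperty>>();

    public static void Save(string interchangeId, List<ContextProperty> properties)
    {
        Properties[interchangeId] = properties;
    }

    public static List<ContextProperty> Restore(string interchangeId)
    {
        // Hand the properties back and forget about them, so the cache does not keep growing.
        List<ContextProperty> properties;
        Properties.TryRemove(interchangeId, out properties);
        return properties;
    }
}

// Simple carrier for one context property (namespace, name and value).
public class ContextProperty
{
    public string Namespace { get; set; }
    public string Name { get; set; }
    public object Value { get; set; }
}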

Next, you’ll be needing the pipeline components to actually access this class and allow for the data to be saved and restored when needed.

This post will not be zooming into the code of these pipeline components, but below is the general explanation of what these components are supposed to do.

SaveContextOverCallRequest

The first custom pipeline component will retrieve the InterchangeID from the context of the incoming message and use this as a unique ID to save a specified list of properties to the static class. This list could be scaled down by setting a specific namespace, which can be used to filter the available context properties. This would make sure only the properties from the specified namespace are being saved, preventing an overload of data being stored in memory.
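Although this post doesn't zoom into the full component code, the heart of the request-side Execute method boils down to something like this - a sketch following the description above; PropertyNamespaceToSave is an assumed design-time property of the component, and ContextCache/ContextProperty are the illustrative types from the sketch earlier:

// Inside the pipeline component class (using Microsoft.BizTalk.Component.Interop
// and Microsoft.BizTalk.Message.Interop):
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // The InterchangeID is the key under which the properties will be cached.
    string interchangeId = (string)pInMsg.Context.Read(
        "InterchangeID", "http://schemas.microsoft.com/BizTalk/2003/system-properties");

    var properties = new System.Collections.Generic.List<ContextProperty>();
    for (int i = 0; i < pInMsg.Context.CountProperties; i++)
    {
        string name;
        string ns;
        object value = pInMsg.Context.ReadAt(i, out name, out ns);

        // Only keep properties from the configured namespace, to limit what is held in memory.
        if (ns == PropertyNamespaceToSave)
        {
            properties.Add(new ContextProperty { Namespace = ns, Name = name, Value = value });
        }
    }

    ContextCache.Save(interchangeId, properties);
    return pInMsg;
}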

SaveContextOverCallResponse

The second custom pipeline component will again retrieve the InterchangeID from the context of the incoming message, but this time it will use this value to retrieve the list of context-properties from the static class. Once the properties have been collected, there is no need for the static class to keep track of these values any longer, therefore it can remove these from its dictionary.
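The core of the response-side component could, along the same lines, look roughly like this (again a sketch, reusing the illustrative ContextCache):

// Inside the response pipeline component class:
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    string interchangeId = (string)pInMsg.Context.Read(
        "InterchangeID", "http://schemas.microsoft.com/BizTalk/2003/system-properties");

    // Restore the cached properties onto the response; the cache forgets them afterwards.
    var properties = ContextCache.Restore(interchangeId);
    if (properties != null)
    {
        foreach (var property in properties)
        {
            // Promote so the values can also be used for routing; Write would do
            // if they only need to be readable further downstream.
            pInMsg.Context.Promote(property.Name, property.Namespace, property.Value);
        }
    }

    return pInMsg;
}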

Using the component

Once these components have been created, they will have to be added to the send/receive pipeline, depending on the type of component.

The send pipeline will contain the ‘SaveContextOverCallRequest’-component to make sure the required properties are being saved. The custom pipeline component should be the last component of this pipeline, since you want to make sure all of the property promotion is finished before the properties are being saved into the static class.

The receive pipeline will contain the ‘SaveContextOverCallResponse’-component, as this will restore the saved properties to the context. This should also be the first component in this pipeline, because we want the saved properties to be returned to the context of the message as soon as possible, to make sure these values are accessible for any further processing. Be aware that whether or not you are able to put this first will largely depend on your situation and transport protocol.

Example

To show the functionality of these components, a simple test case has been set up, in which a request message is picked up from a file location, a connection is made with a service and the response is sent back to a different folder. To give you an idea of the complete flow, the tracking data has been added here.

The request that will be used in this sample is a pretty simple XML message, which can be seen below:

<ns0:GetData xmlns:ns0="http://Codit.Blog.Stub">
  <ns0:request>
    <ns0:RequestDate>2016-06-15</ns0:RequestDate>
    <ns0:RequestNumber>0002</ns0:RequestNumber>
    <ns0:CustomerID>0001</ns0:CustomerID>
    <ns0:Value>Codit offices</ns0:Value>
  </ns0:request>
</ns0:GetData>

As you can see, this request contains both a request ID and a customer ID, which are the two ‘important’ values in this test case. To make sure these properties are available in the context of the message, we made sure they are promoted by the XML Disassembler, since the fields are indicated as promoted in the schema. Once the flow is triggered, we can have a look at the context properties and notice that the two values have been promoted.

The initial raw response that comes back from the service - the message before any pipeline processing has been performed - no longer contains these context properties, nor does it contain these values in the body.

<GetDataResponse xmlns="http://Codit.Blog.Stub">
  <GetDataResult xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
    <ResponseDate>2016-06-15T20:06:02.8208975+01:00</ResponseDate>
    <Value>Ghent, Paris, Lisbon, Zurich, Hampshire</Value>
  </GetDataResult>
</GetDataResponse>

However, if we have another look at the context properties after the receive pipeline has done its job, we notice the properties are back in place and can be used for further processing/routing/....

Conclusion

Whenever you need to save a couple of values cross-call, there is an alternative solution to building an orchestration to keep track of these values.

Whenever you are building a flow which will be saving a huge amount of data, you could of course build a solution which saves this data to disk/SQL/... instead, but that is all up to you.

Categories: .NET, BizTalk, Pipelines
written by: Maxim Braekman