
Codit Blog

Posted on Thursday, August 24, 2017 7:35 AM

Toon Vanhoutte by Toon Vanhoutte

I've always been intrigued by agile development and the scrum methodology. Unfortunately, I never had the opportunity to work within a truly agile organization. Because I strongly believe in scrum and its fundamental principles, I've tried to apply an agile mindset to integration projects, even within very waterfall-oriented organizations and projects. I'm not an expert on the scrum methodology at all; I've just adopted it in a very pragmatic way.

Please do not hesitate to share your vision in the comments section below, even if it conflicts with my statements!

Important note: this post is not intended to state that when you do scrum, you should make your own interpretation of it. It's to explain how you can benefit from agile / scrum principles on integration projects, that are not using the scrum methodology at all. It's a subtle, but important difference!

1. Prototype in an early stage 

I've been working on integration projects for more than 10 years, and every new assignment comes with its own specific challenges: a new type of application to integrate with, a new protocol that is not supported out-of-the-box, or specific non-functional requirements you have never faced before. Challenges become risks if you do not tackle them early. It's important to list them and to perform a short risk assessment.
 
Plan proof of concepts (PoCs) to overcome these challenges. Schedule these prototyping exercises early in the project, as they might influence the overall planning (e.g. extra development required) and budget (e.g. purchase of a third-party tool or plug-in). Perform them in an isolated sandbox environment (e.g. the cloud), so you don't lose time on organizational procedures and administrative overhead. A PoC must have a clearly defined scope and success criteria. Real-life examples where we introduced a PoC: validating the performance characteristics of the BizTalk MLLP adapter, determining the best design to integrate with the brand-new Dynamics 365 for Operations (AX), and testing the feature set of specific Logic Apps connectors against the requirements…

2. Create a Definition of Ready 

A Definition of Ready is a kind of prerequisite list that the development team and product owner agree on. This list contains the essential information required to kick off the development of a specific backlog item. It's important to agree on a complete, but not overly extensive, Definition of Ready. Typical items on an integration-focused Definition of Ready are: sample files, data contracts, transformation analysis, and a single point of contact for each dependent backend application.

This is a very important aspect of large integration projects. You want to avoid your development team being constantly blocked by unclear dependencies, but on the other hand it's not advisable to constantly postpone development, as this introduces risk. It's a difficult balancing exercise that requires a pragmatic approach and a decent level of flexibility.
 
It's important to liberate your development team from the task of gathering these prerequisites, so they can focus on delivering business value. In large integration projects, it's a full-time occupation to chase the responsible people in the impacted teams for the required specs or dependencies. The person taking up this responsibility plays a crucial role in the success of the project. Excellent communication and people skills are a must.

3. Strive for a self-organized team

"The team lead gives direct orders to each individual team member." Get rid of this old-fashioned idea of "teamwork". First, the development team must be involved in estimating the effort for backlog items. That way, you get a realistic view of the expected development progress and the team is motivated to meet their own estimates. Secondly, it's highly advisable to encourage the team to become self-organized. This means they decide how to organize themselves to get the maximum out of the team, deliver high quality and meet expectations. In the beginning you need to guide them in that direction, but it's amazing how quickly they adapt to that vision.

Trust is the basis of this kind of collaboration between the team lead (or product owner) and the team. I must admit that it wasn't easy for me in the beginning, as my natural tendency is to be in control. However, the advantages are incredible: team members become highly involved, take responsibility, are better motivated and show real dedication to the project.

One might think you lose control, but nothing could be further from the truth. Depending on the development progress, you can reshuffle the product backlog in collaboration with your stakeholders. It's also good to schedule regular demo sessions (with or without the customer) to provide your feedback to the development team.

Each team member takes on their own role and responsibilities within the team, even though no one ever told them to do so. Replacing one member of the team always has a drastic impact on team performance and behaviour. It's as if the team loses part of its DNA and needs some time to adjust to the new situation. I'm blessed that I was always able to work with highly motivated colleagues, but I can imagine it's a hell of a job to strive for a self-organized team that includes some unmotivated individuals.

4. Bridge the gap between teams

The agile vision encourages cross-functional teams, consisting of e.g. business analysts, developers and testers. Preferably, one person within the team can take on multiple roles. However, facing reality, many large organizations still have the mindset of one team per expertise (HR, Finance, .NET, Integration, Java, Testing…). Often there is little interaction between these teams, and they may even be physically separated.

If you are part of the middleware team, you're stuck between two teams: the one managing the source application and the one developing the target system. Try to convince them to create cross-functional project teams that preferably work in the same place. If this is not an option, aim for at least a daily stand-up meeting with the most important key players (the main analysts and developers) involved. Avoid at all times the situation where communication always goes via a management layer, as this is time-consuming and a lot of context is lost. As a last resort, you can simply go to the floor where the other team sits on a daily basis and discuss the most urgent topics.

Throughout many integration projects, I've seen the importance of people and communication skills. These soft skills are a must to bridge the gap between different teams. Working full-time behind your laptop on your own island is not the key to success in integration. Collaborate on all levels and across teams!

5. Leverage the power of mocking

In an ideal scenario, all backend services and modules we need to integrate with are already up and running. In reality, however, this is almost never the case. In a waterfall approach, integration is typically scheduled in the last phase of the project, assuming all required prerequisites are ready at that moment in time. This puts a big risk on the integration layer. According to scrum and agile principles, this must be avoided at all times.
 
This introduces a challenge for the development team. Developers need to abstract away the external systems their solution relies on. They must get familiar with dependency injection and / or mocking frameworks that simulate backend applications. These techniques allow you to start developing the integration layer with fewer prerequisites in place and ensure a fast delivery once the dependent backend applications are ready. A great mocking framework for BizTalk Server is Transmock, definitely worth checking out if you face problems with mocking. Interesting blogs about this framework can be found here and here, and I've also demonstrated its value in this presentation.
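To illustrate the idea outside of BizTalk: the sketch below uses Python's standard `unittest.mock` to combine dependency injection with a mocked backend. The `OrderSync` class and its `get_order` call are hypothetical, invented purely for this example; the point is that the integration component only talks to an injected client, so development can start before the real backend exists.

```python
from unittest.mock import Mock

# Hypothetical integration component that depends on a backend client.
class OrderSync:
    def __init__(self, backend_client):
        self.backend = backend_client  # injected dependency, trivial to mock

    def sync(self, order_id):
        order = self.backend.get_order(order_id)
        return {"id": order["id"], "status": "synced"}

# The real backend is not available yet, so simulate it with a mock.
fake_backend = Mock()
fake_backend.get_order.return_value = {"id": 42, "amount": 100}

result = OrderSync(fake_backend).sync(42)
print(result)  # {'id': 42, 'status': 'synced'}
```

Once the real backend application is ready, only the injected client changes; the integration logic and its tests stay untouched.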

6. Introduce spikes to check connectivity

Integration is all about connecting backend systems seamlessly with each other. The setup of a new connection with a backend system can often be a real hassle: exceptions need to be made in the corporate firewall, permissions must be granted on test environments, security should be configured correctly, valid test data sets must be available, etc...
 
In many organizations, these responsibilities are spread across multiple teams, and the procedures to request such changes can cause a lot of administrative and time-consuming overhead. To prevent your development team from being blocked by such organizational waste, it is advisable to put these connectivity setups early on the product backlog as "spikes". When the real development work starts in a later iteration, the connectivity setup has already been given the green light.

7. Focus first on end-to-end

This flowchart explains in depth the rules you can apply to split user stories. Integration scenarios match best with Workflow Steps. This advice is really helpful: "Can you take a thin slice through the workflow first and enhance it with more stories later?". The first focus should be to get it working end-to-end, so that at least some data is exchanged between the source and target applications. This can be done with a temporary data contract, within a simplified security model, and without more advanced features like caching, sequence control, duplicate detection, batching, etc…
 
As a real-life example, we recently had a request to expose an internal API that must consume an external API to calculate distances. There were some additional requirements: the responses from the external API must be stored for a period of 1 month, to save on the transaction costs of the external API; authentication must be performed with the identity of the requesting legal entity, so usage can be billed separately; and both a synchronous and an asynchronous internal API must be exposed. The responsibility of the product owner is to find the Minimum Viable Product (MVP). In this case, it was a synchronous internal API, without caching and with one fixed identity for the whole organization. In later phases, this API was enhanced with caching, a dynamic identity and an async interface.
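The caching enhancement from this example can be sketched as a small time-to-live cache in front of the external call. This is a Python illustration, not the actual project code; the class and function names are hypothetical, and the real solution would persist the cache rather than keep it in memory.

```python
import time

class CachedDistanceClient:
    """Wraps a (hypothetical) external distance API and caches each response
    for a fixed period, to save on per-call transaction costs."""

    def __init__(self, external_call, ttl_seconds=30 * 24 * 3600):  # ~1 month
        self.external_call = external_call
        self.ttl = ttl_seconds
        self.cache = {}  # (origin, destination) -> (timestamp, distance)

    def distance(self, origin, destination):
        key = (origin, destination)
        hit = self.cache.get(key)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]  # cache hit: no external transaction cost
        value = self.external_call(origin, destination)
        self.cache[key] = (time.time(), value)
        return value

# Stand-in for the external API, recording how often it is actually called.
calls = []
def fake_api(origin, destination):
    calls.append((origin, destination))
    return 42.0

client = CachedDistanceClient(fake_api)
client.distance("Ghent", "Brussels")
client.distance("Ghent", "Brussels")  # second call is served from the cache
print(len(calls))  # 1: the external API was hit only once
```

Because the cache sits behind the same `distance` interface, it could be added in a later iteration without changing the MVP's callers, which is exactly the enhancement path described above.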
 
In some projects, requirements are set in stone upfront and are not subject to negotiation: the interface can only be released to production if all requirements are met. In such cases, it's also a good exercise to find the MVP required for acceptance testing. That way, you can release faster internally, which results in faster feedback from internal testing.

8. Put common non-functionals on the Definition of Done

In middleware solutions, there are often requirements for high performance, high throughput and large message handling. Most of these requirements can be met by applying best practices in your development: use a streaming design to avoid loading messages entirely into memory, reduce the number of persistence points, cache configuration values wherever applicable, etc…
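The streaming principle can be illustrated in a few lines of Python (a sketch of the general idea, not BizTalk's actual streaming pipeline API): the message is consumed in fixed-size chunks, so memory usage stays constant regardless of message size.

```python
import io

def count_lines_streaming(stream, chunk_size=64 * 1024):
    """Process a message as a stream of chunks instead of loading it
    entirely into memory -- the same principle behind streaming
    pipeline components in middleware products."""
    count = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return count
        count += chunk.count("\n")

# An in-memory stream stands in for a large message on disk or on the wire.
big_message = io.StringIO("line\n" * 100_000)
print(count_lines_streaming(big_message))  # 100000
```

The same shape applies to transformation or validation steps: each component reads from the upstream stream and writes to a downstream one, so only one chunk is in memory at a time.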
 
It's good practice to put such development principles on the Definition of Done, to ensure the overall quality of your product. Code reviews should check whether these best practices are applied. Only when specific measures need to be taken to meet exceptional performance criteria is it advisable to list those requirements explicitly as user stories on the product backlog.

"Done" also means: it's tested and can be shipped at any moment. Agree on the required level of test automation: is unit testing (white box) sufficient, do you fully rely on manual acceptance testing, or is a minimal level of automated system testing (black box) required? Involve the customer in this decision, as it impacts team composition, quality and budget. It's also common practice to ensure automated deployment is in place, so you can release quickly with minimal impact. It's fantastic to see team members challenging each other during the daily stand-up to verify that the Definition of Done has been respected.

9. Aim for early acceptance (testing)

In quite a lot of ERP implementations, go-live is performed in a few big phases, preceded by several months of development. Mostly, acceptance testing is planned at the same pace. This means that flows developed at the beginning of the development stage remain untouched for several months until acceptance testing is executed. One important piece of advice here: acceptance testing should follow the iterative development approach, not the slow-paced go-live schedule.
 
One of the base principles of an agile approach is getting fast feedback: fail fast and cheap. Early acceptance testing ensures your integrations are evaluated by the end users against the requirements. If possible, also involve operations in this acceptance process: they will be able to provide feedback on the monitoring, alerting and troubleshooting capabilities of your integration solution. This feedback is very useful for optimizing the integration flows and for taking these lessons learned into account in subsequent development efforts. This approach can avoid a lot of refactoring afterwards…
 
Testing is not the only way to get feedback. Try to schedule demos on a regular basis, to verify that you are heading in the right direction. It's very important to adapt the demo to your stakeholders. A demo for operations can be done with technical tools, explaining all the details about reliability and security. When presenting to functional key users, keep the focus on the business process and the added value that integration brings. Try to include both the source and target applications, so they can witness the end result without needing to know exactly what is under the hood. If you can demonstrate that you create a customer in one application and it gets synchronised into two other applications within 10 seconds, you have them on your side!

10. Adapt to improve

Continuous improvement is a key to success. This improvement must be reflected on two levels: your product and your team. Let's first consider improvements to the product, of which there are two types. First, there are optimizations derived from direct feedback from your stakeholders. They provide immediate value to your product, which in this case is your integration project. These can be placed on the backlog. Secondly, there are adaptations that result in indirect value, such as refactoring. Refactoring is intended to stabilize the product, improve its maintainability and prepare it for change. It's advisable to only refactor a codebase that is thoroughly tested, to ensure you do not introduce regression bugs.
 
Next to this, it's even more important to challenge the way the team is working and collaborating. Recurring retrospectives are the starting point, but they must result in real actions. Let the development team decide on the subjects they want to improve. Sometimes these are quick wins: making some working agreements about collaboration, communication, code reviews, etc… Other actions may take more time: improving the development experience, extending the unit testing platform, optimizing the ALM approach. All these actions result in better collaboration, higher productivity and faster release cycles.

I find it quite challenging to deal with such indirect improvements. I used to place them on the backlog as well, while letting the team decide on their priority. We mixed them with backlog items that deliver direct business value, in a 90% (direct value) / 10% (indirect value) proportion. The drawback of this approach is that not everyone is involved in the indirect improvements. Another way to tackle this is to reserve one day every two weeks that is dedicated to such improvements. That way the whole team is involved in the process, which reinforces the idea of a self-organized development team.

Hope you've enjoyed this one!

Toon

Categories: Technology
Tags: Integration
written by: Toon Vanhoutte

Posted on Friday, October 29, 2010 9:37 AM

Peter Borremans by Peter Borremans

Host Integration Server 2010 Dynamic Remote Environments

In the previous versions of Host Integration Server, all connection properties to the Host environment were configured in a static remote environment (RE).

On the HostApps adapter properties, the connection to a Host was made through one of the existing REs on the BizTalk server. Defining a remote environment was done through the TI Manager.

[Screenshot: HostApps adapter properties in HIS 2009]

From Host Integration Server 2010 onwards, the Remote Environment can be configured on the send port itself, instead of via a pre-defined RE in the TI Manager.

[Screenshot: Remote Environment configuration on the send port]

The steps to configure this dynamic RE are very similar to what you were used to doing in the TI Manager. Click the 'Connection Strings' ellipsis to configure the RE on the send port.

Add your TI assemblies…

[Screenshot: adding the TI assemblies]

Then choose 'Edit Connection String…'. The following screen shows the connection properties for connecting to the Host.

[Screenshot: Host connection properties]

Having all connection properties available on the send port allows for more flexibility in deployment and at runtime. It is also much more 'friendly' to BizTalk developers: they are used to configuring send ports, so they don't need a separate tool to configure a HostApps send port.

For a complete overview of new features in Host Integration Server 2010 visit this site:

http://msdn.microsoft.com/en-us/library/gg167635(v=BTS.70).aspx

Categories: Technology
written by: Peter Borremans

Posted on Wednesday, March 17, 2010 4:16 PM

Peter Borremans by Peter Borremans

Host Integration Server Smart Multihoming causing delays to connect to a HIS server

Host Integration Server is configured by default to use 'smart multihoming'. With smart multihoming, the Host Integration Server (HIS) sends all its available IP addresses to the clients. The SNA client then uses the address from the received list that is closest to its own subnet or IP address.

In some scenarios, smart multihoming can cause problems. What if one of the IP addresses received from the HIS server is not reachable (e.g. an IP address used to link to a separate network)? In that case, the client will try to connect to one of the other addresses received from the HIS server, but only after a delay (45 seconds by default). After the delay, the client connects to an IP address that is reachable.

This delay when connecting to the server is not a desired behavior.

How do we fix this?

On the client, we can disable smart multihoming. To do this, make sure the registry key 'ReadjustMultihomedAddresses' is set to 'NO'. If the key does not exist, create it. The location of this registry key is: 'HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Snabase\Parameters\SnaTcp'.
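Creating the value can also be scripted, for example with `reg add` from an elevated command prompt. A sketch, assuming a string (REG_SZ) value; verify the path and value type against your HIS version before rolling this out:

```shell
rem Disable smart multihoming on the SNA client (restart SnaBase afterwards)
reg add "HKLM\System\CurrentControlSet\Services\Snabase\Parameters\SnaTcp" ^
    /v ReadjustMultihomedAddresses /t REG_SZ /d NO /f
```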

With smart multihoming disabled on the client, the new behavior is as follows: the client will try to connect to the server's IP addresses in the order in which it received them from the server. Again, this can cause problems... What if the unreachable IP address appears first in the list? We would again see a long delay when connecting to the server. Luckily, we can fix this too!

Host Integration Server uses the network bindings to determine in which order to present its IP addresses to the client. You can find the network bindings on the 'Network Connections' screen, under the 'Advanced' menu option, 'Advanced Settings...', on the 'Adapters and Bindings' tab. On that tab, every network connection is listed and you can change the order in which they appear. Adapt the order to your needs (reachable IPs first!). Host Integration Server will use the order specified here when sending its IP list to the client.

Now that both steps are done (disable smart multihoming AND change the network bindings on the server), the client will no longer experience a delay when connecting to the HIS server.

 

Categories: BizTalk
written by: Peter Borremans

Posted on Monday, May 15, 2017 3:15 PM

Toon Vanhoutte by Toon Vanhoutte

Recently, the product team released the first feature pack for BizTalk Server 2016. In this way, Microsoft aims to bring more agility to the release model of BizTalk Server. The feature pack contains a lot of new and interesting features, of which automated deployment from VSTS is probably the most important. This blog post looks at what is included in this offering and compares it with existing BTDF functionality.

In case you are interested in a detailed walk-through on how to set up continuous deployment, please check out this blog post on Continuous Deployment in BizTalk 2016, Feature Pack 1.

What is included?

Below, you can find a bullet point list of features included in this release.

  • An application version has been added and can be easily specified.
  • Automated deployment from VSTS, using a local deploy agent.
  • Automated deployment of schemas, maps, pipelines and orchestrations.
  • Automated import of multiple binding files.
  • Binding file management through VSTS environment variables.
  • Update of specific assemblies in an existing BizTalk application (with downtime)

What is not included?

This is a list of features that are currently not supported by the new VSTS release task:

  • Build BizTalk projects in VSTS hosted build servers.
  • Deployment to a remote BizTalk server (local deploy agent required)
  • Deployment to a multi-server BizTalk environment.
  • Deployment of shared artifacts (e.g. a schema that is used by several maps)
  • Deployment of more advanced artifacts: BAM, BRE, ESB Toolkit…
  • Control of which host instances / ports / orchestrations should be (re)started
  • Undeploy a specific BizTalk application, without redeploying it again.
  • Use the deployment task in TFS 2015 Update 2+ (no download supported)
  • Execute the deployment without the dependency of VSTS.

Conclusion!

Microsoft released this VSTS continuous deployment service into the wild, clearly stating that this is a first step in the BizTalk ALM story. That sounds very promising to me, as we can expect more functionality to be added in future feature packs!

After intensively testing the solution, I must conclude that there is a stable and solid foundation to build upon. I really like the design and how it is integrated with VSTS. This foundation can now be extended with the missing pieces, so we end up with great release management!

At the moment, this functionality can be used by BizTalk Server 2016 Enterprise customers that have a single-server environment and only use the basic BizTalk artifacts. Other customers should still rely on the incredibly powerful BizTalk Deployment Framework (BTDF) until the next BizTalk feature pack release. At that point, we can re-evaluate! I'm quite confident that we're heading in the right direction!

Looking forward to more on this topic!

Toon

Posted on Saturday, December 6, 2014 12:19 AM

Peter Borremans by Peter Borremans

Today Microsoft went into more detail on the Host Integration roadmap. Read all about it in this article.

As integration specialists, Codit developed numerous integration projects involving Host Systems.

After the announcements during Integrate 2014, I was very interested to see how BizTalk, BizTalk Services and Host Integration will cope with these changes.

The Host Integration team, represented by Paul Larsen, published a clear roadmap of how Host Integration Server will evolve.
The following Host Integration Server features will become available as microservices:

  • CICS, IMS and i programs application integration
  • DB2 and Informix databases
  • WebSphere MQ messages (using MS client)

The Host Integration team will also provide connectors to DB2 and Informix databases for use in Power BI for Office (Power Query, Power Pivot).

Host Integration Server vNext will support Informix databases for both ADO.NET and BizTalk Server.

I was very pleased to see the clear and concrete roadmap and hope to see this from the other product teams as well.

Categories: BizTalk
written by: Peter Borremans