
Codit Blog

Posted on Monday, 18 September 2017, 12:45

by Toon Vanhoutte

Azure's serverless PaaS offering consists of Azure Functions and Logic Apps. If you consult the documentation, you'll find out that there is quite some overlap between the two. For many people, it's not clear what technology to use in which scenario. In this blog post, I discuss the main differences between these two event-driven Azure services and I provide some guidance to help you make the right decision.

Comparison

Developer experience

A popular comparison states that Azure Functions is code being triggered by an event, whereas Logic Apps is a workflow being triggered by an event. This is reflected in the developer experience. Azure Functions are written entirely in code, which currently supports JavaScript, C#, F#, Node.js, Python, PHP, batch, bash and PowerShell. In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow definition language in the code view. Every developer has, of course, a personal preference. Logic Apps is much simpler to use, but this can sometimes cause limitations in complex scenarios. Azure Functions gives a lot more flexibility and responsibility to the developer.
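To make that contrast concrete, here is a minimal sketch of the code-first experience: an HTTP-triggered Azure Function written in C# script style. The function logic and messages are purely illustrative; in Logic Apps, the equivalent would be a visually designed workflow starting from an HTTP Request trigger.

```csharp
// run.csx - a minimal HTTP-triggered Azure Function (C# script style)
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Processing an incoming request.");

    // Read the raw payload; a real function would deserialize and validate it here.
    string body = await req.Content.ReadAsStringAsync();

    return string.IsNullOrEmpty(body)
        ? req.CreateResponse(HttpStatusCode.BadRequest, "A request body is required.")
        : req.CreateResponse(HttpStatusCode.OK, $"Received {body.Length} characters.");
}
```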

Connectivity

Logic Apps connects to an enormous variety of cloud and on-premises applications, ranging from Azure and Microsoft services, over SaaS applications and social media, to LOB systems. You can find the impressive list of connectors here. Each connector comes with an API connection, which stores the required credentials in a secure way. These API connections can be reused from within multiple Logic Apps, which is great! Azure Functions has the concept of triggers and input and output bindings. Most of these bindings connect your Azure Functions to other Azure services, such as Event Hubs, Storage, DocumentDb, etc… Consult the complete list here. The HTTP binding is probably the most popular one, as it allows the creation of serverless APIs. At the moment, there are no signs that Azure Functions aims to support as many bindings as Logic Apps offers connectors.
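As an illustration of how bindings take away plumbing code, the sketch below shows a precompiled C# function that is triggered by an Azure Storage queue and writes its payload to blob storage through an output binding. The queue and container names are made up for the example.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class OrderArchiver
{
    // The queue trigger and blob output binding are declared via attributes;
    // the Functions runtime takes care of the connectivity on both sides.
    [FunctionName("OrderArchiver")]
    public static void Run(
        [QueueTrigger("orders")] string orderMessage,
        [Blob("order-archive/{rand-guid}.json", FileAccess.Write)] out string archivedOrder,
        TraceWriter log)
    {
        log.Info($"Archiving an order message of {orderMessage.Length} characters.");
        archivedOrder = orderMessage;
    }
}
```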

Exception handling

Cloud solutions need to deal with transient faults. Logic Apps provides out-of-the-box functionality that allows you to configure automatic retries on every action. In case this doesn't solve the problem, the workflow gets a failed status and can be resubmitted after human intervention. This guarantees an at-least-once execution model, which is pretty reliable! In Azure Functions, you have the typical try/catch options available. If you want to enable retries, you need to do the plumbing yourself, for example by introducing Polly. The way you can handle exceptions in the output binding depends on the language used and the type of output binding. This doesn't always give you the desired outcome. There are no resume / resubmit capabilities, unless you develop them yourself!
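As a sketch of what that retry plumbing could look like, the snippet below uses Polly to retry a transient HTTP failure with exponential back-off. The retry count, delays and endpoint are illustrative, not a recommendation.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class ResilientBackendClient
{
    private static readonly HttpClient Client = new HttpClient();

    // Retries transient HTTP failures three times, waiting 2, 4 and 8 seconds between attempts.
    public static async Task<HttpResponseMessage> GetWithRetryAsync(string url)
    {
        var retryPolicy = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        return await retryPolicy.ExecuteAsync(() => Client.GetAsync(url));
    }
}
```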

State

Until recently, Azure Functions always needed to be stateless and preferably idempotent. With the announcement of Azure Durable Functions, Microsoft brings state and long-running capabilities to Azure Functions, by leveraging the Durable Task Framework. This new framework allows sequential and parallel execution of several Functions, supports long-running tasks with pre-defined timeouts and provides stateful actors without the need for external storage. The state is automatically stored in Azure Storage queues, tables and blobs, which is disaster-proof. I am looking forward to seeing how this will evolve. These long-running / stateful processes are inherently available in Logic Apps, except for the stateful actor model.
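A sketch of such an orchestration, based on the preview Durable Functions programming model (the activity names are invented), could look like this:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class OrderOrchestration
{
    // The orchestrator chains two activity functions; its progress is checkpointed
    // to Azure Storage at every await, so it survives restarts and long waits.
    [FunctionName("ProcessOrder")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        string order = context.GetInput<string>();

        string validated = await context.CallActivityAsync<string>("ValidateOrder", order);
        string confirmation = await context.CallActivityAsync<string>("BookOrder", validated);

        return confirmation;
    }
}
```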

Networking

Hybrid integration is a reality nowadays. Cloud services must be able to connect to on-premises resources in a secure and high-performing way. Azure Logic Apps performs this task via the On-Premises Data Gateway, which needs to be installed on premises. Behind the scenes, it uses Azure Service Bus Relay to connect to the cloud in a firewall-friendly way, through encrypted channels. When using Azure Functions within an App Service Plan, you have more convenient hybrid connectivity options that reside on the network level. App Service Plans offer support for many networking options, like Hybrid Connections, VNET Integration and App Service Environment. Via these options, you can integrate Azure Functions with your local network through a Site-to-Site VPN or ExpressRoute.

Deployment

Azure Resource Manager templates are the way to deploy resources across the Microsoft Azure platform. Fortunately, both Azure Functions and Logic Apps have built-in support for ARM deployments, for example through Visual Studio Release Management. Next to this, Azure Functions allows easy setup of continuous deployments triggered from sources like BitBucket, Dropbox, Git, GitHub, OneDrive and VSTS. This is ideal when multiple and frequent contributions need to be consolidated and tested. Additionally, Azure Functions now has deployment slots in preview. This allows you to deploy and test a vNext version first, before swapping that tested deployment slot with the current version in production.

Runtime

Logic Apps runs only in the cloud, as it has a dependency on Microsoft-managed connectors. As a consequence, you cannot debug, test or run Logic Apps locally. Azure Functions can easily be developed and debugged on your local workstation, which is a big plus for developer productivity. Via the Azure Functions Runtime (still in preview) you are able to deploy them on premises in Windows Containers, with SQL Server as a storage layer. Azure Functions is also supported on Azure Stack, and it has been announced as part of Azure IoT Edge to execute on small devices. This hosting flexibility is a big asset in phased migration scenarios towards the cloud.

Monitoring

Per Logic App, you have a nice overview of the previous runs and their corresponding outcome. You can filter this history based on a time period and the resulting run status. The monitoring view of a workflow run is the same as the designer view, which makes it very intuitive. For each action, you can see the status and all inputs/outputs. With one button click, you can enable integration with OMS, where you can search on tracked properties. It's on the roadmap to have a user-friendly, cross-Logic Apps dashboard on top of this OMS integration. Each Azure Function comes with a Monitor tab, where you can see the execution history. There is also a live event stream that shows near real-time processing statistics in nice graphs. On top of that, there's full integration with Application Insights, where you can take advantage of the powerful Analytics queries.

Pricing Model

Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. It's important to be aware that you also need to pay for polling triggers, which can be a hidden cost. If you want to benefit from the capabilities of the Integration Account, you should be aware that this comes with a fixed monthly bill. With Azure Functions, you have two pricing options. You can opt for the fixed cost of an App Service Plan, in which you reserve compute power on which you can run not only Azure Functions, but also Web, Mobile and API Apps. The second option is completely serverless, with a consumption plan based on resource consumption (memory × execution time) and the number of executions. Don't forget that the Azure Storage layer also comes with a rather small cost.

Security

Each particular binding or connector comes with its own security. In this section, I focus on the security of Logic Apps and Azure Functions exposed as an API. In order to access a Logic App with the HTTP trigger, the client must include a Shared Access Signature in the URL. The signature is generated from a secret key that can be regenerated at any time. There is also the ability to restrict access based on incoming IP addresses. To add more authorization logic, you can put Azure API Management in front of it. Azure Functions has a similar concept of API keys. The API key can be shared for the whole Function App (host key) or you can create a specific one for your Function. If you run your Azure Function in an App Service Plan, you can leverage its codeless authentication functionality with Active Directory, Google, Facebook, etc… Real authorization requires a small code change. Azure Function Proxies can be a lightweight alternative to full-blown API Management for adding security on top of your HTTP-triggered Functions.
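To illustrate the difference from the client's perspective, the sketch below calls an HTTP-triggered Function with an API key and a request-triggered Logic App via its SAS-protected callback URL. The URLs and payload are invented for the example.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SecuredEndpointCaller
{
    private static readonly HttpClient Client = new HttpClient();

    // Azure Function: the function (or host) key is passed via the 'code' query string parameter.
    public static Task<HttpResponseMessage> CallFunctionAsync(string functionKey) =>
        Client.PostAsync(
            $"https://my-function-app.azurewebsites.net/api/orders?code={functionKey}",
            new StringContent("{ \"orderId\": 123 }", Encoding.UTF8, "application/json"));

    // Logic App: the callback URL already contains the SAS query parameters (sp, sv, sig)
    // that are generated from the workflow's secret key.
    public static Task<HttpResponseMessage> CallLogicAppAsync(string callbackUrl) =>
        Client.PostAsync(callbackUrl, new StringContent("{ \"orderId\": 123 }", Encoding.UTF8, "application/json"));
}
```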

Conclusion

Based on the comparison above, you'll notice that a lot of factors are involved when deciding between the two technologies. First, it's important to see which technology supports the connectivity that you require. Do you want to write it yourself, or do you want to leverage out-of-the-box bindings / connectors? Next to that, my general guidance is as follows:

When dealing with synchronous request/response calls that execute more complex logic, Azure Functions is the preferred option. Logic Apps is better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing. Logic Apps can be perfectly extended with Azure Functions to execute stateless tasks that cannot be fulfilled by the out-of-the-box Logic Apps capabilities.

Web APIs are often composed of both sync and async operations. If you follow the guidance stated above, you might end up with an API that uses both Azure Functions and Logic Apps. This is where Azure Functions Proxies has its value, as it can expose these separate microservices as a unified API. This will be discussed in another blog post.


Stay tuned for more!

Categories: Azure
Written by: Toon Vanhoutte

Posted on Monday, 18 September 2017, 12:31

by Stijn Moreels

How can Functional Programming help us to ignore even more in our tests?

Introduction

In this series of Test Infected, I will show you how we can increase the Test Ignorance of our tests by applying Functional approaches to our Imperative code.
If you don't quite understand what I mean by "ignorance", I recommend my previous post about the topic. In this post, we will go on the journey of increasing the Code's Intent by increasing the Ignorance in a Functional way.

Functional Ignorance

Fixture

The fixture phase of your test can become very large; several previous posts have already proved this.
How can functional programming help?
Well, let's assume you want to set up an object with some properties. You would:

  • Declare a new variable
  • Initialize the variable with a newly created instance of the type of the variable
  • Assign the needed properties to set up the fixture

Note that, in our test, we're most interested in the last item; so how can we make sure that this part is the most visible?

The following example shows what I mean:
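(The original example is shown as an image in the source post. A comparable sketch, assuming hypothetical Message and MessageContext types and an xUnit test, might look like this.)

```csharp
using System;
using Xunit;

[Fact]
public void Notifies_subscribers_with_the_message_subject()
{
    // The test only cares about the subject, yet it is buried
    // inside object-initializer noise and context plumbing.
    var message = new Message
    {
        Id = Guid.NewGuid(),
        Body = "some ignored body",
        ReceivedOn = DateTimeOffset.UtcNow,
        Subject = "Invoice 2017-001"
    };

    using (var context = new MessageContext())
    {
        context.Messages.Add(message);
        context.SaveChanges();
    }

    // ... exercise the SUT and assert on the notification subject ...
}
```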

We would like to test something with the subject property of the message, but note that this is not the first thing which catches your eye (especially if we use the object-initializer syntax). We must also initialize something in a context.

We could, of course, extract the creation functionality with a Parameterized Creation Method and extract the insertion functionality that accepts a message instance.

But note that we do not use the message elsewhere in the test. We could extract the whole functionality and just accept the subject name, but then we would have to use an explicit method name to make clear that we insert a message in the context AND assign the given subject name to that inserted message. What if we want to test something else? Another explicit method?

What I sometimes do is extract only the assigning functionality like this:
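(The original code is again an image; a sketch of that extraction, a fragment of a test class using the same hypothetical types, could look as follows, where only the assignment that matters is passed in by the test.)

```csharp
// Hypothetical helper: inserts an otherwise ignored message and lets the caller
// express, in code, only the part of the fixture that matters.
private void InsertMessage(Action<Message> customize)
{
    Message message = CreateIgnoredDefaultMessage();
    customize(message);

    using (var context = new MessageContext())
    {
        context.Messages.Add(message);
        context.SaveChanges();
    }
}

// Usage in a test: only the subject is visible, everything else stays ignored.
// InsertMessage(m => m.Subject = "Invoice 2017-001");
```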

We don't use the name of the method to state our intentions; we use our code.

In the extracted method, we can do whatever necessary to create an ignored message. If we do need another way to create a message initially, we can always create a new method that only inserts the incoming message and call this from our functional method.

It would be nice if we had immutable values and could use something like F# "Copy-and-Update Expressions".

Exercise

Often, when you want to test your code branches via an external SUT endpoint, it is not the creation of the SUT that changes, but rather the info you send to the endpoint. Since the SUT creation does not change across several tests, we could say that it is not that important to the test case; the changing values are.

When you come across such a scenario, you can use the approach I describe here.

The idea is to split the exercise logic from the SUT creation. If you have different endpoints you want to test for the same SUT fixture, you can even extend this approach by letting the client code decide what endpoint to call.

The following example shows two test cases where the SUT creation is the same:

Note that we have the same pattern: (1) create the SUT, (2) exercise the SUT. Compare this with the following code, where the SUT is exercised differently.

We ignore the unnecessary info by Functional Thinking:
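(The original examples are images; a sketch of the idea, with a hypothetical OrderService SUT and stub dependencies: the SUT creation is fixed in one place and the test supplies only the exercise step as a function.)

```csharp
// Hypothetical helper: the fixture is always the same, the exercise step varies per test.
private void TestOrderEndpoint(Action<OrderService> exercise)
{
    var sut = new OrderService(new StubOrderRepository(), new StubClock());
    exercise(sut);
}

// Usage: each test states only how it exercises the SUT.
// TestOrderEndpoint(service => service.PlaceOrder("Invoice 2017-001"));
```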

We can extend this idea by letting the client choose the return value. This is rather useful if we want to test the SUT with the same Fixture but with different member calls:
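(A sketch of that extension, under the same assumptions: the same helper, but generic in its return value, so different member calls can be verified against the same fixture.)

```csharp
// Hypothetical helper: same fixture, but the test chooses what to return and assert on.
private TResult TestOrderEndpoint<TResult>(Func<OrderService, TResult> exercise)
{
    var sut = new OrderService(new StubOrderRepository(), new StubClock());
    return exercise(sut);
}

// Usage: different member calls, same ignored fixture.
// Order order = TestOrderEndpoint(service => service.GetOrder(123));
// bool removed = TestOrderEndpoint(service => service.TryRemoveOrder(123));
```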

I use this approach in almost every Class Test I write. The idea is simple: encapsulate what varies. Only now we think in Functions rather than in Objects. Functions can be treated as Objects!

Verification

The last topic I will discuss in a Functional approach is the Result Verification phase of the Four-Phase Test.

When applying techniques in this phase, I always come back to the same principle and ask myself the same question: "What is really important? What interests me the most?"

In the Result Verification phase, this is the Assertion itself. WHAT do you assert in the test to make it a Self-Evaluating Test? What makes the test succeed or fail?
That’s what’s important; all the other clutter should be removed.

A good example (I think) is when I needed to write some assertion code to Spy on a datastore. When the SUT was exercised, I needed to check whether there was any change in the database and whether this corresponded with my expectations.
Of course, I needed some logic to call the datastore, retrieve the entities, assert on the entities and tear down some datastore-related items. But the test only cares whether the update happened or not.

As you can see, the assertion itself is baked into the called method, and we must rename the method to a more declarative name in order for the test reader to know what we're asserting on.

Now, as you can see in the next example, I extracted the assertion, so the test itself can state what the assertion should be.
Also note that when I extract this part, I can reuse this Higher-Order Function in any test that needs to verify the datastore, which is exactly what I did:
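(The original code is shown as an image; a sketch of such a reusable Higher-Order Function, assuming a hypothetical OrderContext and Order entity and System.Linq being available, could look like this.)

```csharp
// Hypothetical helper: hides the datastore plumbing and tear-down,
// while the test supplies the assertion that decides success or failure.
private void AssertOnStoredOrder(int orderId, Action<Order> assertion)
{
    using (var context = new OrderContext())
    {
        Order stored = context.Orders.Single(o => o.Id == orderId);
        assertion(stored);

        // Tear down the fixture data so the next test starts clean.
        context.Orders.Remove(stored);
        context.SaveChanges();
    }
}

// Usage: the test states only what is important.
// AssertOnStoredOrder(123, order => Assert.Equal(OrderStatus.Shipped, order.Status));
```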

Conclusion

Test Ignorance can be interpreted in many ways; this post explored some basic concepts of how Functional Programming can help us write more Declarative Tests. By extracting not only hard-coded values but also hard-coded functions, we can build complex behavior by composing smaller functions.

Functional Programming isn't fully mainstream (yet), but by introducing Functional concepts such as lambda functions, pattern matching, inline functions, pipelines and higher-order functions into Imperative languages, we can maybe convince the Imperative programmer to at least try the Functional way of thinking.

Categories: Technology
Tags: Code Quality
Written by: Stijn Moreels

Posted on Monday, 11 September 2017, 15:13

by Pim Simons

With the introduction of BizTalk 2016, it is now possible to use SHA-2 certificates when signing a message. As this is not as straightforward as I expected it to be, I've decided to share my experiences with setting up SHA-2 in this blog post.

For one of our customers we migrated all their interfaces from BizTalk 2006 R2 to BizTalk 2016. During testing of the new BizTalk 2016 environment we found that the signature of the AS2 messages being sent out was not working correctly. While there was no exception in BizTalk, the external party that was receiving the messages was unable to verify the signature of the messages. The messages from the old BizTalk 2006 R2 environment were all verified and processed successfully. Obviously we started checking whether all of the certificates and party settings were set up correctly in the new BizTalk 2016 environment. We found those to be correct and continued to search for the cause of this issue.

We ended up finding a difference when comparing the signing algorithms. The old BizTalk 2006 R2 environment was using SHA1, while the new BizTalk 2016 machine was using SHA256. Having found this clue, we figured that the fix would be easy: just change the signing algorithm on the AS2 agreement. However, this is where we ran into some problems. It turns out there really isn't anywhere to configure this on the AS2 agreement. As shown in the picture below, it is possible to specify that the message should be signed, but it is not possible to specify a signing algorithm.

 
The documentation does not specify where to supply the signing algorithm. But after walking through all of the settings of the AS2 agreement again, I noticed that the signing algorithm for the MDN was set to SHA256 and not SHA1. While it is greyed out and, at least according to the screen, only used for MDNs, we decided to change it anyway and see if this could be the issue.


 
I enabled ‘Request MDN’ and ‘Request signed MDN’ after which I could change the signing algorithm to SHA1. Finally, I disabled ‘Request MDN’ and ‘Request signed MDN’ again since we are not using the MDN.


This finally solved our issue, as the SHA1 algorithm was now used to sign the outgoing messages!

In conclusion, it is possible to specify the signing algorithm for outgoing messages, but it is not where you would expect it to be. If you go by the screens of the AS2 party agreement, you would think that the signing algorithm can only be specified for MDNs, as it is greyed out by default.

Hopefully the choice of signing algorithm will become easier after a bugfix or in the next release of BizTalk.


Categories: BizTalk
Written by: Pim Simons

Posted on Thursday, 24 August 2017, 07:35

by Toon Vanhoutte

I've always been intrigued by agile development and the scrum methodology. Unfortunately, I never had the opportunity to work within a truly agile organization. Because I strongly believe in scrum and its fundamental key principles, I've tried to apply an agile mindset to integration projects, even within very waterfall-oriented organizations and projects. I'm not an expert at all in scrum methodology; I've just adopted it in a very pragmatic way.

Please do not hesitate to share your vision in the comments section below, even if it conflicts with my statements!

Important note: this post is not intended to state that when you do scrum, you should make your own interpretation of it. It's to explain how you can benefit from agile / scrum principles on integration projects that are not using the scrum methodology at all. It's a subtle but important difference!

1. Prototype at an early stage

I've been working on integration projects for more than 10 years, and every new assignment comes with its own specific challenges: a new type of application to integrate with, a new protocol that is not supported out of the box, or specific non-functional requirements that you have never faced before. Challenges can become risks if you do not tackle them soon. It's important to list them and to perform a short risk assessment.
 
Plan proofs of concept (PoCs) to overcome these challenges. Schedule these prototyping exercises early in the project, as they might influence overall planning (e.g. extra development required) and budget (e.g. purchase of a third-party tool or plug-in). Perform them in an isolated sandbox environment (e.g. the cloud), so you do not lose time on organizational procedures and administrative overhead. A PoC must have a clear scope and defined success criteria. Real-life examples where we introduced a PoC: validate the performance characteristics of the BizTalk MLLP adapter, determine the best design to integrate with the brand-new Dynamics 365 for Operations (AX), test the feature set of specific Logic Apps connectors against the requirements…

2. Create a Definition of Ready 

A Definition of Ready is a kind of prerequisite list that the development team and product owner agree on. This list contains the essential information that is required in order to kick off the development of a specific backlog item. It's important to agree on a complete, but not overly extensive Definition of Ready. Typical items on an integration-focused Definition of Ready are: sample files, data contracts, transformation analysis and a single point of contact for each backend application involved.

This is a very important aspect in large integration projects. You want to avoid your development team being constantly blocked by unclear dependencies, but on the other hand it's not advisable to postpone development constantly, as this imposes a risk. It's a difficult balancing exercise that requires a pragmatic approach and a decent level of flexibility.
 
It's important to liberate your development team from the task of gathering these prerequisites, so they can focus on delivering business value. In large integration projects, it's a full-time occupation to chase the responsible people from the impacted teams to get the required specs or dependencies. The person taking up this responsibility has a crucial role in the success of the project. Excellent communication and people skills are a must.

3. Strive for a self-organized team

"The team lead gives direct orders to each individual team member". Get rid of this old-fashioned idea of "team work". First, the development team must be involved in estimating the effort for backlog items. In that way, you get a realistic view on the expected development progress and you get the team motivated to meet their estimates. Secondly, it's highly advised to encourage the team to become self-organized. This means they decide on how they organize themselves to get the maximum out of the team, to deliver high quality and to meet the expectations. In the beginning you need to guide them towards that direction, but it's amazing how quick they adapt to that vision.

Trust is the basis of this kind of collaboration between the team lead (or product owner) and the team. I must admit that it wasn't easy for me in the beginning, as my natural tendency is to be in control. However, the advantages are incredible: team members become highly involved, take responsibility, are better motivated and show real dedication to the project.

One might think you lose control, but nothing could be further from the truth. Depending on the development progress, you can shift the product backlog in collaboration with your stakeholders. It's also good to schedule regular demo sessions (with or without the customer) to provide your feedback to the development team.

Each team member has their own role and responsibilities within the team, even though no one ever told them to take them on. Replacing one member within the team always has a drastic impact on the team's performance and behaviour. It's like the team loses part of its DNA and needs some time to adjust to the new situation. I'm blessed that I was always able to work together with highly motivated colleagues, but I can imagine it's a hell of a job to strive for a self-organized team that includes some unmotivated individuals.

4. Bridge the gap between teams

The agile vision encourages cross-functional teams, consisting of e.g. business analysts, developers and testers. Preferably, one person within the team can take multiple roles. However, if we face reality, many large organizations still have the mindset of teams per expertise (HR, Finance, .NET, Integration, Java, Testing…). Often there is no good interaction amongst these teams and they are even physically separated.

If you are part of the middleware team, you're stuck between two teams: the one that manages the source application and the one developing the target system. Try to convince them to create cross-functional project teams that preferably work in the same place. If this is not an option, you can at least aim for a daily stand-up meeting with the most important key players (the main analysts and developers) involved. Avoid at all times having communication always go via a management layer, as this is time-consuming and a lot of context gets lost. As a last resort, you can just go on a daily basis to the floor where the team is situated and discuss the most urgent topics.

Throughout many integration projects, I've seen the importance of people and communication skills. These soft skills are a must to bridge the gap between different teams. Working full time behind your laptop on your own island is not the key to success in integration. Collaborate on all levels and across teams!

5. Leverage the power of mocking

In an ideal scenario, all backend services and modules we need to integrate with are already up and running. However, if we face reality, this is almost never the case. In a waterfall approach, integration would typically be scheduled in the last phase of the project, assuming all required prerequisites are ready at that moment in time. This puts a big risk on the integration layer. According to the scrum and agile principles, this must be avoided at all times.
 
This introduces a challenge for the development team. Developers need to make an abstraction of the external systems their solution relies on. They must get familiar with dependency injection and / or mocking frameworks that simulate back-end applications. These techniques allow you to start development of the integration layer with fewer prerequisites and ensure fast delivery once the backend applications it depends on are ready. A great mocking framework for BizTalk Server is Transmock; it's definitely worth checking out if you face problems with mocking. Interesting blogs about this framework can be found here and here, and I've also demonstrated its value in this presentation.

6. Introduce spikes to check connectivity

Integration is all about connecting backend systems seamlessly with each other. The setup of a new connection with a backend system can often be a real hassle: exceptions need to be made in the corporate firewall, permissions must be granted on test environments, security should be configured correctly, valid test data sets must be available, etc...
 
In many organizations, these responsibilities are spread across multiple teams and the procedures to request such changes can cause a lot of administrative and time-consuming overhead. To avoid your development team being blocked by such organizational waste, it is advisable to put these connectivity setups early on the product backlog as "spikes". When the real development work starts in a later iteration, the connectivity setup has already been given the green light.

7. Focus first on end-to-end

This flowchart explains in depth the rules you can apply to split user stories. Integration scenarios match best with Workflow Steps. This advice is really helpful: "Can you take a thin slice through the workflow first and enhance it with more stories later?". The first focus should be to get it working end-to-end, so that at least some data is exchanged between the source and target application. This can be done with a temporary data contract, with a simplified security model, and without more advanced features like caching, sequence control, duplicate detection, batching, etc…
 
As a real-life example, we recently had the request to expose an internal API that must consume an external API to calculate distances. There were some additional requirements: the responses from the external API must be stored for a period of one month, to save on transaction costs of the external API; authentication must be performed with the identity of the requesting legal entity, so this can be billed separately; and both a synchronous and an asynchronous internal API must be exposed. The responsibility of the product owner is to find the Minimum Viable Product (MVP). In this case, it was a synchronous internal API, without caching and with one fixed identity for the whole organization. During later phases, this API was enhanced with caching, a dynamic identity and an async interface.
 
In some projects, requirements are set in stone upfront and are not subject to negotiation: the interface can only be released to production if all requirements are met. In such cases, it's also a good exercise to find the MVP required for acceptance testing. That way, you can release faster internally, which results in faster feedback from internal testing.

8. Put common non-functionals on the Definition of Done

In middleware solutions, there are often requirements on high performance, high throughput and large message handling. Most of these requirements can be tackled by applying best practices in your development: use a streaming design in order to avoid loading messages entirely in memory, reduce the number of persistence points, cache configuration values wherever it's applicable, etc…
 
It's a good practice to put such development principles on the Definition of Done, to ensure the overall quality of your product. Code reviews should check whether these best practices are applied. Only when specific measures need to be taken to meet exceptional performance criteria is it advisable to list those requirements explicitly as user stories on the product backlog.

"Done" also means: it's tested and can be shipped at any moment. Agree on the required level of test automation: is unit testing (white box) sufficient, do you fully rely on manual acceptance testing or is a minimal level of automated system testing (black box) required? Involve the customer in this decision, as this impacts the team composition, quality and budget. It's also a common practice to ensure automated deployment is in place, so you can release quickly, with a minimal impact. Fantastic to see that team members are challenging each other, during the daily stand-up, to verify if the Definition of Done has been respected.

9. Aim for early acceptance (testing)

In quite a lot of ERP implementations, go-live is performed in a few big phases, preceded by several months of development. Mostly, acceptance testing is planned at the same pace. This means that flows developed at the beginning of the development stage will remain untouched for several months until acceptance testing is executed. One important piece of advice here: acceptance testing should follow the iterative development approach and not the slow-paced go-live schedule.
 
One of the base principles of an agile approach is to get fast feedback: fail fast and cheap. Early acceptance testing will ensure your integrations are evaluated by the end users against the requirements. If possible, also involve operations in this acceptance process: they will be able to provide feedback on the monitoring, alerting and troubleshooting capabilities of your integration solution. This feedback is very useful for optimizing the integration flows and taking these lessons learned into account for subsequent development efforts. This approach can avoid a lot of refactoring afterwards…
 
Testing is not the only way to get feedback. Try to schedule demos on a regular basis, to verify whether you are heading in the right direction. It's very important to adapt the demo to your stakeholders. A demo for operations can be done with technical tools, while explaining all the details about reliability and security. When presenting to functional key users, keep the focus on the business process and the added value that integration brings. Try to include both the source and target application, so they can witness the end result without knowing exactly what is under the hood. If you can demonstrate that you create a customer in one application and it gets synchronised into two other applications within 10 seconds, you have them on your side!

10. Adapt to improve

Continuous improvement is a key to success. This improvement must be reflected on two levels: your product and your team. Let's first consider improvements to the product, of which there are two types. First, there are optimizations derived from direct feedback from your stakeholders. They provide immediate value to your product, which in this case is your integration project. These can be placed on the backlog. Secondly, there are adaptations that result in indirect value, such as refactoring. Refactoring is intended to stabilize the product, improve its maintainability and prepare it for change. It's advisable to only refactor a codebase that is thoroughly tested, to ensure you do not introduce regression bugs.
 
Next to this, it's even more important to challenge the way the team is working and collaborating. Recurring retrospectives are the starting point, but they must result in real actions. Let the development team decide on the subjects they want to improve. Sometimes these can be quick wins: making working agreements about collaboration, communication, code review, etc… Other actions might take more time: improving the development experience, extending the unit testing platform, optimizing the ALM approach. All these actions result in better collaboration, higher productivity and faster release cycles.

I find it quite challenging to deal with such indirect improvements. I used to also place them on the backlog and let the team decide on their priority. We mixed them with backlog items that result in direct business value, in a 90% (direct value) / 10% (indirect value) proportion. The drawback of this approach is that not everyone is involved in the indirect improvements. Another way to tackle this is to reserve one day every two weeks dedicated to such improvements. That way, the whole team is involved in the process, which reinforces the idea of having a self-organized development team.

Hope you've enjoyed this one!

Toon

Categories: Technology
Tags: Integration
Written by: Toon Vanhoutte

Posted on Tuesday, 22 August 2017, 16:45

by Tom Kerkhove

In this third and final article on Azure Event Grid, we'll have a look at how this relates to Azure Service Bus Topics and why they are still relevant.

No, Service Bus Topics are not dead.

I don't think that Azure Service Bus Topics are going away any time soon. Although Azure Event Grid also leverages publish-subscribe capabilities and uses the concept of "Topics", they are not the same.

Here is why:

  • Message Exchange Patterns - Azure Event Grid uses a push model where all events are pushed directly to the Event Handlers. Azure Service Bus Topics, however, use a pull model where the Message Processor actively checks the topic subscription for new messages. This means that the Message Processor can control when and how many messages it wants to process, and thus controls the load it will handle. With Azure Event Grid you don't have that control, so make sure your handlers can cope with the pushed load (a sketch of such a handler follows after this list).

  • Differences in velocity - Since Azure Service Bus Topics use a pull mechanism, the Message Processor is in charge of fetching new messages. The advantage is that it has full control over the pace at which it processes messages. That said, if it can't keep up with the ingestion throughput, the messages will pile up until the maximum size of the topic has been reached. With Azure Event Grid, however, you are no longer in charge, since it pushes the events to the Event Handlers. This means that your Event Handler needs to be capable of handling the load and provide some throttling to protect itself from crashing; Event Grid will retry the delivery anyway.

  • Throughput - Azure Event Grid promises 10 million events per second, per region. This is far more than Azure Service Bus can handle; even if you distribute the load across multiple Service Bus namespaces, which have a soft limit of 100 per Azure subscription, it doesn't come close.

  • Message & Event Sizes - Azure Service Bus supports message sizes up to 256 KB for Basic/Standard or even 1 MB for Premium. While I couldn't find an official limitation on the event size, my guess is that this will be similar to or less than Service Bus Basic, given the throughput they promise. Of course, there is still the Claim Check pattern to bypass these limitations.
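To make the push model tangible, here is a sketch of an HTTP-triggered Azure Function acting as an Event Grid webhook handler: it answers the subscription validation handshake and otherwise acknowledges the pushed events as quickly as possible. The logging and hand-off strategy are illustrative only.

```csharp
// run.csx - a sketch of an Event Grid webhook handler (Azure Functions C# script style)
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    JArray events = JArray.Parse(await req.Content.ReadAsStringAsync());

    foreach (JObject eventGridEvent in events)
    {
        string eventType = (string)eventGridEvent["eventType"];

        // Event Grid validates a new subscription by sending a validation event;
        // the handler must echo the validation code back.
        if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
        {
            string validationCode = (string)eventGridEvent["data"]["validationCode"];
            return req.CreateResponse(HttpStatusCode.OK, new { validationResponse = validationCode });
        }

        // Keep the handler fast: hand the event off (e.g. to a queue) instead of doing heavy work here,
        // because Event Grid keeps pushing regardless of how busy the handler is.
        log.Info($"Received event '{eventGridEvent["id"]}' of type '{eventType}'.");
    }

    return req.CreateResponse(HttpStatusCode.OK);
}
```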

In summary, I think each has its own use cases: I see Service Bus Topics more for fan-out transactional processing at a smaller throughput, while Azure Event Grid is more of an eventing infrastructure that provides higher velocity for near-real-time processing.

As with every technology, you need to compare both and see which one best fits your scenario.

Thanks for reading,

Tom Kerkhove.