Codit Blog

Posted on Thursday, March 31, 2016 3:50 PM

Tom Kerkhove by Tom Kerkhove

Recently I was working on a Service Fabric project where I was using Service Remoting to communicate from one service to another by using the ServiceProxy.

Unfortunately it caused an "Interface id -103369040 is not implemented by object Codit.MyOtherService" exception.

Here is how I fixed it.

While refactoring my Service Fabric services I got the following exception over and over again:

Interface id -103369040 is not implemented by object Codit.MyOtherService

The exception was caused when I was running the following code:
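The original code listing is no longer embedded here, but the call looked roughly like the sketch below. The interface, operation and variable names are illustrative, not the actual ones from the project:

    using System;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Remoting.Client;

    // Sketch only: IMyService and GetSomethingAsync are placeholders for the real
    // remoting interface (deriving from IService) and the newly added operation.
    public class MyServiceCaller
    {
        public async Task<string> CallMyServiceAsync(Uri configuredServiceUri)
        {
            // Create a remoting proxy for IMyService and invoke the new operation.
            IMyService proxy = ServiceProxy.Create<IMyService>(configuredServiceUri);
            return await proxy.GetSomethingAsync();
        }
    }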

During the refactoring I added an additional operation to the IMyService interface, but apparently the runtime was looking for it in my IMyOtherService implementation. Odd!

During debugging I noticed that the problem was in the configuration of the service that was initiating the call through the proxy. The "culprit" lies in the following line:
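That line itself looked perfectly innocent, roughly like this (a sketch; the section and parameter names are hypothetical):

    // The service URI comes straight out of Settings.xml, so this code reads fine at a glance.
    // "RemotingConfig" and "MyServiceUri" are hypothetical section/parameter names.
    var configPackage = FabricRuntime.GetActivationContext()
        .GetConfigurationPackageObject("Config");

    var serviceUri = new Uri(configPackage.Settings
        .Sections["RemotingConfig"]
        .Parameters["MyServiceUri"].Value);

    var proxy = ServiceProxy.Create<IMyService>(serviceUri);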

Do you see it? Neither did I, because the problem was in the Settings.xml of the service: the configured URI was fabric:/Codit.Demo/MyOtherService instead of fabric:/Codit.Demo/MyService. This caused the runtime to attempt to call a method on a service that implements IMyOtherService rather than IMyService.
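In Settings.xml terms, the fix boils down to something like this (section and parameter names are again hypothetical; the two URIs are the ones from above):

    <!-- Settings.xml of the calling service -->
    <Section Name="RemotingConfig">
      <!-- Wrong: points at the service that implements IMyOtherService -->
      <!-- <Parameter Name="MyServiceUri" Value="fabric:/Codit.Demo/MyOtherService" /> -->
      <!-- Correct -->
      <Parameter Name="MyServiceUri" Value="fabric:/Codit.Demo/MyService" />
    </Section>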

While this seems like a stupid mistake (and it is), it took me a while to notice it. What I once again learned is that the key to success is in your logging: log enough to know what's going on, but don't overdo it.

In my case it's a good idea to add an ETW entry that logs the configured endpoint I'm remoting to, so I can detect this kind of misconfiguration earlier in the future.
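A minimal sketch of what that could look like, assuming the ServiceEventSource class that the Visual Studio Service Fabric template generates:

    // Log the endpoint we are about to remote to, so a wrong URI shows up in the ETW traces.
    // ServiceEventSource.Current.Message is the stock event from the project template;
    // replace it with a dedicated event in your own event source if you prefer.
    ServiceEventSource.Current.Message(
        "Creating remoting proxy for {0} against {1}",
        typeof(IMyService).Name,
        serviceUri);

    var proxy = ServiceProxy.Create<IMyService>(serviceUri);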

Thanks for reading,

Tom.

Categories: Azure
written by: Tom Kerkhove

Posted on Monday, March 21, 2016 11:48 AM

Luis Delgado by Luis Delgado

At Codit, we help customers envision, design and implement solutions focused on Internet of Things (IoT) initiatives. As part of this work, I've realized that organizations investing in IoT initiatives typically walk through a path, which I will call the "IoT Maturity Levels".

 

Maturity levels are important because they provide organizations with a goal-oriented path, so they can measure progress and celebrate small successes on their way to a greater goal. They also matter because, as experience shows, it is best for organizations to progress through consecutive maturity levels rather than to swallow a super-complex project all at once. Violent and ambitious jumps in maturity typically fail due to organizational change resistance, immature operational procedures, and deficient governance practices. It is better to have a solid hand on a maturity level before venturing into the next one.

Here are the 4 maturity levels of IoT, and what they mean for organizations:

Level 1: Data Generation and Ingestion

What is it about: In level 1, organizations begin projects to generate and collect IoT data. This involves coupling their services or products with devices that capture data and gateways to transmit that data, implementing data ingestion pipelines to absorb that data, and storing that data for later use. 

What it means: at this point, companies are finally in a position to generate data and collect it. Data generation is the key building block of IoT, and the first maturity level is aimed at getting your hands on data. Typically, the hardest parts are the devices themselves: securely capturing and transmitting the data, managing those devices in the field, solving connectivity issues, and building a pipeline that scales to serve many devices.

Level 2: First Analytics

What is it about: once armed with data, companies will typically try to derive some value out of it. These are initially ad-hoc, exploratory efforts. Some companies might already have a developed concept about how they will use the data, while others will need to embark on exploring the data to find useful surprises. For example, data analysts / scientists will start connecting to the data with mainstream tools like Excel and Power BI and start exploring.

What it means: companies are now able to start extracting value from the data generated. This will mostly be manual work done by functional experts or data analysts. At this stage, the organization derives its first value from the data.

Level 3: Deep Learning

What is it about: the organization recognizes that the data is far larger and more valuable than manual analysis can handle, and starts investing in technology that can automatically extract insights from the data. These are typically investments in deep learning, machine learning or streaming analytics. Whereas the value of the data in Level 2 was extracted through the manual work of highly-skilled experts, the value of the data in Level 3 is extracted automatically by sophisticated algorithms, statistical models and stochastic process modeling.

What it means: the organization is able to scale the value of its data, as it is not dependent anymore on the manual work of data analysts. More data can be analyzed in many more different ways, in less time. The insights gained might be more profound, due to the sophistication of the analysis, which can be applied to gigantic data sets with ease.

Level 4: Autonomous Decision Making

What is it about: the deep learning and analytical models, along with their accuracy and reliability, are solid upon exiting Level 3. The organization is now in a position to trust these models to make automated decisions. In Level 3, the insights derived from deep learning are mostly used as input for pattern analysis, reporting dashboards and management decision-making. In Level 4, the output of deep learning is used to trigger autonomous operational actions.

What it means: in Level 4, the deep learning engine of the organization is integrated with its operational systems. The deep learning engine will trigger actions in the ERP (e.g. automatic orders to replenish inventory), LoB systems (remote control of field devices via intelligent bi-directional communication), the CRM (triggering personalized sales and marketing actions based on individual customer behavior), or any other system that interacts with customers, suppliers or internal staff. These actions will require no human intervention, or at most minimal human supervision or approvals to be executed.

Do you need to go all the way up to Level 4?
Not necessarily. How far you need to invest in the maturity of your IoT stack depends on the business case for such an investment. The true impact of IoT, and what business value it might bring, is very hard to gauge at Day 0. It is best to start with smaller steps by developing innovative business models, rapidly prototyping them, and making smaller investments to explore whether an IoT-powered business model is viable or not. Make larger commitments only as follow-ups to previous successes. This allows you to fail fast with minimal pain if your proposed business model turns out to be wrong, to adapt the model as you learn through iterations, and to let your team celebrate smaller successes on your IoT journey.

Categories: Architecture
Tags: IoT
written by: Luis Delgado

Posted on Thursday, March 17, 2016 12:00 AM

Lex Hegt by Lex Hegt

BizTalk360 is becoming increasingly popular as a monitoring and management tool for organizations that use BizTalk. And there’s good news for BizTalk360 users everywhere: You can now extend the BizTalk Deployment Framework (BTDF) to automatically deploy your BizTalk360 alerts!

As you probably know, it is considered a best practice to use the BizTalk Deployment Framework for automatic deployment of your BizTalk solutions. This framework doesn't only allow for deployment of BizTalk artifacts, it also enables you to perform all kinds of custom tasks to automate the deployment of your BizTalk application, thereby reducing the risk of error that is introduced when you deploy manually. Until recently it wasn't possible to automate the deployment of BizTalk360 alerts with BTDF, but it is now…

BizTalk360 API

As of BizTalk360 v8, which was released in early February 2016, a documented API is available, enabling developers to build their own custom solutions on top of BizTalk360. With this JSON-based API you can operate both BizTalk360 and BizTalk itself.

The API consists of the following services:

  • ActivityMonitoringService – APIs for Business Activity Monitoring
  • AdminService – APIs related to admin (super user) activities in BizTalk360
  • AdvancedEventViewerService – APIs to retrieve details from the Advanced Event Viewer
  • AlertService – APIs used for monitoring in BizTalk360
  • BizTalkApplicationService – APIs to retrieve information about BizTalk applications
  • BizTalkGroupService – APIs to get details about the artifacts in a BizTalk group
  • BizTalkQueryService – APIs to run queries against BizTalk
  • EDIManagementService – APIs to run EDI queries and to get details about parties and agreements
  • EnvironmentMgmtService – APIs related to environment management in BizTalk360
  • ESBManagementService – APIs to retrieve details about the Enterprise Service Bus

Combined, these services contain many operations, which are used by BizTalk360 itself, but can also be used in custom applications. In this blog post I will describe an example of such a custom application, which automatically creates a BizTalk360 Alert during deployment of a BizTalk application.

If you have a BizTalk360 v8 Platinum, Partner or Product Specialist license, or a Trial license, and you want to find out more about the BizTalk360 API, just navigate to Settings / API Documentation. As you've probably come to expect from a modern API, you can try each operation directly from the API Documentation.

Note: although you will need one of the above licenses to be able to access the API Documentation, you don’t need a specific license to be able to actually use the API itself!

Deployment of BizTalk applications

In my experience, most organizations that use BizTalk Server as their middleware platform, also use BTDF to deploy their BizTalk solutions. The most important reason to use this open source software over the out-of-the-box capabilities that BizTalk Server offers, is that BTDF enables you to minimize the (sometimes large) number of manual steps required to deploy a full integration solution, thereby significantly reducing or even fully eliminating the risk of human error. Think single-click deployments (your admins will love you for this)!

An important component of BTDF is the SettingsFileGenerator.xml file. In this file, which can be edited with Microsoft Excel, you can define all of the environment-specific settings for each environment in a DTAP street. An example of this type of setting is the URL of a Send Port, which will typically differ between a production environment and a test environment.

Besides being able to deploy all kinds of BizTalk artifacts, the open architecture of BTDF enables you to execute SQL scripts, deploy web services in IIS and much more. (I actually wrote a blog post a while ago about how PL/SQL scripts can be executed during the deployment of a BizTalk application.)

BT360Deploy

Now that we have a flexible deployment framework with BTDF and a documented BizTalk360 API, it's only a small step to create a tool that takes advantage of this powerful combination and creates a BizTalk360 Alert for a BizTalk application during a deployment with BTDF! All this piece of software needs to be capable of doing is reading the environment-specific settings from the BTDF settings file and calling the BizTalk360 API to create that alert. And the best news is: we've already done that for you and we're offering it for free on CodePlex!

It’s called BT360Deploy, and it works as follows:

BT360Deploy takes 2 arguments:
  • -a: the name of the BizTalk application that is being deployed
  • -s: the name and location of the (environment-specific) settings file, which contains the BizTalk360 parameters

Example:

BT360Deploy -aAxonOlympus.BTDF -sC:\Data\Development\BizTalk\AxonOlympus.BTDF\Deployment\EnvironmentSettings\Exported_LocalSettings.xml

Below you see the output of the deployment of a BTDF project, in which BT360Deploy was incorporated, with the parameters as shown above:

To be able to create BizTalk360 Alerts during deployment, you need to know which fields in the BizTalk360 User Interface correspond with which fields in the BTDF Settings file. For that purpose we created a document which describes that mapping. Below you will find a sample with a couple of fields. The full document can be downloaded from CodePlex, as part of the BT360Deploy package.

Screen/Field in BizTalk360 User Interface        Field in BTDF Settings file

Alarm – Basic / Basic Details
  Alarm Name                                     BizTalk360_alertName
  Email Ids                                      BizTalk360_commaSeparatedEmails
  Disable Alarm for Maintenance                  BizTalk360_isAlertDisabled

Alarm – Threshold / Threshold Alert
  Alert on threshold violation                   BizTalk360_isAlertASAP
  If violation persists for                      BizTalk360_alertASAPWaitDurationInMinutes
  Limit number of alerts per violation to        BizTalk360_isContinuousErrorRestricted
  Notify when things become normal again         BizTalk360_isAlertOnCorrection
  Set Alerts on set day(s) and time(s) only      BizTalk360_isThresholdRestricted
  Days                                           BizTalk360_thresholdDaysOfWeek
  Start Time                                     BizTalk360_thresholdRestrictStartTime
  End Time                                       BizTalk360_thresholdRestrictEndTime
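For illustration, in an exported BTDF settings file these parameters simply end up as plain properties, roughly as sketched below (the values are made up):

    <?xml version="1.0" encoding="utf-8"?>
    <settings>
      <!-- Hypothetical values; the property names follow the mapping above -->
      <property name="BizTalk360_alertName">AxonOlympus.BTDF</property>
      <property name="BizTalk360_commaSeparatedEmails">support@example.com</property>
      <property name="BizTalk360_isAlertDisabled">false</property>
      <property name="BizTalk360_isAlertASAP">true</property>
      <property name="BizTalk360_alertASAPWaitDurationInMinutes">10</property>
    </settings>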

Are you interested in creating your BizTalk360 Alerts during deployment, instead of having to create them manually afterwards? Check out the following link on CodePlex to get your hands on it:

http://bt360deploy.codeplex.com

Besides some binaries, the download contains the earlier mentioned mapping document and a ‘How to…’ document which describes how you can incorporate BT360Deploy in your BTDF project.
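The 'How to…' document covers the details, but the integration typically boils down to invoking BT360Deploy from a custom post-deployment step in your .btdfproj, along the lines of the sketch below (the target name, paths and properties are assumptions; adapt them to your own project):

    <!-- Sketch: run BT360Deploy after BTDF has deployed the BizTalk application.
         CustomPostDeployTarget is BTDF's hook for post-deployment steps; the paths to
         BT360Deploy.exe and the exported settings file are placeholders. -->
    <Target Name="CustomPostDeployTarget">
      <Exec Command="..\Tools\BT360Deploy.exe -a$(ProjectName) -s..\EnvironmentSettings\Exported_LocalSettings.xml" />
    </Target>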
Enjoy!

Categories: BizTalk
written by: Lex Hegt

Posted on Thursday, March 10, 2016 3:38 PM

Massimo Crippa by Massimo Crippa

With the latest Azure API Management service update, the Git integration has been rolled out. In this post we will see how we can seamlessly control the versions of the proxy configuration and move it between different environments.

Scenario

All the configuration that is applied to the proxy, as well as the portal customizations, is stored in a database in the provisioned tenant. Now, every APIM tenant can expose a public Git endpoint from which we can pull the configuration down to our local (or remote) Git repository.

Once we have our local version, we can apply the changes we need, push them back to the tenant repository and then deploy them to the APIM database.

The diagram below shows the steps of our scenario:

  1. Save (sync from the APIM repository to the tenant Git)
  2. Pull (from the tenant Git to the local repo)
  3. Apply changes (on our local repo)
  4. Push (to the tenant Git)
  5. Deploy (from the tenant Git to the APIM repository)

 

The Git integration is not enabled by default, so first we have to connect to the management portal, go to the Security area and enable Git access.

Save and Pull

The next step is to save the proxy configuration to the APIM tenant's Git repository. This operation can be done in two different ways:

  • Use the “Save Configuration to Repository” button on the APIM Admin Portal.
  • Call the “Save” operation of the API Management REST API (here how to enable the REST API).

In both cases you have to specify the branch name to save the configuration to, and whether or not to override changes in newer check-ins. This operation can take a couple of minutes.
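For reference, the REST variant of the Save operation looks much like the Deploy call shown later in this post. This is a sketch based on the same 2014-02-14-preview Configuration API; treat the exact address and body as an assumption and check the API documentation in your tenant:

Method : POST
Address : https://{tenantname}.management.azure-api.net/configuration/save?api-version=2014-02-14-preview
Headers :
 + Content-Type > application/json
 + Authorization > SharedAccessSignature=....
Body :
    {"branch":"master"}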

Once completed, you can open a Git console and create a working copy of the remote repository by using the clone command.

Before that, you need to get a temporary password (in the Admin Portal) to access the remote repository.

 

Then run the “git clone https://coditapi.scm.azure-api.net/” command and specify “apim” as the username and the temporary password we got in the previous step.

Below is the folder structure of the local repository. As you can see, the proxy configuration (apis, policies, security groups and products) is exported along with the developer portal customizations.

If a new configuration has been saved to the APIM repo in the meantime, we can pull it down with the "git pull" command.

Apply a change

Let's imagine we want to change the policy applied to the Echo API to extend the existing basic round robin algorithm.

The policy is applied at API scope so the file to be edited is policies/apis/Echo_API.xml 

This is the result of the "git diff" command after the change.

Now, in order to add the change to the Git staging area, use the “git add -A” command and then commit the changes with "git commit -m", as in the picture below.
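In plain commands (the commit message is illustrative):

    git add -A
    git commit -m "Extend the Echo API round robin policy"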

Now we’re ready to push our changes to the Azure API Management Git repo.

Push and deploy

Type “git push” to sync the changes with the repository on our API Management tenant.

The final step is to deploy our configuration from the tenant repository to the APIM proxy.

This operation can be done in two different ways:

  • Use the “Deploy Repository Configuration” button on the APIM Admin Portal.
  • Call the “Deploy” operation of the API Management REST API (here how to enable the REST API).

For this step I'm going to invoke the Configuration REST API using Postman. Here are the details of my API call.

Method : POST
Address : http://{tenantname}.management.azure-api.net/configuration/deploy?api-version=2014-02-14-preview
Headers :
 + Content-Type > application/json
 + Authorization > SharedAccessSignature=....
Body :
    {"branch":"master"}

 

As a response I got a 202 (Accepted) and a Location header with a link to check the status of this transaction.

The operationResults operation can then be used to check the status (Failed, InProgress, Succeeded) of the deploy. It's a GET and again we must specify the Authorization header, as in the previous call.
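A sketch of that status call, using the URL returned in the Location header of the deploy response:

Method : GET
Address : {the URL from the Location header of the deploy call}
Headers :
 + Authorization > SharedAccessSignature=....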

If the deploy succeeded, the changes are immediately applied to the proxy. 

Restore a configuration

Now, imagine that you've applied a wrong configuration to the proxy and you want to restore a previous version from your local git repository. For example, these are the timestamps of my configurations:

  • On the proxy: updated at 10:22 AM
  • On the tenant repo: updated at 10:18 AM
  • On the local repo: updated at 08:19 AM

I want to discard the 10:18 AM version and replace the 10:22 AM version with the 08:19 AM one. It's a four-step procedure. 

A) The first thing to do is to bring the tenant repo in sync with the proxy. This step is necessary to mark the proxy as synced. Without the sync you will get this error as a result of the deploy operation: "Deployment operation failed due to invalid data: Snapshot operation is not safe.  Latest sync date: '2015-11-10T10:18:35.5278745'; latest configuration update date: '2015-11-10T10:22:01.7635694'"

B) Apply a modification to the local repo and commit it. This is necessary so the deploy procedure can recognize that there is something to be overwritten. 

C) Run the "git push -f origin master" command to overwrite the version in the tenant Git.

D) Deploy the configuration using the Admin Portal or via the REST API.

Conclusion

The Git integration is a feature that customers have been asking about for a while. Now you can create and manage different versions of your proxy configurations and move them between different environments.

Cheers,

Massimo

Categories: Azure
written by: Massimo Crippa

Posted on Saturday, February 27, 2016 12:00 AM

Luis Delgado by Luis Delgado

Micro-services architectures are gaining popularity as a software architectural pattern. There are many aspects to think about when considering a micro-services architecture: scalability is one of them. Let's contrast how micro-services scale with the common alternatives.

Vertical scalability

With vertical scalability, you scale the capacity of your application by increasing hardware capacity. This is why it is called "vertical": you add more CPU, more memory, more disk IOPS, but the architecture of your app and infrastructure does not change. This is a viable scalability pattern, but it has an obvious hard stop: there is only so much hardware you can add.

Horizontal scalability

With horizontal scalability, instead of scaling up by adding more hardware capacity, you architect your application so that it scales out by adding more instances of it. This can be accomplished by adding more VMs with the application installed, more application instances inside a cloud service, or more containers... you get the idea. You don't need beefy, expensive hardware for horizontal scalability; you can get by with small machines and add many of them. This scalability pattern usually requires adjustments in the application architecture. For example, since a client request may be served by any machine (out of many), the application typically has to be stateless, or if state is needed, it needs to be stored somewhere else.

Scalability with micro-services

While the concept of scaling horizontally seems appealing, remember that every instance you "clone" to scale horizontally is running a complete copy of your application. This might be undesirable, as different parts of your application might have different scalability needs. Typically, the load of an application is not evenly distributed among all the services it provides. For example, a careful analysis of telemetry data might show that the bottleneck in your application is the authentication services, but all other services inside your app are performing well. If you scale horizontally, you will be scaling out the authentication services... along with everything else that does not need to be scaled out. This is a waste of resources.

A micro-services architecture takes an application and splits it into independent, working, functional units, called "services". Don't be misled by the word "micro" in "micro-services": the split of services does not need to be microscopic. You can split the services within your application in any arbitrary way you want. Typically, the more atomic the services are, the more value you will get from this architecture, but that need not be the case every time.

With that out of the way, let's go back to our example. With a micro-services architecture, your app will run different services as independent units, each with its own runtime, codebase, processing thread(s), etc. Since the bottleneck in our app is the authentication routine, you can scale out that service only, and leave the rest alone. With micro-services, you make a more effective use of the horizontal scalability pattern.

When considering a micro-services architecture, there are many more factors to analyse beyond scalability. But mixing the micro-services architecture with horizontal scalability typically gives you better capacity elasticity than using a monolithic architecture.

Categories: Architecture
Tags: Scalability
written by: Luis Delgado