Codit Blog

Posted on Thursday, April 7, 2016 5:27 PM

by Tom Kerkhove

Last week Microsoft held its annual //BUILD/ developer conference in San Francisco, with a lot of announcements ranging from Windows 10 to Office to Azure and beyond.

Let's walk through some of the announcements that got me excited!

Announcing Azure Functions, Microsoft's AWS Lambda competitor

Azure Functions (Preview) is the long-awaited competitor to AWS Lambda, allowing you to run small pieces of code and pay only for what you use. It uses an event-driven model in which you connect data sources - from the cloud or on-premises - and (re)act on certain events in an easy-to-use way.

You can either choose to use Continuous Deployment & Integration or use the interactive portal to visualize the Triggers, Inputs & Outputs of your functions.

You can write functions in a variety of languages, going from Node.js & Python to Bash & PowerShell to C# and others; they even support pre-compiled executables. Here is a small example of code used in a Function.
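
The original snippet was embedded as an image, so here is a minimal sketch instead, assuming a C# function bound to a Service Bus queue trigger (the binding itself lives in function.json and the names are illustrative):

using System;

// run.csx - runs whenever a new message arrives on the bound Service Bus queue.
// The queue name and connection string are defined in the function's bindings;
// the message is treated as plain text here for simplicity.
public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Processed Service Bus message: {myQueueItem}");
}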

People who have been working with Azure WebJobs will see some similarities, but the differentiator here is that with Azure Functions you only pay for the compute you use, while with Azure WebJobs you run on an App Service Plan that is billed per hour.

Azure Functions provides a variety of ways to trigger your functions - timer-based, webhooks, events from other services (e.g. a message arriving on a Service Bus queue), and so on - allowing you to use them in a wide variety of scenarios.

From an integration/IoT perspective this is a very nice service that we can combine with other services: we could react to events in an on-premises SQL database and trigger processes in the cloud, or trigger a function from within a Logic App as a substep of a business process, and so on.

Interested in knowing how it works under the hood? Check out this //BUILD/ session!

Here is a nice comparison between Azure Functions & AWS Lambda by Tom Maiaroto.

But keep in mind - This is only a preview!

Extended Device Management in Azure IoT Hub

Microsoft announced that Azure IoT Hub will get extended Device Management features in the near future, enabling us to more easily manage our devices, perform health checks, organise devices into groups, and so on, by exposing several server-side APIs:

  • Device Registry Manager API
  • Device Groups API
  • Device Query API
  • Device Model API
  • Device Job API

What I personally like the most is that I can now define the information model of devices & entities, taking management a step further. In the past a device ID was only linked to access keys, without any metadata - those days are over!

Announcing Azure IoT Gateway SDK

Following Microsoft's "Internet of Your Things" vision, they've announced the Azure IoT Gateway SDK, which helps developers & ISVs build flexible field gateways where they can implement edge intelligence to process data before it is even sent to the cloud. This allows us, for example, to encrypt our data before sending it over the wire to improve the security of our solutions.

This is really great because it allows us to save cost/time on the gateway part and focus on connecting our devices to the gateway or analysing & processing our data in the cloud!

Cortana Analytics Suite is no more, meet Cortana Intelligence Suite!

Last year Microsoft announced the Cortana Analytics Suite, its flagship for building intelligent applications in the cloud or on devices based on (big) data analytics.

At //BUILD/ Microsoft took it a step further and rebranded the Cortana Analytics Suite to Cortana Intelligence Suite!

Next to Cortana, the Cortana Intelligence Suite also adds two new "intelligence" features/services:

  • Microsoft Bot Framework enables you to create your own intelligent agents, or bots, to use in your applications and make them feel more natural. After taking a quick look, it feels like the idea is to create a Web API that is deployed to Azure as an API App.
  • Project Oxford is now being offered as a service called Azure Cognitive Services (Preview). It is a collection of APIs-as-a-Service that enables you to make your applications more intelligent and contains the following APIs at the moment:
    • Language - Web Language Model, Text Analytics & Language Understanding Intelligent Service API
    • Vision - Face & Emotion API
    • Knowledge - Recommendation API
    • Speech - Speech API

Want to become intelligent yourself? Read more about Azure Cognitive Services here and here.

Azure Data Catalog is now Generally Available

Azure's enterprise-grade metadata catalog, Azure Data Catalog, is now Generally Available! Data Catalog stores, describes, indexes, and shows how to access any registered data asset. It enables collaboration on data within the organization and makes data discovery super easy.

With the new pricing, the limitation on the maximum number of users in the Free plan is gone, and it goes without saying that you really need a catalog when you're working with several data sources, certainly in an IoT solution.

Read the official announcement by Julie Strauss here.

Announcing Azure Power BI Embedded

Microsoft introduced Azure Power BI Embedded, a service that allows you to use embedded interactive visuals in your apps & websites. This allows you to use Power BI Desktop to create reports without having to write any code for the visualization or reporting in your app.

However, it's not 100% clear to me how Power BI Embedded relates to Power BI - is the vision for Embedded to focus on the end user and save developer time, while Power BI is focused on internal usage & data engineers? To be continued...

Here is a small introduction on Azure Power BI Embedded and how you authenticate it against a back-end.

Announcing Azure Storage Service Encryption preview

All new Azure Storage accounts created with Azure Resource Manager now have the possibility to enable Azure Storage Service Encryption (preview). Azure Storage Service Encryption encrypts all your Blob Storage data at rest using the AES-256 algorithm.

You don't need to do anything as this is a managed service where Microsoft will manage the complete process.

Read the full announcement here.

Partitioned collections in DocumentDB across the globe

The DocumentDB team has made several announcements about their service, so let's have a look!

For starters, they have a new pricing model that separates the billing for storage from throughput: your indexed storage is billed per GB stored per hour, while throughput is billed based on the request units (RU) you've reserved per hour.

With the new Global Databases you can take it a step further and replicate your data from one region to several others, allowing you to move your data as close to the consumer as possible. This improves the availability of your application and offers a fail-over mechanism.

DocumentDB Global Databases is currently in public preview.

When creating a new DocumentDB collection, you now have the option to create a single-partition or a partitioned collection. A partitioned collection allows you to specify a partition key, enabling you to store up to 250 GB of data and use up to 250,000 request units per second, or even more by filing a support ticket.
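
As a rough sketch of what that looks like with the DocumentDB .NET SDK (the database name, partition key path and throughput value below are illustrative, not prescribed):

using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class PartitionedCollectionSample
{
    public static async Task CreateCollectionAsync(DocumentClient client)
    {
        // Define a collection partitioned on "/deviceId" (illustrative partition key).
        var collection = new DocumentCollection { Id = "telemetry" };
        collection.PartitionKey = new PartitionKeyDefinition
        {
            Paths = new Collection<string> { "/deviceId" }
        };

        // Reserve throughput (request units per second) for the partitioned collection.
        await client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri("iotdb"),
            collection,
            new RequestOptions { OfferThroughput = 20000 });
    }
}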

Last, but not least - DocumentDB now supports the MongoDB APIs & drivers, allowing you to use your existing MongoDB skills & tools to work with DocumentDB. Because of this you can now use Parse in Azure with DocumentDB.
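
In practice that means an existing MongoDB client simply points at the DocumentDB account. A small sketch with the .NET MongoDB driver (the connection string is the MongoDB-compatible one shown for your account in the Azure portal, and the database/collection names are made up):

using MongoDB.Bson;
using MongoDB.Driver;

public static class MongoCompatibilitySample
{
    public static void InsertDevice(string connectionStringFromPortal)
    {
        // Standard MongoDB driver usage - nothing DocumentDB-specific in the code itself.
        var client = new MongoClient(connectionStringFromPortal);
        var database = client.GetDatabase("iotdb");
        var devices = database.GetCollection<BsonDocument>("devices");

        devices.InsertOne(new BsonDocument { { "deviceId", "device-001" }, { "status", "online" } });
    }
}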

Here are some additional resources:

Service Fabric goes Generally Available with previews of Windows Server & Linux support

Service Fabric is now Generally Available and ready to use in production on Azure! Using Service Fabric is free of charge; however, you'll need to pay for the compute, network & storage that you are using.

For those who missed last year's announcement - Service Fabric is a microservice application platform that allows you to build reliable services & actors in a distributed way. The platform handles application updates/upgrades for you out-of-the-box and is heavily used inside Microsoft, with internal customers such as Azure SQL Database, Azure DocumentDB, Intune, Cortana and Skype for Business.

Microsoft also announced the public preview of standalone Service Fabric on Windows Server, allowing you to use Service Fabric on-premises or in other clouds. Next to Windows Server, it will also be available on Linux, starting with a private preview.

Last, but not least - the runtime has also been improved and the GA SDK is available. Notably, you can now also debug a cluster in Azure from within Visual Studio.

I bet you'd love to read more! Read more about these announcements & the improved development experience here, or learn more about Service Fabric itself here.

But wait, there is more!

Here are some small tips/reminders:

  • Azure App Service Advisor now monitors your App Service Plan, giving you recommendations on resources, e.g. to scale out to provide more resources & keep running smoothly. This feature is enabled by default as of last week. Check out this Azure Friday episode if you want to learn more.
  • MyDriving is an Azure IoT & Mobile sample that uses Azure services to build a scalable, performant, highly available, cross-platform IoT service and application. The sample comes with a guide of roughly 150 pages on how they've built it. Read more here if you want to learn more about it.
  • A small reminder that Azure Managed Cache Service & Azure In-Role Cache will be retired on November 30, 2016.

Still want more? Don't forget to browse all the recordings of the event here.

Thanks for reading,

Tom.

Categories: Community
written by: Tom Kerkhove

Posted on Thursday, March 31, 2016 3:50 PM

by Tom Kerkhove

Recently I was working on a Service Fabric project where I was using Service Remoting to communicate from one service to another by using the ServiceProxy.

Unfortunately it caused an "Interface id -103369040 is not implemented by object Codit.MyOtherService" exception.

Here is how I fixed it.

While refactoring my Service Fabric services I got the following exception over and over again:

Interface id -103369040 is not implemented by object Codit.MyOtherService

The exception was caused when I was running the following code:
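
The original snippet was embedded as an image; as a minimal sketch, assuming a hypothetical remoting contract IMyService and a service URI read from the calling service's Settings.xml, the call looked roughly like this:

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical remoting contract - the real interface and operation names differ.
public interface IMyService : IService
{
    Task<int> GetCustomerCountAsync();
}

public class MyServiceCaller
{
    public async Task<int> CallAsync(Uri configuredServiceUri)
    {
        // configuredServiceUri is read from the calling service's Settings.xml.
        IMyService proxy = ServiceProxy.Create<IMyService>(configuredServiceUri);

        // This is where the "Interface id ... is not implemented by object ..." exception surfaced.
        return await proxy.GetCustomerCountAsync();
    }
}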

During the refactoring I had added an additional operation to IMyService, but apparently the runtime was looking for it on my IMyOtherService implementation. Odd!

During debugging I noticed that the problem was in the configuration of the service that was initiating the call through the proxy. The "culprit" lies in the following line:
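
In my case it was the proxy creation using the configured endpoint (same illustrative names as in the sketch above, with configuredServiceUri holding the value read from Settings.xml):

IMyService proxy = ServiceProxy.Create<IMyService>(configuredServiceUri);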

Do you see it? Neither did I, because the problem was in the Settings.xml of the service: the configured URI I was using was fabric:/Codit.Demo/MyOtherService instead of fabric:/Codit.Demo/MyService. This caused the runtime to attempt to call a method on a service implementation that didn't implement IMyService but implemented IMyOtherService instead.

While this seems like a stupid mistake - and it is - it took me a while to notice it. What I once again learned is that the key to success is in your logging - log enough to know what's going on, but don't overdo it.

In my case it's a good idea to add an ETW entry with the configured endpoint I'm remoting to, so I can detect this kind of misconfiguration earlier in the future.

Thanks for reading,

Tom.

Categories: Azure
written by: Tom Kerkhove

Posted on Monday, March 21, 2016 11:48 AM

by Luis Delgado

At Codit, we help customers envision, design and implement solutions focused on Internet of Things (IoT) initiatives. As part of this work, I've realized that organizations investing in IoT initiatives typically walk through a path, which I will call the "IoT Maturity Levels".

 

Maturity levels are important because they provide organizations with a goal-oriented path, so they can measure progress and celebrate small successes on their way to a greater goal. They are important because, as experience shows, it is best for organizations to progress through consecutive maturity levels rather than to simply try to swallow a super-complex project at once. Violent and ambitious jumps in maturity typically fail due to organizational change resistance, immature operational procedures, and deficient governance practices. It is better to have a solid hand on a maturity level before adventuring into the next one.

Here are the 4 maturity levels of IoT, and what they mean for organizations:

Level 1: Data Generation and Ingestion

What is it about: In level 1, organizations begin projects to generate and collect IoT data. This involves coupling their services or products with devices that capture data and gateways to transmit that data, implementing data ingestion pipelines to absorb that data, and storing that data for later use. 

What it means: at this point, companies are finally in a position to generate data and collect it. Data generation is the key building block of IoT, and the first maturity level is aimed at getting your hands on data. Typically, the hardest parts are the devices themselves: how to securely capture and transmit the data, how to manage those devices in the field, how to solve connectivity issues, and how to build a pipeline that scales to serve many devices.

Level 2: First Analytics

What is it about: once armed with data, companies will typically try to derive some value out of it. These are initially ad-hoc, exploratory efforts. Some companies might already have a developed concept of how they will use the data, while others will need to embark on exploring the data to find useful surprises. For example, data analysts / scientists will start connecting to the data with mainstream tools like Excel and Power BI and start exploring.

What it means: Companies might be able to start extracting value from the data generated. This will mostly be manual efforts done by functional experts or data analysts. At this stage, the organization starts to derive initial value from the data.

Level 3: Deep Learning

What is it about: the organization recognizes that the data is much more valuable and voluminous than manual analysis permits, and starts investing in technology that can automatically extract insights from the data. These are typically investments in deep learning, machine learning or streaming analytics. Whereas the value of the data in Level 2 was extracted from the manual work of highly skilled experts, the value of the data in Level 3 is extracted automatically by sophisticated algorithms, statistical models and stochastic process modeling.

What it means: the organization is able to scale the value of its data, as it is not dependent anymore on the manual work of data analysts. More data can be analyzed in many more different ways, in less time. The insights gained might be more profound, due to the sophistication of the analysis, which can be applied to gigantic data sets with ease.

Level 4: Autonomous Decision Making

What is it about: the deep learning and analytical models, along with their accuracy and reliability, are solid upon exiting Level 3. The organization is now in a position to trust these models to make automated decisions. In Level 3, the insights derived from deep learning are mostly used as input for pattern analysis, reporting dashboards and management decision-making. In Level 4, the output of deep learning is used to trigger autonomous operational actions.

What it means: in Level 4, the deep learning engine of the organization is integrated with its operational systems. The deep learning engine will trigger actions in the ERP (e.g. automatic orders to replenish inventory), LoB systems (remote control of field devices via intelligent bi-directional communication), the CRM (triggering personalized sales and marketing actions based on individual customer behavior), or any other system that interacts with customers, suppliers or internal staff. These actions will require no human intervention, or at least only minimal human supervision or approvals to be executed.

Do you need to go all the way up to Level 4?
Not necessarily. How far you need to invest in the maturity of your IoT stack depends on the business case for such an investment. The true impact of IoT, and the business value it might bring, is very hard to gauge at day 0. It is best to start with smaller steps by developing innovative business models, rapidly prototyping them, and making smaller investments to explore whether an IoT-powered business model is viable or not. Make larger commitments only as follow-up steps from previous successes. This allows you to fail fast with minimal pain if your proposed business model turns out to be wrong, to adapt the model as you learn through iterations, and to let your team celebrate smaller successes on your IoT journey.

Categories: Architecture
Tags: IoT
written by: Luis Delgado

Posted on Thursday, March 17, 2016 12:00 AM

by Lex Hegt

BizTalk360 is becoming increasingly popular as a monitoring and management tool for organizations that use BizTalk. And there’s good news for BizTalk360 users everywhere: You can now extend the BizTalk Deployment Framework (BTDF) to automatically deploy your BizTalk360 alerts!

As you probably know, it is considered a best practice to use the BizTalk Deployment Framework for automated deployment of your BizTalk solutions. This Framework doesn't only allow for deployment of BizTalk artifacts, it also enables you to perform all kinds of custom tasks to automate the deployment of your BizTalk application, thereby reducing the risk of error that is introduced when you're deploying manually. Until recently it wasn't possible to automate the deployment of BizTalk360 alerts with BTDF, but it is now…

BizTalk360 API

As of BizTalk360 v8, which was released early February 2016, we now have a documented API available, enabling developers to build their own custom solutions on top of the BizTalk360 API. With this JSON-based API you can operate both BizTalk360 and BizTalk.

The API consists of the following services:

  • ActivityMonitoringService – APIs related to Business Activity Monitoring
  • AdminService – APIs related to admin (super user) activities in BizTalk360
  • AdvancedEventViewerService – APIs that retrieve details from the Advanced Event Viewer
  • AlertService – APIs used for monitoring in BizTalk360
  • BizTalkApplicationService – APIs that retrieve information about BizTalk applications
  • BizTalkGroupService – APIs that retrieve details about the artifacts in a BizTalk Group
  • BizTalkQueryService – APIs that run queries against BizTalk
  • EDIManagementService – APIs that run EDI queries and retrieve details about parties and agreements
  • EnvironmentMgmtService – APIs related to environment management in BizTalk360
  • ESBManagementService – APIs that retrieve details about the Enterprise Service Bus

Combined, these services contain many operations, which are used by BizTalk360 itself, but can also be used in custom applications. In this blog post I will describe an example of such a custom application, which automatically creates a BizTalk360 Alert during deployment of a BizTalk application.

If you have a BizTalk360 v8 Platinum, Partner or Product Specialist license, or a Trial license, and you want to find out more about the BizTalk360 API, just navigate to Settings / API Documentation. As you've probably come to expect from a modern API, you can try each operation directly from the API Documentation.

Note: although you will need one of the above licenses to be able to access the API Documentation, you don’t need a specific license to be able to actually use the API itself!

Deployment of BizTalk applications

In my experience, most organizations that use BizTalk Server as their middleware platform also use BTDF to deploy their BizTalk solutions. The most important reason to use this open source software over the out-of-the-box capabilities that BizTalk Server offers is that BTDF enables you to minimize the (sometimes large) number of manual steps required to deploy a full integration solution, thereby significantly reducing or even fully eliminating the risk of human error. Think single-click deployments (your admins will love you for this)!

An important component of BTDF is the SettingsFileGenerator.xml file. In this file, which can be edited with Microsoft Excel, you can define all of the specific settings for each environment in a DTAP street. An example of this type of setting is the URLs of Send Ports, which will be different in a Production environment compared to a Test environment, for instance.

Besides being able to deploy all kinds of BizTalk artifacts, the open architecture of BTDF enables you to execute SQL scripts, deploy web services in IIS and much more. (I actually wrote a blog post a while ago about how PL/SQL scripts can be executed during the deployment of a BizTalk application.)

BT360Deploy

Now that we have a flexible deployment framework with BTDF and a documented BizTalk360 API, it's only a small step to create a tool that takes advantage of this powerful combination and creates a BizTalk360 Alert for a BizTalk application during a deployment with BTDF! All this piece of software needs to be capable of is reading settings from the environment-specific BTDF settings file and calling the BizTalk360 API to create that alert. And the best news is: we've already done that for you and we're offering it for free on CodePlex!

It’s called BT360Deploy, and it works as follows:

BT360Deploy takes 2 arguments, namely:
-a: the name of the BizTalk Application which is deployed
-s: the name and location of the (environment-specific) settings file which contains all kinds of BizTalk360 parameters

Example:

BT360Deploy -aAxonOlympus.BTDF -sC:\Data\Development\BizTalk\AxonOlympus.BTDF\Deployment\EnvironmentSettings\Exported_LocalSettings.xml

Below you see the output of the deployment of a BTDF project, in which BT360Deploy was incorporated, with the parameters as shown above:

To be able to create BizTalk360 Alerts during deployment, you need to know which fields in the BizTalk360 User Interface correspond with which fields in the BTDF Settings file. For that purpose we created a document which describes that mapping. Below you will find a sample with a couple of fields. The full document can be downloaded from CodePlex, as part of the BT360Deploy package.

Screen/Field in BizTalk360 User Interface – Field in BTDF Settings file

Alarm – Basic / Basic Details
  • Alarm Name – BizTalk360_alertName
  • Email Ids – BizTalk360_commaSeparatedEmails
  • Disable Alarm for Maintenance – BizTalk360_isAlertDisabled

Alarm – Threshold / Threshold Alert
  • Alert on threshold violation – BizTalk360_isAlertASAP
  • If violation persists for – BizTalk360_alertASAPWaitDurationInMinutes
  • Limit number of alerts per violation to – BizTalk360_isContinuousErrorRestricted
  • Notify when things become normal again – BizTalk360_isAlertOnCorrection
  • Set Alerts on set day(s) and time(s) only – BizTalk360_isThresholdRestricted
  • Days – BizTalk360_thresholdDaysOfWeek
  • Start Time – BizTalk360_thresholdRestrictStartTime
  • End Time – BizTalk360_thresholdRestrictEndTime

Are you interested in creating your BizTalk360 Alerts during deployment, instead of having to create them manually afterwards? Check out the following link on CodePlex to get your hands on it:

http://bt360deploy.codeplex.com

Besides some binaries, the download contains the earlier mentioned mapping document and a ‘How to…’ document which describes how you can incorporate BT360Deploy in your BTDF project.
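
If you're curious what the core of such a tool boils down to - read a BizTalk360_* property from the BTDF environment settings file and call the BizTalk360 API with it - here is a rough C# sketch. The settings-file structure and the alert-creation endpoint below are assumptions for illustration only; the BT360Deploy source on CodePlex is the reference implementation.

using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

public static class AlertDeploymentSketch
{
    public static async Task CreateAlertAsync(string settingsFilePath, string bizTalk360BaseUrl)
    {
        // Read a BizTalk360_* property from the exported BTDF settings file.
        // The <property name="..."> shape is an assumption about the exported format.
        XDocument settings = XDocument.Load(settingsFilePath);
        string alarmName = settings.Descendants("property")
            .Where(p => (string)p.Attribute("name") == "BizTalk360_alertName")
            .Select(p => p.Value)
            .FirstOrDefault();

        // Call the BizTalk360 API to create the alarm.
        // The endpoint path and payload are illustrative, not the documented contract.
        using (var client = new HttpClient())
        {
            var payload = new StringContent(
                "{\"alarmName\":\"" + alarmName + "\"}", Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync(
                bizTalk360BaseUrl + "/Services.REST/AlertService.svc/CreateAlarm", // hypothetical path
                payload);
            response.EnsureSuccessStatusCode();
        }
    }
}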
Enjoy!

Categories: BizTalk
written by: Lex Hegt

Posted on Thursday, March 10, 2016 3:38 PM

by Massimo Crippa

With the latest Azure API Management service update, the Git integration has been rolled out. In this post we will see how we can seamlessly control the versions of the proxy configuration and move it between different environments.

Scenario

All the configuration that is applied to the proxy, along with the portal customizations, is stored in a database in the provisioned tenant. Now, every APIM tenant can expose a public Git endpoint from which we can pull the configuration down to our local (or remote) Git repository.

Once we have our local version, we can apply the changes we need, push them back to the tenant repository and then deploy them to the APIM database.

These are the steps of our scenario:

  1. Save (sync from the APIM repository to the tenant Git repository)
  2. Pull (from the tenant Git repository to a local repository)
  3. Apply changes (on our local repository)
  4. Push (to the tenant Git repository)
  5. Deploy (from the tenant Git repository to the APIM repository)

 

The Git integration is not enabled by default, so first we have to connect to the management portal, go to the Security area and enable Git access.

Save and Pull

The next step is to save the proxy configuration to the APIM tenant's Git repository. This operation can be done in two different ways:

  • Use the “Save Configuration to Repository” button on the APIM Admin Portal.
  • Call the “Save” operation of the API Management REST API (here how to enable the REST API).

In both cases you have to specify the branch name to save the configuration to and whether or not to override the changes of newer check-ins. This operation can take a couple of minutes.
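
If you go the REST route, the Save call mirrors the Deploy call shown later in this post; roughly (double-check the exact body fields against the API reference):

Method : POST
Address : https://{tenantname}.management.azure-api.net/configuration/save?api-version=2014-02-14-preview
Headers :
 + Content-Type > application/json
 + Authorization > SharedAccessSignature=....
Body :
    {"branch":"master", "force":false}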

Once completed, you can open a Git console and create a working copy of the remote repository by using the clone command.

Before that, you need to get a temporary password (in the Admin Portal) to access the remote repository.

 

Then run the "git clone https://coditapi.scm.azure-api.net/" command and specify "apim" as the username, along with the temporary password we got in the previous step.

Below is the folder structure of the local repository. As you can see, the proxy configuration is exported (APIs, policies, security groups and products) along with the developer portal customizations.

If in the meantime a new configuration has been saved to the APIM repository, we can pull it down with the "git pull" command.

Apply a change

Let's imagine we want to change the policy applied to the Echo API to extend the existing basic round robin algorithm.

The policy is applied at the API scope, so the file to be edited is policies/apis/Echo_API.xml.

This is the result of the "git diff" command after the change.

Now, to add the change to the Git staging area, use the "git add -A" command and then commit the changes with "git commit -m", as shown below.
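
For example (the commit message is just an illustration):

git add -A
git commit -m "Extend the Echo API round robin policy"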

Now we’re ready to push our changes to the Azure API Management Git repo.

Push and deploy

Type “git push” to sync the changes with the repository on our API Management tenant.

The final step is to deploy our configuration from the tenant repository to the APIM proxy.

This operation can be done in two different ways:

  • Use the “Deploy Repository Configuration” button on the APIM Admin Portal.
  • Call the “Deploy” operation of the API Management REST API (here how to enable the REST API).

For this step I'm going to invoke the Configuration REST API using Postman. Here are the details of my API call.

Method : POST
Address : https://{tenantname}.management.azure-api.net/configuration/deploy?api-version=2014-02-14-preview
Headers :
 + Content-Type > application/json
 + Authorization > SharedAccessSignature=....
Body :
    {"branch":"master"}

 

As a response I got a 202 (Accepted) and a Location header with the link to check the status of this transaction.

You can use the operationResults operation to check the status (Failed, InProgress, Succeeded) of the deployment. It's a GET request and again we must specify the Authorization header as in the previous call.
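
Concretely, the status check looks like this (the address is simply the URL returned in the Location header of the deploy response):

Method : GET
Address : {value of the Location header from the deploy response}
Headers :
 + Authorization > SharedAccessSignature=....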

If the deploy succeeded, the changes are immediately applied to the proxy. 

Restore a configuration

Now, imagine that you've applied a wrong configuration to the proxy and you want to restore a previous version from your local git repository. For example, these are the timestamps of my configurations:

  • On proxy: updated at 10:22 AM
  • On tenant repo: updated at 10:18 AM
  • On local repo: updated at 08:19 AM

I want to discard the 10:18 AM version and replace the 10:22 AM version with the 08:19 AM one. It's a four-step procedure.

A) The first thing to do is to bring the tenant repo in sync with the proxy. This step is necessary to mark the proxy as synced. Without the sync you will get this error as the result of the deploy operation: "Deployment operation failed due to invalid data: Snapshot operation is not safe.  Latest sync date: '2015-11-10T10:18:35.5278745'; latest configuration update date: '2015-11-10T10:22:01.7635694'"

B) Apply a modification to the local repo and commit it. This is necessary so the deploy procedure can recognize that there is something to be overwritten.

C) Run the "git push -f origin master" command to overwrite the version in the tenant Git repository.

D) Deploy the configuration using the Admin Portal or via the REST API.

Conclusion

The Git integration is a feature that customers have been asking about for a while. Now you can create and manage different versions of your proxy configurations and move them between different environments.

Cheers,

Massimo

Categories: Azure
written by: Massimo Crippa