Codit Blog

Posted on Thursday, March 9, 2017 8:16 AM

by Stijn Degrieck

You probably know that Microsoft is the world’s largest contributor to the open source community on the popular GitHub platform. That’s right: when it comes to sharing code for open development and collaboration, it leaves companies like Facebook, Google and Red Hat behind. All this is the result of a major strategic shift initiated by Steve Ballmer and accelerated by Satya Nadella, one that will allow Microsoft to transform into a full-blown Software-as-a-Service company.

In a letter to all employees two years ago, Satya Nadella, who had just been appointed CEO, said: “Our strategy is to build best-in-class platforms and productivity services for a mobile-first, cloud-first world. Our platforms will harmonize the interests of end users, developers and IT better than any competing ecosystem or platform.”

Today, Microsoft is reporting impressive growth for its SaaS solutions. Revenue from its cloud platform, Azure, grew triple digits, with usage of key computing and database workloads more than doubling year-over-year. And embracing Apple and Android is paying off, making its software easily available on all operating systems. (In fact, that’s often where you’ll find the best Microsoft apps.) Office 365’s enterprise user base is also growing quickly. At the end of last year, TechRadar reported that it is already twice as popular as Google’s G Suite in organizations across Europe. It’s a bold shift for a company once considered an evil monopolist, one that perceived open source as an existential threat to its business and, as one court order stated, put up ‘technical barriers’ to make it hard for the competition to work on the Windows operating system. Remember the ‘browser wars’?

I’m happy to see Microsoft’s progress and its approach to open source. At Codit, we welcome the transition from a closed Microsoft-only stack to an open Azure platform. It’s the perfect foundation for co-creation with our customers, for instance on projects related to the Internet of Things.

We have many customers exploring IoT. Usually they have lots of ideas, devices and sensors, but they lack the resources, expertise and experience to connect these to the cloud and put their data to work. Enter the Nebulus™ IoT Gateway: it links any sensor or device to the Microsoft Azure cloud in a couple of minutes, allowing you to connect, capture and control data in real time.

I’m a big fan of co-creation. Most customers have a clear view of what they want, but they need help translating it into specific technology features and functions. That’s where we come in, helping them turn big ideas into tangible new services.

What’s your big idea? We’re listening.

- Stijn Degrieck, CEO Codit

Categories: Opinions
Tags: Azure
written by: Stijn Degrieck

Posted on Friday, October 24, 2014 6:00 PM

by Sam Vanhoutte

On October 21, I presented on Azure Hybrid Connections in the Channel 9 studios in Redmond for the AzureConf conference, the biggest online Azure conference in the world. This blog post is the story of my day.

Tuesday was a great day for me.  I had the chance to speak at AzureConf for the first time and it was a great experience. AzureConf is the biggest Microsoft Azure online conference and was held for the third time.  I was really honored to be among this great line-up of speakers.  The conference was streamed from the Channel 9 studios on the Microsoft campus in Redmond and had several thousand viewers (stats might be shared later on).  Scott Klein organized the conference this year and I really want to thank him and his team for the good organization and the chance we all got.

The preparation

Since I knew I had to present on Hybrid Connections, I immediately started planning for this talk.  I had never given it as a full talk before (I had presented Hybrid Connections as part of my session at www.itproceed.be), so a lot of preparation was needed.  I used the existing TechEd presentation as input and guideline, but added more specific content and details in order to position Hybrid Connections and compare it with Service Bus Relay and Virtual Networking.

I also had some specific questions and things I wanted to get clarified, and for that I could count on the help and guidance of Santosh Chandwani (PM on the Hybrid Connections team).  As always, I spent most of the time on my demo, for which I took our Codit Integration Dashboard and moved it to the cloud, while the data and back-end services remained on premises.  I also built a new mobile service and a universal app, my first.  And to finish, I exposed a managed API through Azure API Management.

Pre-conference day

The day before the conference, all speakers were invited for technical pre-checks in the Channel 9 studios.  It was great to see the famous studio and all the equipment used there.  You could immediately feel the nice atmosphere.

We got to know the nice crew of people there and had to test our laptops for screen projection, network connectivity and sound.  That turned out to be very important, as both Mike Martin and I had some screen resolution issues.  Scott also handed out our shirts and we all went our own way to prepare for our talks the day after.

AzureConf day

October 21 started.  After a final dry run of the demo, we drove to the studios at 6:45 AM.  Tension was building, as we saw the twittersphere getting more active about the event.  People from all over the world were tuning in for Scott Guthrie's keynote.

We settled into the speaker room and all watched Scott Guthrie detail a lot of nice announcements, which can be found on the Azure blog.

The live sessions

We watched the sessions from the other speakers, either from the speaker room or from the Channel 9 'war room'.  I believe the content of the sessions was very good and showed a good variety of the services that are available on the Microsoft Azure platform.  The live sessions are available on Channel 9 as well, so if you have missed a session, go ahead and watch it online.

  • Michael Collier: Michael talked about the Resource Manager in Azure.  Very interesting capabilities of a service that will definitely evolve over time.
  • Mike Martin: Mike had a nice session on one of the topics that is crucial for every company: backups.  He showed how the Azure platform offers features & services for this.
  • Sam Vanhoutte: I guess that's me.  Hybrid Connections, Web Sites, Mobile Services, Service Bus Relay & API Management.  All in one session.
  • Seth Juarez: This was a great session on one of the newest services in Azure: Machine Learning.  By combining humor and complex maths, he made the complex subject of machine learning much more 'digestible'.
  • Rick G Garibay: Rick gave a session that was very similar to the sessions I gave on IoT at the UKCSUG, Cloudburst and WAZUG: positioning the investments of Microsoft around IoT and discussing Reykjavik and the concepts of cloud-assisted communications.  Great to see the complex demo worked; I can guarantee it's not easy.
  • Vishwas Lele: Vishwas showed tons of tools and concepts (of which I believe Docker and the Traffic Manager for SharePoint were really nice).
  • Chris Auld: Chris talked about DocumentDB, the new document database in Azure.  A really good explanation and demonstration of the backend for MSN and OneNote.

Everything was recorded in the Channel 9 studios and here's a nice group picture of all live speakers with the Channel 9 crew.

The recorded sessions

In addition to the great live content, there are also a lot of recorded sessions available on Channel 9.  I would encourage you all to have a look and download those sessions to watch whenever you have the time, as there's really great content out there.

It was a real honour and pleasure to be part of this group of great speakers.  And with this, I would like to thank Scott Klein for having me over, the great crew of Channel 9 and all speakers for the great time.

Sam

 

Categories: Community
Tags: Azure
written by: Sam Vanhoutte

Posted on Wednesday, February 22, 2012 10:10 PM

by Sam Vanhoutte

This post outlines the various extensibility options of Microsoft Dynamics CRM Online and how it can be combined with the Azure Service Bus to integrate with on-premises applications.
This is part 1, focusing on the out-of-the-box capabilities.

Recently, I was asked by Microsoft to give a presentation at TechDays Belgium on how to integrate CRM Online with on-premises applications.  While hybrid integration was not new to me at all, the challenge here was to learn the extensibility capabilities and integration aspects of CRM Online.  And I have to say, I was pleasantly surprised by the extensibility framework that Dynamics CRM Online provides, especially the plugin mechanism.

All code for this article will be published with the second blog post of this series.

Extending CRM Online

There are different ways to extend CRM Online.  This is possible through forms customization, or by injecting Ajax/Silverlight controls.  However, these customizations are only applied to changes and events that happen through the front end.  Knowing that more and more actions of a CRM application (especially data creation) are getting automated, I believe that customization should be injected into the actual ‘processing pipeline’ of CRM.  And that is where custom plugins come into the picture.

Getting started

Getting started is very straightforward.

  1. If you don’t have a CRM Online account, it is very easy to create a trial account (which is only valid for 30 days).  Just browse to http://www.crmonline.com and register for a trial account, using your Windows Live ID (WLID).
  2. After this, you need to get the CRM SDK, which can be downloaded from the Microsoft Download Center.
  3. Once you have downloaded the SDK, you should build the PluginRegistration project, which can be found in the %sdk%\tools\pluginregistration directory.  You will use this tool to register plugins and create endpoints on your CRM system.

Writing custom plugins

Every plugin you create needs to implement the IPlugin interface, which has one method: Execute.  These plugins are then linked with the corresponding CRM event through the PluginRegistrationTool.

using System;
using Microsoft.Xrm.Sdk;

namespace CRMPlugins
{
    public class MyPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
        }
    }
}

Implementing a ‘Base plugin’

Soon, I noticed that almost every plugin contains some generic code that is applicable to almost all plugins.  For that, I created an abstract class, BasePlugin, from which every plugin will derive.  This BasePlugin takes care of a lot of common aspects:

  • Tracking: plugins can now easily track events and logging info, in case exceptions occur.
  • Configuration: plugins can receive external configuration, when linked with an event (example: connection strings, passwords, service bus settings…)  This configuration is then passed as a string to the plugin constructor, where one can parse it.
  • Entity filtering: some plugins should only be executed against specific entities (for example: only for invoices, only for salesorders…).  It is possible in the BasePlugin to filter out the entity, just by passing in the entity logical name with the constructor.
  • OrganizationService: the organization service allows you to query and look up entities that are configured on the CRM system.  This service is made available to every executing plugin.
  • Entity handling: reading and updating attributes on entities is made generic now.
  • Exception handling: all exceptions are caught and handled in a consistent way.

Implementing a plugin now just comes down to writing the specific logic, without bothering about the general logic that is always the same.  The only things left are the following:

  • Define the constructors: a constructor should exist with two string parameters as arguments.  These are the unsecured and secured configuration settings that get passed in from the plugin registration.  If you want to make sure the plugin only gets executed for certain entity types, you can easily pass the entity type name to the base constructor, like here (only execute on invoice entities):
    public MyPlugin(string unsecure, string secure) : 
        base(unsecure, secure, "invoice") 
    { 
    } 
  • Implement the InternalExecute method: this is the abstract method that gets called by the BasePlugin; it is wrapped in the exception handling and preceded by the entity type check.  This method will contain the actual logic of the plugin.
public override void InternalExecute(IServiceProvider serviceProvider, Microsoft.Xrm.Sdk.Entity entity)
{}

All of the following plugins use this BasePlugin class, so they can focus on the actual logic and abstract away all CRM-specific plumbing (for which you can easily find resources and blogs online).
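To make this concrete, here is a minimal sketch of what such a BasePlugin could look like.  This is not the implementation that ships with the sample code of this series: the two-string constructor, InternalExecute, OrganizationService and ReadAttribute come from the description above, the rest is an assumption.

using System;
using Microsoft.Xrm.Sdk;

namespace CRMPlugins
{
    // Hypothetical sketch of the BasePlugin described above.
    public abstract class BasePlugin : IPlugin
    {
        private readonly string _entityLogicalName;

        // The two configuration strings come from the plugin registration step;
        // the optional entity logical name enables the entity filtering.
        protected BasePlugin(string unsecureConfig, string secureConfig, string entityLogicalName = null)
        {
            UnsecureConfig = unsecureConfig;
            SecureConfig = secureConfig;
            _entityLogicalName = entityLogicalName;
        }

        protected string UnsecureConfig { get; private set; }
        protected string SecureConfig { get; private set; }
        protected IOrganizationService OrganizationService { get; private set; }
        protected ITracingService TracingService { get; private set; }

        public void Execute(IServiceProvider serviceProvider)
        {
            // Resolve the common services every plugin needs.
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            OrganizationService = factory.CreateOrganizationService(context.UserId);

            try
            {
                // Entity filtering: only run for the configured entity type.
                Entity entity = context.InputParameters.Contains("Target")
                    ? context.InputParameters["Target"] as Entity
                    : null;
                if (entity == null) return;
                if (_entityLogicalName != null && entity.LogicalName != _entityLogicalName) return;

                InternalExecute(serviceProvider, entity);
            }
            catch (Exception ex)
            {
                // Consistent exception handling: trace and rethrow as a plugin exception.
                TracingService.Trace(ex.ToString());
                throw new InvalidPluginExecutionException(ex.Message, ex);
            }
        }

        // Concrete plugins implement only their specific logic here.
        public abstract void InternalExecute(IServiceProvider serviceProvider, Entity entity);

        // Generic attribute reading with a default value.
        protected string ReadAttribute(Entity entity, string attributeName, string defaultValue)
        {
            return entity.Attributes.Contains(attributeName)
                ? entity[attributeName].ToString()
                : defaultValue;
        }
    }
}

With something like this in place, the two-string constructor and the InternalExecute override shown above are all a concrete plugin needs.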

Writing a plugin: sending a text message to the invoicee

As a first test, I created a plugin that sends a text message when an invoice gets created. For this, it looks up the phone number of the account linked with that invoice. To send the text message, we just use an existing online service (www.smsbox.be) that exposes the texting functionality through an HTTP endpoint.

Execution logic

The execution logic for the plugin is pasted below.  The following steps take place:

  • First, the invoice number and amount are read from the entity.  (The entity is of type invoice, because we passed "invoice" as an argument to the BasePlugin constructor.)  To read these values, we call the base method ReadAttribute.
  • If the customerid attribute is available, we cast it to an EntityReference and retrieve that account through the OrganizationService (provided by the BasePlugin).  We also specify (for performance reasons) that we only want to receive the telephone number attribute.
  • If that attribute is available, we send the text message to the right phone number.  The full code is available in the provided zip file.
// Read the invoice number and total amount from the target entity.
string invoiceNr = ReadAttribute(entity, "invoicenumber", "(unknown)");
string amount = ReadAttribute(entity, "totalamount", "0");
if (entity.Attributes.Contains("customerid"))
{
    EntityReference customer = (EntityReference)entity["customerid"];
    if (customer.LogicalName == "account")
    {
        // Retrieve only the telephone number of the linked account.
        Entity account = OrganizationService.Retrieve("account", customer.Id,
            new ColumnSet(new string[] { "address1_telephone1" }));
        string phoneNr = ReadAttribute(account, "address1_telephone1", null);
        if (phoneNr != null)
        {
            SendSms(phoneNr, string.Format("Invoice {0} has been created for you with the amount of {1}", invoiceNr, amount), null);
        }
    }
}

Sending the SMS

For those who are interested, the code to do the HTTP post to the smsbox.be endpoint is pasted below:

 

private void SendSms(string phoneNumber, string text, ITracingService tracingService)
{
    try
    {
        // Strip the leading '+' and split the country prefix from the rest of the number.
        phoneNumber = phoneNumber.Replace("+", "");
        string prefix = phoneNumber.Substring(0, 2);
        string phoneNr = phoneNumber.Substring(2);
        // Call the smsbox.be HTTP endpoint (credentials masked).
        WebRequest request = WebRequest.Create(string.Format("http://www.smsbox.be/scripts/sendsms.php?login=####&pwd=####&prefix={0}&number={1}&message={2}", prefix, phoneNr, text));
        request.Method = "POST";
        var response = request.GetResponse();
        response.Close();
    }
    catch (Exception ex)
    {
        if (tracingService != null)
            tracingService.Trace(ex.Message);
        throw;
    }
}

 

Registering the plugin with the PluginRegistrationTool

As indicated earlier, you need the PluginRegistrationTool to register the plugin on the CRM system and to link it with the corresponding events.  To do that, follow these steps:

  1. Open the PluginRegistrationTool
    • Enter the discovery URL of your CRM account.  This can be found in the CRM portal, in the resources section.
    • Enter the WLID that is linked with your CRM Online subscription and click Connect to enter the password.
  2. In the left pane, you can now see all CRM subscriptions that belong to your WLID.  Double-clicking one of them will retrieve the entire customization profile.
    • Now you can add assemblies, plugins and endpoints and save them to your CRM instance.
  3. Now it’s time to upload our assembly with the plugin.  It’s as easy as clicking the Register New Assembly button in the toolbar and selecting the plugin assembly.
    • For CRM Online, the only Isolation Mode that is available is Sandbox, meaning you have reduced functionality.
    • The assembly should be deployed to the database, since Disk and GAC deployment are not available in CRM Online.
    • Once everything is configured, clicking Register Selected Plugins will upload and register the plugin on the CRM Online system.
  4. The plugin we have created should be executed when a new invoice is created.  To do that, we need to register a new step for the plugin.
    • Select the plugin and click Register New Step. This pops up a new window, where we need to provide the following information.
    • Message: this is the type of event.  In this case, we type Create (notice the IntelliSense while typing).
    • Primary Entity: the logical name of the entity to which the event will be linked.  In this case, we select invoice (case-sensitive).
    • Then we can specify the execution order and the security context in which the plugin should be executed.
    • The plugin can be executed at a specific stage in the processing pipeline: Pre-validation, Pre-operation or Post-operation.  In this case, we want to execute the action when the invoice has been saved in the system, so we select the Post-operation stage.
    • The execution mode defines whether the user will be waiting during the execution of the pipeline (synchronous mode).  Asynchronous mode improves the user experience, but when the action fails, the failure is not visible to the end user, only to administrators through the CRM portal.
    • On the right, it is possible to specify configuration values that will be passed to the constructor of the plugin (we handle this in the BasePlugin).
  5. That’s all that is needed to link our custom action with the right event on CRM Online.  Straightforward and easy, isn’t it?

 

CRM out-of-the-box Service Bus connectivity

Dynamics CRM provides some out-of-the-box capabilities to integrate events with the Azure Service Bus on two levels: Messaging and Relay.  For that, you first need to configure the Access Control Service (ACS).  This is typically a complex step, but the registration tool makes it extremely easy and is a perfect example of how Microsoft is using its own components and capabilities in a good way.

To do this, you need to register a new service bus endpoint through the PluginRegistrationTool.  (Register New Service Endpoint). 

  • First, we need to provide the endpoint settings:
    • Specify the service bus namespace and the path to the queue/topic or service endpoint.
    • Also specify the type of endpoint in the Contract drop-down: Oneway, Queue, TwoWay or Rest.
  • Secondly, we need to configure the ACS settings for this endpoint, so that the CRM online system has access to the service bus endpoint.
    • Clicking on Save & Configure ACS opens a new tool window, where we need to specify the management key and the issuer name for that key.
    • We also need to upload a certificate.  This public certificate key can be downloaded from the CRM online portal by clicking on Settings > Customizations > Developer Resources and Download Certificate.
    • After clicking the Configure ACS button, we get a nice log, indicating what this wizard has done for us.  The highlights are pasted below:
      Trying to find out the ACS Version.
      ACS Version is: V2
      Creating ManagementService for codittest-sb
      Created RelyingParty with Name: testqueue, RealmName:
      http://codittest.servicebus.windows.net/testqueue, ID: 10004406
      Created RuleGroup with Name: Rule group for testqueue
      Assigned RuleGroup to RelyingParty
      Created Rule: sampleorgsendrule
      Created Rule: sampleownersendrule
      Created Rule: sampleownerlistenrule
      Created Rule: sampleownermanagerule
    • To test whether ACS was configured successfully, you can click the Save & Verify Authentication button.

Messaging capabilities

It is possible to use the Plugin Registration tool to register a service bus messaging endpoint.  This endpoint is typically of type queue, but it also works with topics (since both topics and queues expose the same REST interface).  On this endpoint, it is also possible to register a step, linked with an event.  When this event is fired, the entity will be serialized and written to the messaging endpoint.  From then on, it is possible to use the receiver functionality to receive these entities from the messaging endpoint.

There is one downside, however: no properties are added to the BrokeredMessage, which makes it hard to use the publish/subscribe pattern through subscriptions that use a routing filter.
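To illustrate the receiving side, here is a minimal sketch of a console receiver.  It assumes the namespace and queue from the ACS log above ("codittest" and "testqueue"), the ACS issuer credentials configured for that namespace (the key below is a placeholder), and that the message body contains the serialized RemoteExecutionContext, as in the SDK samples.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using Microsoft.Xrm.Sdk;

class CrmQueueReceiver
{
    static void Main()
    {
        // ACS credentials of the service bus namespace (issuer key is a placeholder).
        var credentials = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "codittest", string.Empty);
        var factory = MessagingFactory.Create(address, credentials);
        var queueClient = factory.CreateQueueClient("testqueue");

        BrokeredMessage message = queueClient.Receive(TimeSpan.FromSeconds(30));
        if (message != null)
        {
            // CRM serializes the plugin execution context (including the target entity) into the message body.
            RemoteExecutionContext context = message.GetBody<RemoteExecutionContext>();
            Entity entity = (Entity)context.InputParameters["Target"];
            Console.WriteLine("Received {0} message for entity {1}", context.MessageName, entity.LogicalName);
            message.Complete();
        }
    }
}

Because of the missing BrokeredMessage properties mentioned above, any routing decision has to be made on the deserialized context rather than on message properties.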

Relay service capabilities

The Relay service capabilities can also be leveraged through these endpoints, but then the on-premises service that is exposed over the service bus relay endpoint is required to implement a specific contract (Microsoft.Xrm.Sdk.IServiceEndpointPlugin), as indicated in the SDK documentation.  A sample service can be found in the SDK (%SDK%\samplecode\cs\azure\onewaylistener).  This application exposes itself on the service bus endpoint through WSHttpRelayBinding.
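Here is a trimmed-down, hypothetical sketch of such a relay listener; it is not the SDK sample itself, and the namespace, path and issuer key are placeholders.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;
using Microsoft.Xrm.Sdk;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
class CrmRelayListener : IServiceEndpointPlugin
{
    // CRM calls this operation through the Service Bus relay when the registered step fires.
    public void Execute(RemoteExecutionContext context)
    {
        Console.WriteLine("Received {0} for entity {1}", context.MessageName, context.PrimaryEntityName);
    }

    static void Main()
    {
        var address = ServiceBusEnvironment.CreateServiceUri("https", "codittest", "relayservice");
        var credentials = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");

        using (var host = new ServiceHost(new CrmRelayListener(), address))
        {
            // Expose the IServiceEndpointPlugin contract on the relay endpoint.
            var endpoint = host.AddServiceEndpoint(typeof(IServiceEndpointPlugin), new WSHttpRelayBinding(), address);
            endpoint.Behaviors.Add(new TransportClientEndpointBehavior { TokenProvider = credentials });

            host.Open();
            Console.WriteLine("Listening on {0} - press Enter to exit.", address);
            Console.ReadLine();
        }
    }
}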

Conclusion

Microsoft Dynamics CRM Online is truly a good and well-designed product that leverages the various capabilities and components that the Microsoft platform offers.  It provides a very extensible plugin model and application concepts to configure and customize the application.  Next to that, it also provides some out-of-the-box capabilities to integrate with the various Azure Service Bus capabilities, including relaying and messaging.

However, these out of the box integration capabilities have some limitations and my next post will show how to work around these limitations.

Sam Vanhoutte, Codit

Categories: Azure
Tags: Azure
written by: Sam Vanhoutte

Posted on Thursday, February 9, 2017 4:00 PM

by Massimo Crippa

In Azure API Management, Groups are used to manage the visibility of products to developers: developers can view and consume the APIs contained in the products that are visible to the groups they belong to.

Suppose that we have a custom group for developers affiliated with a specific business partner and we want to allow those developers (who signed up with different identity providers) to access only the partner's relevant products.

Let's combine Logic Apps, Azure API Management and ARM together to automate the user group association.

In short: no matter which identity provider (AAD, Google, Twitter, etc.) is used to sign up, when a user belongs to the @codit.eu domain, they should be added to the "Codit Dev Team" custom group.

The basic idea here is to use a Logic App as a batch process to get the list of registered users and then call a child Logic App to assign each developer to the proper custom group that manages the product visibility.

Logic Apps and Azure API Management

There are three ways to invoke an API Management endpoint from a Logic App:

  • API Management connector. The connector is pretty straightforward. You first select an APIM tenant, then the API and the operation to be called. Finally, the available headers and parameters are automatically displayed. By default, the APIM connector only shows the APIM tenants created in the same subscription as the Logic App.
  • HTTP + Swagger connector. This connector provides a similar user experience to the APIM connector. The shape of the API, with its parameters, is automatically integrated into the designer.
  • HTTP connector. It requires you to specify the HTTP verb, URL, headers and body to perform an HTTP call. Simple as that!

In this exercise, all the services to be integrated are located in different Azure subscriptions, therefore I only used the HTTP and HTTP + Swagger connectors.

Manage 'em all

With the "Every API should be a managed API" mantra in mind and with the final goal to have a more information about which API is called and its performance we created a facade API for every HTTP call.

Here is the list of managed APIs:

  • Every call to the Azure Resource Manager (get users, get groups by user, add user to group)
  • Get the token to authorize the ARM call
  • Call the child Logic App

And here are the Logic App workflows that were created.

Some other benefits we got from the virtualization: 

  • Use of a single authorization model between Logic App and APIM by providing an API Key via the "Ocp-Apim-Subscription-Key" header.
  • Balancing complexity and simplicity. The ARM authentication is delegated to the API Management layer.
  • Apply a consistent resource naming convention. 

Azure API Management Policies

The policy engine is where the core power of Azure API Management lies. Let's go through the policies that have been configured for this exercise. 

Get the bearer token

A token API with a GET operation is used by the ARM facade API to get the bearer token to authorize the call to the Azure Resource Manager endpoint. The policy associated with the "get-token" operation changes the HTTP request method and sets the body of the request to be sent to the AAD token endpoint using the password flow.
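As an illustration, the inbound policy of such a get-token operation could look roughly like the sketch below; the tenant, the named values ({{armClientId}}, {{armUser}}, {{armPassword}}) and the URLs are placeholders, not the actual configuration.

<inbound>
    <base />
    <!-- Turn the incoming GET into a POST towards the AAD token endpoint (password flow). -->
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <set-body>grant_type=password&amp;resource=https://management.azure.com/&amp;client_id={{armClientId}}&amp;username={{armUser}}&amp;password={{armPassword}}</set-body>
    <set-backend-service base-url="https://login.microsoftonline.com/{{tenantId}}/oauth2" />
    <rewrite-uri template="/token" />
</inbound>

Keeping the credentials in named values means they never appear in the policy body itself.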

Call the ARM

This is the call to the ARM endpoint (get users, get groups by user, add user to group). The send-request policy is used to perform a call to the private token API and to store the response in the bearerToken context variable.

The "set-header" policy in combination with a policy expression is used to extract the token and to add it as a header to the request sent to the ARM endpoint.

This policy can be improved by adding policy expressions to store and retrieve the token from the cache (using the cache-store-value and cache-lookup-value policies).

Logic Apps facade API

Logic App workflows that expose an HTTP trigger can only be called using the POST verb, passing the parameters in the body of the request.

The child workflow that assigns a user to a specific group has been virtualized via Azure API Management to change the URL segments to https://api.codit.eu/managedarm/users/{uid}/groups/{groupname} and to change the request method to PUT.
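The facade policy for that operation could be as simple as the following sketch; the Logic App host, workflow id and SAS query parameters are placeholders.

<inbound>
    <base />
    <!-- The consumer calls PUT /managedarm/users/{uid}/groups/{groupname}; the Logic App HTTP trigger only accepts POST. -->
    <set-method>POST</set-method>
    <!-- Forward to the child Logic App callback URL (workflow id and SAS signature omitted). -->
    <set-backend-service base-url="https://prod-00.westeurope.logic.azure.com" />
    <rewrite-uri template="/workflows/your-workflow-id/triggers/manual/paths/invoke?api-version=2016-06-01" />
    <!-- Pass the URL segments to the workflow in the request body. -->
    <set-body>@(new JObject(new JProperty("uid", context.Request.MatchedParameters["uid"]), new JProperty("groupname", context.Request.MatchedParameters["groupname"])).ToString())</set-body>
</inbound>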

Conclusion

Thanks to this simple Logic App and some APIM power, I can now be sure that every new colleague who signs up to our developer portal is automatically associated with the internal developer team, so that they can get access to a broader set of APIs.

A similar result can be achieved using the Azure B2B/B2C integration in combination with the AAD security groups but, at the time of writing, the APIM integration with AAD B2C has not been completed yet.

Another benefit of managed APIs is the increased visibility of the exposed assets and their performance: discover how an API is used, get information about its consumers, and spot the trends that most impact the business.

Cheers

Massimo

Categories: API Management
Tags: Azure
written by: Massimo Crippa

Posted on Thursday, July 7, 2016 11:58 AM

by Luis Delgado

Discover how to unit test your Node.js Azure Functions, to increase code quality and productivity, using these code samples.

Writing unit and integration tests for Azure Functions is super critical to the development experience, since their execution relies on context variables that are beyond your control and supplied by the runtime. Furthermore, there is currently no local development or debugging experience available for Azure Functions. Therefore, testing whether your functions behave properly, in the context of their runtime, is essential to catch defects and increase your productivity.

Because Node.js is dynamically typed, I want to share a quick trick on how to mimic the Azure Functions runtime context in order to test your functions. I did not find any documentation from Microsoft related to unit testing Node.js Azure Functions, so feel free to comment on the approach I propose here.

As an example, we are going to make a function that posts an observation every minute to Azure IoT Hub:

deviceSimulator/index.js

Now we want to write a unit/integration test for this function.

deviceSimulator/test.js

The function getContextObject simply returns an object that mimics the context object expected by the Azure Functions runtime. The test will simply import your function from index.js, create the mock-up context object and feed it to your function for execution. Finally, within your test, you can override the context.done() function to do the assertions you need and then call done().
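Since the gists are not reproduced in this post, here is a minimal sketch of the idea, assuming the function under test lives in index.js and Mocha is the test runner; getContextObject follows the description above and everything else is illustrative.

// test.js - sketch of mocking the Azure Functions context object (Mocha assumed).
const assert = require('assert');
const myFunction = require('./index');

// Returns an object that mimics the context supplied by the Azure Functions runtime.
function getContextObject(done) {
  return {
    log: function () { /* swallow log output during tests */ },
    bindings: {},
    // Override done() to run assertions once the function signals completion.
    done: function (err) {
      assert.ok(!err, 'function should not return an error');
      done();
    }
  };
}

describe('deviceSimulator', function () {
  it('executes without errors', function (done) {
    const context = getContextObject(done);
    // Feed the mock context (and an empty trigger payload) to the function under test.
    myFunction(context, {});
  });
});

Running mocha (or npm test with Mocha configured) then executes the function against the mock context.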

Is this the proper way to test Azure Functions on Node.js? I will let the Functions Product Group comment on that :). However, this method works for me.

The other alternative you have is to create your internal functions in other files that you can test separately, in the traditional way you would test JS code, and import those files in your index.js file. The problem I see with that approach is that, if your internal functions make calls to the context object, your tests will probably fail because of it.

Comments, feedback or suggestions? Submit an issue to the repository or write them below.

Categories: Azure
written by: Luis Delgado