
Codit Blog

Posted on Tuesday, September 23, 2014 7:14 PM

by Glenn Colpaert and Tom Kerkhove

Yesterday Codit organized its second Integration Summit, with sessions on new technologies and testimonials from customers and community enthusiasts.
In this blog post we take a look back at what happened, so in case you missed it... happy reading!

Opening keynote: The Future of Integration (Richard Seroter)

 

In this amazing opening keynote Richard talked about the future of integration: what are the current trends, and how do we prepare for the technologies of the future?

The main question Richard tried to answer in his session is how the current trends change our typical XML application integration.
In today's integration enterprises it's all about data volume, endpoints, technologies and destinations.
New trends like cloud computing, IoT, mobility and wearables introduce a whole new range of challenges for us integration specialists and for our companies.

Richard took us through all the current trends and tackled some of their more important implications for the integration industry.

It's difficult to go through all of these trends and implications here, so I suggest checking out Richard's slides when they are made available, because they contain a very interesting overview.
I would, however, like to pass on the tips Richard gave us on how to prepare for this new wave of trends.
First of all, BE ENGAGED! Get on Twitter and start following people, join conferences and meet-ups (even on technologies you have never worked with) and, most important of all, share your knowledge with your co-workers.


GET EDUCATED! Learn the new products, protocols and architectures by playing around and trying them hands-on. Never stop training yourself.
ENGINEER! Decompose the current dependencies in your applications and integration solutions, stay on the edge of technology and give the latest technologies a try. Most important of all: try to automate as many processes as possible.

You can view his slidedeck here.

How to make everybody love SaaS (Sam Vanhoutte)

 

Sam Vanhoutte shared his vision on, and experience with, integrating SaaS into different architectures and exchange patterns. This comes with certain challenges that have to be solved, and he showed how you can solve them, going from external connectivity over security & identity to mobility.

He also showed us how easily you can add Salesforce to your infrastructure without creating a new Active Directory.

The key takeaway is that each scenario has its own set of challenges, depending on many factors - for example whether it is a Ground-to-Cloud, Cloud-to-Ground or Cloud-to-Cloud integration. It's just a matter of finding the right technology or service to bridge the gap.

 

Integration project tips & tricks (Toon Vanhoutte, Serge Verborgh and Danny Buysse)

 

This session consisted of three different parts.

In the first part Serge explained the methodology Codit uses when doing an integration project; it is not only applicable to Codit integration projects but to any integration project.
It's all about identifying and tackling common concerns of an integration project at an early stage.

The second part of this session was handled by Toon and was all about continuous integration and Application Lifecycle Management (ALM).
A good ALM setup is all about the repository, automated testing and deployment, and behavior testing. Always know which codebase is deployed where!
To round it all up, Toon demonstrated how to easily set up ALM and automated deployment with the help of some tools Codit has developed.
After Toon, Danny took the stage to talk about performance management and detecting issues on your platform at an early stage. He talked about the differences between application monitoring and performance management monitoring, which is really a story of reactive vs. proactive operations.
For this proactive monitoring Codit uses AIMS for BizTalk; read more information on the AIMS product.

Mobility: it's not about the "if", but about the "how to"! (Rudy Van Hoe)

 

Rudy Van Hoe walked us through the vision of Microsoft - what they have learned and why they made certain changes. He talked about the new "mobile first, cloud first" model and how you can integrate mobility with your cloud IaaS/PaaS infrastructure.


Look beyond the device and application, a testimonial (Hans Valcke).

 

 

In this testimonial session Hans took us on a trip around the infrastructure of the Mohawk Group (Unilin). He gave us an idea of how they tackle the challenges in their integration and services landscape, with more than 100 BizTalk servers and several hundred services.


One of the keys to the success of their integration setup and the manageability of their services is Sentinet; Hans demonstrated how they use the product in their enterprise. Read more information on Sentinet.
Next to managing BizTalk and services, the integration team also has to manage connectivity to certain mobile applications. The biggest challenge there, according to Hans, is the data and the provisioning of that data and the applications.
Last but not least, Hans gave us some tips for a successful mobile strategy; here are some of the key ones:
• Decouple your applications from your ERP
• Stimulate your development team to re-use existing services
• Monitoring and alerting is the key to keep your application in a healthy state
• Make mobile development abstract from base services development
• Operate in a secure way

 

 

Internet of Things – Hype or Reality? (Piet Vandaele)

 

In this session Piet gave us an overview of what IoT is all about and how it leverages a number of other trends.
IoT is really about embedded sensors connected to the internet and to each other, allowing businesses and manufacturers to make better decisions at the moment they need to.
The reason IoT is booming right now comes down to two simple reasons: a whole new range of chipsets is available that is more power efficient when it comes to connectivity, and these chipsets are more affordable than before.

The maturity model of IoT consists of three stages: first there is basic information support (reading out meter details, for example), then remote operation support, and finally remote performance improvement support.
Piet demonstrated this model with some real-life cases and scenarios.
The question asked most often about IoT scenarios is whether to create an on-site, cloud or hybrid (Cisco routers and switches) implementation.
According to Piet it's not an OR story but an AND story: preprocess locally, then store data and execute logic in the cloud.

 

Win of the day: How to fit a full day Summit in one single demo? (Tom Kerkhove and Sam Vanhoutte)

 

 

Sam Vanhoutte & Tom Kerkhove had built a demo that covered most of the topics of the day and raffled two bottles of champagne with the Kinect Play Box. Keep an eye on the Codit Blog for a detailed post on the demo!

 

Keynote: Marc's Motivational Talk (Marc Herremans)

 

We had a full day of integration talks - which were very interesting - but we ended with a different kind of session: Marc Herremans joined us and told us about his life before and after his accident. He taught us - or at least me - that nothing can stop you from achieving your goals: you have to fight for the things you want to achieve and love, especially for your family, as this is the most important thing in life.

It was a very interesting session that can't be captured in writing, but I'd like to summarize it with a quote from Marc: "Every setback is an opportunity to fight back."

Posted on Thursday, September 4, 2014 3:57 PM

by Glenn Colpaert

Description of an issue when adding the WCF.OutboundCustomHeaders to the context when sending to the BizTalk ServiceBus adapter.

For a hybrid scenario I'm currently working on, using on-site WCF services and Azure Service Bus, it was necessary to have the WCF headers of the original call available as Brokered Message Properties in Azure Service Bus.

We created an SB-Messaging send port to send the message to Azure Service Bus and added the WCF namespace (http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties) in the Brokered Message Properties window.

 

We quickly ran into the following issue:

The adapter failed to transmit message going to send port "SpBlog" with URL "sb://coditblogdemo.servicebus.windows.net/demo". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.InvalidOperationException: Envelope Version 'EnvelopeNone ( http://schemas.microsoft.com/ws/2005/05/envelope/none)' does not support adding Message Headers.

At first I thought this error was related to the fact that we added the WCF namespace in the Brokered Message Properties window, but even after we removed it, the error still occurred.

In fact, we noticed that the moment WCF.OutboundCustomHeaders is present in the context of the message, this error occurs on the SB-Messaging port.

Cause

Looking at the stack trace, it seems that the SB-Messaging adapter is built on top of the BizTalk WCF adapter runtime, which makes perfect sense. The only downside is that the SB-Messaging adapter behaves the same as the WCF adapter when it comes to the OutboundCustomHeaders property.
When using the WCF adapter and adding WCF.OutboundCustomHeaders to the context, the value of this property gets added to the SOAP header of the outgoing message, and that is exactly what also happens with the SB-Messaging adapter. Let's call it a hidden feature of the SB-Messaging adapter.

Solution

The solution to this problem was fairly simple and straightforward. We created a custom pipeline component, 'Context Copier', that copies the value of the OutboundCustomHeaders property to another context property. After assigning the value to the new context property, we write null to the OutboundCustomHeaders property, which removes it from the context.

 

Of course this is only the basic outline of the component. In our 'Context Copier' we added the possibility to specify a source and destination list of properties to copy, but the basic outline below will already solve the OutboundCustomHeaders issue when sending to Service Bus.
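For reference, here is a minimal sketch of the Execute method of such a 'Context Copier' component; the destination property schema namespace is a placeholder, and the rest of the pipeline-component plumbing (IBaseComponent, IPersistPropertyBag, attributes) is omitted:

using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    const string WcfNs = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties";
    // Placeholder: the namespace of your own property schema.
    const string CustomNs = "https://Codit.Demo.Schemas.CustomProperties";

    // Read the WCF headers from the message context.
    object headers = pInMsg.Context.Read("OutboundCustomHeaders", WcfNs);
    if (headers != null)
    {
        // Copy the value to our own context property so it is preserved...
        pInMsg.Context.Write("OutboundCustomHeaders", CustomNs, headers);

        // ...and write null to the WCF property to remove it from the context.
        pInMsg.Context.Write("OutboundCustomHeaders", WcfNs, null);
    }

    return pInMsg;
}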

 

 

After adding this pipeline component to our Send Pipeline, the issue was resolved and we could happily continue integration with Service Bus!

 

Cheers,

Glenn Colpaert

written by: Glenn Colpaert

Posted on Thursday, August 28, 2014 3:55 PM

by Massimo Crippa

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points, and through the Sentinet API interfaces.

In this fourth post I want to continue the Sentinet Extensibility series by exploring another possible customization: routing.

Routing

The routing feature allows messages received on the virtual inbound endpoint to be delivered to more than one alternative endpoint of the same backend service. When the backend service exposes multiple endpoints, some or all of them can be included in message routing by marking them in the Sentinet Administrative Console. Note that at least two backend endpoints must be selected to activate routing.

The Sentinet routing feature improves API availability with automatic failover: in case of a communication failure, the Sentinet node falls back to the next available endpoint (this does not happen for a SOAP fault, because that is considered a valid response).

Sentinet supports four router types:

  • Round-Robin, with priority or equal distribution of the load. This is the default routing mode; the fallback is automatic.
  • Simple Fault Tolerance. The routing mechanism always hits the endpoint with the highest priority and, in case of a communication failure, falls back to the endpoint with the lower priority.
  • Multicast. A copy of the incoming message is delivered to all the endpoints.
  • Custom. The routing rules are defined in a custom .NET component.

Scenario

Here are the requirements for this scenario:

  • Sentinet is deployed behind a network load balancer (NLB) and the customer doesn't want traffic to pass through the NLB again.
  • The virtualized backend service has nine endpoints (three per continent) and the load should be routed depending on which continent the request is coming from.
  • The load routed to Europe and to North America should be equally distributed between the endpoints (simple round robin). 
  • The load routed to Asia should always hit a specific endpoint and in case of error must fall back to the others (simple fault tolerance). 

In short, we want to build a geography-based custom router that merges the Round-Robin and the Fault-Tolerance types. To build the GeoRouter I started with the example that I found in the Sentinet SDK.

Build the custom Router

A Sentinet custom router is a regular .NET component that implements the IRouter interface (ref. Nevatech.Vbs.Repository.dll) and extends the MessageFilter abstract class.

The IRouter interface contains three methods:
- GetRoutes – where the routing rules are defined.
- ImportConfiguration – reads (and applies) the component's configuration.
- ExportConfiguration – saves the component's configuration.

 

The custom router reads the component configuration, in which we define which endpoints belong to which region (continent) and the type of routing to be applied. Based on this XML, the GetRoutes method creates the Route objects that are responsible for message delivery.

<Regions>
  <Region code="NA" roundRobin="true">
    <!-- North America -->
    <Endpoint>net.tcp://northamerica1/CustomerSearch/4</Endpoint>
    <Endpoint>net.tcp://northamerica2/CustomerSearch/5</Endpoint>
    <Endpoint>net.tcp://northamerica3/CustomerSearch/6</Endpoint>
  </Region>
  <Region code="AS" roundRobin="false">
    <!-- Asia -->
    <Endpoint>net.tcp://asia1/CustomerSearch/7</Endpoint>
    <Endpoint>net.tcp://asia2/CustomerSearch/8</Endpoint>
    <Endpoint>net.tcp://asia3/CustomerSearch/9</Endpoint>
  </Region>
  <Region code="EU" roundRobin="true">
    <!-- Europe -->
    <Endpoint>net.tcp://europe1/CustomerSearch/1</Endpoint>
    <Endpoint>net.tcp://europe2/CustomerSearch/2</Endpoint>
    <Endpoint>net.tcp://europe3/CustomerSearch/3</Endpoint>
  </Region>
</Regions>
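A minimal sketch of the parsing that ImportConfiguration might perform on this XML, assuming a simple Region container class (the property names match the GetRoutes code shown later in this post), could look like this:

using System.Collections.ObjectModel;
using System.Xml.Linq;

// Hypothetical container for one <Region> element of the configuration above.
public sealed class Region
{
    public Region() { Endpoints = new Collection<string>(); }

    public string Code { get; set; }
    public bool EnableRoundRobin { get; set; }
    public Collection<string> Endpoints { get; private set; }
}

// Parsing that ImportConfiguration could perform on the XML above.
private static Collection<Region> ParseRegions(string xml)
{
    var regions = new Collection<Region>();
    foreach (XElement element in XDocument.Parse(xml).Root.Elements("Region"))
    {
        var region = new Region
        {
            Code = (string)element.Attribute("code"),
            EnableRoundRobin = (bool)element.Attribute("roundRobin")
        };
        foreach (XElement endpoint in element.Elements("Endpoint"))
        {
            region.Endpoints.Add(endpoint.Value);
        }
        regions.Add(region);
    }
    return regions;
}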

 

The GetRoutes method returns a collection of Route objects. A Route is composed of a filter expression, an EndpointCollection and a priority.

 

How does the Sentinet engine process the Collection<Route> object?

The Sentinet engine processes the routes one by one, in the defined order (priority field), until a first match occurs. When the filter criteria are matched, the request message is sent to the first endpoint in the EndpointCollection. If the current endpoint throws an exception, Sentinet falls back to the next endpoint in the collection.

 

How do we populate the Collection<Route> to achieve our goals?

Fallback is automatically implemented by Sentinet whenever a route's endpoint collection contains more than one endpoint; creating a route with one single endpoint therefore disables the fallback mechanism.

 

The round-robin mechanism implemented in this demo is very simple. Basically, the distribution of the load between the endpoints is achieved by:

- creating a number of routes equal to the number of endpoints in that region (e.g. in Europe we have three endpoints, so three routes are created and added to the collection);

- giving every route a different filter expression based on a random number;

- sorting the items in every route's endpoint collection in a different order, to prioritize a different endpoint at every iteration.

 

Here is a visual representation of the routes collection that achieves round-robin with automatic fallback.

Automatic fallback without round robin

Round robin without the automatic fallback (not implemented in this example)

 

So what does the code do? Basically, it reads the collection of endpoints that we check-marked during the virtual service design and, if an endpoint is contained in the XML configuration, adds it to the continent-based route object.

 

Here is the GetRoutes code:

        public IEnumerable<Route> GetRoutes(IEnumerable<EndpointDefinition> backendEndpoints)
        {
            if (backendEndpoints == null) throw new ArgumentNullException("backendEndpoints");

            // Validate router configuration
            if (!Validate()) throw new ValidationException(ErrorMessage);

            // Collection of routes to be returned
            Collection<Route> routes = new Collection<Route>();

            // Ordered collection of outbound endpoints used in a single route
            Collection<EndpointDefinition> routeEndpoints = new Collection<EndpointDefinition>();

            // The order of a route in a routing table 
            byte priority = Byte.MaxValue;

            foreach (Region region in Regions)
            {
                // Collection can be reused as endpoints are copied in Route() constructor
                routeEndpoints.Clear();

                // collection of the backend endpoint per region 
                foreach (string endpointUri in region.Endpoints)
                {
                    // Find outbound endpoint by its AbsoluteURI
                    EndpointDefinition endpoint = backendEndpoints.FirstOrDefault(e => String.Equals(e.LogicalAddress.AbsoluteUri, endpointUri, StringComparison.OrdinalIgnoreCase));
                    if (endpoint == null) throw new InvalidOperationException(String.Format(CultureInfo.CurrentCulture, InvalidRouterConfiguration, endpointUri));
                    routeEndpoints.Add(endpoint);
                }

                if (region.EnableRoundRobin)
                {
                    // build a route for each endpoint in the region
                    var iEndpointIndex = 0;
                    foreach (string endpointUri in region.Endpoints)
                    {
                        // change the backend's endpoint order 
                        if (iEndpointIndex > 0) SortEndpoints(routeEndpoints, iEndpointIndex - 1);

                        // Configure message filter for the current route
                        var rrFilter = new GeoMessageFilter
                        {
                            ContinentCode = region.Code,
                            RoundRobin = region.EnableRoundRobin,
                            BalanceFactor = GetBalancingFactor(iEndpointIndex)
                        };

                        routes.Add(new Route(rrFilter, routeEndpoints, priority));
                        iEndpointIndex++;
                        priority--;
                    }
                }
                else
                {
                    // build a route for each region
                    var filter = new GeoMessageFilter
                    {
                        ContinentCode = region.Code,
                        RoundRobin = false
                    };
                    // endpoint Fallback scenario
                    routes.Add(new Route(filter, routeEndpoints, priority));
                }
                priority--;
            }

            return routes;
        }
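Note that GetRoutes relies on two helpers that are not part of this excerpt: GetBalancingFactor and SortEndpoints. Hypothetical implementations that are consistent with the filter logic shown below (and with the three-endpoints-per-region assumption of this scenario) could look like this:

        // Match-probability threshold for the i-th round-robin route of a region.
        // With three endpoints the thresholds are ~33, 50 and 100: the first route
        // catches roughly a third of the requests, the second half of the rest,
        // and the last one everything that remains.
        private static double GetBalancingFactor(int endpointIndex)
        {
            const int endpointsPerRegion = 3; // assumption for this scenario
            return 100.0 / (endpointsPerRegion - endpointIndex);
        }

        // Rotate the collection left by one position so each successive route
        // prioritizes a different endpoint (the index parameter is unused in
        // this simplified sketch).
        private static void SortEndpoints(Collection<EndpointDefinition> endpoints, int index)
        {
            EndpointDefinition first = endpoints[0];
            endpoints.RemoveAt(0);
            endpoints.Add(first);
        }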

And the MessageFilter class:

    public sealed class GeoMessageFilter : MessageFilter
    {
        #region Properties

        public String ContinentCode { get; set; }
        public bool RoundRobin { get; set; }
        public double BalanceFactor { get; set; }

        private static Random random = new Random(); 
        #endregion

        #region Methods

        public override bool Match(Message message)
        {
            var remoteProps = (RemoteEndpointMessageProperty) message.Properties[RemoteEndpointMessageProperty.Name];
            return Match(remoteProps.Address, ContinentCode);
        }


        private bool Match(string ipAddress, string continentCode)
        {
            var requestCountryCode = GeoLocation.GetCountryCode(ipAddress);
            var matchTrue = (CountryMap.GetContinentByCountryCode(requestCountryCode) == continentCode.ToUpperInvariant());

            if (matchTrue && RoundRobin)
            {
                if (random.Next(0, 100) > BalanceFactor) return false;
            }
            return matchTrue;
        }

        #endregion
    }

Register and configure

The custom component can be registered and graphically configured using the Sentinet Administrative Console. Go to the design tab of the virtual service and click Modify, then select the endpoints you want to be managed by the routing component. On the endpoint tree node, click the ellipsis button.

Add a new Custom Router, specifying a few parameters:

  • Name. The friendly name of the custom router (GeoRouter).
  • Assembly. The fully qualified assembly name that contains the router implementation (Codit.Demo.Sentinet.Router,Version=1.0.0.0,Culture=neutral,PublicKeyToken=null).
  • Type. The .NET class that implements the IRouter interface (Codit.Demo.Sentinet.Router.Geo.GeoRouter).
  • Default Configuration. The optional default configuration. In this example I left it blank; I specify the configuration when I use the router.

 

Select the router and set the custom configuration.

Save the process and wait for the next heartbeat so that the modifications are applied.

Test

To test the virtual service with the brand new custom router, this time I tried WcfStorm.Rest.

Test case #1 - All nine endpoints were available.

Messages were routed to the correct continent and the load was distributed among the backend services as expected.

In this image I collected the backend services monitor (top left) and the map that displays the sources of the service calls.

As you can see, this basic load balancer is not bulletproof, but the load is spread almost equally, which is acceptable for this proof of concept.

 

Test case #2 - Fallback test on the European region.

I shut down the europe1 and europe2 services, so only the europe3 service was active.

Thanks to the fallback mechanism, the virtual service always responded. In the monitor tab you can see the fallback in action.

 

Test case #3 - All the European backend services were stopped.

This means a route had a valid match filter and Sentinet tried to contact all the endpoints in its endpoint collection, without success on any attempt. The error message we got is reported below; notice that the address reported will differ depending on which route was hit.

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
The message could not be dispatched because the service at the endpoint address 
'net.tcp://europe3/CustomerSearch/3' is unavailable for the protocol of the address.
</string>

Test case #4 – No matching rules.

If there are no matching rules (e.g. sending messages from South America), the following error message is returned:

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
No matching MessageFilter was found for the given Message.</string>

Conclusion

Sentinet is designed to be extensible in multiple areas of the product. In this post I've demonstrated how to create a geography-based custom router that combines the round-robin and fault-tolerance features. In the next post I will discuss the Sentinet management APIs.

 

Cheers,

Massimo

Categories: Sentinet SOA WCF
written by: Massimo Crippa

Posted on Thursday, August 21, 2014 7:00 PM

by Jonas Van der Biest

Azure Search is an indexing service where you can search for content in documents. You could compare it with a personal Bing search engine for your own indexed documents. You can configure and search your documents using the Azure Search REST API. This blog post tells you more about it.

Note: Azure Search is currently in preview (at the time of writing); you might need to request access first using your Azure subscription in order to view/use this functionality. To test the service, you can use one of the free packages.

What is Azure Search

Azure Search is an indexing service where you can search for content in documents. You could compare it with a personal Bing search engine for your own indexed documents. You can configure and search your documents using the Azure Search REST API.

Typical usage flow:

  1. Create an Azure Search Service using the portal
  2. Create an Index schema
  3. Upload documents to the index
  4. Query the Azure Search for results 

Configuring the Azure Search Service using the preview portal

 

In order to start using Azure Search, you will need an Azure subscription (with access to the preview).

Creating the search service

Browse to https://portal.azure.com and follow the instructions as shown in the following screenshots.

Using Fiddler to configure your search

Once you have set up your Azure Search Service, we can start using the API. Note the api-version query parameter in the URLs below; you may need to change it as the API evolves.

Creating an index

An index defines the skeleton of your documents using a schema that includes a number of data fields. To create an index, issue a POST or PUT request with at least one valid field. Each request uses a JSON payload as the request body.
 

 POST Example

 
POST https://coditshared.search.windows.net/indexes?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net
Content-Type: application/json
Content-Length: 579

{
  "name": "products",
  "fields": [
    {"name": "productId", "type": "Edm.String", "key": true, "searchable": false, "filterable": true},
    {"name": "title", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "category", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "description", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "releaseDate", "type": "Edm.DateTimeOffset" },
    {"name": "isPromotion", "type": "Edm.Boolean" },
    {"name": "price", "type": "Edm.Double" }
  ]
}
  
You should receive a 201 Created response.

Modifying your index

Whenever you want to edit your index, just issue a new request with the updated index definition. For now only added fields are processed; changes to existing fields are not possible (you have to delete and recreate the index).

Deleting your index

DELETE https://coditshared.search.windows.net/indexes/products?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net
  

You should receive a 204 No Content response.

Retrieving existing indexes configured on your Azure Search Service

GET https://coditshared.search.windows.net/indexes/products?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net

Index new blob files

We previously created a new Azure Search index; it's time to populate it with some documents. Each document is uploaded to the Search Service API in JSON format using the following structure:

 
{
  "value": [
    {
      "@search.action": "upload (default) | merge | delete",
      "key_field_name": "unique_key_of_document", (key/value pair for key field from index schema)
      "field_name": field_value (key/value pairs matching index schema)
        ...
    },
    ...
  ]
}
  
You could of course use Fiddler to upload your documents but let's write some code to do this. 

Hands-on: Index new blob files using Azure WebJobs

Because "we" (developers) are lazy and try to automate as much as possible, we can use Azure WebJobs as a blob polling service and index each new blob file on the fly. For the sake of simplicity we will only send one file to the Azure Search API; you could however send a batch to the indexing service (max ~1000 documents at once, with a total size below 16 MB).

The following example is available for download on Github

Creating the Azure WebJob

More information on how to create a WebJob can be found here

Creating a WebJob is fairly easy, you just need to have the following static method to pickup new blob messages:

public static void ProcessAzureSearchBlob([BlobTrigger("azure-search-files/{name}")] TextReader inputReader) { }
  
The blob connection string is specified in the config, and the WebJob polls the blob container "azure-search-files". When you drop a new file in the container, the method is executed. Simple as that, and half of the work is already done! We created an index before, so our file should follow the index schema. Let's drop the following XML file:
<Product>
  <ProductId>B00FFJ0HUE</ProductId>
  <Title>ASUS EeePC 1016PXW 11-Inch Netbook</Title>
  <Category>Computers \ Tablets</Category>
  <Description>
    Graphics: Intel HD Graphics Gen7 4EU
    Cameras: 1.2MP
    Operating System: Windows 8.1
  </Description>
  <ReleaseDate>2013-11-12T00:00:01.001Z</ReleaseDate>
  <IsPromotion>false</IsPromotion>
  <Price>362.90</Price>
</Product>
The last and most interesting task is to parse the blob message and send it to the API. First we deserialize the XML to an object:
var deserializer = new XmlSerializer(typeof(Product), new XmlRootAttribute("Product"));
var product = (Product)deserializer.Deserialize(inputReader);
inputReader.Close();
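The Product class itself is not shown in this post; a plausible definition matching the index schema could look like the sketch below (ReleaseDate is kept as a string because XmlSerializer cannot serialize DateTimeOffset, and the camel-casing to the JSON field names is done by the contract resolver used next):

// Hypothetical POCO matching the 'products' index schema.
public class Product
{
    public string ProductId { get; set; }
    public string Title { get; set; }
    public string Category { get; set; }
    public string Description { get; set; }
    public string ReleaseDate { get; set; } // ISO-8601 timestamp
    public bool IsPromotion { get; set; }
    public double Price { get; set; }
}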
  
Then we serialize it to JSON using the Json.NET library:
var jsonString = JsonConvert.SerializeObject(product,Formatting.Indented, new JsonSerializerSettings { ContractResolver = new CamelCasePropertyNamesContractResolver() });
message.Content = JObject.Parse(jsonString);
  
Finally we adjust the JSON so it has the structure the Azure Search API expects:
var indexObject = new JObject();
var indexObjectArray = new JArray();
var itemChild = new JObject { { "@search.action", "upload" } };
itemChild.Merge(message.Content);
indexObjectArray.Add(itemChild);
indexObject.Add("value", indexObjectArray);
  
We send the JSON using an HttpClient.
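A minimal sketch of that call, assuming the api-key and URL from the earlier examples (synchronous and without error handling, for brevity):

// Requires: using System.Net.Http; using System.Text; using Newtonsoft.Json.Linq;
private static void PostToIndex(JObject indexObject)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("api-key", "DXXXXXXXXXXXXXXXXXXXXXXXXXXXXX0");

        // The batch JSON built above is the request body.
        var content = new StringContent(indexObject.ToString(), Encoding.UTF8, "application/json");

        HttpResponseMessage response = client.PostAsync(
            "https://coditshared.search.windows.net/indexes/products/docs/index?api-version=2014-07-31-Preview",
            content).Result;
        response.EnsureSuccessStatusCode();
    }
}

This is the request that was sent: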
POST https://coditshared.search.windows.net/indexes/products/docs/index?api-version=2014-07-31-Preview HTTP/1.1
api-key: DXXXXXXXXXXXXXXXXXXXXXXXXXXXXX0
Content-Type: application/json; charset=utf-8
Host: coditshared.search.windows.net
Content-Length: 431

{
  "value": [
    {
      "@search.action": "upload",
      "productId": "B00FFJ0HUE",
      "title": "ASUS EeePC 1016PXW 11-Inch Netbook",
      "category": "Computers \\ Tablets",
      "description": "\n    Graphics: Intel HD Graphics Gen7 4EU\n    Cameras: 1.2MP\n    Operating System: Windows 8.1\n  ",
      "releaseDate": "2013-11-12T00:00:01.001Z",
      "isPromotion": false,
      "price": 362.9
    }
  ]
}

And the response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; odata.metadata=minimal
Expires: -1
request-id: 1abe3320-de0e-47aa-ac65-c6aa15f303c3
elapsed-time: 15
OData-Version: 4.0
Preference-Applied: odata.include-annotations="*"
Date: Thu, 21 Aug 2014 07:20:29 GMT
Content-Length: 221

{"@odata.context":"https://coditshared.search.windows.net/indexes('products')/$metadata#Collection(Microsoft.Azure.Search.V2014_07_31_Preview.IndexResult)","value":[{"key":"B00FFJ0HUE","status":true,"errorMessage":null}]}

To make things easy, I've created a test client that sends the message to blob for you.

 

Querying the Azure Search Service

Querying is done using the Query API Key.

Lookup

You can do a simple lookup by the key of your index schema. Here we look up the previously uploaded document with productId B00FFJ0HUE:

GET https://coditshared.search.windows.net/indexes/products/docs/B00FFJ0HUE?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net

Simple query syntax

Querying is fairly simple; for a complete overview you should check the API reference. In this example the service holds five product documents, four of which have a price lower than 500. We can execute the following query to retrieve those four products:

GET https://coditshared.search.windows.net/indexes/products/docs?$filter=(price+lt+500)&api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0

Host: coditshared.search.windows.net

Azure Search also supports paging, so we can use $top and $skip to page through the results, as shown below.
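For example, a hypothetical second page of two results could be requested like this:

GET https://coditshared.search.windows.net/indexes/products/docs?$filter=(price+lt+500)&$top=2&$skip=2&api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net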

More info on the syntax to create specific queries: MSDN

Additionally the following characters may be used to fine-tune the query:

  • +: AND operator. E.g. wifi+luxury will search for documents containing both "wifi" and "luxury"
  • |: OR operator. E.g. wifi|luxury will search for documents containing either "wifi" or "luxury" or both
  • -: NOT operator. E.g. wifi -luxury will search for documents that have the "wifi" term and/or do not have "luxury" (and/or is controlled by searchMode)
  • *: suffix operator. E.g. lux* will search for documents that have a term that starts with "lux", ignoring case.
  • ": phrase search operator. E.g. while Roach Motel would search for documents containing Roach and/or Motel anywhere in any order, "Roach Motel" will only match documents that contains that whole phrase together and in that order (text analysis still applies).
  • ( ): precedence operator. E.g. motel+(wifi|luxury) will search for documents containing the "motel" term and either "wifi" or "luxury" (or both).

There is more than this…

Time to mention some extra features that are out of scope for this blogpost:

CORS Support

When using JavaScript to query the service, you expose your API query key to the end user. If you have a public website, someone might steal your key and use it on their own website. CORS prevents this by checking the HTTP headers to verify where the original request came from.

Hit highlighting

You could issue a search and get a fragment of the document where the keyword is highlighted. The Search API will respond with HTML markup (<em />).

Suggestions

Provides the ability to retrieve suggestions based on partial search input. Typically used in search boxes to provide type-ahead suggestions as users are entering text. More info: MSDN

Scoring Profiles

Scores matching documents at query time to position them higher / lower in ranking. More info: MSDN

Performance and pricing

When you are interested in using the service in production, you can pay for a dedicated Search Unit. Search Units allow scaling of QPS (queries per second), document count, document size and high availability. You can also use replicas to load-balance your documents.

And more…

 

Download Sample

Sample code can be found here: https://github.com/jvanderbiest/Codit.AzureSearch.Sample

Categories: Azure .NET
written by: Jonas Van der Biest

Posted on Friday, August 8, 2014 3:00 PM

by Glenn Colpaert

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points. The Sentinet API interfaces can also be used to extend its possibilities.


In some previous posts released on this blog we saw how to build a custom access rule expression and how to leverage WCF extensibility by setting up a virtual service with a custom endpoint behavior.

In this post I would like to explain another extensibility point of Sentinet: Custom Alert Handlers.

Alerts, or Violation Alerts, are triggered in Sentinet Service Agreements when certain configured SLA violations occur. More details about Sentinet Service Agreements can be found here.

Scenario

The scenario for this blog post is very simple: we will create our own custom Alert Handler class by implementing the appropriate interface, register it in Sentinet, and use it as an alert when a certain SLA is violated.

Creating the Custom Alert Handler

We start the implementation of the custom alert handler by creating a new class that implements the IAlertHandler interface, which is available through Nevatech.Vbs.Repository.dll.

This interface contains one single method, ProcessAlerts, where you put the logic that handles the alert after an SLA violation has occurred.


One more thing to do before we can implement the ProcessAlerts method is to add a reference to the Twilio REST API through NuGet. More information about Twilio can be found here.


The final implementation of our custom alert handler works as follows: we start by initializing some variables needed for Twilio; after that, everything is pretty straightforward. We read our handler configuration - here we chose a CSV configuration string, but you can perfectly well go for an XML configuration and parse it into an XmlDocument or XDocument.

Once we've read the receivers from the configuration, we create an alert message by concatenating the alert descriptions and then send an SMS message using the Twilio REST API.

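The screenshot with the original implementation is not reproduced here; below is a minimal, hypothetical sketch of the sending logic it describes. The credentials, sender number, class and method names are placeholders, and it assumes the Twilio REST client of that era; how ProcessAlerts receives its configuration and alert descriptions is defined by the SDK in Nevatech.Vbs.Repository.dll.

using System;
using System.Collections.Generic;
using Twilio;

// Hypothetical helper that a ProcessAlerts implementation could call.
public static class AlertSmsSender
{
    // Placeholders: your Twilio credentials and sender number.
    private const string AccountSid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
    private const string AuthToken = "your_auth_token";
    private const string FromNumber = "+15005550006";

    // csvReceivers is the handler configuration, e.g. "+32470000001,+32470000002".
    public static void Send(string csvReceivers, IEnumerable<string> alertDescriptions)
    {
        // Build one message by concatenating the alert descriptions.
        string body = String.Join(Environment.NewLine, alertDescriptions);

        var client = new TwilioRestClient(AccountSid, AuthToken);
        foreach (string receiver in csvReceivers.Split(','))
        {
            // Send one SMS per configured receiver.
            client.SendSmsMessage(FromNumber, receiver.Trim(), body);
        }
    }
}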

Register

The first step in registering your custom alert is to add your DLL(s) to the Sentinet installation folder, so they can be accessed by the "Nevatech.Vsb.Agent" process that is responsible for generating the alert. There is no need to add your DLL(s) to the GAC.

The next step is to register our custom alert in Sentinet itself and assign it to the desired Service Agreement.

In the Sentinet Repository window, click on Service Agreements, navigate to Alerts and choose Add Alert. The following screen will pop up.


The next step is to click the 'Add Custom Alert Action' button as shown below.


In the following screen we add all the parameters necessary to configure our custom alert:

  • Name: The friendly name of the custom alert.
  • Assembly: The fully qualified assembly name that contains the custom alert.
  • Type: The .NET class that implements the IAlertHandler interface.
  • Default Configuration: The optional default configuration. In this example I specified a CSV value with different phone numbers. You can access this value inside the class that implements the IAlertHandler interface.


Confirm the configuration by clicking 'OK'. In the next screen, be sure to select your newly configured custom alert.

You will end up with the following Configured Violation Alerts.


Testing

To test the alert I modified my Service Agreement metrics to a very low value (e.g. 'only 1 call allowed per minute') so I could easily trigger it. After calling my virtual service multiple times per minute, I received the following SMS:

Service agreement "CoditBlogAgreement"  has been violated one or more times. The most recent violation is reported for the time period started on 2014-07-24 18:15:00 (Romance Standard Time).

Conclusion

Sentinet is designed to be extensible in multiple areas of the product. In this post I've demonstrated how to create a custom alert handler that sends an SMS when an SLA has been violated.

 

Cheers,

Glenn Colpaert

Categories: .NET BizTalk Sentinet
written by: Glenn Colpaert