
Codit Blog

Posted on Thursday, September 4, 2014 3:57 PM

by Glenn Colpaert

Description of an issue that occurs when WCF.OutboundCustomHeaders is added to the message context when sending through the BizTalk SB-Messaging adapter.

For a hybrid scenario I'm currently working on, using on-premises WCF services and Azure Service Bus, it was necessary to have the WCF headers from the original call available as Brokered Message Properties in Azure Service Bus.

We created an SB-Messaging send port to send the message to Azure Service Bus and added the WCF namespace (http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties) in the Brokered Message Properties window.

 

We quickly ran into the following issue:

The adapter failed to transmit message going to send port "SpBlog" with URL "sb://coditblogdemo.servicebus.windows.net/demo". It will be retransmitted after the retry interval specified for this Send Port. Details: "System.InvalidOperationException: Envelope Version 'EnvelopeNone (http://schemas.microsoft.com/ws/2005/05/envelope/none)' does not support adding Message Headers."

At first I thought this error was related to the fact that we added the WCF namespace in the Brokered Message Properties window, but even when we removed the WCF namespace, the error still occurred.

In fact, we noticed that from the moment WCF.OutboundCustomHeaders is present in the context of the message, this error occurs on the SB-Messaging port.

Cause

Looking at the stack trace, it seems that the SB-Messaging adapter is built on top of the BizTalk WCF Adapter runtime, which makes perfect sense. The only downside is that the SB-Messaging adapter behaves the same as the WCF adapter when it comes to the OutboundCustomHeaders property.
When using the WCF adapter and adding WCF.OutboundCustomHeaders to the context, the value of this property gets added to the SOAP:Header of the outgoing message, and that is exactly what also happens with the SB-Messaging adapter. Let's call it a hidden feature of the SB-Messaging adapter.
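For reference, the value of WCF.OutboundCustomHeaders is the XML of the custom SOAP headers wrapped in a headers root element; a hypothetical example:

<headers>
  <!-- hypothetical header; everything inside ends up in the SOAP:Header of the outgoing message -->
  <OrderId xmlns="http://example.org/headers">12345</OrderId>
</headers>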

Solution

The solution for this problem was fairly simple and straightforward. We created a custom pipeline component, 'Context Copier', that copies the value of the OutboundCustomHeaders property to another context property. After assigning the value to the new context property, we write null to the OutboundCustomHeaders context property. That way OutboundCustomHeaders is removed from the context.

 

Of course, this is only the basic outline of the component. In our 'Context Copier' we added the possibility to define source and destination lists of properties that need to be copied. But the basic outline below will already solve the OutboundCustomHeaders issue when sending to Service Bus.
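A minimal sketch of what the Execute method of such a 'Context Copier' could look like, with hard-coded property names (the destination property name and namespace are illustrative, not from the original component):

using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    const string WcfPropertyNamespace =
        "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties";
    // Illustrative destination property namespace
    const string CustomPropertyNamespace = "http://example.org/custom-properties";

    // Read the OutboundCustomHeaders value from the message context
    object headers = pInMsg.Context.Read("OutboundCustomHeaders", WcfPropertyNamespace);

    if (headers != null)
    {
        // Copy the value to another context property...
        pInMsg.Context.Write("CustomHeaders", CustomPropertyNamespace, headers);

        // ...and write null so OutboundCustomHeaders is removed from the context
        pInMsg.Context.Write("OutboundCustomHeaders", WcfPropertyNamespace, null);
    }

    return pInMsg;
}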


After adding this pipeline component to our Send Pipeline, the issue was resolved and we could happily continue integration with Service Bus!

 

Cheers,

Glenn Colpaert

written by: Glenn Colpaert

Posted on Thursday, August 28, 2014 3:55 PM

by Massimo Crippa

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points, and through the Sentinet API interfaces.

In this 4th post I want to continue the Sentinet Extensibility series by exploring another possible customization: routing.

Routing

The Routing feature makes it possible to deliver messages received on the virtual inbound endpoint to more than one alternative endpoint of the same backend service. When the backend service exposes multiple endpoints, some (or all) of them can be included in message routing by marking them in the Sentinet Administrative Console. Note that at least two backend endpoints must be selected to activate routing.

The Sentinet routing feature improves API availability with automatic failover. This means that, in case of a communication failure, the Sentinet node falls back to the next available endpoint (this does not happen for a SOAP fault, because that is considered a valid response).

Sentinet supports four router types:

  • Round-Robin with priority or equal distribution of the load. This is the default routing mode; the fallback is automatic.
  • Simple Fault Tolerance. The routing mechanism always hits the endpoint with the highest priority and then, in case of communication failure, falls back to the endpoints with lower priority.
  • Multicast. A copy of the incoming message is delivered to all the endpoints.
  • Custom. The routing rules are defined in a custom .NET component.

Scenario

Here are the requirements for this scenario:

  • Sentinet is deployed behind a network load balancer and the customer doesn't want to pass through the NLB again.
  • The virtualized backend service has nine endpoints (three per continent) and the load should be routed depending on which continent the request is coming from.
  • The load routed to Europe and to North America should be equally distributed between the endpoints (simple round robin).
  • The load routed to Asia should always hit a specific endpoint and, in case of error, must fall back to the others (simple fault tolerance).

In short, we want to build a geography-based custom router that merges the Round-Robin and Fault-Tolerance types. To build the GeoRouter I started from the example that I found in the Sentinet SDK.

Build the custom Router

A Sentinet custom router is a regular .NET component that implements the IRouter interface (from Nevatech.Vbs.Repository.dll) and extends the MessageFilter abstract class.

The IRouter interface contains three methods:
- GetRoutes – where the routing rules are defined.
- ImportConfiguration – reads (and applies) the component's configuration.
- ExportConfiguration – saves the component's configuration.

 

The custom router reads the component configuration, in which we define which endpoint belongs to which region (continent) and the type of routing to be applied. Based on this XML, the GetRoutes method creates the Route objects that are responsible for message delivery.

<Regions>
  <Region code="NA" roundRobin="true">
    <!-- North America -->
    <Endpoint>net.tcp://northamerica1/CustomerSearch/4</Endpoint>
    <Endpoint>net.tcp://northamerica2/CustomerSearch/5</Endpoint>
    <Endpoint>net.tcp://northamerica3/CustomerSearch/6</Endpoint>
  </Region>
  <Region code="AS" roundRobin="false">
    <!-- Asia -->
    <Endpoint>net.tcp://asia1/CustomerSearch/7</Endpoint>
    <Endpoint>net.tcp://asia2/CustomerSearch/8</Endpoint>
    <Endpoint>net.tcp://asia3/CustomerSearch/9</Endpoint>
  </Region>
  <Region code="EU" roundRobin="true">
    <!-- Europe -->
    <Endpoint>net.tcp://europe1/CustomerSearch/1</Endpoint>
    <Endpoint>net.tcp://europe2/CustomerSearch/2</Endpoint>
    <Endpoint>net.tcp://europe3/CustomerSearch/3</Endpoint>
  </Region>
</Regions>
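The post only shows GetRoutes, but ImportConfiguration would parse this XML into the Region objects (Code, EnableRoundRobin, Endpoints) used below. A rough sketch, assuming the configuration arrives as a string and Region is a simple data class:

using System.Collections.ObjectModel;
using System.Linq;
using System.Xml.Linq;

public void ImportConfiguration(string configuration)
{
    Regions = new Collection<Region>();

    // Parse the <Regions> document shown above
    foreach (XElement regionElement in XElement.Parse(configuration).Elements("Region"))
    {
        Regions.Add(new Region
        {
            Code = (string)regionElement.Attribute("code"),
            EnableRoundRobin = (bool)regionElement.Attribute("roundRobin"),
            Endpoints = regionElement.Elements("Endpoint")
                                     .Select(e => e.Value.Trim())
                                     .ToList()
        });
    }
}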

 

The GetRoutes method returns a collection of Route objects. A Route is composed of a filter expression, an endpoint collection and a priority.

 

How does the Sentinet engine process the Collection<Route> object?

The Sentinet engine processes the routes one by one, in the order defined by the priority field, until a first match occurs. When the filter criteria are matched, the request message is sent to the first endpoint in the EndpointCollection. If the current endpoint throws an exception, Sentinet falls back to the next endpoint in the collection.

 

How do we populate the Collection<Route> to achieve our goals?

Sentinet implements the fallback automatically whenever a Route's endpoint collection contains more than one endpoint. Consequently, creating a route that contains one single endpoint disables the fallback mechanism.

 

The round-robin mechanism implemented in this demo is very simple. Basically, the distribution of the load between the endpoints is achieved by:

- Creating a number of routes equal to the number of endpoints in that region (e.g. in Europe we have 3 endpoints, so 3 routes are created and added to the collection).

- Giving every route a different filter expression based on a random number (with three routes, balancing factors of roughly 33, 50 and 100 presumably make the first matching route take a third of the traffic, the second half of the remainder, and the last the rest).

- Sorting the items in every route's endpoint collection in a different order, so that a different endpoint is prioritized at every iteration.

 

Here is a visual representation of the Routes collection for the three modes: round-robin + automatic fallback, automatic fallback without round robin, and round robin without automatic fallback (not implemented in this example).

 

So what does the code do? Basically, it reads the collection of endpoints that we checkmarked during the virtual service design and, if an endpoint is contained in the XML configuration, adds it to the continent-based route object.

 

Here is the GetRoutes code:

        public IEnumerable<Route> GetRoutes(IEnumerable<EndpointDefinition> backendEndpoints)
        {
            if (backendEndpoints == null) throw new ArgumentNullException("backendEndpoints");

            // Validate router configuration
            if (!Validate()) throw new ValidationException(ErrorMessage);

            // Collection of routes to be returned
            Collection<Route> routes = new Collection<Route>();

            // Ordered collection of outbound endpoints used in a single route
            Collection<EndpointDefinition> routeEndpoints = new Collection<EndpointDefinition>();

            // The order of a route in a routing table 
            byte priority = Byte.MaxValue;

            foreach (Region region in Regions)
            {
                // Collection can be reused as endpoints are copied in Route() constructor
                routeEndpoints.Clear();

                // collection of the backend endpoint per region 
                foreach (string endpointUri in region.Endpoints)
                {
                    // Find outbound endpoint by its AbsoluteURI
                    EndpointDefinition endpoint = backendEndpoints.FirstOrDefault(e => String.Equals(e.LogicalAddress.AbsoluteUri, endpointUri, StringComparison.OrdinalIgnoreCase));
                    if (endpoint == null) throw new InvalidOperationException(String.Format(CultureInfo.CurrentCulture, InvalidRouterConfiguration, endpointUri));
                    routeEndpoints.Add(endpoint);
                }

                if (region.EnableRoundRobin)
                {
                    // build a route for each endpoint in the region
                    var iEndpointIndex = 0;
                    foreach (string endpointUri in region.Endpoints)
                    {
                        // change the backend's endpoint order 
                        if (iEndpointIndex > 0) SortEndpoints(routeEndpoints, iEndpointIndex - 1);

                        // Configure message filter for the current route
                        var rrFilter = new GeoMessageFilter
                        {
                            ContinentCode = region.Code,
                            RoundRobin = region.EnableRoundRobin,
                            BalanceFactor = GetBalancingFactor(iEndpointIndex)
                        };

                        routes.Add(new Route(rrFilter, routeEndpoints, priority));
                        iEndpointIndex++;
                        priority--;
                    }
                }
                else
                {
                    // build a route for each region
                    var filter = new GeoMessageFilter
                    {
                        ContinentCode = region.Code,
                        RoundRobin = false
                    };
                    // endpoint Fallback scenario
                    routes.Add(new Route(filter, routeEndpoints, priority));
                }
                priority--;
            }

            return routes;
        }

And the MessageFilter class:

    public sealed class GeoMessageFilter : MessageFilter
    {
        #region Properties

        public String ContinentCode { get; set; }
        public bool RoundRobin { get; set; }
        public double BalanceFactor { get; set; }

        private static Random random = new Random(); 
        #endregion

        #region Methods

        public override bool Match(Message message)
        {
            var remoteProps = (RemoteEndpointMessageProperty) message.Properties[RemoteEndpointMessageProperty.Name];
            return Match(remoteProps.Address, ContinentCode);
        }


        private bool Match(string ipAddress, string continentCode)
        {
            var requestCountryCode = GeoLocation.GetCountryCode(ipAddress);
            var matchTrue = (CountryMap.GetContinentByCountryCode(requestCountryCode) == continentCode.ToUpperInvariant());

            if (matchTrue && RoundRobin)
            {
                if (random.Next(0, 100) > BalanceFactor) return false;
            }
            return matchTrue;
        }

        #endregion
    }

Register and configure

The custom component can be registered and graphically configured using the Sentinet Administrative Console. Go to the Design tab of the virtual service and click Modify, then select the endpoint you want to be managed by the routing component. On the endpoint tree node, click the ellipsis button.

Add a new Custom Router, specifying a few parameters:

  • Name. The friendly name of the custom router (GeoRouter).
  • Assembly. The fully qualified assembly name that contains the router implementation (Codit.Demo.Sentinet.Router,Version=1.0.0.0,Culture=neutral,PublicKeyToken=null).
  • Type. The .NET class that implements the IRouter interface (Codit.Demo.Sentinet.Router.Geo.GeoRouter).
  • Default Configuration. The optional default configuration. In this example I left it blank; I specify the parameters when the router is selected on the endpoint.

 

Select the router and set the custom configuration.

Save, and wait for the next heartbeat so that the modifications are applied.

Test

To test the virtual service with the brand new custom router, this time I tried WcfStorm.Rest.

Test case #1 - All nine endpoints were available.

The messages have been routed to the specific continent and the load has been distributed among the backend services as expected.

I collected in this image the backend services monitor (top left) and the map that displays the sources of the service calls.

As you can see, the basic load balancer is not bulletproof, but the load is spread almost equally, which is acceptable for this proof of concept.

 

Test case #2 - Fallback test on the European region.

I shut down the europe1 and europe2 services, so only the europe3 service was active.

Thanks to the fallback mechanism, the virtual service always responded. In the monitor tab you can see the fallback in action.

 

Test case #3 - All the European backend services were stopped.

This means that a route had a valid match filter and Sentinet tried to contact all the endpoints in the endpoint collection, without success on any attempt. The error message we got is reported below. Notice that the reported address will differ depending on which route has been hit.

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
The message could not be dispatched because the service at the endpoint address 
'net.tcp://europe3/CustomerSearch/3' is unavailable for the protocol of the address.
</string>

Test case #4 – No matching rules.

If there are no matching rules (e.g. sending messages from South America), the following error message is returned.

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
No matching MessageFilter was found for the given Message.</string>

Conclusion

Sentinet is designed to be extensible in multiple areas of the product. In this post I've demonstrated how to create a geography-based custom router that combines the round-robin and fault-tolerance features. In the next post I will discuss the Sentinet management APIs.

 

Cheers,

Massimo

Categories: Sentinet SOA WCF
written by: Massimo Crippa

Posted on Thursday, August 21, 2014 7:00 PM

by Jonas Van der Biest

Azure Search is an indexing service where you can search for content in documents, comparable to a personal Bing search engine for your own indexed documents. This blog post tells you more about it.

Note: Azure Search is currently in preview (at the time of writing); you might need to request access first using your Azure subscription in order to view/use this functionality. To test this service, you can use one of the free packages.

What is Azure Search

Azure Search is an indexing service where you can search for content in documents. You could compare it with your personal Bing search engine for your own indexed documents. You can configure and search your documents by using the Azure Search REST API.

Typical usage flow:

  1. Create an Azure Search Service using the portal
  2. Create an Index schema
  3. Upload documents to the index
  4. Query the Azure Search for results 

Configuring the Azure Search Service using the preview portal

 

In order to start using Azure Search, you will need an Azure subscription with access to the preview.

Creating the search service

Browse to https://portal.azure.com and follow the instructions as shown in the following screenshots.

Using Fiddler to configure your search

Once you have set up your Azure Search Service, we can start using the API. Note the api-version query string parameter; you might need to change it as the API evolves.

Creating an index

An index defines the skeleton of your documents using a schema that includes a number of data fields. To create an index, you can issue a POST or PUT request with at least one valid "field". Each POST request uses a JSON payload in order to specify the request body.
 

POST example

 
POST https://coditshared.search.windows.net/indexes?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net
Content-Type: application/json
Content-Length: 579

{
  "name": "products",
  "fields": [
    {"name": "productId", "type": "Edm.String", "key": true, "searchable": false, "filterable": true},
    {"name": "title", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "category", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "description", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "releaseDate", "type": "Edm.DateTimeOffset" },
    {"name": "isPromotion", "type": "Edm.Boolean" },
    {"name": "price", "type": "Edm.Double" }
  ]
}
  
You should receive a 201 Created response.

Modifying your index

Whenever you want to edit your index, just issue a new POST request with an updated index. For now, only newly added fields are applied; changes to existing fields are not possible (you have to delete and recreate the index).

Deleting your index

DELETE https://coditshared.search.windows.net/indexes/products?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net

You should receive a 204 No Content response.

Retrieving an existing index configured on your Azure Search Service

GET https://coditshared.search.windows.net/indexes/products?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net

Index new blob files

We previously created a new Azure Search index. It's time to populate the index with some documents. Each document needs to be uploaded to the Search Service API in JSON format using the following structure:

 
{
  "value": [
    {
      "@search.action": "upload (default) | merge | delete",
      "key_field_name": "unique_key_of_document", (key/value pair for key field from index schema)
      "field_name": field_value (key/value pairs matching index schema)
        ...
    },
    ...
  ]
}
  
You could of course use Fiddler to upload your documents, but let's write some code to do this.

Hands-on: Index new blob files using Azure WebJobs

Because "we" (developers) are lazy and try to automate as much as possible, we could use Azure WebJobs as a blob polling service and then index each new blob file on the fly. For the sake of simplicity, we will only send 1 file to the Azure Search API service, you could however send over a batch to the indexing service (~ max 1000 documents at once and size should be < 16MB).

The following example is available for download on Github

Creating the Azure WebJob

More information on how to create a WebJob can be found here.

Creating a WebJob is fairly easy: you just need the following static method to pick up new blob messages:

public static void ProcessAzureSearchBlob([BlobTrigger("azure-search-files/{name}")] TextReader inputReader) { }
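For reference, the WebJob host itself can be a minimal console application; a sketch using the WebJobs SDK (depending on the SDK version, the namespace is Microsoft.Azure.WebJobs or, in earlier previews, Microsoft.Azure.Jobs):

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The JobHost discovers public static methods with trigger attributes
        // (such as ProcessAzureSearchBlob above) and invokes them when a
        // matching blob appears in the container.
        var host = new JobHost();
        host.RunAndBlock();
    }
}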
  
The blob connection string is specified in the config and the WebJob will poll the blob container "azure-search-files". When you drop a new file in the blob container, the method will be executed. Simple as that, and half of the work is already done! We created an index before, so we should follow the index schema. Let's drop the following XML file:
<!-- element names reconstructed to match the index schema -->
<Product>
  <ProductId>B00FFJ0HUE</ProductId>
  <Title>ASUS EeePC 1016PXW 11-Inch Netbook</Title>
  <Category>Computers \ Tablets</Category>
  <Description>
    Graphics: Intel HD Graphics Gen7 4EU
    Cameras: 1.2MP
    Operating System: Windows 8.1
  </Description>
  <ReleaseDate>2013-11-12T00:00:01.001Z</ReleaseDate>
  <IsPromotion>false</IsPromotion>
  <Price>362.90</Price>
</Product>
The last and most interesting task is to parse the blob message and send it to the API. First we deserialize the XML to an object.
var deserializer = new XmlSerializer(typeof(Product), new XmlRootAttribute("Product"));
var product = (Product)deserializer.Deserialize(inputReader);
inputReader.Close();
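The Product class itself is not shown in the post; presumably it is a simple POCO along these lines, where the contract resolver used below turns the property names into the camelCase JSON fields of the index schema:

using System.Xml.Serialization;

[XmlRoot("Product")]
public class Product
{
    public string ProductId { get; set; }
    public string Title { get; set; }
    public string Category { get; set; }
    public string Description { get; set; }
    // Kept as a string so the ISO 8601 value round-trips exactly
    public string ReleaseDate { get; set; }
    public bool IsPromotion { get; set; }
    public double Price { get; set; }
}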
  
Then we serialize it to JSON using the Json.NET library ('message' is presumably a helper object from the sample solution):
var jsonString = JsonConvert.SerializeObject(product,Formatting.Indented, new JsonSerializerSettings { ContractResolver = new CamelCasePropertyNamesContractResolver() });
message.Content = JObject.Parse(jsonString);
  
Finally, we need to adjust the JSON format to the structure the Azure Search API expects.
var indexObject = new JObject();
var indexObjectArray = new JArray();
var itemChild = new JObject { { "@search.action", "upload" } };
itemChild.Merge(message.Content);
indexObjectArray.Add(itemChild);
indexObject.Add("value", indexObjectArray);
  
We send the JSON using an HttpClient. This is the request that was sent:
POST https://coditshared.search.windows.net/indexes/products/docs/index?api-version=2014-07-31-Preview HTTP/1.1
api-key: DXXXXXXXXXXXXXXXXXXXXXXXXXXXXX0
Content-Type: application/json; charset=utf-8
Host: coditshared.search.windows.net
Content-Length: 431

{
  "value": [
    {
      "@search.action": "upload",
      "productId": "B00FFJ0HUE",
      "title": "ASUS EeePC 1016PXW 11-Inch Netbook",
      "category": "Computers \\ Tablets",
      "description": "\n    Graphics: Intel HD Graphics Gen7 4EU\n    Cameras: 1.2MP\n    Operating System: Windows 8.1\n  ",
      "releaseDate": "2013-11-12T00:00:01.001Z",
      "isPromotion": false,
      "price": 362.9
    }
  ]
}

And the response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; odata.metadata=minimal
Expires: -1
request-id: 1abe3320-de0e-47aa-ac65-c6aa15f303c3
elapsed-time: 15
OData-Version: 4.0
Preference-Applied: odata.include-annotations="*"
Date: Thu, 21 Aug 2014 07:20:29 GMT
Content-Length: 221

{"@odata.context":"https://coditshared.search.windows.net/indexes('products')/$metadata#Collection(Microsoft.Azure.Search.V2014_07_31_Preview.IndexResult)","value":[{"key":"B00FFJ0HUE","status":true,"errorMessage":null}]}

To make things easy, I've created a test client that sends the message to blob storage for you.

 

Querying the Azure Search Service

Querying is done using the Query API Key.

Lookup

You can do a simple lookup by the key of your index schema. Here we can look up the previously uploaded document with productId B00FFJ0HUE.

GET https://coditshared.search.windows.net/indexes/products/docs/B00FFJ0HUE?api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net

Simple query syntax

Querying is fairly simple. For a complete overview, you should check the complete API reference. For this example, the service has 5 product documents. 4 of them have a price lower than 500. We can execute the following query to retrieve the 4 products.

GET https://coditshared.search.windows.net/indexes/products/docs?$filter=(price+lt+500)&api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net

Azure Search also supports paging, so we can $skip a number of results and limit the page size with $top.
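For example, a request for the second page of two results (hypothetical values):

GET https://coditshared.search.windows.net/indexes/products/docs?$filter=(price+lt+500)&$top=2&$skip=2&api-version=2014-07-31-Preview HTTP/1.1
Api-key: DXXXXXXXXXXXXXXXXXXXXXXXXX0
Host: coditshared.search.windows.net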

More info on the syntax to create specific queries: MSDN

Additionally, the following characters may be used to fine-tune the query:

  • +: AND operator. E.g. wifi+luxury will search for documents containing both "wifi" and "luxury"
  • |: OR operator. E.g. wifi|luxury will search for documents containing either "wifi" or "luxury" or both
  • -: NOT operator. E.g. wifi -luxury will search for documents that have the "wifi" term and/or do not have "luxury" (and/or is controlled by searchMode)
  • *: suffix operator. E.g. lux* will search for documents that have a term that starts with "lux", ignoring case.
  • ": phrase search operator. E.g. while Roach Motel would search for documents containing Roach and/or Motel anywhere in any order, "Roach Motel" will only match documents that contains that whole phrase together and in that order (text analysis still applies).
  • ( ): precedence operator. E.g. motel+(wifi|luxury) will search for documents containing the "motel" term and either "wifi" or "luxury" (or both).

There is more than this…

Time to mention some extra features that are out of scope for this blog post:

CORS Support

When using JavaScript to query the service, you expose your Query API key to the end user. If you have a public website, someone might just steal your key to use on their own website. CORS prevents this by checking the HTTP headers to verify where the original request came from.

Hit highlighting

You could issue a search and get a fragment of the document where the keyword is highlighted. The Search API will respond with HTML markup (<em />).

Suggestions

Provides the ability to retrieve suggestions based on partial search input. Typically used in search boxes to provide type-ahead suggestions as users are entering text. More info: MSDN

Scoring Profiles

Scores matching documents at query time to position them higher / lower in ranking. More info: MSDN

Performance and pricing

When you are interested in using the service in production, you can pay for dedicated Search Units. Search Units allow scaling of QPS (queries per second), document count, document size and high availability. You can also use replicas in order to load balance your documents.

And more…

 

Download Sample

Sample code can be found here: https://github.com/jvanderbiest/Codit.AzureSearch.Sample

Categories: Azure .NET
written by: Jonas Van der Biest

Posted on Friday, August 8, 2014 3:00 PM

by Glenn Colpaert

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points. The Sentinet API interfaces can also be used to extend Sentinet's capabilities.


In some previous posts released on this blog, we saw how to build a custom access rule expression and how to leverage WCF extensibility by setting up a virtual service with a custom endpoint behavior.

In this post I would like to explain another extensibility point of Sentinet: Custom Alert Handlers.

Alerts, or Violation Alerts, are triggered in the Sentinet Service Agreements when certain configured SLA violations occur. More details about Sentinet Service Agreements can be found here.

Scenario

The main scenario for this blog post is very simple. We will create our own custom alert handler class by implementing the IAlertHandler interface, register the custom alert handler in Sentinet and use it as an alert when a certain SLA is violated.

Creating the Custom Alert Handler

We start our implementation of the custom alert handler by creating a new class that implements the IAlertHandler interface. This interface is available through the Nevatech.Vbs.Repository.dll.

This interface contains one single method, ProcessAlerts, where you put your logic to handle the alert after the SLA violation has occurred.


One more thing to do before we can start our implementation of the ProcessAlerts method is to add a reference to the Twilio REST API through NuGet. More information about Twilio can be found here.


The final implementation of our custom alert handler is outlined below. We start by initializing some variables needed for Twilio. After that, everything is pretty straightforward: we read our handler configuration. Here we chose a CSV configuration string, but you can perfectly well go for an XML configuration and parse it into an XmlDocument or XDocument.

When we've read the receivers from the configuration, we create an alert message by concatenating the alert descriptions; after that, we send an SMS message using the Twilio REST API.

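In outline, the handler looks like the sketch below. The ProcessAlerts signature is simplified here (the real one is defined by IAlertHandler in Nevatech.Vbs.Repository.dll) and the Twilio credentials are placeholders:

using System.Collections.Generic;
using Twilio;

public class SmsAlertHandler // implements IAlertHandler
{
    // Simplified: the handler receives its configuration (a CSV list of
    // phone numbers) and the descriptions of the violation alerts.
    public void ProcessAlerts(string configuration, IEnumerable<string> alertDescriptions)
    {
        const string accountSid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"; // placeholder
        const string authToken = "your_auth_token";                     // placeholder
        const string sender = "+15005550006";                           // placeholder

        // Build the alert message by concatenating the alert descriptions
        string alertMessage = string.Join(" ", alertDescriptions);

        // Send an SMS to every receiver from the CSV configuration
        var client = new TwilioRestClient(accountSid, authToken);
        foreach (string receiver in configuration.Split(','))
        {
            client.SendSmsMessage(sender, receiver.Trim(), alertMessage);
        }
    }
}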

Register

The first step in registering your custom alert is to add your dll(s) to the Sentinet installation folder. This way the dll(s) can be accessed by the "Nevatech.Vsb.Agent" process, which is responsible for generating the alert. There is no need to add your dll(s) to the GAC.

The next step is to register our custom alert in Sentinet itself and assign it to the desired Service Agreement.

In the Sentinet Repository window, click on the Service Agreements, navigate to Alerts and then choose Add Alert.


Next, click the 'Add Custom Alert Action' button.


In the next screen we have to add all the necessary parameters to configure our custom alert:

  • Name: The friendly name of the Custom Alert
  • Assembly: The fully qualified assembly name that contains the custom alert
  • Type: The .NET class that implements the IAlertHandler interface
  • Default Configuration: The optional default configuration. In this example I specified a CSV value of different phone numbers. You can access this value inside your class that implements the IAlertHandler interface.


Confirm the configuration by clicking 'OK'. In the next screen, be sure to select your newly configured custom alert. You will end up with the configured violation alerts assigned to the Service Agreement.

Testing

To test this alert I modified my Service Agreement metrics to a very low value (e.g. 'only 1 call is allowed per minute') so I could easily trigger the alert. After I called my virtual service multiple times per minute, I received the following SMS:

Service agreement "CoditBlogAgreement"  has been violated one or more times. The most recent violation is reported for the time period started on 2014-07-24 18:15:00 (Romance Standard Time).

Conclusion

Sentinet is designed to be extensible in multiple areas of the product. In this post I’ve demonstrated how to create a Custom Alert Handler that will send an SMS when an SLA has been violated.

 

Cheers,

Glenn Colpaert

Categories: .NET BizTalk Sentinet
written by: Glenn Colpaert

Posted on Monday, August 4, 2014 4:00 PM

by Brecht Vancauwenberghe

This blog post will demonstrate how to create a highly available FTP server on the Microsoft Azure platform.

There are a lot of posts available teaching you how to create a single-box FTP server on Microsoft Azure, using FileZilla Server or Internet Information Services. About a year ago, at the announcement of the general availability of Infrastructure as a Service, I tested a single-box FTP server using FileZilla Server. The main benefit of FileZilla is its easy installation and configuration.

 

Single-box configuration:

The following posts will guide you in creating a single-box server using FileZilla:

http://digitalmindignition.wordpress.com/2012/11/28/azure-vm-role-with-filezilla-as-ftp-server

http://huberyy.wordpress.com/2012/08/03/set-up-a-ftp-server-in-windows-azure-virtual-machine-with-filezilla-no-coding

 

Azure virtual machines need maintenance from time to time, and you should always avoid a single point of failure: reasons enough for a highly available configuration. This post is not a step-by-step guide showing you how to create a VNET, VMs, DFS and so on.
You need some experience with the Azure platform and Windows Server; I will help you put the pieces together for a highly available FTP server running on the Azure platform.

 

Highly Available Topology:

 

 

 
Topology remarks:

  • I'm using DFS (Distributed File System) as a highly available network share. I also tried Azure File Services, a new Windows Azure feature that is currently in preview; it is very useful for shared storage between virtual machines, but IIS and FileZilla are not able to work with it, so it is not useful for our purposes.
 
Virtual Network and Virtual Machines:

  • Create a new Virtual Network, choose a region, create an affinity group, and so on.
  • When your Virtual Network has been provisioned, create two new Virtual Machines and add them both to the VNET.
  • When creating the first VM, create an availability set. When creating the second VM, join the availability set you just created.
  • Use the same cloud service name for the second VM as the one you defined when creating the first VM.
  • When creating a new VM, the first thing you should do is change the Windows Update and UAC settings.
  • Attach an empty data disk to both Virtual Machines and format it (it will be used for DFS and FTP file storage).
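The portal steps above can also be scripted with the Azure PowerShell service management cmdlets; a rough sketch where all names are placeholders:

# The first VM creates the cloud service and the availability set;
# the second VM joins both (all names are placeholders).
New-AzureVMConfig -Name "ftp-vm1" -InstanceSize Small -ImageName "yourimagename" -AvailabilitySetName "ftp-avset" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "youradmin" -Password "yourpassword" |
    New-AzureVM -ServiceName "yourvmservicename" -VNetName "yourvnet" -AffinityGroup "youraffinitygroup"

New-AzureVMConfig -Name "ftp-vm2" -InstanceSize Small -ImageName "yourimagename" -AvailabilitySetName "ftp-avset" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "youradmin" -Password "yourpassword" |
    New-AzureVM -ServiceName "yourvmservicename"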
 
Active Directory:
 

FTP and IIS:

  • Install IIS and FTP service on both servers.
  • Configure the FTP services (publish FTP services).
  • Create a DFS share and set up shared IIS configuration (you can use a shared config when doing the initial setup; when you go live you will need to disable it due to the port settings).
 
You will find all the information to do this on the following sites:
 
FTP users and folders:
 
Should you have problems remembering how to configure user access in IIS, the following posts will guide you.
If you work with a domain instead of, for example, local users, you need to create a folder with the domain name in IIS; don't forget this!
 

https://community.rackspace.com/products/f/25/t/491

http://technet.microsoft.com/en-us/library/jj134201.aspx

 

Azure Load Balancer:
 
Open port 21 and load balance it between the two machines. Don't forget to join the load-balanced port on the second virtual machine!
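The load-balanced endpoint can be created per VM with the same Azure PowerShell module used further below; a sketch with illustrative endpoint and probe names:

# Run this against both VMs so they join the same load-balanced set
Get-AzureVM -ServiceName "yourvmservicename" -Name "yourvm" |
    Add-AzureEndpoint -Name "FTP" -Protocol tcp -LocalPort 21 -PublicPort 21 -LBSetName "ftp-lb" -ProbeProtocol tcp -ProbePort 21 |
    Update-AzureVM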
 

 

 

Now here is where the magic happens to enable passive FTP. I was not able to find any solution for this on the internet, but the following did the trick. (You could use the public instance IP (PIP), but then your Windows Explorer clients would not be able to connect.)

You open a specific range of passive FTP ports on the first VM, and another specific range of ports on the second server. This way FTP traffic will always be routed to the right server.

To avoid a lot of manual work, you can use PowerShell to open a range of ports:

Import-Module azure
Add-AzureAccount

Select-AzureSubscription "yoursubscription"
$vm = Get-AzureVM -ServiceName "yourvmservicename" -Name "yourvm"

for ($i = 6051; $i -le 6100; $i++)
{
	$name = "FTP-Dynamic-" + $i
	Write-Host -Fore Green "Adding: $name"
	Add-AzureEndpoint -VM $vm -Name $name -Protocol "tcp" -PublicPort $i -LocalPort $i
}
 
# Update VM.
Write-Host -Fore Green "Updating VM..."
$vm | Update-AzureVM 
Write-Host -Fore Green "Done."

 

Now you can specify the machine-specific port range in IIS on each machine; secondly, you need to specify the public IP of your cloud service in IIS. Note that deallocating both virtual machines will make you lose your public IP (since the latest Azure announcements it is possible to reserve your IP in Azure).

Don't forget to allow FTP through your Windows Firewall!

Categories: Azure Infrastructure