Codit Blog

Posted on Tuesday, March 24, 2015 6:01 PM

by Sam Vanhoutte

In this post, I'm sharing some of my thoughts about the new Azure App Service, which was announced today by Scott Guthrie and Bill Staples.

Today, Scott Guthrie and Bill Staples announced an interesting new Azure service: Azure App Service.  It is actually a set of services, combined under one umbrella, that allows customers to build rich, business-oriented applications.  Azure App Service is now the home for:

  • Azure Web Apps (previously called Azure Websites)
  • Azure Mobile Apps (previously Mobile Services)
  • Azure Logic Apps (the new 'workflow' style apps)
  • Azure API Apps (previously announced as Microservices)

It goes without saying that Logic Apps and API Apps will be the most important ones for integration people.  The Azure Microservices were first announced publicly at the Integrate 2014 event, and it's clear that integration is at the core of App Service, which should make us happy.

Codit has been part of the private preview program 

Codit has been actively involved in the private preview programs and we want to congratulate the various teams on the excellent job they have done.  They have really been listening to the feedback and made incredible improvements over the past months.  While everyone knows there is still a lot to do, it seems they are ready to take in more feedback, as everything is public now.

My personal advice would be to look at it with an open mind, knowing that a lot of things will be totally different from what we've been doing over the past 10-15 years (with BizTalk).  I'm sure a lot of things will (have to) happen in order to run mission-critical, loosely coupled integration solutions on App Service.  But I am confident they will happen.

Is this different from what was said at Integrate 2014?

As Integrate 2014 was solely focused on BizTalk Services, the other services (such as Web Sites and Media apps) were not mentioned.  But most of what we saw and heard back then has now made it to the public preview.

  • Azure Microservices are now called API apps and are really just web APIs in your favorite language, enriched with Swagger metadata and version control.  These API apps can be published to a gallery (a public gallery might come later on) or directly to a resource group in your subscription.
  • The workflows (they used to be called Flow Apps) are now called Logic Apps.  These allow us to combine various API apps from the gallery in order to instrument and orchestrate logical applications (or workflows).

Important concepts

I tried to list the most important concepts below.

All of the components are built on top of Azure Websites.  This way, they can benefit from the existing out-of-the-box capabilities there:

  • Hybrid connectivity: Hybrid Connections or Azure Virtual Networking.  Both of these are available for any API app you want to write, and the choice is up to the user of your API app!
  • Auto-scaling: do you want to scale your specific API app automatically?  That's perfectly possible now.  If you have a transformation service deployed and the end-of-month message load needs to be handled, all should be fine!
  • A new pricing model (more pay-per-use, compared to BizTalk Services)
  • And many more: speed of deployment, the new portal, etc.

API Apps really form the core of this platform.  They are RESTful APIs, with Swagger metadata that is used to model and link the workflows (you can flow values from one API app to another in Logic Apps).

API Apps can be published to the NuGet-based gallery, or directly to a resource group in your subscription.  Once publishing to the public gallery becomes possible, other users will be able to leverage your API app in their own applications and Logic Apps by provisioning an instance of that package into their own subscription.  That means all the cost and deployment hassle is on the user of your API app.

Where I hope for improvements

As I mentioned, this is a first version of a very new service.  A lot of people have been working on this and the service will still be shaped over the coming months.  It seems the teams are taking feedback seriously, and that's definitely a positive thing.  This is the feedback I posted on UserVoice.  If you agree, don't hesitate to go and vote for these ideas!

  • Please think about ALM.  Doing everything in the portal (including rules, mapping, etc.) is nice, but for real enterprise scenarios, we need version and source control.  I would really love to see a Visual Studio designer experience for more complex workflows as well.  The portal is nice for self-service and easy workflows, but it takes some time and is limited in nature, compared to the pro-dev experience in Visual Studio.
    Vote here
  • Separate configuration from flow or business logic.
    If we have a look at the JSON file that makes up a Logic App, we can see that, throughout the entire file, references are added to the actual runtime deployment of the various API apps.  We also see values for the various connectors in the JSON structure.  It would really help (when deploying one flow to various staging slots) to separate configuration and runtime values from the actual flow logic.
    Vote here
  • Management
    Right now it is extremely difficult to see the various "triggers" and to stop/start them.  With BizTalk, we have receive locations that we can all see in one view and stop/start; the same goes for send ports.  Now all of that is encapsulated in the Logic App, and it would really be a good thing to provide more "management views".  As an example, we have customers with more than 1000 receive endpoints.  I want to get them in an easy-to-handle, searchable overview.
    Vote here
  • The usability in the browser has improved a lot, but I still believe it would make sense to make the cards or shapes smaller (or collapsible).  This way, we'll get a better overview of the end-to-end flow, and that will be needed in the typical complex workflows we build today (including exception handling, etc.).
    Vote here

More posts will definitely follow in the coming weeks, so please stay tuned!

Categories: Azure BizTalk
written by: Sam Vanhoutte

Posted on Wednesday, March 18, 2015 4:44 PM

by Jonas Van der Biest

At Codit Products we often research new technologies. It is important that these new technologies are easy to use, testable and maintainable. I would like to share my experience with JavaScript testing on a continuous Team Foundation Build server.

In this blog series, we first focus on how to automatically test and get code coverage from JavaScript. Part 2 describes how to execute JavaScript tests on the build server and get the test results next to your build package. Part 3 explains how to use other testing frameworks, like NUnit, instead of the default MSTest framework on the build server.

For earlier blog posts in this series, see part 1 or part 2.

Setting up NUnit for TFSBuild

Read this if you are using an existing build controller:

Previously, test adapters needed to be installed as custom DLLs ("Version control custom assemblies" on your build controller). This is not needed anymore: you can just add the test adapter as a NuGet package to your test project. However, custom assemblies take precedence over the NuGet package DLLs, so it's recommended to check that folder for older versions.

Single test project

Install the following NuGet package: "NUnit TestAdapter including NUnit 2.6.3 framework", version 1.2.0.

Multiple test projects

Install the NUnit package (v2.6.4) on each test project, and install the NUnitTestAdapter package (v1.2.0) on one of the test projects (any one will do).

We're using NUnit because it has "better" support on TFS Build than other frameworks like xUnit.

Running tests for a specific category

Each unit test framework has a different way of grouping unit tests into categories (MSTest uses "TestCategory", NUnit uses "Category", xUnit uses "Traits", Chutzpah uses "Module", ...). Some test frameworks have adapters available that map the TFS Build properties to the properties used by your framework. However, in my experience, they only work for NUnit (and of course MSTest).

On TFS Build, when you create a new build definition, you can specify which test category to use by filling in the "Test case filter" field. The filter uses the syntax "TestCategory=CategoryName". There are four filter expression keys you can use (FullyQualifiedName, Name, Priority and TestCategory), and you can combine multiple clauses with the pipe character (e.g. "TestCategory=Unit|TestCategory!=Acceptance").
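
With NUnit, this mapping simply comes down to putting a Category attribute on your tests. A minimal sketch of a test that a "TestCategory=Unit" filter would pick up (the class and method names are just placeholders):

    // NUnit 2.x test; the NUnit TestAdapter maps [Category] to TestCategory on TFS Build
    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        [Category("Unit")]
        public void Add_ReturnsSum()
        {
            Assert.AreEqual(4, 2 + 2);
        }
    }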

Running tests with code coverage

Change your build definition to set CodeCoverageEnabled to True.

 

When we run our build, we get the following result:

When you click on "Coverage Result" on the successful build, you can download the Code Coverage file. This file needs to be opened in Visual Studio. You can then have a detailed view of the coverage. If you double click a tree node, you see the actual code fragment that was covered (it's highlighted in blue).

 

In this last part of the blog series we discussed how to use a framework other than MSTest to run our tests on TFS Build. MSTest is not the only framework that supports test categories and code coverage; you can use another framework like NUnit too.

 

Sample Solution Files:

Download [22.5 MB]

Categories: Products
written by: Jonas Van der Biest

Posted on Wednesday, March 18, 2015 4:43 PM

by Jonas Van der Biest

At Codit Products we often research new technologies. It is important that these new technologies are easy to use, testable and maintainable. I would like to share my experience with JavaScript testing on a continuous Team Foundation Build server.

In this blog series, we first focus on how to automatically test and get code coverage from JavaScript. Part 2 describes how to execute JavaScript tests on the build server and get the test results next to your build package. Part 3 explains how to use other testing frameworks, like NUnit, instead of the default MSTest framework on the build server.

Continuous Integration with JavaScript

This post is part 2 of 3; it is recommended that you read part 1 first.

If you don't have a Team Foundation Server you can always create one for free (up to 5 users) on the Visual Studio Online website.

The build controller that is used by default on Visual Studio Online has a bunch of software installed. Luckily for us, Node.js is part of it, which makes it fairly easy to execute our Grunt command (see part 1). Make sure you have checked all the files into TFS (config files, node_modules) and that no files were excluded from check-in.

Execute a batch file in the Pre-Build event

The Pre-Build event is self-explanatory; we'll add the following command:
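
(A sketch of such a command; the exact relative path to gruntCI.bat is an assumption and depends on where your project sits relative to the solution folder.)

    if "$(ConfigurationName)" == "Release" call "$(ProjectDir)..\gruntCI.bat" release "$(ProjectDir).."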


We will only execute this command if we're in the Release configuration. The first argument is the task name that needs to be executed on the grunt task runner. The second argument is needed to get hold of our solution directory.

Note: we use the $(ProjectDir) macro because we are building the project files and not the solution. The reason we are building project files is to take advantage of the "Package" functionality in MSBuild (see below).

To execute grunt on the build server, we will create a gruntCI.bat file in our solution directory and execute it in the Pre-Build event of our main project. The batch file should perform the following actions:

  • Remove the read-only flag on the workspace files retrieved by the build server
  • Install the grunt CLI on the build server
  • Execute the grunt "release" task

You might ask yourself why we don't just install the grunt-cli and execute the grunt task using the "grunt [task]" command. This used to work on the build server, but Microsoft changed its security policy and you can no longer add entries to the PATH variable. Therefore, we need to locate the grunt.cmd file and run grunt from there. We also need to use the "call" statement so the batch file continues executing after grunt completes.
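
A minimal sketch of such a batch file, assuming the grunt CLI is installed locally into node_modules (the batch file in the sample download may differ in detail):

    @echo off
    rem gruntCI.bat - %1 is the grunt task to run, %2 is the solution directory
    cd /d %2

    rem 1. remove the read-only flag that the build server sets on the workspace files
    attrib -r *.* /s /d

    rem 2. install the grunt CLI locally (the PATH cannot be modified on the hosted build server)
    call npm install grunt-cli

    rem 3. run the requested grunt task through the locally installed grunt.cmd
    call node_modules\.bin\grunt.cmd %1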

Create the build definition

The build definition is pretty standard: we will build our ASP.NET application project and the associated test project.

Configure the source settings to point to the solution folder. If you don't use this source path, you will have to keep this in mind when configuring the PowerShell script (see later).

Select both projects and make sure they build in Release mode for "AnyCpu" (note that there is no space between "Any" and "Cpu"; TFS adds a space automatically, but you'll need to remove it afterwards).

If possible, you should use TfvcTemplate.12.xaml as the build process template. It enables us to execute a PowerShell script after MSBuild completes (see later).

Running the build

Run the build, pray it succeeds, and then have a look at the MSBuild log.

If you navigate to View Log and MsBuild Log File, the output of the batch file should be logged as such:

Publishing build artifacts

Now that our build is running, how can we easily check whether our tests ran fine? And where can we have a look at the code coverage report? It requires a bit more configuration to get a clean package.

Configuring the drop location

When we configure a drop location, TFS Build will copy the output of the build to this directory.

Configure the MSBuild Package target for multiple projects

With the "Package" build target, we can let TFS Build create a Web Deployment Package. You will end up with a zip file that contains your website and that can easily be imported into IIS. We will now take advantage of this built-in functionality.

Not all Visual Studio project types have this "Package" target. ASP.NET supports it; you just need to trigger it using MSBuild. But with this target comes another pitfall: the test project doesn't support it. With MSBuild you cannot specify a build target for one specific project file; all of the project files being built will try to use that target. In short, if you specify the "Package" target as an MSBuild argument, the build will fail. To make it work, you should create a new MSBuild target in both project files and have it execute the desired child targets. If we then pass this new target as the MSBuild argument, everything will run smoothly. The following code makes this clear.

In our ASP.NET project we add the following target:
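
(A minimal sketch: the new target simply chains the existing Build and Package targets.)

    <Target Name="TfsOnlineBuild" DependsOnTargets="Build;Package" />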

In our ASP.NET test project we add the following target:
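
(Here the sketch only chains the Build target, since the test project has no Package target.)

    <Target Name="TfsOnlineBuild" DependsOnTargets="Build" />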

Now we can specify an MSBuild argument for that target: /t:TfsOnlineBuild. Also set the Output location to "PerProject"; it will make it easier for the PowerShell script to copy our project output to a clean folder.

Adding a PowerShell script to organize our drop location

Creating the PowerShell script

When you've followed all the previous steps, the current drop location should look like:

We are only interested in the logs and the Web Deployment zip package. There is a bunch of other stuff (like the *.dll files in the _PublishedWebsites folder, the test project, ...) as you can see in the screenshot.

As we are writing a PowerShell script anyway, we can also organize things a bit better. The script will do the following (a sketch is shown after the list):

  • Look for any packaged zip file and copy it to a "Package" folder in the output root
  • Copy the JavaScript test output folder to a new root folder: "JavaScript_Output"
  • Remove all "trash", keeping only the important root folders like "logs", "Package" and "JavaScript_Output"
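
A rough sketch of such a script (the "javascript_output" source folder and the folder layout are assumptions; the script in the sample download differs in detail):

    # OrganizeDropFolder.ps1 - tidy the build output before TFS Build copies it to the drop location.
    # The TF_BUILD_* environment variables are provided by the TfvcTemplate.12.xaml build template.
    $binaries = $env:TF_BUILD_BINARIESDIRECTORY

    # 1. copy every Web Deployment zip package into a "Package" folder in the output root
    New-Item -ItemType Directory -Force -Path "$binaries\Package" | Out-Null
    Get-ChildItem -Path $binaries -Filter *.zip -Recurse | Copy-Item -Destination "$binaries\Package"

    # 2. copy the JavaScript test/coverage output from the sources directory into its own root folder
    #    (adjust this relative path to match your own source settings)
    Copy-Item -Recurse -Force "$env:TF_BUILD_SOURCESDIRECTORY\javascript_output" "$binaries\JavaScript_Output"

    # 3. delete everything except the folders we want to keep
    Get-ChildItem -Path $binaries |
        Where-Object { $_.Name -notin @("logs", "Package", "JavaScript_Output") } |
        Remove-Item -Recurse -Force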

We add a new file to our solution folder ("OrganizeDropFolder.ps1"), add the content and check it in to TFS. If you didn't follow the instructions about the "Source Settings", now is the time to change the path in the script: change line 18 to match your relative path.

Then we reference the script in our build definition, so it runs after MSBuild is complete.

When we now build again, we see the following PowerShell log in the build logs:



And we get the desired result in our TFS drop folder:

 

In this part we discussed how to configure TFS Build to execute your JavaScript tasks and organize the drop folder. If you have questions, feel free to use the comments below.

See part 3 for the conclusion of this series.

 

Sample Solution Files:

Note: execute the "npm install" command to restore the node_modules folder. The packages are not included.

Download [22.5 MB]

Categories: Products
written by: Jonas Van der Biest

Posted on Wednesday, March 18, 2015 4:31 PM

by Jonas Van der Biest

At Codit Products we often research new technologies. It is important that these new technologies are easy to use, testable and maintainable. I would like to share my experience with JavaScript testing on a continuous Team Foundation Build server.

In this blog series, we first focus on how to automatically test and get code coverage from JavaScript. Part 2 describes how to execute JavaScript tests on the build server and get the test results next to your build package. Part 3 explains how to use other testing frameworks, like NUnit, instead of the default MSTest framework on the build server.

JavaScript testing using Grunt, Karma, Istanbul and PhantomJS

You should be able to follow along if you have an existing JavaScript application, or you can use the sample application (an ASP.NET Web API project) attached at the end of this blog post series. The sample application consists of a simple AngularJS application with a configured JavaScript unit test, and it includes all the configuration files needed to run the tests and tasks.

Setting up our environment

Installation of Node.js

The installation of Node.js is pretty straightforward. Open the Node.js command prompt once the installation has completed.

Using NPM

We will use NPM (Node Package Manager) to install all required JavaScript packages. Packages installed with NPM should not directly be part of our Visual Studio project. Instead, we will install them in the solution folder, not in a project folder. You can compare it with NuGet, which also has a "packages" folder at the solution directory level, except NPM creates a "node_modules" folder.

When installing packages with NPM, you can install them globally (-g) or locally (--save-dev, which also records them as a devDependency in package.json). When they are installed locally, they are stored with your solution, so other developers (and the build server) will have them out of the box.

Creating a "package.json" file (NPM Configuration file)

The package.json file contains all the metadata of the installed packages; "devDependencies" lists all the locally installed modules. You can generate a package.json file using the npm wizard command "npm init",
or you can start with a basic file like this (it includes all the packages we'll use later on):
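
(The package name below is a placeholder and the version specifiers are left open; pin them to the versions you actually install.)

    {
      "name": "javascript-ci-sample",
      "version": "1.0.0",
      "devDependencies": {
        "grunt": "*",
        "grunt-contrib-jshint": "*",
        "grunt-karma": "*",
        "jasmine-core": "*",
        "karma": "*",
        "karma-chrome-launcher": "*",
        "karma-coverage": "*",
        "karma-htmlfile-reporter": "*",
        "karma-jasmine": "*",
        "karma-phantomjs-launcher": "*",
        "phantomjs": "*"
      }
    }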

You could do a package restore locally by entering "npm install". It will check what packages are already installed and install the missing devDependencies. If you want to install them manually, you could clean the devDependencies and install them by entering: "npm install [PackageName] --save-dev" for each package.

Creating Unit Tests

JasmineJS

The unit test library we will use is JasmineJS v2.1. In the demo application, I've added a very simple unit test that you can use; a sketch of what such a test looks like is shown below. You can download the JavaScript project or ASP.NET project at the end of this post.
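
(The calculator spec below is just a placeholder, not the sample application's actual test.)

    // a minimal Jasmine 2.x spec
    describe('calculator', function () {
        it('adds two numbers', function () {
            expect(1 + 2).toEqual(3);
        });
    });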

Karma

Karma is a test runner developed by the AngularJS team. The framework is able to execute unit tests in the scope of a browser. To use karma in the command line, install the CLI: "npm install -g karma-cli".

Karma makes use of a config file where you can specify a bunch of options. Right now it is important to configure Karma to use Jasmine as our test framework and karma-coverage as our code coverage runner, and to tell it where our test files are.

You can create a config file with the command "karma init karma.conf.js", but the following config should get you running:
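
(A minimal sketch of such a config; the file paths are assumptions, so point them at your own application and test scripts.)

    // karma.chrome.conf.js - minimal sketch
    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine'],
            files: [
                'MyWebApp/Scripts/angular.js',
                'MyWebApp/Scripts/angular-mocks.js',
                'MyWebApp/app/**/*.js',
                'MyWebApp/tests/**/*.spec.js'
            ],
            browsers: ['Chrome'],
            autoWatch: true,
            singleRun: false
        });
    };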


We name this config karma.chrome.conf.js and place it next to the other configuration files in the solution root directory.

Running Unit Tests using Chrome

Once our configuration is done, we should make sure our tests run without problems. To run your JavaScript tests, browse to the solution root and execute the following command: "karma start karma.chrome.conf.js".

You should notice that an instance of Chrome is starting up to execute the unit tests in the browser context. Because of the autoWatch and singleRun properties in the configuration, you could just keep writing code and your tests will be executed each time you save a file.

Running Unit Tests using PhantomJS

Because we are preparing to run our tests on a build server, we will also create a config file to execute our tests in a PhantomJS context. PhantomJS is a headless WebKit browser scriptable with a JavaScript API; in other words, a browser without a GUI.

Create the following karma config file ("karma.phantomjs.conf.js"). Please note that we will do a single run (singleRun) and will not watch for file changes (autoWatch). Of course we need to change the browser to PhantomJS too.
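
(The same illustrative file paths as in the Chrome config are used here.)

    // karma.phantomjs.conf.js - minimal sketch
    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine'],
            files: [
                'MyWebApp/Scripts/angular.js',
                'MyWebApp/Scripts/angular-mocks.js',
                'MyWebApp/app/**/*.js',
                'MyWebApp/tests/**/*.spec.js'
            ],
            browsers: ['PhantomJS'],
            autoWatch: false,
            singleRun: true
        });
    };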

Html Reporting and Code Coverage

There are plugins that can give you some insight into your JavaScript unit tests. The first plugin that we will use with Karma is "karma-htmlfile-reporter". When we run our tests on the build server, we can read this report afterwards so we can be certain that all our tests succeeded.

The second plugin is "karma-coverage", which wraps "Istanbul" for use with Karma. It generates a detailed report of the code that has been covered by the tests.

In order to run those two tools, we make some modifications to "karma.phantomjs.conf.js":
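
(A sketch of the resulting config; the "javascript_output" output folder and the file paths are assumptions.)

    // karma.phantomjs.conf.js - with HTML reporting and code coverage added
    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine'],
            files: [
                'MyWebApp/Scripts/angular.js',
                'MyWebApp/Scripts/angular-mocks.js',
                'MyWebApp/app/**/*.js',
                'MyWebApp/tests/**/*.spec.js'
            ],
            browsers: ['PhantomJS'],
            autoWatch: false,
            singleRun: true,
            reporters: ['progress', 'html', 'coverage'],
            preprocessors: {
                'MyWebApp/app/**/*.js': ['coverage']   // instrument the application code only
            },
            htmlReporter: {
                outputFile: 'javascript_output/test-results.html'
            },
            coverageReporter: {
                type: 'html',
                dir: 'javascript_output/coverage/'
            }
        });
    };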

 

We will use this configuration file later in Grunt.

About Grunt

Grunt is a task runner: using configuration, you can group multiple tasks and let Grunt handle their execution. It is extremely useful because you can define different tasks that combine a number of actions. On the build server we might always minify, uglify and run unit tests, while during development we might only want to run unit tests and check syntax using JSHint. Most popular packages support the Grunt task runner; for a complete list check http://gruntjs.com/plugins.

Installing Grunt-CLI using NPM

In a Node.js command prompt, enter the command "npm install -g grunt-cli". This will install the CLI globally so we can use grunt from anywhere.

Creating a "Gruntfile.js" (Grunt Configuration file)
When grunt is running, it will lookup the configuration from the Gruntfile.js. This file should be placed next to package.json in the solution root.
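
(A minimal sketch with a "test" and a "release" task; the JSHint file pattern is illustrative.)

    // Gruntfile.js
    module.exports = function (grunt) {
        grunt.initConfig({
            karma: {
                phantomjs: {
                    configFile: 'karma.phantomjs.conf.js'
                }
            },
            jshint: {
                all: ['MyWebApp/app/**/*.js']
            }
        });

        grunt.loadNpmTasks('grunt-karma');
        grunt.loadNpmTasks('grunt-contrib-jshint');

        // "grunt test" only runs the unit tests; "grunt release" also checks syntax with JSHint
        grunt.registerTask('test', ['karma:phantomjs']);
        grunt.registerTask('release', ['jshint', 'karma:phantomjs']);
    };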


Let's run the same unit tests using grunt. Execute the command: "grunt test".

Basically it's doing the same thing as before, but now we can combine multiple tasks in one Grunt task, which will be useful in our build process scenario. For example, if we run the "grunt release" task, it will run our unit tests but also validate our application code for possible syntax errors.

How we can execute these commands on the build server is described in part 2 of this blog series.

 

Sample Solution Files:

Note: execute the "npm install" command to restore the node_modules folder. The packages are not included.

Download [22.5 MB]

Categories: Products
written by: Jonas Van der Biest

Posted on Friday, March 13, 2015 1:07 PM

by Glenn Colpaert

Over the last couple of months I've been playing around with SAP on my local development machine, and I've received a couple of requests and questions from people who would also like to have their own SAP instance to try out some scenarios.
Based on these requests I have written a series of blog posts that covers the entire installation and setup of a local development SAP instance.

The series contains the following parts:

Please note that this setup is intended for a development/playground environment; this guide is not meant to provide best practices or a production-ready setup.
If you require any assistance or have questions, don't hesitate to put them in the comment section of this blog or ping me through Twitter or email.

In this fourth and last part I will show you how you can insert master data in your SAP system. This master data can then be used to exchange data with any external system.

All these steps are executed inside the SAP environment, so you might want to check out the following blog post (SAP Transactions), which gives you a basic overview of the SAP interface.
Be aware that the first time you execute a certain transaction on the new SAP system, it takes some time to load, so you will need some patience.

Add the Flight Data Application

The flight data application provides a number of business processes that can be used in the SAP system. More information can be found here.

Start SAP Transaction SE38.
In the Program field type SAPBC_DATA_GENERATOR and click the Execute button in the top left corner.

Click the Execute button in the top left corner to start the import of the data.

Click Yes to confirm deletion of the old table entries.

Wait for the program to complete, this might take a couple of minutes.

Add the Flight Customer IDOC Data

The flight customer IDOC data is a standard ABAP report available within the SAP System.

Start SAP Transaction SE38.
In the Program field type SAPBC_FILL_FLCUST_IDOC and click the Execute button in the top left corner.
Fill in the Customer Number; this will assign the ABAP report to that specific customer.
Click the Execute button in the top left corner to start the import of the data.

The following confirmation shows that the IDOC was successfully created.

Test the Flight Data

You can test this flight data through the normal SAP transactions, like SE37 for RFC testing and WE19 for IDOC testing.

 

Your system is now ready to start processing IDOCs and BAPI calls. There is tons of information out there on setting up and configuring your BizTalk system to communicate with SAP, so now you can actually start building that part and create some nice interfaces with SAP.
Thank you for taking the time to read all the blog posts in this series. If you have questions, do not hesitate to contact me.

Cheers,

Glenn Colpaert