
Codit Blog

Posted on Wednesday, March 18, 2015 4:44 PM

by Jonas Van der Biest

At Codit Products we often do research on new technologies. It is important that these new technologies are easy to use, testable and maintainable. I would like to share my experience with JavaScript testing on a continuous Team Foundation Build server.

In this blog series, we first focus on how to automatically test and get code coverage from JavaScript. Part 2 describes how to execute JavaScript tests on the build server and publish the test results next to your build package. Part 3 explains how to use other testing frameworks, like NUnit, instead of the default MSTest framework on the build server.

For earlier blog posts in this series, see part 1 or part 2.

Setting up NUnit for TFSBuild

Read this if you are using an existing build controller:

Previously, test adapters needed to be installed as custom DLLs (version-controlled custom assemblies on your build controller). This is no longer needed; you can simply add the test adapter as a NuGet package to your test project. However, if you have custom assemblies, they take precedence over the NuGet package DLLs, so it's recommended to check this folder for older versions.

Single test project

Install the following NuGet package: NUnit TestAdapter including NUnit 2.6.3 framework (v1.2.0).

Multiple test projects

Install the NUnit package (v2.6.4) on each test project, and install the NUnitTestAdapter package (v1.2.0) on just one of the test projects (any one will do), as shown below.
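For example, from the Package Manager Console (a sketch; run the first command against every test project and the second against only one of them):

    PM> Install-Package NUnit -Version 2.6.4
    PM> Install-Package NUnitTestAdapter -Version 1.2.0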

We're using NUnit because it has better support on TFS Build than other frameworks such as xUnit.

Running tests for a specific category

Each unit test framework has a different way of grouping unit tests into categories (MSTest uses "TestCategory", NUnit uses "Category", xUnit uses "Traits", Chutzpah uses "Module", ...). Some test frameworks have adapters available that map the TFS Build properties to the properties used by your framework. However, in my experience, they only work for NUnit (and of course MSTest).
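As a minimal sketch (the class and method names below are made up), an NUnit test that belongs to the "Unit" category looks like this; the MSTest equivalent would use the [TestCategory("Unit")] attribute instead:

    using NUnit.Framework;

    [TestFixture]
    public class OrderCalculatorTests
    {
        // Grouped in the "Unit" category, so a filter like "TestCategory=Unit" picks it up on TFS Build.
        [Test]
        [Category("Unit")]
        public void Total_Is_Sum_Of_Order_Lines()
        {
            Assert.AreEqual(3, 1 + 2);
        }
    }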

On TFS Build, when you create a new build definition, you can specify which test category to use by filling in the "Test case filter". This filter uses the following syntax: "TestCategory=CategoryName". There are four filter expression keys you can use ("FullyQualifiedName", "Name", "Priority", "TestCategory"), and you can combine multiple expressions with the pipe character (e.g. "TestCategory=Unit|TestCategory!=Acceptance").

Running tests with code coverage

Change your build definition to enable code coverage by setting CodeCoverageEnabled to True.

 

When we run our build, we get the following result:

When you click on "Coverage Result" on the successful build, you can download the code coverage file. This file needs to be opened in Visual Studio, which gives you a detailed view of the coverage. If you double-click a tree node, you see the actual code fragment that was covered (highlighted in blue).

 

In this last part of the blog series we have discussed how to use a framework other than MSTest to run our tests on TFS Build. Not only MSTest delivers support for test categories and code coverage; you can use another framework like NUnit too.

 

Sample Solution Files:

Download [22.5 MB]

Categories: Products
Tags: ALM
written by: Jonas Van der Biest

Posted on Wednesday, March 18, 2015 4:43 PM

by Jonas Van der Biest

At Codit Products we often do research on new technologies. It is important that these new technologies are easy to use, testable and maintainable. I would like to share my experience with JavaScript testing on a continuous Team Foundation Build server.

In this blog series, we first focus on how to automatically test and get code coverage from JavaScript. Part 2 describes how to execute JavaScript tests on the build server and publish the test results next to your build package. Part 3 explains how to use other testing frameworks, like NUnit, instead of the default MSTest framework on the build server.

Continuous Integration with JavaScript

This post is part 2 of 3; it is recommended that you read part 1 first.

If you don't have a Team Foundation Server you can always create one for free (up to 5 users) on the Visual Studio Online website.

The build controller that is used by default on TFS Online has a bunch of software installed. Luckily for us, node.js is also installed, which makes it fairly easy to execute our Grunt command (see part 1). Make sure you have checked all the required files (config files, node_modules) into TFS and that no files were excluded from check-in.

Execute batch file in the Pre-Build event

The Pre-Build event is self-explanatory; we'll add the following command:
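A sketch of what the Pre-Build event command could look like (the relative path to gruntCI.bat and the task name are assumptions based on the setup described below):

    if "$(ConfigurationName)" == "Release" call "$(ProjectDir)..\gruntCI.bat" release "$(ProjectDir)"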


We will only execute this command if we're in the Release configuration. The first argument is the task name that needs to be executed on the grunt task runner. The second argument is needed to get hold of our solution directory.

Note: we use the $(ProjectDir) macro because we are building the project files and not the solution. The reason we are building project files is to take advantage of the "Package" functionality of MSBuild (see below).

To execute grunt on the build server, we will create a gruntCI.bat file in our solution directory. We will execute this file in the Pre-Build event of our main project. The batch file should perform the following actions (a sketch follows the list):

  • Remove the read-only flag on the workspace files retrieved by the build server
  • Install grunt cli on the build server
  • Execute grunt "release" task
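A hypothetical sketch of gruntCI.bat; the folder layout (solution directory one level above the project directory, grunt.cmd under node_modules\.bin) is an assumption you may need to adjust:

    @echo off
    rem %1 = grunt task to run (e.g. release), %2 = $(ProjectDir) passed in by the Pre-Build event

    rem derive the solution directory from the project directory
    set SOLUTIONDIR=%~2..\

    rem 1. remove the read-only flag the build server sets on the workspace files
    attrib -R "%SOLUTIONDIR%*" /S

    rem 2. install the grunt CLI locally (we are not allowed to modify PATH on the hosted build server)
    pushd "%SOLUTIONDIR%"
    call npm install grunt-cli

    rem 3. locate grunt.cmd in node_modules and run the requested task via "call"
    call "%SOLUTIONDIR%node_modules\.bin\grunt.cmd" %~1
    popd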

You could ask yourself why we aren't just installing the grunt-cli globally and executing the grunt task with the "grunt [task]" command. This used to work on the build server; however, Microsoft changed its security policy on the build server and you cannot add entries to the PATH variable anymore. Therefore, we need to look for the grunt.cmd file and run grunt from there. We also need to use the "call" statement in order to continue executing the batch file after grunt completes.

Create the build definition

The build definition is pretty standard: we will build our ASP.NET application project and the associated test project.

Configure the source settings to point to the solution folder. If you don't use this source path, you will have to keep this in mind when configuring the PowerShell script (see later).

Select both projects and make sure they build in Release mode for "AnyCpu" (notice there is no space between "Any" and "Cpu"; TFS adds a space automatically, but you'll need to remove it afterwards).

If possible, you should use TfvcTemplate.12.xaml as the build process template. It enables us to execute a PowerShell script after MSBuild completes (see later).

Running the build

Run the build, pray it succeeds and then have a look at the MSBuild log.

If you navigate to View Log and then MSBuild Log File, the output of the batch file should be logged as such:

Publishing build artifacts

Now that our build is running, how can we easily check whether our tests ran fine? Where can we have a look at the code coverage report? It requires a bit more configuration to get a clean package.

Configuring the drop location

When we configure a drop location, TFS Build will copy the output of the build to this directory.

Configure the MSBuild Package target for multiple projects

With the "Package" build target, we can let TFS Build create a Web Deployment Package. You will end up with a zip file that contains your website and that can easily be imported into IIS. We will now take advantage of this built-in functionality.

Not all Visual Studio project types have this "Package" target. ASP.NET supports it; you just need to trigger it using MSBuild. But with this target comes another pitfall: the test project doesn't support it. With MSBuild you cannot specify a build target for one specific project file; all project files in the build will try to use that target. In short, if you specify the "Package" target as an MSBuild argument, the build will fail. To make it work, you should create a new MSBuild target in both project files and have it execute the desired child targets. If we then pass this new target as an MSBuild argument, everything runs smoothly. The following code makes this clear.

In our ASP.NET project we add the following target:
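A minimal sketch of such a target in the web project file (the child targets to call may differ in your setup; the target name TfsOnlineBuild is the one used below in the build definition):

    <!-- ASP.NET web project (.csproj): build and create the Web Deployment Package -->
    <Target Name="TfsOnlineBuild">
      <CallTarget Targets="Build" />
      <CallTarget Targets="Package" />
    </Target>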

In our ASP.NET test project we add the following target:
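And a sketch of the matching target in the test project file, which only builds because test projects have no "Package" target:

    <!-- Test project (.csproj): only build, no Package target available -->
    <Target Name="TfsOnlineBuild">
      <CallTarget Targets="Build" />
    </Target>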

Now we can pass the new target as an MSBuild argument (e.g. /t:TfsOnlineBuild) in the build definition. Also set the Output location to "PerProject"; it will make it easier in PowerShell to copy our project to a clean folder.

Adding a PowerShell script to organize our drop location

Creating the PowerShell script

When you've followed all the previous steps, the current drop location should look like:

We are only interested in the logs and the Web Deployment zip package. There is a bunch of other stuff (like the *.dll files in the _PublishedWebsites folder, the test project, ...) as you can see in the screenshot.

As we are writing a PowerShell script anyway, we can also organize things a bit better. The PowerShell script will do the following (a sketch follows the list):

  • Look for any packaged zip file and copy the folder to the root folder.
  • It will copy the test output folder from javascript to a new root folder: "JavaScript_Output"
  • It will remove all "trash" but keep the important root folders like "logs", "Package" and "JavaScript_Output"
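A hypothetical sketch of what OrganizeDropFolder.ps1 could look like; the environment variables come from the TfvcTemplate.12.xaml template, but the folder names and relative paths are assumptions you should adapt:

    # Drop and source folders provided by the TfvcTemplate.12.xaml build process template
    $dropFolder   = $env:TF_BUILD_DROPLOCATION
    $sourceFolder = $env:TF_BUILD_SOURCESDIRECTORY

    # 1. Copy every Web Deployment zip package to a "Package" folder in the drop root
    $packages = Get-ChildItem -Path $dropFolder -Filter *.zip -Recurse
    New-Item -ItemType Directory -Force -Path "$dropFolder\Package" | Out-Null
    $packages | Copy-Item -Destination "$dropFolder\Package"

    # 2. Copy the JavaScript test output (generated by the grunt release task, see part 1)
    #    to a "JavaScript_Output" folder in the drop root (relative path is an assumption)
    $jsOutput = Join-Path $sourceFolder "JavaScript_Output"
    if (Test-Path $jsOutput) {
        Copy-Item -Path $jsOutput -Destination "$dropFolder\JavaScript_Output" -Recurse
    }

    # 3. Remove everything else, keeping only the important root folders
    Get-ChildItem -Path $dropFolder |
        Where-Object { $_.Name -notin @('logs', 'Package', 'JavaScript_Output') } |
        Remove-Item -Recurse -Force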

We add a new file to our solution folder ("OrganizeDropFolder.ps1"), add the content and check it in to TFS. If you didn't follow the instructions about the "Source Settings", now is the time to change the path in the script: you should change line 18 to match your relative path.

Then we reference the script in our build definition, so it runs after MSBuild is complete.

When we now build again, we see the following PowerShell log in the build logs:



And we get the desired result in our TFS drop folder:

 

In this part we have discussed how to configure TFS Build to execute your JavaScript tasks and organize the drop folder. If you have questions, feel free to use the comments below.

See part 3 for the conclusion of this series.

 

Sample Solution Files:

Note: execute the "npm install" command to restore the node_modules folder. The packages are not included.

Download [22.5 MB]

Categories: Products
Tags: ALM
written by: Jonas Van der Biest

Posted on Wednesday, March 18, 2015 4:31 PM

by Jonas Van der Biest

At Codit Products we often do research on new technologies. It is important that these new technologies are easy to use, testable and maintainable. I would like to share my experience with JavaScript testing on a continuous Team Foundation Build server.

In this blog series, we first focus on how to automatically test and get code coverage from JavaScript. Part 2 describes how to execute JavaScript tests on the build server and publish the test results next to your build package. Part 3 explains how to use other testing frameworks, like NUnit, instead of the default MSTest framework on the build server.

JavaScript testing using Grunt, Karma, Istanbul and PhantomJS

You should be able to follow along if you have an existing JavaScript application, or you could use the sample application (an ASP.NET Web API project) attached at the end of this blog post series. The sample application consists of a simple AngularJS application with a configured JavaScript unit test; it also includes all configuration files needed to run the tests and tasks.

Setting up our environment

Installation of Node.js

The installation of Node.js is pretty straightforward. Open the Node.js command prompt once the installation is completed.

Using NPM

We will use NPM (Node Package Manager) to install all required JavaScript packages. Packages installed with NPM should not directly be a part of our Visual Studio project. Instead we will install them in the solution folder, not in a project folder. You can compare it with NuGet, which also has a "packages" folder at solution directory level; NPM creates a "node_modules" folder instead.

When installing packages with NPM, you can install them globally (-g) or locally as a development dependency (--save-dev). When they are installed locally, they are stored with your solution, so other developers (and the build server) will have them out of the box.

Creating a "package.json" file (NPM Configuration file)

The package.json file contains all the metadata of the installed packages; "devDependencies" lists all the locally installed modules. You could generate a package.json file using the npm (wizard) command "npm init", or you could start from a basic file like the one below (it includes all packages we'll use later on):
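A sketch of such a file; the version numbers are only indicative for the packages used throughout this series (Karma, Jasmine, PhantomJS, coverage and HTML reporting, Grunt and JSHint):

    {
      "name": "javascript-testing-demo",
      "version": "1.0.0",
      "devDependencies": {
        "grunt": "~0.4.5",
        "grunt-karma": "~0.10.1",
        "grunt-contrib-jshint": "~0.11.0",
        "jasmine-core": "~2.1.3",
        "karma": "~0.12.31",
        "karma-jasmine": "~0.3.5",
        "karma-chrome-launcher": "~0.1.7",
        "karma-phantomjs-launcher": "~0.1.4",
        "karma-coverage": "~0.2.7",
        "karma-htmlfile-reporter": "~0.1.2",
        "phantomjs": "~1.9.15"
      }
    }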

You could do a package restore locally by entering "npm install". It will check what packages are already installed and install the missing devDependencies. If you want to install them manually, you could clean the devDependencies and install them by entering: "npm install [PackageName] --save-dev" for each package.

Creating Unit Tests

JasmineJS

The unit test library we will use is JasmineJS v2.1. In the demo application, I've added a very simple unit test that you can use. You can download the JavaScript project or the ASP.NET project at the end of this post.
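The test in the sample solution may differ, but a very simple Jasmine spec looks like this:

    // calculator.spec.js - minimal Jasmine example (names are made up)
    describe('calculator', function () {
        it('adds two numbers', function () {
            var result = 1 + 2;
            expect(result).toBe(3);
        });
    });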

Karma

Karma is a test runner developed by the AngularJS team. The framework is able to execute unit tests in the scope of a browser. To use Karma from the command line, install the CLI: "npm install -g karma-cli".

Karma uses a config file in which you can specify a bunch of options. Right now it is important to configure Karma to use Jasmine as our test framework, karma-coverage as our code coverage runner, and to tell it which files contain our tests.

You can create a config file with the following command: "karma init karma.conf.js", but the config below should get you running:
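A sketch of such a config file; the file patterns are assumptions and need to match your own project layout:

    // karma.chrome.conf.js
    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine'],

            // application and test files (patterns are assumptions)
            files: [
                'DemoApp/Scripts/angular.js',
                'DemoApp/Scripts/angular-mocks.js',
                'DemoApp/app/**/*.js',
                'DemoApp/tests/**/*.spec.js'
            ],

            reporters: ['progress'],
            browsers: ['Chrome'],

            // keep watching files and re-run the tests on every save
            autoWatch: true,
            singleRun: false
        });
    };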


We name this config karma.chrome.conf.js and place it next to the other configuration files in the solution root directory.

Running Unit Tests using Chrome

Once our configuration is done, we should make sure our tests run without problems. To run your JavaScript tests, browse to the solution root and execute the following command: "karma start karma.chrome.conf.js".

You should notice that an instance of Chrome is starting up to execute the unit tests in the browser context. Because of the autoWatch and singleRun properties in the configuration, you could just keep writing code and your tests will be executed each time you save a file.

Running Unit Tests using PhantomJS

Because we are preparing to run our tests on a build server, we will also create a config file to execute our tests in a PhantomJS context. PhantomJS is a headless WebKit browser scriptable with a JavaScript API; essentially a browser without a GUI.

Create the following Karma config file ("karma.phantomjs.conf.js"). Please note that we will do a single run (singleRun) and will not watch for file changes (autoWatch). Of course we need to change the browser to PhantomJS too:
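A sketch of the PhantomJS variant, assuming the same file list as the Chrome config:

    // karma.phantomjs.conf.js
    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine'],
            files: [
                'DemoApp/Scripts/angular.js',
                'DemoApp/Scripts/angular-mocks.js',
                'DemoApp/app/**/*.js',
                'DemoApp/tests/**/*.spec.js'
            ],
            reporters: ['progress'],
            browsers: ['PhantomJS'],

            // run once and exit: suitable for a build server
            autoWatch: false,
            singleRun: true
        });
    };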

HTML Reporting and Code Coverage

There are plugins that can give you some insight into your JavaScript unit tests. The first plugin that we will use with Karma is "karma-htmlfile-reporter". When we run our tests on the build server, we can read this report afterwards, so we are certain that all our tests succeeded.

The second plugin is "karma-coverage". This plugin wraps "Istanbul" for use with Karma. It generates a detailed report of the code that has been covered by the tests.

In order to run those two tools, we make some modifications to "karma.phantomjs.conf.js":
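A sketch of the modified configuration; only the reporter-related settings change compared to the previous file, and the output paths below are assumptions:

    // karma.phantomjs.conf.js (modified for HTML reporting and code coverage)
    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine'],
            files: [
                'DemoApp/Scripts/angular.js',
                'DemoApp/Scripts/angular-mocks.js',
                'DemoApp/app/**/*.js',
                'DemoApp/tests/**/*.spec.js'
            ],
            browsers: ['PhantomJS'],
            autoWatch: false,
            singleRun: true,

            // instrument the application code (not the tests) with Istanbul
            preprocessors: {
                'DemoApp/app/**/*.js': ['coverage']
            },

            // console progress, karma-htmlfile-reporter and karma-coverage
            reporters: ['progress', 'html', 'coverage'],

            // HTML test report we can inspect after the build
            htmlReporter: {
                outputFile: 'JavaScript_Output/test-results.html'
            },

            // Istanbul code coverage report
            coverageReporter: {
                type: 'html',
                dir: 'JavaScript_Output/coverage/'
            }
        });
    };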

 

We will use this configuration file later in Grunt.

About Grunt

Grunt is a task runner: using configuration, you can group multiple tasks and let Grunt handle their execution. It is extremely useful because you can define different tasks that combine a number of actions. On the build server we may always want to minify, uglify and run unit tests, while during development we might only want to run unit tests and check syntax using JSHint. Most popular packages support the Grunt task runner; for a complete list check http://gruntjs.com/plugins.

Installing Grunt-CLI using NPM

In a node.js command prompt, enter the following command: "npm install -g grunt-cli". This will install the CLI globally so we can use grunt from everywhere.

Creating a "Gruntfile.js" (Grunt Configuration file)
When grunt is running, it will lookup the configuration from the Gruntfile.js. This file should be placed next to package.json in the solution root.
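A sketch of a Gruntfile.js that registers the "test" and "release" tasks used in this series, based on the grunt-karma and grunt-contrib-jshint plugins (file patterns are assumptions):

    // Gruntfile.js
    module.exports = function (grunt) {
        grunt.initConfig({
            karma: {
                // run the Jasmine tests once using the PhantomJS configuration
                unit: {
                    configFile: 'karma.phantomjs.conf.js'
                }
            },
            jshint: {
                // syntax-check the application code
                all: ['DemoApp/app/**/*.js']
            }
        });

        grunt.loadNpmTasks('grunt-karma');
        grunt.loadNpmTasks('grunt-contrib-jshint');

        // "grunt test": only run the unit tests
        grunt.registerTask('test', ['karma:unit']);

        // "grunt release": used on the build server, syntax check + unit tests
        grunt.registerTask('release', ['jshint', 'karma:unit']);
    };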


Let's run the same unit tests using grunt. Execute the command: "grunt test".

Basically it's doing the same thing as before, but now we can add multiple tasks to our grunt task, which will be useful in our build process scenario. For example, if we run another task, "grunt release", it will run our unit tests but will also validate our application code for possible syntax errors.

How we can execute these commands on the build server is described in part 2 of this blog series.

 

Sample Solution Files:

Note: execute the "npm install" command to restore the node_modules folder. The packages are not included.

Download [22.5 MB]

Categories: Products
Tags: ALM
written by: Jonas Van der Biest

Posted on Monday, December 4, 2017 8:51 PM

by Glenn Colpaert

The Internet of Things (IoT) is a business revolution enabled by technology. It is no longer just for early adopters; it offers tremendous business opportunities.

With Microsoft IoT Central, a new SaaS solution, Microsoft is helping to solve IoT challenges.

Microsoft IoT Central is now available in Public Preview!

 

The Internet of Things (IoT) is a business revolution enabled by technology. It is no longer just for early adopters; it offers tremendous business opportunities.

As already explained in this blog post, the path to build, secure and provision a scalable IoT solution from device to cloud can be complex. Evolving products with IoT in most cases requires some up-front investment and a whole new set of skills to be learned.

With Microsoft IoT Central, a new SaaS solution, Microsoft is helping to solve these challenges.

Meet Microsoft IoT Central

Microsoft IoT Central was first announced in April 2017. Since then, Microsoft has been working with partners and customers in a private preview to align business and user scenarios with the product functionality. Today, Microsoft IoT Central is available in public preview.

Microsoft IoT Central is a SaaS (Software-as-a-Service) offering that reduces the complexity of IoT solutions. It is fully managed and makes it easy to create IoT solutions by removing the management burden, operational costs and overhead of a typical IoT project.

A silver bullet for IoT?

There's more than one approach when building an IoT Solution with the Microsoft Azure platform. With the announcement of Microsoft IoT Central it's important to determine whether you need a PaaS or SaaS offering.

SaaS solutions allow you to get started quickly with a pre-configured IoT solution, whereas PaaS solutions provide the building blocks for companies to construct customized IoT solutions.

The decision between PaaS and SaaS depends on your business, your expertise, and the amount of control and customization you need.

If you need more information, please check out the following announcement blog posts by Microsoft:

I'll be further exploring this new Microsoft offering in the coming days and will keep you posted on my findings.

Cheers,

Glenn

Categories: Azure, Products
written by: Glenn Colpaert

Posted on Wednesday, October 26, 2016 7:31 AM

by Glenn Colpaert

The moment we've all been waiting for has arrived, as of now BizTalk Server 2016 is RTM!

BizTalk Server 2016 comes with lots of exciting new features. With a focus on high-availability improvements and on addressing customer asks and pain points, BizTalk Server 2016 tries to bridge the gap between on-premises and cloud and take your business on a successful hybrid integration journey. The goal of this blog post is to give you an in-depth overview of all the new features of BizTalk Server 2016.

Before diving into the details of this blog post, be sure to also check the following blog post released by Microsoft on the vision and the shift of momentum in the integration space:


https://azure.microsoft.com/en-us/blog/an-important-milestone-in-enterprise-integration-launch-of-microsoft-biztalk-server-2016/

High Availability with SQL Server 2016 AlwaysOn

BizTalk Server 2016 comes with support for SQL Server 2016 AlwaysOn Availability Groups. With the addition of AlwaysOn, BizTalk Server provides a modern and consistent way of handling HA/DR scenarios. Next to that, you can run your multi-node BizTalk deployments either on-premises or on Azure IaaS in the cloud in a supported way.

SQL Server 2016 supports MSDTC with AlwaysOn Availability Groups (AG) on Windows Server 2016 and Windows Server 2012 R2; versions prior to SQL Server 2016 are not supported.

Another important remark about the new HA/DR setup is that MSDTC between databases on the same SQL Server instance is not supported with SQL Server 2016 AlwaysOn Availability Groups. This means that no two BizTalk databases participating in a distributed transaction can be hosted on the same SQL Server instance; for transactional consistency, they should be hosted on different SQL Server instances.

The illustration below demonstrates the recommended configuration for BizTalk Server Databases in SQL Server 2016 Always On Availability Groups.



Adapter Improvements

  • The SFTP adapter now supports additional ciphers (DES, Blowfish and ArcFour) and more SFTP servers.
  • The Service Bus adapter is updated with support for Shared Access Signature (SAS) authentication for the BasicHttpRelay, NetTcpRelay, BasicHttp and WebHttp bindings.
  • The MLLP Adapter (HL7 Accelerator) supports the option to initiate an outbound connection.
  • Further improvement of SAP NCo support for the SAP adapter. You can find more info on the switch from RFC to NCo in the following blog post: https://www.codit.eu/blog/2016/01/04/microsoft-adds-support-for-sap-net-connector-in-biztalk-server-2013-r2/

Miscellaneous

Next to the platform alignment with Windows Server 2016, SQL Server 2016, Office 2016 and Visual Studio 2015, BizTalk Server 2016 comes with a whole new range of miscellaneous additions and improvements.

  • SHA2 Support
  • Support for XslCompiledTransform or XslTransform
  • Binding management improvements like 'Include/Exclude tracking settings', 'Export on Party Level'


(Granular import of binding files)

(Importing/Exporting on Party level)

  • Admin Console improvements like 'Search/Filter on artifact name', 'Change multiple host settings' and 'Suspended messages - multi select save to file'


(Search and Filter on Artifact name)

(Multi Select save to file)

BizTalk Logic Apps Adapter

One of the key goals of BizTalk Server 2016 is to bridge the gap between on-premises and cloud by taking advantage of the API Apps available through Logic Apps.

The new Logic Apps adapter for BizTalk Server 2016 enables you to integrate seamlessly with Logic Apps via the recently released On-premises Data Gateway. For more details on how to install and configure the BizTalk Logic Apps adapter, please visit the following blog:

https://blogs.msdn.microsoft.com/biztalk_server_team_blog/2016/08/08/announcing-the-new-biztalk-connector-for-logic-apps/

 


(BizTalk and Logic Apps - Better Together)

 

Launch Event

To celebrate the launch of the 10th version of BizTalk Server, Codit organizes a BizTalk 2016 launch event.

Learn everything about the new features in BizTalk Server 2016 and Microsoft's vision on integration anno 2016. More information about the event can be found on the following website:

http://biztalk2016.codit.eu/

If you still have questions about BizTalk Server 2016 after reading this blog post, don't hesitate to contact me, and if you're in the neighborhood, don't forget to register for our BizTalk Server 2016 launch event.

Cheers,

Glenn

Categories: Azure, BizTalk, Products
Tags: Azure, Logic Apps
written by: Glenn Colpaert