We implemented this approach to incorporating QUnit into our regular CI builds a while back, and I think it's worth sharing. Since QUnit runs in a browser, its results are typically only seen when someone can be bothered to run the tests manually. One option is to launch the QUnit page as a post-build step, but that still requires eyeballing the results, which means people can ignore them.
If we treat the JavaScript in our web applications with the respect it deserves as critical code, we must incorporate the results of the QUnit tests into our CI build result. We took the following approach to failing a build when a QUnit test fails.
Using a normal (NUnit) unit test, we load the QUnit page with WatiN. Once the page has loaded, the JavaScript executes and we parse the DOM to find tests and test failures. QUnit's markup could be more helpful in this respect, but parsing it is certainly far from impossible. Any failing tests are written to the console so that they appear in the CI build log, and if any QUnit test has failed, the NUnit test asserts a failure to ensure that the overall build fails.
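For reference, the results markup that QUnit renders (in the 1.x versions current at the time of writing) looks roughly like the sketch below; the exact contents of each list item vary, but the parser in this post only relies on the list having the id "qunit-tests" and on each test's li carrying a "pass" or "fail" class:

```html
<!-- Approximate shape of QUnit's rendered results; details vary by version. -->
<ol id="qunit-tests">
  <li class="pass">
    <strong>adds two numbers <b class="counts">(0, 1, 1)</b></strong>
  </li>
  <li class="fail">
    <strong>formats a date <b class="counts">(1, 0, 1)</b></strong>
  </li>
</ol>
```

This is why the C# below filters li elements by their parent's id and inspects the class name rather than anything more structured.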
The thing I dislike most about this approach is that it's an integration test rather than a true unit test, with the parsing of the results tightly coupled to QUnit's output DOM. But since we use a local copy of QUnit, at least we won't be stung by an upgrade of QUnit that we weren't aware of.
A few things I like very much about this technique are:
- The ability to run the tests as part of the CI build. Any commit of code verifies the tests still pass.
- The ability to run the tests through a number of different browsers thanks to WatiN.
- The treatment of JavaScript code as equally important and "grown-up" as our server side (C#) code.
I could probably improve matters by making the testPageUrl configurable rather than hard-coded. The code below assumes a local virtual directory called "Tests" has been set up and configured for my project. Without further ado, here's the code:
using System;
using System.Diagnostics;
using System.Linq;
using System.Text;
using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class JavaScriptTests
{
    [Test]
    public void LoadJavaScriptTestPage()
    {
        const string testPageUrl = "http://localhost/Tests/JavaScript%20Tests.htm";
        string jsTestResults;
        string jsFailedTestResults;

        using (var browser = new IE(testPageUrl))
        {
            browser.ClearCache();
            ParseQUnitTestResults(browser, out jsTestResults, out jsFailedTestResults);
        }

        Trace.WriteLine(string.Format("{0}:{1}", testPageUrl, jsTestResults));

        // Any 'F' in the results string means at least one QUnit test failed.
        if (jsTestResults.IndexOf('F') >= 0)
        {
            Assert.Fail(jsFailedTestResults);
        }
    }

    private void ParseQUnitTestResults(IElementContainer browser, out string jsTestResults, out string jsFailResults)
    {
        var testResults = new StringBuilder();
        var failResults = new StringBuilder();

        // QUnit renders one <li> per test inside the <ol id="qunit-tests"> list.
        foreach (var element in browser.ElementsWithTag("li")
            .Where(element => element.Parent != null
                && element.Parent.Id != null
                && element.Parent.Id.Equals("qunit-tests")))
        {
            if (element.ClassName.Equals("fail", StringComparison.InvariantCultureIgnoreCase))
            {
                testResults.Append("F");
                failResults.AppendLine(element.Text);
            }
            else
            {
                testResults.Append(".");
            }
        }

        jsTestResults = testResults.ToString();
        jsFailResults = failResults.ToString();
    }
}
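For completeness, here is a minimal sketch of the kind of QUnit test page the testPageUrl above might point at. The file names, the add function, and the test itself are hypothetical; the script references would need to match however QUnit is deployed locally (this uses the global test/equal style of QUnit 1.x):

```html
<!DOCTYPE html>
<html>
<head>
  <title>JavaScript Tests</title>
  <!-- Local copies, per the post; paths are assumptions. -->
  <link rel="stylesheet" href="qunit.css">
  <script src="qunit.js"></script>
</head>
<body>
  <h1 id="qunit-header">JavaScript Tests</h1>
  <h2 id="qunit-banner"></h2>
  <ol id="qunit-tests"></ol>
  <script>
    // Hypothetical function under test.
    function add(a, b) { return a + b; }

    // QUnit 1.x global-style test registration.
    test("adds two numbers", function () {
      equal(add(1, 2), 3, "1 + 2 should be 3");
    });
  </script>
</body>
</html>
```

When this page loads, QUnit fills the "qunit-tests" list with one item per test, which is exactly what the NUnit test above scrapes.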
Today, I had to update the end date of my sprint. I know that's not normal, it's not Scrum, and I should never do this, but hey, here in the real world people break the rules all the time. I updated the sprint work item at 8:50:59. I requested an update of the burndown at 9:44, and it showed me the same chart it generated at 9:41:41, which claims the data was updated at 9:25:02. Maybe it's my lack of training with the tools, but if I edit the sprint, I expect the burndown to change, and I expect that change to be immediate. It's 2011, people; the days of waiting overnight for your JCL to run in an underground bunker should be over.