In my last blog post I revealed that 'Aikau' was the project name for the updates we've been making to our approach to developing Alfresco Share, and I briefly mentioned that we had been working with Intern.io to develop a unit test framework. That unit testing framework is now available in the latest Community source on SVN, so I thought it would be worth explaining how it works.
It's still early days, so it's a little rough around the edges and we've yet to hit 100% code coverage (more on that later), but hopefully this will provide an insight into our commitment to improving the overall quality of our code and to reducing the number of future bugs that get released into the wild.
We settled on Intern.io simply because it appeared to be the best fit for our needs. That isn't to say we think it's the best testing framework available or that it's the right choice for everyone, but it is serving our needs nicely at the moment and we have had some excellent support via StackOverflow from its developers.
We also wanted to be able to perform cross-browser testing - and whilst we're still some way from achieving that completely, we are well placed to get there in the future, especially if we decide to make use of a service such as SauceLabs (in fact we've already had some degree of success using SauceLabs).
Getting Our Hipster On
Back in October 2013 the Alfresco UI team went to the Future of Web Apps conference and got an insight into some cool new technologies that we subsequently decided to incorporate into our development practices.
We're making use of NPM for managing our packages and Grunt for controlling our development and test environment. The team develops on Windows, OS X and various distributions of Linux, so we use Vagrant to provide a common VM to test against. We've got CSS and JS linting ready to go, and we use 'node-coverage' to ensure that our unit tests are fully exercising our modules.
How It Works
Each unit test is run against a specific page defined by a JSON model containing the widget(s) to be tested. The JSON model is defined as a static resource that is loaded from the file system by Node.js. We've created a new test bootstrap page that requires no authentication. This page contains a simple form and we use the Intern functional test capabilities to manually enter the JSON model into the form and then POST it to a WebScript. The JSON model is stored as an HTTP session attribute and the page redirects to a test page that retrieves the JSON model from the session attribute and passes it through the Surf dynamic dependency analysis to render a page containing the Aikau widgets to be tested. The unit test is then able to go about the business of interacting with the widgets to test their expected behaviour.
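As a concrete illustration, here is the kind of JSON page model a test might POST to the bootstrap WebScript. This is a minimal sketch: `alfresco/buttons/AlfButton` is a real Aikau widget, but the topic and payload here are invented for illustration.

```javascript
// A hedged sketch of a test page model. In a real test, Intern's
// functional API types this serialised string into the bootstrap form
// and submits it; the redirected test page then renders the widget(s).
var pageModel = {
  widgets: [
    {
      name: "alfresco/buttons/AlfButton",   // AMD path of the widget under test
      config: {
        label: "Test Button",
        publishTopic: "ALF_TEST_TOPIC",     // example topic published on click
        publishPayload: { value: "test" }   // example payload
      }
    }
  ]
};

// The model is serialised before being entered into the bootstrap form.
var serialised = JSON.stringify(pageModel);
console.log(serialised);
```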
As widgets are de-coupled over a publication/subscription communication framework it is very easy to both drive widgets and capture their resulting behaviour through that same framework. Buttons can be clicked that publish on topics that a widget listens to and we make use of a special logging widget ('alfresco/testing/SubscriptionLog') to test that it is publishing the correct data on the correct topics.
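To make that pattern concrete, here is a toy, self-contained sketch of the pub/sub mechanism the tests lean on. This is illustrative JavaScript rather than Aikau code, and the topic name is just an example: a "button" publishes on a topic and a SubscriptionLog-style listener records every publication so the test can assert on it.

```javascript
// A minimal in-memory pub/sub to illustrate the testing pattern.
var subscribers = {};
function subscribe(topic, fn) {
  (subscribers[topic] = subscribers[topic] || []).push(fn);
}
function publish(topic, payload) {
  (subscribers[topic] || []).forEach(function (fn) { fn(payload); });
}

// The logging widget simply records every publication it sees...
var log = [];
subscribe("ALF_CREATE_CONTENT", function (payload) {
  log.push({ topic: "ALF_CREATE_CONTENT", payload: payload });
});

// ...so after the test clicks the button (simulated here as a publish)...
publish("ALF_CREATE_CONTENT", { type: "folder" });

// ...the test can assert on the captured topics and payloads.
console.log(log.length, log[0].payload.type);
```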
The de-coupling also allows us to test data rendering widgets without needing an Alfresco Repository to be present. Aikau widgets never make REST calls themselves - instead they use the publication/subscription model to communicate with client-side 'services' that handle XHR requests and normalize the data. This means that a test page JSON model can include services that mock XHR requests and provide reliable and consistent test data. This makes setup and teardown incredibly straightforward as there is no need for a 'clean' repository for each test.
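Here is a hedged sketch of such a test page model. The mock service module path and the config property shown are hypothetical - the point is simply that the model declares both the widget under test and a mock service that answers its XHR-style requests with canned data, so no repository is required.

```javascript
// A hypothetical test page model pairing a widget with a mock service.
var testPageModel = {
  services: [
    "alfresco/testing/MockDocumentService"   // hypothetical mock service module
  ],
  widgets: [
    {
      name: "alfresco/documentlibrary/AlfDocumentList",   // widget under test
      config: {
        // The mock service subscribes to the same request topic that the
        // real service would, so the widget is none the wiser.
        loadDataPublishTopic: "ALF_RETRIEVE_DOCUMENTS_REQUEST"
      }
    }
  ]
};

var serialisedModel = JSON.stringify(testPageModel);
console.log(serialisedModel);
```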
We're making use of the 'node-coverage' project to capture how well our unit tests are driving the Aikau code, and we'll admit that at the moment we're not doing a great job (our goal is to ensure that any Aikau widget that gets included in a Share feature gets a unit test that provides 100% code coverage).
We're able to get accurate coverage results by 'instrumenting' all of the Aikau code and then loading a special test page that builds a layer containing all of that instrumented code. This is a very elegant way of resolving the issue of only getting coverage results for the widgets that are actually tested, and a very useful benefit of using AMD for Aikau.
We've written Grunt tasks for instrumenting the Aikau code, running the test suites and gathering the results to make it really easy for us to collect the data - and the coverage results enable us to write truly effective and comprehensive tests (as well as identifying any 'dead' code that we can trim).
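As a sketch of how those steps chain together, a Grunt alias task can run them as a single command. The task names below are hypothetical (each would be defined elsewhere in the build); only the aliasing mechanism itself is standard Grunt.

```javascript
// A hedged Gruntfile fragment: one "coverage" command that chains the
// instrument / test / collect steps described above.
module.exports = function (grunt) {
  grunt.registerTask("coverage", [
    "instrument-aikau",   // instrument the widget source with node-coverage
    "run-unit-tests",     // run the Intern suites against the instrumented layer
    "collect-results"     // gather and merge the coverage reports
  ]);
};
```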
Local, VM and Cross-Browser Testing
When writing tests we typically run them against a local Selenium instance so that we can actually see them running - it's incredibly useful to be able to see the mouse move and text being typed, etc. However, we make use of a Vagrant VM to run background tests during development so that we can keep using our keyboards and mice - it's also incredibly useful to have a consistent test environment across all team members.
The Vagrant VM is Linux and only allows us to test Chrome and Firefox, so in order to test multiple versions of Internet Explorer we need to use a Selenium grid. We're still working through whether to set up an internal Selenium grid or outsource to SauceLabs, but when everything is in place we will be able to reliably test the whole of Aikau against all supported browsers.
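For context, here is a hedged sketch of the relevant part of an Intern configuration for these set-ups. The property names follow Intern's documented config format, but the exact browser list and tunnel choice are examples rather than our actual configuration.

```javascript
// An example Intern configuration fragment (AMD module, as Intern expects).
define({
  // Environments to request from the Selenium instance or grid - a grid
  // is what would let us cover multiple Internet Explorer versions.
  environments: [
    { browserName: "chrome" },
    { browserName: "firefox" },
    { browserName: "internet explorer", version: ["9", "10", "11"] }
  ],

  // A hosted service is selected via a tunnel, e.g. "SauceLabsTunnel"
  // for SauceLabs; "NullTunnel" targets an existing local Selenium.
  tunnel: "NullTunnel"
});
```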
One of the primary goals of Aikau is for it to be a truly robust framework and a reliable unit testing framework is key to that. Hopefully this blog post will have illustrated our commitment to that goal and also given you something to experiment with. In a future blog post I'll provide a guide on how to get the tests running. In the meantime we'd appreciate any feedback you have on the approach that we've taken so far.