This post follows on from the getting started post, which you should run through first. It includes some free software that you may need to install before you can proceed.
Earlier I introduced the basic functionality of the testing framework we have created for Aikau. In addition to testing Aikau on a couple of browsers within a predefined virtual machine, there are options to test locally and to generate a report on the coverage of the Aikau testing. This post describes some of those use cases.
Local testing - why?
Testing against the browsers in a virtual machine is great because any individual can test against the same known set of browsers, irrespective of the platform they are using themselves. You might want to test a specific browser on a specific platform to analyse a platform-specific error, and running a local test could be the best way to do that. More importantly, however, virtual machine testing is a little hidden. Whilst it is possible to interrogate Selenium to see what it has been doing during a test, and Intern provides the capability to record a screen capture at a specific moment in a test, it can often be easier simply to watch a test running. Errors are frequently easy to spot when you are looking at the browser window that Intern is driving.
Local testing - how
In my previous post about testing Aikau we used the following command to install an array of packages required for the testing:
>> npm install
One of the packages installed by that command is selenium-standalone. The selenium-standalone package installs a local instance of Selenium with a wrapper and symbolic link that allow it to be launched from the command line, anywhere. You should be able to enter the following command in any command prompt and see a small trace as Selenium starts on IP address and port 127.0.0.1:4444:
Note: All of these commands are run in directory:
Note: Depending on the platform you are using, you may need to install selenium-standalone as a global package. If the command above does not work, try reinstalling selenium-standalone globally, either by itself:
>> npm install selenium-standalone -g
...or with all of the other dependencies:
>> npm install -g
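The launch command itself is not reproduced above. Assuming the standard selenium-standalone CLI (the version bundled with Aikau's dependencies may differ slightly), the steps are likely:

```shell
# Assumption: these are the standard selenium-standalone CLI commands; the
# version installed by Aikau's npm install may behave slightly differently.
selenium-standalone install   # downloads the Selenium jar and browser drivers
selenium-standalone start     # starts Selenium listening on 127.0.0.1:4444
```

The `install` step only needs to be run once; `start` is what you run before each local testing session.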
If you're able to launch Selenium using the standalone command shown above and you have the required browsers installed, then you should be able to proceed. At the time of writing, the two browsers referenced by the 'local' Intern file are Firefox and Chrome.
Launching a local Intern test is as simple as:
>> g test_local
Note: In this command, ‘g’ is an alias for grunt, and ‘test_local’ is the task that runs the Intern test suite locally.
Now, because this test scenario is going to drive the browsers on your machine, you probably want to leave the mouse and keyboard alone whilst the tests are running. If you're happy to watch the entire suite run, then do just that. If, however, you are working on one particular test and want to re-run just that one quickly, open the Suites.js file and comment out all of the tests you're not interested in from the 'baseFunctionalSuites' variable:
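A minimal sketch of the idea, assuming Suites.js holds an array of test module IDs: the real file in the Aikau repository lists many more suites, and the module IDs below are hypothetical examples rather than actual Aikau test modules.

```javascript
// Hypothetical sketch of Suites.js: commenting out entries restricts the
// local run to just the suites you are actively working on.
var baseFunctionalSuites = [
   // "alfresco/tests/AlertsTest",  // commented out: not under investigation
   // "alfresco/tests/ButtonsTest", // commented out: not under investigation
   "alfresco/tests/MenuBarTest"     // the one test being worked on
];
```

Remember to restore the commented-out entries before committing, so the full suite runs for everyone else.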
The output from local testing is identical to that seen with the virtual machine.
Note: The structure of the selenium-standalone package is quite simple, and if you use an unusual browser for which a Selenium driver exists, you can modify selenium-standalone to support it. All that then remains is to add the required browser to the Intern configuration file and you're good to go.
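To illustrate the final step, here is a sketch assuming Intern's standard "environments" configuration format; the file name and surrounding contents in the Aikau repository may differ, and "opera" is purely a hypothetical example of an added browser.

```javascript
// Hypothetical fragment of an Intern configuration file: each entry names a
// browser that Intern will ask the local Selenium server to drive.
var environments = [
   { browserName: "chrome" },
   { browserName: "firefox" },
   { browserName: "opera" } // added browser with a Selenium driver installed
];
```

The `browserName` value must match the name the corresponding Selenium driver registers with, so check the driver's documentation when adding a new entry.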
Generating a code coverage report
A code coverage report is produced by running a suite of tests with the code pre-instrumented to report on its use. Commands have already been written into the grunt framework that perform all of the required steps for code coverage.
Code coverage reports are generated locally, so follow the instructions shown above but use the following command for the testing phase:
>> g coverage-report
Once this has completed, which will take slightly longer than basic testing, you will be able to visit the node-coverage console here:
You should see the report you have just generated, complete with a score, and you can browse through all of the files touched by the testing. The tool shows areas of code that remain unvisited, or through which not all of the pathways have been exercised. This can be very useful when writing tests, to make sure edge cases have been considered.
Note: A particularly poor test result with numerous failed tests will distort the scoring of the coverage report, sometimes in an overly positive way. Make sure all of your tests are passing reliably before assuming that the scoring from the coverage report is accurate.