Running a Passing Test Script

In the section First Test Script, we created a simple script. This section covers how to execute it.
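As a reminder, the passing script compares two identical values. It might look something like the following sketch; the exact functionId text is illustrative, so use whatever description you gave in your own file:

    <testcase xmlns="jelly:jameleon">
        <assertEquals
            functionId="Compare two equal values"
            expected="value 1"
            actual="value 1"/>
    </testcase>

Because expected and actual match, the assertion passes and the test is reported as successful.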

Start the GUI by double-clicking on jameleon.bat in the jameleon-test-suite directory. You should be presented with a screen with one file listed to the left. This is the file we created in First Test Script. Simply click on this file and wait for the test case docs to be generated in the window to the right. You should see something like the picture below:

Notice how in the image above the Summary, Author, Application Tested and ... fields aren't populated. This will be covered in Documenting Test Scripts.

Now, click the Run button. It's the button with the icon of the blue person running. You should be presented with the results tab, showing that the test passed.

You can then click on the green checkmark and the html results should appear. Try it out!

Running a Failing Test Script

Now let's make the script fail by creating a new file like the first test script and changing the expected value to value 2. The script should now look like:

<testcase xmlns="jelly:jameleon">
    <assertEquals
        functionId="Compare two different values"
        expected="value 2"
        actual="value 1"/>
</testcase>

Save the above as failingAssertEquals.xml in the scripts folder. To see the new script, double-click the Test Cases folder at the top left twice to refresh the list. Then click on the failing script and you should see something like:

Now, click the Run button as you did above. You should be presented with the results tab, showing that the test failed.

You can then click on the red X and the html results should appear. Try it out!

In the section marked with blue, there are several columns. Each of these columns is explained below:

  • Row - This is most useful for data-driven tests. This value represents the row of data where the failure occurred.
  • Blank - If the row has an icon of a camera on it, you can click on the icon to see the state of the application when it failed. This functionality is dependent on the plug-in implementing this feature. Since we are using the JUnit plug-in this functionality is not available.
  • Function Id - The description of the function or tag where the failure occurred.
  • Failed Reason - This is the "user-friendly" message stating what error occurred. In the case of failingAssertEquals.xml, if we had set the value of msg to something like Values did not match, then the error message would include Values did not match.
Sometimes a single test script will have multiple failures. In that case, the highlighted section will have one row for each failure. To see exactly where in the code the failure occurred, or to debug the script, simply select the row of interest and the text area below should be populated with a stack trace.
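For example, a sketch of the failing script with a msg attribute added (assuming the assertEquals tag supports msg, as described in the Failed Reason column above):

    <testcase xmlns="jelly:jameleon">
        <assertEquals
            functionId="Compare two different values"
            msg="Values did not match"
            expected="value 2"
            actual="value 1"/>
    </testcase>

With this change, the Failed Reason column for the failure would include the text Values did not match, making it easier to tell at a glance which assertion broke.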

Running Multiple Scripts

It is also possible to run multiple scripts. This can be done by holding down the Shift or Ctrl key while selecting another script with your mouse.

After selecting both the passing and failing script, click the run button. The results should appear like the image below. The order of execution depends on the order the scripts were selected. Therefore, you may see the first script pass and the second script fail. Just look at the test case name to be sure which script passed or failed.

Running Scripts in Ant

If writing custom tags is required to test your application, you will need to use the provided Ant task to build and register your tags. The scripts can still be executed via the GUI, but sometimes you may want to run your scripts via the command line. An example build.xml file is provided in the Installation section of the site, and all Ant tasks are documented in the Ant Tasks section of the site. Please see those areas for full documentation. We typically run both Ant and the GUI together, keeping the GUI open while we create and compile new tags or rebuild existing tags via Ant.
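As a rough sketch only, a minimal build.xml fragment for compiling custom tag classes might look like the following. It uses only standard Ant tasks (property, path, mkdir, javac), and the directory names are placeholders; consult the example build.xml in the Installation section and the Ant Tasks documentation for the actual Jameleon task definitions used to register tags:

    <project name="custom-tags" default="compile" basedir=".">
        <!-- Placeholder paths; adjust to your own layout -->
        <property name="src.dir" value="src"/>
        <property name="classes.dir" value="classes"/>

        <path id="jameleon.classpath">
            <!-- The jars shipped with jameleon-test-suite -->
            <fileset dir="lib" includes="*.jar"/>
        </path>

        <target name="compile">
            <mkdir dir="${classes.dir}"/>
            <!-- Compile the custom tag classes against the Jameleon jars -->
            <javac srcdir="${src.dir}" destdir="${classes.dir}"
                   classpathref="jameleon.classpath"/>
        </target>
    </project>

Once the tag classes compile cleanly, the Jameleon-specific registration task documented in the Ant Tasks section can be added as a further target.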