Category Archives: Testing

Recent Reading – Agile Test Quadrants

A coworker recently shared with me this SlideShare presentation from ThoughtWorks.

I had never seen Brian Marick's Agile Testing Quadrants model before, but I believe it will be useful in helping me communicate types of testing to the teams. There is currently an attitude forming that “We can test everything via automation. Programmers can test it all, with more code”, which is fallacious, but change takes time. I am hoping that exposing people to different models and ideas will help accelerate understanding of my perspective on the value of sapient testing.

Here is the diagram I am referencing:
[Image: Agile Testing Quadrants Model]

I now have a new book added to my To Do list: Agile Testing: A Practical Guide for Testers and Agile Teams. Hopefully it will add even more tools to my belt, both for testing software and for teaching developers about testing.

Context Driven Testing – The Awakening

My assumptions about, and understanding of, the Context Driven movement in testing have been progressively unraveling. When Selena Delesie first arrived at my workplace to help facilitate our learning of the possibilities a title of “Software Tester” could hold, I stood deep in the valley. I am nowhere near the peak of this steep climb up the mountain, but my view is less foggy now.

Much like the agile movement, the underlying goal is clear: apply critical thought. Do not just swap out one process for another, or blindly trust the instructions given to you by a colleague. Your key job as a member of a team is to apply your own opinion + experiences + knowledge + wisdom + subjectivity. You don’t have to be just a cog in an industrial machine: your unique brain can add value to the team’s goals.

On the Twitterverse, I see an ongoing feud between two factions:

[Image: Rock ’Em Sock ’Em Robots]

While researching the Tester schism, I came across this wonderful paper on the Schools of Software Testing by Bret Pettichord:

  • Analytic: Testing as a form of mathematics
  • Standards: Testing should be predictable & repeatable, requiring little skill
  • Quality: Testing as adherence to processes, acting as gatekeepers
  • Context-Driven: Testing as a human activity focused on finding and reporting on risks to value for stakeholders
  • Agile: Testing as an automatable dev activity to determine story completion and notify of change

For me, having these five schools defined makes the discussion clearer. The ISTQB comes from a Standards and Quality family, where Best Practices and repeatable patterns exist to solve testing challenges. The CDT crew disagree, favouring Heuristics to help perform testing.

Before moving on, let’s address this question: what is the difference between a ‘Heuristic’ and a ‘Best Practice’? The term ‘best practice’ implies that it is the recommended solution to a problem. It does not come with an asterisk beside it leading to small-print legalese warning its users that “Your Mileage May Vary”. Instead, it sells the bearer a checklist of steps to follow to obtain the ‘best results’ without heeding the context-dependent variables. The term ‘heuristic’ looks nearly the same: it provides a list of steps or terms to apply to a situation. The key is in the definition of the word: “a technique to solve problems that is not guaranteed to be optimal”. There it is! By choosing a different word, the legal small-print needed for “Best Practice” has become the centerpiece of “Heuristic”.

The CDT is intentionally choosing terminology to break from the mould and put the intelligent individual at the center of “Testing”. Much like ‘agile’, it does not prescribe a single solution to rule them all.
[Image: The One Ring]

  • Does that mean there is no room for the Analytic School of testing if you follow CDT? Nope! If your context suits mathematical metrics and proofs to decrease risk (and thus increase value), go for it!
  • Does that mean there is no room for the Agile School of testing? Nope. If devs authoring automated checks adds value to your project, go for it!

Thus, I think both sides of the feud are fighting for the same goal: helping testers be masters of their craft. Their approaches and terminology differ, as do their visions of the future state of the craft… We just need to remain empathetic to all sides, as that is a great way to learn from each other and to slowly effect change.

For me, my vision is that we explorers strive to see past our logical fallacies and cognitive biases. We must apply critical thought to our problems and not blindly rely on “time-tested best practices”.

… and that is why I choose the label of Context Driven Tester.

JavaScript Unit Testing

Note: The recommendations I make in this report are specific to the contextual needs of my current team. Your mileage may vary 🙂

Summary

The goal of this research was to determine tools and techniques to empower developers in unit testing JavaScript applications. The research discovered that there are three distinct aspects of JS unit testing:

  • Authoring checks: the means of writing the unit tests
  • Executing scripts: the frameworks that execute the checks
  • Reporting: displaying the execution results in a consistent and valued format

For authoring, the recommendation is to use the Chai.js library and to write checks in a behaviour driven development (BDD) format. For execution, the recommendation is to use Mocha as it has the most versatility to integrate into an existing Continuous Integration (CI) system. For reporting, the recommendation is to either use SonarQube if looking for tracking history and other code quality metrics, or to create a custom reporter that suits the team’s needs.

Authoring Checks

As is typical in the JavaScript world, for any one need there exist many similar libraries and frameworks to solve the problem. This remains true for unit test helpers. To further complicate selection, some libraries offer both authorship and execution in a single framework (see Table 1).

The largest dichotomy between library selections is the supported writing style: do you want checks written as asserts (typically labelled as TDD, for Test Driven Development) or as descriptions of behaviour (BDD)? Assertions are the more traditional pattern (see Code 1), but the behavioural style is more readable, enabling increased visibility of risk to Product Owners and Business Analysts (see Code 2).
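For illustration, here is a minimal sketch of the assert style using Chai’s assert interface; the add() module under test is hypothetical:

var assert = require('chai').assert;
var add = require('../src/add'); // hypothetical module under test

// Mocha's TDD interface (suite/test) pairs naturally with assert-style checks
suite('add', function() {
    test('sums two positive numbers', function() {
        assert.equal(add(2, 3), 5);
    });

    test('handles a negative operand', function() {
        assert.equal(add(-2, 2), 0);
    });
});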


Code Sample 1: TDD Style Unit Testing
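And a sketch of the same checks in the behavioural style, using Chai’s expect interface (again, add() is hypothetical):

var expect = require('chai').expect;
var add = require('../src/add'); // hypothetical module under test

// Mocha's BDD interface (describe/it) reads as a behaviour specification
describe('add', function() {
    it('sums two positive numbers', function() {
        expect(add(2, 3)).to.equal(5);
    });

    it('handles a negative operand', function() {
        expect(add(-2, 2)).to.equal(0);
    });
});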

 


Code Sample 2: BDD Style Unit Testing

 

The selection of libraries and frameworks is simplified by comparing these aspects (see Table 1).

Name         TDD Style   BDD Style   Authoring   Execution
Chai.js      Yes         Yes         Yes         No
QUnit        Yes         No          Yes         Yes
Jasmine      No          Yes         Yes         Yes
Unit.js      Yes         Yes         Yes         No
Mocha        No          No          No          Yes
Test Swarm   No          No          No          Yes
Buster.js    Yes         Yes         Yes         Yes
Intern.io    No          No          No          Yes

Table 1: JavaScript Unit Test Frameworks Compared

Basing the choice on the “Single Responsibility Principle”, a framework focused on authoring is recommended: Chai.js. It is versatile, supporting both TDD and BDD coding styles. It is well supported online. Most importantly, checks written using it can easily be ported to another library if so desired.

Executing Scripts

 

With authoring selected, the next aspect to solve is execution of these unit test scripts. There are two primary scenarios for execution: developers verifying their programs, and systems (continuous integration) checking for unexpected impacts to the system.

To enable developers to verify their creations, a simple execution workflow is desired. Most Test Executors have a server-based aspect (like running on a Node.js server), as well as browser-based execution. The authoring of a browser executor should be intuitive for developers (see Code 3).

For integrating into a system, it must support command-line execution and offer outputs that can be fed to a reporting solution.
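As a sketch of what execution could look like (the file names here are hypothetical), Mocha can be driven programmatically from Node.js, with the command-line equivalent for CI noted in the comments:

// run-tests.js: programmatic Mocha execution (sketch)
// CI equivalent on the command line: mocha --reporter xunit > results.xml
var Mocha = require('mocha');

var mocha = new Mocha({ ui: 'bdd', reporter: 'spec' });
mocha.addFile('./test/add.spec.js'); // hypothetical spec file

// Run all checks; exit non-zero when any check fails so CI can detect it
mocha.run(function(failures) {
    process.exitCode = failures ? 1 : 0;
});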
 


Code Sample 3: Mocha Test Executor

For similar reasons as the selection of authoring tools, Mocha is recommended. It is well supported, and it would be easy to port a solution to another executor if ever needed. Also, it offers the most execution output options of the frameworks considered.

Reporting Results

Surprisingly, there are not a lot of off-the-shelf reporting tools for unit tests (or other automated checks), nor many report output formats. There are generally two reporting formats with spotty support: TAP and XUnit. Similarly, for reporting tools, only three options were found: SonarQube, TestLink, and Smolder.

Both Smolder and TestLink are focused on content management of test specifications, plans, and requirements. SonarQube is focused on code analysis and reporting metrics that may indicate overall product quality. For reporting, if already using one of these tools, it is worth investigating the results of integrating JavaScript unit tests. However, they may be overkill for some teams and may be difficult to migrate away from if keeping the report history is important.

Since Mocha offers output in both TAP and XUnit, it could be sufficient to build a custom reporting tool that processes these outputs and displays the state of all checks. If the goal is to never leave checks failing, a custom reporter would be the better choice: it would be designed to display only information relevant to the team (see Image 1).
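As a sketch of that processing step (the file name and the regular expression are illustrative only), assuming the checks were run with Mocha’s xunit reporter:

// summarize.js: crude first stage of a custom reporter (sketch)
// Assumes results were produced with: mocha --reporter xunit > results.xml
var fs = require('fs');

var xml = fs.readFileSync('results.xml', 'utf8');
var match = /<testsuite[^>]*tests="(\d+)"[^>]*failures="(\d+)"[^>]*errors="(\d+)"/.exec(xml);

if (match) {
    var failed = Number(match[2]) + Number(match[3]);
    console.log(failed === 0
        ? 'All ' + match[1] + ' checks passing'
        : failed + ' of ' + match[1] + ' checks need attention');
}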
 


Image 1: Custom Domain-based Unit Test Reporter

Research Session – Reporting Outputs of Automation

There is an underwhelming quantity of test reporting options. The protocols for integration are few (XUnit and TAP), and the few tools that I found are not focused on reporting.

If adopting a reporting tool, I would recommend SonarQube. If using this tool, then the best-supported report output is XUnit.

An alternative approach would be to build a custom reporting tool and dashboard that reflects the team’s Domain Model and surfaces only relevant information.

Session notes below the fold…


Research Session – JS UT Experimentation

Recommend starting with Chai.js + Mocha, and Sinon.js for mocking when necessary. 

A lot of the test libraries available are similar, so it is hard to go wrong. Chai.js appears to be commonly used and is also integrated into larger frameworks. Since Chai is just a test authoring library, there is still a need for a tool to execute the tests. For current needs, Mocha has good support and a lot of reporting output options. At this point, the added benefits provided by theintern.io do not add immediate value for me, but transitioning to it from Mocha should not be difficult.
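For the mocking piece, a minimal sketch of Sinon.js working alongside Chai; the notifier object is a hypothetical subject under test:

var expect = require('chai').expect;
var sinon = require('sinon');

describe('notifier', function() {
    it('invokes the callback exactly once per event', function() {
        var callback = sinon.spy();
        var notifier = { fire: function(cb) { cb(); } }; // hypothetical subject
        notifier.fire(callback);
        expect(callback.calledOnce).to.equal(true);
    });
});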

Further analysis plans

  • Look into Test Runner outputs and how they might integrate into JUnit reports

Simplistic examples created during experimentation can be found on Github here.

Session notes below the fold…


Research Session – Javascript Unit Testing

Report Summary:

  • Much like the rest of the Javascript ecosystem, there are a lot of options for any given problem and not a lot of community consensus
  • There are two aspects of JS testing needing to be addressed: tools to test (libraries) and tools to report results (test runners)
  • When selecting libraries, there are two style choices: TDD (Test Driven Development) vs. BDD (Behaviour Driven Development)
    Historically, our company has been more comfortable with TDD
  • Chai.js supports the TDD style and looks like a good place to begin learning and experimenting with test authoring
  • Still not sure of pros/cons between test runners
  • Further analysis plans

Session notes below the fold…


Test Sessions – Research Sessions

My responsibilities include researching and investigating tools to help others test software. I was recently asked to investigate options for helping developers author Unit Tests for Javascript applications.

While thinking about performing the investigation, it came to me that I was testing something: a domain of knowledge. And what is a good tool to record such testing? Test Sessions!

So, I am experimenting with this idea. I gave thought to my mission, wrote up an initial charter of exploration ideas, and have begun recording my path through the internet and contacts to learn more on Javascript unit testing.  Once I wrap it up, I will likely have more charters to explore and can try my hand at my first test report to hand back to the person requesting this information 🙂

CAST 2014

On my first day in my new job, my boss threw out a challenge to me: submit a proposal to CAST for a conference talk I could give. The submission was due in 42 hours…

CAST is the annual Conference of the Association for Software Testing. It does not align with my typical impression of conferences: paid for by industry corporations, presenters with hidden agendas to line their pockets, and a general feeling of a shark tank with chum. Instead, to me it sounds more like a university conference: sessions meant for discussion and growth for both the attendees and the presenter, and not linked directly to profits.

So, back to my challenge. I succeeded at selecting a topic of interest to me and outlining enough information that I believe I could grow it into a 40-minute presentation. I wrote up a proposal and emailed it into the cyber-nether.

My letter made it to Bernie Berger & Paul Holland, the co-chairs of the conference. And most surprising to me, I have been selected to present this August in New York City!

It is a big responsibility, and definitely an honour to have been selected. I have a lot of work ahead for creating an engaging and educational presentation that is worthy of the time of my peers.

It will be a first for me in many regards: a trip to NY, attending a conference in my profession, presenting to a crowd outside of my employer, planning and booking travel to the United States of America, preparing myself for my first encounter with the TSA scanners… 🙂

Wish me luck !

Virtual and Physical

It is time to start learning about virtual machines. Microsoft is kindly offering up free VMs to do browser testing. I find this exciting. Where I work we often have physical machines to host different Operating Systems, to accommodate different versions of our software. At least now from a testing perspective, in regard to client-side investigation, no one needs to pay Microsoft license fees just to see how a webpage renders in Internet Explorer.

This got me thinking that it would be equally beneficial to have VM Templates set up for the server side. With more than one version of our product to support, it is often time-consuming to set up a working server environment just for play.

I am hoping to learn enough to achieve virtualization of the server side for ‘already released’ editions of our software. Then I can turn my eye towards getting VMs auto-created as part of continuous integration…!

Week 4 – Informed Failure

I worked on a new tool for Selenium JUnit testing: video recording of test execution. There are some good articles on how to utilize the Monte Media Library for Java to add recording to your code. However, I did not see anything that took advantage of JUnit 4 functionality, such as Rules. Therefore, I have married the two together into a custom rule that lets you specify on which test result conditions to record a video: Success, Failure, or Error.

The goal of this tool is to make it faster to diagnose why a check has failed. The sooner information is provided to a stakeholder, the greater the value. My guess is that a check failure indicates either broken value in the product or broken checking. Having videos to compare against a known working state will be handy.

I imagine this may be useful for other Selenium users, so here is the gist:


import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

/**
 * This is a sample test class that will output videos of Selenium WebDriver execution on Fail and
 * Error results.
 */
@RunWith( JUnit4.class )
public class CreateAdHocTaskExample
{
    @Rule public ScreenRecorderRule iRule = new ScreenRecorderRule( false, true, true );

    /**
     * This is a sample test case
     */
    @Test public void testCreateAdHocTask()
    {
        Assert.fail( "Not yet implemented" );
    }
}

SampleTest.java


import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

/**
 * JUnit Rule to record the test execution to an AVI file, depending on the test result status
 * (success, fail, or error)
 *
 * @author jclarkin
 */
public class ScreenRecorderRule
    implements TestRule
{
    private final boolean iRecordError;
    private final boolean iRecordFailure;
    private final boolean iRecordSuccess;

    /**
     * Creates a new {@linkplain ScreenRecorderRule} object.
     *
     * @param aRecordSuccess If true, will record test on success
     * @param aRecordFailure If true, will record test on failure
     * @param aRecordError If true, will record test on error
     */
    public ScreenRecorderRule( boolean aRecordSuccess, boolean aRecordFailure,
        boolean aRecordError )
    {
        super();
        this.iRecordSuccess = aRecordSuccess;
        this.iRecordFailure = aRecordFailure;
        this.iRecordError = aRecordError;
    }

    /**
     * {@inheritDoc}
     */
    @Override public Statement apply( Statement aBase, Description aDescription )
    {
        String lFilename = aDescription.getClassName() + "." + aDescription.getMethodName();
        return new ScreenRecorderStatement(
            aBase, lFilename, iRecordSuccess, iRecordFailure, iRecordError );
    }

    /**
     * Handles execution of test and potentially records the execution as well
     */
    public class ScreenRecorderStatement
        extends Statement
    {
        private final Statement iBase;
        private final String iFilename;
        private final boolean iRecordError;
        private final boolean iRecordFailure;
        private final boolean iRecordSuccess;

        /**
         * Creates a new {@linkplain ScreenRecorderStatement} object.
         *
         * @param aBase The base test statement
         * @param aFilename The prefix to use for the recording file name
         * @param aRecordSuccess If true, will record test on success
         * @param aRecordFailure If true, will record test on failure
         * @param aRecordError If true, will record test on error
         */
        public ScreenRecorderStatement(
            Statement aBase, String aFilename, boolean aRecordSuccess,
            boolean aRecordFailure, boolean aRecordError )
        {
            this.iRecordSuccess = aRecordSuccess;
            this.iRecordFailure = aRecordFailure;
            this.iRecordError = aRecordError;
            this.iBase = aBase;
            this.iFilename = aFilename;
        }

        /**
         * Executes the test and saves a recording depending on the test result status
         *
         * @throws Throwable If an error or failure occurs
         */
        @Override public void evaluate()
            throws Throwable
        {
            TestRecorder lRecorder = new TestRecorder( iFilename );
            boolean lSucceeded = false;
            boolean lFailed = false;
            boolean lError = false;
            try {
                lRecorder.start();
                iBase.evaluate();
                lSucceeded = true;
            } catch ( Throwable e ) {
                if ( e instanceof AssertionError ) {
                    lFailed = true;
                } else {
                    lError = true;
                }
                throw e;
            } finally {
                lRecorder.stop();
                boolean lKeepFile =
                    ( lSucceeded && iRecordSuccess ) || ( lFailed && iRecordFailure ) ||
                    ( lError && iRecordError );

                // If not keeping the file, delete it
                if ( !lKeepFile ) {
                    lRecorder.delete();
                }
            }
        }
    }
}


import java.awt.AWTException;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.monte.media.Format;
import org.monte.media.FormatKeys.MediaType;
import org.monte.media.Registry;
import org.monte.media.math.Rational;
import org.monte.screenrecorder.ScreenRecorder;
import static org.monte.media.VideoFormatKeys.*;

/**
 * A customized instance of the ScreenRecorder that allows the filename prefix to be specified, as
 * well as functionality to delete the recording
 */
public class TestRecorder
    extends ScreenRecorder
{
    private String iFilename;
    private File iMovieFile;

    /**
     * Creates a new {@linkplain TestRecorder} object.
     *
     * @param aFilename The Prefix filename to use
     *
     * @throws IOException If an error occurs with saving the recording
     * @throws AWTException If an error occurs in accessing the video screen
     */
    public TestRecorder( String aFilename )
        throws IOException, AWTException
    {
        super( buildConfig(), buildFileFormat(), buildScreenFormat(), buildMouseFormat(), null );
        iFilename = aFilename;
    }

    /**
     * Delete the recorded file
     *
     * @return True if succeeded to delete
     */
    public boolean delete()
    {
        boolean lDeleted = false;
        if ( State.DONE == this.getState() ) {
            lDeleted = iMovieFile.delete();
        }
        return lDeleted;
    }

    /**
     * {@inheritDoc}
     */
    @Override protected File createMovieFile( Format aFileFormat )
        throws IOException
    {
        if ( !movieFolder.exists() ) {
            movieFolder.mkdirs();
        } else if ( !movieFolder.isDirectory() ) {
            throw new IOException( "\"" + movieFolder + "\" is not a directory." );
        }

        SimpleDateFormat lDateFormat = new SimpleDateFormat( "yyyy-MM-dd 'at' HH.mm.ss" );
        iMovieFile =
            new File(
                movieFolder,
                iFilename + " " + lDateFormat.format( new Date() ) + "." +
                Registry.getInstance().getExtension( aFileFormat ) );

        return iMovieFile;
    }

    /**
     * Generate the Graphics configuration used to access the video screen
     *
     * @return Graphics configuration
     */
    private static GraphicsConfiguration buildConfig()
    {
        return GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice()
            .getDefaultConfiguration();
    }

    /**
     * Generate the file format for the recording to be saved
     *
     * @return The file format of the recording
     */
    private static Format buildFileFormat()
    {
        return new Format( MediaTypeKey, MediaType.FILE, MimeTypeKey, MIME_AVI );
    }

    /**
     * Generate the Mouse accessor details for recording
     *
     * @return Mouse accessor configuration
     */
    private static Format buildMouseFormat()
    {
        return new Format(
            MediaTypeKey, MediaType.VIDEO,
            EncodingKey, "black",
            FrameRateKey, Rational.valueOf( 30 ) );
    }

    /**
     * Generate the Screen accessor details for recording
     *
     * @return Screen accessor configuration
     */
    private static Format buildScreenFormat()
    {
        return new Format(
            MediaTypeKey, MediaType.VIDEO,
            EncodingKey, ENCODING_AVI_TECHSMITH_SCREEN_CAPTURE,
            CompressorNameKey, ENCODING_AVI_TECHSMITH_SCREEN_CAPTURE,
            DepthKey, 24,
            FrameRateKey, Rational.valueOf( 15 ),
            QualityKey, 1.0f,
            KeyFrameIntervalKey, 15 * 60 );
    }
}

Thanks go to the Monte Media Library and Road to Automation for the tools to solve my problem.