Running a Retrospective

This month, I ran my first retrospective for a different team at work. I’ve been participating in retrospectives run by my teammate and have wanted to try my hand at facilitating, so when a different team approached me to host theirs, I was thrilled at the opportunity.

Preparation

I was made aware that this team had been having retrospectives… but they were more similar to a status meeting than to a voyage of discovery and improvement. A week before the event, I went to talk to the gang to witness their environment and see their interactions. I asked whether they had any working agreements, and soon discovered that most members were shy when asked to share their opinions aloud with the group. They were most comfortable ideating in private and collaborating only as necessary. This information helped me choose activities that would be least disruptive for their retrospective.

I proceeded to review the content of my two go-to books about retrospectives.

I selected a series of activities to get people talking, and to progress from individual contributions towards team decisions. With my outline in hand, I was nervous but ready for the event.

The Event

I got to my room a half hour before the event. This gave me time to prep the room: move chairs into a well-distributed pattern, write the Agile Prime Directive on the whiteboard, get pens and cue cards out, display the agenda, and have poster board ready for categorization.

In standard corporate fashion, the team trickled in fashionably late (first 5 minutes). From there, I began facilitating the retrospective.

We began with “Set the Stage”. I recited the prime directive:
“Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.” –Norm Kerth

I thanked everyone for willingly participating in the event, then we went over the agenda so that we all had a rough idea of the meeting’s pacing. Knowing that the crowd would not be forthcoming with opinions and participation, we started with a 1-2 Word Check-in activity as a warm-up. I was happy that we completed the circle with minimal protest.

Keeping everyone comfortable in their chairs, the “Gather Data” stage began with the “Four L’s” activity. Each person acted as an individual contributor, reflecting on the sprint and categorizing their experiences as Liked, Learned, Lacked, or Longed For. During and after the 15 minutes, people placed their sticky notes onto the four corresponding poster boards.

We then segued into a team activity: we split into two groups, and each group was to consolidate and summarize the data for two Ls (Liked and Lacked, or Learned and Longed For). Two presenters from each group were selected to share their discoveries aloud with the team.

In the “Decide What to Do” phase, we moved to an activity that got everyone physically out of their seats and moving about: Dot Voting on the themes each person deemed most important. With all votes tallied, it was easy to spot the top 4 themes for further investigation. I asked people to pair off with a teammate they had not worked with during the Ls activity. Each pair was assigned a theme to brainstorm potential next actions, and once again a member of each pair was asked to present their discoveries to the whole team. Members of the team then chose to champion an item, to be followed up within the next sprint.

With the team aware of Who would do What by When, we moved onto “Closing” activities. I thanked the team again for their participation, and requested that they leave me feedback on how the retrospective went. I handed out the sticky notes and pens, told them to leave the mess as is in the room for me to clean up, and exited so that they could converse in private.

Aftermath

After about 10 minutes, they emerged looking energetic and ready to tackle new problems. I was happy to see that effect. I had not realized this ancillary effect of retrospectives: they are not just an opportunity to find ways to improve efficiency, but also act as a team-building event that strengthens communication pathways and working relationships.

I returned to the room to put away the markers, stickers, and pens, to clean up the whiteboards and used stickies, and to reset the room to the state it had been in prior to the fun we all just had. I looked over the feedback the team had left for me, and although it lacked any constructive criticism, it was filled with comments from people who had a positive experience!

Next Steps

I learned a lot from facilitating a retrospective, and have already been asked to facilitate for other teams. I plan on trying out different activities, and will come prepared to record the data as the activities occur. I also plan on trying to keep that new-team energy flowing after the retrospective and into their next tasks. I feel that is a hard challenge when you are not part of the team and thus not involved in the time directly after the meeting.

Having seen the event from another perspective has given me a greater appreciation of the benefits of retrospectives. I now look forward to both participating in and facilitating these events in the months to come.

Recent Reading – Agile Test Quadrants

A coworker recently shared with me this SlideShare presentation from ThoughtWorks.

I had never seen the Agile Testing Quadrants model by Brian Marick, but I believe it will be useful in helping me communicate types of testing to the teams. There is currently an attitude forming that “We can test everything via automation. Programmers can test it all, with more code”, which is fallacious, but change takes time. I am hoping that exposing people to different models and ideas will help accelerate understanding of my perspective on the value of sapient testing.

Here is the diagram I am referencing:
Agile Testing Quadrants Model

I now have a new book added to my To Do list: Agile Testing: A Practical Guide for Testers and Agile Teams. Hopefully it will add even more tools to my belt, both for testing software and for teaching developers about testing.

Context Driven Testing – The Awakening

Understanding the Context Driven movement in testing has been a progressive unraveling of my assumptions. When Selena Delesie first arrived at my work to help facilitate our learning of what the title of “Software Tester” could be, I stood deep in the valley. Now, I am nowhere near the peak of this steep climb up the mountain, but my view is less foggy.

Much like the agile movement, the underlying goal is clear: apply critical thought. Do not just swap out one process for another, or blindly trust the instructions given to you by a colleague. Your key job as a member of a team is to apply your own opinion + experiences + knowledge + wisdom + subjectivity. You don’t have to be just a cog in an industrial machine: your unique brain can add value to the team’s goals.

On the Twitterverse, I see an ongoing feud between two factions.

While researching the Tester schism, I came across this wonderful paper on the Schools of Software Testing by Bret Pettichord:

  • Analytic: Testing as form of mathematics
  • Standards: Testing should be predictable & repeatable, requiring little skill
  • Quality: Testing as adherence to processes, with testers acting as gatekeepers
  • Context-Driven: Testing as a human activity focused on finding and reporting on risks to value for stakeholders
  • Agile: Testing as an automatable development activity to determine story completion and notify of change

For me, having these five schools defined makes the discussion clearer. The ISTQB comes from a Standards and Quality family where there exist Best Practices and repeatable patterns to solve testing challenges. The CDT crew disagree, favouring Heuristics to help perform testing.

Before moving on, let’s address this question: what is the difference between ‘Heuristic’ and ‘Best Practice’? The term ‘best practice’ implies that it is the recommended solution to a problem. It does not come with an asterisk beside it leading to the small-print legalese warning its users that “Your Mileage May Vary”. Instead, it sells the bearer a checklist of steps to follow to obtain the ‘best results’ without heeding context-dependent variables. The term ‘heuristic’ looks nearly the same: it provides a list of steps or terms to apply to a situation. The key is in the definition of the word: “a technique to solve problems that is not guaranteed to be optimal”. There it is! By choosing a different word, the legal small-print needed for “Best Practice” has become the centerpiece of “Heuristic”.

The CDT community is intentionally choosing terminology to break from the mould and put the intelligent individual at the center of “Testing”. Much like ‘agile’, it does not prescribe a single solution to rule them all.

  • Does that mean there is no room for the Analytic School of testing if you follow CDT? Nope! If your context suits mathematical metrics and proofs to decrease risk (and thus increase value), go for it!
  • Does that mean there is no room for the Agile School of testing? Nope. If devs authoring automated checks adds value to your project, go for it!

Thus, I think both sides of the feud are fighting for the same goal: helping testers be masters of their craft. Their approaches and terminology differ, as do their visions of the future state of the craft… We just need to remain empathetic to all sides, as that is a great way to learn from each other and to slowly effect change.

For me, my vision is that we explorers strive to see past our logical fallacies and cognitive biases. We must apply critical thought to our problems and not blindly rely on “time tested best practices”.

…and that is why I choose the label of Context Driven Tester.

Classification of Software Features

I typically hear two categories for software features: internal and external. Occasionally, from the development side, I hear of a third option: deprecated. I am proposing a fourth category, one that I often find in enterprise software, which I would call vestigial.

Here are my four categories defined:

  • External Features: These are solutions for customer needs. They should produce value for the buyers of the software.
  • Internal Features: These are solutions for the company that produces the software. They reduce the costs of maintaining and improving the software.
  • Deprecated Features: These are solutions, once targeted internally or externally, that are known to no longer produce enough value to keep. They are technical debt that is clearly flagged for removal.
  • Vestigial Features: These are a mystery. They likely were once solutions to someone, or at least intended to be so. Their current value is unknown, so they cannot be flagged for deprecation. They are technical debt with no mitigation strategy.

A vestigial feature is like the human appendix: maybe we don’t need it anymore, but it remains part of our ecosystem. The tonsils were once considered vestigial until we learned more about them and determined their value.

Does your enterprise software have many vestigial features? We can form a test strategy to determine their original intent and their current uses, and to estimate their value.

Heuristic for selecting a Trainer

When looking at a potential coach or teacher, I find myself often using the following criteria to help me make a selection.

  • Openness: Do they expose their ideas and opinions in public forums? Do they allow discourse and feedback on their material, or is it a one-way channel?
  • Prior Art: Research material authored by the coach: articles, blog posts, videos, code, tweets, publications. Are ideas clearly expressed and compatible to your mode of learning?
  • Bias: Do they present multiple facets to ideas? Is there personal incentive for endorsing one idea over another?
  • Interpersonal: The “Play nice with others” factor. How do they behave in a group? Do they foster relationships and enable growth? Do they advocate for peers in their profession?
  • Referral: Use your network, both people you know and online personas you respect, and see if any of them endorse or refer to the trainer or their material.
  • Experience: Review the individual’s listed skills, credentials, and experience. Can you trust them to bring authentic information that you believe applies to your needs?

This is not a comprehensive list (all models are flawed). What questions do you ask yourself when evaluating potential mentors, coaches, trainers, or teachers?

JavaScript Unit Testing

Note: The recommendations I make in this report are specific to the contextual needs of my current team. Your mileage may vary 🙂

Summary

The goal of this research was to determine tools and techniques to empower developers in unit testing JavaScript applications. The research discovered that there are three distinct aspects of JS unit testing:

  • Authoring checks: the means of writing the unit tests
  • Executing scripts: the frameworks that execute the checks
  • Reporting: displaying the execution results in a consistent and valued format

For authoring, the recommendation is to use the Chai.js library and to write checks in a behaviour driven development (BDD) format. For execution, the recommendation is to use Mocha as it has the most versatility to integrate into an existing Continuous Integration (CI) system. For reporting, the recommendation is to either use SonarQube if looking for tracking history and other code quality metrics, or to create a custom reporter that suits the team’s needs.

Authoring Checks

As is typical in the JavaScript world, for any one need there exist many similar libraries and frameworks to solve the problem. This remains true for unit test helpers. To further complicate selection, some libraries offer both authorship and execution in a single framework (see Table 1).

The largest dichotomy between library selections is the supported writing style: do you want checks to be written as asserts (typically labelled as TDD, for Test Driven Development) or as descriptions of behaviour (BDD)? Assertions are the more traditional pattern (see Code 1), but the behavioural style is more readable, enabling increased visibility of risk to Product Owners and Business Analysts (see Code 2).
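
As a rough illustration of the assert style, a minimal sketch using Chai’s assert interface inside Mocha’s TDD interface (suite/test) might look like the following; the add() module is purely hypothetical:

  // Assumes Chai and Mocha run with its TDD interface (--ui tdd);
  // add() is a hypothetical module under test, used only for illustration.
  var assert = require('chai').assert;
  var add = require('../src/add');

  suite('add', function () {
    test('sums two positive numbers', function () {
      assert.equal(add(2, 3), 5);
    });

    test('returns a number', function () {
      assert.typeOf(add(2, 3), 'number');
    });
  });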


Code Sample 1: TDD Style Unit Testing
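
For comparison, a minimal sketch of the same checks in the behavioural (BDD) style, using Chai’s expect interface with Mocha’s describe/it blocks (again with the hypothetical add() module):

  // Assumes Chai and Mocha's default BDD interface (describe/it);
  // add() is the same hypothetical module under test.
  var expect = require('chai').expect;
  var add = require('../src/add');

  describe('add', function () {
    it('sums two positive numbers', function () {
      expect(add(2, 3)).to.equal(5);
    });

    it('returns a number', function () {
      expect(add(2, 3)).to.be.a('number');
    });
  });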

 


Code Sample 2: BDD Style Unit Testing

 

The selection of libraries and frameworks is simplified by comparing these aspects (see Table 1).

Name         TDD Style   BDD Style   Authoring   Execution
Chai.js      Yes         Yes         Yes         No
QUnit        Yes         No          Yes         Yes
Jasmine      No          Yes         Yes         Yes
Unit.js      Yes         Yes         Yes         No
Mocha        No          No          No          Yes
Test Swarm   No          No          No          Yes
Buster.js    Yes         Yes         Yes         Yes
Intern.io    No          No          No          Yes

Table 1: JavaScript Unit Test Frameworks Compared

Basing the choice on the “Single Responsibility Principle”, a framework focused on authoring was recommended: Chai.js. It is versatile, supporting both TDD and BDD coding styles. It is well supported online. Most importantly, checks written using it can easily be ported to another library if so desired.

Executing Scripts

With authoring selected, the next aspect to solve is execution of these unit test scripts. There are two primary scenarios for execution: developers verifying their programs, and systems (continuous integration) checking for unexpected impacts to the system.

To enable developers to verify their creations, a simple execution workflow is desired. Most test executors have a server-based aspect (such as running on a Node.js server) as well as browser-based execution. The authoring of a browser executor should be intuitive for developers (see Code 3).

For integrating with a CI system, the executor must support command-line execution and offer outputs that can be fed to a reporting solution.


Code Sample 3: Mocha Test Executor
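
As a sketch of what the browser executor script might look like, assuming mocha.js, chai.js, and the spec files are loaded by a test/index.html page (the file names here are only placeholders):

  // Runs in the browser after the page has loaded mocha.js and chai.js.
  mocha.setup('bdd');   // use the BDD interface (describe/it) for the loaded specs

  // ...spec files such as add.spec.js are then loaded via <script> tags...

  mocha.checkLeaks();   // flag any globals accidentally created by the checks
  mocha.run();          // execute the specs and render the results in the page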

For similar reasons as the selection of authoring tools, Mocha is recommended. It is well supported, and it would be easy to port a solution to another executor if ever needed. Also, it offers the most execution output options of the frameworks considered.

Reporting Results

Surprisingly, there are not many off-the-shelf reporting tools for unit tests (or other automated checks), nor many report output formats. There are generally two reporting formats with spotty support: TAP and XUnit. Similarly, for reporting tools, only three options were found: SonarQube, TestLink, and Smolder.

Both Smolder and TestLink are focused on content management of test specifications, plans, and requirements. SonarQube is focused on code analysis and reporting metrics that may indicate overall product quality. For reporting, if already using one of these tools, it is worth investigating the results of integrating JavaScript unit tests. However, it may be overkill for some teams and may be difficult to migrate to a different future solution if keeping the report history is important.

 

Since Mocha offers output in both TAP and XUnit, it could be sufficient to build a custom reporting tool that processes these outputs and displays the state of all checks. If the goal is to never leave checks failing, a custom reporter would be a better choice: it would be designed to display only the information relevant to the team (see Image 1).


Image 1: Custom Domain-based Unit Test Reporter
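
As a sketch of how such a reporter could be fed, Mocha can be run programmatically and asked to write XUnit output to a file; the spec file name and output path below are only placeholders:

  // Assumes Mocha is installed locally; file paths are placeholders.
  var Mocha = require('mocha');

  var mocha = new Mocha({
    ui: 'bdd',
    reporter: 'xunit',
    reporterOptions: { output: 'reports/unit-tests.xml' }  // XUnit results for the custom reporter to consume
  });

  mocha.addFile('test/add.spec.js');

  mocha.run(function (failures) {
    process.exitCode = failures ? 1 : 0;  // non-zero exit code lets CI flag the build
  });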

Research Session – Reporting Outputs of Automation

There is an underwhelming quantity of test reporting options. The protocols for integration are few (XUnit and TAP) and the few tools that I found are not focused on reporting.

If adopting a reporting tool, I would recommend using SonarQube. If using this tool, then the report output best supported is XUnit.

An alternative approach would be to build a custom reporting tool and dashboard that reflects the team’s Domain Model and surfaces only relevant information.

Session notes below the fold…


Research Session – JS UT Experimentation

Recommend starting with Chai.js + Mocha, and Sinon.js for mocking when necessary. 

A lot of the test libraries available are similar, so it is hard to go wrong. Chai.js appears to be commonly used and is also integrated into larger frameworks. Since Chai is just a test authoring library, there is still a need for a tool to execute the tests. For current needs, Mocha has good support and a lot of reporting output options. At this point, the added benefits provided by theintern.io do not add immediate value for me, but transitioning to it from Mocha should not be difficult.
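
To give a flavour of how the pieces fit together, here is a small sketch; the notify() function is made up purely for illustration, with Chai providing the assertions, Mocha’s describe/it providing structure, and a Sinon spy standing in for a real collaborator:

  var expect = require('chai').expect;
  var sinon = require('sinon');

  // Hypothetical unit under test, defined inline to keep the sketch self-contained.
  function notify(recipients, sendEmail) {
    recipients.forEach(function (address) { sendEmail(address); });
  }

  describe('notify', function () {
    it('sends one email per recipient', function () {
      var sendEmail = sinon.spy();   // the spy records how it was called

      notify(['a@example.com', 'b@example.com'], sendEmail);

      expect(sendEmail.callCount).to.equal(2);
    });
  });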

Further analysis plans

  • Look into Test Runner outputs and how they might integrate into JUnit reports

Simplistic examples created during experimentation can be found on Github here.

Session notes below the fold…


Research Session – Javascript Unit Testing

Report Summary:

  • Much like the rest of the Javascript ecosystem, there are a lot of options for any given problem and not a lot of community consensus
  • There are two aspects of JS testing needing to be addressed: tools to test (libraries) and tools to report results (test runners)
  • When selecting libraries, there are two style choices: TDD (Test Driven Development) vs. BDD (Behaviour Driven Development)
    Historically, our company has been more comfortable with TDD
  • Chai.js is a TDD library that looks like a good place to begin learning and experimenting with test authoring
  • Still not sure of pros/cons between test runners
  • Further analysis plans

Session notes below the fold…


Test Sessions – Research Sessions

My responsibilities include researching and investigating tools to help others test software. I was recently asked to investigate options for helping developers author Unit Tests for Javascript applications.

While thinking about performing the investigation, it came to me that I was testing something: a domain of knowledge. And what is a good tool to record such testing? Test Sessions!

So, I am experimenting with this idea. I gave thought to my mission, wrote up an initial charter of exploration ideas, and have begun recording my path through the internet and contacts to learn more on Javascript unit testing.  Once I wrap it up, I will likely have more charters to explore and can try my hand at my first test report to hand back to the person requesting this information 🙂

Sharing – the key to learning