TSM - Filling the gap between business and technology in automation testing

Mădălin Ilie - Cluj Java Discipline Lead

Why?

One of the main challenges Agile teams face is planning their testing activities so that:

- the Product Owner has full visibility and a clear mapping between acceptance criteria and test scenarios;
- existing functionality is continuously verified as new features are added;
- the Product Owner understands, in non-technical terms, which scenarios are automated and can make risk-based testing decisions.

Automated testing solves the second bit by providing a continuously evolving and repeatable set of tests that can be executed at any time to make sure you didn’t break any existing functionality when adding new features.

BDD (Behavior-Driven Development) solves the first bit by creating a common language between the Product Owner and the team, so that the Product Owner has full visibility and a clear mapping between his acceptance criteria and given-when-then test scenarios, which can then be automated or tested manually (with very clear traceability down to the Selenium level by using JBehave).
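To make this concrete, here is a minimal sketch of a given-when-then scenario as it could be written in a JBehave .story file; the scenario itself is purely illustrative and not taken from a real project:

    Scenario: Order summary is displayed after checkout
    Given a customer with items in the basket
    When the customer completes the checkout
    Then the order summary page is displayed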

But in order to also solve the third bit, we need something that presents to the PO, in an easy-to-understand, non-technical manner, which testing scenarios are being automated. This enables the PO to make risk-based decisions on the testing side and to better decide with the team what is worth automating. It also allows a good alignment on the regression testing approach and helps the PO understand the regression packs, for example by reducing the manual UAT cycles.

What?

Spider is a Java tool that I wrote to solve this third bit. It creates an automation testing coverage report with a clear mapping between the BDD-style test scenarios and the actual automated tests (in our case, Selenium tests). Using this report, the PO can decide to raise the automated test coverage in the riskier areas, raise a flag if low-impact stories are treated with too much importance or simply be happy with what he/she sees :)

How?

Spider is based on conventions. The acceptance criteria from the user stories are transformed into given-when-then test scenarios (GWTs). These GWTs live in JIRA (the tool used for Agile project management) in a testing task linked to the story. Each GWT has an id, and the convention is that this id must uniquely identify a GWT test scenario for a particular story; duplicates are not allowed. Using a Java annotation called @Covers, we mark the GWTs that are covered by a particular Selenium test.
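The annotation itself can be very small. The sketch below shows one way @Covers could be declared; the annotation name comes from this article, but the exact declaration and the id format are illustrative assumptions:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Marks a Selenium test with the ids of the GWT scenarios it covers.
    @Retention(RetentionPolicy.RUNTIME) // kept at runtime so the tool can read it via reflection
    @Target(ElementType.METHOD)
    public @interface Covers {
        String[] value(); // GWT ids, e.g. {"PRJ-123-GWT-1", "PRJ-123-GWT-2"}
    }

A Selenium test would then declare the scenarios it covers (the story key PRJ-123 and the test below are hypothetical):

    import org.junit.Test;

    public class CheckoutSeleniumTest {

        @Covers({"PRJ-123-GWT-1", "PRJ-123-GWT-2"})
        @Test
        public void shouldDisplayOrderSummaryAfterCheckout() {
            // Selenium steps driving the checkout flow would go here
        }
    }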

Spider will generate a report like the following:

The generated report has a chart displaying how much of each user story is covered by Selenium tests. The green part shows the actual percentage of the whole story that is covered (the total size of a bar represents the total number of story points).

Spider also computes a Sprint coverage as a weighted average, considering only the functional stories delivered inside a sprint and ignoring the technical ones. The weighted coverage takes into consideration the number of story points of each story. For example, if we have:

- a 5-point story with 50% coverage
- a 2-point story with 100% coverage
- an 8-point story with 60% coverage

The weighted coverage will be: (5*50 + 2*100 + 8*60) / (5 + 2 + 8) = 62%.
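In code, the calculation is straightforward. Below is a minimal sketch, assuming a hypothetical Story record holding the story points and the coverage percentage (this is not Spider's actual model):

    import java.util.List;

    public class WeightedCoverage {

        // Hypothetical story model: story points and coverage percentage
        record Story(int storyPoints, double coveragePercent) {}

        // Average of the per-story coverage, weighted by story points
        static double weightedCoverage(List<Story> stories) {
            double weightedSum = 0;
            int totalPoints = 0;
            for (Story s : stories) {
                weightedSum += s.storyPoints() * s.coveragePercent();
                totalPoints += s.storyPoints();
            }
            return weightedSum / totalPoints;
        }

        public static void main(String[] args) {
            // The three stories from the example above
            List<Story> sprint = List.of(
                    new Story(5, 50), new Story(2, 100), new Story(8, 60));
            System.out.println(weightedCoverage(sprint)); // prints 62.0
        }
    }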

The second page is a mapping table between the GWT ids and the actual Selenium tests that cover them.
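Using the hypothetical ids from the earlier sketch, such a mapping could look like this:

    GWT id        | Selenium test
    PRJ-123-GWT-1 | CheckoutSeleniumTest.shouldDisplayOrderSummaryAfterCheckout
    PRJ-123-GWT-2 | CheckoutSeleniumTest.shouldDisplayOrderSummaryAfterCheckout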

When?

Spider can be run at any moment for a particular Sprint; it will simply consider only the stories that have GWT scenarios created. But in order to produce actually usable results, the recommendation is to run it after the automated tests for the stories in that sprint are done.

Where?

Spider is not open source yet and can only be used within Endava. There are plans to release it as an open-source project at some point.