Continuous learning to guide testing decisions

Joshua Gorospe
IxN — The Intersection Blog
6 min read · Dec 6, 2017

--

“All testers do exploratory testing. Some do it more deliberately and in intentionally skilled ways.”

Cem Kaner, “A tutorial in exploratory testing”, Chicago QAI QUEST Conference, April 2008

Exploratory and model-based testing approaches have been around for decades. An exploratory test involves unscripted, concurrent test design and execution. While a system is tested this way, a number of supporting activities run in parallel. The most crucial is continuous learning about the system's behavior and attributes; as results come in during the session, the tester adapts the test design in real time.

While scripted tests are necessary for creating a foundation of basic test coverage, they can also lull us into a false sense of security if they are the only type of coverage in the continuous testing toolbox. In most cases, scripted tests only check for one type of issue, or they focus solely on issues covered in the handwritten script. Another problem with scripted tests is that the same static test sequence is repeated during every test run cycle.

Model-based testing can help us avoid static sequences by automatically generating tests from models of system behavior and requirements. For example, model-based test tools such as GraphWalker can help testers automatically create large volumes of varied test sequences using different test generator options and specified coverage parameters. Models also have the added benefit of serving as a visualization and collaboration tool that anyone can use.
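To make the idea concrete, here is a minimal Python sketch of what a model-driven test generator does. It is not GraphWalker itself, and the vertex and edge names are made up for illustration; it simply performs a random walk over a small behavioral model until an edge-coverage stop condition is met, so every run produces a different but still valid test sequence.

```python
import random

# A tiny behavioral model: each vertex maps to the edges (actions) that can be
# taken from it, along with the vertex each edge leads to.
MODEL = {
    "v_Start":    [("e_OpenApp", "v_Home")],
    "v_Home":     [("e_Search", "v_Results"), ("e_OpenSettings", "v_Settings")],
    "v_Results":  [("e_SelectItem", "v_Detail"), ("e_CloseResults", "v_Home")],
    "v_Detail":   [("e_GoBack", "v_Results")],
    "v_Settings": [("e_CloseSettings", "v_Home")],
}

def random_path(model, start="v_Start", edge_coverage=1.0):
    """Walk the model at random until the requested fraction of edges is visited,
    loosely mimicking a random(edge_coverage(100)) style generator."""
    all_edges = {edge for edges in model.values() for edge, _ in edges}
    visited, path, vertex = set(), [start], start
    while len(visited) / len(all_edges) < edge_coverage:
        edge, vertex = random.choice(model[vertex])
        visited.add(edge)
        path += [edge, vertex]
    return path

if __name__ == "__main__":
    # Each run prints a different generated test sequence.
    print(" -> ".join(random_path(MODEL)))
```

GraphWalker applies the same principle to much larger models and offers several generators and stop conditions beyond a plain random walk.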

Here is an example of what a model could look like. The following comes from a working Appium example in the graphwalker-example repo on GitHub.

Appium GraphWalker example model

Both of these approaches provide several advantages for testers, but combining them can produce an even more powerful tool. Exploratory testing is in most cases handled manually and requires a significant time investment. Model-based testing is usually an automated test generation approach, but it lacks the adaptability of exploratory testing. A tester's capabilities could be greatly enhanced by a flexible model-based exploratory test (MBET) process that runs on a nightly schedule, makes decisions based on the result statuses of scripted tests, and adapts its test sequences after every run.
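As a rough illustration of that decision step, here is a simplified Python sketch of how a nightly run might bias its exploratory sequences toward model areas whose scripted tests recently failed. The result statuses, area names, and weighting are hypothetical placeholders, not our actual implementation.

```python
import random

# Hypothetical scripted-test statuses and model areas, used only to illustrate
# how last night's results can steer tonight's exploratory session.
SCRIPTED_RESULTS = {"checkout": "failed", "search": "passed", "settings": "passed"}
MODEL_AREAS = ["checkout", "search", "settings", "profile"]

def choose_focus(results):
    """Prefer model areas whose scripted tests failed in the last run."""
    failed = [area for area, status in results.items() if status == "failed"]
    return failed or MODEL_AREAS          # nothing failed -> explore everything

def plan_tonights_session(results):
    focus = choose_focus(results)
    # Weight the session toward the risky areas, but keep some random breadth
    # so the run still does new things every night.
    return random.choices(focus, k=8) + random.choices(MODEL_AREAS, k=4)

if __name__ == "__main__":
    print(plan_tonights_session(SCRIPTED_RESULTS))
```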

“The most powerful testing does new things all the time. Automated tests can be designed this way and they explore system characteristics we never conceived of.”

Douglas Hoffman, Belgium Testing Days Conference, March 2014

At Intersection we believe that automation in testing is important for achieving good test coverage. This coverage is crucial given that our products help millions of people every day. That said, we also recognize that conventional static automation scripts alone are not enough. In addition to the basic coverage our various manual and automated scripts provide, which we manage with TestRail (the central hub for all of our test results), we also use combinations of exploratory testing, fault injection, property-based testing, fuzz testing, and model-based testing, and we experiment with various types of software testing oracles.

Test engineers in Intersection's Consulting & Solutions team are currently experimenting with unique model-based exploratory test POCs (proofs of concept) that utilize Docker containers. These POCs orchestrate various combinations and types of software testing oracles. The oracles are mechanisms that help us evaluate the result statuses of recently completed scripted test processes in TestRail (any test management tool would also work), as well as the logs generated by recently completed automation runs. They help make GraphWalker test path generation decisions based on all of the gathered historical information. Over time, the POCs described below make better decisions the more frequently they are used.
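To sketch what the TestRail side of that evaluation can look like, here is a minimal example of pulling result statuses for a run through TestRail's REST API (API v2) with the requests library. The instance URL, credentials, and run id are placeholders, and this is an illustration rather than our production code.

```python
import requests

TESTRAIL_URL = "https://example.testrail.io"      # placeholder instance
AUTH = ("user@example.com", "api-key")            # TestRail user + API key
FAILED_STATUS_ID = 5                              # 5 = Failed in TestRail

def failed_results_for_run(run_id):
    """Return the failed results for a run, pulled from TestRail's API v2."""
    resp = requests.get(
        f"{TESTRAIL_URL}/index.php?/api/v2/get_results_for_run/{run_id}",
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    data = resp.json()
    # Newer TestRail versions paginate and wrap the list in a "results" key.
    results = data["results"] if isinstance(data, dict) else data
    return [r for r in results if r.get("status_id") == FAILED_STATUS_ID]
```

The oracles can then use a list like this to pick a GraphWalker generator, stop condition, or starting element that steers the next run's paths toward the affected parts of the model.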

One of the earliest POCs we experimented with was built for the MTA On the Go (OTG) kiosk. This early version of the tool would run for more than 12 hours a day. Here is the model that was used to run the OTG kiosk test.

MTA On the Go (OTG) kiosk GraphWalker model

This early POC helped Intersection's OTG test engineers discover the following difficult-to-find, user-facing issue that had been reported several times in the subways.

OTG kiosk issue found by a 12+ hour model-based exploratory test run

FYI: The following section assumes familiarity with Docker and docker-compose.

The first high-level diagram below displays the capabilities of one of our newest POCs, which evolved from the MBET automation work done for the MTA On the Go kiosk. We refer to it as a “Path Deciding Model Based Exploratory Test”, and it runs within a single Docker container. The core components used in this first example are the following…

Path Deciding Model Based Exploratory Test

The second high-level diagram below displays the capabilities of another POC that we refer to as a “Wolfpack Strategy Model Based Exploratory Test”. It utilizes multiple Docker containers and is essentially a scaled-out version of the Path Deciding POC. The Alpha Container runs first and provides GraphWalker test path decisions for the four Follower Containers that run immediately after it. This was partially inspired by the hunting behavior of wolf packs. The core components used in this second example are the following…

Wolfpack Strategy Model Based Exploratory Test
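One simple way to wire up that Alpha-to-Follower handoff is a shared Docker volume: the Alpha writes the paths it has decided on, and each Follower picks up its own share. The sketch below assumes a hypothetical /shared mount, file naming scheme, and follower count; it illustrates the idea rather than the exact mechanism we use.

```python
import json
import pathlib

SHARED = pathlib.Path("/shared")   # volume mounted into every container (assumed)
NUM_FOLLOWERS = 4

def alpha_publish_paths(generated_paths):
    """Alpha: split the generated GraphWalker paths across the Followers."""
    SHARED.mkdir(parents=True, exist_ok=True)
    for i in range(NUM_FOLLOWERS):
        chunk = generated_paths[i::NUM_FOLLOWERS]          # round-robin split
        (SHARED / f"follower_{i + 1}.json").write_text(json.dumps(chunk))

def follower_load_paths(follower_id):
    """Follower: pick up the paths the Alpha decided on, ready to execute."""
    return json.loads((SHARED / f"follower_{follower_id}.json").read_text())
```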

The screenshot below shows the Robot Framework results log of Follower Container One. It captures the GraphWalker test path that was generated by the Alpha Container and executed as Robot Framework keywords within Follower Container One.

Robot Framework results log of Follower Container One
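As a rough sketch of how a generated path can be turned into Robot Framework keyword executions, the following Python example writes the path into a throwaway suite file and runs it through Robot Framework's programmatic run entry point. The step names and the kiosk_keywords.robot resource file are placeholders for keywords a real project would define.

```python
from robot import run      # Robot Framework's programmatic entry point

# Placeholder path; in practice this would come from the Alpha's GraphWalker output.
PATH = ["Open App", "Search For Route", "Select First Result", "Go Back Home"]

SUITE_TEMPLATE = """\
*** Settings ***
Resource    kiosk_keywords.robot

*** Test Cases ***
Generated Path
{steps}
"""

def run_generated_path(path, suite_file="generated_path.robot"):
    steps = "\n".join(f"    {step}" for step in path)
    with open(suite_file, "w") as f:
        f.write(SUITE_TEMPLATE.format(steps=steps))
    # robot.run returns the number of failing tests (0 means the path passed).
    return run(suite_file, outputdir="results")
```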

Here is a video demonstration of the Wolfpack Strategy POC running tests on an event-driven system. The video also shows it catching an issue after running for approximately ten minutes.

Wolfpack Strategy demonstration

Both of these POC examples use combinations of the following types of software testing oracles to help the tests choose a GraphWalker test path and to determine whether the system under test is actually misbehaving. If the software testing oracles detect something unusual, they trigger emails, send Slack messages, and push test result status updates to TestRail's API. (A minimal sketch of one such oracle follows the list below.)

  • Golden Master, or Consistency Oracle
  • Model-based Oracle
  • Statistical Oracle
  • Diagnostic Oracle
  • Probabilistic Oracle
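To give a flavor of the first of these, here is a minimal Python sketch of a consistency (golden master) oracle that posts to a Slack incoming webhook when a run's output deviates from the stored master. The webhook URL, file name, and data shape are assumptions made for illustration, not our actual implementation.

```python
import json
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder

def consistency_oracle(current_output, golden_master_file="golden_master.json"):
    """Flag any difference between the latest output and the stored golden master."""
    with open(golden_master_file) as f:
        expected = json.load(f)
    mismatches = {key: (expected.get(key), value)
                  for key, value in current_output.items()
                  if expected.get(key) != value}
    if mismatches:
        # Alert the team the same way the POCs do: a message to a Slack channel.
        requests.post(SLACK_WEBHOOK, json={
            "text": f"MBET oracle flagged {len(mismatches)} mismatch(es): {mismatches}"
        })
    return mismatches
```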

Model-based exploratory testing is a great new addition to our scheduled continuous testing and real-time monitoring processes at Intersection. In fact, the team is learning from the test sequences generated by the Path Deciding POC; it has given us several new test ideas that have helped us improve our existing scripted regression tests.

Still, there remains plenty of room to grow, and several improvement possibilities are currently being researched. For example, the scalability of these POCs can be enhanced with container orchestration tools such as Kubernetes. We also plan on adapting the Wolfpack Strategy POC to further extend a tester’s abilities by creating a version that allows a human tester to be the Alpha for the Follower Containers. We look forward to sharing more details in the future.
