As I am shifting toward a testing role and currently preparing for a foundation-level tester certification, I noticed one area where testing folks and developers are fundamentally different: testers have clear definitions and documentation for a lot of things. There is even a widely accepted institution (ISTQB) that defines specific terms and processes. Developers might have that in some cases and for some languages, but often enough it’s a wild west of competing terms and ideas.
This leads to situations, especially when discussing software quality, in which developers treat terms as synonyms and use them interchangeably, even though those terms have significant differences from a tester’s perspective.
Since it has been and still is my personal goal to encourage developers, database people, and testing folks to collaborate more, I thought it might be a good idea to write down some of my findings while I explore and learn them in a way that is suited for developers.
A huge kudos and thank you to my teacher Peter “Pixel” Hartauer, who didn’t get tired of answering my numerous questions, explaining the different standards and correcting my wrong conclusions, even though I tried very hard 😀
When developers are first introduced to testing, they are often overwhelmed by the plethora of different types of tests.
It’s not easy to get a good explanation and overview: the different terms are often mixed up and conflated, and they may even carry different meanings depending on the team or project. This can lead to a lot of miscommunication and cause a lot of friction whenever people work together.
The most important thing to understand when it comes to names of test types is that they describe a specific aspect of a test. A regression test can also be a unit test, a feature test, and a black-box test at the same time, but that doesn’t mean that all these terms are synonyms. They categorize a test based on different aspects of the same test. Some aspects are often found in combination, but there’s no definitive rule.
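To make this concrete, here is a minimal sketch in plain Java (with a hypothetical `slugify` function as the unit under test — the function and its expected behavior are assumptions for illustration, not from the original post). One and the same test carries several category labels at once, depending on which aspect you emphasize:

```java
public class SlugifyTest {
    // Hypothetical unit under test: turns a title into a URL slug.
    static String slugify(String title) {
        return title.trim().toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")
                    .replaceAll("(^-|-$)", "");
    }

    public static void main(String[] args) {
        // Unit test (test level): exercises a single function in isolation.
        // Black-box test (method): only input and output are checked;
        //   the implementation is never inspected.
        // Regression test (trigger): re-run after every change to catch a
        //   previously fixed bug (untrimmed whitespace) from coming back.
        if (!slugify("  Hello, World!  ").equals("hello-world")) {
            throw new AssertionError("slug mismatch");
        }
        System.out.println("one test, several category labels");
    }
}
```

None of these labels contradicts the others; each answers a different question about the same test.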
We can imagine a test as a Lego figure with different parts. You could describe them as “the blue ones” and focus on one aspect: the torso part. In that case, you would mean both figures in the picture.
You could say “the red-haired ones” and describe them by their headwear part, limiting it to only one figure in the picture.
You might also say “the women” and describe them based on the printing of the head (maybe even in combination with the headwear). This example also highlights that categorization is never a fundamental truth: what you might categorize as a woman could as well be a man or nonbinary person. Categorization is always subjectively based on the observations and interpretations of the viewer.
No matter the description, we always describe the same figure – or the same test. But we describe them based on different aspects, which can be very meaningful because it provides context.
So what different aspects and categorizations do we have when it comes to test types?
Tests can be described based on their test level in the test process. This test process can differ depending on the project context, and so can the test levels that are part of the process.
- Component test / Module test / Unit test
- Integration test
- System test
- Acceptance test
Tests can also be described based on the method they use. The different testing methods also build several sub-categories and can be described in a hierarchical structure.
- Static tests
- Control flow analysis
- Metrics (e.g. McCabe)
- Dynamic tests
- Black-Box tests
- White-Box tests
- Error guessing
- Exploratory testing
- Dynamic analysis
Another category that can be used to describe tests is the event or situation in which their execution is triggered: When are they done? What’s the reason to run them?
Sometimes we also want to emphasize a certain quality aspect for which a test is created, and of course, we can categorize tests based on that.
This is also the point where things get really tricky, and where a lot of misinformation exists. For example, every functional test (quality aspect) is a Black-box test (method), because the only way to check the functional behavior of a piece of software is to provide input and check against the output. However, not every Black-box test is a functional test, because a Black-box testing technique can also be used to measure performance, a non-functional quality aspect of software.
That’s the reason why the terms are often conflated, but they describe different aspects of a test.
- Functional test
- Non-Functional test
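The distinction can be sketched in plain Java (the `lookup` function and the 100 ms threshold are hypothetical placeholders, not from the original post). Both checks use the same black-box technique — call the public interface, observe the result — but they target different quality aspects:

```java
public class ResponseTimeTest {
    // Hypothetical system under test, treated strictly as a black box.
    static String lookup(String key) {
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        // Functional aspect: does the output match the specified behavior?
        if (!lookup("42").equals("value-for-42")) {
            throw new AssertionError("wrong output");
        }

        // Non-functional aspect, same black-box technique: the assertion
        // targets performance, not behavior. (100 ms is an assumed budget.)
        long start = System.nanoTime();
        lookup("42");
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > 100) {
            throw new AssertionError("too slow: " + elapsedMs + " ms");
        }
        System.out.println("functional and non-functional checks passed");
    }
}
```

This is why “black-box test” (method) and “functional test” (quality aspect) are not synonyms, even though they frequently coincide.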
We might also use certain terms to describe a certain aspect of the test itself and categorize tests by that. These terms emphasize the underlying business process they cover.
- End-to-End test
- API test
- Feature test
It is not uncommon, especially among developers, to categorize tests by the tool they are written or executed with: “We don’t care what type of test it is; everything that is written in JUnit is a unit test for us.”
This is not a problem per se, because it’s just another, legitimate aspect of a test. The problem often arises if people are not aware of the aspect they are using for categorization and start conflating different aspects and contexts.
- Automated test
- Selenium test
- Cucumber test
- JUnit test
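This is where tool-based and level-based categorization can diverge. The sketch below uses two invented components (`PriceRepository` and `OrderService` are hypothetical, as is the pricing logic); in a real project this test would live in a JUnit class and be called a “JUnit test” or even a “unit test” by tool-based categorization, yet by test level it is an integration test:

```java
public class OrderServiceTest {
    // Hypothetical component A: knows item prices.
    static class PriceRepository {
        int priceOf(String item) { return "book".equals(item) ? 20 : 5; }
    }

    // Hypothetical component B: computes order totals using component A.
    static class OrderService {
        final PriceRepository prices;
        OrderService(PriceRepository prices) { this.prices = prices; }
        int total(String... items) {
            int sum = 0;
            for (String item : items) sum += prices.priceOf(item);
            return sum;
        }
    }

    public static void main(String[] args) {
        // By tool: a "JUnit-style" test. By test level: an integration
        // test, because it checks two modules acting in combination.
        OrderService service = new OrderService(new PriceRepository());
        if (service.total("book", "pen") != 25) {
            throw new AssertionError("wrong total");
        }
        System.out.println("integration behavior verified");
    }
}
```

Both labels are legitimate; problems only arise when people assume they describe the same aspect.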
Names and terms are fluid, carry a lot of context, and often differ from team to team and project to project. Next to the terms that categorize tests based on certain aspects, there are also colloquial terms that are frequently used and might mean something different in each environment.
There are also more ways to categorize tests, e.g. “Solitary vs. Sociable unit tests” by Martin Fowler, but they all have in common that they want to emphasize a certain aspect of the test.
- Unit test – a description for the different automated tests created during test-driven development. Depending on the team, they may share certain key features (e.g. fast, deterministic, isolated, composable – Kent Beck)
- Microtest – a term coined by GeePaw Hill to emphasize the fast, easy-to-write/read, automated, small-scope nature of the tests
The goal is never to nitpick and fight over terms; that’s not helpful for any team or project.
I tried to use links to the ISTQB glossary where possible, but even with a “standard” in place, it doesn’t mean that the definitions are set in stone. They are constantly debated, adapted, improved, and misused (e.g. the categorization of security testing as functional testing).
The advantage of having a basic knowledge of the different test categorizations is that testers and developers (and all the other team members) can communicate more effectively with each other:
“By saying Integration-test, do you want to highlight that they target how different modules act in combination?”
Let me close with a great quote from Peter that he got from his former boss:
Peter, remember one thing in projects: when everybody is yelling at each other, nothing is working, management escalates, the fronts between team members are hardening, and personal conflicts dominate the workday, then resort to extreme means yourself:
Talk to the people!