Anti-Patterns In Unit Testing
Test driven development (TDD) is a paradigm where test cases are written based on stories or requirements before any coding is started. It can be difficult to wrap your brain around the idea of writing tests first if you haven’t developed that way before; sometimes you don’t know what you need to test until you’ve built it. The idea with TDD is to start with tests for the basic business requirements and build to those tests. Then, if anything else comes up in the development process that needs to be tested, you can create tests for it.
Unit tests are test cases that are designed to test specific units of your code. Unit tests are code themselves and so require maintenance and have their own set of design patterns and anti-patterns.
An anti-pattern is an observed pattern of behavior, typically repeated, that is ineffective at best and directly harmful at worst. Anti-patterns in unit tests are patterns seen throughout a test suite that may seem like a good way to do things but in effect cause more damage than good.
This is far from a comprehensive list of anti-patterns you will find in unit tests. The ones listed here are some of the more common ones you’ll find. Check out the link to StackOverflow for a much larger list of patterns. Just like healthy design patterns, anti-patterns are not something that people set out to create but are patterns that are noticed over time and across developers and code. Use the ones listed here to guide you when you are writing test code, so that you know what to avoid, or when you are working with someone else’s code, so that you can easily recognize something that may need to be refactored.
No structure when creating test cases.
Test code is also code and should have a structure to it so that it can be easily read and altered if needed. Not having a structure to test code makes it hard to understand and maintain. A simple structure is to organize the code into the different stages of the testing process. Arrange, Act, Assert are the most common testing phases, though you could use the Gherkin Given, When, Then or anything else that breaks the test code into sections based on what is happening in the code.
The Arrange, or Given, stage is used to initialize any variables and dependencies of the thing you are testing. This is where you set your mock data that is test specific. The Act, or When, stage calls the unit of code, method or function, you are actually testing. This is typically a very small section of the testing code, often a single line. The Assert, or Then, stage verifies that the unit of code called actually did what was expected. It checks the outputs and verifies that calls to mocked dependencies like repositories were made.
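The three stages can be sketched as follows. This is a minimal example, not from the original article; the `apply_discount` function and its price repository are hypothetical stand-ins for a unit under test and a mocked dependency.

```python
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: looks up a price and applies a discount.
def apply_discount(order_id, repo, percent):
    price = repo.get_price(order_id)
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_applies_percentage_discount(self):
        # Arrange: initialize test-specific mock data and dependencies.
        repo = Mock()
        repo.get_price.return_value = 100.0

        # Act: call the single unit of code being tested (one line).
        total = apply_discount(order_id=42, repo=repo, percent=50)

        # Assert: verify the output and the call to the mocked dependency.
        self.assertEqual(total, 50.0)
        repo.get_price.assert_called_once_with(42)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest))
```

Note that the Act stage is a single line, and the Assert stage checks both the return value and the interaction with the mocked repository.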
There is too much setup to run the test cases.
Tests require an excessive amount of setup just to be able to run. This may be in the Arrange stage of the individual tests or in a setup method that runs before the tests. In some cases there can be hundreds of lines of setup code. This makes it almost impossible to understand what is being tested because of all the extra code in the setup.
Typically this occurs because of poor use of mocking or code that hasn’t been built with testing in mind resulting in the test being too tightly coupled to the implementation of the code. The result is that tests become brittle and unable to be maintained as any change to the code would require a massive rewrite of the setup for the test.
Improper clean up after tests have been run.
Most testing suites/frameworks will have a cleanup method that is run either after each test or after all the tests have run. Your code in this method will remove objects from memory, close files, etc.
Improper cleanup occurs when the code that cleans up the mocks and anything created in the test is insufficient or entirely lacking. It may leave files open or objects in memory causing memory leaks. This can be especially important if your tests are doing any type of file manipulation or you are creating files specifically for testing.
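For file manipulation in particular, a teardown method keeps test files from lingering. The sketch below uses Python’s `unittest`, where `tearDown` runs after every test whether it passes or fails; the file-reading test itself is a made-up example.

```python
import os
import tempfile
import unittest

class FileHandlingTest(unittest.TestCase):
    def setUp(self):
        # Create a throwaway file specifically for this test.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
        with open(self.path, "w") as f:
            f.write("line one\n")

    def tearDown(self):
        # Runs after every test, pass or fail, so the file never lingers
        # on disk or leaks an open handle into later tests.
        if os.path.exists(self.path):
            os.remove(self.path)

    def test_reads_first_line(self):
        with open(self.path) as f:
            self.assertEqual(f.readline(), "line one\n")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(FileHandlingTest))
```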
Tests depend on something outside of the test suite.
There are a few anti-patterns that can come from test cases relying on data or objects created outside of the test suite. They typically occur when there is something the test needs in order to run but it is not mocked, either because it was missed or because it is not possible to mock.
One specific anti-pattern occurs when the tests rely on an environment specific variable that may not exist outside of development. This could be anything from an authorization issue to a particular file on the developer’s machine that is only used for testing.
Similarly, tests may require that certain data be populated before the test is able to run. If it is not mocked or populated in the Arrange stage, then finding what is missing may require developers to sort through the code, as null reference exceptions are all but useless for figuring out what is actually missing.
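One way out of the environment-variable version of this problem is to pin the variable inside the test itself. The sketch below is an illustration, not from the article; the `api_base_url` function and `SERVICE_URL` variable are hypothetical.

```python
import os
import unittest
from unittest.mock import patch

# Hypothetical unit under test that depends on an environment variable.
def api_base_url():
    return os.environ["SERVICE_URL"].rstrip("/")

class ApiBaseUrlTest(unittest.TestCase):
    def test_strips_trailing_slash(self):
        # patch.dict sets the variable for this test only, so the test
        # no longer depends on what happens to exist on the developer's
        # machine or in a particular deployment environment.
        with patch.dict(os.environ, {"SERVICE_URL": "https://example.test/"}):
            self.assertEqual(api_base_url(), "https://example.test")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ApiBaseUrlTest))
```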
Sneaking in refactors to test code while building new features.
A lot of developers have the attitude of, “while I’m in here…”, which leads them to attempt to refactor while building new features. As daunting as that may be in code, doing it in the unit tests while also testing the new feature is not the easiest or wisest path to refactoring. The best approach to refactoring your test code is to have a technical debt story or card and dedicate specific time to the refactoring that allows you to completely focus on the tests and the code being tested.
In reality, many developers aren’t able to have technical debt stories, so they have to sneak in their refactors as they go along. If this is the case, do your refactoring either before you start work on the feature, especially if that will help testing the new feature, or after you have built and tested the feature. The key is to focus on one thing at a time. You are either building and testing new features or you are refactoring. Trying to do both at the same time will get overwhelming and lead to mistakes.
Rewriting private methods as public because testing is difficult.
Unit tests are designed to test units of code based on interfaces, not the particular implementation details of that code. Private methods are implementation details that are to be tested indirectly through the public interfaces.
This anti-pattern arises when a developer is trying to increase code coverage but is not able to test all of the private methods in a class. They therefore start making those private methods public in order to test them.
If you have too many private methods associated with one public interface or there are too many possibilities to test them all, instead of making the private methods public, consider breaking down your public method into multiple component methods.
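That refactor can look like the sketch below: instead of exposing `_normalize`-style private helpers just to test them, the helpers are extracted into a small collaborator class whose public interface can be tested directly. All names here are hypothetical illustrations.

```python
class AddressFormatter:
    """Extracted collaborator: its methods are public on purpose and
    can be unit tested through their own interface."""
    def normalize(self, raw):
        # Collapse repeated whitespace and title-case the address.
        return " ".join(raw.split()).title()

class CustomerImporter:
    """The original class keeps a small public surface and delegates
    what used to be private helper methods to the collaborator."""
    def __init__(self, formatter=None):
        self._formatter = formatter or AddressFormatter()

    def import_record(self, record):
        return {
            "name": record["name"].strip(),
            "address": self._formatter.normalize(record["address"]),
        }

importer = CustomerImporter()
rec = importer.import_record({"name": " Ada ", "address": "12  main   st"})
```

The collaborator can also be mocked when testing `CustomerImporter`, which keeps each test focused on one unit.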
Overuse of abstractions (it’s too DRY).
While test code is real code, it is not implementation code and shouldn’t be written the same way. It’s easy to get into the habit of not repeating yourself by abstracting anything that you use more than once.
Test code, however, is more than code; it is documentation as well. Because it is documentation, it needs to be descriptive and easy to follow. Instead of DRY, test code should be DAMP (Descriptive And Meaningful Phrases). Since the goal is to understand the test and the code you are testing, some repetition may be necessary.
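As an illustration of DAMP over DRY (a made-up example, with a hypothetical `Cart` class), the second test below deliberately repeats a little setup instead of hiding it in a shared helper, so each test reads as a self-contained description of one behavior.

```python
import unittest

# Hypothetical class under test.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)

class CartTest(unittest.TestCase):
    def test_total_of_single_item_is_its_price(self):
        cart = Cart()
        cart.add("book", 12.0)
        self.assertEqual(cart.total(), 12.0)

    def test_total_sums_multiple_items(self):
        # The repeated construction is deliberate: a reader sees the
        # whole scenario here without chasing a shared setup helper.
        cart = Cart()
        cart.add("book", 12.0)
        cart.add("pen", 3.0)
        self.assertEqual(cart.total(), 15.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartTest))
```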
Multiple tests testing the same or similar things.
Multiple tests are created that have the same test code, but change the values passed. They are basically testing the same thing. If you find yourself copy/pasting entire tests then only changing a line or two then you are likely using this anti-pattern.
You will want separate test cases for the happy path through the code and for error conditions. These test different events and outcomes, so while much of the setup will be similar, your assertions will be different.
A good way to address this issue is through the use of table-driven testing which allows you to run the same test code with different values for each run. This reduces duplication of test code and lets you compare the different cases.
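A table-driven sketch of that idea, using Python’s `unittest` and its `subTest` context manager (the `grade` function is a hypothetical unit under test):

```python
import unittest

# Hypothetical unit under test.
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

class GradeTest(unittest.TestCase):
    def test_grades(self):
        # One table of (input, expected) cases replaces several
        # copy/pasted tests that differ only by a line or two.
        cases = [
            (0, "fail"),
            (59, "fail"),
            (60, "pass"),
            (100, "pass"),
        ]
        for score, expected in cases:
            with self.subTest(score=score):
                self.assertEqual(grade(score), expected)

    def test_out_of_range_raises(self):
        # The error path stays a separate case: different outcome,
        # different assertion.
        with self.assertRaises(ValueError):
            grade(101)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(GradeTest))
```

Each failing row is reported individually by `subTest`, so you still see exactly which case broke.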
Piggybacking on existing tests.
On the other extreme from multiple tests is piggybacking or adding assertions to existing tests to test a distinct or new feature. The more of these you add the less descriptive your test becomes.
Continued use of piggybacking will eventually cause your test names to become like comments in code, useless or lies. As a part of the code’s documentation you want them to be as descriptive and accurate as possible.
This doesn’t mean that you have to create a new test case every time you make changes to the code. If you are adding new methods or new features then you should add new test cases, but if you are altering functionality of an existing method then you will modify the existing tests to reflect the change in functionality.
Testing for a specific bug.
Sometimes you need to reproduce a bug, so a new test case is created for reproducing that bug. Most times developers are less descriptive in naming these tests, calling them something like “testForBugXYZ”.
The issue with this type of test case is not in the moment but years later when that bug is no longer even a memory but the test is still there. Something changes in the code and that test now fails but no one knows what it was testing.
Most of the time these tests could be added to existing test cases that didn’t cover enough area to catch the particular bug. If a new test case needs to be developed then make sure to name it based on behavior it is testing.
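As a small illustration (hypothetical code, not from the article), the same regression test named by behavior instead of by bug number still explains itself years later:

```python
import unittest

# Hypothetical unit under test that once crashed on blank input.
def parse_quantity(text):
    text = text.strip()
    return int(text) if text else 0

class ParseQuantityTest(unittest.TestCase):
    # Named for the behavior it protects, not "testForBugXYZ", so a
    # future failure immediately tells the reader what regressed.
    def test_blank_input_is_treated_as_zero(self):
        self.assertEqual(parse_quantity("   "), 0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseQuantityTest))
```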
Test cases are concerned with more than one unit of code.
Unit tests are not rings of power; one test case should not affect the others, and especially should not have other cases relying on it. If things change in one area, you will have a tough time maintaining the tests because you have multiple places to make changes.
It might not be one test over others but a chain of tests that must be run in a certain order. This can happen when the changes made by one test are used in another test. This could also be one enormous test that covers multiple methods or processes. If the test code is more than a handful of lines then you might consider breaking it up. When tests get too big they can have bugs of their own.
Testing everything, including framework or language code.
Not every possible case needs to be tested. At a certain point you reach diminishing returns in your test cases where you actually waste time creating tests rather than save yourself time by having them.
Some of these tests are testing cases so rare that they will only need to be tested if they malfunction. Others may not even impact the application if they fail or will be caught immediately when using the app.
If you aren’t sure whether a test is necessary, then do a quick cost/benefit analysis. Look at how often the code will be used and whether changes might affect it to see the benefits of the test. Then look at the amount of time it takes to write and maintain the test; don’t forget maintenance costs. If the cost is more than the benefit, then don’t write it.
Tests require too much intimate knowledge of the code to run.
Tests only need to know about the methods they are testing, and even then only the interface, meaning what is going in and coming out of them, not specific implementation details. This particular anti-pattern, or set of them, comes from attempts to get 100% code coverage.
Test cases may be as innocent as breaking rules about encapsulation to know too much about a method or as dangerous as reading private fields or even accessing private files to run. This may not be a problem with the construction of test cases but with the class being tested. It might need to be refactored, if possible, to use less data hiding and fewer private fields and methods.
Tricks of the Trade
Your yearly goals may follow similar patterns to unit testing. This includes anti-patterns such as the ones we listed above.