Antipattern #2 — Sharing state between tests
Shared state is synonymous with pain when you try to understand and debug tests.
😰 Unpredictable consequences
Unnecessarily sharing state is always a bad idea, whether in tests or implementation code. Giving variables a broader scope than they need is an antipattern. It occurs when you define a set of variables visible to multiple tests. Unfortunately, some libraries actively encourage it (e.g., “describe” blocks in Jest, RSpec, and Ginkgo; class variables in unittest; test suite variables in Testify), but it can happen without libraries too.
import unittest

class SomeTestSuite(unittest.TestCase):
    mock_x = ...  # shared state ⚠️: a class attribute visible to every test

    def test1(self):
        ...
        # uses self.mock_x or, even worse, mutates it

    def test2(self):
        ...
        # uses self.mock_x or, even worse, mutates it
describe('Some test suite', () => {
  let varX; // shared state ⚠️
  beforeEach(() => {
    varX = 1;
  });
  test('test1', () => {
    // Test logic using varX
    // It might modify varX
  });
  test('test2', () => {
    // Test logic using varX
    // It might modify varX
  });
});
Sharing state among tests couples them, making them a nightmare to understand and debug. If one test changes the shared state, it can affect many others in erratic ways; the tests are no longer isolated. This applies to shared state variables and, even worse, to shared mocks. You may say you reset your mocks in a “beforeEach” block, but that tackles the symptom, not the root cause.
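For instance, rather than sharing a suite-level mock and resetting it in “beforeEach”, each test can build its own. A minimal Jest sketch (fetchUser and the return values are hypothetical, not from any specific library):

test('test1', () => {
  // A fresh mock, local to this test: nothing to reset, nothing shared.
  const fetchUser = jest.fn().mockResolvedValue({ id: '42' });
  // ... exercise the code under test with fetchUser ...
});

test('test2', () => {
  // test2 builds its own mock; mutating it cannot affect test1.
  const fetchUser = jest.fn().mockRejectedValue(new Error('not found'));
  // ... exercise the code under test with fetchUser ...
});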
Even sharing immutable things in tests is terrible because manually changing one affects the whole suite. This applies to constants (which you should avoid) but also to fixture files holding a bunch of variables everyone relies on (e.g., upgradedClient, zeroValueOrder). Instead, rely on builders or helper functions (e.g., generateClient, generateOrder) that receive what matters and randomize the rest. Make sure they have no side effects, such as inserting into a database; they should only return fresh data.
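Here is a minimal sketch of such a builder in TypeScript (the Client shape and its fields are made up for illustration):

import { randomUUID } from 'crypto';

// Hypothetical Client type.
type Client = { id: string; name: string; tier: 'free' | 'premium' };

// The caller passes only what matters for the test;
// everything else gets a fresh, randomized default.
function generateClient(overrides: Partial<Client> = {}): Client {
  return {
    id: randomUUID(),
    name: `client-${randomUUID().slice(0, 8)}`,
    tier: 'free',
    ...overrides,
  };
}

// Each call returns fresh, independent data:
const premiumClient = generateClient({ tier: 'premium' });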
𝌑 Parallelized testing
Sharing state in tests forfeits the ability to run them in parallel. Even if you don’t need that today, why deny the option outright? You may need it later to speed up the suite and increase realism (in real life, many requests arrive concurrently), and by then it will be very costly to rewrite the tests to make them independent.
📖 Tests as documentation
Declaring/initializing variables outside tests can seem magical from each test’s point of view. Besides, you can’t quickly tell what’s relevant to each test. For example, suppose you declare/initialize 25 variables visible to every test, but most tests only need 2 or 3 of them; that’s very low test cohesion. Tests should be as self-reliant as possible, so reduce variable scope to the bare minimum: define variables only inside the test or, even better, inline them. In short, initialize what you need when you need it.
Tests are documentation, so they should be as self-explanatory as possible. They are like instruction manuals: you should be able to understand each one by relying solely on its code. I don’t find “before/after” hooks essential. They hide the logic and data that matter for each test, forcing me to scroll up and down to understand it. I prefer to make everything explicit in each test. If that requires a lot of code, I create helper functions that I can invoke from within each test. These are just creators; they always give back a fresh, independent state (beware that they should not hide essential test data; they should receive it as input). Generally speaking, prefer pure functions over side-effect functions.
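A sketch of such a test, continuing the generateClient example above. Everything the test needs is created inside it, with the essential data passed explicitly to the helpers (generateOrder, applyDiscount, and the 10% rule are hypothetical stand-ins):

// Hypothetical helpers and function under test, for illustration only.
type Order = { clientId: string; total: number };
const generateOrder = (overrides: Partial<Order> = {}): Order => ({
  clientId: randomUUID(),
  total: Math.random() * 100,
  ...overrides,
});
const applyDiscount = (order: Order, client: Client): Order =>
  client.tier === 'premium' ? { ...order, total: order.total * 0.9 } : order;

test('applies a 10% discount to premium clients', () => {
  // Essential data is explicit; the helpers randomize the rest.
  const client = generateClient({ tier: 'premium' });
  const order = generateOrder({ clientId: client.id, total: 100 });

  const discounted = applyDiscount(order, client);

  expect(discounted.total).toBe(90);
});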
🏗️ Coding backward
A technique I like to use: write code that references a variable that does not exist yet, then define it just above its first use. Since I am driven by intent, this helps me choose good names. It also ensures that the variable has the minimum necessary scope. This practice, known as coding backward, is something I apply in both tests and production code. In short, declare variables as late and as locally as possible.
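A tiny illustration of the result (computeTotal and the data are made up):

// Function under test (a trivial stand-in).
const computeTotal = (items: { price: number }[]) =>
  items.reduce((sum, item) => sum + item.price, 0);

test('sums line items', () => {
  // Coding backward: the final assertion was written first, using
  // lineItems and expectedTotal before they existed. Each variable
  // was then declared just above its first use, with minimal scope.
  const lineItems = [{ price: 40 }, { price: 2 }];
  const expectedTotal = 42;

  expect(computeTotal(lineItems)).toBe(expectedTotal);
});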