Traditionally we are taught as testers to write test plans for features, and those test plans in turn transform into test cases.
The feeling, among managers but also commonly among people in test, is that for a feature one test case is good, two test cases are better, three must be... and so on and so on.

And over time we run several projects and several features, slowly gathering more and more test cases, just like a squirrel gathering nuts before winter.

Update: My colleague and chief architect at Electric Cloud, Scott Stanton, points out (quite rightly) that there needs to be a distinction between tests and unit tests.
"...There is a lot of value in large unit test suites when it comes time to refactor code..."
Thanks, Scott.

The problem is that once we've eaten our nuts (i.e. executed the test cases), the test cases don't disappear.
Now you're thinking: "Eating your cake and having it too!"

Wrong!

I've been in organizations that are proud of having several thousand test cases (automated or manual), and in some that are embarrassingly aware of their thousands of test cases.

One thing we have learned from the "Toyota Production System" is that inventory is waste.
The seven wastes tell us that it takes time to produce the inventory (waste of overproduction), that it takes storage space (waste of stock on hand), and that it keeps the organisation from acting quickly on current needs (waste of movement).
But interestingly there is the seventh waste as well: "waste of making defective products", a waste that makes it "difficult to find defects".
I'll get back to this.

When working with agile, and with agile testing, we need to stay nimble; we need to be able to start testing on day one, when our team begins the sprint work ("Testing is more than running the software").
That means asking questions and talking to people (developers, the product owner, and yes, sometimes even customers).

If we are burdened with a large inventory of test cases that we need to run continuously, that slows us down.
At some point we spent effort producing these test cases, time that could have been spent on other activities.
Someone has to maintain them as the product evolves, and let's not forget the effort of executing these scripted test cases over and over with identical input.

At this point you might say that it's no problem since you don't actually run all of these test cases.

Well, what are they doing there then?

There is also the factor that, faced with this sheer mass of test cases, people will have a hard time finding the information they are looking for. This is especially true if you are a new tester at the company and are trying to find your way around.
And even though storage space is cheap these days, it is still a cost, and let's not forget the eco aspect as well: storage consumes power.

I quoted "difficult to find defects" earlier, and I was not referring to finding defects in the product we are testing.
I was referring to it being hard to find defects in the test cases we have.
How are you going to keep up with maintaining your test cases (i.e. finding the errors in them) if you can't even get an overview of them all?

Test cases contain defects as well.

So what can you do if you're buried in this mess of legacy scripted test cases?
You do just what you do in any spring cleaning... You. Throw. Stuff. Out.
If a test case hasn't been executed in ages, if no one knows what its purpose is, if it doesn't bring any value (in other words, new information), then you throw it out... mercilessly (see the sketch below for a starting point).
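
If you want help spotting the candidates, here is a minimal sketch in Python. It assumes, hypothetically, that your test-management tool can export the inventory as a CSV with columns name, last_executed (ISO date, empty if never run) and purpose; the file format and column names are mine for illustration, not any particular tool's.

```python
import csv
import sys
from datetime import datetime, timedelta

# What counts as "ages" is a judgment call -- tune to your context.
STALE_AFTER = timedelta(days=365)

def prune_candidates(path):
    """Yield (name, reason) for test cases that look like dead inventory.

    Assumes a hypothetical CSV export with columns:
    name, last_executed (ISO date, empty if never run), purpose.
    """
    today = datetime.now()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_run = (row.get("last_executed") or "").strip()
            purpose = (row.get("purpose") or "").strip()
            if not last_run:
                yield row["name"], "never executed"
            elif today - datetime.fromisoformat(last_run) > STALE_AFTER:
                yield row["name"], f"not executed since {last_run}"
            elif not purpose:
                yield row["name"], "no documented purpose"

if __name__ == "__main__":
    # Usage: python prune_tests.py inventory.csv
    for name, reason in prune_candidates(sys.argv[1]):
        print(f"{name}: candidate for removal ({reason})")
```

Note that the script only flags candidates; whether a test case still brings new information is a judgment call, so a human still makes the final cut.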

Because having too large an inventory equates to waste on several levels.