A recent mix-up with my flight over to California left me stranded in Düsseldorf for the night
(don't ask for the details, but it involved three (!) international airlines each blaming the others and no one taking responsibility...).

It turned out there was another person, Melissa, who had been booked on the same flight as me (Copenhagen – Düsseldorf – San Francisco) and who was now equally stranded there.

After a three-hour phone conversation between my travel agency (gotta love them!) and the three airlines involved, one of the airlines finally stood up and took responsibility.
Early, early the next morning I found myself at London Heathrow having a cup of coffee with Melissa while we waited for our connecting flight.
Melissa is a journalism and international politics student from California who is studying for her master's in Denmark.

As usual it didn't take long before I was asked what I do, and somehow I started talking about software testing with the passion of a wound-up Isabel Evans on stage at EuroSTAR (I still get shivers thinking about her keynote at EuroSTAR 2007).
We started discussing the idea that there is rarely a black-and-white situation, a right and a wrong answer, and I told her that it's very common for young engineering students at university who take social science courses (my wife teaches organisation and management) to ask what "the right answer" is.
My wife often has to explain to them that there rarely is such an answer, and that she is more interested in their thoughts and their ability to analyse the situation, use the textbook literature and theories, and build a case for "their answer".

These kinds of expectations can be very stressful and confusing for an engineer who has mostly studied "quantifiable" subjects such as math, physics and so on.
It's stressful because they are now expected to analyse the context and reason their way to an answer that they think suits the situation, a situation for which there might not be a given answer.

And it was at this point that I explained to Melissa that software development in "the real world" is a lot like that, especially for a tester.
Any tester can take software, look at the requirements (if there are any) and validate that the software meets the expectations.
But often the requirements are not perfect (if even present), and we are faced with a situation where we have to interpret them.

A good tester realizes at this point that we have to start finding other "test oracles", inputs to our testing that tell us how to interpret the requirements.
Lisa Crispin and Janet Gregory talk a lot about the relationship between the agile tester and the customer in their book "Agile Testing: A Practical Guide for Testers and Agile Teams".
They discuss how we actually have to work with the customer to truly understand what it is they expect us to deliver, or "what is the problem that the software is trying to solve...".

We have to look at the context and make a judgement based on what the customer expects, the status of the project (early, late, etc.), what impact the bug might have, and whether it even is a bug ("a bug is something that threatens the value delivered to the customer"), and from that we must decide how to proceed.
Post a bug report, or talk to a developer or project leader first, or perhaps investigate whether similar bugs have been encountered before and what their impact was, and so on.
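Just to make that idea concrete: if I were to caricature this judgement call in code (and it really is only a caricature — every name, field and rule below is made up by me for illustration, not a real triage process or tool), it might look something like this:

```python
# Illustrative sketch only: human triage judgement reduced to a toy function.
# All fields, thresholds and rules here are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Finding:
    threatens_customer_value: bool  # "a bug threatens the value delivered to the customer"
    project_phase: str              # e.g. "early" or "late"
    impact: str                     # e.g. "low", "medium", "high"
    seen_before: bool               # have similar bugs been encountered before?


def next_step(f: Finding) -> str:
    """Pick a next action for a finding based on its context."""
    if not f.threatens_customer_value:
        return "maybe not a bug - discuss with the team first"
    if f.seen_before:
        return "investigate earlier similar bugs and their impact first"
    if f.impact == "high" or f.project_phase == "late":
        return "talk to a developer or project leader first"
    return "post a bug report"
```

The point, of course, is the opposite of what the code suggests: the real rules cannot be written down in advance, because they shift with every project and every customer. The sketch only shows which *inputs* feed the judgement.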

In short, being "a good" software tester means so much more than just checking off requirements and posting a bug if we find something in the software (I'm intentionally using the term "something" loosely here).

After hearing this madman raving about software testing, Melissa just said, "Wow, I had no idea testing was so much more...".

That is why we testers must ALWAYS understand that we operate on a spectrum within a given context, and that we must always question the assumption that there is a right or a wrong answer.