Last year at EuroSTAR (2009) I saw a really fascinating presentation by Julian Harty from Google (he has since moved on to eBay) about something called "trinity testing".
This is the kind of idea I really like, where the focus is on putting the right people together and getting them talking while the information is fresh and relevant.
Some of the content in this post is drawn from material that Julian was kind enough to send me after last year's EuroSTAR.

Let me explain trinity testing...

The motivation behind this idea is that even great requirements don't communicate everything we need.
- There are few great requirements... and usually there are few requirements at all.
- So people have to guess the desired behaviour of features and fixes.
- This leads to software that is flawed in ways we don't even know about!

There is also the problem of latency:
- Features are bundled into releases
- Developers finish code in between releases
- Testers are expected to test releases

In the meantime, the developer has moved on to the next feature.

When someone is working on an idea, in this case a developer working on a new feature, her knowledge of and focus on that particular area is high.
Therefore, if you were to ask her something about that code, she would find it easy to recall the correct details and explain them to you.
However, as time passes and she starts working on other features, bug fixes and so on, that knowledge is pushed further back in her mind and her attention shifts to the new things she is working on. She also has an incentive and an interest in the new work currently occupying her mind (it being planned for the next release), and it takes time to remember the old details.
During this time the needs might also have changed, so further changes might be required.

Too often the normal scenario goes like this.
A developer finishes the code and submits it, and after the feature has been packaged into a release, a tester picks it up for testing.
Hopefully the tester has had time to do some preparation beforehand, at their own desk, separate from the developers and the people writing the requirements, of course. Some time elapses, and more often than not bugs are found.
Now during this time the developer in question has started work on a new feature or bug fix and has her mind, focus and attention on that new work.
When a new defect report arrives, it forces her to do a context switch, which also affects her current work.
The more time that has elapsed since the developer left her old work behind, the bigger the impact when she makes this switch.
Another not uncommon scenario arises here: the defect that has been found does not clearly break any requirement, but it has enough suspicious characteristics for the tester to classify it as a bug.
Now a discussion starts between the two parties about the intended behaviour of the software, and sometimes a third party has to be consulted, usually a system specialist, requirements analyst or system architect.
This increases the overhead required for all parties to start communicating.

So the general idea of trinity testing is this: when the work is about to start, a developer, a tester and a "domain expert/analyst" sit down for a brief session (30-90 minutes) and talk the feature through; this is when the concept is made concrete.
The participants can vary from session to session depending on the information needed.
All the parties have different roles, responsibilities and things to prepare up front.

- The developer explains the feature's implementation details, and learns during the session the intention of the app, how it is going to be tested, how it might be broken, and ideas for test automation
- The tester explains fault models, any concerns he might have, how he intends to test and whether there are test automation ideas, and learns the intentions of the app (what it should do) and the implementation details (what it is actually doing)
- The domain expert contributes a deep understanding of the application and what it should do, and learns about the implementation details

The idea here is that the session should not be a long, painful meeting with lots of stakeholders; it is just for representatives of the people who are going to do the job.
Sometimes the developer isn't the only person who will do the work, and likewise for the tester; in that case one person acts as a representative for the group in order to keep the number of participants down.
Also, the domain expert can be a different person depending on the area that needs to be covered.
The three exchange information, ask questions, communicate, and then go off and do the work.
People can even do pair work afterwards (pair exploratory testing with the developer, for instance) if they feel brave and adventurous.
Everything is fresh in their minds, and this is the current task at hand.

- The analyst/domain expert works with the customer, defines features, etc.
- The developer fixes issues found in the session, implements the feature, etc.
- The tester tests in more depth, decides what should be automated if appropriate, finds defects, etc.

After everyone has finished their work, they get together again and exchange information in a new session.

- How did the developer implement the code?
- Did the analyst uncover new information from the customer?
- What did the tester find during his test sessions?

If the work being done is of a larger magnitude, it can sometimes be necessary to have sessions in the middle as well.

So, what are your thoughts on this concept? Is it something you have already tried at work or feel like trying out, or do you simply disagree?
If so, post a comment.