Where do test plans come from?

There’s been some discussion recently about the feature process for F10. If you aren’t aware of the process, or are just curious, Paul Frields and John Poelstra have recently done some major cleanup of the process at http://fedoraproject.org/wiki/Features/Policy. There is some great content there, so definitely take a look.

What interests me most about the process is how it relates to testing. If you take a look at the sample feature template, there is a section for the author to provide a test plan. For some, the thought of a test plan may scare, or even bore, them to death. In the context of the feature process, the phrase “test plan” doesn’t necessarily mean a monolithic formal test matrix (like this). As stated best in Pirates of the Caribbean, the test plan section acts “… more like guidelines anyway.”

The feature process merely needs to provide the seeds for a test plan. With enough care and feeding, those guidelines can mature into a useful matrix that efficiently assesses the readiness of a feature and provides a baseline for future releases. But without guidelines, the feature runs the risk of not being accepted into an upcoming release or worse … being delivered with a less-than-satisfactory customer experience.

Okay, so what do we need in terms of guidelines? What are the criteria QA can use when evaluating a new feature for an upcoming Fedora release? At a minimum, I’d want to know the following:

  • Scope of proposed changes (new components, updates to existing components, project dependencies)
  • Packaging considerations (multilib, %obsoletes, dependencies)
  • Available documentation (wiki pages, mailing lists, PDFs, etc.)
  • Is there an upstream test community? (e.g. Firefox, OpenOffice.org, Python)
  • What level of testing has already been performed? (e.g. unit tests, performance)
  • Use cases (how users are expected to use this stuff)
  • List of applicable bugs tracking the development effort
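
To make those criteria a bit more concrete, here’s a rough sketch of the kind of structured record QA could keep for each feature. To be clear, this is purely illustrative: the field names and the example feature are made up, not something from the actual feature template. The point is simply that anything left empty is a question to take back to the feature owner.

    # A hypothetical sketch of a test plan "seed"; none of these fields come
    # from the real Fedora feature template.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestPlanSeed:
        feature_name: str
        scope: List[str] = field(default_factory=list)          # new/updated components, dependencies
        packaging: List[str] = field(default_factory=list)      # multilib, %obsoletes, dependencies
        documentation: List[str] = field(default_factory=list)  # wiki pages, mailing lists, PDFs
        upstream_test_community: str = ""                       # e.g. an upstream QA list or URL
        testing_done: List[str] = field(default_factory=list)   # unit tests, performance runs, ...
        use_cases: List[str] = field(default_factory=list)      # how users are expected to use it
        tracking_bugs: List[int] = field(default_factory=list)  # bug numbers tracking development

        def open_questions(self) -> List[str]:
            """Return the criteria the feature owner hasn't answered yet."""
            return [name for name, value in vars(self).items()
                    if name != "feature_name" and not value]

    # A half-filled seed immediately shows QA what to go ask about.
    seed = TestPlanSeed(
        feature_name="SomeNewFeature",        # hypothetical feature
        scope=["new package foo", "update to libbar"],
        use_cases=["user installs foo and runs it against libbar"],
    )
    print(seed.open_questions())
    # ['packaging', 'documentation', 'upstream_test_community', 'testing_done', 'tracking_bugs']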

The good news is that a lot of the above questions can be answered using the existing feature template. But what’s missing? How can we be confident that we’ve asked the right questions, the ones that will result in a more positive user experience? Let me know what you think …
