
Fedora QA special thanks …

In the Report and Summary section of the installation validation SOP, we outline a procedure for wrapping up a community test run. We have similar wrap-ups for test days and desktop validation. Typically, this involves highlighting the bugs encountered and calling out any issues worth noting (e.g. outstanding efforts or troublesome areas).

We use the fedoraproject.org/wiki to plan and track test events. Using the wiki has a lot of benefits for us: primarily, it’s dead simple to use, and it’s integrated with FAS (the Fedora Account System). When in doubt about any particular bit of wiki syntax, there is a huge MediaWiki community available to provide guidance (IRC, forums, and docs). One drawback with using the wiki is gathering metrics. For example, it’s bugged me that we haven’t been able to call out special thanks to test event contributors during an event recap. The good news is that the raw data is there; we just need some duct tape to pull it together. To demonstrate, see the command-line below (mind the pipes please) …

# for each validation matrix, tally the {{result|status|tester}} templates per tester
for PLAN in Install Desktop; do
    echo -e "\n== F-14-Final-RC1 Testers - $PLAN =="
    # fetch the raw wiki markup for the results page
    curl --stderr /dev/null "https://fedoraproject.org/w/index.php?title=Test_Results:Fedora_14_Final_RC1_$PLAN&action=raw" |
    # keep everything from the "Test Matrix" heading onward
    grep -A999999 "^=.*Test Matrix.*=$" |
    # split each {{result|...}} template onto its own line, keep only those lines
    sed -e "s|{{\(result[^}]*\)}}|\n\1 |g" | grep "^result" |
    # field 3 is the tester's name; an empty field means an untested cell
    gawk 'BEGIN{FS="[| ]"} $3{print $3} !$3{print "UNTESTED"}' |
    # count results per tester, most active first
    sort | uniq -c | sort -bgr
done

== F-14-Final-RC1 Testers - Install ==
     53 robatino
     53 kparal
     37 rhe
     37 jlaska
     23 jkeating
      8 cyberpear
      6 mkrizek
      2 rspanton
      2 newgle1
      2 dramsey
      1 UNTESTED
      1 bcl

== F-14-Final-RC1 Testers - Desktop ==
     16 UNTESTED
     14 mkrizek
     12 Masami
      7 kparal
      7 jskladan
      7 jreznik
      4 dramsey
      2 jsmith
      2 jlaska

Certainly not the prettiest command line, and as you can imagine, bash pipelines like this are notoriously easy to mess up. I wonder if it would be better to mine the revision history of the wiki page instead? Or perhaps use python-mwlib for a more programmatic approach?
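I haven’t dug into the mwlib API yet, but here is a rough sketch of what the programmatic route might look like using nothing but the Python standard library. It mirrors the shell pipeline above (same page titles, same {{result}} template parsing), so treat it as a starting point rather than a finished tool.

#!/usr/bin/env python
# Rough sketch: tally {{result|status|tester}} templates per tester,
# mirroring the shell pipeline above. Standard library only (no mwlib).
import re
import urllib.request
from collections import Counter

BASE = "https://fedoraproject.org/w/index.php"

def tally(plan):
    url = "%s?title=Test_Results:Fedora_14_Final_RC1_%s&action=raw" % (BASE, plan)
    raw = urllib.request.urlopen(url).read().decode("utf-8")
    # only look at the Test Matrix section, like the grep -A999999 above
    heading = re.search(r"^=.*Test Matrix.*=$", raw, re.MULTILINE)
    if heading:
        raw = raw[heading.start():]
    counts = Counter()
    for template in re.findall(r"\{\{(result[^}]*)\}\}", raw):
        fields = template.split("|")
        # fields: ["result", status, tester, ...]; no tester means UNTESTED
        if len(fields) > 2 and fields[2].strip():
            counts[fields[2].strip()] += 1
        else:
            counts["UNTESTED"] += 1
    return counts

for plan in ("Install", "Desktop"):
    print("\n== F-14-Final-RC1 Testers - %s ==" % plan)
    for tester, count in tally(plan).most_common():
        print("%7d %s" % (count, tester))

The regex does the job of the sed step, and Counter.most_common() replaces the sort | uniq -c | sort -bgr dance. A real implementation would want some error handling around the HTTP fetch, but the shape of the problem is the same.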

Setting aside the data collection method, I’d like to focus on what the data tells me …

  1. First, along with Adam Williamson, I’d like to send a special thanks to our QA contributors listed above. As our test efforts mature, one thing is certain … we can’t do much of anything without our community. My impression is that substantive discussion around quality topics on test@lists.fedoraproject.org has increased over recent releases, and with release validation and proventesters, participation and engagement seem to be on the rise as well. While this is just one data set, I hope that the metrics collection method discussed here can help explore that theory.
  2. The data above covers only one test event (Fedora 14 Final Release Candidate #1), so there are many more contributors from previous events whom I can’t list in a single blog post. It certainly isn’t my intent to exclude them; going forward, I’d like to include this data in the recap of each of our test events (test days and release validation) so we can recognize contributions.
  3. UNTESTED – Geez, we have quite a few UNTESTED results in the Desktop validation matrix. That’s not entirely a surprise, since we focus primarily on ensuring the default desktop (GNOME) and KDE are tested. For the remaining desktop spins, we reach out to the appropriate spin maintainers to provide test feedback using the provided framework. There are already some ideas on the F-14 QA Retrospective page for improving this coverage.

I’m still thinking about the data, so I may have more ideas in the next few days. In the meantime, feel free to contribute your comments.