Coming to an update near you …

This week, Kamil Páral and Martin Krizek experimented with the bodhi feedback submission code landing in autoqa-0.4.4. This is a small, but significant step in integrating AutoQA test results with the bodhi update process. As Will Woods pointed out, there are plenty of activities following this step. But for now, it’s exciting to have some visual feedback for all the planning, patching and debate that’s occupied our cycles for some time.


Fedora QA special thanks …

During the Report and Summary section of the installation validation SOP, we outline a procedure for wrapping up a community test run. We have similar wrap-ups for test days and desktop validation. Typically, this involves highlighting the bugs encountered and calling out any issues worth noting (e.g. outstanding efforts or troublesome areas).

We use the wiki to plan and track test events. Using the wiki has a lot of benefits for us: primarily, it’s dead-simple to use and it’s integrated with FAS. When in doubt about any particular wiki syntax, there is a huge mediawiki community available to provide guidance (irc, forums and docs). One drawback of using the wiki is gathering metrics. For example, it’s bugged me that we haven’t been able to call out special thanks to test event contributors during an event recap. The good news is that the raw data is there; we just need some duct tape to pull it together. To demonstrate, see the command-line below (mind the pipes please) …

# for PLAN in Install Desktop; do  \
   echo -e "\n== F-14-Final-RC1 Testers - $PLAN ==" ;  \
   curl --stderr /dev/null "$PLAN&action=raw" \
   | grep -A999999 "^=.*Test Matrix.*=$" \
   | sed -e "s|{{\(result[^}]*\)}}|\n\1 |g" | grep "^result" \
   | gawk 'BEGIN{FS="[| ]"} $3{print $3} !$3{print "UNTESTED"}' \
   | sort | uniq -c | sort -bgr ; done

== F-14-Final-RC1 Testers - Install ==
     53 robatino
     53 kparal
     37 rhe
     37 jlaska
     23 jkeating
      8 cyberpear
      6 mkrizek
      2 rspanton
      2 newgle1
      2 dramsey
      1 UNTESTED
      1 bcl

== F-14-Final-RC1 Testers - Desktop ==
     16 UNTESTED
     14 mkrizek
     12 Masami
      7 kparal
      7 jskladan
      7 jreznik
      4 dramsey
      2 jsmith
      2 jlaska

Certainly not the prettiest command-line, and as you can imagine, bash is notoriously easy to mess up. I wonder if it would be better to scrub the history of a wiki page instead? Or perhaps use python-mwlib for a more programmatic approach?
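To make the pipeline a little less opaque, here is the same sed/awk counting logic run against a small, made-up fragment of raw wiki text (the page content below is invented purely for demonstration; a real run would fetch the page with curl and ?action=raw as shown above, and I’ve used plain awk here where the original used gawk):

```shell
# Invented sample in the style of a Test_Results page.
cat > /tmp/test_matrix.wiki <<'EOF'
Some page preamble we want to skip.
== Installation Test Matrix ==
| {{result|pass|kparal}} || {{result|pass|kparal}}
| {{result|fail|robatino}} || {{result|warn|robatino}}
| {{result|none}}
EOF

# Same extraction as above: skip to the test matrix, split each
# {{result|...}} template onto its own line, take the tester name
# (field 3, or UNTESTED when absent), and tally per tester.
grep -A999999 "^=.*Test Matrix.*=$" /tmp/test_matrix.wiki \
  | sed -e "s|{{\(result[^}]*\)}}|\n\1 |g" | grep "^result" \
  | awk 'BEGIN{FS="[| ]"} $3{print $3} !$3{print "UNTESTED"}' \
  | sort | uniq -c | sort -bgr
```

Running this on the sample produces two results each for kparal and robatino, plus one UNTESTED for the empty result template.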

Ignoring the data collection method, I’d like to focus on what the data tells me…

  1. First, along with Adam Williamson, I’d like to send a special thanks to our QA contributors listed above. As our test efforts mature, one thing is certain … we can’t do much of anything without our community. My impression has been that substantive discussion around quality topics on the mailing lists has been on the increase in recent releases. Additionally, with release validation and proventesters, my sense has been that participation and engagement have been on the rise. While this is just one data set, I hope that the metrics collection method discussed here can help explore my theory.
  2. The data above is for only one test event (Fedora 14 Final Release Candidate #1), so there are many more contributors from previous events than I can list in a single blog post. It certainly isn’t my intent to exclude them, but going forward I’d like to include the above data in each of our test events (test days and release validation) so we can recognize contributions.
  3. UNTESTED – Geez, we have quite a few UNTESTED results in the Desktop validation matrix. That’s not entirely a surprise since we focus primarily on ensuring the default desktop (GNOME) and KDE are tested. For the remaining desktop spins, we reach out to the appropriate spin maintainers to provide test feedback using the provided framework. There are already some ideas on the F-14 QA Retrospective page on improving this coverage.

I’m still thinking about the data, so I may have more ideas in the next few days. In the meantime, feel free to contribute your comments.

Embedding mediawiki templates inside <pre>

I’ve experimented with this a few times, but never found something that works.  My intent was to use some mediawiki templates inside code/file samples on the wiki. On the wiki, we typically use <pre> tags to properly format code/file samples. By definition, the html <pre> tag allows for pre-formatted text to be displayed as-is.  Meaning, most html tags inside <pre> will be displayed and not processed. There are exceptions for some font mark-up tags, but I’m not fully aware of which html tags are explicitly allowed inside <pre>.  Let’s try an example using the following html markup …

This is <strong>bold <span style="color: red;">red</span></strong> text
<a href="#link">This is a link</a>
<span style="color:green;
                font-family:'Times New Roman', Times, serif;
                font-weight:bolder;">big and green times</span>

This will display in a browser as …

This is bold red text
This is a link
big and green times

Introducing mediawiki into the mix complicates things slightly. By design (as a security precaution, I’d guess), mediawiki does not honor all HTML tags on a wiki page. However, it does recognize and permit some tags, <pre> being one of them. As noted previously, we use the <pre> tag quite a bit to call out command execution or file contents.

Where the wrinkle comes in is trying to use mediawiki templates inside a <pre> block. Well, why would you want to do that? Often, our command samples or file contents include information specific to a Fedora release version. Since Fedora releases come out about every 6 months, each time a new release comes out someone would need to modify every wiki page that references an explicit Fedora version. At a quick glance, that comes to about 179 wiki pages. Not all of those pages would need an update when a new release comes out, but it at least gives a rough estimate.

After some searching, I came across a forum post which recommends a solution that works. To demonstrate using a mediawiki template inside a <pre> block, I’ll use the wiki page Creating_a_Test_Day_Live_Image. On that page, we recommend checking out a git branch of spin-kickstarts whose name matches the upcoming Fedora release. I’d like to use the FedoraVersion template so we don’t need to constantly update this page each time a new version of Fedora is released.

To make use of the FedoraVersion mediawiki template inside a <pre> block, you would enter the following …

git clone 'git://' -b F-{{FedoraVersion|number|next}}

This instructs mediawiki to format the page as follows:

git clone 'git://' -b F-14

I have no idea why the magic combination of noinclude mediawiki tags works. Bonus points to anyone who can answer: why does this work?

Fedora 14 QA retrospective

For several releases now, the Fedora QA team has hosted retrospectives aimed at identifying process gaps and recommending corrective action.  For those unfamiliar with the term retrospective, these events are sometimes referred to as lessons learned or, the morbidly named, post-mortem.  We treat the retrospectives as living documents throughout a release, noting the good, bad and ugly in an objective manner as issues surface.  I’ve found that recording issues as they happen paints a much more accurate picture than trying to remember everything all at once after the product has been declared GOLD.  At the end of a release, we discuss, debate, propose recommendations and prioritize the work for the upcoming release.

Rear view mirror

We followed this process for several Fedora releases and most recently, with Fedora 13, created TRAC tickets to manage progress towards recommendations.  There are still a few tickets remaining, but that’s okay.  This was the first time we used TRAC to manage recommendations, and I wouldn’t be surprised if we overestimated how much we could accomplish in the short time between releases.  Of course, another way to look at this list is … if you’re interested in getting more involved with the Fedora QA team, review the tickets and grab one that looks interesting to you.

For Fedora 14, the retrospective process will continue.  I started the wiki page a bit later than I’d prefer, but the page is up and ready for your ideas and suggestions …  To help the discussion, I’ll list my two most recent entries …

  1. Virtualization Test Day – Participation in the virtualization test day was low and not what we had hoped for.  Why? What factors contributed to low attendance?  What can we do to improve virtualization test day attendance in the future?
  2. Root cause analysis of RHBZ#638091 – Kernel panic during boot after updating to glibc-2.12.90-13 (see bodhi).  The system would be fine until the prelink command ran; after that it would never boot, and all commands would segfault on a running system. The issue was initially given positive proventester feedback since the system survived a reboot.  The feedback was later reverted after the tester’s system became unusable and unbootable once prelink had run.  The good news: the problem was discovered in updates-testing before it landed in updates. However, it would be good to have some documented test procedures for a subset of packages to guide proventesters.

What other issues/ideas would you like to list on the Fedora 14 QA Retrospective?  While you’re in the mood for feedback, please note that the QA team isn’t the only group interested in keeping track of issues.  Check out John Poelstra’s thoughts on the Fedora 14 Schedule Retrospective, as well as Kevin Fenzi’s growing list of Updates lessons.

Fedora package maintainers, want test results?

For some time now, automated tests have been running against Fedora package builds thanks to the AutoQA post-koji-build watcher.  The current tests are by no means exhaustive, but they do provide some level of automated feedback for those interested in a quick sanity check of their package builds.  As a proof-of-concept, and before the spiffy results database is online, all test results are sent to the autoqa-results mailing list.

A few weeks back, Seth Vidal proposed several changes to allow package maintainers to sign-up for test results by mail.  We all get a lot of email as it is, so this certainly isn’t the long-term option.  However, it is an option for package maintainers who want test results emailed directly to them for review.

So, if you want to sign-up for email notification of test results (includes rpmlint, rpmguard and, if applicable, initscripts) whenever you build new packages:

  1. Using your FAS credentials, log in to  (for assistance, see access instructions)
  2. Sign up by running the autoqa-optin command with appropriate options.  For example, to enable test emails for pykickstart for all rawhide and Fedora 14 builds, you would run:
    # autoqa-optin pykickstart devel F-14
    Opted into autoqa results for pykickstart of devel release
    Opted into autoqa results for pykickstart of F-14 release

To query which releases are configured for test result notification, you can run the autoqa-optin script with just a package name.  For example:

# autoqa-optin yum
yum opted in for releases:

Available Releases to be opted into:


AutoQA help wanted — initscript verification

In Fedora QA, we’ve been looking to simplify integration of existing tests into AutoQA.  Thanks to Petr Muller, a commonly used test library called beakerlib is now available in Fedora.  Beakerlib provides a nice API to make test development simpler for testers.  If you are familiar with other test frameworks (e.g. unittest, nose, junit, xunit, etc.), you should feel right at home.  Since it’s no fun to have a test library without tests, folks from the Red Hat Enterprise Linux QA and Fedora QA teams have been integrating tests designed to automate aspects of the Fedora package update acceptance test plan.
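To give a feel for the beakerlib API, here is a minimal test skeleton (the package under test, coreutils, is just an example, and the beakerlib install path may vary by release; this is a sketch, not a test from the autoqa tree).  The script is written to a file and syntax-checked here; in a real test it would be the runtest.sh executed by the harness:

```shell
# Minimal beakerlib test skeleton (illustrative only).
cat > /tmp/runtest.sh <<'EOF'
#!/bin/bash
# Path where beakerlib installs on current Fedora; may differ by release.
. /usr/share/beakerlib/beakerlib.sh

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm "coreutils"            # bail early if package missing
    rlPhaseEnd
    rlPhaseStartTest
        rlRun "ls /" 0 "listing / should succeed"
    rlPhaseEnd
rlJournalEnd
rlJournalPrintText
EOF

# beakerlib may not be installed here, so just verify the script parses.
bash -n /tmp/runtest.sh && echo "syntax OK"
```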

Quantities limited, apply today!

One batch of tests is designed to validate LSB compliance for SysV initscripts.  As a proof-of-concept, Josef Skladanka committed several tests for a subset of initscripts.  The good news: as new packages were built, the tests behaved properly (see results for samba and openssh).  The bad news: we need more tests!  Josef answered the call and committed another batch of initscript tests.
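The sort of thing these tests check can be sketched with a toy initscript and a couple of LSB exit-status assertions (the script below is invented purely to illustrate; the real tests exercise actual packaged initscripts under /etc/init.d):

```shell
# Toy initscript standing in for /etc/init.d/<service>.
cat > /tmp/toy-initscript <<'EOF'
#!/bin/sh
case "$1" in
    start|stop) exit 0 ;;   # LSB: success
    status)     exit 3 ;;   # LSB status: 3 = program is not running
    *)          exit 2 ;;   # LSB: 2 = invalid or excess argument(s)
esac
EOF
chmod +x /tmp/toy-initscript

# Two of the LSB checks: 'status' on a stopped service returns 3,
# and an unknown action must not report success.
rc=0; /tmp/toy-initscript status || rc=$?
[ "$rc" -eq 3 ] && echo "status check: OK"
rc=0; /tmp/toy-initscript bogus || rc=$?
[ "$rc" -ne 0 ] && echo "unknown-action check: OK"
```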

While discussing the effort, we both felt it was a good time to open this up to fellow contributors.  The idea is simple.  We need help making sure the recent initscript tests behave as expected.  One just needs to …

  1. install the appropriate packages
  2. run the test
  3. report your findings

Pretty easy, right?  I know, famous last words.  We’re hoping to offer more events of this style in the future.  The hope, of course, is that by validating, or contributing, automated tests, you are helping to increase package acceptance test coverage and expose more people to the possibilities using AutoQA.

If you’d like to help, the procedure is documented at

2010-04-22 – Anaconda Storage UI test day

New and improved!  Hot on the heels of the well-attended graphics test week, we have another test day queued up for several installer features.  If you installed either Fedora 13 Alpha or Beta, you may have noticed the drastically different storage user interface.  This wasn’t an accident.  At FUDCon Berlin 2009, several members of the installer team sat down with Mo to make it easier to find and identify the devices that you want to work with during installation.  What followed was a pretty spectacular analysis of installer use cases and interaction (check out Mo’s blog for more info).  Interaction design is not something I know much about, but I know when something looks pleasing to the eye and feels well thought out.  So kudos to Mo and the installer team for coming up with good ideas and, more importantly, seeing them through.

The second feature under test involves the last bit of refinements to the storage filtering code first introduced in Fedora 11.  It’s certainly not as in-your-face as the user interface changes, but without it, you won’t be installing Fedora.  Just as with the UI changes, we want to send the installer through as many different code paths as possible.  This includes kickstart.

Okay, so enough prep.  All that means we have something shiny to test!  This Thursday, April 22, there will be a Test Day focused on the improved storage user interface and storage filtering features.  Several focus areas have been identified for testing, ranging from installation onto basic devices (IDE, SCSI, USB) to more complex environments (RAID, BIOS RAID, iSCSI, Multipath, FCoE).  Members of the installer development team, Fedora QA and Mo will be available to help guide testing and triage issues.  Given how this event will be exercising the results of the user interaction analysis, we’re also interested in your thoughts on the improved interface: what works, and what doesn’t seem quite right.

Check out the test day wiki, and come join #fedora-test-day on to share your results.  If you are new to IRC, read this page.