Anaconda Storage Test Day recap

Adam Williamson said it best: anaconda storage testing is a tough thing to get folks excited about. It doesn’t rotate on a cube, slowly spew solar flares or read your fingerprint … but it sets the stage for everything that comes after you first boot up a Fedora 11 install. Naturally, getting things right is important … and failing during installation is a quick way to ward off potential contributors.

It’s an understatement to say the weeks leading up to the test day were very busy for the anaconda development team. To reduce disruption to rawhide users, the storage rewrite changes were self-contained in a rather large updates.img. That is no small feat, considering "all of anaconda’s code for creating and configuring disk partitions, LVM, mdraid, dmraid, multipath, iSCSI, LUKS devices, and filesystems is being rewritten." [1]
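For anyone who wants to try an updates.img like this before the changes land in rawhide, anaconda can load it at boot time via the updates boot option. A minimal sketch of the workflow — the URL below is a placeholder, not the actual location of the test day image:

```shell
# At the installer boot prompt, point anaconda at an updates image over HTTP
# (replace the example URL with wherever the image is actually published):
#   linux updates=http://example.org/anaconda/updates.img
#
# Alternatively, boot with a bare "updates" flag:
#   linux updates
# and anaconda will prompt for an updates disk during startup.
```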

The day began by emphasizing the raw in rawhide. Mirrors were still updating, which caused confusion while testing patches and raised the question … what rawhide am I testing? What seemed to help here was:

  1. Patience – the mirrors eventually had the latest and greatest content … but there’s no GREEN light that tells you a mirror is up to date
  2. .treeinfo – this file in the base directory of any installation tree contains a SHA256 hash for every file included in the tree.  Comparing its contents with the main download site can be helpful.
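The comparison in #2 can be scripted. A minimal sketch in Python, assuming a .treeinfo-style checksums section — the excerpt and file path below are illustrative, not taken from a real tree:

```python
import hashlib

# Hypothetical excerpt from a .treeinfo [checksums] section; entries take
# the form "path = algorithm:hexdigest".
treeinfo_excerpt = """\
images/pxeboot/vmlinuz = sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
"""

def parse_checksums(text):
    """Map each listed path to its (algorithm, digest) pair."""
    entries = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        path, value = (part.strip() for part in line.split("=", 1))
        algo, digest = value.split(":", 1)
        entries[path] = (algo, digest)
    return entries

def verify(data, algo, expected):
    """Return True when the payload hashes to the recorded digest."""
    return hashlib.new(algo, data).hexdigest() == expected

checksums = parse_checksums(treeinfo_excerpt)
algo, digest = checksums["images/pxeboot/vmlinuz"]
# In practice you would read the downloaded file's bytes and verify those;
# the digest above happens to match the bytes b"test" for demonstration.
print(verify(b"test", algo, digest))
```

Running the same parse against the .treeinfo on your chosen mirror and against the one on the main download site quickly shows whether the mirror has caught up.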

With that behind us, the rubber met the road.  The focus for the day was exercising the most common use cases to reduce the number of obvious warts when the changes hit rawhide.  This included, but was not limited to, most every operation you can perform from the disk partitioning step.


At the time of this post, testing that fell in the category of "things not expected to work" included:

  • dmraid
  • partitioning via kickstart
  • discovery of existing mdraid arrays
  • user interaction through anaconda screens (clicking Next/Back)
  • hardware raid controllers that have nonstandard device paths, like cciss (untested, not expected to work)
  • resizing of devices/filesystems
  • bootloader setup on multi-boot systems (untested)
  • ppc?

The event was very busy and only possible thanks to the hard work of folks like Dave Lehman, Chris Lumens, and Joel Granados.  Throughout the day they were able to triage, patch, review and integrate hotfixes into an updates.img for testers.  Also, many thanks to the testers for coming armed with interesting storage environments (j-rod and refried).

While not everything was filed in bugzilla, the list of formally recorded issues found includes:

488722 Properly align encrypted LV – LUKS device
488738 KeyError: ‘LVM2_VG_UUID’
488743 ImportError: No module named partRequests
488747 NameError: global name ‘iutil’ is not defined
488772 [StorageRewrite] Traceback during kickstart install
488800 [StorageRewrite] DeviceException: Could not find device for path /dev/sdb
488807 [StorageRewrite] PartitionException: no type specified
488812 [StorageRewrite] TypeError: ‘NoDevice’ object is not iterable
488859 [StorageRewrite] TypeError: argument 2 must be string, not None
488860 [StorageRewrite] Adding iSCSI device fails – TypeError: ‘Storage’ object is not iterable
488861 [StorageRewrite] LVMError: pvremove failed for /dev/sda1 – custom partitioning with 1 local and 1 iSCSI disk fails

At the end of the test day, several things were very clear:

  1. We need more interesting storage environments to throw at anaconda (SANs, FC, multiple disks, dmraid, mdraid, iSCSI)
  2. We need to provide another collaborative forum for testing #1
  3. I heart virt-manager and kvm

Stay tuned to the F11 Test Day schedule for information on Anaconda Storage Test Day part 2.  In the meantime, please feel free to take Rawhide for a spin on a spare laptop/workstation/VM.

4 responses to “Anaconda Storage Test Day recap”

  1. Virt?

    the question is whether concentrating on storage testing in KVM, where the installs are usually clean and disks more ideal, will reproduce all the fun LVM-crash bugs in Anaconda 🙂

    • Re: Virt?

      That’s a great point!

      Normally I would agree and suggest we aim for hardware diversity in testing. However, given the state of the code (hot out of the oven) and the nature of the changes … I wanted to (and we were requested to) focus on stabilizing the common install paths. Many of the bugs discovered above related to previous disk contents, the number of disks, and formatting disks. I employed 3 x86 systems nearby and several virt guests, which all ended with the same results. For me it’s good to know that I can rely on virt for a specific type of testing.

      I had hoped we could gather more data on multipath, SAN and iSCSI testing … however the code just was not ready for that type of abuse. I expect sometime post-beta we will be asked to test more complex “real-world” storage scenarios. Can I interest you in joining?
