How to run ZFS tests on VM environment

1. Set up a VM with:

  • CPU - 4

  • Memory - 8GB

  • Hard disk - 100GB (for software installation) SATA

  • Three additional hard drives - 8GB (for ZFS tests) SATA or NVMe

  • Use a DEBUG build, i.e. one built with the debug macros enabled (a quick way to verify the VM resources is shown after this list)
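
Once the VM is created, you can sanity-check that it received the resources listed above (psrinfo, prtconf and diskinfo are standard illumos utilities; the expected values correspond to the configuration above):

psrinfo | wc -l             # should print 4 (the number of CPUs)
prtconf | grep '^Memory'    # should report 8192 Megabytes
diskinfo                    # should list the 100GB system disk and the three 8GB test disks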

Note : Do not give other pools names that the tests may use themselves (e.g. testpool*); otherwise the tests can work incorrectly.
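
If you want to confirm that no conflicting pool exists before starting, list the currently imported pools (zpool list is the standard ZFS command):

zpool list -o name          # nothing matching testpool* should appear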

The setup process is described in this instruction.

Note : If you have already created the dilos_BASE BE according to the instruction and there is no new tested stable version, just execute the following command:

beadm activate dilos_BASE

Otherwise, if you already have the dilos_ISO_orig BE, execute the command:

beadm activate dilos_ISO_orig

Then reboot and start the instruction from the section Step 6. Prepare VM for Tests Running:

reboot

If you have neither of these two BEs, start the instruction from the beginning.
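
If you are not sure which BEs are present, you can list them first (beadm list is the standard illumos command; dilos_BASE and dilos_ISO_orig are the names this instruction expects):

beadm list                  # shows existing BEs; N = active now, R = active on reboot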

Note : Create the ztest user if you forgot to do it according to the instruction:

sudo useradd -m -d /var/ztest -g staff -s /bin/bash ztest

sudo passwd ztest

echo "ztest ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers.d/ztest

2. Install packages for ZFS tests:

Note : If you already did this in the previous step when preparing the VM to run, you can skip this step and go to the next one.

apt update
reboot
apt install system-test-zfstest testrunner system-file-system-zfs-tests python3 screen

These packages contain kernel drivers, so a reboot is required after the installation.

reboot
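
Once the machine is back up, you can confirm that the packages were installed (dpkg -l is the standard Debian query tool that DilOS uses; package names as in the apt command above):

dpkg -l system-test-zfstest testrunner system-file-system-zfs-tests | grep '^ii'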

3. Update /etc/sudoers:

sudo -E /usr/bin/sed -i.bak '/secure_path/a\Defaults exempt_group+=staff' /etc/sudoers
sudo -E /usr/bin/sed -i.bak 's/ requiretty/ !requiretty/' /etc/sudoers
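
Since a syntax error in /etc/sudoers can lock you out, it is worth validating the file after the edit (visudo -c is the standard check mode):

sudo visudo -c              # should report that /etc/sudoers parses OK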

4. Create auxiliary scripts required for cli_user tests:

Create a file /usr/share/zfs/zpool.d/upath with this content or execute the following command:

su -
mkdir /usr/share/zfs/zpool.d
cat > /usr/share/zfs/zpool.d/upath <<EOT
#!/bin/sh
if [ "\$1" = "-h" ] ; then
        echo "Show the underlying path for a device."
        exit
fi

# shellcheck disable=SC2154
echo upath="\$VDEV_UPATH"
EOT
sed -i '/./!d' /usr/share/zfs/zpool.d/upath
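
Before moving on you can check that the helper behaves as expected (the -h handler comes from the script above; the device path below is only an illustrative example):

sh /usr/share/zfs/zpool.d/upath -h                           # prints the help line
VDEV_UPATH=/dev/dsk/c2t1d0 sh /usr/share/zfs/zpool.d/upath   # prints upath=/dev/dsk/c2t1d0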

Then create a file /usr/share/zfs/zpool.d/iostat with the following content or execute the following commands:

cat > /usr/share/zfs/zpool.d/iostat <<EOT
#!/bin/sh
#
# Display most relevant iostat bandwidth/latency numbers.  The output is
# dependent on the name of the script/symlink used to call it.
#

helpstr="
iostat:         Show iostat values since boot (summary page).
iostat-1s:      Do a single 1-second iostat sample and show values.
iostat-10s:     Do a single 10-second iostat sample and show values."

script="\${0##*/}"
if [ "\$1" = "-h" ] ; then
    echo "\$helpstr" | grep "\$script:" | tr -s '\\t' | cut -f 2-
    exit
fi

if [ "\$script" = "iostat-1s" ] ; then
    # Do a single one-second sample
    interval=1
    # Don't show summary stats
    brief="yes"
elif [ "\$script" = "iostat-10s" ] ; then
    # Do a single ten-second sample
    interval=10
    # Don't show summary stats
    brief="yes"
fi

if [ -f "\$VDEV_UPATH" ] ; then
    # We're a file-based vdev, iostat doesn't work on us.  Do nothing.
    exit
fi

out=\$(iostat -x "\${VDEV_UPATH##*/}" \
    \${interval:+"\$interval"} \
    \${interval:+"1"} | tail -n 2)

# Sample output (we want the last two lines):
#
# Linux 2.6.32-642.13.1.el6.x86_64 (centos68)   03/09/2017      _x86_64_        (6 CPU)
#
# avg-cpu:  %user   %nice %system %iowait  %steal   %idle
#           0.00    0.00    0.00    0.00    0.00  100.00
#
# Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
# sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
#

# Get the column names
cols=\$(echo "\$out" | head -n 1)

# Get the values and tab separate them to make them cut-able.
vals=\$(echo "\$out" | tail -n 1 | tr -s '[:space:]' '\\t')

i=0
for col in \$cols ; do
    i=\$((i+1))
    # Skip the first column since it's just the device name
    if [ \$i -eq 1 ]; then
        continue
    fi

    # Get i'th value
    val=\$(echo "\$vals" | cut -f "\$i")
    echo "\$col=\$val"
done
EOT
sed -i '/./!d' /usr/share/zfs/zpool.d/iostat

After that do the following steps:

cd /usr/share/zfs/zpool.d
cp iostat iostat-10s
cp iostat iostat-1s
chmod +x upath iostat iostat-10s iostat-1s
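
The iostat helpers can be exercised the same way as upath; the device path below is illustrative and must point at one of your real disks:

VDEV_UPATH=/dev/dsk/c2t1d0 ./iostat-1s      # one 1-second iostat sample for that disk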

5. Log in as user ztest and run the tests:

You can run the tests with a special test script. The script can be created with any text editor or by executing the following command:

cat > z.sh <<EOT
#!/bin/bash
LOG="/var/tmp/zfstest.\$(date +%F-%T).txt"
ipa=\$(sudo ifconfig | grep -A 1 vmxnet3s0 | sed '1d; s/^[ \\t]*inet[ \\t]*//; s/[ \\t]*netmask.*\$//')

sudo /sbin/zpool destroy testpool
sudo -E /usr/sbin/svcadm disable -s svc:/system/fmd:default
sudo -E find /var/fm/fmd -type f -exec rm {} \\;
sudo -E /usr/sbin/svcadm enable svc:/system/fmd:default

# we need 3 x 8G drives with 512b sector size
export DISKS="\$1 \$2 \$3"
export KEEP="rpool data"
export ZFS_USE_ONLY_DISKS=yes

# remove partitions on the disks
# (note: /dev/zero, not /dev/null -- dd must actually write zeroes to wipe the labels)
for D in \$DISKS
do
    sudo -E /usr/bin/dd if=/dev/zero of=/dev/dsk/\${D}p0 bs=512 count=512K
done

# cleanup drives before tests:
sudo -E /sbin/zpool create -f test123321 \$DISKS
sudo -E /sbin/zpool destroy test123321
sudo -E rm -rf /tmp/mnt* /tmp/ufs.* /tmp/log.* /tmp/tmp.* /tmp/rst* /tmp/tmpfs*
test -d /var/tmp/test_results && sudo rm -rf /var/tmp/test_results
sudo -E /usr/sbin/devfsadm -C

uname -a > \${LOG}
echo "IP Address : \${ipa}" >> \${LOG}
echo "" >> \${LOG}
echo "Disk IDs : \$DISKS" 2>&1 | /usr/bin/tee -a \${LOG}

# run tests
/bin/ksh /opt/zfs-tests/bin/zfstest \$* 2>&1 | /usr/bin/tee -a /var/tmp/z.tmp

echo "Results Summary" >> \${LOG}
echo "" >> \${LOG}
skip=\$(sed -n '/^SKIP[ \\t]*/p' /var/tmp/z.tmp | sed 's/^SKIP[ \\t]*//; s/[ \\t]*\$//')
fail=\$(sed -n '/^FAIL[ \\t]*/p' /var/tmp/z.tmp | sed 's/^FAIL[ \\t]*//; s/[ \\t]*\$//')
pass=\$(sed -n '/^PASS[ \\t]*/p' /var/tmp/z.tmp | sed 's/^PASS[ \\t]*//; s/[ \\t]*\$//')
total=0
if [ "\${pass}" != "" ]; then
    total=\$((\${total} + \${pass}))
fi
if [ "\${fail}" != "" ]; then
    total=\$((\${total} + \${fail}))
fi
if [ "\${skip}" != "" ]; then
    total=\$((\${total} + \${skip}))
fi
echo "TOTAL TESTS: \$total" >> \${LOG}
sed '1,/Results Summary/d' /var/tmp/z.tmp >> \${LOG}
sudo rm -f /var/tmp/z.tmp
EOT
chmod 777 z.sh

After this script finishes, you will find the log in the file /var/tmp/zfstest.<ISO TIME>.txt. It contains information in the format used in Test Results.

Before running this script you have to get the IDs of the three 8GB disks of this VM. You can do this with the diskinfo command and then run the test script with these IDs as parameters (the IDs are in the DISK column of the diskinfo output), for example:

ztest@zone:~# sudo diskinfo -p
TYPE    DISK    VID      PID                SIZE          RMV  SSD
SATA    c2t0d0  VMware   Virtual SATA Ha>   107374182400  no   no
SATA    c2t1d0  VMware   Virtual SATA Ha>   8589934592    no   no
SATA    c2t2d0  VMware   Virtual SATA Ha>   8589934592    no   no
SATA    c2t3d0  VMware   Virtual SATA Ha>   8589934592    no   no
ztest@zone:~# ./z.sh c2t1d0 c2t2d0 c2t3d0

A full test run takes about 5-6 hours or more; afterwards you can find the full logs at:

/var/tmp/test_results/<ISO TIME>
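
While the run is still in progress you can follow it from another terminal; /var/tmp/z.tmp is the intermediate file that the z.sh script above tees the test output into:

tail -f /var/tmp/z.tmp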

If you use the screen utility, your command will look like this:

ztest@zone:~# screen ./z.sh <disk1> <disk2> <disk3>

Then you can detach at any time by pressing Ctrl-A d and reattach in a new SSH session with the command:

ztest@zone:~# screen -r
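
If more than one session exists, list them first and reattach by ID (screen -ls and screen -r <session id> are standard screen options):

ztest@zone:~# screen -ls
ztest@zone:~# screen -r <session id>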


