

This is part of our article series published as “OpenZFS in Depth”.

Zpool iostat is one of the most essential tools in any serious ZFS storage admin’s toolbox, and today we’re going to go over a bit of the theory and practice of using it to troubleshoot performance. If you’re familiar with the iostat command (a core tool on FreeBSD, and part of the optional sysstat package on Debian-derived Linuxes), then you already know most of what you need to know about zpool iostat, which is simply “iostat, but specifically for ZFS.” For those who are less familiar, both iostat and zpool iostat are very low-level tools which offer concrete information about the throughput, latency, and some of the usage patterns of individual disks. The zpool iostat command has been greatly expanded in recent years, with OpenZFS 2.0 (coming in FreeBSD 13.0) offering new flags, including -l to monitor latency, and the ability to filter output to a specific disk. OpenZFS 2.0 is also available on FreeBSD 12.1 and later via the openzfs-kmod port/package.
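As a quick sketch of those newer invocations (the pool name “data” comes from the demonstration system discussed below; the disk name is a placeholder, and exact flag support depends on your OpenZFS version):

    # per-disk latency statistics alongside the usual IOPS and throughput
    zpool iostat -l

    # statistics for a single pool, or for one disk inside it
    zpool iostat -v data
    zpool iostat -v data da3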

If run without any arguments, zpool iostat will show you a brief overview of all pools on your system, with usage statistics averaged since the most recent boot. This system has two pools: one comprised of fast but relatively small SSDs, and one with larger but relatively slow rust disks. We can also see the capacity and used/free information for each pool, along with statistics for how many operations per second and how much data per second each pool has averaged since the last boot. The demonstration system has only been up for 25 hours, and each pool received a scheduled scrub during those 25 hours, so we see much higher reads than writes, particularly on the slower bulk storage pool, data. The faster SSD pool includes home directories and VMs, so it’s received a much closer to even number of reads and writes despite the scrub; normally, it would be weighted heavily towards writes.
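The listing from the demonstration system isn’t reproduced here, but the default report is one row per pool, with columns grouped roughly like this (header only; values omitted):

    $ zpool iostat
                  capacity     operations     bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    banshee       ...    ...    ...    ...    ...    ...
    data          ...    ...    ...    ...    ...    ...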

This is good information to have, but we can go much deeper, and also more granular: we can find more information about individual disks, and we can also monitor output in near-realtime, not just as averaged statistics since the most recent boot.

One of the ways zpool iostat is most commonly used is to look for potential problems with individual disks. In general, we expect the disks of an individual vdev to receive roughly similar numbers of read and write operations. Let’s take a look at our demonstration system in a little greater depth: zpool iostat -v
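The per-vdev listing itself isn’t reproduced here; the command being discussed, plus the near-realtime variant mentioned above, look like this (the five-second interval is an arbitrary choice):

    # break statistics down per vdev and per disk, averaged since boot
    zpool iostat -v

    # the same breakdown, refreshed every five seconds instead of
    # averaged over the entire uptime
    zpool iostat -v 5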

On banshee, our solid state pool, we can see exactly even usage on both disks of the pool’s single mirror vdev: read or write, IOPS or throughput, both sides receive the same load. But on data, the rust pool, things aren’t quite so neat and pretty. In particular, we can see that the first disk of the first mirror vdev (the one with WWN ending in 5e7f) has received considerably more read operations than the other disk in the same vdev. That isn’t necessarily a big problem, since an individual read only goes to one side of a mirror vdev, not to both, but it could be an indication that the other disk in the vdev is having problems, forcing the vdev to read from 5e7f while 10d4 is unavailable or busy.

The other of data’s two mirror vdevs is much more evenly loaded. It’s still not perfect, the way the two SSDs in banshee are, but on rust disks, a little bit of load mismatch is expected.
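If a mismatch like the one on data’s first vdev persists or keeps growing, one reasonable follow-up (a sketch, not part of the original walkthrough) is to watch per-disk latency on that pool in near-realtime and see whether the quieter disk is simply responding slowly:

    # per-disk latency columns for the data pool, refreshed every five seconds
    zpool iostat -vl data 5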
