Monitoring, statistics and IOPS testing of hard disk drives on Linux
Linux IOPS monitoring and testing tools are widely available in distributions and can be used without resorting to LiveCDs; the monitoring utilities are found in the sysstat
package (in Debian and CentOS)
This package is very useful for reporting on disk usage, both reads and writes and IOPS (Input/Output Operations Per Second), that is, the transactions per second against a disk or a RAID array
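If sysstat is not already present, it can usually be installed from the standard repositories; the package name below is the common one, but check your own distribution's repositories:
# apt install sysstat
# dnf install sysstat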
Linux IOPS Test
We are very often asked for Linux consulting on a slow server; the preliminary analysis of resource utilization usually covers load, CPU usage, RAM and disks, including read and write speeds and IOPS
To keep such problems from going undetected, we usually adopt real-time resource monitoring, capable of reporting unusual values and ensuring observability of usage through graphs
To stress our SSDs or hard disks, we can use the following benchmarks, available in the major Linux distributions
hdparm
Probably the best known and the simplest: it lets us test reads both through the cache and directly from the device, reporting the average read speed for each
# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 34876 MB in 2.00 seconds = 17469.92 MB/sec
Timing buffered disk reads: 2912 MB in 3.00 seconds = 970.63 MB/sec
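To reduce the influence of the kernel page cache on the buffered figure even further, hdparm can also read with O_DIRECT (assuming a reasonably recent hdparm version):
# hdparm --direct -t /dev/sda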
Test with Disk Dump dd
dd makes a bit-for-bit copy of a device and lets us obtain read and write statistics with the average speed in MB/s
Caution: double-check if=input and of=output; if you use a disk as the output device, you risk writing zeros over the entire disk
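Before running dd, it is a good idea to verify which device is which; a quick listing of block devices with their filesystems and mount points helps avoid overwriting the wrong disk:
# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT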
Test disk read speed with dd
# dd if=/dev/sda of=/dev/null bs=10M count=1024
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 13.2991 s, 807 MB/s
Test disk write speed with dd
# dd if=/dev/zero of=./zero.dd count=1024 bs=10M
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 3.74116 s, 2.9 GB/s
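Note that a figure like 2.9 GB/s largely reflects the page cache rather than the disk itself; a variant that flushes the data before dd reports the timing (conv=fdatasync) gives a more realistic write figure for the same hypothetical test file:
# dd if=/dev/zero of=./zero.dd count=1024 bs=10M conv=fdatasync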
Remember to remove the file
rm ./zero.dd
Bonnie++
A more comprehensive benchmark; it also allows exporting the statistics to an .html
file (report-bonnie.html
in the example) readable with any browser
# bonnie++ -d /tmp/miadirectory -u root -q | bon_csv2html > /tmp/report-bonnie.html
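If a quick look in the terminal is enough, the same CSV output can also be piped to bon_csv2txt, which ships with bonnie++ alongside bon_csv2html (the directory is just an example):
# bonnie++ -d /tmp/miadirectory -u root -q | bon_csv2txt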
fio (Flexible I/O Tester)
Another comprehensive benchmark to stress the I/O on our system. fio
is part of the Phoronix Test Suite but is also available as a package in the major Linux distributions
Test write speed
Test of sequential write speed on an array of 3 Micron 7500 NVMe disks in ZFS RAID-Z1:
fio --filename=testfile-seq-50g --size=50G --direct=1 --rw=write --bs=4M --ioengine=libaio --numjobs=10 --iodepth=32 --name=seq-write-test --group_reporting --ramp_time=4
WRITE: bw=31.0GiB/s (33.3GB/s), 31.0GiB/s-31.0GiB/s (33.3GB/s-33.3GB/s), io=369GiB (396GB), run=11894-11894msec
Test read speed
Test of sequential read speed on an array of 3 Micron 7500 NVMe disks in ZFS RAID-Z1:
fio --filename=testfile-seq-50g --direct=1 --rw=read --bs=4M --ioengine=libaio --numjobs=10 --iodepth=32 --name=seq-read-test --group_reporting --readonly --ramp_time=4
READ: bw=26.7GiB/s (28.7GB/s), 26.7GiB/s-26.7GiB/s (28.7GB/s-28.7GB/s), io=387GiB (415GB), run=14480-14480msec
Write IOPS test
Test of write IOPS on an array of 3 Micron 7500 NVMe disks in ZFS RAID-Z1:
fio --filename=testfile-rand-4g --size=4G --direct=1 --rw=randwrite --bs=4k --ioengine=libaio --numjobs=10 --iodepth=32 --name=rand-write-test --group_reporting --ramp_time=4 --time_based --runtime=300
write: IOPS=50.1k, BW=196MiB/s (205MB/s)(57.4GiB/300001msec); 0 zone resets
Read IOPS test
Test of read IOPS on an array of 3 Micron 7500 NVMe disks in ZFS RAID-Z1:
fio --filename=testfile-rand-4g --direct=1 --rw=randread --bs=4k --ioengine=libaio --numjobs=4 --iodepth=32 --name=rand-read-test --group_reporting --readonly --ramp_time=4
read: IOPS=52.0k, BW=203MiB/s (213MB/s)(15.2GiB/76860msec)
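Real workloads are rarely pure reads or pure writes; a mixed random test, here a hypothetical 70% read / 30% write split on a test file of the same kind, is often closer to what a database or a virtualization host actually generates:
fio --filename=testfile-rand-4g --size=4G --direct=1 --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --numjobs=4 --iodepth=32 --name=rand-rw-test --group_reporting --ramp_time=4 --time_based --runtime=300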
IOPS statistics
iostat
iostat reports statistics in tps (IOPS) and in reads and writes, both instantaneous and relative to the previous statistics cycle; note that the first report covers the time since boot, so it can be misleading if the tool is run without interval parameters
Replace the disk name with your own; in our case it is md1
, which we use to observe disk/RAID usage over a time interval:
# iostat -m 2 5 -d md1
Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
md1 14.57 0.87 0.33 0.00 12586 4739 0
Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
md1 0.00 0.00 0.00 0.00 0 0 0
This displays the IOPS statistics of the md1
device (a Linux MD software RAID) every 2 seconds, 5 times
If used without specifying a device, it also reports details of processor usage
%iowait is particularly interesting: it indicates the percentage of CPU time spent waiting for disk I/O
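For a deeper look, the extended statistics mode adds per-device latency and utilization columns such as r_await, w_await and %util (same md1 device as above):
# iostat -x -m 2 5 -d md1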
sar
$ sar -b 2 5
12:06:40 tps rtps wtps dtps bread/s bwrtn/s bdscd/s
12:07:00 12.00 0.00 12.00 0.00 0.00 392.00 0.00
12:07:02 71.50 1.00 70.50 0.00 32.00 668.00 0.00
12:07:04 100.50 3.50 97.00 0.00 112.00 6956.00 0.00
12:07:06 45.50 0.00 45.50 0.00 0.00 428.00 0.00
12:07:08 9.00 0.00 9.00 0.00 0.00 232.00 0.00
Average: 47.70 0.90 46.80 0.00 28.80 1735.20 0.00
As with iostat
, it displays statistics every 2 seconds, 5 times, and also returns the average, in this case computed over all devices present
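sar can also break the activity down per block device; with -d and -p (to print friendly device names instead of major-minor numbers) we get per-disk tps and throughput over the same kind of interval:
$ sar -d -p 2 5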
Instant test with iotop
This very useful command is a simple I/O monitor similar to top: it lets us sort, with the left and right arrow keys, the processes performing the most I/O on the disks, and it also displays the total read and write speed. A non-interactive batch example follows the sample output below
Total DISK READ: 13.96 K/s | Total DISK WRITE: 359.59 K/s
Current DISK READ: 13.96 K/s | Current DISK WRITE: 352.61 K/s
TID PRIO USER DISK READ DISK WRITE> COMMAND
393 be/4 root 0.00 B/s 359.59 K/s systemd-journald
1 be/4 root 0.00 B/s 0.00 B/s init
2 be/4 root 0.00 B/s 0.00 B/s [kthreadd]
3 be/4 root 0.00 B/s 0.00 B/s [pool_workqueue_release]
4 be/0 root 0.00 B/s 0.00 B/s [kworker/R-rcu_gp]
5 be/0 root 0.00 B/s 0.00 B/s [kworker/R-sync_wq]
6 be/0 root 0.00 B/s 0.00 B/s [kworker/R-slub_flushwq]
7 be/0 root 0.00 B/s 0.00 B/s [kworker/R-netns]
9 be/0 root 0.00 B/s 0.00 B/s [kworker/0:0H-events_highpri]
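iotop can also run non-interactively, which is useful for logging: -o shows only processes actually doing I/O, -b enables batch mode and -n limits the number of iterations (5 in this sketch):
# iotop -o -b -n 5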