I had a few more suggestions thrown out at me before I could wrap this one up.
- Try disabling the RAID controller read-ahead
- Try a few custom options to XFS
- Try RAID-10
First, my final “best” state benchmarks for comparison:
Disabling the read-ahead was an interesting thought.
However, it didn’t seem to make any real difference.
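The post doesn’t say which RAID controller is in the box, so as a hypothetical: on an LSI/MegaRAID controller, the read-ahead policy can be toggled per logical drive with MegaCli along these lines.

```shell
# Hypothetical: assumes an LSI MegaRAID controller managed with MegaCli.
# Show the current cache/read-ahead policy for all logical drives:
MegaCli64 -LDGetProp -Cache -LAll -aAll

# Disable read-ahead on all logical drives:
MegaCli64 -LDSetProp NORA -LAll -aAll

# Restore adaptive read-ahead afterwards:
MegaCli64 -LDSetProp ADRA -LAll -aAll
```

Other vendors expose the same knob under different tooling (e.g. StorCLI or hpssacli), so the exact commands will vary.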
The second suggestion was to use modified XFS options (mkfs.xfs -f -d sunit=128,swidth=$((512*8)),agcount=32 /dev/sdb2).
It’s hard to tell, but it seems these actually degraded performance.
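For context, `sunit` and `swidth` are given to mkfs.xfs in 512-byte sectors, and `swidth` should be `sunit` times the number of data disks. A small sketch of that arithmetic (the stripe geometry here is an assumption for illustration, not the array’s confirmed configuration) shows that `sunit=128` with `swidth=4096` only line up if there were 32 data disks or a much larger stripe unit, which may be part of why these options hurt.

```python
# Sketch of the mkfs.xfs sunit/swidth arithmetic. The stripe-unit and
# disk-count values below are illustrative assumptions, not the array's
# confirmed geometry.

SECTOR = 512  # mkfs.xfs sunit/swidth are expressed in 512-byte sectors

def xfs_stripe_opts(stripe_unit_bytes: int, data_disks: int) -> tuple[int, int]:
    """Return (sunit, swidth) in 512-byte sectors for mkfs.xfs."""
    sunit = stripe_unit_bytes // SECTOR
    swidth = sunit * data_disks  # a full stripe spans all data disks
    return sunit, swidth

# A 64 KiB stripe unit across 8 data disks gives sunit=128, swidth=1024:
print(xfs_stripe_opts(64 * 1024, 8))
# swidth=4096 with 8 data disks would instead imply a 256 KiB stripe unit:
print(xfs_stripe_opts(256 * 1024, 8))
```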
The last test was to switch to RAID-10. This would reduce overall storage capacity to 72TB, but given our requirements, that really shouldn’t cause any problem for the project. RAID-10 should give a significant boost to write performance.
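As a quick sanity check on the capacity hit, here is the usable-capacity arithmetic. The 24 × 6 TB drive layout is a hypothetical chosen to match the 72 TB figure (the post doesn’t state the actual disk count), and the comparison assumes the previous layout was something like RAID-6.

```python
# Usable-capacity arithmetic for the RAID levels discussed.
# The 24 x 6 TB layout is a hypothetical that matches the 72 TB figure;
# the post does not state the actual disk count or prior RAID level.

def raid10_usable(disks: int, size_tb: float) -> float:
    # RAID-10 mirrors every drive, so half the raw capacity is usable.
    return disks * size_tb / 2

def raid6_usable(disks: int, size_tb: float) -> float:
    # RAID-6 spends two drives' worth of capacity on parity.
    return (disks - 2) * size_tb

print(raid10_usable(24, 6))  # 72.0
print(raid6_usable(24, 6))   # 132
```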
These numbers back up the improvement to write speed, but XFS still lags behind at larger volume sizes.
Since I had to reconfigure the array anyway, I wanted to try the larger volume size (36T) above and then a smaller size (2T) to try to reproduce my earlier results showing XFS performing better at smaller volume sizes.
These were by far the best test results I had seen, doubling the numbers from the original async test.
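The exact benchmark invocation isn’t shown in this excerpt; a plausible fio run for the random read/write test at the 2T size might look like the following, with the directory, mix, and runtime all being assumptions.

```shell
# Hypothetical fio invocation for the random read/write test; the exact
# options used in these posts are not shown, so treat these as assumptions.
fio --name=randrw-test \
    --directory=/mnt/bench \
    --rw=randrw --rwmixread=50 \
    --bs=4k --size=2T \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=300 --time_based \
    --group_reporting
```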
- XFS seems to be very sensitive to partition size
- In all but one case, EXT4 performed better on the random read-write tests
- Know the other caveats of both file systems before picking the one for you