Following my previous post, I got some excellent feedback in the form of comments, tweets, and other chats. In no particular order:
- Commenter Tibi noted that ensuring I’m mounting with noatime, nodiratime and nobarrier should all improve performance.
- Commenter benbradley pointed out a missing flag on some of my sysbench tests which will necessitate re-testing.
- Former co-worker @preston4tw suggests looking at different IO schedulers. For all past tests I used deadline, which seems to be best, but re-testing with noop could be useful.
- Fellow DBA @kormoc encouraged me to try many smaller partitions to limit the number of concurrent fsyncs.
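Tibi's suggested mount options can be applied with a remount. A minimal sketch, assuming a hypothetical device and mount point (the command is only printed here, not executed):

```shell
# Hypothetical device and mount point -- adjust for your system.
DEV=/dev/sdb1
MNT=/mnt/data

# noatime/nodiratime skip access-time updates on reads; nobarrier
# disables write barriers (only safe with a battery-backed write cache).
OPTS="noatime,nodiratime,nobarrier"

# Print the remount command rather than running it.
echo "mount -o remount,$OPTS $DEV $MNT"
```

To make the options persistent, the same option string would go in the fourth field of the filesystem's `/etc/fstab` entry.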
There are plenty of options here that should let me re-run my testing with a slightly more consistent method. The consistent difference so far is in the file system, EXT4 vs XFS, with XFS performing at about half the speed of EXT4.
The constants for the testing:
- Testing tool: sysbench 0.4.12, using test=fileio, file-test-mode=rndrw (random read-write)
- fsync interval: Every 100 requests
- Read:Write Ratio: 1.5:1
- IO Scheduler: deadline
- Thread count: 1 (to mimic the MySQL SQL thread as closely as possible)
- sysbench testing duration was 300 seconds per run
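Put together, those constants map to a sysbench 0.4 invocation roughly like the following. This is a sketch: the commands are only echoed, and the file count and total size are illustrative values I've assumed, not taken from the original runs.

```shell
# IO scheduler was set to deadline separately, via
# /sys/block/<device>/queue/scheduler.

# sysbench 0.4.12 fileio arguments matching the constants above.
# --file-num and --file-total-size are illustrative assumptions.
SYSBENCH_ARGS="--test=fileio \
  --file-test-mode=rndrw \
  --file-fsync-freq=100 \
  --file-rw-ratio=1.5 \
  --num-threads=1 \
  --max-time=300 \
  --max-requests=0 \
  --file-num=128 \
  --file-total-size=16G"

# Print the prepare and run steps rather than executing them.
echo "sysbench $SYSBENCH_ARGS prepare"
echo "sysbench $SYSBENCH_ARGS run"
```

`--max-requests=0` removes the request cap so the 300-second `--max-time` limit governs each run.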
The first test looks at whether the additional mount options (noatime, nodiratime, nobarrier) have any impact.
Recalling that MySQL 5.5 introduced native Linux asynchronous IO, I discovered that sysbench has a way to request the same behavior (`--file-io-mode=async`), so I thought this would make a good follow-up test.
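The async variant only changes one flag relative to the synchronous runs; a sketch (command echoed, other arguments abbreviated):

```shell
# Same fileio test as before, but with native asynchronous IO.
# The trailing "..." stands in for the rest of the earlier arguments.
ASYNC_CMD="sysbench --test=fileio --file-test-mode=rndrw --file-io-mode=async ... run"

echo "$ASYNC_CMD"
```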
This shows an incredible gain on both file systems for all values but the XFS numbers still maintain half the performance of EXT4.
In my last post, I mentioned that I had inadvertently created a smaller partition. This next test was with two 4TB partitions instead of two 59TB partitions, first with synchronous and then with asynchronous IO modes:
For the first time, XFS outperformed EXT4 on synchronous IO, but it still fell short on the asynchronous test, albeit not by as wide a margin as in previous tests. The simple conclusion here is that as volume size grows, EXT4 performs more consistently than XFS.
I found this result to be fascinating and wonder if it holds true for other workloads. For example, if you have a large storage box to hold backups, and are writing multiple simultaneous backup streams to it, it could have a significant impact on performance. Seeing that partition size has such a profound impact on XFS also made me wonder if there was something that simply wasn’t tuned correctly. Strangely, the XFS FAQ effectively tells you to use the defaults.
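Following the XFS FAQ's advice means creating the filesystem with no tuning flags at all; a sketch, with hypothetical device names and the commands only printed:

```shell
# Per the XFS FAQ, mkfs.xfs defaults are the recommended starting point:
# no stripe, inode, or log tuning flags.
XFS_CMD="mkfs.xfs /dev/sdb1"

# The comparison EXT4 filesystem, also at defaults, on a second
# hypothetical partition.
EXT4_CMD="mkfs.ext4 /dev/sdb2"

echo "$XFS_CMD"
echo "$EXT4_CMD"
```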
| File System | Size | Mount Options | Transfer/s | Requests/s | Avg/Request | 95%/Request |
|---|---|---|---|---|---|---|
I didn’t see any clear wins with this change.
With all that tested, it seems like the choice is clear here. Smaller partitions with EXT4 seem like the right way to go for a practical replication test. I will follow up with a (final?) post about the results of a live setup.