
Downsizing to SSDs

System management can be a big deal. At Etsy, we DBAs have been feeling the pain of getting spread too thin. You get a nice glibc vulnerability and have to patch and reboot hundreds of servers. There go your plans for the week.

We decided last year to embark on a 2016 mission to get better performance, easier management, and reduced power utilization by shrinking the server count in the farm that holds our user-generated, sharded data.

Continue reading


Source of Truth or Source of Madness?

This year at Etsy, we spun up a “Database Working Group” that talks about all things data. It’s made up of members from many teams: DBA, core development, development tools and data engineering (Hadoop/Vertica). At our last two meetings, we started talking about how many “sources of information” we have in our environment. I hesitate to call them “sources of truth” because in many cases, we just report information to them rather than act on the data in them. We spent a session whiteboarding all of these sources and drawing the relationships between them. It was a bit overwhelming to actually visualize the madness.

Continue reading

KeyError: '/dev/sda'

At Etsy, we have a nice, clean, streamlined build process. We have a command for setting up RAID, and another for OS installation. OS installation comes with automagic for LDAP, Chef roles, etc.

We came across an odd scenario today when a co-worker was building a box and got the following error:

Traceback (most recent call first):
  File "/usr/lib/anaconda/storage/partitioning.py", line 1066, in allocatePartitions
    disklabel = disklabels[_disk.path]
  File "/usr/lib/anaconda/storage/partitioning.py", line 977, in doPartitioning
    allocatePartitions(storage, disks, partitions, free)
  File "/usr/lib/anaconda/storage/partitioning.py", line 274, in doAutoPartition
    exclusiveDisks=exclusiveDisks)
  File "/usr/lib/anaconda/dispatch.py", line 210, in moveStep
    rc = stepFunc(self.anaconda)
  File "/usr/lib/anaconda/dispatch.py", line 126, in gotoNext
    self.moveStep()
  File "/usr/lib/anaconda/dispatch.py", line 233, in currentStep
    self.gotoNext()
  File "/usr/lib/anaconda/text.py", line 602, in run
    (step, instance) = anaconda.dispatch.currentStep()
  File "/usr/bin/anaconda", line 1131, in <module>
    anaconda.intf.run(anaconda)
KeyError: '/dev/sda'
It suggests a problem with setting up partitions on /dev/sda, where we would put the boot partition. It seemed familiar, but I couldn’t recall the solution, and Google, while usually wonderful, got us to a Red Hat Support article behind a paywall. A few other results suggested the boot order was incorrect and the OS thought the drives were out of order. Since this was a Dell box, I checked the virtual drive order, which in my experience has always matched the boot order:
[Screenshot: virtual drive order in the RAID controller configuration]
After the anaconda failure, I switched to another terminal to get a prompt and checked /proc/partitions. Sure enough, the disks started at sdb, not sda. Then it hit me: there were four people viewing the console in iDRAC, so what if someone else had mounted a virtual disk and that was /dev/sda? Sure enough:
[Screenshot: the mounted iDRAC virtual media showing up as the first disk]
After deleting the virtual media session, rebooting, and starting the OS install again, everything worked fine.
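For next time, a quick sanity check from a shell on the installer would be something like the following (a minimal sketch; the device names are illustrative, and the exact vendor/model strings for iDRAC virtual media can vary):

cat /proc/partitions                        # see which device the kernel enumerated first
cat /sys/class/block/sda/device/vendor      # a real disk vs. virtual media usually shows up here
cat /sys/class/block/sda/device/model       # iDRAC virtual media tends to identify itself as a virtual CD/floppy

If sda turns out to be someone else’s mounted virtual media, deleting that session in iDRAC and rebooting puts the real disks back where anaconda expects them.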
The bonus humor here is that this isn’t the first time we’ve run into this. Hopefully after posting this, Google will index this page and point us to the answer a bit quicker next time.

XFS and EXT4 Testing Redux

In the post concluding my testing, I declared EXT4 the winner over XFS for my scenario. My coworker, @keyurdg, was unwilling to let XFS lose out and made a few observations:

  • XFS wasn’t *really* being formatted optimally for the RAID stripe size (see the sketch after this list)
  • XFS wasn’t being mounted with the inode64 option, which means that all of the inodes are kept in the first 2TB. (Side note: the inode64 option is the default in newer kernels, but not on CentOS 6’s 2.6.32)
  • Single-threaded testing isn’t entirely accurate, because although replication is single threaded, the writes are collected in InnoDB and then flushed to disk by multiple threads governed by innodb_write_io_threads.
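To make the first two points concrete, here is a minimal sketch of the kind of format and mount options being described. The stripe unit and stripe width values below are placeholders rather than the actual geometry of this array, and the device and mount point are hypothetical:

mkfs.xfs -d su=256k,sw=14 /dev/sdb          # align XFS to the RAID stripe: su = per-disk chunk size, sw = number of data-bearing disks
mount -o inode64 /dev/sdb /var/lib/mysql    # inode64 lets inodes be allocated across the whole (large) filesystem

On CentOS 6’s 2.6.32 kernel, inode64 has to be passed explicitly; newer kernels make it the default.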

Armed with this new information, I have (for real this time) the last round of testing.

Continue reading

IO, IO, It’s Off to Testing We Go

In my last post, I learned in disappointing fashion that sometimes you need to start small and work your way up, rather than trying to put together a finished product. This go-round, I’ll talk about my investigation into disk IO.

In an effort to better understand the hardware I have and its capabilities, I started off by just trying to get some basic info about the RAID controller and the disks. This hardware in particular is a Supermicro, with an as-yet-unknown RAID controller and 16 4TB disks arranged in RAID 6. Finding out more disk and controller information was the first step. "hdparm -i" wasn’t able to give me much, nor was "cat /sys/class/block/sdb/device/{model,vendor}". "dmesg" got me a list of hard disks, Hitachi 7200rpm, with a model number I could Google. It also got me enough controller information to point to megaraid, which is LSI, which got me over to this MegaCli cheat sheet. Using "MegaCli -AdpAllInfo -aALL" actually got me a great deal of information. (In other news, I now think that Dell’s OMSA command line utility is a lot less terrible after trying to figure out MegaCli.)
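Collected in one place, that discovery process looked roughly like this (the sdb device name is from my setup, and MegaCli often needs its full path, typically under /opt/MegaRAID/MegaCli/, so adjust for your system):

hdparm -i /dev/sdb                                # not much useful detail from behind the RAID controller
cat /sys/class/block/sdb/device/{model,vendor}    # likewise, mostly generic strings
dmesg | grep -i -e scsi -e raid                   # disk models and a hint of an LSI megaraid controller
MegaCli -AdpAllInfo -aALL                         # the jackpot: adapter, RAID level, and physical disk details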

Continue reading

Even If You Fail, You Can Still Learn

As many learning experiences do, this one also starts out “So I was working on a project at work and…”. In this case, the end goal is to run as many concurrent copies of MySQL on a single server as possible, each maintaining real-time replication of a different data set. To help with this, I set out to do it on a server with 36 7200rpm 4TB SATA disks, giving me roughly 120TB of available space to work with.

This isn’t an abnormal type of machine for us. Sometimes you simply need a ton of disk space. There is a quirk with this particular machine, though, that I’ve been told about: the RAID controller has some issues addressing very large virtual disks, so I should create two 60TB volumes and stitch them together with LVM. Easy enough: pvcreate both volumes, create a volume group and a logical volume out of it, and voila: ~116TB of storage on a single mount point, with xfs as the file system (default options).
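For reference, the stitching itself is only a handful of commands. A minimal sketch, assuming the controller presents the two ~60TB volumes as /dev/sdb and /dev/sdc and using a hypothetical /data mount point:

pvcreate /dev/sdb /dev/sdc                  # mark both 60TB virtual disks as LVM physical volumes
vgcreate vg_data /dev/sdb /dev/sdc          # one volume group spanning both
lvcreate -l 100%FREE -n lv_data vg_data     # a single logical volume using all of the space (~116TB)
mkfs.xfs /dev/vg_data/lv_data               # xfs with default options, as noted above
mount /dev/vg_data/lv_data /data            # one big mount point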

Continue reading