How we migrated our assets to S3 without any downtime

Harvest used to have a local solution for storing user-generated files. We’re talking avatars, estimate attachments, and expense receipts. This solution was based on GlusterFS and required that we maintain our own set of file servers. There was also a fair bit of work involved in keeping everything stable.

For example, keeping GlusterFS assets in sync across multiple locations is problematic: we either have to stay on top of a batch-oriented sync process, or rely on GlusterFS’s native geo-replication.

This in itself is not a huge problem, and the amount of data we have isn’t that big, but it’s data that keeps growing every week.

For example, here is a now totally outdated rundown of the numbers:

[The numbers: table not preserved]

S3 gives us a single, decoupled, reliable store with practically endless capacity. We really like that.

Now, migrating this data without downtime is not that simple. Besides, we like to roll things out slowly, starting small, so we can catch any problems before they hit all of our users.

We mostly used Paperclip to handle our assets, so we decided to build an extension that would let us migrate seamlessly. This extension is called paperclip-multiple, and it provides a new storage called :multiple that stores files both on the filesystem and on S3 (using the fantastic fog gem).
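To give a feel for the idea behind it, here is a conceptual sketch. This is not the gem’s actual code, and the class and method names are made up: every write goes to both backends, while reads keep coming from the old one until you decide to flip them.

    # Conceptual sketch only, not paperclip-multiple's implementation:
    # a store that writes each file to two backends so the new backend
    # fills up in the background while reads stay on the old one.
    class DualStore
      def initialize(filesystem_store, s3_store)
        @filesystem_store = filesystem_store
        @s3_store         = s3_store
      end

      # Every upload lands in both places.
      def write(path, io)
        @filesystem_store.write(path, io)
        @s3_store.write(path, io)
      end

      # Reads keep hitting the filesystem until S3 has been backfilled
      # and verified.
      def read(path)
        @filesystem_store.read(path)
      end
    end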

Thanks to that we could migrate our assets one type at a time, without any downtime. The process worked like this:

  1. Change one type of asset (say, user avatars) to use this multiple storage and deploy it. From this moment on, new avatars will be stored both on GlusterFS and on Amazon S3.
  2. Synchronize the local filesystem with S3. For this we used the s3cmd sync command, which worked fantastically and was really fast when we had to run multiple syncs.
  3. Now all files should be on S3, meaning we can change the storage to just :fog and be done with it! (There is a sketch of these steps right after this list.)
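Roughly, and with made-up bucket and path names, the change for a single attachment type might look like the sketch below. The options shown are Paperclip’s standard filesystem/fog options; the exact options paperclip-multiple expects are documented in its README.

    # Step 1 (sketch): switch the attachment to the dual storage and deploy.
    # Bucket name and paths are illustrative.
    class User < ActiveRecord::Base
      has_attached_file :avatar,
        storage: :multiple,
        path: ":rails_root/public/system/:attachment/:id/:style/:filename",
        fog_credentials: {
          provider:              "AWS",
          aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
          aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
        },
        fog_directory: "harvest-assets-example"
    end

    # Step 2 (sketch): backfill the existing files, re-running the sync as
    # often as needed; repeated runs only upload what changed:
    #
    #   s3cmd sync public/system/ s3://harvest-assets-example/system/
    #
    # Step 3 (sketch): once everything is on S3, change `storage: :multiple`
    # above to `storage: :fog` and drop the filesystem-specific options.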

paperclip-multiple allowed for even more flexibility. With feature flags we could specify which users’ files we wanted to store on S3, while everyone else kept working with the filesystem as before. We could also feature-flag whose avatars we wanted to display from S3. This let us try file uploads on our internal accounts for a while, and later try serving avatars from S3.
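How exactly the gem hooks into feature flags is best checked in its README, but the idea can be sketched like this (the flag names and the flags object are made up):

    # Conceptual sketch, not the gem's API: per-account flags decide
    # whether an upload also goes to S3 and whether URLs point at S3.
    class FlaggedDualStore
      def initialize(filesystem_store, s3_store, flags)
        @filesystem_store = filesystem_store
        @s3_store         = s3_store
        @flags            = flags
      end

      # Uploads always hit the filesystem; S3 only for flagged accounts.
      def write(account, path, io)
        @filesystem_store.write(path, io)
        @s3_store.write(path, io) if @flags.enabled?(:store_on_s3, account)
      end

      # Serving can be flipped per account, independently of storing.
      def url(account, path)
        if @flags.enabled?(:display_from_s3, account)
          @s3_store.url(path)
        else
          @filesystem_store.url(path)
        end
      end
    end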

Of course, the process wasn’t quite as simple as that. We discovered some bugs along the way, some of the code didn’t use Paperclip properly and needed a lot of modifications, and we also built a proxy to serve all this data even faster and cheaper (but that’s a story for another time).

Even more interesting were the exports, which didn’t even use Paperclip and were, quite literally, all over the place. We rebuilt all of them: they are now stored on S3 and automatically expire after 15 days, and the new code is a lot easier to work with and, most importantly, to extend. This also means that what used to be our biggest source of weekly growth isn’t really growth anymore, because the files are deleted after a certain time.
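The 15-day expiration is something S3 itself can take care of through a bucket lifecycle rule. Here is a sketch using fog; the bucket name and prefix are made up, and the exact request name and payload shape should be double-checked against the fog documentation.

    # Sketch: ask S3 to expire objects under exports/ after 15 days.
    # Bucket name and prefix are illustrative.
    require "fog"

    storage = Fog::Storage.new(
      provider:              "AWS",
      aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
      aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
    )

    storage.put_bucket_lifecycle(
      "harvest-assets-example",
      "Rules" => [
        {
          "ID"         => "expire-exports",
          "Prefix"     => "exports/",
          "Enabled"    => true,
          "Expiration" => { "Days" => 15 }
        }
      ]
    )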

You can find the code on GitHub, with more in-depth explanations of how to use it and how it works.