I’ve been asked about this a few times, so I figured I’d post here. This is a brief description of a highly available Rails cluster I’ve built. Some preliminaries:
- There’s no invention here, I believe this setup is very common.
- High availability isn’t the same thing as load balancing. There is nothing here to intelligently share load across the frontend servers, and one backend server is essentially idle all the time.
- This cluster is built with a bunch of open-source software on non-fancy kit. As such it doesn’t have the enormous capacity of clusters built upon commercial shared-storage products, SAN kit, layer 7 web switches etc. Its ambition is to run a few busy Rails sites well whilst coping with hardware failure gracefully.
Layout
Operation
- Web traffic is spread across the managed frontend interfaces by multiple A records in the DNS (zone sketch after this list).
- Wackamole uses a Spread messaging network to ensure these multiple A record IPs are always present across the frontend. It achieves this by managing the hosts’ interfaces when it detects hosts joining or leaving the cluster (wackamole.conf sketch below).
- A pair of MySQL servers run in master:master configuration on the backend hosts (my.cnf sketch below).
- The backend hosts use DRBD to maintain a mirrored block device between them (drbd.conf sketch below).
- These block devices back an NFS filesystem, exported as sketched below.
- Heartbeat runs on the backend hosts to do several tasks, expressed in the haresources sketch below:
- Manage which host is the DRBD primary and therefore can be written to.
- Manage which host has the DRBD filesystem mounted and exported with NFS.
- Manage the IP through which the frontend mounts the filesystem and talks to MySQL.
- With all this in place, Nginx accepts web connections, serves static assets off the NFS mount and passes other requests to Mongrel, an HTTP server that’s well suited to running a Rails instance.
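To make the steps above more concrete, here are rough sketches of the configuration involved. Hostnames, addresses, device names and paths are all invented, and directive details vary between software versions, so treat these as illustrations rather than drop-in configs. The DNS part is nothing more exotic than one A record per managed frontend IP:

```
; zone fragment for example.com
www    IN  A   192.0.2.10
www    IN  A   192.0.2.11
```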
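The Wackamole side is driven by wackamole.conf. This is from memory of the sample config that ships with Wackamole, so check the shipped example for exact directive names; the important part is the list of virtual interfaces it keeps alive somewhere in the cluster:

```
Spread = 4803
Group = wack
Control = /var/run/wack.it
Prefer None

# the service IPs that must always be up on some frontend host
VirtualInterfaces {
    { eth0:192.0.2.10/32 }
    { eth0:192.0.2.11/32 }
}

Arp-Cache = 90s
mature = 5s
```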
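For MySQL master:master, each backend host logs its own writes and replicates from its peer. A minimal sketch of the my.cnf side:

```
# my.cnf on db1 ([mysqld] section); db2 is identical with server-id = 2
server-id = 1
log-bin   = mysql-bin
```

Each host is then pointed at the other with the usual CHANGE MASTER TO / START SLAVE statements.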
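The DRBD mirror is a single resource defined identically on both backend hosts; a sketch of the resource section:

```
# /etc/drbd.conf (abridged)
resource r0 {
    protocol C;                 # synchronous: a write completes on both hosts
    on store1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on store2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```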
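On top of that sits the NFS export on whichever backend host is currently primary, and the frontends mount it via the Heartbeat-managed IP; something like:

```
# /etc/exports on the backend hosts (only the active one actually serves it)
/data  10.0.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab entry on each frontend, using the floating IP
10.0.1.100:/data  /srv/shared  nfs  rw,hard,intr  0 0
```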
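Heartbeat (v1-style configuration) expresses all three of those tasks as a single resource group in haresources; roughly:

```
# /etc/ha.d/haresources, identical on both backend hosts.
# MySQL runs on both hosts all the time; Heartbeat only moves the DRBD
# primary role, the mounted filesystem, the NFS export and the service IP.
store1  drbddisk::r0 \
        Filesystem::/dev/drbd0::/data::ext3 \
        nfs-kernel-server \
        IPaddr::10.0.1.100/24/eth1
```

Resources start left to right on the node that holds the group and stop in the reverse order, so the floating IP only comes up once the filesystem is mounted and exported.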
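Finally the Nginx virtual host, trying the filesystem first and falling back to a small pack of Mongrels:

```
# nginx virtual host (abridged)
upstream mongrels {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    listen 80;
    server_name www.example.com;

    # static assets are served straight off the NFS mount
    root /srv/shared/current/public;

    location / {
        # serve the file if it exists, otherwise hand the request to Rails
        try_files $uri @rails;
    }

    location @rails {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://mongrels;
    }
}
```

Older Nginx versions without try_files used the if (-f $request_filename) idiom to the same effect.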
Notes
- One of the main hazards of MySQL master:master setups is primary key collision if an INSERT occurs on both hosts at once. We avoid that here by letting Heartbeat manage the IP that the frontends connect to, so only one master takes writes at a time.
- I’ve built two of these clusters to date. The second one is now four servers wide on the frontend.
Future work
- DRBD can now run in dual-primary mode, allowing both hosts to accept writes. This makes it a candidate for filesystems like GFS that use shared storage to present a filesystem that can be written to on multiple hosts (config fragment after this list). More here.
- To add some load balancing I’m considering using HAProxy or LVS to actively distribute traffic across the frontends (rough haproxy.cfg sketch after this list).
- HA aside, there are also some cool things like evented Mongrel that it would be interesting to try.
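For the dual-primary idea, DRBD 8 turns this on per resource; a fragment along these lines, only useful under a cluster filesystem such as GFS or OCFS2 rather than ext3 over NFS:

```
# drbd.conf fragment for dual-primary operation
resource r0 {
    net {
        allow-two-primaries;    # both nodes may hold the primary role
    }
    startup {
        become-primary-on both;
    }
    # per-host "on" sections as in the single-primary setup
}
```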
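And a rough idea of what the HAProxy option would look like, health-checking each frontend and spreading requests across them:

```
# haproxy.cfg (abridged)
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind 192.0.2.10:80
    default_backend rails_frontends

backend rails_frontends
    balance roundrobin
    option httpchk GET /
    server web1 10.0.1.11:80 check
    server web2 10.0.1.12:80 check
```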
One Response to “High Availability Rails Cluster”
November 22nd, 2008 at 2:39 pm
What you can do to prevent MySQL ID collisions is configure the nodes to increment IDs by two and have the two nodes start on different counts. At RailsCluster we too let Heartbeat manage the IP but do the ID trick just in case a network partition occurs.
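For reference, the settings that implement that trick are auto_increment_increment and auto_increment_offset, e.g. for a two-node pair:

```
# [mysqld] section on db1
auto_increment_increment = 2
auto_increment_offset    = 1

# [mysqld] section on db2
auto_increment_increment = 2
auto_increment_offset    = 2
```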