Remote Replication Introduction
When we first announced the SS7000 series, we made available a simulator (a virtual machine image) so people could easily try out the new software. At a keynote session that evening, Bryan and Mike challenged audience members to be the first to set up remote replication between two simulators. They didn’t realize how quickly someone would take them up on that. Having worked on this feature, I found it very satisfying to see it all come together as a new user set up replication for the first time with such ease.
The product has come a long way in the short time since then. This week sees the release of 2010.Q1, the fourth major software update in just over a year. Each update has come packed with major features, from online data migration to on-disk data deduplication. 2010.Q1 includes several significant enhancements (and bug fixes) to the remote replication feature. And while it was great to see one of our first users find it so easy to replicate an NFS share to another appliance, remote replication remains one of the most complex features of our management software.
The problem sounds simple enough: just copy this data to that system. But people use remote replication to solve many different business problems, and supporting each of these requires related features that together add significant complexity to the system. Examples include:
- Backup. Disk-to-disk backup is the most obvious use of data replication. Customers need the ability to recover data in the event of data loss on the primary system, whether a result of system failure or administrative error.
- Disaster recovery (DR). This sounds like backup, but it’s more than that: customers running business-critical services backed by a storage system need the ability to recover service quickly in the event of an outage of their primary system (be it a result of system failure or a datacenter-wide power outage or an actual disaster). Replication can be used to copy data to a secondary system off-site that can be configured to quickly take over service from the primary site in the event of an extended outage there. Of course, you also need a way to switch back to the primary site without copying all the data back.
- Data distribution. Businesses spread across the globe often want a central store of documents that clients around the world can access quickly. They use replication to copy documents and other data from a central data center to many remote appliances, providing fast local caches for employees working far from the main office.
- Workload distribution. Many customers replicate data to a second appliance to run analysis or batch jobs that are too expensive to run on the primary system without impacting the production workload.
These use cases inform the design requirements for any replication system:
- Data replication must be configurable on some sort of schedule. We don’t just want one copy of the data on another system; we want an up-to-date copy. For example, data changes every day, and we want nightly backups. Or we have DR agreements that require restoring service using data no more than 10 minutes out-of-date. Deployments that want to maximize the freshness of replicated data may want to replicate continuously (as frequently as possible). Very critical systems may even want to replicate synchronously (so that the primary system does not acknowledge client writes until they’re on stable storage on the DR site), though this has significant performance implications.
- Data should only be replicated once. This one’s obvious, but important. When we update the copy, we don’t want to have to send an entire new copy of the data; that wastes potentially expensive network bandwidth and disk space. We only want to send the changes made since the previous update, as the first sketch after this list illustrates. This is also important when restoring primary service after a disaster-recovery event: in that case, we only want to copy the changes made while running on the secondary system back to the primary system.
- Copies should be point-in-time consistent. Source data may always be changing, but with asynchronous replication, copies will usually be updated at discrete intervals. At the very least, the copy should represent a snapshot of the data at a single point in time. (By contrast, a simple “cp” or “rsync” copy of an actively changing filesystem copies each file at a slightly different time, potentially introducing inconsistencies in the copy that did not, and could not, exist in the source data.) This is particularly important for databases and other applications with complex persistent state. Traditional filesystem guarantees about state after a crash make it possible to write applications that can recover from any point-in-time snapshot, but it’s much harder to write software that can recover from the arbitrary inconsistencies introduced by sloppy copying.
- Replication performance is important, but so are performance observability and control (e.g., throttling). Backup and DR operations can’t be allowed to significantly impact the performance of primary clients of the system, so administrators need to be able to see the impact of replication on system performance and to limit the system resources used for replication if that impact becomes too large (the token-bucket sketch after this list shows one common way to implement such a limit).
- Complete backup solutions should replicate data-related system configuration (like filesystem block size, quotas, or protocol sharing properties), since this needs to be recovered with a full restore, too. But some properties should be changeable in each copy: backup copies may use higher compression levels, for example, because performance matters less than disk space on the backup system, and DR sites may have different per-host NFS sharing restrictions because they’re in different data centers than the source system (see the property-override sketch after this list).
- Management must be clear and simple. When you need to use your backup copy, whether to restore the original system or to bring up a disaster-recovery site, you want the process to be as simple as possible. Delays cost money, and missteps can lead to loss of the only good copy of your data.
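To make the incremental-update and point-in-time requirements concrete, here is a minimal Python sketch of snapshot-based replication. It is purely illustrative: the `Dataset` class, its block-map representation, and `send_incremental` are names invented for this example, and the appliance itself works at the level of ZFS snapshots and replication streams, not Python dictionaries.

```python
class Dataset:
    """Toy dataset: a mutable mapping of block IDs to block contents."""

    def __init__(self):
        self.blocks = {}
        self.snapshots = {}  # snapshot name -> frozen copy of the blocks

    def snapshot(self, name):
        # Freeze the dataset at a single instant: every block in the
        # snapshot is from the same point in time, unlike a file-by-file
        # copy of a live filesystem.
        self.snapshots[name] = dict(self.blocks)

    def diff(self, old, new):
        # The delta between two snapshots: blocks added or modified
        # since 'old', plus blocks deleted since 'old'.
        before, after = self.snapshots[old], self.snapshots[new]
        changed = {k: v for k, v in after.items() if before.get(k) != v}
        deleted = [k for k in before if k not in after]
        return changed, deleted


def send_incremental(source, replica, old_snap, new_snap):
    """Apply only the changes between two snapshots to the replica."""
    changed, deleted = source.diff(old_snap, new_snap)
    replica.update(changed)
    for key in deleted:
        replica.pop(key, None)


# One full copy seeds the replica; each scheduled update afterward
# (nightly, every ten minutes, or continuously) ships only the delta.
ds, replica = Dataset(), {}
ds.blocks.update(a="v1", b="v1")
ds.snapshot("snap0")
replica.update(ds.snapshots["snap0"])      # initial full send
ds.blocks["a"] = "v2"                      # source keeps changing...
del ds.blocks["b"]
ds.snapshot("snap1")
send_incremental(ds, replica, "snap0", "snap1")
assert replica == ds.snapshots["snap1"]    # ...yet the copy is consistent
```

Note that the replica always matches some snapshot of the source exactly; it never reflects a half-applied mix of old and new state, which is precisely the consistency property a raw `rsync` of a live filesystem cannot provide.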
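For the throttling requirement, a token bucket is one common mechanism: it caps sustained throughput while still allowing short bursts. The sketch below is again an illustration under assumed names (`TokenBucket`, `throttle`), not the appliance’s actual interface.

```python
import time


class TokenBucket:
    """Cap sustained throughput at 'rate' bytes/sec, with bursts of up
    to 'capacity' bytes (hypothetical names, for illustration only)."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def throttle(self, nbytes):
        # Refill tokens for the time elapsed, then block until this
        # chunk's cost is covered. Assumes nbytes <= capacity.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)


# Pace a replication stream at no more than 10 MB/s, with 1 MB bursts.
bucket = TokenBucket(rate=10 * 1024 ** 2, capacity=1024 ** 2)
for _ in range(8):
    bucket.throttle(64 * 1024)  # pay for each 64 KB chunk before sending
```

The same accounting that drives the throttle (bytes paid for per second) is exactly what an administrator would want surfaced for observability.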
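Finally, the configuration requirement amounts to carrying properties along with the data while letting each copy override the ones that should differ locally. The property names below are ZFS-style examples matching the ones mentioned above; the values and the simple merge semantics are assumptions made for illustration.

```python
# Properties that travel with the replicated data from the source...
source_props = {
    "recordsize": "128K",                  # filesystem block size
    "quota": "500G",
    "compression": "lzjb",
    "sharenfs": "rw=@10.1.0.0/16",         # per-host NFS restrictions
}

# ...and per-copy overrides for what should differ on each target.
backup_overrides = {"compression": "gzip-9"}    # space over performance
dr_overrides = {"sharenfs": "rw=@10.2.0.0/16"}  # DR site, other network


def effective_props(replicated, overrides):
    """Replicated properties apply unless explicitly overridden locally."""
    props = dict(replicated)
    props.update(overrides)
    return props


print(effective_props(source_props, backup_overrides))
print(effective_props(source_props, dr_overrides))
```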
That’s an overview of the design goals of our remote replication feature today. Some of these elements have been part of the product since the initial release, while others are new to 2010.Q1. The product will evolve as we see how people use the appliance to solve other business needs. Expect more details in the coming weeks and months.