In Search Of Fireproof/Waterproof Backup

Every year or two, I like to audit how I back up and store my data. I’ve got a pretty good routine of backing up the hard drives on my primary and secondary computers; it’s part of my weekly routine. I also back up some files remotely in case the whole site is compromised (fire, flood, theft). I’d like to continue down that path and move my primary backups to something more secure against those site-level disasters. Remote backups need either physical transportation or adequate bandwidth, both of which are limiting and not exactly cost efficient. I’d like to bypass that.

I’m aware of, but not really fond of, ioSafe’s line of fireproof/waterproof hard drives, because it’s a big investment in a single drive. That doesn’t seem very practical in the long run as data storage needs change and drives get bigger and faster. I also don’t need that level of simplicity. I just want someplace safe to store backups.

What I’m really looking for is a lockbox style safe that meets the following requirements:

  • Just large enough to hold 1-2 3.5″ hard drive enclosures.
  • Fireproof and Waterproof
  • UL 125 rated for 1 hr or more.
  • Solid locking mechanism and hinges that can handle many cycles. A combination lock is preferred, since keys either get lost or get bent from being left in the lock.

There doesn’t seem to be anything on the market that meets these seemingly simple requirements. Almost everything in this size range (and there isn’t much) is UL 125 rated for 30 minutes at best. Reviews for everything in this class are very mixed regarding the quality of the hinges and locking mechanisms. Truthfully, I’d rather have no lock and reliable opening/closing than a failed lock; unless all your computers are physically secured in a safe too, it’s false security anyway. USB pass-through isn’t ideal either, since who wants to keep something like this that close to their desk rather than in a closet or someplace more convenient?

Oh yeah, I’d also like to keep this somewhat economical. Truthfully, a safe/lockbox of this size generally is, though those don’t meet the other two requirements. I’d be curious if anyone has found something that meets all of mine. I can’t be the first to go down this path. Maybe I’m just the first who wants to do it right and doesn’t want a 300 lb walk-in safe.

rsnapshot For Mac OS X

Lately I’ve been using rsync to keep two hard drives in sync. I’ve been thinking of switching to rsnapshot, since it would give me incremental backups, which are much better. What I’ve yet to figure out is whether it can handle resource forks (via Apple’s special flag in rsync) and the rest of HFS+’s metadata. Google hasn’t returned much on the combination, so apparently there’s very little experience out there. As a result I guess I’m sticking with the simpler rsync until I see otherwise.
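For reference, wiring Apple’s flag into rsnapshot would presumably come down to its rsync_long_args setting. Here’s a sketch of what I’d try — the paths and retention counts are hypothetical, and whether -E actually survives rsnapshot’s hard-link rotation is exactly the open question:

```
# Hypothetical rsnapshot.conf fragment; fields must be TAB-separated.
snapshot_root	/Volumes/Backup/snapshots/
cmd_rsync	/usr/bin/rsync
# -E is Apple's 10.4 rsync flag for copying resource forks
rsync_long_args	--delete --numeric-ids -E
interval	daily	7
interval	weekly	4
backup	/Users/	localhost/
```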

Resource Forks Suck

Dear Apple,

Please kill off resource forks. They add an unnecessary complexity to data archiving and management that’s unneeded by today’s standards. Since Mac OS X, it seems only a few places exist where resource forks are actually used. For example, the older pre-Mac OS X “font suitcases” used a resource fork, while the modern “Data Fork Suitcase Format”, as its name implies, does not1.

One could argue keeping resource forks is good for legacy purposes. But since Mac OS X 10.5 can no longer run Classic even on PPC systems, is there really a need?

If that’s really not possible, could you please make rsync suck a little less?

Ideally, since rsync 3.0 looks like it will be a lot better, make it a high-profile download for Mac OS X 10.4 and 10.5, similar to what was done to push Safari 3.0. That would be a nice stopgap solution.

I hope you’ll fix this since it’s a real pain in the butt for people like me.

Thanks,
Robert

1. “Mac OS X: Font file formats” (Apple Knowledge Base article 25251)

New Home Server

Over the past few weeks, I’ve been in the process of setting up a new home server. The previous one was an old Beige G3 (266 MHz) running Mac OS X 10.2 that was starting to show its age. The new system is a much more capable B&W G3 (400 MHz) running Mac OS X 10.4. Despite only a slight increase in clock speed, the B&W G3 has much more modern hardware (USB, FireWire), not to mention more room for storage. The opportunities are endless.

I decided to go with a multi-drive setup considering the extra bays. The system had a still-usable 40 GB Seagate Barracuda IV drive, which makes a perfect system disk for the OS and software. Installed via an ACard ATA/66 controller it’s no speed demon, but for this purpose it’s fine. For the data drives I picked up a SIIG SATA card and a pair of Seagate SATA drives I found a good deal on at Best Buy. The drives were labeled Seagate ST303204N1A1AS, which corresponds to 320 GB. Inside the boxes, as expected, were (the newer and better) ST3320620AS drives: Seagate Barracuda 7200.10s with firmware 3.AAE (not the AAK people have had trouble with in the past). Perfect.

Next I wanted to replicate data across the drives on a cron schedule. Initially I was thinking rsync, since as of 10.4 it’s supposedly resource-fork aware; it turns out that’s not really true. I ended up going back to SuperDuper to copy between the drives. It only copies changed files, and once a week it deletes removed files (so if you accidentally delete something, there’s still a chance to recover it, unless you do it at the wrong time). Not a bad solution IMHO, though I’d still prefer rsync. The initial backup took less than half an hour, and just a few minutes should be enough to keep the disks in sync after that. I briefly considered setting up RAID, but decided against it since RAID is not backup: it doesn’t protect against things like corruption.

Apple needs to kill off resource forks ASAP. They should have done so when moving to Mac OS X several years ago.

Next up, I tried installing a copy of TechTool Pro that I no longer use on my Mac Mini (since upgrading that system to Leopard), but that resulted in some drive problems I couldn’t resolve without uninstalling it. They seem to know about the problem, but haven’t fixed it. You see the following error repeatedly in the system.log file until you reboot:

kernel[0]: IOATAController device blocking bus.

Drag.

I also updated mrtg, and this time compiled GD, libpng, libjpeg, etc. by hand rather than using Fink. Last time I went with Fink, which saved me a few keystrokes, but when Fink stopped updating packages for 10.2 it left me high and dry. This time I think I’ll avoid it when possible. I need to try getting RRDtool set up at some point, since it’s so much better.
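By hand just means the usual configure/make dance, once per library, with everything landing under one prefix so it stays out of Fink’s and Apple’s way. Roughly (version numbers are placeholders for whatever’s current):

```shell
# libpng first, since GD depends on it
tar xzf libpng-1.2.x.tar.gz && cd libpng-1.2.x
./configure --prefix=/usr/local && make && sudo make install
cd ..

# then GD, pointed at the libpng we just built
tar xzf gd-2.0.x.tar.gz && cd gd-2.0.x
./configure --prefix=/usr/local --with-png=/usr/local && make && sudo make install
```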

I use a few PHP scripts for easy admin of the box, and decided PHP 4 wasn’t adequate since it’s pretty much discontinued. So I upgraded to PHP 5.2, and all seems good so far. I think Apache 1.3.33 will serve me just fine for the moment, so I’m not upgrading that.

I might give setting up BIND a try, since local DNS would be pretty handy for accessing the server without modifying the hosts file on every computer.
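If I go that route, the core of it is just a tiny zone for the LAN. A sketch, with home.lan and the 192.168.1.x addresses as entirely made-up examples:

```
; hypothetical zone file for an internal "home.lan" domain
$TTL 86400
@	IN	SOA	ns.home.lan. admin.home.lan. (
		2007010101	; serial
		3600		; refresh
		900		; retry
		604800		; expire
		86400 )		; minimum TTL
@	IN	NS	ns.home.lan.
ns	IN	A	192.168.1.2
server	IN	A	192.168.1.2
```

Then each computer just points its DNS at the server and `server.home.lan` resolves everywhere, no hosts-file edits needed.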

I also disabled things like Spotlight, which has absolutely no purpose on this box.
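For anyone curious, turning off Spotlight indexing per volume on 10.4 is a one-liner (the volume path is an example):

```shell
# disable Spotlight indexing on the data volume (10.4's mdutil)
sudo mdutil -i off /Volumes/Data
```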

On another note, glib for some reason won’t compile for me. No clue what’s going on. Overall it’s looking pretty good and should be about ready for real use. I just want to make sure the backups work as expected.

Improving Storage And Backups

I work on multiple computers (Mac/PC) and have various assets online including this blog and quite a bit of code lying around in svn, and just on the file system. My backup solutions so far have been pretty ad hoc but rather effective. Everything important is replicated somewhere else at varying frequencies. The downside is that it’s not very efficient and even partially manual. I’ve decided over the next several weeks I’m going to re-evaluate how I do all my data storage and backups. Here’s the list of goals:

  • Improve how data is organized and stored, both in primary storage and in backups: organizing and cleanup.
  • Make sure all data has at least 1 backup (I pretty much do this already and have for a long time).
  • Automate as much as possible.
  • Keep costs low. Backup more for less.
  • Use tertiary offsite backups for most critical data.
  • Maintain solid encryption practices where necessary for transmission and storage (already do this).
  • Decrease time to restore from backups.
  • Backup more often, so time between backups is minimal for frequently updated data.
  • Give myself room to grow.

At $0.15/GB, Amazon’s S3 is very affordable for my needs. A dollar or so a month gets you a fair amount of storage, considering most data doesn’t get touched that often (it’s data transfer that gets a little more costly). I’ve been using Amazon with a few backup scripts for a few months to see how it works and how I can best use it. I’m planning to ramp that up a little more. I also want to do more with incremental backups (perhaps use rsync more) to save time and disk.
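The back-of-the-envelope math, using 6 GB as a made-up figure for how much I’d actually push offsite:

```shell
# S3 storage cost: GB stored x $0.15 per GB-month (transfer billed separately)
awk 'BEGIN { printf "$%.2f/month\n", 6 * 0.15 }'
# prints: $0.90/month
```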

Ironically, I kick off this little project just as reports indicate hard drive prices have been dropping (obvious, right?). I’m not sure if it would make sense to purchase additional storage, or if I can get by with just better utilizing what I already have.

I’m doing this for a few reasons. Considering the cost of storage, there’s no excuse not to have solid backups, or to waste your time recovering from data loss. I also want to improve my use of offsite backups for the more important things, to make sure I keep costs low and keep backups fresh. Accidents, fire, flood, and theft are always possibilities no matter how careful you are in life. The great thing about digital versus paper is that it’s easier to keep several copies.

I believe my practices are pretty good, and likely better than the vast majority of the population, but I think I can still do better. I think I can make better use of what I have and maybe for a slight cost add another layer of protection if necessary. I’ll post again with my findings.

Backup fun

Back in July I set out to get a good backup solution for my laptop (and my Mac). I then found a 300 GB hard drive for $99 (after rebates) [btw: the $99 offer is back again as of this posting], and a $60 case for that drive. My new laptop came with some IBM-branded backup software, which royally stunk: slow, no incremental updates, just not up to par. The only big advantage was that it’s integrated with IBM’s restoration software, which has an emergency partition, making recovery super easy.

Last week I ordered Acronis True Image (8.0, because I heard 9.0 is still a little rough and doesn’t have anything I really need). So far this product is a real gem: easy to install and use. I can do a complete hard drive backup in 30 minutes, meaning incremental updates (yes, it supports them) will be much quicker than that. It lets you put backups in a secure partition on any drive you want. It has compression to save space, and you can mount an image should you need to retrieve a file or two from a backup; it appears just like another hard drive. One of my favorite features is that I can still work while it’s backing up my hard drive. This will also be great if I get to upgrade my hard drive to an 80 or 100 GB Hitachi 7K100, as the migration functionality lets you just mirror your data over. No reinstalling.

So overall my backup solution is rather cheap (relatively speaking), but works extremely well. I don’t think there’s an excuse these days to not back up. Disk is pretty cheap, and so is decent backup software.

While I’m on the topic of hard drives, I’m considering Diskeeper after hearing so many good things about it. Looks like the Home edition would be enough for my needs. Perhaps I should check out the demo; I tried it once, but that was quite a while ago.