SSD Price Wars

There’s now talk of an SSD price war brewing. This is great, as prices for SSDs are still pretty high.

Unfortunately, they still have a long way to drop. In recent years especially, people have accumulated tons of video and photos, more than they can upload due to asymmetrical broadband. Most people only have one computer, meaning a 128 GB drive isn’t going to get them very far. That’s especially true for people who are big movie downloaders (legal, or licensed via iTunes). Most people don’t have several computers and USB drives hanging about for bulk storage. What’s on their laptop is what they have.

Even when I went with an SSD in my desktop, I put in a RAID 0 HDD array as well. Just a game or two can occupy half that SSD. It’s more cost-effective to have this gigantic, complicated setup than to buy a larger SSD, and truthfully most of what’s on that HDD array is fast enough at RAID 0 speeds. So yes, I split things up, but it performs well and is substantially cheaper. I got the best of both worlds. I can’t wait until this isn’t necessary.

New Home Server

Over the past few weeks, I’ve been setting up a new home server. The previous one was an old Beige G3 (266 MHz) running Mac OS X 10.2 that was starting to show its age. The new system is a much more capable B&W G3 (400 MHz) running Mac OS X 10.4. Despite only a slight increase in clock speed, the B&W G3 has much more modern hardware (USB, FireWire), not to mention more room for storage. The opportunities are endless.

I decided to go with a multi-drive setup, given the extra bays. The system had a still-usable 40 GB Seagate Barracuda IV, which makes a perfect system disk for the OS and software. Installed via an ACard ATA/66 controller, it’s no speed demon, but for that purpose it’s fine. For the data drives I decided to get a SIIG SATA card and a pair of Seagate SATA drives I found a good deal on at Best Buy. The drives were labeled Seagate ST303204N1A1AS, which corresponds to 320 GB. Inside the boxes, as expected, were the newer and better ST3320620AS, a Seagate Barracuda 7200.10 with firmware 3.AAE (not the AAK people have received in the past). Perfect.

Next, I wanted to replicate data across the drives on a cron schedule. Initially I was thinking rsync, since as of 10.4 it’s supposedly resource-fork aware. It turns out that’s not really true. I ended up going back to SuperDuper to copy between the drives. It only copies changed files, and once a week it deletes removed files (so if you accidentally delete something, there’s still a chance to recover it, unless you do it at the wrong time). Not a bad solution, IMHO, though I’d still prefer rsync. The initial backup took less than half an hour; after that, just a few minutes should be enough to keep the disks in sync. I briefly considered setting up RAID, but decided against it, since RAID is not backup: it doesn’t protect against things like corruption.

Apple needs to kill off resource forks ASAP. They should have done so when moving to Mac OS X several years ago.

Next up, I tried installing a copy of TechTool Pro that I no longer use on my Mac mini (since upgrading that system to Leopard), but that resulted in some drive problems I couldn’t resolve without uninstalling it. They seem to know about the problem, but haven’t fixed it. You see the following error repeatedly in system.log until you reboot:

kernel[0]: IOATAController device blocking bus.

Drag.

I also updated MRTG, and this time compiled GD, libpng, libjpeg, etc. by hand, rather than using Fink. Last time I went with Fink, which saved me a few keystrokes, but when Fink stopped updating packages for 10.2, it left me high and dry. This time I’ll avoid it when possible. I need to try getting RRDtool set up at some point, since it’s so much better.
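The by-hand builds all follow the same configure/make pattern, in dependency order. A rough sketch, where the install prefix is my own choice and the version numbers are placeholders:

```shell
# Build each library from source into a common prefix, in dependency
# order: libjpeg, then libpng, then GD. PREFIX is a hypothetical
# location; anything under $HOME avoids needing sudo.
PREFIX="$HOME/local"
mkdir -p "$PREFIX"

build() {
  # $1 = unpacked source directory: configure, compile, install
  (cd "$1" && ./configure --prefix="$PREFIX" && make && make install)
}

# Invocations left commented since they need the tarballs unpacked
# first (directory names are placeholders):
# build jpeg-6b
# build libpng-1.2.x
# build gd-2.0.x   # GD's configure may need to be pointed at $PREFIX
```

The payoff over Fink is that nothing here stops working when a package repository drops your OS version.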

I use a few PHP scripts for easy administration of the box, and decided PHP 4 wasn’t adequate since it’s pretty much discontinued. So I upgraded to PHP 5.2, and all seems good so far. I think Apache 1.3.33 will serve me just fine for the moment, so I’m not upgrading that.

I might give setting up BIND a try, since local DNS would be pretty handy for accessing the server without modifying the hosts file on every computer.
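If I do go the BIND route, the core of it is small: a zone declaration in named.conf plus a zone file. A sketch, where the zone name (home.lan), hostnames, and addresses are all made up:

```
// named.conf fragment: declare an authoritative zone for the LAN
zone "home.lan" {
    type master;
    file "db.home.lan";
};
```

```
; db.home.lan -- hypothetical zone file for the LAN
$TTL 86400
@       IN  SOA  server.home.lan. admin.home.lan. (
                 2007010101 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 86400 )    ; negative-answer TTL
@       IN  NS   server.home.lan.
server  IN  A    192.168.1.2
```

Point the clients’ DNS at the server and every machine resolves server.home.lan with no hosts-file edits.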

I also disabled things like Spotlight, which have absolutely no purpose on this box.

On another note, glib for some reason won’t compile for me. No clue what’s going on. Overall it’s looking pretty good. Should be about ready for real use. Just want to make sure the backups work as expected.

Drobo for network storage?

Drobo initially didn’t impress me too much, but after watching a demo I’m somewhat impressed. The positives:

  • The hot-swapping, RAID-like (but not RAID) redundancy is awesome. That’s perfect for backup/bulk-storage purposes.
  • Transfer speed isn’t bad (up to about 22 MB/s read and 20 MB/s write).
  • Power consumption idles at about 12 watts, which isn’t bad.
  • Adding storage capacity is really easy.

There are some downsides:

  • No Linux support, which stinks if you were hoping to hook it up to an old PC running Linux and share it via Samba. You could, of course, use a Mac.
  • Pretty expensive. $499 isn’t cheap for a glorified drive enclosure, and you still need a host and drives.

Of course, for true backup you need to keep your data offsite, but you can do that through standard means, such as Amazon’s S3. So you’re covered there.

The downfall of this product is the lack of a 10/100 Ethernet port. It would likely have been pretty cheap to add (let’s face it, network hardware is pretty cheap these days) and would have removed the need for a PC. You could, of course, hook it up to an access point such as the AirPort Extreme, but you don’t get the greatest level of control with those.

Ideally, a real cheapo Linux machine (Intel Celeron, 1 GB RAM, 80 GB HD) with a Drobo would be an awesome backup solution. You could then use MRTG to graph network and storage usage, manage usage, quotas, or whatever else you wanted to do. It could even be a media server. Back up some data to S3? No problem. You could even set up something like BackupPC to back up entire PCs.
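For what it’s worth, sharing a Drobo from a Linux box like that over Samba is only a few lines of smb.conf. A sketch, where the mount point, share name, and user are all hypothetical:

```
# /etc/samba/smb.conf fragment -- export a Drobo mounted at /mnt/drobo
[drobo]
    comment = Drobo bulk storage
    path = /mnt/drobo
    browseable = yes
    read only = no
    valid users = backup
```

That gets you the network port Drobo left out, plus all the quota and monitoring control you’d want.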