Categories
Blog General

DNS Strangeness

There’s been some DNS funny business going on with this blog over the past several days. I’m still trying to figure out exactly where the problem is. DNS has always been one of my least favorite things to deal with.

Categories
Blog

Site Outage

This server will be moving to a new data center tonight (Tuesday, sometime between 1 and 7 AM EST). If your feed reader reports that I’m down… that’s why.

Edit: All done. Successfully moved to its new home.

Categories
Blog Internet Networking

Slow Site

Last Friday (May 2), the data center where this site lives suffered a power fluctuation due to some tornado activity in the area. Based on various monitors, the actual outage (if there even was one) seems to have been in the five-minute ballpark. Somehow this resulted in a routing problem, causing some lag and packet loss for some users (myself included) — possibly a router whose configuration didn’t persist as well as one would hope. This is being investigated.

As a result, if this site (and its feed) seems slower than normal, that’s the reason.

Categories
Apple Hardware

New Home Server

Over the past few weeks, I’ve been in the process of setting up a new home server. The previous one was an old Beige G3 (266MHz) running Mac OS X 10.2 that was starting to show its age. The new system is a much more capable B&W G3 (400MHz) running Mac OS X 10.4. Despite only a slight increase in clock speed, the B&W G3 has much more modern hardware (USB, FireWire), not to mention more room for storage. The opportunities are endless.

I decided to go with a multi-drive setup considering the extra bays. The system had a still-usable 40GB Seagate Barracuda IV drive, which makes a perfect system disk for the OS and software. Installed via an ACard ATA/66 controller it’s no speed demon, but for that purpose it’s fine. For the data drives I got a SIIG SATA card and a pair of Seagate SATA drives I found a good deal on at Best Buy. The boxes were labeled Seagate ST303204N1A1AS, which corresponds to 320GB. Inside, as expected, were (the newer and better) ST3320620AS drives — Seagate Barracuda 7200.10s with firmware 3.AAE (not the AAK that people have gotten in the past). Perfect.

Next, I wanted to replicate data across the drives on a cron job. Initially I was thinking rsync, since as of 10.4 it’s supposedly resource-fork aware. It turns out that’s not really true. I ended up going back to SuperDuper to copy between the drives. It only copies changed files, and once a week it deletes removed files (so if you accidentally delete something, there’s still a chance to recover it, unless you do it at the wrong time). Not a bad solution IMHO, though I’d still prefer rsync. The initial backup took less than half an hour; after that, just a few minutes should be enough to keep the disks in sync. I briefly considered setting up RAID, but decided against it since RAID is not backup: it doesn’t protect against things like corruption.
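For the curious, a scheduled mirror like this can be sketched as a crontab entry (paths here are hypothetical; Apple’s bundled rsync in 10.4 gained an -E flag for extended attributes and resource forks, though as noted it proved unreliable in practice):

```shell
# Hypothetical crontab entry: mirror the data volume nightly at 3 AM.
# -a  archive mode (permissions, times, recursion)
# -E  copy extended attributes / resource forks (Apple's 10.4 rsync)
# --delete  remove files on the mirror that were deleted from the source
0 3 * * * /usr/bin/rsync -aE --delete /Volumes/Data/ /Volumes/Backup/
```

Note that --delete is exactly the behavior SuperDuper defers to once a week — with rsync you’d lose that accidental-deletion grace period unless you schedule it similarly.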

Apple needs to kill off resource forks ASAP. They should have done so when moving to Mac OS X several years ago.

Next up, I tried installing a copy of TechTool Pro that I no longer use on my Mac mini (since upgrading that machine to Leopard), but that resulted in some drive problems I couldn’t resolve without uninstalling it. The developers seem to know about the problem, but haven’t fixed it. You see the following error repeatedly in system.log until you reboot:

kernel[0]: IOATAController device blocking bus.

Drag.

I also updated MRTG, and this time compiled GD, libpng, libjpeg, etc. all by hand rather than using Fink. Last time I went with Fink, which saved me a few keystrokes, but when Fink stopped updating packages for 10.2 it left me high and dry. This time I’ll avoid it when possible. I need to try getting RRDtool set up at some point, since it’s so much better.
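Compiling by hand is just the usual autotools dance for each library — sketched here for libpng with a hypothetical version number; GD and libjpeg follow the same pattern:

```shell
# Standard by-hand build (version and prefix are illustrative).
tar xzf libpng-1.2.x.tar.gz
cd libpng-1.2.x
./configure --prefix=/usr/local
make
sudo make install
```

The upside over Fink is that nothing ties you to someone else’s packaging schedule; the downside is you’re now your own package manager.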

I use a few PHP scripts for easy admin of the box, and decided PHP 4 wasn’t adequate since it’s pretty much discontinued. So I upgraded to PHP 5.2, and all seems good so far. I think Apache 1.3.33 will serve me just fine for the moment, so I’m not upgrading that.

I might give setting up BIND a try, since local DNS would be pretty handy for accessing the server without modifying the hosts file on each computer.
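A minimal zone file for a hypothetical local zone (the zone name, hostnames, and addresses here are all made up; named.conf would also need a matching `zone` block pointing at this file) would look something like:

```
; Hypothetical zone file for a local "home.lan" zone.
$TTL 86400
@       IN  SOA  ns.home.lan. admin.home.lan. (
                 2008051901 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 86400 )    ; negative-cache TTL
        IN  NS   ns.home.lan.
ns      IN  A    192.168.1.2
server  IN  A    192.168.1.2
```

Point each machine’s DNS at the server and `server.home.lan` resolves everywhere, with no per-machine hosts-file edits.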

I also disabled things like Spotlight, which has absolutely no purpose on this box.

On another note, glib for some reason won’t compile for me. No clue what’s going on. Overall, it’s looking pretty good and should be about ready for real use. I just want to make sure the backups work as expected.

Categories
Apple Mozilla

Virtualization For Mac OS X?

Virtualization is a great way to improve reliability, take advantage of hardware, and scale. For example, Mozilla’s build team uses it to manage all the build instances that used to be on individual machines. These servers essentially compile code all day long. One problem with virtualization and cross-platform building is that Mac OS X doesn’t run in any virtualization environment (because of Apple’s interest in selling hardware). This means that while you can run Windows and Linux on the same boxes, you still need to maintain separate Xserves just for compiling for Mac OS X. It looks like Mac OS X Server 10.5 (and only the server edition) now has a license that permits running it virtualized. While great, that makes things like a build farm pretty expensive: you can’t just buy the Mac OS X client license, even though that’s all you really need. You have to buy Server.

Currently, there’s nothing other than PearPC that can run it (and PearPC is uselessly slow). Hopefully VMware will add support at some point. Then things can get interesting.

Categories
Blog General

Site Outages

This and other sites of mine will experience a few outages this weekend as the servers move to a new temporary data center.

Update: The server is snug in its new home. It will be moving again to a permanent new data center.

Categories
Internet Open Source Programming Web Development

Wikipedia Infrastructure

Here’s a great read on Wikipedia’s Infrastructure. Two excellent sets of slides. A lot can be done with a LAMP stack. The common theme: caching and careful optimization. There are some really impressive stats in there.

Categories
Mozilla

FoxTorrent

I said back in 2004 that Firefox needs built-in support for BitTorrent. My idea was that it would be integrated into the download manager so that it was “just another protocol,” transparent to the typical user. I still stand by that.

Fast forward to 2007: FoxTorrent, by RedSwoosh (now owned by Akamai).

I’d personally love to see something like this ship built in. It’s a great feature. BitTorrent is a great protocol for distributing large downloads without having to buy expensive infrastructure. Akamai’s interest is proof of that.

FoxTorrent has a blog if you want to keep an eye on it. FoxTorrent is MIT licensed as well. It seems like a very interesting product. I’ll have to dig into this and look at it a bit closer.

[Hat tip: TechCrunch]

Categories
In The News Tech (General)

Yahoo Goes Green

Yahoo is going carbon neutral. I’m curious how much is offset, and how much is reduction. Yahoo has a fairly large infrastructure. I wonder if they are using alternative power sources, or if they are going to plant a million trees. They do mention:

These projects could include a wind farm in India or a small-scale run of the river hydroelectric project in Brazil. We’re also looking to invest in emerging clean technologies.

Interesting. I wonder if we will see things like carbon neutral VoIP, carbon neutral bandwidth, carbon neutral data centers / colocation / hosting?

Categories
Blog Internet Web Development

Site Backups And Bandwidth Fun

I keep regular backups of everything on this server just in case something happens. Recently I switched to a more automated and secure (PGP-encrypted) solution for this blog due to its fast-paced nature. Just the critical stuff (database, media, templates). I chose PGP (implemented using GPG) since it’s easy, and I only have to store the public key on the server, making it safer than most alternatives.
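The public-key-only scheme can be sketched as a single crontab line (database name, recipient key, and paths are all hypothetical). Since the server holds only the public key, a compromised box can’t read old backups:

```shell
# Hypothetical nightly dump, compressed and encrypted to the admin's
# PUBLIC key; decryption requires the private key, which never
# touches the server. (% is escaped because cron treats it specially.)
0 2 * * * mysqldump blogdb | gzip | gpg --encrypt --recipient admin@example.com > /backups/blogdb-$(date +\%F).sql.gz.gpg
```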

I’m strongly considering eventually moving it all over to Amazon’s S3 storage. At $0.15 per GB-month of storage used and $0.20 per GB of data transferred, it would be very affordable to keep backups in an even more secure fashion. I’d still use my own encryption on top of theirs for extra security. For things like media, I could even see myself hosting solely at Amazon. It just seems like a more practical and scalable approach.
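To put those rates in perspective, here’s a back-of-the-envelope calculation at the prices quoted above (the backup sizes are made-up examples):

```python
# Monthly S3 cost at the 2007 rates quoted above:
# $0.15 per GB-month stored, $0.20 per GB transferred.
def s3_monthly_cost(gb_stored, gb_transferred):
    return 0.15 * gb_stored + 0.20 * gb_transferred

# e.g. a hypothetical 2 GB of backups kept, re-uploaded once a month:
print(f"${s3_monthly_cost(2, 2):.2f}")  # $0.70
```

Even a generously sized blog backup comes out to pocket change per month, which is why it looks so attractive.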

Unfortunately, until either FTTH or DOCSIS 3.0 comes to town, Amazon’s S3 won’t be practical for home backup purposes. This server has a beefy connection to a few large pipes to the internet (Level3, Global Crossing, and Cogent, last I checked), which provide enough speed that a backup takes only a few seconds. At home, on a DOCSIS 1.1 cable network (such as Comcast’s), the upload capacity is just too slim. Comcast still only allows 384kbps up, and even its top plans in select areas don’t top 1Mbps. Of course, these are Comcast’s numbers; actual performance is often less. In the areas it currently serves, Verizon FiOS (FTTH) is available at 15Mbps/2Mbps, which is much better suited for such purposes (though more would be welcome). Strange as it may seem, the pricing is quite competitive, giving cable a run for its money. Perhaps one day DOCSIS 3.0 will appear, though that seems a while away. Perhaps one day all homes will have 100Mbps full-duplex connections with low latency.
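The difference is easy to quantify. Taking a hypothetical 1 GB backup at the upstream speeds mentioned above (and ignoring protocol overhead, which only makes things worse):

```python
# Hours to upload a given amount of data at a given link speed.
def upload_hours(gigabytes, mbps):
    bits = gigabytes * 8 * 1000**3      # decimal GB -> bits
    return bits / (mbps * 1000**2) / 3600

print(f"{upload_hours(1, 0.384):.1f} h")  # Comcast's 384 kbps upstream: 5.8 h
print(f"{upload_hours(1, 2.0):.1f} h")    # FiOS's 2 Mbps upstream: 1.1 h
```

Nearly six hours per gigabyte on cable versus about an hour on FiOS — which is exactly why offsite home backup isn’t practical yet.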

The only real way to get around this limitation is perhaps to use rsync to perform backups. The initial backup would still suck, but after that it wouldn’t be too bad. That wouldn’t work with services such as Amazon’s S3, though, which are token based. There is an rsync-like clone, but it’s still not the real thing. Perhaps Google’s upcoming GDrive will be cool enough to allow the use of rsync over SSH (I can dream) in addition to WebDAV (which is what I expect to see). Last I checked, rsync doesn’t support WebDAV because WebDAV runs over HTTP. If I understand it right, RFC 3229 would add delta encoding support to HTTP, making something like rsync over WebDAV possible, since rsync relies on delta encoding.