I noted in January that WhiteHouse.gov relaunched for the Obama administration using a closed-source infrastructure (ASP.NET on IIS 6.0) running a proprietary CMS.
It has now relaunched using the open source Drupal CMS. Also interesting is that it’s no longer broadcasting any headers identifying its server. Considering Drupal is far better tested on a Unix OS and Apache, I’m wondering if they dropped Windows Server/IIS 6.0 in favor of some flavor of Linux and Apache. I can’t find any hint of what they’re using.
It’s noteworthy that Drupal was already used on recovery.gov and has been used in politics by way of CivicSpace for the Dean campaign in 2004.
Via Drupal it’s still using jQuery (version 1.2.6). It’s also now using RSS rather than Atom for feeds, which I presume came with the switch to Drupal rather than being an intentional change.
Another interesting change is that they tweaked the doctype from XHTML Transitional to XHTML+RDFa.
Pretty much everything else is still the same, including the design. Analytics is still done using WebTrends (a holdover from the Bush administration), and Akamai still sits in front of their servers.
For CSS hackers: they still use conditional comments to serve IE-specific CSS.
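For anyone unfamiliar with the technique, a conditional comment looks something like this (a generic sketch with hypothetical file names, not WhiteHouse.gov’s actual markup):

```html
<!-- Served to every browser -->
<link rel="stylesheet" type="text/css" href="/css/screen.css" />
<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="/css/ie7.css" />
<![endif]-->
```

Non-IE browsers treat the whole `<!--[if IE 7]> … <![endif]-->` block as an ordinary comment and skip it, while IE 7 parses the contents and loads the extra stylesheet with its overrides.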
Their pages no longer fully validate, though there’s no terrible markup either.
Video is still done using Flash; maybe they’ll consider adopting HTML5 video. They could do so and fall back to Flash: the latest versions of Firefox, Safari, and Chrome could take advantage of it today, while the rest of the browsers would get the Flash experience. That would be the next major step in opening up. Mark Pilgrim has a good primer if they need one.
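A rough sketch of what that markup could look like (the file names and Flash player paths here are hypothetical, purely for illustration):

```html
<!-- Browsers with HTML5 video support play one of the <source> files:
     Safari/Chrome can use the MP4, Firefox the Ogg Theora version. -->
<video width="640" height="360" controls>
  <source src="/videos/address.mp4" type="video/mp4" />
  <source src="/videos/address.ogv" type="video/ogg" />
  <!-- Browsers that don't understand <video> render this fallback instead -->
  <object width="640" height="360" type="application/x-shockwave-flash"
          data="/player.swf">
    <param name="movie" value="/player.swf" />
    <param name="flashvars" value="file=/videos/address.mp4" />
  </object>
</video>
```

The nice thing about this pattern is that the fallback is just the markup they already serve today, wrapped inside the `<video>` element.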
Edit [9/26/2009 @ 1:45 PM EST]: Tim O’Reilly confirms it is indeed running on LAMP, specifically Red Hat Linux with Apache, MySQL, and obviously PHP. Apache Solr is used for search.
I mentioned over a year ago that Apple was porting Sun’s ZFS file system to Mac OS X. While it was available read-only in Leopard, it seems to have been pulled entirely from Snow Leopard. For something that was suspected to be the future of disk storage on Mac OS X, that seemed odd. Now Apple has officially discontinued the project.
I had heard about the ongoing NetApp vs. Sun patent war, in which NetApp contends that ZFS is too close to its WAFL file system. It seems likely that Apple doesn’t want to get involved in that; Apple has a similar fear of patent exposure with Ogg Theora. And once a transition to ZFS was made, swapping in something else would be a costly and time-consuming effort, since Mac OS has never been very file-system neutral.
A new theory is that the Oracle/Sun deal leaves the combined company developing two file systems: ZFS and Btrfs. It sounds like Oracle’s Btrfs is the more likely future. Had Apple switched to ZFS, it would have been left as the only platform using it. Linux can’t fully switch since the CDDL license isn’t compatible with the GPL, meaning ZFS has to be implemented through FUSE. Btrfs, meanwhile, is coming along for Linux.
Reading through the Btrfs documentation, it seems like most of the big advantages of ZFS are already in Btrfs, though it lacks full-disk encryption. It does, however, add online resizing. It’s also GPL-licensed, has support from Red Hat, Novell, and IBM, and was accepted into the mainline Linux kernel as of 2.6.29-rc1. That means it already has a much more robust community and seems likely to be widely adopted in UNIX land.
So will Apple switch to Btrfs for Mac OS X 10.7 or 10.8? I think the two possibilities are that it will either build something in-house or switch to Btrfs. Btrfs offers a compelling set of features and would let Apple brag about more compatibility with other OSes, as well as adopt features at low cost as the file system matures. It’s possible we’ll hear something as soon as WWDC 2010.
I use RRDtool to make graphs of various things I monitor, like server stats and network stats, and it does a relatively good job. My one (big) complaint is that when a monitored counter resets (after a restart, for example) you occasionally see gigantic spikes that completely mess up the data. I’ve even seen spikes larger than what the system could technically handle.
It’s not widely mentioned that there’s a removespikes.pl script (download) that will remove these outliers from your RRDs. I put together a quick shell script so it’s easy to run again when I need it:
for i in /path/to/graphs/rrd/*; do
    perl removespikes.pl "$i";
done
If you have a ton of graphs, a quick shell script like this to iterate through the directory is quicker than running it by hand. If you only have a handful like me, no big deal.
Keep the script around for the next time you have spikes to deal with.
I’ve used old Macs as file servers for several years now. They’re well-built machines that ship with a tightly integrated UNIX-based operating system. Of all the consumer-grade hardware/software out there, I think they’re by far the best equipped for the task. They’re expensive, but the quality is unmatched.
Apple today launched several product refreshes, but the one that really catches my eye is the Mac mini server. It’s pretty much just a Mac mini with the optical drive replaced by a second 2.5″ SATA hard drive, and Snow Leopard Server in place of the standard Mac OS X.
The hardware is pretty uneventful. People have been swapping drives in the Mac mini for years to add more storage, as well as attaching external drives. On the software side, people have been running server products on the mini for some time. Nothing here is revolutionary. But marketing the product as a server is notable for a few reasons:
Home/Small Business Servers
Like I said, I’ve had a home server for years. It’s great for backing up and sharing files and printers, and it can be repurposed for a myriad of other tasks. While you can set this all up on stock Mac OS X, tweaking it is a little daunting since the Mac OS X UI only exposes the very basics. Mac OS X Server has much deeper integration, making it easier for people who don’t know what they’re doing. I expect we’ll see some third-party products that further expand its use in the home and small-business market. I wouldn’t be surprised to even see some home theater PC (HTPC) backend solutions. (MythTV, anyone?)
The Mac mini only consumes 16 watts when idle, yet it still has a 2.53 GHz Core 2 Duo and ships with 4 GB of RAM. Where it suffers is disk I/O, thanks to its 5400 RPM drives (its cost per GB isn’t that great either, thanks to the 2.5″ form factor). In previous models it wasn’t too difficult to swap in 7200 RPM drives, though I don’t know how the thermals will play out with dual hard drives. It may be possible to use software RAID; I’m not sure what performance improvement you’d see, since I don’t know the details of the motherboard. However, if your task isn’t I/O-bound, or you use a NAS via Gigabit Ethernet (or a FireWire/USB drive), it may not matter. That’s a pretty affordable low-powered node in your grid. Even better if it could handle higher-density RAM to get 8 GB in there via 2 × 4 GB SO-DIMMs.