Netflix is trying to reduce its dependency on CDNs by peering directly with ISPs and by offering a new hardware appliance ISPs can host on their own networks to offload traffic. The peering option is pretty straightforward. The appliance, however, is interesting. Netflix is actually quite transparent about what they are doing, so I thought I’d dig in and take a little look since they are sharing:
Netflix says right up front they were influenced by Backblaze, and their appliance is quite similar in many respects. The difference is that Netflix needs a bit more CPU and network I/O and a little less storage, a balance that’s pretty achievable. The appliance must be a tad on the heavy side, as this is a pretty densely packed server.
Essentially the hardware is a Supermicro mATX board and a bunch of SATA hard drives in a custom 4U enclosure. There are two 16-port LSI SAS controllers for 32 drives, plus 4 drives presumably running directly off the motherboard. The drives are Hitachi Deskstar or Seagate Barracuda. Nothing fancy here. An interesting tidbit: there are 2 x 512 GB “flash storage” devices (presumably SSDs) for logs, the OS, and popular content. I’d assume those two run in RAID 0 as one volume. The spinning disks are managed with software RAID so drive failures can be handled.
FreeBSD is the OS of choice. I’m not sure if the software RAID they are using is something they cooked up or something already out there. Another interesting note: they are using nginx as the web server and plain HTTP for moving content. That’s a huge win for nginx and says a lot about its abilities as a web server. It also sounds like Netflix is a customer of NGINX, Inc.
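Serving video over plain HTTP means any standard web stack works end to end. Netflix’s actual setup is nginx on FreeBSD; purely as a toy illustration of “static files over HTTP” (not their configuration), Python’s standard library can do the same thing in a few lines:

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Create a directory with a small stand-in "content" file.
content_dir = tempfile.mkdtemp()
with open(os.path.join(content_dir, "movie.bin"), "wb") as f:
    f.write(b"x" * 1024)

# Serve that directory over plain HTTP on an ephemeral port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=content_dir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any HTTP client can now fetch the content.
port = server.server_address[1]
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/movie.bin").read()
print(len(data))  # 1024
server.shutdown()
```

The appeal of this model is exactly that nothing proprietary sits between the appliance and the client: caches, proxies, and debugging tools all speak HTTP already.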
The idea of an appliance on the ISP end isn’t new. CDNs generally live close to, but not inside, the ISP’s network. On the TV side, The Weather Channel has done this for ages via the little-known WeatherSTAR appliances (pic). They sit at the headend and get weather data from TWC, then output local weather reports as video for the cable provider to insert. Like the Netflix appliance, the WeatherSTAR is essentially zero-maintenance: it just lives locally and serves its master remotely.
It’s nice that they’ve been as open as they have about what they are building. They also have an engineering blog worth keeping an eye on.
This blog is now up and running on a newer, faster server. I’ve spent a fair amount of time over the past week moving sites and syncing databases as I transition things over. There are still some loose ends, but I’m mostly there.
Back in 2008 I did a special segment in my “Secrets In Websites” series for the 2008 Presidential Elections. It was quite popular (almost crashed the server). I decided to do it again, but slightly revised for 2012.
A few days ago I did a kernel upgrade from 2.6.24 to a newer 2.6-series release. Surprisingly, the load on the server has dropped slightly. The server is generally under minimal load, just the way I like it, so a drop is particularly surprising. It was restarted just a few weeks prior, so I don’t think the restart had an impact on load. Unscientifically, the box appears to be under the same level of usage as before the upgrade. The two spikes that delimit the restart are due to some log processing.
I’ve used old Macs as file servers for several years now. They are well-built machines that ship with a tightly integrated UNIX-based operating system. Of all the consumer-grade hardware/software out there, I think they are by far the best equipped for the task. They are expensive, but the quality is unmatched.
Apple today launched several product refreshes, but the one that really catches my eye is the Mac mini server. It’s pretty much just a Mac mini with the optical drive replaced with a second SATA 2.5″ hard drive and a copy of Snow Leopard server in place of the standard Mac OS X.
The hardware is pretty uneventful. People have been swapping drives in the Mac mini for years to add storage, as well as attaching external drives. Software-wise, people have been running server products on the mini for some time. Nothing here is revolutionary. But marketing the product as a server matters for a few reasons:
Home/Small Business Servers
Like I said, I’ve had a home server for years. It’s great for backing up and sharing files and printers. It can also be purposed for a myriad of other tasks. While you can set this all up on stock Mac OS X, tweaking it all is a little daunting as the Mac OS X UI only exposes the very basics. Mac OS X server has much deeper integration making it easier for people who don’t know what they are doing. I expect we’ll see some third party products that further expand the use of this in the home and small business market. I wouldn’t be surprised to even see some Home theater PC (HTPC) backend solutions. (MythTV anyone?)
The Mac mini only consumes 16 watts when idle. It still has a 2.53 GHz Core 2 Duo CPU and ships with 4 GB RAM. Where it suffers is disk I/O, thanks to its 5400 RPM drives (its cost per GB isn’t great either, thanks to the 2.5″ form factor). In previous models it wasn’t too difficult to swap in a 7200 RPM drive, though I don’t know how the thermals will play out with dual hard drives. It may be possible to use software RAID; I’m not sure what sort of performance improvement you’d get since I don’t know the details of the motherboard. However, if your task isn’t I/O bound, or you use a NAS via Gigabit Ethernet (or a FireWire/USB drive), it may not matter. That’s a pretty affordable low-powered node in your grid. Even better if it could handle higher-density RAM to get 8 GB in there via 2 x 4 GB SO-DIMMs.
It’s hardly a secret that there is a serious demand for saving power in data centers. In a recent Times Magazine article:
Data centers worldwide now consume more energy annually than Sweden. And the amount of energy required is growing, says Jonathan Koomey, a scientist at Lawrence Berkeley National Laboratory. From 2000 to 2005, the aggregate electricity use by data centers doubled. The cloud, he calculates, consumes 1 to 2 percent of the world’s electricity.
To put that in a little more perspective, the 2009 census for Sweden puts the population at 9,263,872. Sweden’s population is just slightly higher than New York City’s (8,274,527 in 2007) or New Jersey’s (8,682,661 estimated in 2008). Granted, Sweden’s population density is 20.6/km2 compared to New York City’s 10,482/km2 or New Jersey’s 438/km2. Population density matters because it says a lot about energy consumption: dense populations require less energy thanks to communal resources. I still suspect the average Swede uses less electricity than the average American anyway. All these numbers were pulled from Wikipedia.
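To make the density gap concrete, a quick calculation using the figures above:

```python
# Population densities cited above, in people per km^2 (from Wikipedia).
sweden = 20.6
nyc = 10_482
new_jersey = 438

# New York City is roughly 500x as dense as Sweden; even suburban
# New Jersey is roughly 20x as dense.
print(round(nyc / sweden))         # 509
print(round(new_jersey / sweden))  # 21
```

So comparing “data centers vs. Sweden” is attention-grabbing, but the populations being compared live very differently.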
The US Department of Energy does have data on power consumption and capacity as well as forecasts on consumption and production. The obvious downside in the data is the reliance on coal, oil and gas which have environmental impacts as well as political impacts and costs (we know about the instabilities of the oil market). This is why companies with lots of servers like Google are looking very carefully at power generation alternatives such as hydroelectric and solar.
We all benefit from data center efficiency. Lower cost computing is a big advantage to startups and encourages more innovation by removing price barriers. It’s also an advantage to the general public since the technology and tricks learned eventually trickle down to consumers. We already are seeing more efficient power supplies, some even beating the original 80 PLUS certification.
Perhaps if we started tracking “performance per watt” in addition to “watts per square foot” we’d be looking at things from a more sustainable perspective.
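The point of a performance-per-watt metric is that it rewards efficiency rather than density. The numbers below are purely illustrative (not from any real benchmark), but they show how the metric would flip a comparison that watts-per-square-foot gets backwards:

```python
# Hypothetical servers: (requests/sec served, watts drawn at load).
# These numbers are made up for illustration.
servers = {
    "old box": (2_000, 400),
    "new box": (3_000, 300),
}

# The "new box" draws fewer watts yet does more work, so its
# performance per watt is double the old one's.
for name, (req_per_sec, watts) in servers.items():
    print(name, round(req_per_sec / watts, 2), "req/s per watt")
```

By watts per square foot alone, the hotter, denser rack looks “better”; by performance per watt, efficiency wins.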
Data center capacity and consumption is pretty interesting when you look at all the variables involved. Growth, power costs, facility size, technology available, even foreign politics play a role in what it costs to operate.
Big news today is that Google “unveiled” (more like confirmed) some data center secrets:
It has been known for years that Google builds its own servers rather than buying from a vendor. They have defended this on the grounds that their servers are more efficient and better customized to their needs than anything they could buy. They cut out things like video cards, which do nothing but add a point of failure and waste power. They put a battery on each server rather than have a UPS for the rack, having found that to be cheaper and more efficient. They also hang the power supply away from the rest of the system, presumably for cooling. This actually isn’t shocking, since it’s been leaked several times before, though this is the first time I’m aware of Google speaking publicly about their design in this much detail.
Container Data Centers
Apparently Google has been using shipping containers as data centers since 2005. It’s been known for a long time that Google was interested in the idea (as were other companies), but this is the first confirmation that they have actually been using them in production. 1,160 servers per container utilizing 250 kilowatts of power = 780 watts per square foot. Very impressive.
I guess it’s only a matter of time before we see commercial servers, and perhaps even some desktops with power supplies that have their own batteries.
Update [4/11/2009 @ 5:00 PM EST]: Google has a blog post up including video of the summit.
Buying JungleDisk makes sense since Rackspace wants to get into the cloud storage business. JungleDisk is one of the bigger Amazon S3 products out there. By adding Rackspace support to the product, they can quickly attempt to get into that market. Whether they will succeed depends on their offering’s cost. Their press release suggests $0.15/GB, but doesn’t say if they will bill based on requests and bandwidth (which is where Amazon S3 gets expensive). Also interesting is this little nugget:
Also later this year, Limelight Networks will team with Rackspace to allow developers to easily distribute content to millions of end users around the world and bring scalable content delivery and application acceleration services to the masses.
This is competing with Amazon’s attempt at starting a CDN later this year. It’s worth noting that these are both pretty primitive CDNs, since they require you to register objects before the CDN hosts them. Modern CDNs like Limelight and Akamai let you set up a CNAME so that the CDN essentially acts as a middle layer between your origin servers and your users. This requires no preregistering, since the CDN can just check the origin for any asset requested. Caching is controlled via configuration files and standard HTTP headers. I’m not sure how useful these CDNs will be to most people. Registering objects and uploading to another platform is a giant pain compared to just setting up a transparent CNAME. The difference is one requires development time, the other doesn’t.
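The origin-pull model is simple enough to sketch. This is a toy simulation of the idea (the dict-backed “origin” and `fetch_origin` are stand-ins for real HTTP requests, not any CDN’s actual API): on a cache miss the edge fetches from the origin and caches per its max-age, so nothing ever needs to be registered up front.

```python
import time

# Stand-in for the origin server: path -> (body, max-age in seconds).
ORIGIN = {"/logo.png": (b"\x89PNG...", 60)}

def fetch_origin(path):
    """Simulates an HTTP request to the origin server."""
    return ORIGIN[path]

cache = {}  # path -> (body, expiry timestamp)

def serve(path):
    """Serve from cache if fresh; otherwise pull from origin and cache it."""
    now = time.time()
    entry = cache.get(path)
    if entry and entry[1] > now:
        return entry[0], "HIT"
    body, max_age = fetch_origin(path)
    cache[path] = (body, now + max_age)
    return body, "MISS"

print(serve("/logo.png")[1])  # MISS
print(serve("/logo.png")[1])  # HIT
```

With a CNAME pointing your hostname at the edge, that’s the whole integration; the register-then-upload model instead makes every new asset a deployment step.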
Acquiring Slicehost makes sense since they apparently have technology that will be useful to Rackspace. They are making a bet that startups in need of hosting on virtual machines (which are much more complicated to manage than typical shared hosting) will make for a decent market in the future. With the economic downturn, this may not look like the most useful purchase in the short term. In the long run it may pay off handsomely. They have decent competition in that space and it’s quickly growing. Rackspace’s size may help it weather a downturn better than others, though.
They closed at 5.18, up 0.22 (4.44%) today, despite the Dow being down 514.45, so I guess I’m not alone in my assessment.
There’s been some DNS funny business going on with this blog the past several days. I’m still trying to figure out exactly where the problem is. DNS has always been one of my least favorite things to deal with.
About Robert Accettura
Robert Accettura is a web developer, Mozilla contributor, open source advocate, tech enthusiast, and occasional troublemaker. more »
You can follow this blog via RSS or follow me on any of the social sites below.