There is a proposal circulating around the web to create an
X-Torrent HTTP header for the purpose of pointing to a torrent file as an alternative way to download a file from an overloaded server. According to this blog I've been an advocate of implementing BitTorrent in browsers, Firefox in particular, since at least 2004, and I like the idea in principle, but I don't like the proposed implementation.
The way the proposal would work is that a server sends the
X-Torrent HTTP header, and if the browser chose to use BitTorrent it would do that rather than leech the server's bandwidth. This, however, fails if the server is already overloaded.
This is also a little unnecessary, since browsers already send an
Accept-Encoding request header, which could advertise support for torrents, removing the need for the server to send anything by default. Regardless, the system still fails if the server is overloaded.
A nicer way would be to also utilize DNS, which is surprisingly good at scaling for these types of tasks. It's already used for similar things like DNSBL and SPF.
Assume my browser supports the BitTorrent protocol and I visit the following URL for a download:
My request would look something like this:
GET /pub/myfile.tar.gz HTTP/1.1
Host: dl.robert.accettura.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Accept: */*
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate,torrent
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://robert.accettura.com/download
The server's response would look something like this:
Date: Sun, 18 Jan 2009 00:25:54 GMT
Server: Apache
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/x-bittorrent
The content would be the actual torrent. The browser would handle it as appropriate, either by opening a helper application or by handling it internally. If I didn't have
torrent in my
Accept-Encoding header, I would have been served over plain HTTP like we are all accustomed to.
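Server-side, that content negotiation could look roughly like this sketch (the function name, return shape, and the convention of a `.torrent` file sitting next to the download are my own illustration, not part of the proposal):

```python
def choose_response(accept_encoding, file_path):
    """Pick a representation based on the Accept-Encoding request header.

    If the client lists 'torrent' among its accepted encodings, serve the
    torrent metadata (application/x-bittorrent); otherwise fall back to
    serving the file over plain HTTP as usual. Hypothetical sketch only.
    """
    # Strip any quality values, e.g. 'gzip;q=0.5' -> 'gzip'
    encodings = [e.split(';')[0].strip() for e in accept_encoding.split(',')]
    if 'torrent' in encodings:
        return ('application/x-bittorrent', file_path + '.torrent')
    return ('application/octet-stream', file_path)

ctype, path = choose_response('gzip,deflate,torrent', '/pub/myfile.tar.gz')
# ctype == 'application/x-bittorrent', path == '/pub/myfile.tar.gz.torrent'
```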
Now what happens if the server is not responding? The client could fall back to the DNS level.
First, take the GET path and generate a SHA1 checksum of it; in my example that would be:
Next, generate a DNS query in the format
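As a sketch of the lookup scheme (the exact string being hashed and the placement of the `_torrent` label are my assumptions about the format, based on the description here):

```python
import hashlib

def torrent_query_name(get_path, host):
    """Build the hypothetical DNS name for the torrent fallback:
    the SHA1 hex digest of the GET path, under a _torrent subdomain
    of the host the download came from."""
    digest = hashlib.sha1(get_path.encode('ascii')).hexdigest()
    return '%s._torrent.%s' % (digest, host)

name = torrent_query_name('/pub/myfile.tar.gz', 'dl.robert.accettura.com')
# -> '<40-char sha1 hex>._torrent.dl.robert.accettura.com'
```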
The response would look something like a Base64-encoded
.torrent file broken up and served as
TXT records. Since a TXT record's character-strings are limited to 255 bytes each (and a traditional UDP DNS response to 512 bytes), the response could be broken up into multiple records, which the client would concatenate to reassemble the file.
Odds of a collision with existing DNS names are low thanks to the use of a SHA1 hash and the
_torrent subdomain. It coexists peacefully.
The downside here is that if your server fails, your DNS takes an extra query from any client capable of doing this, and there is some added latency in the process.
The upside is that DNS scaling has come a long way and is rarely an issue for popular sites and web hosts. DNS can be (and often is) cached by ISPs, resulting in an automatic edge CDN. ISPs can also mitigate traffic on their networks by caching on their side (something I also suggested in 2004).
BitTorrent may be used for illegal content, but so is HTTP. I think costs for ISPs and websites could be cut significantly by making BitTorrent more transparent as a data transfer protocol.