The Internet Of 1981

Internet Of 1981

Now here’s the “Internet” of 1981. This is not a concept, but a test of San Francisco Examiner content served electronically to personal computers. It’s a misnomer to call it the “Internet”, though; it was an online service delivered via CompuServe. It wasn’t until about 1989 that CompuServe offered its subscribers limited access to the wider Internet.

The San Francisco Examiner is still around, though now free of charge and accessible on the Internet. It now uses XHTML and jQuery, and offers images as well as text. What a difference 20+ years can make.

Internet Concept In 1969

Internet Concept In 1969

Here’s a great video of an Internet-like concept in 1969. Some particularly interesting things are the gender-specific roles and the simplicity of it all. The hardware is almost comical considering how few buttons are on the machines to control the entire interface. There isn’t even a keyboard. Amazingly, multiple displays are used, something that even in 2009 is somewhat of a luxury. It even has a precursor to email, nicknamed the “Home Post Office”.

Measurement Lab

Google today unwrapped Measurement Lab (M-Lab) via its blog. M-Lab can measure connection speed, run diagnostics, and check whether your ISP is blocking or throttling traffic.

In general it’s a good idea, but it’s nothing new. Sites like dslreports.com and SpeedTest.net have been measuring speed and providing diagnostics for years. The BitTorrent test, however, isn’t replicated by many yet.

One thing that isn’t addressed is how they will detect an ISP adjusting its routing to handle M-Lab servers specially. What stops an ISP from leaving traffic to one of Google’s “36 servers in 12 locations in the U.S. and Europe” untouched while throttling all other data? Perhaps Vint Cerf and friends have a plan in mind, but it seems to me this could become a cat-and-mouse game.

Open Video

Just the other day I was complaining that Ogg Theora/Vorbis hasn’t really proven itself or achieved the market penetration needed for people to still care about it in several years. My concern with less popular file formats is that data is lost forever if future computing can’t read it. Popularity, fair or not, does help ensure longevity. For example, I can still open old WordPerfect files more easily than Professional Write files (trip down memory lane, anyone?).

I’m thrilled to see a push for open video. Better encoders and decoders, along with the involvement of the Wikimedia Foundation (Wikipedia’s use of Theora can be very influential), will hopefully give these formats a boost; adoption tends to snowball once it gains momentum.

Where Is iPhone OS 2.3?

I’m somewhat perplexed by Apple’s iPhone software update scheme since 2.0 was released. Apple has been somewhat erratic in its feature set and release schedule as of late. 2.0.x was obviously about big bug fixes and performance issues. 2.1.x was also largely bug/performance fixes plus a few enhancements like Genius playlists and usability improvements. 2.2 was about small enhancements, Street View, and bug fixes (mainly for Safari), getting the 2.x platform stable. Apple has been promising a push notification system since 2.0 but has been radio silent on the status of this feature that developers have been waiting for.

Since 2.2 was released on November 21, 2008, there have been no 2.3 seeds released to developers as far as I’m aware. Apple needs to distribute seeds before a release to give developers a chance to update their applications as appropriate. Leaks are inevitable, as various sites love posting this info, and with enough developers someone breaking the NDA is a near certainty. Considering there have been no leaks, it’s pretty safe to say 2.3 is either still under heavy development, too immature for even a developer seed, or nonexistent.

I suspect Apple has most of its engineers working on iPhone OS 3.0, which will likely launch with the next generation of the iPhone this summer. I suspect that’s when push notification will be addressed. Apple will need to give developers at least two months to play with it, partially to shake out the bugs and partially so it has some utility by the time of release.

This will be an interesting thing to spin in a positive light, since Apple promised it would be seeded to developers in July 2008 and in users’ hands by September 2008. It was subsequently pulled from iPhone OS 2.1 and considered a bit immature by developers who played with it. That was back in August 2008.

I’m thinking there might be an iPhone OS 2.2.1 between now and June to deliver perhaps a few bug fixes. I think the odds of an iPhone OS 2.3 release are growing slimmer with June/July rapidly approaching.

I’m not alone in my thoughts. John Gruber thinks the same for the most part, though he is slightly more optimistic on the timeline.

Happy 25 Mac

Macintosh - Insanely Great

It all started January 24, 1984. Not long thereafter, a PC pundit would begin insisting every year that Apple would go out of business. 25 years later, Apple is still around.

Enjoy the 1984 Macintosh launch presentation. That’s when Steve Jobs’ “reality distortion field” was a mere toddler (the term was coined in 1981 by Bud Tribble). Also noteworthy: Steve Jobs didn’t wear his trademark St. Croix mock turtleneck, Levi’s 501 blue jeans, or New Balance 991 sneakers. I don’t see bottled water either 😉 .

Whitehouse.gov Analysis

A few notes on the new whitehouse.gov website as I did for the campaign sites after about 5 minutes of sniffing around:

  • Running Microsoft-IIS 6.0 and ASP.NET 2.0.50727. The Bush administration ran Apache on what I think was some sort of Unix. Data is gzip’d.
  • Whitehouse.gov is using Akamai as a CDN and for DNS service.
  • Using jQuery 1.2.6 (someone should let them know 1.3 is out). Also using several plugins including jQuery UI, jcarousel, Thickbox. Also using swfobject.
  • Pages tentatively validate as XHTML 1.0 Transitional! I’m shocked by this. I’ve checked several pages all with the same result.
  • Using WebTrends for analytics; the Bush administration did as well.
  • IE Conditional Stylesheets and a print stylesheet.
  • RSS feeds are actually Atom feeds.
  • The website sets two cookies that I can see, WT_FPC and ASP.NET_SessionId, both expiring at the end of the session, which is permitted for federal government sites per the OMB Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002 (using Google Cache for that link since I can’t find it anywhere else; our government should really keep those in a more permanent location).

I should note that this is quite a different architecture from the Obama campaign site, which ran PWS/PHP with no notable JS library, a feed, and Google Analytics.

Update [1/20/2009 @ 9:00 PM EST]:

Super Mario Gravity

The other day I mentioned there is a JS implementation of Super Mario Brothers. I also mentioned that the physics feel about right compared to the real game.

Apparently someone did some analysis of the series and concluded (correctly) that the gravity physics are totally unrealistic (shocking). The real nuggets are that Mario can jump 5 times his body height and should be unconscious on the way down, since his falls reach 9.31 g, enough to knock out a human without a G-suit. Maybe those are special overalls after all.

Science!

BitTorrent For HTTP Failover

There is a proposal circulating around the web to create an X-Torrent HTTP header for the purpose of pointing to a torrent file as an alternative way to download a file from an overloaded server. I’ve been an advocate of implementing BitTorrent in browsers, in particular Firefox, since at least 2004 according to this blog. I like the idea in principle but don’t like the proposed implementation.

The way the proposal would work, a server would send the X-Torrent HTTP header, and if the browser chose to use BitTorrent it would do so rather than leech the server’s bandwidth. This however fails if the server is already overloaded.

Unnecessary Header

This is also a little unnecessary, since browsers automatically send an Accept-Encoding request header which could advertise support for torrents, removing the need for the server to send anything extra by default. Regardless, the system still fails if the server is overloaded.
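As a sketch of that negotiation, a server could simply parse the Accept-Encoding header and decide whether to answer with a torrent. (The torrent token is the hypothetical extension discussed here, not a standard content coding.)

```python
def accepts_torrent(accept_encoding: str) -> bool:
    """Return True if the client's Accept-Encoding header advertises the
    hypothetical "torrent" token alongside standard ones like gzip."""
    tokens = [token.split(";")[0].strip().lower()
              for token in accept_encoding.split(",")]
    return "torrent" in tokens

# A client that supports the extension vs. one that doesn't:
print(accepts_torrent("gzip,deflate,torrent"))  # True
print(accepts_torrent("gzip,deflate"))          # False
```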

Doesn’t Failover

A nicer way would be to also utilize DNS, which is surprisingly good at scaling for these types of tasks. It’s already used for similar things like DNSBLs and SPF.

Example

Assume my browser supports the BitTorrent protocol and I visit the following URL for a download:

http://dl.robert.accettura.com/pub/myfile.tar.gz

My request would look something like this:

GET /pub/myfile.tar.gz HTTP/1.1
Host: dl.robert.accettura.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Accept: */*
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate,torrent
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://robert.accettura.com/download

The server’s response would look something like this:

HTTP/1.1 200 OK
Date: Sun, 18 Jan 2009 00:25:54 GMT
Server: Apache
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/x-bittorrent

The content would be the actual torrent. The browser would handle it as appropriate, either opening a helper application or handling it internally. If I didn’t have torrent in my Accept-Encoding header, I would have been served via HTTP as we’re all accustomed to.

Now what happens if the server is not responding? A fallback at the DNS level could be used.

First, take the GET path and generate a SHA-1 checksum of it; in my example that would be:

438296e855494825557824b691a09d06a86a21f1

Now generate a DNS query in the format [hash]._torrent.[server]:

438296e855494825557824b691a09d06a86a21f1._torrent.dl.robert.accettura.com
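In Python, deriving the lookup name could look like this. A sketch only: the post doesn’t pin down exactly which string gets hashed, so I assume the request path, and the digest may differ from the one shown above.

```python
import hashlib

# Assumed input: the GET path from the example request above.
path = "/pub/myfile.tar.gz"
digest = hashlib.sha1(path.encode("ascii")).hexdigest()

# Build the lookup name in the [hash]._torrent.[server] format.
query_name = "{0}._torrent.{1}".format(digest, "dl.robert.accettura.com")
print(query_name)
```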

The response would be a Base64-encoded .torrent file served as TXT records (or a hypothetical dedicated TOR record type). A single TXT character-string is limited to 255 bytes, and a traditional UDP DNS response to 512 bytes, so the data would likely need to be broken up into multiple records and concatenated by the client to reassemble.
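The split-and-reassemble step might look like this (a sketch, with a toy bencoded payload standing in for a real .torrent file):

```python
import base64

# Toy stand-in for a bencoded .torrent file.
torrent = b"d8:announce35:http://tracker.example.com/announcee"

# Server side: Base64 encode and chop into TXT-sized strings (255 bytes max).
encoded = base64.b64encode(torrent).decode("ascii")
records = [encoded[i:i + 255] for i in range(0, len(encoded), 255)]

# Client side: concatenate the records in order and decode.
reassembled = base64.b64decode("".join(records))
assert reassembled == torrent
```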

The odds of a collision with existing DNS space are slim due to the use of a SHA-1 hash and the _torrent subdomain, so it coexists peacefully.

Downside

The downside here is that if your server fails, your DNS will take an extra query from any client capable of this fallback. There is slight latency in the process.

Upside/Conclusion

The upside is that DNS scaling has come a long way and is rarely an issue for popular sites and web hosts. DNS responses can be (and often are) cached by ISPs, resulting in an automatic edge CDN. ISPs can also mitigate traffic on their networks by caching torrents on their side (something I also suggested in 2004).

BitTorrent may be used for illegal content, but so is HTTP. I think costs for ISPs and websites could be cut significantly by making BitTorrent more transparent as a data transfer protocol.

JS Super Mario Brothers

Super Mario Brothers JS

Here’s Super Mario Brothers rewritten in JavaScript. It’s remarkably well done; even the physics closely match the original game. The only thing I noticed was a slight lag in Firefox 3.0.5, which I suspect is due to the speed of the events rather than the actual game.

There have been several attempts to port the classic game to the web, including this one, which is fairly complete at 14 kB of JavaScript and uses <canvas/>, though I think the one above is more like the original game.