Categories
Space

We Choose To Go To The Moon

Moon Landing 40th Anniversary

It started during a Joint Session of Congress on May 25, 1961, with John F. Kennedy challenging the United States to put a man on the moon by the end of the decade. In 1969, six years after JFK was assassinated, Apollo 11 landed on the moon, an event captured in this famous newscast by the late Walter Cronkite, who coincidentally passed away on Friday.

For the 40th anniversary NASA restored some of the old video of the landing, now available to view in H.264. It’s not true HD by today’s standards, but it’s still impressive to see. NASA’s Lunar Reconnaissance Orbiter (LRO) also managed to snap a few pictures of the landing sites of the Apollo missions just in time. I believe this is the first time they have been identified since the actual landings. Images at 2–3× higher resolution are on the way.

Lastly, the John F. Kennedy Library launched “We Choose the Moon,” a clever “live” broadcast of the Apollo 11 mission in its entirety with exactly a 40-year delay.

Now, 40 years later, NASA is embarking on Constellation, which even in vehicle design parallels what was done in Apollo. We may be back on the moon by 2020, assuming Constellation, Ares IV, or DIRECT succeeds.

Categories
In The News, Security

RFID War Driving

I’ve been a critic of RFID for the purpose of identifying people from early on, because the concept is inherently flawed despite the insistence of people paid to insist otherwise. Chris Paget is the subject of a widely circulated story about driving around Fisherman’s Wharf with $190 worth of gear (likely not bought with an RFID credit card) and grabbing the IDs of strangers in the area. It should be noted, for anyone wondering, that he didn’t break any federal laws.

The story ignores that Chris Paget also gave a talk at ShmooCon 2009 regarding RFID cloning. Of course cloning passports is nothing new; it happened in Europe just 48 hours after the passports were first issued. Don’t worry about that though, the US government says its passports can only be read from about 4 inches. Although, as the article notes (page 3), researchers from Tel Aviv University disagree, finding they can actually be read from several feet away using hobbyist gear. A student from the University of Cambridge found they can be read from 17 feet away.

While it’s admittedly handy that you can now clone a British passport without even opening the envelope, I question whether this is a necessary feature.

This reminds me of that old prank where you pull the tag off a library book and sneak it into someone’s belongings, so when they leave the library the detector goes off repeatedly as if they tried to steal a book. Clever misuse of a pretty easy-to-misuse technology. Of course the other side of this is that the book can now be removed without setting off the alarm. Double fail.

Putting an RFID card in a shield isn’t really a great solution, since most people will never bother in a world where still only 83 percent of Americans bother to wear seat belts [NHTSA, PDF]. Besides, if the point of including RFID is to read the card from a distance without exposing it for a swipe, isn’t shielding it redundant? You can always disable the chip by microwaving it briefly, though RSA Labs claims a small fire risk. I’ve heard of hammers being used too, though I’m not sure how you’d confirm the chip is dead.

Can we admit this RFID stuff is half-baked now?

Categories
Google

Google Chrome OS

The big news over the past 24 hours is the announcement of Google Chrome OS. Effectively, Google Chrome OS is a stripped-down Linux kernel with just enough on top to boot Chrome/WebKit as its main UI. The exact UI paradigm hasn’t been revealed as of yet. Google claims:

Speed, simplicity and security are the key aspects of Google Chrome OS. We’re designing the OS to be fast and lightweight, to start-up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don’t have to deal with viruses, malware and security updates. It should just work.

It’s an interesting and somewhat bold statement.

Categories
Mozilla, Web Development

Debating Ogg Theora and H.264

Since the big HTML5 news that there will be no defined codec for <audio/> or <video/>, there has been a lot of discussion about the merits of such a decision, and what led to it. To quote Ian Hickson’s email:

Apple refuses to implement Ogg Theora in Quicktime by default (as used by Safari), citing lack of hardware support and an uncertain patent landscape.

Google has implemented H.264 and Ogg Theora in Chrome, but cannot provide the H.264 codec license to third-party distributors of Chromium, and have indicated a belief that Ogg Theora’s quality-per-bit is not yet suitable for the volume handled by YouTube.

Opera refuses to implement H.264, citing the obscene cost of the relevant patent licenses.

Mozilla refuses to implement H.264, as they would not be able to obtain a license that covers their downstream distributors.

Microsoft has not commented on their intent to support <video/>.

I think everyone agrees this is going nowhere and isn’t likely to change in the near future. For the sake of moving HTML5 forward, this is likely the best decision.

Here’s how I interpret everyone’s position:

Apple’s Argument

One of the undeniable perks of H.264 right now is that there is hardware decoding available and used on certain devices. One of the most notable is the iPhone. Using hardware decoding means you’re not using the CPU, which results in better performance and, most importantly, better battery life.

Thus far there’s no hardware Theora decoder on the market (if you know of any, let me know; my research says none), which I suspect is why Apple is hesitant to jump on board. Until there’s hardware that’s proven to perform well, be cost-effective in the quantities Apple needs, and not be bombarded with patent infringement claims, I suspect they’d rather settle for H.264. The patent part is critical. Apple can update software to deal with patent disputes pretty quickly, as many other companies have done with software in the past. Hardware is not so easy. Last-minute hardware changes are harder to deal with than software changes because of the many things they impact, and the inability to update at a later date.

I’m almost positive the lack of hardware support is the exact same reason Apple has been so against Flash support. Remember, the YouTube application isn’t using VP6 like regular Flash; it’s using H.264 (that’s why it took so long for all of YouTube to become available on the iPhone).

If there’s enough Theora content out there, there will likely be Theora decoder hardware made to meet market demand. Getting to that point will be difficult with the amount of VP6 (Flash) and H.264 content already on the web. H.264 alone has a major head start in applications. VP6 has several years of video on the web behind it (and I still don’t think it has a hardware decoder on the market, though that might be due to licensing again).

In the long run, I think mobile technology will improve enough to make this a somewhat unnecessary constraint. Mobile CPUs and GPUs are just starting to reach the caliber needed for video. Performance per watt should improve. Battery technology is just starting to get pushed to its limits. This is a good thing for Theora in the long run, but the question is: how long?

Until Theora can be played with minimal impact on battery life, I don’t think any company with a heavy investment in mobile will want to jump on board.

Google’s Argument

Google has money and can license H.264. Shocker. Google, however, has trouble when it comes to Chromium. I suspect Google doesn’t care too much about which way this goes, since what they support in Chrome doesn’t mandate what YouTube supports. However, if Theora’s encoding quality for a given bitrate is good enough, it becomes a viable option.

Regarding the quality argument, I’ll simply point to this comparison. I think the quality today is comparable already, and likely to get better as the encoders improve. I’ll leave this discussion here.

Opera’s Argument

Opera says H.264 is too expensive to license. I don’t know what the costs are, or what they would be for Opera, but I’ll take their word on it. After all, they do have a product available for free download. While it’s commercial and closed source, they don’t have Google’s revenue stream, and I respect that.

Mozilla’s Argument

Mozilla can’t obtain a license that covers downstream users of Gecko, etc. I’m sure a good part of the argument is also that requiring licensing fees to use <video/> is bad for the web and open source. I agree.

Microsoft’s Argument

No comment. Historically they implemented <marquee/> but not <blink/>. Make of that what you will.

<video/> could be supported by plugin if needed. I recall Adobe supporting SVG by plugin a few years ago.

Where to go from here?

There are a few possible outcomes. Here are the ones I think are most likely:

  1. There’s a push for hardware decoding that makes Theora on mobile technically possible and performing well. If Apple is legally satisfied and jumps on board, that changes the game. As I stated earlier, I think Google is mostly ambivalent since they support both right now. Opera doesn’t want H.264 anyway, so they are cool. IE 8 can likely be handled by a plugin. Apple really is the deciding factor. Theora is the future.
  2. See what the web does. I suspect that, at least for a long while, the web will just stick with Flash since it works on almost all desktops. For mobile, the iPhone and Android make up pretty much the bulk of the mobile video market, and that doesn’t look like it’s changing any time soon. Content providers that want mobile will encode for mobile. That means 3 target platforms; not ideal, but reasonable (a rough markup sketch of this fallback follows below). H.264 and whatever Adobe adopts is the future.
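
As a rough sketch of that fallback (the file names, dimensions, and Flash player details are placeholders, not anything the spec mandates), a page can offer both codecs to <video/>-capable browsers and drop back to Flash everywhere else:

<video controls width="640" height="360">
    <!-- H.264 for Safari, Chrome, and the iPhone -->
    <source src="clip.mp4" type="video/mp4">
    <!-- Ogg Theora for Firefox 3.5 and Chrome -->
    <source src="clip.ogv" type="video/ogg">
    <!-- Browsers without <video/> support (e.g. IE 8) fall through
         to a Flash player, which can reuse the same H.264 file -->
    <object type="application/x-shockwave-flash" data="player.swf" width="640" height="360">
        <param name="movie" value="player.swf">
        <param name="flashvars" value="file=clip.mp4">
    </object>
</video>

The cost is encoding and storing every clip at least twice, which is exactly the duplication everyone above is trying to avoid.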

I know how the media is interpreting all of this. How do other developers and open-source folks see it?

Categories
Mozilla, Web Development

Optimizing @font-face For Performance

You want to use @font-face, then you realize it’s got some downsides. First of all, it’s another HTTP request, and we know the golden rule of web performance is to keep HTTP requests to a minimum. Secondly, fonts aren’t small files; they can be 50 KB+ in size. Lastly, the lag of fonts loading last means your page seems to morph into its final form.

Here’s a cool little optimization. By using a data: URL you can embed the font inline, encoded in base64. For example:

@font-face {
    font-family: "My Font";
    src: url("data:font/opentype;base64,[base64-encoded font here]");
}
 
body {
    font-family: "My Font", serif
}

You can see this in action here. This seems to work fine in Firefox 3.5 and Safari 4 (presumably any modern WebKit-based browser). Other browsers will simply act as if they don’t support @font-face.

In practice I’d recommend putting it in a separate stylesheet rather than in inline CSS, so that your pages are smaller and the CSS can be cached for subsequent page views.
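
As a rough sketch of that setup (the file name, font name, and encoding command are placeholders for whatever you actually use), the @font-face rule can live in its own small stylesheet so the large base64 payload is fetched once and then served from cache:

/* fonts.css — linked with its own <link> element so the base64 payload
   caches independently of the rest of your CSS. The base64 string can be
   produced with any encoder, e.g. base64 -w 0 MyFont.otf on a typical
   Linux system. */
@font-face {
    font-family: "My Font";
    src: url("data:font/opentype;base64,[base64-encoded font here]");
}

Your other stylesheets then just reference “My Font” as usual.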

Data URLs are part of Acid2, which most modern browsers either pass or plan to pass. If you use an OpenType font you’d get pretty decent compatibility outside of IE (IE’s @font-face support is limited to its own Embedded OpenType, EOT, format). Using TrueType you’d likewise get pretty good compatibility sans IE. Check the @font-face page on MDC for more details. Unlike with images, browsers that support @font-face are likely to support data: URLs as well, making this a pretty good solution.

Special thanks to Open Font Library for having some nice free fonts with awesome licensing. This post was partially in response to a comment left the other day on my @font-face hacks blog post.