On Google’s Made In In The U.S.A. Experiment

Google is reportedly building the Nexus Q in the USA. These days it’s almost assumed that electronics are made in China, with a few notable exceptions in Japan or South Korea. When you think about it, though, building here makes sense. Consumer technology has come full circle.

Open up some electronics made before 1985. With rare exception, they look almost foreign by today’s standards. They are spartan in design: big circuit boards with a few components, some wires to peripheral lights, motors, whatever the device needs to do its thing. It’s almost elementary to figure out how one works. Even fixing it is well within reason.

Now open up a modern device like a cell phone. It’s generally a single, highly concentrated circuit board with a mess of finely placed parts on and around it. Every year they get smaller and more complicated. Until recently.

A curious thing started to happen. In an attempt to use less power and become cheaper to manufacture, electronics started simplifying their designs. Kind of. They consolidated many of their parts. For example, the complex set of layers in a cell phone’s screen that separated the backlight from the LCD panel from the digitizer was made into one slim component. Multiple chips were combined into a system on a chip (SoC). Common functions like WiFi and Bluetooth, which almost never ship separately, landed on one chip. Devices became simpler.

Most of these individual components are manufactured through highly automated means. Those LCD panels, for example, are not made by hand; they are made by machines, because a high level of precision is needed. No human etches a CPU or any other chip. Even soldering is increasingly machine driven, as most devices use surface-mount techniques like ball grid array, which would be nearly impossible to do by hand with anywhere near the accuracy or speed needed.

The result is that these components are made increasingly by machines, and assembled increasingly by machines. This changes the equation when it comes to deciding where to manufacture. The biggest advantage of China was very cheap skilled labor. This is changing. First, China is becoming more expensive as affluence builds. More notably, the need for labor decreases as designs become more machine centric. The iPhone is very labor intensive today, but don’t expect it to stay that way; its design is increasingly being simplified even as it pushes the limits. Energy costs and shipping costs are also changing.

Google’s bet is that they’ve simplified the human part of manufacturing to the point where labor costs are becoming minimal. It’s a reasonable bet. If you look closely at the photos in the NY Times piece, you’ll see lots of humans posing and a few doing actual assembly tasks. Of those assembling, there’s no mention of whether they are working on prototypes or are in full-scale production (which may involve fewer humans and more automation).

Factory automation will increasingly bring hardware manufacturing back to the US. But don’t expect to see the jobs of the 1980’s coming back with it. These will be highly automated facilities run by engineers and supervised by increasingly limited staff. They may one day operate like data centers.

The Best Of April Fools Day 2012

Unlike many of my peers I’m not really a curmudgeon when it comes to Internet pranks on April Fools. Some are cool, others aren’t very well done. Get on with it. There wasn’t that much this year, presumably because April 1 fell on a weekend. Two however were pretty awesome:

Google Maps 8 Bit “Quest”

April Fools Google Quest

This is just outright awesome. It’s a fully “usable” 8-bit version of their map. This wasn’t just a quickie hack, it’s actually well implemented and complete. It’s very NES.

XKCD’s Targeted Comics

Everyone’s favorite comic XKCD went over the top by creating a bunch of targeted comics based on several aspects including browser, ISP, and location. The folks at reddit have been working to break it down. It’s quite complicated and clearly took some time. A lot of comics were drawn to make this work.

On H.264 Revisited

Once again the debate over H.264 has come up in the Mozilla community. I’ve been a strong advocate of the WebM/VP8 codec given its liberal license and abilities and still am, but agree H.264 needs to be supported. It’s a requirement for mobile (B2G), and becoming necessary on the desktop.

A little over a year ago Chrome talked about dropping support for H.264. To date they have not done so, or given any indication that it’s even still in the plans, as far as I know. In 2010 Adobe said they would be supporting WebM (link in that same blog post). They too have failed to live up to their promise. In either case I’ve found no indication that they ever plan to follow through.

I suspect in Google’s case they were pressured by various providers and mobile partners who don’t want to encode or support another codec. Google’s been trying to woo anyone and everyone for the purposes of Google TV and presumably YouTube. It’s likely just not worth it for them to push. There are various theories floating around about Adobe, including a lack of a clear Flash strategy in an HTML5 world. Adobe does however have a “tools” strategy. Perhaps time will tell.

Furthermore, Apple and Microsoft are fundamentally opposed to WebM, as they are both licensors for H.264. The odds of them supporting something that hurts their bottom line, unless the rest of the web is threatening to leave them behind, are nearly zero.

I question however whether it should be bundled vs. using system codecs. Windows XP aside, system codecs mean that Microsoft and Apple are essentially responsible for making it work, as well as for the expense. Plugins could be used for OS’s that don’t ship with the appropriate codecs.

It’s time to put some effort into a JavaScript player for WebM and make that liberally licensed. Browsers still aren’t quite there, but eventually the day will come when that’s workable. The web will then gain the ability to have video play on any (modern) device. Just not natively. That is the backdoor for an open codec.
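The first step for any such player is deciding whether it’s even needed. A minimal sketch of that decision, with names of my own invention (no real player exposes a `pickPlayback` function): use native playback when the browser can decode WebM itself, otherwise hand the stream to a script-based decoder.

```javascript
// Hypothetical sketch: choose between native WebM playback and a
// JavaScript-based decoder. `canPlay` stands in for
// HTMLMediaElement.canPlayType, which returns "probably", "maybe",
// or "" (empty string means unsupported).
function pickPlayback(canPlay) {
  if (canPlay('video/webm; codecs="vp8, vorbis"') !== '') {
    return 'native';     // the browser decodes WebM itself
  }
  return 'js-decoder';   // fall back to a JavaScript WebM player
}
```

The JS decoder is the slow path, but it means every (modern) browser gets *something*, which is the whole point of the backdoor.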

The real issue is larger than the <video/> element. It’s software patents and their ability to undermine innovation and progress. It’s important to keep this in mind. Just look at mobile. It’s completely possible that the entire mobile industry could come to a halt over patent lawsuits and fear of lawsuits. All it takes is a company willing to press the button. Google spent $12.5 billion in what is essentially the patent equivalent of nuclear proliferation. That’s how real the threat is perceived. H.264 is arguably a fart in a hurricane.

Google Glasses For True In Your Face Advertising

Google Glasses or Google Goggles will likely suffer from the same limitation that, despite its success, hinders the iPad: bandwidth. Unlike an iPad, however, which is a great couch-surfing device and very usable offline, the glasses are really best suited for outdoor use, where augmented reality and real-time cloud data could make use of such an interface.

People don’t mind adopting strange new UI’s when they work well. The iPhone is a great example of that. A tiny touch screen display works well. Users had no problem adapting to it. I don’t think glasses are any different. If it’s intuitive and works well, users will be fine with it.

Supplying them with data however is another story. WiFi is hardly abundant in most of the US. Much of it is paid, or only available with a cellular data plan for the device. Cellular data service is still very expensive. I don’t think there is any Bluetooth profile outside of PAN (tethering) that would work and be acceptable to cellular providers. So piggybacking off of your cell phone’s data plan is unlikely.

Google could go the Kindle route and make connectivity free with purchase of the device, then largely lock it down to Google services. Since Google uses advertising they would be able to subsidize that expense.

The other issue is power. I wonder if Google is able to cram enough battery into a small enough package to make this all workable.

I’d like to try it. I hope it succeeds. However, looking at the big picture, there are fundamental issues. Of course there are privacy issues as well, but that’s another topic.

How Google Music Works

Google announced Google Music. Needless to say I was curious how they implemented an audio player in the browser. Most of the application is your run-of-the-mill modern web application with lots of JavaScript. It looks like pretty much anything Google’s built in recent years, and doesn’t do anything out of the ordinary for the most part. Until you get to the audio playback.

How the audio is played is interesting:

<div id="embed-container">
  <audio autoplay="autoplay" id="html5Player"></audio>
  <div class="goog-ui-media-flash">
    <embed wmode="window" pluginspage="http://www.macromedia.com/go/getflashplayer" type="application/x-shockwave-flash" seamlesstabbing="false" allowfullscreen="true" allowscriptaccess="sameDomain" bgcolor="#000000" flashvars="" src="r/musicplayer.swf" class="goog-ui-media-flash-object" name=":0" id=":0" quality="high" style="width: 1px; height: 1px;"></embed>
  </div>
</div>

You’re reading that right. That’s an HTML5 <audio/> tag, the first time I’ve seen one appear in a major product. However, as of this writing, in Firefox, Safari, and Chrome on Mac OS X the Flash player seems to be used. I suspect, but can’t confirm, that this may indicate a future intent to use HTML5 <audio/> in place of Flash. Flash is likely the default for now. But it’s still very interesting to see.
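One plausible reading of that markup (this is my own reconstruction, not Google’s actual code, and `chooseBackend` is an invented name) is a capability check that prefers the <audio> element when the browser can decode MP3 and falls back to the Flash embed otherwise:

```javascript
// Hypothetical reconstruction of the player's backend selection.
// canPlayMp3 would come from audioElement.canPlayType('audio/mpeg'),
// flashInstalled from plugin detection.
function chooseBackend(canPlayMp3, flashInstalled) {
  if (canPlayMp3) return 'html5';     // use the <audio id="html5Player"> tag
  if (flashInstalled) return 'flash'; // use the musicplayer.swf embed
  return 'unsupported';
}
```

Shipping both elements in the page makes a switch like this a one-line change server-side, which would fit a gradual HTML5 rollout.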

The audio itself seems to be 44,100 Hz, 320 kb/s MPEG Layer 3 (MP3). The samples I’ve looked at were encoded with LAME 3.98.2. Obviously, if they intend to use HTML5 audio they will need to offer something other than MP3, at least for Firefox users. It’s not currently possible to serve everyone without multiple encodings, and I don’t see that changing anytime soon.
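Serving everyone therefore means listing several encodings and letting the client pick the first one it can decode. A small sketch of that selection loop (the `pickEncoding` helper and the source-list shape are mine, purely illustrative):

```javascript
// Hypothetical sketch: walk a list of encodings and return the first one
// the browser reports it can play. `canPlay` stands in for canPlayType.
function pickEncoding(sources, canPlay) {
  for (var i = 0; i < sources.length; i++) {
    if (canPlay(sources[i].type) !== '') return sources[i].url;
  }
  return null; // nothing playable natively; a Flash fallback would go here
}
```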

The servers serving the media seem very similar to YouTube’s delivery servers for H.264 video. It’s progressive download, again just like YouTube. No DRM. I suspect there’s a shared history between this delivery system and YouTube or a very strong influence. But knowing how Google works, there’s likely a shared backend.

It’s pretty good stuff. I highly recommend checking it out. Google built a decent mp3 player in the cloud.

Quick Thoughts On Dart

Google yesterday officially took the wraps off Dart. Google decided to stop short of outright calling it a replacement for JavaScript, however that does seem to be one of the goals.

I’m still looking at it myself, but my first impression is that the point of another language is buried in the details of the announcement. This particular sentence I think is the focal point (emphasis mine):

  • Ensure that Dart delivers high performance on all modern web browsers and environments ranging from small handheld devices to server-side execution.

I suspect the real goal behind Dart is to unify the stack as much as possible. Web Development today is one of the most convoluted things you can do in Computer Science. Think about just the technologies/languages you are going to deal with to create a “typical” application:

  • SQL
  • Server Side Language
  • HTML
  • CSS
  • JavaScript

That’s actually a very simple stack, almost academic in nature. In real life, most stacks are even more complicated, especially when dealing with big data. Most professions deal with a handful of technologies; web development deals with whatever is at hand. I’m not even getting into supporting multiple versions of multiple browsers on multiple OS’s.

Google even said in a leaked internal memo:

– Front-end Server — Dash will be designed as a language that can be used server-side for things up to the size of Google-scale Front Ends. This will allow large scale applications to unify on a single language for client and front end code.

What happened to Joy?
The Joy templating and MVC systems are higher-level frameworks that will be built on top of Dash.

By using one language you’d reduce what a developer needs to know and specialize in to build an application. This means higher productivity, more innovation, and less knowledge overhead.

This wouldn’t be the first attempt at this either for Google. GWT is another Google effort to let developers write Java that’s transformed into JavaScript. This however doesn’t always work well and has limitations.

The web community has actually been approaching this from the other direction via node.js, which takes JavaScript and puts it on the server side, rather than inventing a language that feels almost server-side and trying to put it in the browser.

Google still seems to have plans for Go:

What about Go?
Go is a very promising systems-programming language in the vein of C++. We fully hope and expect that Go becomes the standard back-end language at Google over the next few years. Dash is focused on client (and eventually Front-end server development). The needs there are different (flexibility vs. stability) and therefore a different programming language is warranted.

It seems like Go would be used where C++ or other high performance compiled languages are used today and Dart would be used for higher level front-end application servers as well as the client side, either directly or through a compiler which would turn it into JavaScript.

Would other browsers (Safari, Firefox, IE) consider adopting it? I’m unsure. Safari would likely have a lead as the memo states “Harmony will be implemented in V8 and JSC (Safari) simultaneously to avoid a WebKit compatibility gap”. Presumably IE and Firefox would be on their own to implement or adapt that work.

New languages rarely succeed in adoption. On the internet the barrier is even higher.

Version Numbers Still Matter

Google Doesn't Care About Web Developers

I ran into an interesting situation today, not unlike one I’ve encountered hundreds of times before, but this time with Google Chrome. One person was able to reproduce a bug on an internal tool with ease. Nobody else was able to. Eventually, upon getting the version number, it clicked. This particular computer had Chrome 10 installed.

For my younger readers, Chrome 10 is an “ancient” version from March 2011. This is back when Obama was still in office, the United States was in a recession, there was a debt problem in Europe, hipsters carried their iPads in man purses… These were crazy times.

For whatever reason this Chrome install, like a number out there, didn’t update. It could be security permissions; it could have been disabled for some reason. I really don’t know, or care terribly much. The reality is that not everyone can update on release day, regardless of opinions on the matter.

Go try to find Chrome 10 for Mac OS X on the internet. Try using a search engine like Google. Now try to find it for any platform. Good luck. It’s a pain. I can get a Phoenix 0.1 binary from September 2002 (it was my primary browser for part of fall 2002; I used it before Firefox was cool), but I couldn’t find Chrome 10 from way back in 2011. I was eventually able to track down a Chrome 10 binary, work around the problem, and move forward, but it took way more time than it should have.

This to me illustrates a few key points:

  • Version numbers still matter – They matter. Simple enough. Even in the rather sterile environment this was, I had to deal with an older browser. Older versions exist in larger quantities out on the wild web. Saying they don’t matter anymore is naive. Idealistic, but naive.
  • Make old platforms available – Just because you ship a new version doesn’t mean the old one has no relevance or need anymore. Google lost some serious credit in my mind for making it nearly impossible to get an “older” version of Chrome to test with. This shouldn’t be difficult. Google is said to have approximately 900,000 servers. Surely they can set up an archive with an explicit notice that it’s an archive and users should download the latest. Mozilla’s got fewer servers than that.

The web is a fluid platform. Browsers are evolving platforms. Versions will matter as long as two things need to interact: the web at large and the platform that is the browser. When version numbers no longer exist, it will likely be because the monoculture is so strong it doesn’t matter. Until then, knowing which browser and which version will matter. Browsers will likely never agree 100% on what to implement or on a timetable for implementation.
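Which is why even a crude version check earns its keep in debugging sessions like the one above. A minimal sketch (the regex and function name are mine) of pulling Chrome’s major version out of a UserAgent string:

```javascript
// Hypothetical sketch: extract the Chrome major version from a UA string.
// Returns null when the UA doesn't advertise Chrome at all.
function chromeMajorVersion(ua) {
  const m = /Chrome\/(\d+)/.exec(ua);
  return m ? parseInt(m[1], 10) : null;
}
```

Had the internal tool logged something like this alongside each bug report, the Chrome 10 machine would have stood out immediately.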

That image is a joke if you can’t tell. Google Chrome Developers are good people, they just need to put together an archive page for web developers.

Googlebot on Facebook?

I’ve got a few Facebook applications I’ve played around with developing that are not actually for use (read: they do nothing). I’ve noticed over the past few days their canvas URLs are seeing traffic in the form of one hit approximately every 24 hours. Previously they saw no traffic at all. At first I thought this was just Facebook with some new process to check for malicious apps, which sounds like a good idea. Then I did some digging and found something surprising:

The first thing I found was the hostname where the request originated was out-sw251.tfbnw.net which is obviously owned by Facebook. That’s not terribly interesting and supports my theory up above.

Then I found two curious bits in the request, the UserAgent header and the requesting IP:

USER-AGENT: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

The requesting IP resolves to crawl-66-249-67-211.googlebot.com. That UserAgent is very telling and needs no introduction.

The request is otherwise pretty unremarkable, other than lacking the query string a normal person would generate when hitting that canvas URL. However, fb_sig_request_method is set to GET, which suggests to me it’s actually using POST despite what it claims. There’s no fb_sig_user or anything else that would suggest an actual user, which makes sense because fb_sig_logged_out_facebook is set to 1.
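For anyone who wants to spot these hits in their own logs, here’s a minimal sketch (the function name is mine) that matches the advertised Googlebot UserAgent. Note this only tests what the request claims; the robust check is a reverse DNS lookup confirming the host is under googlebot.com, as with the hostname above.

```javascript
// Hypothetical sketch: does this UserAgent claim to be Googlebot?
// A string match is only a first pass; anyone can forge a UA header.
function claimsGooglebot(ua) {
  return /\bGooglebot\/\d+\.\d+/.test(ua);
}
```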

It appears that as of March 20, 2011 Google has started crawling Facebook apps. I’ve got no idea what its intent, abilities, or relationship is. I can tell you that I’ve been monitoring since at least April 2010 and this only started a few days ago.