HP Produces More Of A Discontinued Product

John Gruber questions the point of HP’s decision to do a final run of TouchPad manufacturing. I’ll propose a likely theory:

iSuppli says the Bill of Materials for an iPad 2 (32 GB GSM) is $336.60 when you add in manufacturing. That same iPad retails for $729.00. This is common sense: there’s R&D, marketing, shipping, and of course profit. Keep this in mind. The retail price isn’t the break-even point; it includes profit. HP is selling its TouchPads at $99 and $149, I believe, for 16 GB and 32 GB respectively. A loss, but not as substantial a loss as comparing against retail pricing would lead you to believe.
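To make that point concrete, here’s a back-of-the-envelope sketch in Python. The iPad 2 numbers come from the iSuppli estimate above; the TouchPad unit cost and original retail price are assumptions for illustration only.

```python
# The loss per unit looks very different measured against retail
# price than against actual unit cost. TouchPad cost/retail figures
# below are hypothetical, for illustration only.
ipad2_bom = 336.60        # iSuppli estimate, 32 GB GSM iPad 2 (incl. manufacturing)
ipad2_retail = 729.00

touchpad_cost_32gb = 318.00   # hypothetical unit cost
touchpad_retail = 599.99      # assumed original 32 GB list price
fire_sale = 149.00

loss_vs_retail = touchpad_retail - fire_sale       # what the headlines imply
loss_vs_cost = touchpad_cost_32gb - fire_sale      # closer to the real hit

print(f"Apparent loss vs. retail:  ${loss_vs_retail:.2f}")
print(f"Actual loss vs. unit cost: ${loss_vs_cost:.2f}")
```

Under these assumptions the per-unit loss is roughly a third of what the retail comparison suggests, and the parts were largely paid for either way.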

Secondly, it’s important to keep in mind that costs aren’t incurred as products are produced. Supply chains often require commitments. HP likely spent considerable funds securing parts for the TouchPad, and more money tooling the factory. That money is already spent. Contracts were signed (they might be able to get out by paying a penalty and accepting some ill will from vendors they may need in the future). These are costs that exist regardless of the decision. It’s like selling tickets to a sports event you can’t attend at a loss: better than being stuck with tickets you can’t use and being out 100% of the cost.

I suspect the primary purposes of this last production batch are as follows:

  • HP already incurred the majority of the cost in R&D, parts, etc. Using up the inventory on hand is a way to recoup some of those funds, versus selling parts back to vendors or finding other interested parties. Given it’s a mobile device, parts may even have been custom fabricated to meet the specs and confined space.
  • HP wants to preserve its relationship with its supply chain.
  • HP isn’t giving up on tablets; it’s giving up on WebOS tablets. Might as well get some tablets out there and find out how the hardware does in the wild, so a v2 with new software can learn from v1. Again, most of the costs were already incurred.
  • HP isn’t (officially) giving up on WebOS, just on WebOS tablets. Until they figure out what to do with it (license it to someone, use it on other products, or spin it off), they might as well keep the ecosystem alive so it retains some value. HP invested a lot of money in it and has almost 600 employees on it. Losing a little more cash on hardware to keep demand in the ecosystem up for a few months may not be a bad investment.

Overall, it seems surprisingly logical to produce another batch. Canceling the product so quickly costs HP a lot of money; they are taking the loss regardless. Might as well try to reap some rewards and recoup some cash from it.

Improving DNS CDN Performance With edns-client-subnet

Several months ago I wrote about how third-party DNS services often slow you down, since a DNS query is only one part of the equation and many websites use DNS to help their CDN figure out which servers are closest (and fastest). A few proposals to fix this have floated around; one is finally making headway.

Google, Bitgravity, CDNetworks, DNS.com and Edgecast have deployed support for edns-client-subnet. The idea is pretty simple: it passes part of your IP address (only part, so as to keep it semi-anonymous) in the request. A server that supports this extension can use it to geotarget and find the CDN node closest to you. Previously the best that could be done was to use the location of the DNS server, which in many cases could be far away.
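The option itself is a small binary blob attached to a query’s EDNS0 OPT record. Here’s a sketch of the IPv4 encoding per the edns-client-subnet specification; `ecs_option` is my own helper name, and the 24-bit default prefix is just an example of the partial-address idea.

```python
import socket
import struct

def ecs_option(client_ip: str, source_prefix: int = 24) -> bytes:
    """Build an edns-client-subnet option (wire format) for an IPv4 address.

    Only the first `source_prefix` bits of the address are sent, which is
    how the extension keeps the client semi-anonymous.
    """
    OPTION_CODE_ECS = 8   # IANA-assigned option code for client subnet
    FAMILY_IPV4 = 1       # address family per the spec
    addr = socket.inet_aton(client_ip)
    # only send enough whole bytes to cover the prefix
    nbytes = (source_prefix + 7) // 8
    truncated = bytearray(addr[:nbytes])
    # zero any trailing bits beyond the prefix length
    if source_prefix % 8:
        truncated[-1] &= (0xFF << (8 - source_prefix % 8)) & 0xFF
    # FAMILY (2 bytes), SOURCE PREFIX-LEN, SCOPE PREFIX-LEN, then address
    data = struct.pack("!HBB", FAMILY_IPV4, source_prefix, 0) + bytes(truncated)
    # OPTION-CODE and OPTION-LENGTH wrap the payload
    return struct.pack("!HH", OPTION_CODE_ECS, len(data)) + data
```

With a /24 prefix, a client at 203.0.113.45 only reveals 203.0.113.0/24 to the authoritative server, which is plenty for picking a nearby CDN node.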

Still missing is support from some heavyweights: Akamai (the largest CDN), Limelight Networks, and Level 3. This is a pretty solid proposal with minimal negative implications. Only part of the origin IP address is passed, so it isn’t a privacy invasion. In theory any website you browse could already harvest the IP you are using; this just makes part of it accessible to a partner who is already serving data on the site’s behalf.

The Great East Coast Earthquake

8/23/2011 - Never Forget

I didn’t get a chance to post earlier about the earthquake. It was just a tiny earthquake lasting 15-20 seconds, but still extremely rare for this part of the world. Being 17 floors up, the building definitely rocked a little bit. Hard to miss, but it wasn’t violent or anything like that. I was on a conference call, which continued through it without incident.

My first thought was construction in the building or across the street, but about halfway through I realized that without noise that wasn’t possible. My next thought was the fault lines near NYC, which I’ve heard about a few times before. I knew seismic activity is not unusual for NYC, but activity strong enough to feel is very unusual. I suspect most people never knew about those faults, but I like science ;-) . Oddly enough, I wasn’t 100% wrong.

An immediate search of Twitter turned up reports of vibration in the city. A few seconds later, it turned up reports of the same thing in Philadelphia, 94 miles from NYC (which confirmed seismic activity in my head). Meanwhile my inbox had a bunch of reports about Pentagon evacuations and other happenings. A perk of working with a large news organization is being fed news 24×7 (it’s also a bad thing). Confirmation of an earthquake came what seemed like seconds after that.

Altogether it took just a minute or two to find out the full story. I actually had it well before building management did. They didn’t even know what had happened, and I already knew it was about a 5.8. Amazing if you really think about it. Back in 2003, during the blackout, it took considerably longer to get substantially less information. Granted, having electricity helped; cell phone networks were still largely unusable for a short time after the quake.

The jokes going around the net were quite amusing.

Steve Jobs Steps Down As CEO

As released by Apple:

To the Apple Board of Directors and the Apple Community:

I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple’s CEO, I would be the first to let you know. Unfortunately, that day has come.

I hereby resign as CEO of Apple. I would like to serve, if the Board sees fit, as Chairman of the Board, director and Apple employee.

As far as my successor goes, I strongly recommend that we execute our succession plan and name Tim Cook as CEO of Apple.

I believe Apple’s brightest and most innovative days are ahead of it. And I look forward to watching and contributing to its success in a new role.

I have made some of the best friends of my life at Apple, and I thank you all for the many years of being able to work alongside you.

Steve

A few things strike me here:

First of all, the letter is addressed to the “Apple Board of Directors and the Apple Community” (emphasis mine), which as far as I know is unprecedented for Steve Jobs and really for Apple. Apple has never really acknowledged the community around it. In past “letters” (for example, Thoughts on Flash), Steve Jobs just starts. It’s like an actor who acknowledges his audience only when he comes out to take a bow, careful never to break the fourth wall.

Second, I sadly suspect this position of “Chairman of the Board, director and Apple employee” is largely symbolic. From what’s known about Steve Jobs, he practically lived for this job. Stepping down is a major concession for someone so obsessive about a vision and so passionate about achieving it with perfection. That said, he seemed pretty strong a few weeks ago at the Cupertino City Council, so I don’t mean to suggest he’s on his deathbed; he’s just unlikely to regain enough health to keep a CEO’s schedule. Several changes in 10.7 Lion, like the odd designs for Calendar and Address Book, make me think he didn’t have much say in its design either.

Third, this succession plan is hardly shocking. Tim Cook was groomed for this for quite some time, and I suspect it was known to a select few for a while now. Jonathan Ive was long suggested as a replacement, but that seemed unlikely given that he’s already in charge of industrial design and has no experience with the other half of the role (the business side). He’s also notably reclusive and more subtle in presentations, in contrast to Steve’s “reality distortion field” persona on stage. By elevating Cook and leaving Jonathan Ive to focus on design, Apple gets the best of both worlds.

Lastly, I think Colin Barrett’s tweet put my personal perspective on this best:

I was 11 when Steve came back, and I’m 25 now. Can’t overstate the enormous impact Steve and Apple had on me growing up. Good luck, dude.

- @cbarrett

Indeed. Good luck Steve Jobs.

Age In Media Years…

Here’s a great little tidbit to make you feel old(er):

In 4 years, the Back to the Future movie will be as old to us as 1955 was to us when Back to the Future came out.
- @joedrew

And to top it off:

The Wonder Years aired 88-93, was set 68-73. A modern Wonder Years would be set in 1991.
- @joedrew

I’d personally love to see a modern Wonder Years set in 1991. Queen’s These Are the Days of Our Lives would be the most appropriate intro given the year, but I’d prefer Milli Vanilli or MC Hammer. I’d also nominate Gilbert Gottfried for the narration. Steve Martin if Gilbert turns it down. The Gulf War and Somalia Conflict would obviously replace Vietnam. I see this being very workable.

On Firefox Versioning

Writing software is actually quite easy. Writing good software is relatively harder, but still easy. Writing software to a programmer is like painting to a painter. Shipping software is an incredibly complicated task. It’s like getting a stadium full of babies to all have clean diapers at the same time with only one or two people to do the work. As soon as you fix one thing, you discover more crap. The process stinks and you’ll never reach the end. Those who do it either by printing a CD, uploading a binary, or pushing out changes to a tier of web servers know what I’m talking about.

It’s easy to write code to do things. It’s harder to build a product. It’s harder still to actually draw a line in the sand and decide when you’re “done”. The truth is, all software ships with bugs; anyone who tells you otherwise is an idiot. The bugs almost certainly aren’t all discovered (some likely will be), but they absolutely exist. The general consensus is that you want no glaring bugs and no big bugs in common use cases. Obscure use cases will always be buggier. That’s the nature of the beast.

Knowing this, it’s easy to understand that changing release cycles is an arduous process with lots of details to think about. Not everything is quantitative or can be reduced to a math equation. How long is it worth waiting for a feature? Is the shiny button worth 3 days? 3 weeks? 3 months? An indefinite hold? Will it even work as we think? What bugs will it introduce? How long will it take to deal with those? Not an easy decision. Even harder to reach consensus on. The only thing certain is that the lack of a decision guarantees a failure to launch.

The Firefox Version Problem

Firefox is now on a 6-week release cycle. This means features get out the door soon after they are fully baked. That’s a very good thing: adoption of modern technologies and the latest security improvements happens quickly. We all benefit from that.

The downside, however, is that upgrades are disruptive. They can break compatibility, and they require extensive testing in large deployments (big companies, educational institutions). That can be expensive and time-consuming if you’re impacted.

The other side of this is that version numbers get blurred. 4.0, 5.0, 6.0… “WTF is the difference?” most users would think, given it looks largely the same. But is it really more like 4.0.1, 4.0.2, 4.0.3? As a web developer, which versions are you supporting? This is now much more complicated (don’t even get me started on testing).

Stable vs. Slipstream

My modest proposal is a Stable/Slipstream (I prefer “slipstream” vs. “bleeding edge”) model. For example:

Firefox 7.0 ships in 6 weeks, on September 27 as of this blog post. From then on, every 6 weeks a new release ships and becomes 7.1, 7.2, 7.3, etc. For users, it’s just auto-updates every so often. These intermediate releases are disposable; users on the slipstream update rapidly, and a matter of weeks after a release, the previous one is unsupported. Previous releases are just a rumor, recognizable only as deja vu and dismissed just as quickly1. Users are oblivious to the concept of “versions” for the most part. After several release cycles (9-12 months), this becomes “stable” at 7.x. The next day 8.x starts and the process repeats.

From then on (I’d propose 12 months), only security fixes are provided to 7.x. Large deployments that need to do extensive QA adopt the stable branch once a year on a predictable schedule and stick to it. The vast majority of the internet adopts the slipstream (the default) and gets the latest release every 6 weeks. The stable branch is only around for a limited time before it moves to the next version. That last release cycle may be a bit more modest and lower-risk than the previous ones.
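The proposed cadence is easy to sketch. A minimal Python example, assuming the September 27 ship date above and an 8-cycle run before the branch freezes (the cycle count is my assumption, not part of the proposal):

```python
from datetime import date, timedelta

CYCLE = timedelta(weeks=6)
first_release = date(2011, 9, 27)   # Firefox 7.0 ship date from this post

# Under the proposed scheme, 7.x gets a new slipstream release every
# 6 weeks; after the final cycle the branch freezes as "stable" and 8.x
# begins, with only security fixes for the next 12 months.
schedule = [(f"7.{i}", first_release + i * CYCLE) for i in range(8)]
for version, ship_date in schedule:
    print(version, ship_date.isoformat())
```

Eight cycles lands the stable freeze roughly 10-11 months after 7.0, squarely in the 9-12 month window proposed above.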

The end result is that nobody cares about a release older than 12 months. Generally speaking, only 2 releases matter. Slipstreamed users update rapidly (and will likely update even more rapidly as the process improves). Stable users have 12 months to hop to the next lily pad. This goes for IT, web developers, add-on developers, and browser developers.

In the long term (the next few years), I think web applications will become more agile and less rigid. Part of what things like HTML5 provide is a more standardized, less hacky way of doing things. That means fewer compatibility issues with untested browsers. As older applications are phased out, the test cycles for large deployments will shrink. Ideally some will eventually just migrate away from “stable”.

Version Numbers

Yes, version numbers still exist, but for most users they don’t mean much unless they have a problem or need to verify compatibility with something, in which case the major release number is likely the important one. They are still a necessary evil, and users do need to know how to find the number even if they don’t need to know it offhand. The browser version is pretty much the first step of any diagnostics for a web application, as it’s the ultimate variable.

Just my thoughts on the last several weeks of debate.

1. Men in Black (1997)