Categories
Mozilla

Perception Of Performance

Google is relentless about associating Chrome with speed. It was their primary pitch when they first announced the browser. Back when Firefox went 1.0, the pitch wasn’t so much about speed as about “not sucking,” as all the geeks liked to say. Given that IE 6 was the competition, that was likely the best marketing on earth. Sure, Firefox was faster, but sucking fast wasn’t nearly as good as not sucking. “Not sucking” encompassed the missing features, broken rendering, crashing, and constant parade of security problems. It summarized the product surprisingly well for something that was never an official slogan.

Google has now launched Chrome for iOS. On the desktop, Chrome and Safari both use WebKit, but Chrome applies its own touches to make things faster; notably, it has its own JS engine, as does Safari. The JS engine is the secret sauce of performance. In the iOS world, however, Apple, ever the totalitarian dictator, decided that iOS will provide WebKit and JavaScript. If your app has any web browser functionality, it must use these APIs rather than implement its own engine. Verbatim:

2.17 Apps that browse the web must use the iOS WebKit framework and WebKit Javascript

Google Chrome for iOS, however, is Google integration bolted onto a reskinned Safari. It’s the same browser, just a new UI with some Google features worked in. It’s not a separate browser. It’s a UI.

That, however, doesn’t stop Google’s marketing machine (I’d argue Apple marketing’s top rival) from putting “fast” as the second word:

Browse fast with Chrome, now available on your iPhone, iPod touch and iPad. Sign in to sync your personalized Chrome experience from your computer, and bring it with you anywhere you go.

It goes on to clarify:

  • Search and navigate fast, directly from the same box. Choose from results that appear as you type.

So Google isn’t truly misleading. It’s just very strategic wording.

The truth of the matter, however, is that Google Chrome on iOS is substantially slower than Safari. Safari uses Nitro to accelerate JavaScript, which powers most of the complicated websites that will slow down a browser on any modern device. Apple, however, restricts Nitro to Safari and doesn’t let third-party apps like Google Chrome use it. This is still the case as of iOS 5, and I believe it remains so in iOS 6, though I haven’t personally verified that.

How much slower is Google Chrome on iOS compared to Safari? Well, here’s a SunSpider test I ran on my iPad 3:

Safari

============================================
RESULTS (means and 95% confidence intervals)
--------------------------------------------
Total: 1817.9ms +/- 0.2%
--------------------------------------------

3d: 214.7ms +/- 1.1%
cube: 72.3ms +/- 0.7%
morph: 57.9ms +/- 0.9%
raytrace: 84.5ms +/- 2.2%

access: 224.9ms +/- 0.6%
binary-trees: 44.4ms +/- 1.7%
fannkuch: 96.2ms +/- 0.6%
nbody: 56.0ms +/- 0.0%
nsieve: 28.3ms +/- 2.7%

bitops: 141.0ms +/- 0.4%
3bit-bits-in-byte: 23.4ms +/- 1.6%
bits-in-byte: 29.5ms +/- 1.3%
bitwise-and: 37.8ms +/- 1.5%
nsieve-bits: 50.3ms +/- 0.7%

controlflow: 15.7ms +/- 2.2%
recursive: 15.7ms +/- 2.2%

crypto: 123.3ms +/- 0.6%
aes: 70.5ms +/- 0.5%
md5: 29.4ms +/- 1.3%
sha1: 23.4ms +/- 1.6%

date: 274.4ms +/- 0.7%
format-tofte: 139.8ms +/- 1.1%
format-xparb: 134.6ms +/- 0.7%

math: 175.1ms +/- 0.3%
cordic: 61.5ms +/- 0.8%
partial-sums: 74.4ms +/- 0.7%
spectral-norm: 39.2ms +/- 0.8%

regexp: 70.8ms +/- 0.6%
dna: 70.8ms +/- 0.6%

string: 578.0ms +/- 0.5%
base64: 78.3ms +/- 1.9%
fasta: 68.1ms +/- 0.9%
tagcloud: 109.5ms +/- 1.2%
unpack-code: 207.5ms +/- 1.2%
validate-input: 114.6ms +/- 0.7%

Google Chrome

============================================
RESULTS (means and 95% confidence intervals)
--------------------------------------------
Total: 7221.0ms +/- 0.1%
--------------------------------------------

3d: 802.7ms +/- 0.2%
cube: 230.9ms +/- 0.6%
morph: 297.3ms +/- 0.5%
raytrace: 274.5ms +/- 0.1%

access: 1112.0ms +/- 0.2%
binary-trees: 98.4ms +/- 1.1%
fannkuch: 609.6ms +/- 0.2%
nbody: 247.9ms +/- 0.2%
nsieve: 156.1ms +/- 0.4%

bitops: 957.2ms +/- 0.2%
3bit-bits-in-byte: 210.4ms +/- 0.6%
bits-in-byte: 232.9ms +/- 0.2%
bitwise-and: 188.5ms +/- 0.4%
nsieve-bits: 325.4ms +/- 0.2%

controlflow: 129.5ms +/- 0.3%
recursive: 129.5ms +/- 0.3%

crypto: 493.3ms +/- 0.2%
aes: 214.3ms +/- 0.4%
md5: 140.2ms +/- 0.3%
sha1: 138.8ms +/- 0.5%

date: 381.1ms +/- 0.3%
format-tofte: 214.2ms +/- 0.2%
format-xparb: 166.9ms +/- 0.5%

math: 770.7ms +/- 0.2%
cordic: 316.6ms +/- 0.2%
partial-sums: 243.2ms +/- 0.3%
spectral-norm: 210.9ms +/- 0.4%

regexp: 1340.2ms +/- 0.2%
dna: 1340.2ms +/- 0.2%

string: 1234.3ms +/- 0.6%
base64: 175.7ms +/- 0.5%
fasta: 205.6ms +/- 0.2%
tagcloud: 284.0ms +/- 2.3%
unpack-code: 370.1ms +/- 0.9%
validate-input: 198.9ms +/- 0.6%

Quite a bit slower.
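To put the totals in perspective, here is a quick back-of-the-envelope calculation using the numbers from the runs above (lower is better in SunSpider):

```javascript
// SunSpider totals from the two runs above, in milliseconds.
const safariTotal = 1817.9;
const chromeTotal = 7221.0;

// Overall, Chrome for iOS is roughly 4x slower than Safari here.
const slowdown = chromeTotal / safariTotal;

// Some categories fare far worse, e.g. regexp (1340.2ms vs. 70.8ms).
const regexpSlowdown = 1340.2 / 70.8;

console.log(`overall: ${slowdown.toFixed(2)}x, regexp: ${regexpSlowdown.toFixed(2)}x`);
// → overall: 3.97x, regexp: 18.93x
```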

So really, if you’re using Chrome on iOS, it’s because you absolutely love the design and the integration with Google’s services, and you’re willing to trade considerable JavaScript performance for those perks.

That, however, doesn’t stop many people from thinking it’s fast. In just the past few minutes I was able to find these tweets among the thousands streaming across the web. I won’t mention or link to them directly (you could find them if you wanted to):

“Chrome for iOS is FAST, takes the mobile browsing experience to a new level.”

“I like it! It’s fast and can sync with Chrome desktop, which I use all of the time.”

“Liking #chrome on #iOS very slick, fast and clean looking”

“using chrome on my iphone right now.. cant believe how fast it is”

“That chrome for iOS is freaking fast but so basic. No tweet button, no add-on. Man I kinda disappointed. I give ’em 1 ‘fore the update”

“Chrome for iOS? Hell yes!! So fast! #chrome”

“Google Chrome for iOS is fast.”

“Holy hell Chrome is fast on the iPad.”

The most touted feature isn’t actually a feature. Technically, it isn’t even there. The numbers and the technology insist that it’s not (they prove Chrome is actually slower). But that’s what everyone is ranting and raving about. You could argue Google’s UI is faster, but I’d be highly skeptical that Google has found Cocoa tricks Apple’s engineers haven’t. Perhaps a UI transition or two makes it feel faster or more responsive, but even for that I can’t find any evidence.

All the hard work Google’s engineers did squeezing their services into a compact, simple-to-use UI is ignored in favor of this non-existent feature. And as a developer who can’t ignore such a thing, I will say they did a great job with the UI.

I present to you, the power of marketing!

Categories
Google Mozilla

On H.264 Revisited

Once again the debate over H.264 has come up in the Mozilla community. I’ve been a strong advocate of the WebM/VP8 codec given its liberal license and abilities and still am, but agree H.264 needs to be supported. It’s a requirement for mobile (B2G), and becoming necessary on the desktop.

A little over a year ago Chrome talked about dropping support for H.264. To date they have not done so, nor, as far as I know, given any indication that it’s even still in the plans. In 2010 Adobe said it would support WebM (link in that same blog post). They too have failed to live up to that promise. In either case I’ve found no indication on the internet that they ever plan to follow through.

I suspect that in Google’s case they were pressured by various providers and mobile partners who don’t want to encode or support yet another format. Google has been trying to woo anyone and everyone for the purposes of Google TV and presumably YouTube, so it’s likely just not worth it for them to push. There are various theories floating around about Adobe, including a lack of a clear Flash strategy in an HTML5 world. Adobe does, however, have a “tools” strategy. Perhaps time will tell.

Furthermore, Apple and Microsoft are fundamentally opposed to WebM, as both are licensors for H.264. The odds of them supporting something that hurts their bottom line, unless the rest of the web threatens to leave them behind, are nearly zero.

I question, however, whether it should be bundled versus using system codecs. Windows XP aside, system codecs mean that Microsoft and Apple are essentially responsible for making it work, and for the expense. Plugins could cover OSes that don’t ship with the appropriate codecs.

It’s time to put some effort into a JavaScript player for WebM and make it liberally licensed. Browsers aren’t quite there yet, but eventually the day will come when that’s workable. The web will then gain the ability to play video on any (modern) device, just not natively. That is the backdoor for an open codec.

The real issue is larger than the <video/> element. It’s software patents and their ability to undermine innovation and progress. It’s important to keep this in mind. Just look at mobile. It’s completely possible that the entire mobile industry could come to a halt over patent lawsuits and fear of lawsuits. All it takes is a company willing to press the button. Google spent $12.5 billion in what is essentially the patent equivalent of nuclear proliferation. That’s how real the threat is perceived. H.264 is arguably a fart in a hurricane.

Categories
Mozilla Security

Data Driven Lives

We do many things throughout the day. Most of the time we don’t give them much thought. Often they are repetitive tasks we do every day, our “routine” we call it. It may be that mid-day bathroom break, or that coffee break, or those n Google searches throughout the day. You might be able to name some of them and put a count to it, but stop and think for a second. For how many things do you actually know how many times you performed them? How much time was spent? How much energy and expense?

Companies collect this information, but strangely, individuals don’t. The companies we deal with often know more about us than we do. Google knows how many times you searched on a given day. It may (depending on your privacy settings) be able to recall every search you ever made, a feat I bet you can’t perform. Your credit card company knows how many times you bought coffee at a given store in a given year. You quite possibly have no idea.

Stephen Wolfram has been analyzing his life for years. Just tiny aspects of it. The data is stunning. It makes you wonder why we don’t have more products out there that give us access to and control of our own data. Everyone else has more access to it than we have.

Collusion is a Firefox extension that gives another little bit of insight: who knows where you’ve been online. Try installing it and running it for a week. It’s fascinating to see. But still, so much in the browser isn’t exposed to the user. Your search history knows what you searched for. Your browser history knows when you browse the web and where you go. There’s a mountain of data there; the authorities use it when a crime is committed for a reason. about:me is a great extension for getting a little more of this information out of Firefox. It’s a fascinating area where I hope we’ll see more people spend time. The great thing about these tools is that they are client-side and private. You don’t need to give your data away to someone else to learn about yourself.

However, we’re still in the infancy of personal analytics. There are very few products out there to tell us what we do all day. FitBit can tell you when you sleep, when you’re active, and how active you are, but not terribly much else about you. Your computer has a wealth of info but really doesn’t tell you much. To get even a little out of it you need to be fairly technically adept.

I propose it’s time to encourage people to start learning more about themselves. Data is amazing and can change our behavior for the better. Data is all around us yet somehow it eludes us. Big companies know things about us that even we don’t know. Perhaps it’s time to change that?

Categories
Mozilla Open Source

Why Open Source Is Pretty Awesome

At some point I think it’s easy to take things for granted. Being able to alter software to meet your needs is an awesome power.

Today, a tweet rehashed an annoyance regarding a tactic some websites use to alter copy/paste and put a link with tracking code in your clipboard. I could opt out, but that doesn’t help when websites roll their own. It’s a fairly simple thing to implement. In my mind there’s little (read: no) legitimate justification for the oncopy, oncut or onpaste events.
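For context, here is a rough sketch of what the trick looks like and how a user could neutralize it. The tracking URL is made up for illustration, and the helper function is mine, not any particular site’s actual code:

```javascript
// What a site's "copy" handler effectively does to your selection:
function hijackedClipboardText(selection) {
  // Appends a tracking link (URL is purely illustrative).
  return selection + '\n\nRead more: https://example.com/?src=copy-tracking';
}

// In a browser, the hijack and a user-side antidote look like this:
if (typeof document !== 'undefined') {
  // The page's hijacking listener rewrites the clipboard on copy.
  document.addEventListener('copy', function (e) {
    e.clipboardData.setData('text/plain',
      hijackedClipboardText(String(document.getSelection())));
    e.preventDefault(); // suppress the normal clipboard write
  });

  // A userscript-style antidote: a capture-phase listener on document
  // runs before the page's bubble-phase handler and cuts it off.
  document.addEventListener('copy', function (e) {
    e.stopImmediatePropagation();
  }, true);
}
```

A preference that kills the event inside the browser itself is the more thorough fix, of course, since pages can always re-register listeners.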

So I did an hg pull while working on some other stuff. I came back and wrote a quick patch, started compiling and went back to working on other stuff.

Then came back to a shiny new Firefox build with a shiny new preference that disabled the offending functionality. A quick test against a few websites shows it works as I intended by simply killing that event. You can’t do these things with closed source.

Of course I found the relevant bug and added a patch for anyone interested.

A 15 minute diversion and my web browsing experience got a little better. Sometimes I forget I’ve got experience on that side of the wire too 😉 .

Categories
Mozilla

How To Fix Broken about:home Search In Firefox

Not that I recommend it (well, actually I have, and do for “advanced” users; I will update that post at some point), but occasionally cleaning out your Firefox profile can be a good thing. Every so often I clean the cruft out of mine. There’s a little quirk, however: the new-ish browser start page won’t perform a search when localStorage has been cleaned out. It manifests by simply doing nothing when you try to search; the form goes nowhere. If you look for errors in the console you’ll see:

"gSearchEngine is null"

The best solution I’ve found is to go into about:config, reset (right click -> reset) these preferences, and restart:

browser.startup.homepage_override.buildID
browser.startup.homepage_override.mstone

I suspect it’s just buildID that matters, but resetting both should be harmless. Restart, and they will be recreated.

Categories
Google Networking

Google Wants To Make TCP Faster

Google has been pushing SPDY for a while now, and so far I haven’t seen a good argument against it. Firefox 11 will ship with SPDY support, though disabled by default until the bugs are worked out. Now Google is turning its eyes toward TCP. Very logical.

While there are a variety of proposals floating around to speed up TCP, I wonder if Google would be better off just buying FastSoft for Fast TCP and pulling a VP8-style opening up. Since Fast TCP is already in use on the web, Google could capitalize on it overnight. There are several TCP congestion-control algorithms out there, but Fast TCP seems to have the most established customer base, including the CDN Limelight, which uses it for uploads to its network.

Categories
Google Mozilla

Version Numbers Still Matter

Google Doesn't Care About Web Developers

I ran into an interesting situation today, not unlike ones I’ve encountered hundreds of times before, but this time with Google Chrome. One person was able to reproduce a bug in an internal tool with ease. Nobody else could. Eventually, upon getting the version number, it clicked: this particular computer had Chrome 10 installed.

For my younger readers, Chrome 10 is an “ancient” version from March 2011. This is back when Obama was still in office, the United States was in a recession, there was a debt problem in Europe, hipsters carried their iPads in man purses… These were crazy times.

For whatever reason this Chrome install, like a number of them out there, didn’t update. It could be security permissions; updates could have been disabled for some reason. I don’t really know, or care terribly much. The reality is that not everyone can update on release day, regardless of opinions on the matter.

Go try to find Chrome 10 for Mac OS X on the internet. Try using a search engine like Google. Now try to find it for any platform. Good luck; it’s a pain. I can get a Phoenix 0.1 binary from September 2002 (it was my primary browser for part of fall 2002; I used it before Firefox was cool), but I couldn’t find Chrome 10 from way back in 2011. I was eventually able to track down a Chrome 10 binary, work around the problem, and move forward, but it took far more time than it should have.

This to me illustrates a few key points:

  • Version numbers still matter – They matter. Simple enough. Even in the rather sterile environment this was, I had to deal with an older browser. They exist in larger quantities out on the wild web. Saying they don’t matter anymore is naive. Idealistic, but naive.
  • Make old platforms available – Just because you ship a new version doesn’t mean the old one has no relevance or need anymore. Google lost some serious credit in my mind for making it nearly impossible to get an “older” version of Chrome to test with. This shouldn’t be difficult. Google is said to have approximately 900,000 servers; surely they can set up an archive with an explicit notice that it’s an archive and users should download the latest. Mozilla gets by with far fewer.

The web is a fluid platform. Browsers are evolving platforms. Versions will matter as long as the two sides, the web at large and the platform that is the browser, need to interact. When version numbers no longer exist, it will likely be because the monoculture is so strong they don’t matter. Until then, knowing which browser and which version will matter. Browsers will likely never agree 100% on what to implement or on a timetable for implementation.

That image is a joke, if you can’t tell. The Google Chrome developers are good people; they just need to put together an archive page for web developers.

Categories
Mozilla

On Firefox Versioning

Writing software is actually quite easy. Writing good software is relatively harder, but still easy. Writing software, to a programmer, is like painting to a painter. Shipping software, though, is an incredibly complicated task. It’s like getting a stadium full of babies to all have clean diapers at the same time with only one or two people to do the work: as soon as you fix one thing, you discover more crap, the process stinks, and you’ll never reach the end. Those who do it, whether by printing a CD, uploading a binary, or pushing changes out to a tier of web servers, know what I’m talking about.

It’s easy to write code to do things. It’s harder to build a product. It’s harder still to actually draw a line in the sand and decide when you’re “done”. The truth is all software ships with bugs. Anyone who tells you otherwise is an idiot. They almost certainly haven’t all been discovered (very likely some will be later), but they absolutely exist. The general consensus is that you want no glaring bugs and no big bugs in common use cases. Obscure use cases will always be buggier. That’s the nature of the beast.

Knowing this, it’s easy to understand that changing release cycles will be an arduous process with lots of details to think about. Not everything is quantitative or can be reduced to a math equation. How long is it worth waiting for a feature? Is the shiny button worth 3 days? 3 weeks? 3 months? Indefinite hold? Will it even work as we think? What bugs will it introduce? How long to deal with those? Not an easy decision. Even harder to reach a consensus on. The only thing certain is the lack of a decision will guarantee a failure to launch.

The Firefox Version Problem

Firefox is now on a 6 week release cycle. This means features get out the door soon after they are fully baked. That’s a very good thing: adoption of modern technologies and the latest in security happens quickly. We all benefit from that.

The downside however is that upgrades are disruptive. They can break compatibility, and they require extensive testing in large deployments (big companies, educational institutions). That can be expensive and time consuming if you’re impacted.

The other side of this is that version numbers get blurred. 4.0, 5.0, 6.0… “WTF is the difference,” most users would think, given it looks largely the same. But is it really more like 4.0.1, 4.0.2, 4.0.3? As a web developer, which versions are you supporting? This is now much more complicated (don’t even get me started on testing).

Stable vs. Slipstream

My modest proposal is a Stable/Slipstream (I prefer “slipstream” vs. “bleeding edge”) model. For example:

Firefox 7.0 ships in 6 weeks, on September 27 as of this blog post. From then on, every 6 weeks a new release ships and becomes 7.1, 7.2, 7.3, etc. For users, it’s just an auto-update every so often. These intermediate releases are disposable: users on the slipstream update rapidly, and a matter of weeks after a release the previous one is unsupported. Previous releases are just a rumor, recognizable only as deja vu and dismissed just as quickly1. Users are oblivious to the concept of “versions” for the most part. After several release cycles (9-12 months), this becomes “stable” at 7.x. The next day 8.x starts and the process begins again.

From then on (I’d propose 12 months) only security fixes are provided for 7.x. Large deployments that need to do extensive QA adopt the stable branch once a year on a predictable schedule and stick to it. The vast majority of the internet adopts the slipstream (the default) and gets the latest release every 6 weeks. The stable branch is only around for a limited period before it moves to the next version. That last release cycle may be a bit more modest and lower-risk than the previous ones.

The end result is that nobody cares about a release older than 12 months. Generally speaking, only two releases matter. Slipstreamed users update rapidly (and will likely update even more rapidly as the process improves). Stable users have 12 months to hop to the next lily pad. This goes for IT, web developers, add-on developers, and browser developers.

In the long term (the next few years), I think web applications will become more agile and less rigid. Part of what things like HTML5 provide is a more standardized and less hacky way of doing things. That means fewer compatibility issues with untested browsers. As those older applications are phased out, the test cycles for large deployments will shrink. Ideally some will eventually just migrate away from “stable”.

Version Numbers

Yes, version numbers still exist, but for most users they don’t mean terribly much unless they have a problem or need to verify compatibility with something, in which case the major release number is likely the important one. They are still a necessary evil, and users do need to know how to find the number, even if they don’t need to know it offhand. The browser version is pretty much the first step of any diagnostics for a web application, as it’s the ultimate variable.
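Since the version is the ultimate variable, that first diagnostic step can be sketched in a few lines of JavaScript. The user-agent strings and the regex here are illustrative only; production UA parsing has many more cases:

```javascript
// Pull the major version out of a user-agent string, or null if unknown.
// Handles only Firefox and Chrome; real-world sniffing is far messier.
function majorVersion(userAgent) {
  const match = userAgent.match(/(?:Firefox|Chrome)\/(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

console.log(majorVersion(
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.16 ' +
  '(KHTML, like Gecko) Chrome/10.0.648.204 Safari/534.16')); // 10
console.log(majorVersion(
  'Mozilla/5.0 (X11; Linux x86_64; rv:7.0) Gecko/20100101 Firefox/7.0')); // 7
```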

Just my thoughts on the last several weeks of debate.

1. Men In Black (1997)

Categories
In The News Mozilla

Mork And Casey Anthony

Jamie Zawinski linked to a very interesting blog post about the forensics problem in the recent Casey Anthony trial. To summarize, she was using an older version of Firefox, which stores its history in a Mork DB. For those not familiar with Mozilla internals, Mork is (I’m quoting JWZ here):

“…the single most braindamaged file format that I have ever seen in my nineteen year career.”

That bug was actually one of two brushes I had with Mork: that time learning about it, and another shortly afterwards when I learned first-hand how impossible it really is to work with, as part of a hack I was trying to build and later abandoned. Perhaps it was just my inexperience at the time, or perhaps it really was Mork.

Categories
Mozilla

Firefox 4


Firefox 4 is out! If you for some reason don’t know why you want it, here are a few things you’ll love about Firefox 4.0.

Congrats to everyone involved in shipping.