On Gatekeeper

Gatekeeper is without question a bold move to prevent malware from impacting Mac OS X, but it will likely turn into a legal and ethical mess. Before I explain why, I’ll give a very high-level overview. There are three options:

  • Mac App Store – Only run applications from the Mac App Store.
  • Mac App Store and identified developers – Only run applications from the Mac App Store and developers who sign up with Apple to get a key.
  • Anywhere – This is how every Mac and PC today operates out of the box.

The default in Mountain Lion is Mac App Store and identified developers. As Macworld’s Jason Snell explains:

Apple says, if a particular developer is discovered to be distributing malware, Apple has the ability to revoke that developer’s license and add it to a blacklist. Mountain Lion checks once a day to see if there’s been an update to the blacklist. If a developer is on the blacklist, Mountain Lion won’t allow apps signed by that developer to run.

It’s worth noting that, from what I’ve read, the check is only performed on first run, at least today. However, it’s not impossible for Apple to later check an application against the blacklist on every run. That could even happen before the feature ships this summer.

What’s concerning is that Apple will now essentially be the gatekeeper (get it?) and thus pressured to control what users can or can’t install on their computers. Let’s be honest: most developers will never get their users to open System Preferences and change this setting, so getting “identified” is essentially required to develop for Mac OS X if you want more than geeks to use your software.

Apple has been pressured in the past to remove apps from the iOS App Store. It’s likely (read: guaranteed) to be pressured to blacklist developers who write controversial apps. Anything that could be used for piracy could be targeted, from a BitTorrent client to VLC, which uses libdvdcss (to my knowledge the library has never been legally challenged, but pressuring Apple is a way around the court system). Apple has a bit of a history of banning apps for all sorts of reasons, including being negative towards Apple.

How would Apple deal with pressure from patent claims? What about a desktop client for WikiLeaks, like the one that was pulled from the App Store? What about a game distributed by Planned Parenthood or some other organization that tends to draw controversy? There are also international issues here (Nazi imagery and Germany, privacy violations and the EU). What about more indirect things like Firefox, which can run third-party code via plugins and add-ons? Mozilla refused to kill MafiaaFire. Could the Feds have gone to Apple instead?

Technically these are all hypothetical situations, since the feature hasn’t even launched and Apple hasn’t given any clear policies. That, in my opinion, is the big problem. As far as I know, Apple hasn’t published any guidelines on what would put a developer on the blacklist. Is there even an appeals process?

I’m pretty sure we’ll learn more over the coming weeks. The cool guys over at Panic are pretty optimistic about the feature, so I guess we’ll see.

On Prefixing And Monobrowser Culture

I’ll say right off the bat that Daniel Glazman is right, and I fully support his message. The failure to alter the course of the web now will lead to headaches. Truthfully, it’s already a headache; it’s just going to get worse. The IE days were the dark ages of web development. I don’t want to go back to that.

In an ideal world, CSS prefixing wouldn’t be necessary. Browser vendors would spec things out, agree on a standard, and implement it. That, however, is too rational, so CSS prefixing is an unfortunate reality. By the admission of Microsoft and Apple (pointed out by bz), it outright won’t happen:

tantek (Mozilla): I think if you’re working on open standards, you should propose
your features before you implement them and discuss that here.
smfr (Apple): We can’t do that.
sylvaing (Microsoft): We can’t do that either.

Of course you can question whether there’s really a legitimate need to work on standards in private. I’m personally skeptical that a CSS property will leak the next iPhone.

It’s also worth noting that Apple and Microsoft are both OS vendors and (cynically speaking) have interests that are explicitly contrary to the internet being a universal platform. Fragmenting the web and making it a more difficult platform to develop on is potentially in their interest. Not too different from their stance on H.264, of which they are both licensors, and which is why neither has implemented WebM.

I’m starting to second-guess the permanence of prefixes. Personally, I think that once there’s a standard, the first browser release 12 months after standardization should drop support for the prefix. Yes, this will break a few websites that never update. However, it’s almost always an easy fix; I’d venture that 95%+ of the time it could be done safely via a regex, as in the sketch below. In practice you’re talking about 18-24 months from initial implementation anyway, possibly longer. A website so stale it can’t manage to deal with this in 1.5-2 years is in pretty poor shape to begin with. LESS and Sass can also be a big help in automating this, and the W3C CSS Validator already errors on prefixes. The tools to deal with this are in place today.
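
Since I’m claiming a regex covers the vast majority of cases, here’s a minimal sketch of what I mean (a hypothetical Node script; it assumes the unprefixed property is already present and behaves identically, which is exactly the border-radius situation below):

// strip-prefixes.js: a minimal sketch, not a production tool.
// Assumes the stylesheet already contains the unprefixed property,
// so the prefixed declarations are safe to delete outright.
var fs = require('fs');

var file = process.argv[2];
var css = fs.readFileSync(file, 'utf8');

// Matches whole declarations like "-moz-border-radius: 3px;"
var prefixed = /^[ \t]*-(?:moz|webkit|khtml|ms|o)-[a-z-]+[ \t]*:[^;]*;[ \t]*\r?\n/gm;

fs.writeFileSync(file, css.replace(prefixed, ''));

Run it as “node strip-prefixes.js style.css”. Anything fancier (a prefixed property whose unprefixed syntax changed) needs a real CSS parser, but that’s the 5% case.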

I should note that dropping prefixes is unlikely to actually happen, and is thus wishful thinking.

A large part of this issue is how many websites are built these days, especially “mobile sites”, which are typically separate sites bolted onto an API or even the backend database of an existing website. They’re often built by third-party vendors, for whom getting things passable and out the door is key. As a result every shortcut in the book is taken, including the absolute minimum in testing and compatibility.

For what it’s worth, this blog has only one prefix in use, and it’s coded as:

-moz-border-radius: 3px;
-khtml-border-radius: 3px;
-webkit-border-radius: 3px;
border-radius: 3px;

Which catches everyone, and takes all of 30 seconds at most to do.

Why Open Source Is Pretty Awesome

At some point I think it’s easy to take things for granted. Being able to alter software to meet your needs is an awesome power.

Today, a tweet rehashed an annoyance: the tactic some websites use to alter copy/paste and put a link with tracking code in your clipboard. I could opt out, but that doesn’t fix the sites that roll their own. It’s a fairly simple thing to implement. In my mind there’s little (read: no) legitimate justification for the oncopy, oncut, or onpaste events.
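
For context, the tactic takes only a few lines. A simplified sketch of how these scripts typically work (illustrative, not any particular vendor’s code; older variants append hidden markup to the selection instead of using clipboardData):

// A simplified sketch of the copy-hijacking tactic: when the user
// copies, replace the clipboard contents with the selection plus
// a tracking link.
document.addEventListener('copy', function (e) {
  var text = window.getSelection().toString();
  e.clipboardData.setData('text/plain',
      text + '\n\nRead more: http://example.com/article?src=copy');
  e.preventDefault(); // suppress the normal copy so our text wins
}, false);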

So I did an hg pull while working on some other stuff. I came back and wrote a quick patch, started compiling and went back to working on other stuff.

Then I came back to a shiny new Firefox build with a shiny new preference that disables the offending functionality. A quick test against a few websites shows it works as I intended, simply killing those events. You can’t do these things with closed source.
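
For anyone who can’t run a custom build, the same idea can be approximated from a user script (a rough sketch; a preference baked into the browser is obviously more robust):

// Swallow clipboard events in the capture phase so the page's own
// oncopy/oncut/onpaste handlers never see them. We don't call
// preventDefault(), so copying and pasting themselves still work.
['copy', 'cut', 'paste'].forEach(function (type) {
  document.addEventListener(type, function (e) {
    e.stopImmediatePropagation(); // halts dispatch to page listeners
  }, true); // capture phase fires before the page's handlers
});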

Of course I found the relevant bug and added a patch for anyone interested.

A 15 minute diversion and my web browsing experience got a little better. Sometimes I forget I’ve got experience on that side of the wire too ;-) .

How To Fix Broken about:home Search In Firefox

Not that I recommend it (well, actually I have, and do for “advanced” users; I will update that at some point), but occasionally cleaning out your Firefox profile can be a good thing. Every so often I clean the cruft out of mine. Here’s a little quirk, however: the new-ish browser start page won’t perform a search when localStorage is cleaned out. It manifests by simply doing nothing when you try to search; the form goes nowhere. If you look for errors in the console you’ll see:

"gSearchEngine is null"

The best solution I’ve found is to go into about:config, reset (right click -> reset) these preferences, and restart:

browser.startup.homepage_override.buildID
browser.startup.homepage_override.mstone

I suspect it’s just buildID that matters, but resetting either shouldn’t be harmful. Restart and they will be recreated.
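
If you’d rather script it, the same reset can be done from a chrome-privileged context such as the Error Console (a sketch; the right-click route above is the safer bet):

// Equivalent of right-click -> reset on both prefs; run with
// chrome privileges, then restart Firefox so they're recreated.
Components.utils.import("resource://gre/modules/Services.jsm");
Services.prefs.clearUserPref("browser.startup.homepage_override.buildID");
Services.prefs.clearUserPref("browser.startup.homepage_override.mstone");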

Privacy Issues Behind localStorage

Browsers need to overhaul their privacy settings to account for things like localStorage and bring control back to the user. In the days of cookies it was relatively simple for a user to wipe any identifiers (excluding IP address) from their browser: simply clear cookies.

Firefox has two basic abilities: you can clear all cookies, or you can browse and delete individual cookies. That’s great, but it’s not terribly clear that there’s more than cookies.

Firefox Cookie Privacy

Chrome, as far as I know, has no cookie browser like Firefox’s (edit: Erunno notes in the comments that you can browse them via chrome://settings/cookies), but it explicitly lets you “Delete cookies and other site and plug-in data”. That’s pretty good.

Chrome Cookie Privacy

Today, I think Safari’s UI is the closest to perfect. Each hostname shows exactly what kinds of data it has stored. My only gripe is that Safari doesn’t let you see the actual contents. That’s a “power-user” feature, however, and I think it does an adequate job regardless.

Safari Cookie Privacy

Websites use more than just cookies these days; I discussed this a little over a year ago. The reason evercookie is controversial is that browsers don’t quite give users the level of control (real or perceived) that they expect over storage other than cookies.

Here is another use case for why this is needed. Google Analytics is used on perhaps half the internet’s websites, and it sets a cookie every time. That means roughly 230 bytes added to every HTTP request for a lot of websites. Google could switch to localStorage and free up those 230 bytes. While they technically could do this, in practice it could create a firestorm of attacks against them. The problem is it would be spun as Google trying to evade cookie deletion, and as a privacy violation: the same storm that evercookie created. I suspect that’s why it hasn’t been done to date. The truth is the Google Analytics team has done a lot to improve performance, including making the tracker entirely async, but this move would be controversial.
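
To make the mechanics concrete, here’s a hypothetical sketch of the approach (the key name and endpoint are made up; this is not how Google Analytics actually works): the identifier lives in localStorage and is sent explicitly, so it never rides along in Cookie headers.

// Hypothetical: a first-party analytics id kept in localStorage.
// Unlike a cookie, it is never auto-attached to HTTP requests,
// which is where the ~230 bytes per request would be saved.
function getClientId() {
  var id = localStorage.getItem('_analytics_id'); // made-up key
  if (!id) {
    id = Date.now().toString(36) + '.' +
        Math.random().toString(36).slice(2);
    localStorage.setItem('_analytics_id', id);
  }
  return id;
}

// The tracker then sends it explicitly, e.g. on an image beacon:
new Image().src = '/collect?cid=' + encodeURIComponent(getClientId());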

It’s no longer about “cookies”, but “user data”.

On webOS Going Open Source

webOS is going open source. I’ll start by saying I’m rooting for webOS, but I’m skeptical it will have much success given the announcement. An OS is a huge undertaking; a mobile OS is even more difficult.

Define “open source”

The press release promises to make the “underlying code of webOS available under an open source license”. Technically Apple can say the same thing about OS X and iOS*. Working on or with an OS is an investment, a very large investment. If the release isn’t complete or nearly complete, it’s not going to fly. Similarly, unless the license is free enough, it’s not worth the investment. It sounds like it will be pretty inclusive and liberally licensed (Apache could be a good choice), but until that happens I wouldn’t place any bets, especially given HP’s seemingly bizarre behavior lately.

Ecosystem/Community

Building an ecosystem and community around it is going to be tough. Mozilla started years ago with no competition except a stale IE. AOL gave $2 million US to start the Mozilla Foundation, and the code already had open source legs from years under Netscape. While few people knew of “Mozilla”, and “Firefox” in both name and concept was still a while away, it was a popular browser on Linux and in some more technical crowds. webOS is starting off against Google’s Android. Google has resources. Google isn’t Microsoft in this story. Google won’t be Microsoft.

Mozilla was also “just” a browser, with much less surface area than a mobile OS; by that I mean hardware support and the intricacies of dealing with the Linux community. Releasing the source alone won’t do it. HP reportedly had about 500 engineers working on webOS; that’s the type of effort it takes. Google puts substantial resources behind Android.

Lastly, people don’t install open source OSes on their phones. They don’t install any OS on their phones, except upgrades. That means hardware partners are critical for any viability, and hardware vendors already have deals and plans with Google. This is going to be tough to penetrate. Mozilla never had much luck getting desktops to ship with Firefox; the vast majority of users choose Firefox themselves. On the desktop, at least for now, that is an option. On mobile hardware it generally isn’t.

Even if someone comes up with a way to root and “upgrade” Android and/or iPhone devices to run webOS, you can be sure hardware vendors and mobile providers will be in front of Congress the next morning to outlaw the practice and stop it (or claim it’s “wiretapping”). Given the money behind app stores and mobile payments, which is already a mess, there’s too much money at stake. These “rogue” devices could be banned from major networks if the practice got traction.

I’d love to see it survive and thrive. I’d love to see a PC like community of hardware vendors. But it’s going to be an uphill battle.

More than likely, pieces will be taken and strapped onto Android as an HTML5-based, Adobe AIR-like platform for building and deploying apps. It may also find some use in non-mobile niches, from entertainment devices to home alarms. As more devices become ARM-based computers rather than microcontrollers, webOS, like Android, could be a way to get started building an interface. I see that as more likely than webOS continuing as a mobile OS.

A successful open source project takes a lot more than most give it credit for. Source alone doesn’t do it. It’s the community and ecosystem that sustains a project, not a tarball.

* I’d consider Android half open, considering it does source dumps and develops largely in private.

On The Future Of Flash

Adobe is killing Flash as a browser plugin for mobile. This shouldn’t come as a surprise to anyone who works on the web. Anyone who knows me knows I’ve bet on HTML5 since the beginning and haven’t been ashamed to say it. I don’t do Flash. To quote Adobe:

Our future work with Flash on mobile devices will be focused on enabling Flash developers to package native apps with Adobe AIR for all the major app stores. We will no longer continue to develop Flash Player in the browser to work with new mobile device configurations (chipset, browser, OS version, etc.) following the upcoming release of Flash Player 11.1 for Android and BlackBerry PlayBook.

I strongly suspect that even this use case is limited and will experience the same fate as the Flash plugin within the next 24-36 months. HTML5 is supported by browsers, a browser ships with every OS, and it’s highly optimized for whatever it’s running on. It’s also the ultimate in cross-platform. Why write Flash when you can write something once for every platform and not rely on a vendor to abstract you?

Platforms like PhoneGap bridge the world of apps and HTML5 quite nicely. Adobe bought Nitobi, which develops PhoneGap, but PhoneGap is also going to the Apache Software Foundation, which means Adobe’s ability to derail the project would be somewhat limited if they wanted to go that route.
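
The model is simple: the app itself is plain HTML/JS, and the native wrapper fires an event once its device APIs are available. A minimal sketch (deviceready and navigator.notification are PhoneGap’s API; the strings are illustrative):

// A minimal PhoneGap-style app: ordinary web code, with native
// functionality exposed to JavaScript by the wrapper.
document.addEventListener('deviceready', function () {
  // navigator.notification bridges to a native dialog rather
  // than a browser alert().
  navigator.notification.alert('Hello from the web stack',
      function () {}, 'My App', 'OK');
}, false);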

Quite a few apps use HTML/JS extensively already. HTML5’s success is despite Apple essentially crippling the use of HTML5 in native apps by preventing UIWebView from taking advantage of the Nitro engine. If/when Apple gets around to fixing this, another barrier will be gone. I suspect Apple will eventually make scrolling that doesn’t suck easier on iOS. Right now Joe Hewitt’s Scrollability is likely your best bet.

Adobe goes on to say:

However, HTML5 is now universally supported on major mobile devices, in some cases exclusively. This makes HTML5 the best solution for creating and deploying content in the browser across mobile platforms. We are excited about this, and will continue our work with key players in the HTML community, including Google, Apple, Microsoft and RIM, to drive HTML5 innovation they can use to advance their mobile browsers.

Interestingly, they left out that little browser vendor Mozilla. Perhaps that’s because they are most likely targeting WebKit on mobile, which is the common tie between those companies, sans Microsoft, whose IE support they need. If Adobe wants a future here, they should learn quickly that you can’t ignore platforms. My advice to Adobe is to make sure their solution lets developers bring their product to any modern browser on any device.

Flash is the last plugin with real usage, even on the desktop. This is the first step towards browser plugins going away as a concept. It’s unlikely many will see a need to go HTML5 on mobile and develop a separate Flash code base to do the same thing on the desktop. The name of the game these days is write once, run anywhere (credit to Sun for the slogan). Today marks the start of the decline of Flash.

As Brendan Eich best put it: “Always bet on JavaScript”. I have and I continue to do so. The Open Web is winning, slowly but surely.

Quick Thoughts On Dart

Google yesterday officially took the wraps off Dart. Google stopped short of outright calling it a replacement for JavaScript, though that does seem to be one of the goals.

I’m still looking at it myself, but my first impression is that the point of another language is buried in the details of the announcement. This particular sentence I think is the focal point (emphasis mine):

  • Ensure that Dart delivers high performance on all modern web browsers and environments ranging from small handheld devices to server-side execution.

I suspect the real goal behind Dart is to unify the stack as much as possible. Web development today is one of the most convoluted things you can do in computer science. Think about just the technologies/languages you’ll deal with to create a “typical” application:

  • SQL
  • Server Side Language
  • HTML
  • CSS
  • JavaScript

That’s actually a very simple stack, almost academic in nature. In real life, most stacks are even more complicated, especially when dealing with big data. Most professions deal with a handful of technologies; web development deals with whatever is at hand. I’m not even getting into supporting multiple versions of multiple browsers on multiple OSes.

Google even said in a leaked internal memo:

- Front-end Server — Dash will be designed as a language that can be used server-side for things up to the size of Google-scale Front Ends. This will allow large scale applications to unify on a single language for client and front end code.

Additionally:

What happened to Joy?
The Joy templating and MVC systems are higher-level frameworks that will be built on top of Dash.

By using one language, you’d reduce what a developer needs to know and specialize in to build an application. That means higher productivity, more innovation, and less knowledge overhead.

This wouldn’t be Google’s first attempt at this, either. GWT is another Google effort to let developers write Java that’s transformed into JavaScript. That, however, doesn’t always work well and has limitations.

The web community has actually been working on this from the other direction via node.js, which takes JS and puts it on the server side, rather than inventing a language that seems almost server-side and pushing it into the browser.
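
The appeal is easy to demonstrate; the canonical node.js hello world is the browser’s language running the server half of the stack:

// server.js: JavaScript on the server side of the wire.
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Same language on both ends of the stack\n');
}).listen(8080);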

Google still seems to have plans for Go:

What about Go?
Go is a very promising systems-programming language in the vein of C++. We fully hope and expect that Go becomes the standard back-end language at Google over the next few years. Dash is focused on client (and eventually Front-end server development). The needs there are different (flexibility vs. stability) and therefore a different programming language is warranted.

It seems like Go would be used where C++ or other high-performance compiled languages are used today, while Dart would be used for higher-level front-end application servers as well as the client side, either directly or through a compiler that turns it into JavaScript.

Would other browsers (Safari, Firefox, IE) consider adopting it? I’m unsure. Safari would likely have a head start, as the memo states that “Harmony will be implemented in V8 and JSC (Safari) simultaneously to avoid a WebKit compatibility gap”. Presumably IE and Firefox would be on their own to implement or adapt that work.

New languages rarely succeed in adoption. On the internet the barrier is even higher.

Version Numbers Still Matter

Google Doesn't Care About Web Developers

I ran into an interesting situation today, not unlike one I’ve encountered hundreds of times before, but this time with Google Chrome. One person was able to reproduce a bug on an internal tool with ease. Nobody else could. Eventually, upon getting the version number, it clicked: this particular computer had Chrome 10 installed.

For my younger readers, Chrome 10 is an “ancient” version from March 2011. This is back when Obama was still in office, the United States was in a recession, there was a debt problem in Europe, hipsters carried their iPads in man purses… These were crazy times.

For whatever reason this Chrome install, like a number of installs out there, didn’t update. It could be security permissions; it could have been disabled for some reason. I don’t really know, or care terribly much. The reality is that not everyone can update on release day, regardless of opinions on the matter.

Go try to find Chrome 10 for Mac OS X on the internet. Try using a search engine, like Google. Now try to find it for any platform. Good luck; it’s a pain. I can get a Phoenix 0.1 binary from September 2002 (it was my primary browser for part of fall 2002; I used it before Firefox was cool), but I couldn’t find Chrome 10 from way back in 2011. I was eventually able to track down a Chrome 10 binary, work around the problem, and move forward, but it took way more time than it should have.

This to me illustrates a few key points:

  • Version numbers still matter – They matter. Simple enough. Even in an environment as sterile as this one, I had to deal with an older browser. Older browsers exist in larger quantities out in the wild web. Saying they don’t matter anymore is naive. Idealistic, but naive.
  • Make old platforms available – Just because you ship a new version doesn’t mean the old one has no relevance or need anymore. Google lost some serious credit in my mind by making it nearly impossible to get an “older” version of Chrome to test with. This shouldn’t be difficult; Google is said to have approximately 900,000 servers. Surely they can set up an archive with an explicit notice that it’s an archive and users should download the latest. Mozilla has fewer servers than that and manages it.

The web is a fluid platform, and browsers are evolving platforms. Versions will matter as long as two things, the web at large and the platform that is the browser, need to interact. When version numbers no longer exist, it will likely be because the monoculture is so strong they don’t matter. Until then, knowing what browser and what version will matter. Browsers will likely never agree 100% on what to implement, or on a timetable for implementation.

That image is a joke, if you can’t tell. The Google Chrome developers are good people; they just need to put together an archive page for web developers.

On Firefox Versioning

Writing software is actually quite easy. Writing good software is relatively harder, but still easy. Writing software is to a programmer what painting is to a painter. Shipping software, however, is an incredibly complicated task. It’s like getting a stadium full of babies to all have clean diapers at the same time with only one or two people to do the work: as soon as you fix one thing, you discover more crap. The process stinks and you’ll never reach the end. Those who do it, whether by printing a CD, uploading a binary, or pushing out changes to a tier of web servers, know what I’m talking about.

It’s easy to write code that does things. It’s harder to build a product. It’s harder still to actually draw a line in the sand and decide when you’re “done”. The truth is all software ships with bugs; someone who tells you otherwise is an idiot. They almost certainly aren’t all discovered (very likely some will be), but they absolutely exist. The general consensus is that you want no glaring bugs, and no big bugs in common use cases. Obscure use cases will always be buggier. That’s the nature of the beast.

Knowing this, it’s easy to understand that changing release cycles is an arduous process with lots of details to think about. Not everything is quantitative or can be reduced to a math equation. How long is it worth waiting for a feature? Is the shiny button worth 3 days? 3 weeks? 3 months? An indefinite hold? Will it even work as we think? What bugs will it introduce? How long will it take to deal with those? Not an easy decision. Even harder to reach a consensus on. The only thing certain is that a lack of decision guarantees a failure to launch.

The Firefox Version Problem

Firefox is now on a 6-week release cycle. This means features get out the door soon after they’re fully baked. That’s a very good thing: adoption of modern technologies and the latest in security happens quickly. We all benefit from that.

The downside, however, is that upgrades are disruptive. They can break compatibility, and they require extensive testing in large deployments (big companies, educational institutions). That can be expensive and time-consuming if you’re impacted.

The other side of this is that version numbers get blurred. 4.0, 5.0, 6.0… “WTF is the difference?” most users would think, given that it looks largely the same. But is it really 4.0.1, 4.0.2, 4.0.3? As a web developer, what versions are you supporting? This is now much more complicated (don’t even get me started on testing).

Stable vs. Slipstream

My modest proposal is a Stable/Slipstream model (I prefer “slipstream” to “bleeding edge”). For example:

Firefox 7.0 ships in 6 weeks, on September 27 as of this blog post. From then on, every 6 weeks a new release ships and becomes 7.1, 7.2, 7.3, etc. For users, it’s just auto-updates every so often. These intermediate releases are disposable, as the users are on the slipstream: they update rapidly, and a matter of weeks after a release the previous one is unsupported. Previous releases are just a rumor, recognizable only as deja vu and dismissed just as quickly1. Slipstream users are oblivious to the concept of “versions” for the most part. After several release cycles (9-12 months), this becomes “stable” at 7.x. The next day 8.x starts and the process repeats.

From then on (I’d propose 12 months), only security fixes are provided for 7.x. Large deployments that need to do extensive QA adopt the stable branch once a year on a predictable schedule and stick to it. The vast majority of the internet adopts the slipstream (the default) and gets the latest release every 6 weeks. The stable branch is only around for a limited period of time before it moves to the next version. That last release cycle may be a bit more modest and lower-risk than the previous ones.

The end result is that nobody cares about a release older than 12 months; generally speaking, only two releases matter. Slipstreamed users are updating rapidly (and will likely update even more rapidly as the process improves). Stable users have 12 months to hop to the next lily pad. This goes for IT, web developers, add-on developers, and browser developers.

In the long term (the next few years), I think web applications will become more agile and less rigid. Part of what things like HTML5 provide is a more standardized, less hacky way of doing things. That means fewer compatibility issues with untested browsers. As older applications are phased out, the test cycles for large deployments will decrease. Ideally some will eventually just migrate away from “stable”.

Version Numbers

Yes, version numbers still exist, but for most users they don’t mean terribly much unless they have a problem or need to verify compatibility with something, in which case the major release number is likely the important one. Version numbers are still a necessary evil, and users do need to know how to find theirs, even if they don’t need to know it offhand. The browser version is pretty much the first step of any diagnostics for a web application, as it’s the ultimate variable.
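
Which is why it’s worth capturing automatically. A rough sketch of baking the version into error reports (the endpoint and field names are made up):

// Attach the browser's self-reported version to every error report;
// it's the first question anyone debugging will ask.
window.onerror = function (message, url, line) {
  var report = {
    message: message,
    url: url,
    line: line,
    userAgent: navigator.userAgent // browser name and version
  };
  // A lo-fi image beacon; "/error-log" is a hypothetical endpoint.
  new Image().src = '/error-log?data=' +
      encodeURIComponent(JSON.stringify(report));
};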

Just my thoughts on the last several weeks of debate.

1. Men in Black (1997)