Mozilla Open Source

Open Source And Recessions

There’s an interesting blog post on Open Source and recessions worth reading. Essentially the question is this: Does a recession have a negative impact on open source?

I’d say the answer is somewhat more complex than a simple yes/no. There are many different types of projects out there with entirely different circumstances. However, I suspect a project’s impact could be gauged from a few key aspects of its operation:

Purpose – The purpose of the project is likely the most critical aspect. For example, I don’t think there would be any significant impact on projects like the Linux kernel, which is essential to many products out there, including the server infrastructure that powers much of the web and many companies’ computer systems. Then you have consumer products like TiVo, Google Android, etc. Because its purpose is so broad, there are enough people with a financial interest in seeing development continue. WebKit, Mozilla, and Apache are good examples of this: they have broad usage by many. Something specific to a more obscure task would have more trouble due to its more limited market.

Development Team – Of course, for a project to succeed it needs one or more developers. During a recession one could theorize that many would be less inclined to participate. This may not necessarily be so. First of all, quite a bit of open source development is loosely sponsored. Several projects have actual staff, paid employees who write open source code. For example, Apple employs people to work on WebKit. Mozilla has staff working on Firefox. There are people paid to work on Linux (Red Hat, IBM, Novell, etc.) and many other open source projects. There are also companies who contribute code that would be of strategic value to them, and those who are simply willing to sponsor some work they want to see happen. All of these fund developers of larger open source projects. But would developers who aren’t sponsored or employed to code still participate? I theorize most still would: since they don’t depend on it for income during good times, presumably a job during a recession wouldn’t prohibit participation any more than a job during years of economic growth would. There’s also the impact of college students, who participate partially for the educational aspect. The early 2000s saw a recession and still showed a fair amount of growth in open source; in fact, many of today’s stars really started to take shape during that period.

Funding – Somewhat obvious: funding is key. Who pays the developers (partially the last aspect I discussed)? Who pays for the project’s needs (servers, etc.)? Many of the more popular projects (almost all of the above) have either an organization or a for-profit company built around them. That company often sponsors the needs of the project. Unless that company’s product or service is no longer needed during the recession, funding likely remains. That’s partially the first aspect I discussed.

It’s my belief that the larger and more popular open source projects would feel a minimal impact during a recession. I think history has shown this, and common sense agrees. They are mostly low in development cost, adequately funded (often from diverse sources), stable, and have a broad team of developers. The projects in trouble are the ones with very few developers, or only one; it’s even worse if those developers share the same sponsor, and worse still if there is little community around the project. Most projects would generally experience a slight slowdown in development; the degree would depend on the above. A few may go dormant for a period of time. Thanks to things like GPL licensing, another developer can pick up the work should there be a market for it in the open source ecosystem.

Overall I don’t think open source would be nearly as impacted as most businesses during a recession. The model is very different. Open source, when successful, has a community and many different sponsors. That diversity allows a project to survive even when a recession forces some sponsors to reduce or eliminate involvement. Open source is also, by definition, accustomed to this type of environment: developing on a budget, soliciting sponsors to help cover costs, etc.

The interesting thing about recession is that it impacts everyone, but the degree to which someone is impacted varies. For example construction and housing are generally harder hit than other industries. People tend to cut back on new home purchases before they cut back on other things. Each of those industries has computing needs, sometimes met by open source. This all feeds into the open source ecosystem.

I’d suggest that all of the projects I have mentioned here will do OK during a recession. Many will see a slowdown, but all will continue as long as they provide value. One notable situation: Mozilla’s income comes largely from Google, which is based on ad revenue. During a recession and a bubble bursting, this would likely dramatically reduce the revenue brought in. This isn’t being ignored. As the 2006 Financial FAQ states:

First, the cash reserve is of course a form of insurance against the loss of income. We will continue to maintain enough of a reserve to allow us flexibility in making product decisions….

It seems that an open source project with a diverse stream of funding from individuals and companies of various industries, as well as developers in different situations is in the best position to survive.

It’s an interesting topic.

Mozilla Open Source

Mitch Kapor Leaves Chandler

Chandler is the attempt by OSAF to create a PIM. Several years later, it’s still not ready for prime time. Now Mitch Kapor is leaving, and his funding will follow.

He also sits on the board of directors for the Mozilla Foundation, parent of the Mozilla Corporation and the yet-to-be-named mail corporation which will continue Thunderbird’s development.

I think this quote from the article is really something to pay attention to:

The best communal open-source projects are run like Mozilla (strong core development team with easy pluggability from the outside), Eclipse (cohesive corporate involvement to create a common core while competing at the edges–come to think of it, Linux is like this too), or Apache (strong technology brand that allows for a wide range of experimentation).

Some more interesting reflections on the news can be found on Why does everything suck. Chandler always sounded very interesting, but it never really found its way.

Things to keep in mind as Thunderbird develops wings of its own.

Mozilla Open Source Programming

Benchmarking And Testing Browsers

When people talk about open source they often talk about the products, not the process. That’s not really a bad thing (it is after all about the product), but it overlooks some really interesting things sometimes. For example open source tools used in open development.

A few months ago Jesse Ruderman introduced jsfunfuzz, a tool for fuzz testing the JS engine in Firefox. It turned up 280 bugs (many already fixed). Because the tool itself is not hoarded behind a firewall, it has also helped identify several Safari and Opera bugs. It’s a pretty cool way to find some bugs.
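jsfunfuzz itself is written in JavaScript and targets the JS engine directly, but the core idea behind this kind of fuzzing generalizes. Here’s a minimal, hypothetical sketch in Python (the function names are mine, not jsfunfuzz’s): generate random well-formed inputs from a tiny grammar, feed them to an evaluator, and collect anything that blows up.

```python
import random

def random_expr(depth=0):
    """Build a random arithmetic expression string -- a toy version
    of the grammar-based generation jsfunfuzz does for JavaScript."""
    if depth > 3 or random.random() < 0.3:
        return str(random.randint(-100, 100))
    op = random.choice(["+", "-", "*"])
    return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"

def fuzz(evaluator, trials=1000):
    """Feed random inputs to an evaluator; return inputs that raised.

    Any failure on well-formed input is a bug candidate worth filing.
    """
    failures = []
    for _ in range(trials):
        expr = random_expr()
        try:
            evaluator(expr)
        except Exception:
            failures.append(expr)
    return failures

# Using Python's own eval as a stand-in interpreter: well-formed
# arithmetic should never raise, so the failure list should be empty.
bugs = fuzz(eval)
print(len(bugs))  # 0
```

The real tool’s grammar covers far nastier territory (closures, getters, decompilation round-trips), which is why it shakes out crashes that hand-written tests miss.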

The WebKit team has now released SunSpider, a JavaScript benchmarking tool. Something tells me this will lead to some performance improvements in everyone’s engine. How much will be done for Firefox 3.0 is a little questionable considering beta 2 is nearing release, though you never know. There’s been some nice work on removing allocations recently. So just because it’s in beta, you can’t always assume fixes will be minor in scope.
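SunSpider is a JavaScript suite, but the harness underneath any micro-benchmark reduces to the same idea: run a workload many times and keep the best timing to suppress noise. A toy sketch in Python (the workload and function names here are made up for illustration):

```python
import time

def benchmark(fn, runs=5, loops=1000):
    """Time fn over several runs and keep the fastest run --
    best-of-N filters out OS scheduling noise and cache warm-up."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        for _ in range(loops):
            fn()
        best = min(best, time.perf_counter() - start)
    return best

def string_concat():
    """A tiny stand-in workload, the kind of thing JS suites stress."""
    s = ""
    for i in range(100):
        s += str(i)
    return s

elapsed = benchmark(string_concat)
print(f"best of 5 runs: {elapsed:.4f}s")
```

The value of a suite like SunSpider isn’t any single number, it’s that every engine runs the identical workloads, so the comparison is apples to apples.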

Another test that many are familiar with is Acid 2, which essentially checks CSS support among browsers. Ironically, this one too was released when Gecko was somewhat late in the development cycle.

Efforts like this really help web development by allowing browser developers to have a baseline to compare their strengths and weaknesses. Having a little healthy competition as motivation can be pretty helpful too 😉 .



I hate when misinterpretations become seen as fact. Supposedly 80% of Firefox bugs won’t be fixed. That’s said to be a bad thing. Here are some realities:

  • In every release cycle, everyone wants every bug to block a release and therefore everyone is “blocker-happy”, and later in the cycle, all are changed to non-blocker status except the most critical as perceived by developers, drivers, and testers.
  • Every release of Firefox, like every release of every large software project, ships with thousands and thousands of documented bugs, the overwhelming majority of which nobody encounters, or which are so minor you don’t even notice.
  • This process isn’t new, it’s been happening since the early days of software development.
  • If this process didn’t work like this, there would never be a release of major software products.

A bug is either a defect or an “unintended feature”. Complex products like browsers have thousands. This isn’t a surprise to anyone who works with software on a daily basis. Why? Because every bug you fix or feature you add introduces new code, which potentially causes new bugs in other places. Even if you devote 100% of your effort to fixing bugs, you’ll likely never get there. That’s the nature of the game. So what makes one bug worthy of blocking? Well, generally it must meet some requirements:

  • Must be reproducible and clearly a bug (not a “Firefox doesn’t load ActiveX” complaint).
  • A fix must be identifiable and achievable.
  • Must be in a more visible location. It’s not effective to allocate large amounts of effort to something so obscure that only 1 in 10 million people will ever encounter the testcase.
  • Must be severe in some sense (data loss, security, usability, performance, etc.)
  • Fix must not be beyond risk tolerance threshold.

or, it must be a project requirement, meaning a feature that is deemed necessary to ship the release and worth holding for (artwork for UI for example).

Every project involves deciding what bugs ship, and what holds a release. Every single one. If there’s a project that doesn’t, its QA is likely flawed or inadequate. Firefox has the advantage of thousands of nightly testers. This helps quite a bit, not only in finding bugs, but in seeing how prevalent a particular bug is, and what its impact is.

One should note that just because something isn’t blocking, that doesn’t mean it won’t get fixed. It simply means the release won’t be held for that bug. Should someone fix it, and it’s approved, it can still potentially make the release. The key is that the fix be low enough risk that the benefits outweigh the risk of potential regressions.

If you’re still shocked by this, let me alert you to something: the product (browser, feed reader, etc.) you are using to read this has thousands of bugs. The OS it runs on, has thousands of bugs. Any alternative you pick will be the same. Pick your poison.

I should note these bugs do not get marked as WONTFIX or INVALID. They remain open. They may be fixed in a subsequent release, or they may just become outdated and fixed through some other means (code is deprecated and replaced with something else, a feature is dropped or revamped).


The Shape Of Firefox 3.0

Alex Faaborg has an awesome post on UI changes for Firefox 3.0. It’s a little lengthy, and most pics are wireframes, but it’s a rewarding read for anyone in the browser space or with an interest in user interfaces.

Overall I like most of the changes. I’ve been ranting about the need for a better bookmarking interface since 2005. Not sure if I was ahead of my time, or just impatient (likely the latter), but it’s finally becoming a reality, which I’m thrilled about. I’ve got some ideas on where it could go from here to make it even better, but that’s another post I hope to get to sometime.

One change that caught my eye is this:

The lock is being removed from primary UI, and Firefox will now use a metaphor based on identity, rather than security, which will appear on the site button if an SSL or EV certificate is available. The super short explanation for this change is that the user might have an encrypted connection to criminals, so telling them that they are safe is a false cue. For an in-depth discussion of why we are moving away from the metaphor of a lock, watch Johnathan Nightingale’s Mozilla24 presentation Beyond the Padlock.

I’m not sure this is really the best solution. I’d personally like to see the lock stay in the UI, but with its meaning redefined. For a decade or more, the public has been told that the best way to tell if your information is safe is to look for the lock. I’d venture 99% of the general population doesn’t really know it symbolizes the use of SSL; they just know it means your information is “safe”. My thinking is that the most graceful transition would be to map that onto the new identity system. Essentially, the information it reveals would be the new identity information, but it would provide backwards compatibility with previous versions and with other browsers. One less learning curve. Still, in regards to safety: look for the lock.

Regarding the iconic form:

[Image: the iconic form, from Alex Faaborg’s post The Shape of Things]

I could make a rather infantile joke, but I’ll leave that as an exercise for the reader.

Overall it’s some great progress. I think these changes allow for a much more functional user interface with added features and less UI. The native appearance will also be excellent for Mac and Linux users who have longed for a UI that looked “right” on their systems.

Apple Mozilla

iPhone/iPod touch SDK On The Way

Readers know I’ve been big on Apple opening up the iPhone/iPod touch to developers since the beginning. Apple finally came through, announcing an SDK will be made available, though not until early next year. It specifically noted apps will work on the iPod touch as well. About time. All of a sudden these devices went from being cool but not really worthwhile to having massive potential. Still missing on the iPhone is 3G, but that’s coming, likely in an ’08 refresh of the product line.

Gizmodo has an interesting banner on top of their coverage of the announcement. Notice the positioning of the Firefox logo. This comes pretty soon after the announcement of the Firefox Mobile effort. Provided the SDK is good enough, I think there’s a pretty good chance we will see a Gecko product on the iPhone in the not-too-distant future. For quite some time it will likely be Minimo-based and very simple, not the more robust plans which require Mozilla 2.

On a side note, I’m surprised nobody has managed to get Linux running on the iPod touch yet. I thought that would have happened by now. The iPhone would be somewhat pointless since getting the phone functionality to work would be a real battle.

Some sort of simulator/emulator to aid development would also be interesting, though I don’t think that’s very likely.

Overall it’s great news. Lets see that SDK already!



Firefox Mobile

I am really glad to see the new Mozilla Mobile initiative. Mozilla 2 is a great time to undertake most of these changes. The thing that really sucks about developing for mobile devices is that the browsers are pathetic at best (with the notable exception of the iPhone’s). Wireless speed is still an issue in some cases, but with 3G coming about, it’s not the biggest concern if you can manage to keep things slim. XUL on mobile will be very interesting. If done right, it would allow for client-side applications that don’t suck, yet have the lowest barrier to entry (JS + XML = easy). Not to mention you can target a bunch of devices with one download and one code base. Don’t forget you’d still be able to do rather realistic debugging on your desktop.

Hopefully by the time this all comes around, data charges for mobile will drop significantly. The iPhone is still $60 for the cheapest plan, and if you need more than 450 minutes of voice, you’ll be spending even more. While interest in the iPhone is high, between hardware and plan costs, I think it’s still too expensive to attract the masses. There’s still time: Firefox 3 isn’t even out yet, and Mozilla 2 is still a little while away. I suspect these prices will be dropping as other providers try to compete with the iPhone. A price war is very likely.

One question remains: will it run on the gPhone?

Apple Mozilla

Identity Crisis?

Some real quick thoughts on UI this evening. This isn’t a very formal post but an attempt to get some thoughts out there.

So there’s talk of a new theme for Firefox on Mac OS X. According to some, it’s a clone of Safari. One must remember these are just early prototypes, not final UI by any stretch of the imagination.

I’m going to agree it’s got some similarities, but I don’t think there’s much choice if Firefox is to look like a native Mac OS X application. Originally Mac OS X preferred the “pinstripe” interface design. This is essentially what the current Mac OS X theme for Firefox is going for. I recall the pinstripe theme for Firefox even being considered a rip-off of other Mac OS X applications at the time. In more recent releases Apple moved away from pinstripe and towards the “Brushed Metal” interface. In 10.5, Apple is said to be moving away from Brushed Metal towards a “Unified” interface to address some perceived inconsistencies in the previous two UI schemes. There’s not too much on the web about Unified since 10.5 screenshots are forbidden under NDA, but you can catch a small glimpse via Apple’s Mac OS X pages for things like Mail and Finder. I’d consider it an incremental evolution from Brushed Metal, based on what I’ve seen thus far.

The application everyone seems to watch for cues to Apple UI standards is iTunes/QuickTime, which, if you notice, even Safari resembles.

Consistency can be regarded as “boring”, but it does have an advantage: it becomes familiar quickly, and has less of a learning curve. It also makes applications seem more intuitive, since UI elements are well understood. Apple wants this to encourage people to make the jump, now more than ever (the iPod effect).

That leaves the question: how do you blend in with the OS while remaining unique? Especially an OS that’s looking to make things as simple as possible for the user by taking consistency to new levels. I personally think it’s all about making the easiest-to-use product out there, with the best features (not an easy combo). I don’t think most users are attracted to a “unique UI”. I think they are attracted to a clean, easy-to-use UI on an already great product. That’s not to say one shouldn’t be unique, or shouldn’t do a better job than others.

Perhaps it would be interesting to start a “user generated” brainstorm (yes, I threw in a “web 2.0” term) similar to the Gimp UI Redesign effort. Let users mock up what they think it should look like. If anyone wants to, feel free (you can use free image hosting if needed) and leave a comment pointing to your mockups. If some appear, I’ll gladly make a follow-up post and put it on Planet Mozilla to get more eyes.

Edit [9/28/2007 @ 9:28PM EST]: Official wiki page for posting your mockups.

Google Mozilla Spam

Phishing Unit Testing And Other Phishy Things

Seeing these results is pretty cool. I hope someone has come up with (or will come up with) a way to have a test like this running periodically (at least weekly, if not daily or multiple times a day) which does an analysis of phishing sites and how many are being blocked. I’d presume Google and other data services would have some interest in this. It could be as simple as an extension for browsers (yes, IE too) which reads a feed of phishing sites, visits each one, and reports the results to a web service, running in a confined environment (a virtual machine, or a dedicated box) free of tampering. I think the real advantage would be to see how effectiveness varies over time as phishers become more sophisticated.
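The scoring side of such a harness is the simple part. A hypothetical sketch in Python (the feed URLs and blacklist here are entirely made up; a real harness would fetch a live feed and drive an actual browser):

```python
def detection_rate(phish_urls, blacklist):
    """Fraction of confirmed phishing URLs a blacklist catches --
    the running tally the harness would report over time."""
    if not phish_urls:
        return 0.0
    caught = sum(1 for url in phish_urls if url in blacklist)
    return caught / len(phish_urls)

# Hypothetical feed of confirmed phishing sites and a vendor blacklist.
feed = [
    "http://bank-login.example",
    "http://paypa1.example",
    "http://secure-update.example",
    "http://ebay-verify.example",
]
blacklist = {
    "http://bank-login.example",
    "http://paypa1.example",
    "http://ebay-verify.example",
}

print(detection_rate(feed, blacklist))  # 0.75
```

Plotted over weeks, a sudden dip in everyone’s rate would be exactly the signal that phishing tactics have shifted.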

Take spammers, for example. At first spam was pretty simple; now they are using animated GIFs, sophisticated techniques to poison Bayesian analysis, botnets, etc. I presume over time we’ll see the exact same thing with phishing attacks. I doubt it’s going to get any better. On the positive side of things, this is still in its infancy, so we can start learning now, and be more aggressive than people were about the spam problem, which got way out of hand before everyone realized it was really something to worry about.

I’d ultimately like to see the percentages for different anti-phishing blacklists/software updated frequently, so we can keep a running tally. Perhaps it would be a good indicator of when phishing tactics require a software or methodology update. I think overall everyone would benefit from some industry collaboration rather than competition. The problem with phishing is that to be effective your research must be good. To do good research you need to cast a wide net, and capture only one species of phish while not letting any dolphins get stuck in the net (sorry, couldn’t resist).

I’d be curious to know what others think of such testing and efforts (general users, as well as anti-phishing/spam vendors). Is the war against spam effective? Should the same techniques be used? Is it time for coalition building? Should we each go it alone? How do you monitor changes in the techniques phishers use?

I know Google is pretty serious about keeping up with the data in a very timely manner, and from what I can tell, most other vendors are as well. But I wonder how industry-wide statistics could further benefit everyone. Perhaps simply through the competition of trying to have a higher average score. Perhaps through the detection of changes in techniques (noted by everyone’s collective decline in detection rate).

I’d love to hear what others think of Phishing protection. It’s a rather interesting topic that many don’t give too much thought to, but it really is an important part of how browsers make the internet safer.

Mozilla Personal

Moving Forward

It seems that since Firefox 2.0 shipped, everyone is really taking some time to think about the future. Not that it wasn’t on people’s minds before 2.0. For me, 2.0 was really a maintenance release. End users got some great new features and fixes, but all I really contributed was a small fix or two; most of the time I could allocate was spent on planning and server-side development (more on that some other time). Mike Connor and I seem to have overlapped on feelings towards future improvements:

  • Site compatibility We’re doing pretty good, marketshare helps, but we need to be better. We need to push Reporter, and put real time into analysis of the top sites showing up there. Sometimes its our fault and we need to prioritize bugfixing, sometimes its Tech Evangelism (and we need to get back to doing that too).

I mentioned a few weeks ago it’s important for end users to report problems, and got some traction. But I’m still looking for ways to get more casual users reporting problems they encounter. Anyone with ideas on how this could best be accomplished, without annoying the user or adding intrusive UI should let us know. Either leave a comment here or contact me. We can’t help what we don’t know.

To help improve the quality of analysis of reports, I have gotten pretty close to a new reporter webtool. This version has much more flexibility and allows for easier viewing and manipulation of data. I hope to give it more time in the next few weeks and make a drive to go live with it. It’s been delayed several times (my fault), but it’s now in the final stretch. Future revisions will be much more incremental to prevent such delays again.

To further help improve the quality of reports, Gavin Sharp wrote a patch to capture the character encoding of web pages reported, and I wrote one to allow users to send a screenshot of what they see. I believe both should make 3.0. I think these changes will help improve things in the long term. Knowing the charset can help with character-related problems users experience (since charset detection is somewhat of a messy game), and having actual screenshots of what users see is of course beneficial for rendering issues.
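To illustrate why capturing the charset matters: detecting it without metadata is guesswork. Here’s a toy Python sniffer that checks for a byte-order mark and then UTF-8 validity, falling back to Latin-1; real detectors (like Gecko’s universal charset detector) use statistical models over byte frequencies, so treat this only as a sketch of the problem:

```python
def sniff_charset(data: bytes) -> str:
    """Very rough charset guessing: BOM first, then UTF-8 validity,
    otherwise assume ISO-8859-1. Real detection is far messier."""
    boms = [
        (b"\xef\xbb\xbf", "utf-8"),
        (b"\xff\xfe", "utf-16-le"),
        (b"\xfe\xff", "utf-16-be"),
    ]
    for bom, name in boms:
        if data.startswith(bom):
            return name
    try:
        data.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        return "iso-8859-1"

# The same text in two encodings is guessed differently -- which is
# exactly why a report that records the page's declared charset helps.
print(sniff_charset("héllo".encode("utf-8")))       # utf-8
print(sniff_charset("héllo".encode("iso-8859-1")))  # iso-8859-1
```

A heuristic like this fails on plenty of real pages, which is why having the actual charset attached to each report beats guessing after the fact.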

Hopefully some of the bigger Gecko changes taking place on the trunk will further improve site compatibility. Of course, growing marketshare has helped and will continue to help push websites towards a policy of cross-browser compatibility. That has been a driving force in the past, and will continue to be. So remember: don’t spoof your user agent more than absolutely necessary. Make sure webmasters know who you are. “Stand up and be counted”.

My personal goals for Firefox 3.0 are these:

  • Get the new reporter webtool in place. The sooner the better. It’s been delayed too many times. At least now it’s close.
  • Get charset and screenshot support up and running. Investigate if there’s something else that would be really beneficial.
  • Find new ways to get more end users to let us know when they encounter a problem, rather than just keep quiet.
  • Keep reading, playing around and getting new ideas. IMHO that should be a goal for anyone doing anything in life.

I think that’s a rather attainable set of goals with a definite positive impact.

I’m also working on an update to MozPod to allow synchronizing your iPod calendar with Lightning (in Thunderbird) or Sunbird. It’s somewhat working, but still rather buggy. There are also several fixes for other issues since the last release of MozPod. I hope to have that out by the end of the year (which would be one year after the last release).

On a more personal note, last month I accepted a position with CBS Digital Media as a developer. For those wondering: yes, I am working on improving the experience for Firefox (and Mac) users, among other development work. It’s not too bad right now (personal opinion), but being better is of course welcome. The usual disclaimer that the views on this blog are mine alone and do not represent those of my employer of course applies (but I’m sure you knew that already).

So there you have it. I plan to write a few more posts in the next few weeks more specific to individual things discussed here, but for now that should give everyone an idea about what I’ve got cooking. It’s a rather interesting mix of things I get to work on.