Someone looking for their 5 minutes of fame (obviously not worth 15 minutes) decided to post some Firefox Myths. It’s an interesting read, though it has a few oddball statements that really don’t make sense.
“Firefox has lower System Requirements than Internet Explorer”
The author omits that meeting the “system requirements” doesn’t make the product usable; they’re just the lowest tested environment where the product runs. Windows XP can run on a 233 MHz CPU with 64 MB RAM, but that doesn’t include a warning that you’ll throw it against a wall over the poor performance. To use any modern browser you’re going to need more than the minimum specs. Just ask any gamer how accurate “minimum specs” are.
“Firefox is faster than Internet Explorer”
“Faster” can refer to many things (boot time, CSS rendering, HTML rendering, large-file rendering, UI responsiveness, etc.). Assuming boot time, yes, IE is faster, considering much of it loads with the OS at startup. I don’t think anyone has calculated what IE would take if it didn’t integrate into the OS. My bet would be Opera is the fastest on Windows.
“Firefox is a secure Web Browser”
This is literally the first time I’ve heard that argument. The closest I’ve heard is “more secure”. Nothing more than a “Hello World” program is secure. Every product has vulnerabilities, no matter how good the programmer and no matter how good the audit of the source code. The question is how easy the vulnerabilities are to find and exploit. I’d say since you can trick an IE user into trusting an ActiveX object (you can’t do that in Firefox, since it doesn’t use ActiveX), there’s an advantage right there. Social engineering is a form of hacking; you don’t have to know how to program to hack. The closest Firefox has is extensions, though they seem to be mainly limited to more advanced users, who tend to be a bit more cautious.
“Firefox is a Solution to Spyware”
“Firefox is Bug Free”
Ok, I admit I literally laughed at this one. I can’t imagine anyone with any computing experience making this claim, so I’d say the author made this one up. As the author points out, it’s impossible for software to be bug free.
“Firefox was the first Web Browser to offer Tabbed Browsing”
Again, something I doubt is really said, especially considering, as Asa notes:
In September of 2001, Dave Hyatt added a tabbed browsing mode to Mozilla. This feature was released in Mozilla 0.9.5 in October of 2001
Yes, that’s right. Mozilla (SeaMonkey) had tabs before Firefox was even on the radar. He also notes NetCaptor as being first.
“Firefox fully Supports W3C Standards”
Again, not likely anyone really says that. Anyone who cares enough to even know what W3C Standards are knows how poorly implemented they are. Interestingly, the author omits that IE doesn’t fare too well in most categories of the site the author chose to reference. The author also misreads the statistics:
“XHTML 1.0 changes”
“XHTML 1.1 changes”
Notice the word “changes” as the stats author defines it (“not covered in the sections above”). The results are cumulative. You can score 100% on XHTML 1.1 and still be pretty much nowhere because your XHTML 1.0 score is so low. 100% XHTML 1.0 and 24% XHTML 1.1 (Firefox) is more usable than 58% XHTML 1.0 and 39% XHTML 1.1 (IE) for most (if not all) real purposes. Now, to be fair to everyone, the author notes “Percentages only concern the features tested by this resource”. I’m not sure if there is a more thorough analysis than that. If someone knows of one, please leave a comment.
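To see why the cumulative reading matters, here’s a minimal sketch of the arithmetic. The feature counts (100 features for XHTML 1.0, 20 for the 1.1 changes) are made-up assumptions purely for illustration; only the percentages come from the stats being discussed:

```python
def overall_support(pct_10, pct_11_changes, n_10=100, n_11_changes=20):
    """Weighted support across the XHTML 1.0 features plus the smaller
    set of XHTML 1.1 *changes* layered on top of them.
    Feature counts are hypothetical; the point is only that the 1.0
    score dominates because it covers far more of the language."""
    supported = pct_10 / 100 * n_10 + pct_11_changes / 100 * n_11_changes
    return supported / (n_10 + n_11_changes) * 100

firefox = overall_support(100, 24)  # ≈ 87%
ie = overall_support(58, 39)        # ≈ 55%
print(f"Firefox ≈ {firefox:.0f}%, IE ≈ {ie:.0f}%")
```

Whatever the real feature counts are, as long as the 1.1 “changes” are a small layer on top of 1.0, the overall picture is dominated by the 1.0 score, which is the point the myth’s author misses.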
“Firefox works with every Web Page”
This is the topic I have a fair amount of experience with, considering I implemented the reporting tool and work with the data a bit. Of course, the author managed to pull a percentage (15% incompatible) out of its proper context to make the percentage appear to be something static, when in reality the source the author quotes states:
If Mozilla and the other non-Microsoft browser outfits hold their own or gain share, the 15% of Web sites that aren’t completely compatible with non-Microsoft browsers will come under pressure to design their sites to open Net standards. That way, Microsoft won’t be able to control how content is presented on the Web.
I personally can’t vouch for the accuracy of that number to begin with, so I’ll take it as truth with a grain of salt. I can’t imagine how someone could even produce such a number without testing every website on the internet manually, since you can’t judge compatibility by machine: “expected output” isn’t a quantitative term, and you’d need some revolutionary AI to do a task like that. Then you’d most likely need to factor in a site’s relevance; a 12-year-old’s GeoCities website shouldn’t carry the same weight as Google, for example (considering each to be 1 website). It’s actually an interesting statement. I’d love to know how WebSideStory (who came up with the stat) actually calculated it. If anyone from WebSideStory is reading and would be willing to email me a bit more on the topic, I’d love to get a better understanding of the number.
Overall it was an entertaining read, though I’d question how many really are “myths” and how many are made-up “myths” so the author had content to write about. Most of them are highly technical, and anyone who would even mention them would know how ridiculous they are. It’s like a chef believing that Extra Virgin Olive Oil has to be pressed by virgin women (for those wondering, EVOO is actually the first press, regardless of the history of the person who actually does the pressing).