Categories
Google Mozilla

Quick Thoughts On Dart

Google yesterday officially took the wraps off Dart. Google stopped short of outright calling it a replacement for JavaScript, though that does seem to be one of the goals.

I’m still looking at it myself, but my first impression is that the point of another language is buried in the details of the announcement. I think this particular sentence is the focal point (emphasis mine):

  • Ensure that Dart delivers high performance on all modern web browsers and environments ranging from small handheld devices to server-side execution.

I suspect the real goal behind Dart is to unify the stack as much as possible. Web Development today is one of the most convoluted things you can do in Computer Science. Think about just the technologies/languages you are going to deal with to create a “typical” application:

  • SQL
  • Server Side Language
  • HTML
  • CSS
  • JavaScript

That’s actually a very simple stack and almost academic in nature. “In real life,” most stacks are even more complicated, especially when dealing with big data. Most professions deal with a handful of technologies; Web Development deals with whatever is at hand. I’m not even getting into supporting multiple versions of multiple browsers on multiple OSes.

Google even said in a leaked internal memo:

– Front-end Server — Dash will be designed as a language that can be used server-side for things up to the size of Google-scale Front Ends. This will allow large scale applications to unify on a single language for client and front end code.

Additionally:

What happened to Joy?
The Joy templating and MVC systems are higher-level frameworks that will be built on top of Dash.

By using one language you’d reduce what a developer needs to know and specialize in to build an application. That means higher productivity, more innovation, and less knowledge overhead.

This wouldn’t be the first attempt at this for Google, either. GWT is another Google effort to let developers write Java that’s transformed into JavaScript. GWT, however, doesn’t always work well and has limitations.

The web community has actually been working on this from the other direction via node.js, which takes JavaScript and puts it on the server side, rather than inventing a language that seems almost server-side and trying to put it in the browser.

Google still seems to have plans for Go:

What about Go?
Go is a very promising systems-programming language in the vein of C++. We fully hope and expect that Go becomes the standard back-end language at Google over the next few years. Dash is focused on client (and eventually Front-end server development). The needs there are different (flexibility vs. stability) and therefore a different programming language is warranted.

It seems like Go would be used where C++ or other high performance compiled languages are used today and Dart would be used for higher level front-end application servers as well as the client side, either directly or through a compiler which would turn it into JavaScript.

Would other browsers (Safari, Firefox, IE) consider adopting it? I’m unsure. Safari would likely have a head start, as the memo states “Harmony will be implemented in V8 and JSC (Safari) simultaneously to avoid a WebKit compatibility gap”. Presumably IE and Firefox would be on their own to implement or adapt that work.

New languages rarely succeed in adoption. On the internet the barrier is even higher.

Categories
Mozilla

view-source: Now Supports Links

A very cool change landed in Firefox 3.1: view source will now create links where appropriate (a rather old bug, I might add). I must have copied and pasted millions of URLs out of view source over the years just to look at a JS or CSS file. This is an immense help for anyone who does that often.

Just another great piece of polish for Firefox 3.1.

Categories
Google Mozilla Web Development

Usefulness + Speed = Users

As a frontend developer I’ve long argued the magic formula for a good website is:

Usefulness + Speed = Users

This is based on the fact that the best websites on the internet are pretty spartan in appearance. When you look at many of the successful ones (Google, Yahoo, Craigslist, Facebook), they’ve all taken the approach of simplicity on the frontend. They keep the user interface as minimal as possible, and they keep the technology and code as minimal as possible.

An interesting quote from CNet:

The same effect happened with Google Maps. When the company trimmed the 120KB page size down by about 30 percent, the company started getting about 30 percent more map requests. “It was almost proportional. If you make a product faster, you get that back in terms of increased usage,” she said.

Emphasis mine.

Just goes to show that faster things become more than useful to users. They become a convenience. Users don’t really care how it looks, or they would have switched away from boring Google a long time ago. They just find it so convenient and quick that they can’t stop using it.

I suspect this is why digital clocks are so popular.

Roman Numeral Analog Clock

Most people find an analog clock to be “classy”, particularly when it has Roman numerals. But practically speaking, it isn’t as quick to read for most people, since we rarely deal with Roman numerals. The solution used to be Arabic numerals, to increase usability and speed:

Arabic Numeral Analog Clock

This is better, but not perfect. It’s still slow to read, and you’re estimating the minutes. These days we have the technology to produce low-cost digital time readouts with Arabic numerals. These are more accurate, since they show the minutes (and maybe even seconds), and can be read at a glance with almost no effort.

Arabic Digital Clock

Despite hardly looking fancy, this is what you see in most train stations, airports, etc. The older clocks are still around, but mostly for aesthetic purposes. People are willing to sacrifice looks for convenience. That’s why they walk around with digital watches rather than classier ones. Both can be found cheap, but one can easily be read (even with poor vision, and in the dark).

Simplicity always rules. Unless you’re a nerd with a binary clock (which is cool).

I suspect this rule also holds true for software. If it’s faster, people are more inclined to use it. People moved from IE 6 to Firefox because it was faster. Given that Firefox 3 is even faster… I’m hoping this trend will be proven yet again with an improved adoption rate.

Another upcoming test of this principle will be Apple’s 3G iPhone. Will the average number of minutes spent browsing the web increase with the additional speed of a 3G network? Will faster performance make people use the device more? I suspect so. I also think it will increase adoption, as many people were turned off by the idea of spending that much for EDGE. With 3G, that’s a different story.

It’s really pretty interesting stuff. People often associate usability with user interface design, and never with performance. But the data really does seem to point to performance being one of the easiest ways to make a product more usable.

Images: Grand Central Terminal clock © 2004 Metropolitan Transportation Authority, Clock in Kings Cross, LCD Clock Grey via Wikipedia

Categories
Blog Personal

5 Years

Despite actually knowing it was coming this year, it still sounds strange that I’ve had this site up and running for 5 years now. That’s half a decade. In all honesty, when I started I didn’t expect it to last very long. I’ve had the domain for nearly 10 years.

From Then To Now

This started out as a few static pages over 5 years ago and eventually turned into a blog in March 2003, when I was a college freshman. Generally I’ve kept the format pretty much the same; the most notable change was switching from more “random” posts to mostly tech-related posts in recent years. That wasn’t intentional, it’s just how it worked out. The reception to that change has been overwhelmingly positive, though I’ve been asked from time to time to bring some humor back.

Just since graduating college in 2006, I’ve been mentioned on numerous blogs, made the Digg homepage, been quoted on Ars Technica (more than once), and been linked on Gruber’s Daring Fireball. Most of that is pretty recent, too. Daily traffic has been increasing pretty steadily during this period.

Now 1,323 posts and 3,481 comments later, I’ve been contemplating what I want to do next…

Where it’s going

I’ve given a fair amount of thought to what I want to do here at the 5 year mark. I’ve decided to make the following changes slowly over the upcoming months, as I think that’s a better approach:

  • New Design – The current design has been live since about 2005, with only minor tweaks. It’s too narrow for many images I’ve wanted to use, and there’s a lot of wasted space. The new design, already in the works, is optimized for 1024×768. I’m also pushing content up further by removing that stale image from the header; I initially thought I’d change it more often, but that’s not happening. I had hoped to have the redesign done for today, but it didn’t happen. Instead, “it’s done when it’s done”.
  • Features vs. Regular Posts – The biggest change I want to make from a content perspective is to distinguish the more notable posts from the daily posts. These tend to number about a dozen a year. I want to treat them differently, both in terms of development, giving them more time and thought, and in how I present them. I’ve yet to decide exactly how to accomplish this.
  • Post Regularly – I’ve always posted in bursts; it’s become a dirty habit. I’m going to give it yet another try and be more consistent. I think posting regularly makes things flow better and results in a better thought process. Will my mind work like this? I don’t know. But if it can, that would be awesome.
  • Going Off Topic – This is clearly a more technical blog (where else do you find holiday SQL and other code jokes), but it’s time to bring some off topic stuff back. Not sure when this will start, but it’s a goal of mine.
  • More On Topic – I spend a fair amount of time lately on current news. I’m thinking of slowly downplaying that a little and focusing on real development stuff.

There are other changes planned, some large, some small, but these are the ones I think are worth mentioning right now.

The focus on technology, in particular web development, and business will remain; I have no intention of changing that. I just want to improve how I do it, and what surrounds it.

Before someone asks, I’ll still cover all Firefox and Thunderbird related news both big and small as I always have. Nearly half the posts on this blog fall into that category. They are among the most popular ones, and some of my personal favorites.

Like anything else in the world a website falls into three categories: it grows and matures, it dies, or it sits like a rock. I like the idea of growing and maturing.

So here’s to the next 5 years.

Categories
Mozilla Web Development

The Winner For Most Embedded Is: SQLite

So the format war of Blu-ray vs. HD-DVD is over. There are still several other rather significant battles going on in the tech world right now that aren’t Microsoft vs. Apple or Yahoo vs. Google. For example:

  • Adobe AIR vs. Mozilla Prism vs. Microsoft Silverlight
  • Google Gears vs. HTML5 offline support
  • Android vs. iPhone SDK vs. Symbian
  • Ruby On Rails vs. PHP

Not every case will have a true “winner”. That’s not really a bad thing. Choice is good. In some cases they will merge to form one standard, such as what’s likely for offline web applications.

What is interesting is that SQLite really dominates right now: Adobe AIR, Mozilla Prism, Google Gears, Android, the iPhone SDK (likely through the Core Data API), Symbian, Ruby On Rails (the default DB in 2.0), and PHP 5 (bundled, but disabled in php.ini by default). It’s becoming harder and harder to ignore that SQL survived the transition from mainframe to server, and is now going from server to client.

No longer is the term “database” purely referring to an expensive RAID5 machine in a datacenter running Oracle, MySQL, DB2 or Microsoft SQL Server. It can now refer to someone’s web browser, or mobile phone.

This has really just begun to have an impact. The availability of good information storage, retrieval, and sorting means far fewer of the poorly concocted storage solutions we see today, and much better applications. Client side databases are the next AJAX.
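
For a sense of how low the barrier already is, here’s a minimal sketch of what the SQLite bundled with PHP 5 looks like through PDO. The file name and schema are made up for illustration, and it assumes the pdo_sqlite extension is enabled in php.ini:

<?php
// A local, serverless database: just a file on disk, no database server required.
$db = new PDO('sqlite:' . dirname(__FILE__) . '/notes.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Create a table on first run.
$db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');

// Prepared statements work just like they do against a "real" database server.
$stmt = $db->prepare('INSERT INTO notes (body) VALUES (:body)');
$stmt->execute(array(':body' => 'SQLite is everywhere'));

foreach ($db->query('SELECT id, body FROM notes') as $row) {
    echo $row['id'] . ': ' . $row['body'] . "\n";
}
?>

The same engine and the same SQL are what AIR, Gears, and the mobile platforms above embed; only the wrapper API differs.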

Edit [2/27/2008 9:14 AM EST]: Added Symbian, since they also use SQLite. Thanks Chris.

Categories
Internet

W3C On DTD Perversions

According to the W3C Systeam’s blog, there’s a lot of poorly designed software out there. It’s pretty rare that something has a legitimate need to pull down a DTD in order to work, and software should never be requesting one on a very frequent basis; a DTD is a very cacheable asset. The post includes some pretty impressive stats too:

…up to 130 million requests per day, with periods of sustained bandwidth usage of 350Mbps, for resources that haven’t changed in years.

They also make a few requests that really all developers should follow. Here’s my summary, with a rough sketch of a well-behaved fetcher after the list:

  • Cache as much as possible, to minimize your impact on others (not to mention improve your own performance).
  • Respect caching headers.
  • Don’t fetch what you don’t need.
  • Identify yourself. Don’t use a generic UA.
  • Try not to suck.
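
To make that concrete, here’s a rough PHP sketch of the kind of client the W3C is asking for: it keeps a local copy, sends a conditional GET so an unchanged DTD comes back as a 304, and identifies itself with a real user agent. The helper name, UA string, and cache path are my own inventions for illustration:

<?php
// Hypothetical helper: fetch a DTD (or any rarely-changing resource) politely.
function fetch_cached($url, $cacheFile)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Identify yourself instead of hiding behind a generic UA.
    curl_setopt($ch, CURLOPT_USERAGENT, 'ExampleValidator/1.0 (+http://example.tld/about)');

    // Conditional GET: only transfer the body if it changed since our cached copy.
    if (file_exists($cacheFile)) {
        curl_setopt($ch, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE);
        curl_setopt($ch, CURLOPT_TIMEVALUE, filemtime($cacheFile));
    }

    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($code == 200 && $body !== false) {
        file_put_contents($cacheFile, $body); // refresh the local copy
    }

    // On a 304 (or a failed request), fall back to whatever is already cached.
    return file_exists($cacheFile) ? file_get_contents($cacheFile) : false;
}

$dtd = fetch_cached('http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd', '/tmp/xhtml1-strict.dtd');
?>

Better still, ship local copies of the handful of DTDs you actually need and skip the network entirely.
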
Categories
Mozilla Web Development

Meta Stupidity

As Robert O’Callahan, John Resig, and Anne van Kesteren all point out, this idea of using a meta tag to select a rendering engine is bad. Here are my personal thoughts on the issue, not as a browser developer but as a web developer.

Essentially the argument by the IE team is this: rather than fix the problem, let’s create a larger problem so the smaller one isn’t very noticeable.

Yeah, that’s how I parsed the blog post. For anyone who disagrees, perhaps I interpreted it wrong because they didn’t select the correct parser, since they didn’t include the following:

<meta http-equiv="X-UA-Compatible" content="IE=8;FF=3;raccettura=serious;OtherUA=4" />

All joking aside, it’s an insane idea guaranteed to set things back.

Categories
Mozilla Web Development

Geek Reading: High Performance Web Sites

So I decided to do a little book shopping a few weeks ago, and one thing I purchased was High Performance Web Sites: Essential Knowledge for Front-End Engineers (affiliate link). At its core it’s essentially a 14-rule guide to making faster websites. I don’t think any of the rules are new or innovative, so anyone looking for something groundbreaking will be sorely disappointed. I don’t think the target audience has that expectation though. It’s still a rather practical book for any developer who spends a lot of time on the front end of things.

It gives many great examples of how to implement the rules, as well as suggestions based on what some of the biggest sites on the web are doing (including Yahoo, the author’s employer). I found it pretty helpful because it saves hours of research on what other sites are doing to improve their performance. For that reason alone it’s a worthwhile book to check out. For each rule there’s enough discussion to help you decide whether you can implement an improvement on your own site or not. Most sites are limited in what they can actually do by legacy systems such as the CMS, by processes (including human ones), and by their audience. Unless you’ve got a serious budget, you likely fail rule #2 (use a CDN) right off the bat. Regardless, there are likely a few tips you can take advantage of. It’s also a very fast book to get through.

Most steps are pretty quick to implement, provided they are feasible in your situation. Overall it’s one of the best “make it better” tech books I’ve seen regarding web development, and one of the few that actually seemed worth purchasing (and I did). The majority of the tips require a somewhat tech-savvy approach to web development; the book isn’t oriented much towards web designers (with the notable exception of reducing the number of requests through CSS and better use of images) or casual webmasters. It’s best suited for those who understand the importance of HTTP headers but could use some help deciding on best practices, and for those who want to know how the small things add up.

Interestingly enough, I learned about the book by trying the YSlow extension, which essentially evaluates a page against the rules suggested in the book. Interesting from a marketing perspective, I guess. Overall this blog evaluates OK (about as well as it ever will, considering I’m not putting it on a CDN anytime soon), though I guess I could add some Expires headers in a few places.
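
For anything served through PHP, that last bit is only a couple of lines. This is just a sketch of the far-future expiration the book recommends; the 30-day lifetime and the style.css filename are arbitrary choices of mine, not the book’s:

<?php
// Sketch: serve a stylesheet with far-future expiration headers.
$lifetime = 30 * 24 * 60 * 60; // 30 days, in seconds (arbitrary)

header('Content-Type: text/css');
header('Cache-Control: public, max-age=' . $lifetime);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');

readfile('style.css');
?>

For truly static files it’s easier to let Apache handle it with mod_expires, but the idea is the same: tell the browser it doesn’t need to ask again.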

Categories
Open Source Web Development

Snoopy’s Relative Redirect Bug

Snoopy is a PHP class that automates many common web browsing functions, making it easier to fetch and navigate the web using PHP. It’s pretty handy. I found an interesting bug recently and diagnosed it this afternoon.

If you navigate to a 301 or 302 redirect in a subdirectory you can get something like this:

HTTP/1.1 302
Date: Sat, 13 Oct 2007 20:26:46 GMT
Server: Apache/1.3.33 (Unix)
Location: destination.xml
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8

The key thing to pay attention to here is Location: destination.xml. Say your initial request was to:

http://somesite.tld/directory/request.xml

Our next request based on the redirect should be to:

http://somesite.tld/directory/destination.xml

Instead, Snoopy appends the location directly to the hostname, dropping the directory and resulting in an incorrect request:

http://somesite.tld/destination.xml

This behavior is correct when the first character of the redirect location is a “/”. In this case it is not, which makes the resulting request incorrect. The following patch I wrote corrects this behavior. As far as I can tell (I haven’t read every word of the spec, but many chunks over the years), the HTTP 1.1 spec, RFC 2616, only dictates that a URI be provided; it doesn’t seem to require full URLs. (See the comments for follow-up discussion on the specs. My conclusion is that absolute URIs are best practice but not required.) I wouldn’t call this a very common practice, but it does exist in the wild.

--- Snoopy.class.php    2005-11-08 01:55:33.000000000 -0500
+++ Snoopy-patched.class.php    2007-10-13 16:10:38.000000000 -0400
@@ -871,8 +871,18 @@
                                // look for :// in the Location header to see if hostname is included
                                if(!preg_match("|\:\/\/|",$matches[2]))
                                {
                                        // no host in the path, so prepend
                                        $this->_redirectaddr = $URI_PARTS["scheme"]."://".$this->host.":".$this->port;
+                                       // START patch by Robert Accettura
+                                       // Make sure to keep the directory if it doesn't start with a '/'
+                                       if($matches[2]{0} != '/')
+                                       {
+                                               list($urlPath, $urlParams) = explode('?', $url);
+                                               $urlDirPath = substr($urlPath, 0, strrpos($urlPath, '/')+1);
+                                               $this->_redirectaddr .= $urlDirPath;
+                                       }
+                                       // END patch by Robert Accettura
+
                                        // eliminate double slash
                                        if(!preg_match("|^/|",$matches[2]))
                                                        $this->_redirectaddr .= "/".$matches[2];

Code provided in this post is released under the same license as Snoopy itself (GNU Lesser General Public License).

Hopefully that solves this problem for anyone else who runs across it. It also teaches a good lesson about redirects. I bet this isn’t the only code out there that incorrectly handles this. Most redirects don’t do this, but there are a few out there that will.
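
For completeness, here’s roughly how you’d exercise the patched class against the hypothetical URLs above. fetch(), response_code, and lastredirectaddr are standard Snoopy members; the expected result is based on the patch, so treat this as a sketch rather than a test suite:

<?php
// Quick check of the patched behavior against a relative redirect (hypothetical URL).
require_once 'Snoopy-patched.class.php';

$snoopy = new Snoopy;
$snoopy->maxredirs = 5; // allow Snoopy to follow redirects

$snoopy->fetch('http://somesite.tld/directory/request.xml');

// With the patch, a "Location: destination.xml" response should resolve within the
// same directory (http://somesite.tld:80/directory/destination.xml) instead of the site root.
echo $snoopy->lastredirectaddr . "\n";
echo $snoopy->response_code . "\n";
?>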

Categories
Mozilla

Firefox Mobile

I am really glad to see the new Mozilla Mobile initiative. Mozilla 2 is a great time to undertake most of these changes. The thing that really sucks about developing for mobile devices is that the browsers are pathetic at best (with the notable exception of the iPhone). Wireless speed is still an issue in some cases, but with 3G coming about, it’s not the biggest concern if you can manage to keep things slim. XUL on mobile will be very interesting. If done right, it would allow for client side applications that don’t suck, yet have the lowest barrier to entry (JS+XML = easy). Not to mention you can target a bunch of devices with one download and code base. Don’t forget you’d still be able to do rather realistic debugging on your desktop.

Hopefully by the time this all comes around, data charges for mobile will have dropped significantly. The iPhone is still $60 for the cheapest plan, and if you need more than 450 minutes of voice you’ll be spending even more. While interest in the iPhone is high, between hardware and plan costs I think it’s still too high to attract the masses. There’s still time: Firefox 3 isn’t even out yet, and Mozilla 2 is still a little while away. I suspect these prices will drop as other providers try to compete with the iPhone. A price war is very likely.

One question remains: will it run on the gPhone?