Categories
Google Networking

Google Wants To Make TCP Faster

Google has been pushing SPDY for a little while now, and so far I haven't really seen a good argument against it. Firefox 11 will ship with it, though disabled by default until the bugs are worked out. Now Google is turning its attention to TCP. Very logical.

While there are a variety of proposals to speed up TCP floating around, I wonder if Google would be better off just buying FastSoft for Fast TCP and opening it up, VP8 style. Since it's already in use on the web, Google could capitalize on it overnight. There are several TCP congestion algorithms out there, but Fast TCP seems to have the most established customer base, including the CDN Limelight, which uses it for customer uploads to its network.

Categories
Web Development

localStorage With Cookie Fallback

I mentioned the other day that localStorage can be used as an alternative to cookies. The benefit is that localStorage doesn't send its data back to the server on every request the way a cookie does. HTTP headers are uncompressed, which makes that very costly. Not every web browser out there supports localStorage, but most do.

Here's an example based on jQuery and jQuery.cookie. It's designed to turn objects into JSON and store the JSON representation; on retrieval it restores the object. It doesn't handle things like cookie expiration (it simply defaults to 365 days). I just had this around from an old project with these very specific requirements and figured I'd post it as-is in case it helps anyone and to demonstrate the idea. It isn't ideal for reuse, but for someone playing with the idea, maybe it will motivate 😉.

function storeData(type, obj){
    var data = jQuery.toJSON(obj);

    // If using a modern browser, let's use localStorage and avoid the overhead
    // of a cookie
    if(typeof localStorage != 'undefined' && localStorage !== null){
        localStorage[type] = data;
    }

    // Otherwise we need to store data in a cookie, not quite so elegant.
    else {
        jQuery.cookie(type, data, { expires: 365, path: '/' });
    }
}

function loadStoredData(type){
    var data;

    // If using localStorage, retrieve from there
    if(typeof localStorage != 'undefined' && localStorage !== null){
        data = localStorage[type];
    }

    // Otherwise we have to use cookie based storage
    else {
        data = jQuery.cookie(type);
    }

    // If we have data, let's turn it into an object, otherwise return false
    if(data){
        return jQuery.secureEvalJSON(data);
    }
    return false;
}

Pretty simple, right? It's still a key/value API.
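
For context, here's roughly how the two helpers above might be used together. The "prefs" key and the settings object are made up for illustration; they aren't from the original project.

// Hypothetical usage of storeData()/loadStoredData(). The "prefs" key and
// the default values below are examples only.
var prefs = loadStoredData('prefs');

if(!prefs){
    // Nothing stored yet, start with some defaults
    prefs = { theme: 'dark', fontSize: 14 };
}

// ...later, after the user changes a setting...
prefs.theme = 'light';
storeData('prefs', prefs);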

Categories
Mozilla

Privacy Issues Behind localStorage

Browsers need to overhaul their privacy settings to account for things like localStorage and bring control back to the user. In the days of cookies it was relatively simple for a user to wipe any identifiers (excluding IP address) from their browser. Simply clear cookies.

Firefox has two basic abilities: you can clear cookies, or you can browse and delete individual cookies. That's great, but it's not terribly clear that there's more being stored than just cookies.

Firefox Cookie Privacy

Chrome, as far as I know, has no cookie browser like Firefox has (edit: Erunno notes in the comments that you can get one via chrome://settings/cookies), but it explicitly lets you "Delete cookies and other site and plug-in data". That's pretty good.

Chrome Cookie Privacy

Today, I think Safari's UI is the closest to perfect. Each hostname shows exactly what kinds of data it has stored. My only gripe is that Safari doesn't let you see the actual contents. That's a "power-user" feature however, and I think it does an adequate job regardless.

Safari Cookie Privacy

Websites use more than just cookies these days. I discussed this a little over a year ago. The reason evercookie is controversial is that browsers don’t quite give users the level of control (real or perceived) that they expect for objects other than cookies.

Here is another use case for why this is needed. Google Analytics is used on perhaps half the websites on the internet, and it sets a cookie on each of them. That means 230 bytes added to every HTTP request for a lot of websites. Google could switch to localStorage and free up those 230 bytes. While they technically could do this, in practice it could create a firestorm of attacks against them. The problem is it would be spun as Google trying to evade cookie deletion and as a privacy violation, the same storm that evercookie created. I suspect that's why it hasn't been done to date. The truth is the Google Analytics team has done a lot to improve performance, including making the snippet entirely async. But this move would be controversial.
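
To make the idea concrete, here's a rough sketch of what that could look like: an analytics snippet keeping its visitor identifier in localStorage (no cookie, so no extra header bytes) and falling back to a cookie only where localStorage isn't available. This is purely illustrative, not anything Google actually ships; the function name, storage key, and ID format are invented, and the fallback reuses the jQuery.cookie plugin from the previous post.

// Hypothetical sketch: keep an analytics visitor ID out of the cookie jar
// when localStorage is available. Function name, key and ID format are
// made up for illustration.
function getVisitorId(){
    var key = '_visitor_id';
    var id;

    if(typeof localStorage != 'undefined' && localStorage !== null){
        id = localStorage[key];
        if(!id){
            id = String(Math.random()).slice(2) + '.' + (new Date()).getTime();
            localStorage[key] = id;
        }
        return id;
    }

    // Older browsers still pay the cookie cost on every request
    id = jQuery.cookie(key);
    if(!id){
        id = String(Math.random()).slice(2) + '.' + (new Date()).getTime();
        jQuery.cookie(key, id, { expires: 365, path: '/' });
    }
    return id;
}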

It’s no longer about “cookies”, but “user data”.

Categories
Web Development

Profiling CSS

Some interesting research regarding CSS and performance that any web developer should read. Nothing really groundbreaking, but it's good to see the reasoning behind many of the rules we often hear. It's also good to see multiple browsers releasing tools to test the performance of selectors.

Categories
Web Development

Mobile Experience On A Budget

This blog is largely read by people on traditional desktops and laptops. Its mobile usage is a bit on the low side, though that's changing. I decided I wanted to start making the mobile experience suck less, but I didn't want to go as far as serving a whole new experience for mobile. Responsive web design is interesting, but I didn't want to invest that much time in it just yet.

This is still an ongoing project, and partially an experiment but here is my game plan:

Make The Standard Design Light/Flexible

The site right now is actually pretty simple in design and structure. It's a grid layout, and everything is modular and ID/classed up. I'm a slight performance junkie and graphically impaired, so image use is pretty minimal and I've sprited what I could to keep the design as lightweight as possible. The core site itself is actually just a few requests. Everything that can benefit from it is minified/gzipped to lessen the payload down the wire. JS is only included on pages where it's needed. Light payloads and minimal requests are the name of the game.

This benefits all platforms. Even with 45 thumbnails from Project 365 images on one page, I can still load it all in about 2.5 seconds on my laptop. That's not terrible. Some say "think mobile first". I say "think performance everywhere". There's no reason performance should be limited to mobile.

Rejigger For Mobile

Step two is to adjust the site for mobile. In my case that means hiding some less useful things and making some size and layout changes to fit a smaller screen. Like I said, I'm not serving up a separate mobile site. My pages are already rather light, and saving 1 KB doesn't seem worth it to me just yet. I just want my existing site to not feel like a desktop site. This is more about usability. Performance-wise, I optimized with mobile in the back of my head while working on the desktop experience.

I didn't want to include a separate mobile stylesheet, since my changes are intended to be subtle and minimal. Besides, that's an extra request for mobile users. Instead I appended something like this to my existing stylesheet:

@media only screen and (max-device-width: 480px) {
    body  {
        min-width: 0;
        width: auto;
    }
    #page {
        min-width: 0;
        width: 100%;
    }
    /* and so on… */
}
@media screen and (max-device-width: 480px) and (orientation:landscape) {
    /* landscape specific css */
}

Like I said, I intended for my changes to be pretty subtle, and this works pretty well. The one thing it can't really handle is images. I tend to be pretty light on image use, so it's not a deal breaker for me. However, I may eventually look at better solutions in the responsive image world. For now I'll just make the editorial decision to keep image use tame.

In general my philosophy has been:

  • Does this have value in a mobile context? (no: hide it, yes: continue)
  • Can I adjust the layout/design to make this not suck on mobile? (no: hide it, yes: continue)
  • Is this more work than it’s worth? (no: do it, yes: hide it)

Final Thoughts

I've still got some more polish to do. I know <pre/> blocks don't look/feel right, and the comments area is still not quite there. The image gallery experience isn't even started. But overall it's still better than the desktop experience I was serving just hours ago.

Perhaps one day the paradigm will shift on this blog, but I don't see that happening just yet. Most of what benefits mobile also benefits the desktop experience. This approach gives me an improved mobile experience with minimal hassle. I also benefit from not needing to do the work twice, as I would if I had a separate mobile experience. That means more time to be productive.

Categories
Internet Networking

Improving DNS CDN Performance With edns-client-subnet

Several months ago I wrote about how third-party DNS services often slow you down, since a DNS query is only one part of the equation and many websites use DNS to help their CDN figure out which servers are closest (and fastest) to you. A few proposals to fix this have floated around, and one is finally making headway.

Google, Bitgravity, CDNetworks, DNS.com and Edgecast have deployed support for edns-client-subnet. The idea is pretty simple: it passes part of your IP address (only part, to keep it semi-anonymous) along with the DNS query. A server that supports this extension can use it to geotarget you and find the CDN node closest to you. Previously the best that could be done was to use the location of the DNS server, which in many cases could be far away.

Still missing is support from some heavyweights like Akamai (the largest CDN), Limelight Networks and Level 3. This is a pretty solid proposal with minimal negative implications. Only part of the origin IP address is passed, so it isn't much of a privacy invasion. In theory any website you browse could already harvest the IP you are using; this just makes part of it accessible to a partner that is already serving data on the site's behalf.

Categories
Programming Web Development

PHP’s include_once() Is Insanely Expensive

I've always heard that the include_once() and require_once() functions are computationally expensive in PHP, but I never knew how much. I tested the following on my 2010 i7 MacBook Pro using PHP 5.3.4 as shipped by Apple.

This first test relies on include_once() to keep track of whether the file has already been included:

$includes = Array();
$file = 'benchmarkinclude';

for($i=0; $i < 1000000; $i++){
    include_once($file.'.php');
}

Took: 10.020140171051 sec

This second example uses include() and in_array() to keep track of whether I've already loaded the file:

$includes = Array();
$file = 'benchmarkinclude';

for($i=0; $i < 1000000; $i++){
    if(!in_array($file, $includes)){
        include($file . '.php');
        $includes[] = $file;
    }
}

Took: 0.27652382850647 sec

For both tests, the included file contained the following computation:

$x = 1 + 1;

Lesson learned: avoid the _once variants if you can.

Update: That means something like this will theoretically be faster:

$rja_includes = Array();
function rja_include_once($file){
    global $rja_includes;
    if(!in_array($file, $rja_includes)){
        include($file);
        $rja_includes[] = $file;
    }
}
Categories
Google Web Development

Async AdSense

About a year ago I asked where the async Google AdSense was. It has finally arrived, and you need to do nothing to gain the performance benefit. Awesome!

Categories
Networking

DNS And CDN Performance Implications

I've seen various people complain about performance problems when using services like Google's DNS or OpenDNS. The reason people generally see these problems is that many large websites sit behind content distribution networks (CDNs) to serve at least part of their content, or even their entire site. With a third-party resolver, you can end up getting a sub-optimal answer, and your connection is slower than it needs to be.

I've worked on large websites and set up sites from DNS to HTML, so I've got some experience in this realm.

How DNS Works

To understand why this is, you first need to know how DNS works. When you connect to any site, your computer first makes a DNS query to get an IP address for the server(s) that will give you the content you requested. For example, to connect to this blog, your computer asks your ISP's DNS servers for robert.accettura.com and gets an IP back. Your ISP's DNS either has this information cached from a previous request, or it asks the website's DNS what IP to use, then relays the information back to you.

Schematically, that looks something like this:

[You] --DNS query--> [ISP DNS] --DNS query--> [Website DNS] --response--> [ISP DNS] --response--> [You]

Next your computer contacts that IP and requests the web page you wanted. The server then gives your computer the requested content. That looks something like this:

[You] --http request--> [Web Server] --response--> [You]

That’s how DNS works, and how you get a basic web page.
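
If it helps to see those two steps as code, here's a rough sketch using Node.js's built-in dns and http modules: resolve the hostname to an IP, then request the page from that IP. It's just an illustration of the flow described above, not anything this site actually runs.

// Illustrative only: resolve a hostname, then fetch the page from the
// resulting IP, sending the original hostname in the Host header so the
// web server knows which site we want.
var dns = require('dns');
var http = require('http');

dns.lookup('robert.accettura.com', function(err, address){
    if(err) throw err;
    console.log('DNS answered with ' + address);

    http.get({ host: address, path: '/', headers: { 'Host': 'robert.accettura.com' } }, function(res){
        console.log('Web server responded with HTTP ' + res.statusCode);
    });
});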

How a CDN Works

Now when your website gets large enough, you may have servers in multiple data centers around the world, or you contract with a service provider who runs those servers for you (most contract). This is called a content distribution network (CDN). Parts of your website, or even the entire site, may be hosted with a CDN. The idea is that if you put servers close to your users, they will get content faster.

Say the user is in New York and the server is in Los Angeles. Your connection may look something like this:

New York : 12.565 ms  10.199 ms
San Jose: 98.288 ms  96.759 ms  90.799 ms
Los Angeles: 88.498 ms  92.070 ms  90.940 ms

Now if the user is in New York and the server is in New York:

New York: 21.094 ms  20.573 ms  19.779 ms
New York: 19.294 ms  16.810 ms  24.608 ms

In both cases I'm paraphrasing a real traceroute for simplicity. As you can see, keeping the traffic in New York rather than sending it across the country is faster, since it reduces latency. And that's just within the US; imagine someone in Europe or Asia. The difference can be large.

The way this happens is that a company using a CDN generally sets up a CNAME entry in their DNS records to point to the CDN. Think of a CNAME as an alias that points to another DNS record. For example, Facebook hosts its images and other static content on static.ak.facebook.com, which is a CNAME to static.ak.facebook.com.edgesuite.net. (the trailing period is normal). We'll use this as an example from here on out…

This makes your computer do an extra DNS query, which ironically slows things down! In theory, though, we make up that time and then some by using a closer server, as illustrated earlier. When your computer sees that the record is a CNAME, it does another query to get an IP for the CNAME's target. The end result is something like this:

$ host static.ak.facebook.com
static.ak.facebook.com is an alias for static.ak.facebook.com.edgesuite.net.
static.ak.facebook.com.edgesuite.net is an alias for a749.g.akamai.net.
a749.g.akamai.net has address 64.208.248.243
a749.g.akamai.net has address 64.208.248.208

That last query goes to the CDN's DNS instead of the website's. The CDN gives back an IP (sometimes multiple) that it feels is closest to whoever is requesting it, which is the DNS server. That's the important takeaway from this crash course in DNS: the CDN only sees the DNS server of the requester, not the requester itself. It therefore gives an IP that it thinks is closest based on the DNS server making the query.

The use of a CNAME is why many large websites will 301 you from foo.com to www.foo.com: the bare foo.com must be an A record, since it can't be a CNAME, so to keep you behind the CDN they 301 redirect.

Now let's see it in action!

Here’s what a request from NJ for an IP for static.ak.facebook.com looks like:

$ host static.ak.facebook.com
static.ak.facebook.com is an alias for static.ak.facebook.com.edgesuite.net.
static.ak.facebook.com.edgesuite.net is an alias for a749.g.akamai.net.
a749.g.akamai.net has address 64.208.248.243
a749.g.akamai.net has address 64.208.248.208

Now let's trace the connection to one of these responses:

$ traceroute static.ak.facebook.com
traceroute: Warning: static.ak.facebook.com has multiple addresses; using 64.208.248.243
traceroute to a749.g.akamai.net (64.208.248.243), 64 hops max, 52 byte packets
 1  192.168.x.x (192.168.x.x)  1.339 ms  1.103 ms  0.975 ms
 2  c-xxx-xxx-xxx-xxx.hsd1.nj.comcast.net (xxx.xxx.xxx.xxx)  25.431 ms  19.178 ms  22.067 ms
 3  xe-2-1-0-0-sur01.ebrunswick.nj.panjde.comcast.net (68.87.214.185)  9.962 ms  8.674 ms  10.060 ms
 4  xe-3-1-2-0-ar03.plainfield.nj.panjde.comcast.net (68.85.62.49)  10.208 ms  8.809 ms  10.566 ms
 5  68.86.95.177 (68.86.95.177)  13.796 ms
    68.86.95.173 (68.86.95.173)  12.361 ms  10.774 ms
 6  tengigabitethernet1-4.ar5.nyc1.gblx.net (64.208.222.57)  18.711 ms  18.620 ms  17.337 ms
 7  64.208.248.243 (64.208.248.243)  55.652 ms  24.835 ms  17.277 ms

That's only about 50 miles away and as low as 17 ms of latency. Not bad!

Now here’s the same query done from Texas:

$ host static.ak.facebook.com
static.ak.facebook.com is an alias for static.ak.facebook.com.edgesuite.net.
static.ak.facebook.com.edgesuite.net is an alias for a749.g.akamai.net.
a749.g.akamai.net has address 72.247.246.16
a749.g.akamai.net has address 72.247.246.19

Now let's trace the connection to one of these responses:

$ traceroute static.ak.facebook.com
traceroute to static.ak.facebook.com (63.97.123.59), 30 hops max, 40 byte packets
 1  xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)  2.737 ms  2.944 ms  3.188 ms
 2  98.129.84.172 (98.129.84.172)  0.423 ms  0.446 ms  0.489 ms
 3  98.129.84.177 (98.129.84.177)  0.429 ms  0.453 ms  0.461 ms
 4  dal-edge-16.inet.qwest.net (205.171.62.41)  1.350 ms  1.346 ms  1.378 ms
 5  * * *
 6  63.146.27.126 (63.146.27.126)  47.582 ms  47.557 ms  47.504 ms
 7  0.ae1.XL4.DFW7.ALTER.NET (152.63.96.86)  1.640 ms  1.730 ms  1.725 ms
 8  TenGigE0-5-0-0.GW4.DFW13.ALTER.NET (152.63.97.197)  2.129 ms  1.976 ms TenGigE0-5-1-0.GW4.DFW13.ALTER.NET (152.63.101.62)  1.783 ms
 9   (63.97.123.59)  1.450 ms  1.414 ms  1.615 ms

The response this time is from the same city and a mere 1.6 ms away!

For comparison, www.facebook.com does not appear to be on a CDN; Facebook serves this content directly off its own servers (which are in a few data centers). From NJ the ping time averages 101.576 ms, and from Texas 47.884 ms. That's a huge difference.

Since www.facebook.com serves pages generated specifically for the user, putting it through a CDN would be pointless, as the CDN would have to go back to Facebook's servers for every request. For things like images and stylesheets, a CDN can cache them at each node.

Wrapping It Up

Now, the reason using a DNS service like Google's DNS or OpenDNS can slow you down is that while the DNS query itself may be quick, you may no longer be sent to the closest servers a CDN can offer. You generally make only a few DNS queries per pageview, but you may make a dozen or so requests for the different assets that compose a page. In cases where a website is behind a CDN, I'm not sure even a faster DNS service will ever pay off. For smaller sites it obviously would, since this variable is removed from the equation.

There are a few proposals floating around out there to resolve this limitation in DNS, but at this point there’s nothing in place.

Categories
Web Development

Making Websites Faster

I’ve always been somewhat of a fan of minimalism when it comes to websites. The way I figure it:

Simpler + faster = better

Lately I've become slightly obsessed with seeing how much I can tweak a website to make it perform faster. In this case it's my password generator SafePasswd.com, which I built in 2006 and overhauled in 2009. It's always been somewhat of a playground for me to try things a little differently.

The old site, circa 2006, took about 5 seconds to load, sometimes a little more if the server was under load. After an overhaul I got it down to about 3.5 seconds. A solid and respectable improvement, considering the new version is way better.

After my latest round of optimization, I'm down to 1.61 seconds with a clear cache, and I think I can get it slightly lower. The Google Page Speed score is now 95, and YSlow gives a B (83) or an A (95) for the v2 and v2 small-site rulesets respectively.

SafePasswd.com Load Time

Most of the improvements were relatively small and simple. A few required some backend changes to make it all work from a technical perspective. Even with a fairly image-centric design it’s possible to get pretty decent performance.