I just noticed that the Acid2 test is today’s featured article on Wikipedia. It is a well-done article that even shows a timeline of when browsers became compliant and screenshots of various browsers’ status. The Acid3 article is also rather good.
Loren Brichter is the author of the popular Twitter application Tweetie, an iPhone-only application until the Mac version was released on Monday. MacWorld has a great little interview with Loren. One thing I really admire is that Loren really understands how to build a good application. Performance, ease of use, and simplicity are all taken into account, not just features and toys.
I thought this particular nugget was the highlight though:
…AIR apps are like modern day Java applets… sure, they run on every platform. But they also suck on every platform.
I’ve yet to find an Adobe AIR application I like, even though several have great ideas behind them. Even on Windows, where I presume AIR has its biggest market share, they all look strange, the UI is garbage, and the performance is abysmal. On the Mac it gets even worse. Creating a Mac theme won’t help, as my expectations for a Mac UI are different than they are on Windows or Linux. Java apps have the same issues.
I think this is why more and more “applications” are becoming web based. If you’re going to feel awkward and unnatural to the user anyway, why even bother with the installation barrier? Why not just be web based so users don’t have to download and install anything? As awkward as web applications may be, those that add Adobe Flash tend to make the problem worse by layering more strange-feeling UI onto the application. Adobe Flash does do good video, which is a big reason YouTube became popular, but it’s really no replacement for a user interface. Hopefully in 2017 when HTML5 is wrapping up we’ll have this problem solved.
If you’ve held a unibody MacBook and a previous-generation model, you know the new unibody is a substantial improvement on an already solid frame. Even with that knowledge I’m surprised to see these pictures of a MacBook that survived Turkish Airlines Flight 1951, which crashed outside of Schiphol in February.
Noteworthy is that rather than bending sharply at one point, which would have cracked the motherboard and potentially crushed the hard drive, the frame bent only a few degrees. It looks like the board stayed at least mostly intact, at least enough to boot. The display is pretty busted, but that’s to be expected.
There’s a fair amount of controversy regarding Phorm, a company that plans to target advertising by harvesting information via deep packet inspection. They are already in talks with several ISPs. I’ll leave the debate over Phorm from a user perspective for someplace else.
They claim to offer ways to let websites opt out of their tracking, but it’s a double-edged sword: they don’t play nicely with a standard robots.txt file. Take a look at what they are doing here:
The Webwise system observes the rules that a website sets for the Googlebot, Slurp (Yahoo! agent) and “*” (any robot) user agents. Where a website’s robots.txt file disallows any of these user agents, Webwise will not profile the relevant URL. As an example, the following robots.txt text will prevent profiling of all pages on a site:
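The example file referenced in the quote is missing above. Based on Phorm’s description of honoring the “*” (any robot) user agent, a robots.txt that disallows all robots, and would therefore prevent Webwise from profiling any page on the site, looks like this:

```
User-agent: *
Disallow: /
```

The catch, of course, is that this also tells every legitimate crawler to stay away.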
Rather than use a unique user agent they are copying that of Google and Yahoo. The only way to block them via a robots.txt file is to tell one of the two largest search engines in the western world not to index your site. This seems fundamentally wrong.
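A quick sketch of why this is hard to work around: robots.txt rules match on the User-agent token alone, so any rule aimed at a crawler that reuses Googlebot’s identity applies equally to the real Googlebot. Using Python’s standard robotparser module with a hypothetical robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt intended to block a crawler that honors
# Googlebot's rules. Matching is by User-agent token only, so this
# rule stops the real Googlebot too.
rp = RobotFileParser()
rp.parse([
    "User-agent: Googlebot",
    "Disallow: /",
])

# The real search engine crawler is now locked out as well:
print(rp.can_fetch("Googlebot", "http://example.com/page"))      # False
# Agents with no matching rule default to allowed:
print(rp.can_fetch("SomeOtherBot", "http://example.com/page"))   # True
```

There is simply no token in the file that distinguishes the copycat from the genuine crawler.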
There is an email address where you can provide a list of domains to exclude, but that requires manual intervention and updating the list every time you create a site. This obviously doesn’t scale.
Now I’m curious: is piggybacking off of another company’s user agent considered a trademark violation? From what I understand they aren’t broadcasting it, just honoring it. If I were Google or Yahoo I’d be pretty annoyed, particularly Yahoo, since there are websites that will just block Slurp given Google’s dominance in search. Yes, there are many user-agent spoofing products out there (including wget and curl), but nobody to my knowledge is crawling web pages for a commercial purpose while hiding behind another company’s name.
robots.txt is a somewhat flawed system, as not all user agents even obey it (sadly), but it’s one of the only defenses available short of actually blocking requests.
This is the second literal video I’ve mentioned here. I still think it’s an interesting concept. This one is particularly amusing since it’s The Monkees, and their video is indeed full of things to make fun of 40 years later. Enjoy!
Yes, this is the third Baby Got Back post on this blog. Why? Because I find these parodies of the most bizarre song ever made to be quite amusing. Enjoy. This one is a part of a Burger King promotion (featuring The Burger King himself) and SpongeBob. This is a new level of bizarre.
Internet companies have the unique ability to scale more quickly than any other industry on earth. Never before has a company been able to go from nothing more than an idea to being in the living rooms of millions around the globe in a matter of hours. While this introduces seemingly unlimited opportunities to grow, it also allows for exponential waste if a company isn’t careful. It’s interesting to see how they scale. Scaling businesses in many ways isn’t very different from scaling servers and software stacks.
Started in 1907 and adopting the name United Parcel Service in 1919, UPS has no real “high tech” background unless you include the Ford Model T. That doesn’t mean it couldn’t become more efficient. UPS has made a science of the delivery business. For example, it’s famous for its “no left turns” policy. Simply put, they found that avoiding left turns means less time waiting at lights, which means less fuel is wasted. The route optimization, formerly done by humans and now computerized, saved them 3 million gallons of fuel in 2007 alone. Let’s do the math:
Assuming they run 100% diesel at an average cost of $2.87/gallon in 2007 [doe], multiplied by 3 million gallons, that’s $8.61 million saved by trying to avoid left turns.
Not bad for a souped up mapping application.
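For the skeptical, the savings figure checks out (the fuel volume and diesel price are the numbers cited above; the rest is arithmetic):

```python
# Figures from the post: 3 million gallons saved, $2.87/gallon (2007 average).
gallons_saved = 3_000_000
price_per_gallon = 2.87  # USD

savings = gallons_saved * price_per_gallon
print(f"${savings:,.2f}")  # $8,610,000.00
```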
By having their drivers do things like turning off the ignition while unbuckling their seat belt, and scanning for the doorbell while walking towards the door (it’s easier to spot from a distance than up close), they can shave time off of their routes.
Then of course there’s package tracking. While customers might like to know in what city their weight loss tapes are sitting, tracking systems help reduce loss and monitor package routing for optimal efficiency.
Being the largest search engine, a large ad network, an email provider, an analytics firm, a mapping service, a video site, and whatever else they do means Google needs a ton of servers. Cramming servers into data centers and keeping them cool to prevent hardware failures is a complicated and expensive task. Keeping the whole thing powered is also really expensive. Google has scrutinized its server designs to eliminate all possible waste. The result is that Google gets more horsepower at a lower cost than its competitors, which in turn means it can do more at a lower cost than they can. I won’t discuss Google in too much detail since they did a great job themselves recently and I mentioned it the other day in another blog post: Google’s Data Center Secrets.
Amazon has long been improving efficiency by using data collection and analysis to encourage their customers to spend more. Their algorithms for recommending related products you might be interested in are among the best out there. Their ordering system is streamlined to prevent customers from bailing before completion. Their products are SEO’d to appear at the top of Google searches. That doesn’t mean Amazon can’t improve other parts of their business.
Amazon several months ago started a Frustration-Free Packaging program. Here’s how they describe it:
The Frustration-Free Package (on the left) is recyclable and comes without excess packaging materials such as hard plastic clamshell casings, plastic bindings, and wire ties. It’s designed to be opened without the use of a box cutter or knife and will protect your product just as well as traditional packaging (on the right). Products with Frustration-Free Packaging can frequently be shipped in their own boxes, without an additional shipping box.
The key here is “can frequently be shipped in their own boxes”. By shipping the product’s own box rather than repacking it, they can skip a step in their warehouses (and the packaging materials that go with packaging something for delivery). This also lowers the weight, as those extra boxes don’t weigh 0 oz. The frustration-free packaging is also the perfect shape for efficiently filling trucks and strong enough not to crush easily, lowering returns due to damage.
Amazon now even has a feedback form [login required] for users to share what they think of their package. This has the added bonus of helping further reduce the inefficient shipping practices so common right now.
Amazon has also done a significant amount of work on their infrastructure to make their servers scale well using tech such as EC2 and S3. By selling capacity to other companies they are able to take advantage of economies of scale as well as diversify their business beyond just retail. Of course they are planning their data centers to have access to cheap power.
These aren’t haphazard attempts at increasing efficiency; these are well-calculated, engineered approaches to removing even the smallest inefficiencies, with the knowledge of how they compound as operations scale. Aren’t they clever?
Thunderbird is not dying. Great work has already taken place by some talented developers. Need proof? Check it out (though take note it’s a beta and shouldn’t be used in a production environment, yada, yada, yada). That said, it’s been a long road to 3.0, but progress is being made.
You can also take a look at Postbox (run by some ex-Mozilla folks), based on the same core code with some pretty innovative features.
I buy a decent amount of stuff online, both physical goods and services from various vendors. It amazes me how few get the order confirmation and shipment notification emails right. Most companies do a downright awful job.
An order confirmation should be sent shortly after the order has been placed and the credit card has been authorized. It should contain the following information:
A shipment notification should be sent for each day something ships. All shipments from all warehouses should appear in 1 email. For example if I order 3 things and it ships from 3 warehouses on 1 day, I expect 1 email. If I order 3 things and it ships over 2 days from any number of warehouses I expect 2 emails, one each day. If there is more than one package, I expect each to be listed in the email.
The email should contain the following:
It makes things so much easier the closer companies get to following this. Most companies get about 80% of the list right; only a select few get this done correctly 100% of the time. The closest I’ve seen to date is ThinkGeek, which has been pretty close to perfect every time I order.
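The one-email-per-ship-date rule described above boils down to a simple grouping step. A minimal sketch (all names here are hypothetical; this illustrates the batching logic, not any vendor’s actual system):

```python
from collections import defaultdict
from datetime import date

def batch_notifications(shipments):
    """Group shipments so exactly one email goes out per ship date.

    shipments: list of (ship_date, warehouse, tracking_number) tuples.
    Returns a dict mapping each ship date to every package shipped that day.
    """
    by_day = defaultdict(list)
    for ship_date, warehouse, tracking in shipments:
        by_day[ship_date].append((warehouse, tracking))
    return dict(by_day)

# Three items, two warehouses, two ship dates -> exactly two emails.
emails = batch_notifications([
    (date(2009, 4, 1), "KY", "1Z001"),
    (date(2009, 4, 1), "NV", "1Z002"),
    (date(2009, 4, 2), "KY", "1Z003"),
])
print(len(emails))                    # 2 emails, one per ship date
print(len(emails[date(2009, 4, 1)]))  # 2 packages listed in day one's email
```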
Big news today is that Google “unveiled” (more like confirmed) some data center secrets:
It has been known for years that Google has been building its own servers rather than buying from a vendor. They have defended this on the grounds that their servers are more efficient and more customized for their needs than anything they could buy. They cut out things like the video card, which does nothing but add a point of failure and waste power. They put a battery on the server itself rather than have a UPS for the rack, having found it to be cheaper and more efficient. They also hang the power supply away from the rest of the system itself, presumably for cooling. This actually isn’t shocking since it’s been leaked several times before, though this is the first time that I’m aware of Google speaking publicly about their design in this much detail.
Apparently Google has been using shipping containers as data centers since 2005. It’s been known for a long time that Google was interested in the idea (as were other companies), but this is the first confirmation that they have actually been using them for a while. 1,160 servers per container utilizing 250 kilowatts of power = 780 watts per square foot. Very impressive.
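Those numbers hang together if you assume a standard shipping container footprint (the 40 ft by 8 ft dimensions are my assumption; the server count and power draw are from Google’s presentation):

```python
# Figures from the post plus an assumed standard container footprint.
servers = 1_160
total_watts = 250_000
container_sqft = 40 * 8  # assumed 40 ft x 8 ft container = 320 sq ft

print(round(total_watts / container_sqft))  # 781 W/sq ft, matching the quoted ~780
print(round(total_watts / servers))         # 216 W per server
```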
I guess it’s only a matter of time before we see commercial servers, and perhaps even some desktops with power supplies that have their own batteries.
Update [4/11/2009 @ 5:00 PM EST]: Google has a blog post up including video of the summit.