V-Chip 2.0?

According to CNet:

The Senate Commerce Committee approved legislation Thursday asking the Federal Communications Commission to oversee the development of a super V-chip that could screen content on everything from cell phones to the Internet.

The article omits the fact that it’s 99.99997% sure to fail, and the committee knows that. A look at it from a technical and historical view of the Internet alone proves it. From a web developer’s perspective, though, this stuff is pretty interesting.

The V-Chip is really not a complicated device. Essentially it works on the following logic (written in JS as pseudocode, for fun):

// block anything rated above the user's configured threshold
if (content.rating > userSetting.maxRating) {
    ui.block();
}

Pretty simple, right? Well, that’s all the true “technology” does. The science of metadata decides how the rating is determined and organized (what’s “Violent”, and what’s “Gore”?). That’s the tech side of things in a nutshell.
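To make that concrete, here’s a slightly fuller sketch of the same logic with per-category ratings. The category names, scale, and thresholds are made up for illustration; they aren’t from any real spec:

```javascript
// Hypothetical per-category ratings, 0 (none) to 4 (explicit),
// loosely modeled on how TV content descriptors work.
const userSetting = { violence: 2, language: 1 };

function shouldBlock(content, settings) {
    // Block if ANY category the user cares about exceeds their threshold.
    return Object.keys(settings).some(function (category) {
        return (content.rating[category] || 0) > settings[category];
    });
}

const cartoon = { rating: { violence: 1, language: 0 } };
const slasher = { rating: { violence: 4, language: 3 } };

console.log(shouldBlock(cartoon, userSetting)); // false
console.log(shouldBlock(slasher, userSetting)); // true
```

The comparison is trivial. Everything hard about the system lives in where `content.rating` comes from, which is the rest of this post.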

Here’s where the problem lies. The rating must come from somewhere. For this system to have any sort of effectiveness, every site on the Internet needs to be accurately rated. That’s right, every site. It doesn’t matter if it’s in English, German, or Japanese. It doesn’t matter if it’s hosted in the US or in Korea. It still needs to be rated. But who does this? You could pass a law requiring content to be labeled, but that would only apply to US-based sites. Enforcement overseas is virtually impossible. Enforcement in the US is virtually impossible. According to the Pew Internet & American Life Project, in 2005 there were “more than 11 million American adults who say they have created blogs”. I’m sure that number’s higher now. That means there are at least 11 million blogs that need to be patrolled by the FCC to ensure they are labeled, and labeled correctly. Forget about the rest of the blogs in the world and their millions of different owners.

Still think this plan has a shot at working? Well, it’s been tried and pretty much failed before: the ICRA Rating System. At first it was thought every site would label itself. A quick glance around the web shows most sites don’t bother. AOL and Yahoo both do (both were big backers years ago). Microsoft was too, and even equipped IE with “Content Advisor”, but its homepage is no longer labeled. MySpace, Facebook? Despite all the criticism about safety they receive: not labeled. PBS Kids, Disney, Sesame Street, Nickelodeon? Nope, though Disney does use P3P, which also never took off. Now you’d expect the Senate, which deeply wants to keep the net safe for kids, to set a good example. Think again. Same goes for the White House. We could go on for quite a while.
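For context, self-labeling schemes like this worked by embedding a machine-readable rating in the page (PICS labels went in a meta tag), and the filter’s job was just to parse it out. Here’s a rough sketch of that parsing step. The label string is simplified and loosely PICS-shaped, and the single-letter category codes are illustrative, not the actual ICRA vocabulary:

```javascript
// A simplified PICS-style label string, roughly the shape of what a
// site might embed in a <meta http-equiv="PICS-Label"> tag.
// Format and category codes here are illustrative, not the real spec.
const label = '(PICS-1.1 "http://example.org/ratings" r (n 0 s 0 v 2 l 1))';

function parseLabel(text) {
    // Pull out the 'r (key value key value ...)' rating portion.
    const match = text.match(/r \(([^)]*)\)/);
    if (!match) return null; // unlabeled page: the common case in practice
    const parts = match[1].trim().split(/\s+/);
    const rating = {};
    for (let i = 0; i < parts.length; i += 2) {
        rating[parts[i]] = parseInt(parts[i + 1], 10);
    }
    return rating;
}

console.log(parseLabel(label)); // { n: 0, s: 0, v: 2, l: 1 }
```

Note that `null` branch: that’s what the filter gets back from MySpace, Facebook, Disney, senate.gov, and most of the rest of the web, and there’s no sensible default for it.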

Think you can automate calculating ratings for sites that don’t provide them (or lie)? Fat chance. Just ask the Susan G. Komen Breast Cancer Foundation, which is famous for being flagged as a porn site by filters. I can’t even imagine how filters would interpret something like YouTube, where the content in question is binary garbage like on most other Flash-heavy sites.
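The Komen problem is easy to reproduce. An automated rater looking at raw page text has little more to go on than keyword matching, and a word list can’t tell a medical page from an adult one. A toy version (the word list is obviously illustrative):

```javascript
// A deliberately naive keyword filter, the kind that famously
// can't tell a health charity from a porn site.
const blockedWords = ['breast', 'nude', 'xxx']; // illustrative list

function looksAdult(pageText) {
    const text = pageText.toLowerCase();
    return blockedWords.some(function (word) {
        return text.includes(word);
    });
}

// A breast cancer charity gets flagged right alongside actual porn:
console.log(looksAdult('Susan G. Komen Breast Cancer Foundation')); // true
console.log(looksAdult('Local weather forecast for Tuesday'));      // false
```

And that’s the easy case: plain text. Against video, Flash, or images, this approach has nothing to match against at all.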

The tech side is pretty easy. It’s been around for about a decade now. The precedent for enforcing US laws upon content providers overseas? That’s a new one. Enforcing laws by checking millions of websites owned by millions of people around the world? Good luck.

I’m not even going to bother with mobile phones, because since the iPhone, the precedent has been set that the phone is just a mobile browser and is subject to the same rules.