The NYT Discovers That, Lo and Behold, Web Filters Don't Work

from the finally-figuring-these-things-out dept

Web filters are seen by lots of people as some sort of silver bullet for so many of the ills they see on the internet, whether it’s stopping piracy or blocking child porn or just “cleaning up the internet” in general. But there’s just one small problem with filters: they don’t work. Despite claims from politicians and other groups, they simply aren’t effective, and often end up blocking desirable content while letting undesirable stuff flow through. Given the long history of filter failures, it’s a little surprising to see people who seem shocked that filters don’t work. The latest example comes from The New York Times, which has discovered that YouTube’s Safety Mode filters don’t really work at all. The company’s weak defense of its poor filters seems more like a shrug of the shoulders than anything, creating an impression that the filters are there for appearances and little else. The NYT does deserve some credit, though, for recommending that parents take an active role with their kids in helping them determine for themselves what’s inappropriate viewing material on YouTube. That’s really the bottom line: you can’t expect filters to replace parenting.

Companies: nytimes


Comments on “The NYT Discovers That, Lo and Behold, Web Filters Don't Work”

Anonymous Coward says:

I remember back in the 90s when filters were first coming out. I was at a friend of a friend’s house or something, and she was talking about this new Net Nanny thing (or whichever one it was) and how great it was and how the kids could be safe now. I walked over to the computer and typed in “sex.com”. It went through.

Those filtering programs seemed cool for about 30 seconds when they first came out. Then I realized how worthless they were. And study after study and article after article confirmed it, and expanded on the reasons why they suck.

PaulT (profile) says:

The problem with web filters is that they can *never* find the correct balance between blocking “bad” content and allowing legitimate sites.

Those that work on a blacklist/whitelist structure will always fail because so many new sites appear every single day. They can never keep up, while some have a tendency to block perfectly legitimate hosts because one or two pages (or even hyperlinks) violate some form of moral code.
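To make that concrete, here’s a toy sketch (purely illustrative, with hypothetical domain names, not any vendor’s actual code) of an exact-match blacklist. A domain registered this morning sails straight through, because nobody has reviewed it yet:

```python
# Toy exact-match blacklist. The entries are hypothetical; the point
# is that the list only knows about domains someone already reviewed.
BLACKLIST = {"badsite.example", "warez.example"}

def is_blocked(domain: str) -> bool:
    """Return True only if the domain is already on the list."""
    return domain in BLACKLIST

print(is_blocked("badsite.example"))         # True: already reviewed and listed
print(is_blocked("brand-new-site.example"))  # False: registered today, sails through
```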

Those that filter on other criteria may keep up with new sites more easily, but they are more error prone – block mentions of “sex”, and nobody in Middlesex can Google their own county; block mentions of breasts and no woman can look for information on preventing breast cancer or donate to a breast cancer charity, and so forth.
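That’s the classic Scunthorpe problem, and a few lines of hypothetical keyword matching are enough to show it: substring checks can’t tell a banned word from an innocent word that happens to contain it.

```python
# Toy keyword filter with a hypothetical banned-word list, showing how
# substring matching blocks innocent queries (the "Scunthorpe problem").
BANNED_WORDS = ["sex", "breast"]

def keyword_blocked(query: str) -> bool:
    """Block any query containing a banned word as a substring."""
    q = query.lower()
    return any(word in q for word in BANNED_WORDS)

print(keyword_blocked("Middlesex county council"))  # True: "sex" hides inside "Middlesex"
print(keyword_blocked("breast cancer charity"))     # True: health information blocked too
```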

My workplace uses Websense to block certain traffic, but it’s full of holes. GMail is blocked via gmail.com, but mail.google.com works fine. Twitter is blocked, but the Firefox Echofon extension gets through no problem. It’s so full of holes it’s ridiculous, yet my company pays them thousands per year for “protection”…
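That kind of hole falls naturally out of blocking by exact hostname. Here’s a toy illustration (an assumption about how such leaks arise in general, not Websense’s actual logic): blocking one name for a service does nothing about alternate names that reach the same place.

```python
# Toy exact-hostname blocklist: a hypothetical rule set blocking the
# obvious names while alternate hostnames for the same service pass.
BLOCKED_HOSTS = {"gmail.com", "twitter.com"}

def host_blocked(hostname: str) -> bool:
    """Block only if the hostname matches a rule exactly."""
    return hostname in BLOCKED_HOSTS

print(host_blocked("gmail.com"))        # True: the obvious front door
print(host_blocked("mail.google.com"))  # False: same inbox, different name
```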

Anybody who uses these for any purpose other than “plausible deniability” is a fool. Anybody who uses these as a catch-all solution to block child porn or to block objectionable content from children instead of actual parenting should be shot.

Pete Austin says:

NYT fails to disclose Conflict of Interest

Apple has been giving the New York Times some valuable free advertising lately. Apple is also in a proxy war with Google over Android. And here is the NYT attacking another Google property.

It is shameful that the NYT doesn’t mention this conflict of interest in the article.

http://www.guardian.co.uk/media/pda/2010/mar/08/apple-ipad-ad
http://gadgetwise.blogs.nytimes.com/2010/04/16/is-youtubes-safety-mode-safe-not-very/?partner=rss&emc=rss
http://www.guardian.co.uk/technology/2010/mar/02/apple-sues-htc-iphone-patents

Jason Airlie (user link) says:

Filters as a reminder

At my office I view the filters more as a reminder to employees that they are at work, and where they go on the net is being logged. I set the block bar pretty high so they don’t see the filter in action very often, but when they do it gives them that little kick that this isn’t your home network. We don’t even look at the logs unless we have a specific issue.

I certainly don’t rely on our web filter to block all improper surfing, that is impossible. Improper surfing is a management issue not a technology issue.

John Fenderson (profile) says:

Re: Filters as a reminder

In my office, I don’t block any employees from anything at all. I truly don’t care if they’re surfing, checking email, or whatever, so long as they aren’t breaking any laws. My employees are adults, and I treat them as such.

I ensure productivity by having deadlines. If you meet deadlines with quality work, that’s all that matters. If you don’t, I don’t care why (with certain exceptions — illness, etc.) — you need to find another job.

Big Mook (profile) says:

Filtering at work vs home

In my company, we want content filtering just so that it’s a little harder for the average user to hit questionable content. Yes, you could argue it’s just for appearance’s sake, but it’s not entirely ineffective, and most users just give up trying to get past it. We also use a DNS black hole for some zones because a handful of users are quite adept at finding ways around filters. We don’t use filtering to go after offenders; it’s just a speed bump, if you will, to gently remind users of why they’re at work in the first place.
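As a rough sketch of the DNS black-hole idea (zone names here are hypothetical, and a real deployment would use resolver features rather than application code), the trick is to answer queries for a blocked zone, and everything under it, with an unroutable address:

```python
# Minimal DNS black-hole sketch: names in a black-holed zone resolve
# to a sinkhole address, so the lookup "succeeds" but leads nowhere.
BLACKHOLED_ZONES = ["blocked.example"]

def resolve(name: str) -> str:
    """Return a sinkhole address for any name inside a black-holed zone."""
    for zone in BLACKHOLED_ZONES:
        if name == zone or name.endswith("." + zone):
            return "0.0.0.0"   # sinkhole address
    return "203.0.113.10"      # stand-in for a real upstream answer

print(resolve("blocked.example"))      # 0.0.0.0
print(resolve("cdn.blocked.example"))  # 0.0.0.0: subdomains are caught too
print(resolve("allowed.example"))      # 203.0.113.10
```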

Now, at home it’s a different story. I advocate using OpenDNS since it’s free and is pretty good for keeping the majority of hardcore content away from young eyes. But the wife wants it locked down tight, so she insists on BSecure from American Family Association. It’s a pain in that it blocks all kinds of legitimate content, but in over 6 years, our family has never happened upon anything approaching questionable, much less offensive. And all of the sites we need to use work just fine.

Now you will probably conclude that we’re bad parents because we don’t leave our PC filterless and allow our children to learn from all of the offensive content they might find, but the whole point of the filter is so that we don’t have to babysit the kids every time they want to get online. We can be assured that they won’t get anywhere that we deem inappropriate. We do use the PC with them, and interact with them, but there are also many times when everyone is busy and the kids will be on the Internet unsupervised.

Anonymous Coward says:

No filter is 100%

The problem is the false belief that the filter is 100% perfect.

It may block 50% of the bad stuff while also blocking 20% of the stuff it should NOT block. Sometimes some protection is better than none, even if it means blocking some valid sites. It all comes down to the level of trade-off your company/ISP is willing to live with.
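Working those hypothetical percentages through makes the trade-off vivid. Assume, purely for illustration, that one request in ten is “bad”:

```python
# Hypothetical traffic mix (10% bad, 90% good) run through the
# commenter's rates: 50% of bad stuff caught, 20% of good stuff
# wrongly blocked.
bad_requests, good_requests = 100, 900

caught     = bad_requests * 0.50    # bad traffic actually blocked
slipped    = bad_requests - caught  # bad traffic that gets through anyway
collateral = good_requests * 0.20   # legitimate traffic wrongly blocked

print(f"blocked bad:  {caught:.0f}")      # 50
print(f"missed bad:   {slipped:.0f}")     # 50
print(f"blocked good: {collateral:.0f}")  # 180: more good lost than bad stopped
```

Under those assumptions, the filter blocks more than three legitimate requests for every bad one it stops.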

I know at work we have some major filters running. I have had working sites blocked within hours. I fill in a request to get the site unblocked, and it is unblocked for a few days; then the automated filter updates and kicks in again to block the site under some NEW rule. Luckily, I work in an area with lots of IT professionals who have a passion for accessing many of the blocked sites – home proxy servers work wonders around any filters.

Anonymous Coward says:

“Only 4.4% of the total email traffic is delivered, with 95.6% blocked by the various anti-spam measures. With only a small portion of email traffic being delivered, the anti-spam measures in use appear to be cumulatively effective.”

The various anti-spam measures currently filter out over 95% of email traffic, greatly reducing the volume of spam that customers receive, without causing significant problems with false positives. Anti-spam measures are doing their job, reducing the threat of spam to a manageable security process. This process still requires focus, expertise and resources, but it is arguably predictable.

(slides 28 and 29)

http://www.enisa.europa.eu/act/res/other-areas/anti-spam-measures/studies/spam-slides
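Worked through for a concrete mailbox, those slide figures look like this (the daily volume below is an assumption for illustration; only the percentages come from the slides):

```python
# ENISA's figures: 95.6% of inbound mail blocked, 4.4% delivered.
# The daily volume is hypothetical, chosen just to make the split concrete.
inbound_per_day = 10_000

blocked   = inbound_per_day * 0.956
delivered = inbound_per_day * 0.044

print(f"blocked:   {blocked:.0f}")    # 9560 messages filtered out
print(f"delivered: {delivered:.0f}")  # 440 messages actually reach inboxes
```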
