The Napoleon Dynamite Problem Stymies Netflix Prize Competitors

from the love-it-or-hate-it dept

We’ve been covering the ongoing race to claim the $1 million Netflix Prize for a while now, highlighting some surprising and unique methods for attacking the problem. Every time we write about it, it appears that the lead teams have inched just slightly closer to that 10% improvement hurdle, but progress has certainly been slow. Clive Thompson’s latest NY Times piece looks at the latest standings, noting that the issue now is “The Napoleon Dynamite problem.”

Apparently, the algorithms cooked up by various teams seem to work great for typical mainstream movies, but they run into trouble with quirky films like Napoleon Dynamite, Lost in Translation or I Heart Huckabees, which people tend to either love or hate immediately, with very little in between. No one seems quite sure what leads to such a polarized reaction, and no algorithm can yet figure out how people will react to such films, which is where all of the various approaches seem to hit a dead end.

Some folks believe that’s just the nature of taste. It really can’t just be captured by an algorithm, because it depends on a variety of other factors: what your friends think of something, or even whether you happened to see that movie with certain friends. Basically, there are external factors that play into taste which aren’t necessarily indicated by the fact that you liked some other set of quirky movies, and that therefore you must love Napoleon Dynamite. In some ways, it makes you wonder if we’re all putting too much emphasis on an algorithmic approach to the issue, and whether other recommendation systems, such as what specific friends think of a movie, might be more effective. Of course, Netflix is hedging its bets: it has been pushing social networking “friend recommendation” features for a while as well.

Companies: netflix


Comments on “The Napoleon Dynamite Problem Stymies Netflix Prize Competitors”

Anonymous Coward says:

Approaching true noise

If you ask the same set of people to rate the same set of movies multiple times (and assuming that they forget the ratings they gave last time), the ratings are going to change. Any algorithm that beats this “noise” threshold is just overfitting.

In this case it seems the contestants have reached that threshold.
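A toy illustration of that noise-floor argument (the numbers and simulation here are made up for illustration, assuming a viewer’s declared rating drifts randomly around a fixed underlying opinion):

```python
import random

# Hypothetical model: a viewer has a fixed "actual" opinion of each movie,
# but the declared star rating picks up mood/company noise each time.
random.seed(0)

true_tastes = [random.uniform(1, 5) for _ in range(10_000)]

def observed(taste):
    # Declared rating = actual opinion + random drift, clamped to the 1-5 scale.
    return min(5.0, max(1.0, taste + random.gauss(0, 0.4)))

first_pass = [observed(t) for t in true_tastes]
second_pass = [observed(t) for t in true_tastes]

# RMSE between two independent rating passes by the *same* viewers:
# no predictor of declared ratings can reliably do better than this floor.
rmse = (sum((a - b) ** 2 for a, b in zip(first_pass, second_pass))
        / len(first_pass)) ** 0.5
print(round(rmse, 2))  # roughly sqrt(2) * 0.4, slightly less due to clamping
```

Under this (assumed) model, an algorithm whose error drops below the re-rating RMSE is fitting the noise, not the taste.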

mischab1 says:

Re: Approaching true noise

They can account for some of that already. From the linked article: “For example, the teams are grappling with the problem that over time, people can change how sternly or leniently they rate movies. Psychological studies show that if you ask someone to rate a movie and then, a month later, ask him to do so again, the rating varies by an average of 0.4 stars.”

Anonymous Coward says:

Re: Re: Approaching true noise

But it doesn’t account for the fact that people do remember how they rated last time and will most likely stick with it.

There is going to be a difference between the actual rating and the declared rating. It would be easier to predict the actual rating (exactly what the viewer thinks) than the declared one. Some factors that can make the declared rating differ from the actual one: alcohol, company, time…

Anonymous Coward says:

I don’t see the problem really..

You track which people like which movies (and by what percentage), then you tie those together and you have a tree system that tracks distance of likability, how many people liked it, and the amount they liked it by. So, for example:

most people who liked old yeller liked homeward bound.
a few people who liked old yeller liked airbud

most people who liked homeward bound liked Beethoven.
a few people who liked homeward bound liked Dunston Checks In

so it would suggest them in roughly this order:

homeward bound
Dunston Checks In

Recurse until your algorithm would give a movie a 50% or lower probability, adjusting for more links where most of the movies you watch have common movies that people liked. The problem would be making it fast on anything other than a beast of a machine, but since they specify accuracy, not performance…

(feel free to poke holes in my plan, I really only thought about it for a few minutes before typing it up)
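A one-step (non-recursive) version of the commenter’s co-occurrence idea can be sketched in a few lines. The users and likes below are invented for illustration, reusing the commenter’s example titles; this is roughly item-based collaborative filtering, not the actual Netflix baseline:

```python
from collections import defaultdict

# Invented example data: which movies each (hypothetical) user liked.
likes = {
    "alice": {"old yeller", "homeward bound", "beethoven"},
    "bob":   {"old yeller", "homeward bound"},
    "carol": {"old yeller", "air bud"},
    "dave":  {"homeward bound", "dunston checks in"},
}

# Count, for every movie pair, how often liking one co-occurs with the other.
co = defaultdict(lambda: defaultdict(int))
liked_by = defaultdict(int)
for user_likes in likes.values():
    for m in user_likes:
        liked_by[m] += 1
        for n in user_likes:
            if n != m:
                co[m][n] += 1

def recommend(seed, min_prob=0.5):
    # Rank candidates by P(liked n | liked seed); drop anything at or
    # below the cutoff, per the "50% or less" rule in the comment.
    scored = {n: c / liked_by[seed] for n, c in co[seed].items()}
    return sorted((n for n, p in scored.items() if p > min_prob),
                  key=lambda n: -scored[n])

print(recommend("old yeller"))  # → ['homeward bound']
```

With this toy data, 2 of the 3 people who liked old yeller also liked homeward bound (~67%), so it survives the cutoff, while air bud (1 of 3) does not. As the reply below notes, ideas in this family are close in spirit to the baseline; the hard part is beating that baseline by 10%.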

Anonymous Coward says:

Re: algorithm

That’s not too far off from the baseline algorithm, at least in general principle. (Actually, it’s quite different in practice, but I’m not here to pick nits.) The problem is, you have to do 10% better than the baseline algorithm in order to win the prize.

The problem isn’t one that’s hard to solve to a first approximation. The hard part is improving significantly from a reasonable baseline algorithm.

Bam says:


This movie strikes me as a word-of-mouth movie that’s either a letdown or a surprise, and I think that explains the ratings and also what makes it hard to predict.

No matter what aspect of the movie, its actors, or its characters you look at, you don’t find anything compelling that would make more than a small subset of the population want to see it on that basis alone. It was purely a word-of-mouth promotion.

That means most people who saw it did so ONLY because they heard it was really good. That means most either agreed and gave it five stars or were sorely disappointed and gave it one or two.

That differs from movies that appeal to and attract a large portion of the audience before word of mouth gets in the mix. These folks provide a lot of the middle ratings.

Anonymous Coward says:

can't predict my movies!

I have watched movies that were awesome because of the friends I had and/or the alcohol on hand. There are movies I’ve seen that I know I would not have liked if I had been in the wrong mood. Brainy, thought-provoking movies could be what I want to watch one day, but the next I’ll be popping in a brainless action movie with over-the-top explosions and one-liners.

darkone says:

Newton's got nothing on intuition

The weak point of any A.I. is the fact that we don’t even fully understand human intelligence, let alone how to mimic it. Human understanding works from three sources: logic, emotion, and intuition. We’ve got the logic one down pretty well, and we’re even making strides in understanding emotion, but intuition is still a shot in the dark. Taste comes from the realm of intuition, so if you want even a shot at the answer, drop the Newtonian particle physics and enter the wild world of wave mechanics. Hey, why not base it on resonant frequencies of sympathetic circuits? Who knows, you might actually be in the ballpark then.

Anonymous Coward says:

Tastes change, even for the same person, and the audience you’re watching with, the time you watch the movie, and the environment can all affect what you like or don’t like. Further, it’s possible to watch a movie once, love it, and then despise it the second time around. So good luck on the algorithm, boyz… 🙂
