Can A Computer Pick Out Fake Online Reviews When Humans Can't?

from the sounds-like-it dept

It’s no surprise that there are a ton of “fake” reviews online of just about anything that can be reviewed. Businesses, hotels, authors, musicians, etc., all want to make sure that whatever it is they’re selling, people see good reviews when they go searching. But, of course, that’s a problem for consumers who rely on such reviews… and for the sites that host them and want them to be as accurate as possible. So it’s fascinating to see that some researchers at Cornell (yes, my alma mater) were able to come up with an algorithmic way to figure out which reviews are fake. You can read the full paper here (pdf). It’s only 11 pages.

The method was pretty clever. First, they used Mechanical Turk to create 400 faked 5-star reviews of Chicago hotels:

To solicit gold-standard deceptive opinion spam using AMT, we create a pool of 400 Human-Intelligence Tasks (HITs) and allocate them evenly across our 20 chosen hotels. To ensure that opinions are written by unique authors, we allow only a single submission per Turker. We also restrict our task to Turkers who are located in the United States, and who maintain an approval rating of at least 90%. Turkers are allowed a maximum of 30 minutes to work on the HIT, and are paid one US dollar for an accepted submission.

Each HIT presents the Turker with the name and website of a hotel. The HIT instructions ask the Turker to assume that they work for the hotel’s marketing department, and to pretend that their boss wants them to write a fake review (as if they were a customer) to be posted on a travel review website; additionally, the review needs to sound realistic and portray the hotel in a positive light. A disclaimer indicates that any submission found to be of insufficient quality (e.g., written for the wrong hotel, unintelligible, unreasonably short, plagiarized, etc.) will be rejected.

Then, of course, they need “real” reviews. But since part of the issue is that many “real” reviews are faked, the team did their best to find a bunch of real reviews from TripAdvisor, by narrowing them down based on a few factors:

For truthful opinions, we mine all 6,977 reviews from the 20 most popular Chicago hotels on TripAdvisor. From these we eliminate:

  • 3,130 non-5-star reviews;
  • 41 non-English reviews;
  • 75 reviews with fewer than 150 characters since, by construction, deceptive opinions are at least 150 characters long…
  • 1,607 reviews written by first-time authors (new users who have not previously posted an opinion on TripAdvisor), since these opinions are more likely to contain opinion spam, which would reduce the integrity of our truthful review data…

Finally, we balance the number of truthful and deceptive opinions by selecting 400 of the remaining 2,124 truthful reviews, such that the document lengths of the selected truthful reviews are similarly distributed to those of the deceptive reviews. Work by Serrano et al. (2009) suggests that a log-normal distribution is appropriate for modeling document lengths. Thus, for each of the 20 chosen hotels, we select 20 truthful reviews from a log-normal (left-truncated at 150 characters) distribution fit to the lengths of the deceptive reviews.
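The length-matching step described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the authors’ code: the paper doesn’t spell out how sampled target lengths get mapped to concrete reviews, so the greedy closest-length matching below is an assumption.

```python
import math
import random
import statistics

def select_length_matched(truthful_lengths, deceptive_lengths,
                          n=20, min_len=150, seed=0):
    """Pick n truthful reviews whose lengths roughly follow a log-normal
    distribution (left-truncated at min_len) fit to the deceptive lengths.
    Hypothetical sketch of the paper's length-matching step."""
    rng = random.Random(seed)
    # Fitting a log-normal by MLE = fitting a normal to the log-lengths.
    logs = [math.log(l) for l in deceptive_lengths]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    # Only reviews at or above the truncation point are eligible.
    available = sorted(l for l in truthful_lengths if l >= min_len)
    chosen = []
    for _ in range(n):
        # Rejection-sample a target length above 150 characters...
        target = rng.lognormvariate(mu, sigma)
        while target < min_len:
            target = rng.lognormvariate(mu, sigma)
        # ...then greedily take the closest unused truthful review.
        best = min(available, key=lambda l: abs(l - target))
        available.remove(best)
        chosen.append(best)
    return chosen
```

Run once per hotel with `n=20` and you get the 400 truthful reviews the paper describes, length-matched to the deceptive set.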

They then test how humans do at telling the two kinds of reviews apart, and discover that they can’t: human accuracy was only slightly above 50%, barely better than chance. However, the researchers then work out algorithmic ways of distinguishing the “real” reviews from the fake ones, and come up with a system that is roughly 90% accurate at picking out which reviews are which. Apparently, while humans can’t pick out the differences, faked reviews have some common characteristics:

We observe that truthful opinions tend to include more sensorial and concrete language than deceptive opinions; in particular, truthful opinions are more specific about spatial configurations (e.g., small, bathroom, on, location). This finding is also supported by recent work by Vrij et al. (2009) suggesting that liars have considerable difficulty encoding spatial information into their lies. Accordingly, we observe an increased focus in deceptive opinions on aspects external to the hotel being reviewed (e.g., husband, business, vacation)…

[….]

… we find increased first person singular to be among the largest indicators of deception, which we speculate is due to our deceivers attempting to enhance the credibility of their reviews by emphasizing their own presence in the review.
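The classifiers behind that 90% figure combine n-gram features with standard machine-learning models. As a rough illustration of that family of approaches, and emphatically not the authors’ actual models, features, or data, here is a toy word-level Naive Bayes classifier with Laplace smoothing, trained on made-up snippets:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """docs: list of review texts; labels: 'truthful' or 'deceptive'."""
    counts = {lab: Counter() for lab in set(labels)}
    priors = Counter(labels)
    for doc, lab in zip(docs, labels):
        counts[lab].update(doc.lower().split())
    vocab = set(w for c in counts.values() for w in c)
    return counts, priors, vocab

def classify(doc, model):
    """Return the label with the highest (smoothed) posterior log-probability."""
    counts, priors, vocab = model
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for lab, word_counts in counts.items():
        lp = math.log(priors[lab] / total)
        denom = sum(word_counts.values()) + len(vocab)
        for w in doc.lower().split():
            lp += math.log((word_counts[w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = lab, lp
    return best
```

On toy data built from the cues the researchers highlight (spatial words like “small” and “location” for truthful reviews; first-person and external references like “my husband” and “vacation” for deceptive ones), even this simple model separates the two classes.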

Obviously, it’s just one bit of research, but apparently those involved in it have been contacted by… well, just about everyone doing online reviews. Hopefully this means that we’re not too far off from better quality online reviews.


Comments on “Can A Computer Pick Out Fake Online Reviews When Humans Can't?”

22 Comments
Rich Kulawiec (profile) says:

There are other ways to do this...

…using methods that these researchers either overlooked, omitted, or are not aware of. Hint: combinations of IP, network block, ASN, passive OS fingerprinting, browser fingerprinting, etc. are quite effective in dealing with comment/blogspam. The problem isn’t that these methods don’t work; the problem is that very few people know how to use them.

Pitabred (profile) says:

Re: There are other ways to do this...

What if a company tells its employees to go home at night and write a positive review for it? Not sure how all your fancy filtering would stop that, because every employee would likely pass all of those tests you set up. At least, not without also blocking all kinds of legitimate feedback.

Squirrel Brains (profile) says:

I experimented with being a Mechanical Turk (wanted to see what it was all about). I saw several HITs like this one, but I never did them. There are a lot of shady HITs and I never wanted to do any that I thought would contribute to spam (there are a lot of HITs where you read something and then type it into various fields). The higher the payout, the more suspicious the tasks (usually). It is interesting to see it used this way, though I would never participate.

Anonymous Coward says:

Re: "...we eliminate: * 3,130 non-5-star reviews;"

Are you an idiot? They eliminated non-5-star reviews because they wanted to compare real 5-star reviews to fake 5-star reviews.

I know you want to shoehorn elitist conspiracies into every single comment you make about every single post, but it doesn’t work.

Overcast (profile) says:

What they are really doing is learning how to build a better lie…

I kinda had that thought too. Who’s going to govern what the rules are on the filtering? Probably whoever’s paying for it, which makes perfect sense.

This may effectively convert opinions and reviews into paid content from the company that hosts them. That immediately brings the question of bias into the picture.

Consumers will see it as ‘controlled,’ and about as factual as any paid advertisement might be. Then we get to company reputation, and many companies are sorely lacking there. The concept now seems to be that if they all suck, consumers have no choice.

Kevin (profile) says:

The Human Coefficient

This is interesting (at least for the time being, until other commenters are proven correct and the fake-review cottage industry learns how to “beat” this) for humans as well, because it provides some interesting human-applicable strategies for bumping up that 50% (i.e., look for first-person, weight spatial descriptions more heavily, etc.).

Anonymous Coward says:

Slightly Off-Topic

Is the term “Turk” important/meaningful when it comes to AI?

I ask this because the only “Turk” I know of (that’s related to AI – I know that a country called Turkey exists) is from Terminator: The Sarah Connor Chronicles, so I was wondering if the term was inspired by the series.

hhotelconsult (user link) says:

Filter bubbles

Anyone engaged in this conversation, ESPECIALLY the author, needs to read Eli Pariser’s book “The Filter Bubble: What the internet is hiding from you”, basically about auto-filtering and personalization.

It’s vital, it’s on point, and it addresses just how frightening this concept is…. especially if the algorithm automates and doesn’t really provide any real logic.

Just read it.
