How Does Google Identify Fake Online Reviews?

In this blog post I reference two published research papers that cover a very important topic: identifying fake online reviews. The two research papers referenced below start to shed light on how online communities and search engines will combat review spam. The second research paper listed in this article includes a researcher from Google, Natalie Glance.


As Google increases its consumer influence through Google Places, making sure that the reviews posted on Google Places are authentic is critical. Review sites like Yelp and TripAdvisor have much at stake as well, especially with national advertising campaigns running from companies like Reputation Defender and Reputation.com.


Research Paper #1


The first published research paper I will reference was completed by Myle Ott, Claire Cardie, and Jeff Hancock of Cornell University. Their work gives us insight into the growing interest in ensuring that online reviews are authentic. This topic has been discussed many times within the automotive community, and I thought this research would appeal to many readers.


Opening Abstract


Consumers' purchase decisions are increasingly influenced by user-generated online reviews [3]. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam: fictitious reviews that have been deliberately written to sound authentic, in order to deceive the reader [15].

But while this practice has received considerable public attention and concern, relatively little is known about the actual prevalence, or rate, of deception in online review communities, and less still about the factors that influence it.

We propose a generative model of deception which, in conjunction with a deception classifier [15], we use to explore the prevalence of deception in six popular online review communities: Expedia, Hotels.com, Orbitz, Priceline, TripAdvisor, and Yelp.

We additionally propose a theoretical model of online reviews based on economic signaling theory [18], in which consumer reviews diminish the inherent information asymmetry between consumers and producers by acting as a signal of a product's true, unknown quality.

We find that deceptive opinion spam is a growing problem overall, but with different growth rates across communities. These rates, we argue, are driven by the different signaling costs associated with deception for each review community, e.g., posting requirements. When measures are taken to increase signaling cost, e.g., filtering reviews written by first-time reviewers, deception prevalence is effectively reduced.
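
The abstract points to two concrete mechanisms: a classifier that scores review text for deception, and raising the "signaling cost" of posting, for example by filtering first-time reviewers. As a minimal sketch of both ideas (this is not the authors' actual model; the reviews, labels, reviewer histories, and thresholds below are invented purely for illustration):

```python
# Minimal sketch, NOT the model from the paper: a unigram/bigram
# bag-of-words classifier for "deceptive vs. truthful" reviews, plus a
# simple filter that drops first-time reviewers to raise signaling cost.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented for illustration): 1 = deceptive, 0 = truthful.
reviews = [
    "Absolutely amazing hotel, best stay of my entire life, perfect!!!",
    "Room was clean but small; breakfast was decent and parking cost extra.",
    "A flawless, magical experience from check-in to check-out. Perfect!",
    "Elevator was slow and there was some street noise, but staff were helpful.",
]
labels = [1, 0, 1, 0]

# Unigram + bigram counts feeding a linear classifier.
classifier = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(reviews, labels)

# Raising the signaling cost: ignore reviews from accounts with no posting
# history, the kind of filtering the abstract says reduces deception.
review_history = {"alice": 12, "bob": 1, "carol": 4}  # hypothetical counts

def score_review(reviewer_id: str, text: str) -> float | None:
    """Return P(deceptive), or None if the reviewer is a first-timer."""
    if review_history.get(reviewer_id, 0) <= 1:
        return None  # filtered out: too cheap a signal to trust
    return classifier.predict_proba([text])[0][1]

print(score_review("alice", "Best hotel ever, perfect in every single way!"))
print(score_review("bob", "Best hotel ever, perfect in every single way!"))  # None
```

The filtering step is the interesting part for review sites: it does not detect deception directly, it just makes posting deception at scale more expensive, which is the mechanism the authors argue drives prevalence down.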


Download PDF of Research


To download a free copy of the research paper, click on the link below to get the PDF document:


Estimating the Prevalence of Deception in Online Review Communities


Research Paper #2


A second research paper, entitled "Spotting Fake Reviewer Groups in Consumer Reviews," is also a great read. The research was conducted by Natalie Glance from Google together with Bing Liu and Arjun Mukherjee from the University of Illinois at Chicago.

Opening Abstract


Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or demote some target products. For reviews to reflect genuine user experiences and opinions, such spam reviews should be detected. Prior works on opinion spam focused on detecting fake reviews and individual fake reviewers.


However, a fake reviewer group (a group of reviewers who work collaboratively to write fake reviews) is even more damaging as they can take total control of the sentiment on the target product due to its size. This paper studies spam detection in the collaborative setting, i.e., to discover fake reviewer groups. The proposed method first uses a frequent itemset mining method to find a set of candidate groups.


It then uses several behavioral models derived from the collusion phenomenon among fake reviewers and relation models based on the relationships among groups, individual reviewers, and products they reviewed to detect fake reviewer groups.


Additionally, we also built a labeled dataset of fake reviewer groups. Although labeling individual fake reviews and reviewers is very hard, to our surprise labeling fake reviewer groups is much easier. We also note that the proposed technique departs from the traditional supervised learning approach for spam detection.
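
To make the candidate-group step concrete: the frequent itemset idea treats each product's reviewer list as a "transaction" and looks for sets of reviewers who keep showing up together across products. The sketch below is a toy illustration of that idea, mining only reviewer pairs in a single pass; a real frequent itemset miner (e.g., Apriori) would also grow larger groups, and all product names, reviewer IDs, thresholds, and the burstiness score here are invented:

```python
# Toy sketch of the candidate-group step: find reviewer pairs that
# co-review several products. All data and thresholds are invented.
from collections import defaultdict
from itertools import combinations

# Hypothetical data: product -> set of reviewers who reviewed it.
transactions = {
    "widget-a": {"u1", "u2", "u3"},
    "widget-b": {"u1", "u2", "u4"},
    "widget-c": {"u1", "u2", "u3"},
    "widget-d": {"u5", "u6"},
}

MIN_SUPPORT = 2  # a candidate group must co-review at least this many products

# Count, for every reviewer pair, which products they reviewed together.
support = defaultdict(set)
for product, reviewers in transactions.items():
    for pair in combinations(sorted(reviewers), 2):
        support[pair].add(product)

candidates = {group: prods for group, prods in support.items()
              if len(prods) >= MIN_SUPPORT}

# One behavioral cue the paper builds on: colluders tend to post within a
# narrow time window. A toy burstiness score (1.0 = same day, 0.0 = spread
# beyond the window); the paper's actual indicators are defined differently.
def time_window_score(post_days: list[int], window: int = 30) -> float:
    spread = max(post_days) - min(post_days)
    return max(0.0, 1.0 - spread / window)

for group, products in candidates.items():
    print(group, "co-reviewed", sorted(products))
print("burstiness of posts on days [3, 5, 9]:", time_window_score([3, 5, 9]))
```

Candidate groups found this way would then be ranked by combining several behavioral and relational indicators of this kind, as the abstract describes, rather than by a traditional supervised classifier.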


Download PDF of Research


To download a free copy of the research, click on the link below to get the PDF document:


Spotting Fake Reviewer Groups in Consumer Reviews


Reputation Management and Marketing

I hope you found this research helpful. I have to dig into the meat of the research further before formulating my feedback.


If your business has been struggling with an online review strategy, I invite you to join the discussion at the upcoming 2012 Automotive Boot Camp. A number of workshops will be offered to discuss this important topic.



Brian 


Brian Pasch, CEO


brian@pcgmailer.com

