Unmasking Fake Reviews on the Web
Fake reviews on the Web are far more widespread than we think. Andreas Munzel, a specialist in online-user review analysis, deciphers this type of fraudulent practice.

“We’ll be back!”, “Excellent, a must-try!”: whether you are looking for a good restaurant, a well-located hotel, or simply a film to watch, reviews posted by other consumers on sites such as TripAdvisor can be very useful in limiting the chances of making a poor choice. Studies confirm the importance of other web users’ reviews in consumers’ decision-making: 80% of online purchasers say they take such reviews into consideration, while Nielsen surveys reveal that 68% of respondents trust reviews posted online by other consumers. Yet such confidence may be misplaced when we consider that between 10% and 30% (or more) of reviews published online are in fact fake.

Sophisticated detection software 

In addition to regulations, standards (such as those introduced in France by Afnor1) and codes of conduct adopted by actors in the sector, website operators can use sophisticated tools to improve their ability to identify fake reviews. This strategy, which relies on algorithms and models that detect characteristic features of the language used in fake reviews, is essential: in my research, I examined the impact on web users of a website's ability to detect and remove fake reviews.2 A lack of credibility in the source of a review negatively affects both the service provider being evaluated (e.g. a hotel or restaurant) and the review site itself.

Until now, software to detect fake reviews has primarily been developed by computer scientists in the US who, drawing on work by linguists on lie detection, have managed to identify certain characteristic markers of the language used in fake reviews. With the help of indicators such as style and register, text length, or the use of certain words, researchers claim to be able to distinguish fake reviews from authentic ones with 90% accuracy. Similarly, in a study conducted with a former doctoral student I supervised, we developed a tool that identifies fake reviews with 93% accuracy.3 US researchers also invite web users to submit any review written in English to their site to test its authenticity. While this strategy is useful in the fight against fake reviews, its effectiveness still requires evaluation: online reputation management firms that specialize in creating fake reviews know very well how to adapt their writing style to the latest advances in text-based detection systems. This ongoing race between researchers and professional peddlers of fake reviews makes any detection filter extremely difficult to build and fragile once deployed.
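As a purely illustrative sketch, the kind of text-based cues mentioned above (style, text length, choice of words) can be turned into a toy scoring function. The word lists, thresholds, and weights below are invented for the example and bear no relation to the actual research tools, which are statistical classifiers trained on labeled data:

```python
import re

# Hypothetical word lists for illustration only.
SUPERLATIVES = {"best", "amazing", "excellent", "perfect", "must-try"}

def style_features(review):
    """Extract a few simple text-based cues of the kind deception research describes."""
    words = re.findall(r"[a-z'-]+", review.lower())
    n = max(len(words), 1)
    return {
        "length": len(words),                # very short reviews carry little concrete detail
        "exclamations": review.count("!"),   # hype punctuation
        "superlative_ratio": sum(w in SUPERLATIVES for w in words) / n,
    }

def suspicion_score(review):
    """Naive hand-weighted score in [0, 1]; a real system would learn these weights."""
    f = style_features(review)
    score = 0.0
    if f["length"] < 15:
        score += 0.3
    score += min(f["exclamations"] * 0.1, 0.3)
    score += f["superlative_ratio"] * 2.0
    return min(score, 1.0)
```

A gushing one-liner such as “Excellent, a must-try!!!” scores higher than a longer, measured review, which is exactly the asymmetry such markers are meant to capture; real detectors combine many more features and learn the weights from labeled corpora.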

The potential limitations of algorithms based solely on analysis of review text led me to consider contextual factors that could help web users detect fake reviews themselves. Even skilled web users have a truth bias: without contextual information, they tend to judge a given review as authentic rather than bogus.4 The aim of this second, context-based strategy is to place web users and their “digital” skills at the center of the process. In a series of experimental studies, I analyzed the importance and utility of various contextual features of reviews.5

One of the most relevant indicators is the consistency between a given review and the average rating attributed by other web users to the product or service in question. A marked divergence between a highly positive review and previous ratings is cause for skepticism. The advantage of this indicator is that it is fairly difficult for the various actors to manipulate. Furthermore, disclosure by a review's writer of his or her identity (name, town, age) and the context of the experience (as a couple, alone) also affects the credibility of the review and its author.6 Such information makes it possible to check the writer's previous reviews. Despite the initial work undertaken to this end, further studies are needed to identify and test contextual indicators likely to help web users detect fake reviews.
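The consistency indicator described above can be sketched in a few lines. The two-star threshold below is a hypothetical value chosen for illustration, not one drawn from the studies:

```python
def rating_divergence(review_rating, peer_ratings):
    """Absolute gap between one review's rating and the crowd's average rating."""
    average = sum(peer_ratings) / len(peer_ratings)
    return abs(review_rating - average)

def is_suspicious(review_rating, peer_ratings, threshold=2.0):
    """Flag a review whose rating diverges sharply from previous ratings.

    The threshold (here, two stars) is a hypothetical illustration,
    not a value taken from the research discussed in the article.
    """
    return rating_divergence(review_rating, peer_ratings) >= threshold
```

For example, a five-star review of a hotel whose previous ratings average two stars would be flagged, whereas a four-star review of a hotel averaging four stars would not. The appeal of the indicator is that one fraudulent reviewer cannot easily move the crowd average.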

Malicious practices that continue to be profitable

Finally, as with all fraud, attempted or actual, the practice of posting fake reviews appears to be closely tied to the nature of the rewards: in a context in which players know that the best strategy is to abide by the rules and tell the truth, fraudulent practices are of no value. Unfortunately, as the example of Samsung shows (the company was fined in 2013 for commissioning fake online comments that denigrated a competitor), dishonest strategies remain the most profitable today, to the detriment of confidence in the system as a whole. All of this can feed a general loss of confidence: “in any case, everything (and everyone) today is fake. What or who can be trusted these days?”

And the upshot of all this? If fraudulent practices remain profitable while strategies to detect fake reviews are ineffective, consumers will turn towards sources of information they consider to be trustworthy, such as friends and colleagues. In this scenario, all of the actors involved lose out: both consumers, since their own sources of information are likely to be more limited than the wisdom of crowds, and companies, since all recommendations exchanged between peers will tend to take place behind closed doors and will be inaccessible and thus invisible to them. The abandonment of review sites will, for example, deprive companies of an important source of feedback about customer experiences, which would otherwise enable them to discover ways of improving their products and developing their competitive edge.

The analysis, views and opinions expressed in this section are those of the authors and do not necessarily reflect the position or policies of the CNRS.

  • 1. Association française de normalisation.
  • 2. A. Munzel, "Malicious practice of fake reviews: Experimental insight into the potential of contextual indicators in assisting consumers to detect deceptive opinion spam," Recherche et Applications en Marketing (English Edition), 2015, 30(4): 24-50.
  • 3. D. Plotkina, "Deceptive Communication: Fake Online Reviews," doctoral dissertation, Strasbourg University, defended May 12, 2016.
  • 4. Ibid.
  • 5. A. Munzel, 2015, op. cit. (note 2).
  • 6. A. Munzel, "Assisting consumers in detecting fake reviews: The role of identity information disclosure and consensus," Journal of Retailing and Consumer Services, 2016, 32: 96-108.
