Tripadvisor Reports Rise in Fake Reviews Amid Increased Scrutiny

In a revealing update about online reviews, Tripadvisor has announced in its "Transparency Report 2025" that approximately 8% of the 31.1 million reviews submitted to the platform in 2024 were identified as fake. This figure represents a significant leap, more than doubling the number of fake reviews detected in 2022. However, it is crucial to note that this increase does not necessarily reflect a sudden surge in dishonest reviews. Becky Foley, the vice president and head of trust and safety at Tripadvisor, clarified that the rise in detected fraudulent submissions is a reflection of the company's enhanced policies and a higher volume of overall submissions.
Foley elaborated on the changing landscape of reviews by stating, "Our policies regarding fake reviews have evolved over time, especially our more aggressive stance against what are known as 'incentivized reviews.'" These types of reviews often arise when businesses offer customers discounts or other perks in exchange for a positive review, or encourage employees to generate reviews that mention their names. Foley explained that this practice skews the integrity of feedback on the platform, leading to a collection of reviews that do not provide genuine insights for other users. "The employees get their mom, best friend or cousin to submit reviews, mentioning their names," she said, clearly outlining how this practice compromises the utility of the review system.
So, what constitutes a fake review? Tripadvisor defines it as "any review submitted by someone who is knowingly submitting biased or non-firsthand content, in an effort to manipulate a property's reputation." This definition underscores the intention behind the submission, emphasizing that the motive is often rooted in deceit.
The increase in fake review detections can also be attributed to Tripadvisor's continuous advancements in its review moderation system. Foley noted that the company's approach incorporates a three-pronged process that combines auto-detection, human review, and community feedback. According to the report, around 7% of reviews submitted in 2024 were automatically rejected before they could even be posted. Additionally, another 5% of submissions were flagged for further examination by human moderators.
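The three-stage flow described above can be sketched in code. This is a minimal illustration only: the class, function names, and score thresholds are assumptions for the sake of the example, not Tripadvisor's actual system.

```python
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    fraud_score: float   # 0.0 (clean) .. 1.0 (certain fraud), from some model
    disputed: bool = False

def moderate(review: Review) -> str:
    # Stage 1: auto-detection rejects high-confidence fraud before posting.
    if review.fraud_score > 0.9:
        return "rejected"
    # Stage 2: borderline submissions are routed to human moderators.
    if review.fraud_score > 0.6:
        return "human_review"
    # Stage 3: even posted reviews can be disputed by the community.
    if review.disputed:
        return "community_review"
    return "posted"

print(moderate(Review("Great stay!", 0.05)))         # posted
print(moderate(Review("Best hotel ever!!!", 0.95)))  # rejected
```

The key design point the report implies is that the stages are ordered by cost: cheap automated checks run first, and expensive human attention is reserved for the minority of submissions that the automation cannot settle.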
In a significant effort to maintain trust and safety, Tripadvisor's team moderated more than 4.2 million reviews, which amounts to over 13% of all submissions made in 2024. The report also highlighted that approximately 244,000 reviews faced disputes from users during the third stage of review evaluation. Interestingly, out of these disputed reviews, around 72% were allowed to remain on the site, while 28% were ultimately removed.
Foley identified four primary categories of fake submissions that Tripadvisor monitors: boosting, vandalism, member fraud, and paid reviews. Contrary to the common assumption that vandalism accounts for the majority of fake reviews, boosting actually makes up about 54% of these fraudulent entries, while member fraud constitutes 39%. Paid reviews, which represent a smaller yet more concerning 4.8% of the total, often involve what Foley refers to as "review farms"—operations where individuals are compensated for writing fake reviews. She pointed out that many of these paid reviews are generated from regions in Asia, even though only 17% of legitimate submissions came from that continent last year. Specifically, in 2024, more than one-third of all paid submissions flagged by Tripadvisor originated from Indonesia and Vietnam, marking a shift from prior years when India was the primary source.
Foley described the ongoing challenge of eliminating fake reviews as a constant "cat and mouse" game. She emphasized that while Tripadvisor is continually improving its ability to detect fake submissions, achieving absolute perfection remains an elusive goal. "We might not catch [a fake] the first time, but we'll catch it eventually," she assured. The company's technology, developed over the past 25 years, has increasingly shifted its focus from the content of the reviews to the manner in which they are posted. By leveraging artificial intelligence and behavioral biometrics, Tripadvisor can identify patterns indicative of fraudulent behavior, such as unusual submission spikes or attempts to mask IP addresses.
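One behavioral signal the article mentions, an unusual spike in submissions, can be illustrated with a simple outlier check. This is a hedged sketch under assumptions of my own: the z-score method, the threshold, and the function name are illustrative, not Tripadvisor's actual detection logic.

```python
from statistics import mean, stdev

def is_submission_spike(daily_counts: list[int], today: int,
                        z_threshold: float = 3.0) -> bool:
    """Return True if today's review count is a statistical outlier
    relative to the property's historical daily volume."""
    if len(daily_counts) < 2:
        return False  # not enough history to judge
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return today > mu  # any rise over a perfectly flat baseline stands out
    return (today - mu) / sigma > z_threshold

history = [3, 4, 2, 5, 3, 4, 3]           # typical reviews per day
print(is_submission_spike(history, 4))    # False: normal volume
print(is_submission_spike(history, 40))   # True: sudden spike
```

Real systems would combine many such signals (submission timing, IP masking attempts, account age, and so on) rather than relying on a single statistic, which is consistent with the "behavioral biometrics" framing in the report.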
Moreover, Tripadvisor employs undercover investigators who pose as fake review brokers to gather intelligence on paid reviewers. Upon detecting a paid review, the company collects comprehensive data to build a profile of the author, which helps identify additional fraudulent submissions in the future. While violators are not immediately banned from the platform, their rankings may be penalized for a year. Repeat offenders are marked with a red badge on their listings, alerting users that the property may be attempting to mislead them.
Interestingly, while some individuals have turned to AI for generating reviews—a practice Tripadvisor does not classify as fake but also does not permit—Foley remarked on the ongoing discussions surrounding the legitimacy and ethical considerations of using artificial intelligence in this context. The ever-evolving dynamics of online reviews continue to raise pertinent questions about authenticity and trust, making this an area to watch closely in the future.