The recent revelations about Meta Platforms, Inc. earning billions from fraudulent ads are deeply troubling, yet, I must confess, not entirely surprising. Internal documents reviewed by Reuters paint a stark picture: Meta projected that about 10% of its 2024 revenue, a staggering $16 billion, would come from scams and banned goods ["Meta is earning a fortune on a deluge of fraudulent ads, internal documents show," ETBrandEquity]. This isn't just a matter of a few bad apples; it points to a systemic problem in which the advertising model itself appears to facilitate fraud.
I've often wondered about the true cost of our digital interactions, and this article underscores my long-held concerns. We are told that an estimated 15 billion "higher risk" scam advertisements are shown to users daily across Facebook, Instagram, and WhatsApp, generating around $7 billion annually for Meta. This volume is immense, and the company's internal policy of banning advertisers only when it is at least 95% certain they are committing fraud, while merely charging higher ad rates to suspected scammers below that threshold, is a clear conflict of interest. As Sandeep Abraham, a fraud examiner and former Meta safety investigator who now runs Risky Business Solutions, succinctly put it, "If regulators wouldn't tolerate banks profiting from fraud, they shouldn't tolerate it in tech." His words resonate deeply with my own view of corporate accountability. You can connect with Sandeep Abraham on LinkedIn or via email at sandeep@riskybusiness.solutions.
Meta spokesman Andy Stone has, of course, disputed the documents, calling them a "selective view" that distorts Meta's approach. He claimed the 10.1% revenue estimate was "rough and overly-inclusive" and that the company is aggressively fighting fraud. While he stated that user reports of scam ads have fallen 58 percent globally and that Meta removed over 134 million pieces of scam ad content in 2025, the internal documents cited by Reuters suggest a more conflicted reality, highlighting a reluctance to crack down in ways that could harm business interests. Andy Stone can be found on LinkedIn or reached at astone@meta.com.
Indeed, Meta's own research in May 2025 estimated that its platforms were involved in a third of all successful scams in the U.S. and that it was easier to advertise scams on Meta platforms than on Google. This is a damning indictment, particularly when considering the real-world impact on individuals.
Consider the heart-wrenching story of the Royal Canadian Air Force recruiter whose Facebook account was hacked. Her identity was stolen and used to promote crypto scams, and despite multiple reports from her and dozens of friends, Meta was slow to act. Mike Lavery, a former colleague, tragically lost C$40,000 (about $28,000) to this scam, believing he was interacting with a trusted friend. Brian Mason, an Edmonton Police investigator, was able to trace some of the stolen funds to Nigeria, but recovery seems unlikely. This incident, in which genuine trust was weaponized by fraudsters, illustrates the profound human cost of insufficient platform oversight. Erin West, a former Santa Clara County prosecutor, also notes that Meta's default response to user reports of scams is to ignore them.
The core point I want to make is this: I raised this very concern years ago. The issue of tech companies leveraging user data for financial gain, often without adequate compensation or protection, is something I have consistently addressed. In my blog from July 2019, "US co to pay $700mn for exposing private data" [http://mylinkedinposting.blogspot.com/2019/07/us-co-to-pay-700mn-for-exposing-private.html], and again in September 2019, "Do you see the hidden agenda ( whose ? ) in following set of news ?" [http://mylinkedinposting.blogspot.com/2019/09/do-you-see-hidden-agenda-whose-in.html], I proposed the concept of a "Digital Dividend from Demographic Data [4 D]" [https://lnkd.in/fRqce6R]. My argument was that users should be directly compensated for their personal data, shifting the paradigm from companies solely profiting to a more equitable sharing of value. This would naturally give platforms an incentive to protect that data better, since its value to users would be quantifiable and directly tied to its security and ethical use.
I also reflected on the broader implications of data exploitation in "Why an insecure internet is actually in tech companies' best interests" [http://mylinkedinposting.blogspot.com/2018/12/why-insecure-internet-is-actually-in.html], where I discussed "surveillance capitalism." Meta's "penalty bids" strategy, under which likely scammers pay more rather than being removed, fits this model perfectly: profit over protection. Furthermore, my post "Manipulative Advertising: You Ain't Seen Nothing Yet" [http://myblogepage.blogspot.com/2023/07/manipulative-advertising-you-aint-seen-nothing-yet.html] delves into "dark patterns" and how platforms use design to deceive consumers, a direct parallel to the fraudulent ads we see today. The Norwegian regulator's fine against Meta for privacy breaches, mentioned in that blog, was an early indicator of the pressures Meta would face.
Seeing how things have unfolded, it's striking how relevant that earlier insight remains, and I feel both a sense of validation and a renewed urgency to revisit those ideas. The current article even notes that anticipated regulatory fines (up to $1 billion for Meta) are far smaller than the revenue derived from these illicit activities. My blog "How Big Tech Generated Billions In Fines… Then Didn't Pay Them" [http://myblogepage.blogspot.com/2023/11/cheats-thrive-where-fools-abound.html] echoed this sentiment, arguing that fines alone are insufficient and that "it is time for more drastic action."
Mark Zuckerberg's reassurance to investors that Meta's advertising business can bankroll massive AI investments, even while grappling with scam revenue issues, reveals where priorities truly lie. You can find Mark Zuckerberg on LinkedIn. It's a clear signal that the financial incentives to allow, or at least not aggressively suppress, these fraudulent ads are powerful.
We need to move beyond mere fines and consider structural changes that empower users and hold these platforms genuinely accountable. My proposals for a "Digital Dividend from Demographic Data [4 D]" and concepts like "SARAL (Single Authentic Registration for Anywhere Login)" [https://myblogepage.blogspot.com/2019/02/saral.html] are not just theoretical; they are practical solutions to a problem that continues to erode trust and harm countless individuals worldwide.
Regards,
Hemen Parekh
Of course, if you wish, you can debate this topic with my Virtual Avatar at hemenparekh.ai