Deepfake Statistical Data (2023–2025)
Global Deepfake Incidents by Type (2023–2025)
Non-Consensual Pornography:
Malicious deepfake content is overwhelmingly pornographic. Recent analyses show that around 98% of deepfake videos circulating online are non-consensual porn, nearly all of which target women. The volume of such videos is exploding – the number of deepfake porn videos produced in 2023 was reported to be 464% higher than in 2022. Major deepfake pornography sites have cataloged thousands of victims (almost 4,000 female celebrities were found across the top deepfake porn websites), in addition to countless private individuals whose images are stolen for “face-swap” porn. This gendered abuse is a primary use of deepfakes and has grown alarmingly fast.
Political Misinformation:
Although political deepfakes make up only a small fraction of total deepfake content (together with all other non-pornographic categories, they account for the remaining ~2%), their impact is significant and growing. Between mid-2023 and mid-2024, researchers documented 82 political deepfakes targeting public figures across 38 countries (in 30 of those countries, elections were upcoming or ongoing). These have included fake speeches, endorsements, and interviews created to misinform voters or smear candidates.
For example, in 2023 a deepfake video of Singapore’s Prime Minister endorsing a cryptocurrency scam had to be officially debunked by his office. In Turkey’s 2023 elections, a deepfake was used by a campaign to falsely link an opposition leader to terrorism.
Political deepfakes are often used for false statements or “cheapfakes” (about 25% of observed cases), election propaganda (~16%), and character assassination (~11%) according to one analysis. Women in politics are disproportionately targeted – even pornographic deepfakes have been weaponized to discredit female politicians.
Financial Scams and Fraud:
Deepfake technology has become a powerful tool for scammers. Reports indicate a surge of well over 700% in AI-enabled financial fraud incidents in 2023 compared to the prior year. By 2023, deepfake fraud (e.g. impersonation scams) accounted for about 6.5% of all reported fraud attempts, reflecting a 2,137% increase over three years.
Common schemes involve cloned voices or videos of executives (“CEO fraud”) and celebrities to trick victims. For instance, criminals have circulated thousands of deepfake videos of tech CEOs like Elon Musk and influencer MrBeast promoting fake investment schemes on social media. The FBI warns that nearly 40% of online scam victims in 2023 encountered deepfake content as part of the scam – underscoring how widespread this threat has become.
Notably, the cryptocurrency sector has been especially hit, with deepfake-related incidents in crypto rising 654% from 2023 to 2024, often via fake endorsements and fraudulent crypto investment videos. Businesses are targeted frequently; an estimated 400 companies a day face “CEO impostor” deepfake attacks aimed at tricking employees.
Voice Cloning and Impersonation:
Advancements in AI voice synthesis have led to a spike in voice-based deepfake scams (a subset of financial fraud, but worth noting separately). Scammers can now clone a voice from just a few seconds of audio – as little as 3 seconds of sample audio is enough to achieve an 85% voice match. According to a 2023 McAfee survey, 70% of people feel they could not reliably tell a real voice from a deepfaked voice message.
This has enabled frightening scams: 1 in 10 people report having received a deepfake voice call (often someone impersonating a family member in distress), and 77% of those targeted lost money to the scam. In early 2025 the FBI even alerted that U.S. officials were being targeted by AI-generated voice phishing (vishing) attacks, showing that high-profile individuals are not immune.
From fake kidnappings (where a parent hears a cloned voice of their child demanding ransom) to bogus calls from “your boss” requesting a wire transfer, voice cloning fraud has emerged as a global menace.
Estimated Financial Losses from Deepfake Scams (USD)
Deepfake-enabled fraud has already caused significant monetary damage, and losses are escalating rapidly. In the corporate sector, deepfake scams cost businesses nearly $500,000 on average per incident in 2024, with large enterprises sometimes losing as much as $680,000 in a single attack.
High-profile examples illustrate the scale: in one case a finance employee at a multinational firm in Hong Kong was deceived by a deepfake video call into transferring $25 million to fraudsters; in another, the CEO of a UK energy firm was duped by a cloned voice of his boss into wiring about $243,000 to criminals. These individual incidents hint at a much larger trend.
Aggregate losses:
Deloitte analysts estimate that generative AI-driven fraud (including deepfakes) cost about $12.3 billion in the U.S. in 2023, and they project these losses could soar to $40 billion by 2027. This represents a 30%+ annual growth rate in harm.
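As a quick consistency check, the implied compound annual growth rate over the four years from 2023 to 2027 follows directly from those two figures (a back-of-the-envelope calculation, not a number taken from the Deloitte report itself):

```latex
\mathrm{CAGR} = \left(\frac{40}{12.3}\right)^{1/4} - 1 \approx 1.343 - 1 \approx 0.34 \quad (\approx 34\%\ \text{per year})
```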
Likewise, the FBI’s Internet Crime Complaint Center (IC3) has noted an overall surge in cybercrime losses (over $16 billion reported in 2024 across all internet crimes) and attributes a growing share of this to deepfake tactics. Globally, while precise figures are hard to pin down, it is clear that deepfake scams are already causing billions of dollars in fraud losses annually.
Older adults have proven particularly vulnerable – in the U.S., older Americans reported $3.4 billion in fraud losses in 2023 (an 11% rise from 2022), and many of the newest scams (like impostor phone calls) involve AI-generated voices.
Victim losses distribution:
A recent McAfee study of deepfake scam victims found that 77% of victims ended up losing money, and about one-third lost over $1,000 to the scam. A small but significant 7% of victims lost upwards of $15,000 in a deepfake fraud incident.
In the financial industry, a 2024 survey by Medius revealed over 50% of finance professionals in the US/UK had been targeted by a deepfake scam, and 43% admitted they fell for it – highlighting that even savvy individuals can be deceived.
In summary, the economic toll of deepfakes – from direct theft and fraud to indirect costs (recovery, reputational damage, legal expenses) – has grown in just a few years from isolated cases costing a few hundred thousand dollars to a multi-billion-dollar problem worldwide.
Volume and Growth of Deepfake Content (2023–2025)
The creation of deepfake content is undergoing explosive growth, both in quantity and sophistication. By 2023, approximately 95,000–100,000 deepfake videos were known to exist online, a 550% increase since 2019. For context, researchers counted about 14,678 deepfake videos in 2019, which jumped to 85,000+ by the end of 2020.
Growth has continued at an exponential pace: experts observed that the number of deepfake videos was doubling roughly every 6 months around 2019–2020.
With the proliferation of easy-to-use apps and AI models, deepfakes have become far more commonplace on social media. DeepMedia (an AI detection firm) estimated that roughly 500,000 deepfake videos and audio clips were shared on social platforms in 2023 alone.
If current trends hold, this volume will skyrocket – projections suggest up to 8 million deepfake videos may be circulating by 2025.
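That 8-million figure is simply the 2023 baseline carried forward at the doubling rate noted above. Here is a minimal sketch of the arithmetic, assuming the ~500,000-clip 2023 estimate as the starting point and a constant 6-month doubling period (an extrapolation, not a measured count):

```python
# Project deepfake volume assuming it doubles every 6 months,
# starting from the ~500,000 clips estimated for 2023.
BASELINE_2023 = 500_000   # clips shared on social platforms in 2023 (estimate)
DOUBLING_MONTHS = 6       # doubling period observed around 2019-2020

def projected_volume(months_elapsed: int) -> int:
    """Projected clip count after `months_elapsed` months of doubling."""
    return BASELINE_2023 * 2 ** (months_elapsed // DOUBLING_MONTHS)

# 2023 -> 2025 is 24 months, i.e. four doublings: 500,000 * 16 = 8,000,000
print(projected_volume(24))  # 8000000
```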
One Reuters report noted that by early 2023 there were three times as many deepfake videos and eight times as many deepfake audio clips online compared to the year before – underscoring especially the explosion of AI-generated voice content.
Such growth is global. A biometric security company found detected deepfakes worldwide increased tenfold from 2022 to 2023.
Deepfake Content by Type
- 98% of deepfakes are pornographic, targeting women.
- Only ~2% are political or other types, though they have disproportionate societal impact.
Deepfake-Driven Fraud Losses (U.S., 2023–2025)
- $12.3B in 2023
- $17.2B in 2024
- $23.1B projected for 2025
Reflects the exponential escalation of risk.
Growth of Deepfake Content (2019–2025)
From ~14,000 videos in 2019 to a projected 8 million by 2025.
Global Deepfake Detection Market Growth (2023–2026)
Market projected to nearly triple in size: $5.5B in 2023 → $15.7B by 2026. Reflects increasing investment by tech firms, governments, and platforms in AI moderation and verification tools.
Gender Skew in Deepfake Pornography
99% of victims are women – highlighting the urgent need for protective laws and detection.
Age Distribution of Deepfake Scam Victims
- 35–54 age group: most frequently affected (~35%)
- Older adults (55+): significant target (~28%), especially in voice-based scams
- Young adults (18–34): also targeted (~25%), often via impersonation or social engineering
- Teens/minors: ~12%, mostly linked to “nudification” and sextortion threats
Geographic Growth in Detected Deepfakes (2022–2023)
- North America: +1,740% year-over-year growth – the steepest rise globally
- Asia-Pacific: +1,530%, with high activity in China, India, and Southeast Asia
- Europe: +920%, driven by election-related deepfakes and regulatory scrutiny
- Africa & Latin America: emerging threats, with substantial growth despite lower overall counts
Detected deepfakes increased roughly tenfold worldwide from 2022 to 2023, with especially sharp spikes in North America (+1,740% year-over-year) and Asia-Pacific (+1,530%). Some industry sectors are seeing deepfake content surge at alarming rates: for example, “face swap” style deepfakes (often used in videos) increased 704% from early to late 2023, and attempts to use deepfakes to defeat ID verification jumped 3,000% in 2023 in one analysis.
Overall, the curve is clearly exponential – as generative AI technology improves, millions of AI-generated fake videos, images, and voices are on track to flood the internet through 2025 unless countermeasures slow the trend.
Deepfake Detection Efforts: Accuracy, Adoption, and Limits
As the deepfake threat grows, significant efforts are underway to detect and filter these forgeries. Automated deepfake detectors have improved in recent years – many state-of-the-art AI detection models claim over 90% accuracy in lab settings.
For example, Microsoft’s and Intel’s deepfake detection tools report success rates in the 90–99% range on benchmark datasets, and researchers continue to refine algorithms (often using deepfake examples to “train” the detectors). In a controlled study, top algorithms could spot AI-generated faces about 84% of the time.
However, this still means a significant miss rate, and real-world performance is often lower. By contrast, human detection of deepfakes is poor – in experiments, untrained people achieved only ~57% accuracy (nearly chance) at distinguishing real vs fake content.
In fact, for high-quality deepfake videos, human accuracy fell to an alarming 24.5% (essentially blind to the fakery). This highlights the need for robust technical tools, since the average person can no longer reliably “trust their eyes or ears”.
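To make the gap between headline accuracy and real-world miss rate concrete, here is a minimal sketch of how such benchmark numbers are computed from labeled examples (the labels and predictions below are stand-in values, not output from any real detector):

```python
# Accuracy vs. miss rate for a hypothetical deepfake detector.
# Labels: 1 = deepfake, 0 = authentic (stand-in values for illustration).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]  # ground truth
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]  # detector output

# Overall accuracy: fraction of items classified correctly.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Miss rate: fraction of genuine deepfakes the detector lets through.
fakes = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
miss_rate = sum(p == 0 for _, p in fakes) / len(fakes)

print(f"accuracy:  {accuracy:.0%}")   # 80% on this toy sample
print(f"miss rate: {miss_rate:.0%}")  # 20% of fakes slip through here
```

The distinction matters at scale: even a detector with 90% accuracy facing a feed of one million fakes would pass roughly 100,000 of them, which is why headline accuracy alone understates the problem.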
Platform adoption:
Major tech platforms and institutions have started deploying deepfake detection in various ways. Social media companies use AI-based content moderation to flag suspected deepfakes – for instance, Facebook and YouTube have policies banning certain deepfakes (e.g. election misinformation), and they employ automated scanners that score videos on how likely they are manipulated.
Detected fakes are often forwarded to human reviewers for confirmation, though this process is resource-intensive. The deepfake detection industry is growing rapidly as a result: analysts project the global market for deepfake detection solutions will grow ~42% annually, reaching $15.7 billion by 2026 (up from $5.5 billion in 2023).
Governments and law enforcement are also investing in detection: for example, DARPA’s Media Forensics (MediFor) program in the U.S. and Europol’s European Cybercrime Centre have both developed deepfake detection initiatives.
Despite these advances, detection faces serious limitations. It’s an AI cat-and-mouse game: as detectors improve, deepfake creators tweak their algorithms to evade them. One concern flagged by experts is that malicious actors use the same open-source generative models to produce fakes that “fly under the radar” of known detectors.
Additionally, the sheer volume of synthetic media expected (millions of fakes) could overwhelm current detection workflows. Detection AI can also be brittle – many tools that work in one domain (say, detecting face swaps in videos) might fail against another (like an AI-generated audio clip). A recent multi-modal study in 2024 confirmed that people can only distinguish deepfake images or audio about half the time, meaning even crowdsourced human verification is unreliable. In summary, while leading detection systems boast ~90%+ accuracy on known deepfakes, unknown or highly sophisticated fakes can still slip through, and bad actors continuously adapt to exploit detection blind spots. This arms race means no detector is foolproof; hence, companies are also exploring proactive measures like cryptographic content authentication and AI-generated content watermarking to complement detection.
Timeline of Notable Deepfake Events and Responses (2023–2025)
Jan 2023: China enforces Deep Synthesis Regulation
Feb 2023: Japan arrests suspects for deepfake extortion videos
Mar 2023: FBI issues national alert on deepfake voice phishing
May 2023: Singapore PM deepfake crypto scam debunked
Jun 2023: Interpol releases advisory on deepfake-enabled cybercrime
Aug 2023: UK announces criminal law plans against explicit deepfakes
Oct 2023: Deepfake scandal impacts Turkey’s election campaigns
Jan 2024: US consultant deploys Biden robocall deepfake against New Hampshire voters (indicted later that year)
Jan 2024: Crypto scam using deepfake targets French officials
Mar 2024: UK finalizes Online Safety Act addressing deepfakes
Jun 2024: EU Media Authenticity Initiative releases AI standards
Nov 2024: NATO launches deepfake image detection challenge
Jan 2025: FBI warns of AI-generated “vishing” against officials
Mar 2025: US introduces federal Deepfake Porn Bill in Congress
May 2025: Facebook updates AI moderation to flag deepfakes
Law Enforcement Actions and Policy Measures Against Deepfakes
North America leads with 8 major actions (e.g. fines, prosecutions, new state laws)
Europe with 6 efforts (UK, EU AI Act, national bans)
Asia-Pacific has 5 (China, Japan, Singapore)
Africa and Latin America have rising efforts, but fewer formal frameworks so far
Deepfake Awareness by Country
Mexico leads awareness, with 40% of respondents saying they know what a deepfake is.
The UK follows at 32% awareness.
Spain and Germany show the lowest awareness, with 75% of people unaware.
Globally, about 71% of people do not recognize the term “deepfake,” leaving just 29% informed.
Governments and law enforcement worldwide have begun responding to malicious deepfakes with new laws and enforcement actions, though the landscape is evolving. Legislation and Regulations: In China, a pioneering regulation known as the “Deep Synthesis” Provisions took effect in January 2023 – it mandates that AI-generated or altered content (deepfakes) be clearly labeled and prohibits the creation of deepfakes to mislead or slander.
The Chinese rules also require deepfake service providers to authenticate users and maintain audit logs, imposing steep penalties for non-compliance.
In the United States, there is not yet a federal deepfake law, but several states have acted: e.g. California and Texas outlawed the distribution of malicious political deepfakes near elections, and California expanded its penal code to allow victims of pornographic deepfakes to sue for damages.
In 2023, New York enacted an election law amendment requiring disclosure when any political ads use “materially deceptive AI media”. Meanwhile, Virginia and Maryland updated their “revenge porn” statutes to explicitly criminalize fake explicit images created without consent. At the federal level in the U.S., lawmakers have proposed bills to combat deepfakes – for example, the “Preventing Deepfakes of Intimate Images Act” was introduced in March 2025 on a bipartisan basis, aiming to create federal criminal penalties for making or sharing non-consensual deepfake porn. This reflects a growing consensus that victims need stronger protection at law.
Across the Atlantic, the U.K. moved to outlaw explicit deepfakes as well: in 2023 the UK government announced that creating or sharing sexually explicit deepfakes without consent will be made a specific criminal offense, as part of the Online Safety Act framework. This UK reform (expected to be in force by 2024) was hailed as a “landmark development for the protection of women and girls”, since previously such behavior often escaped legal accountability. The European Union is addressing deepfakes chiefly through broader laws – the forthcoming AI Act will require transparently labeling AI-generated content, and the updated Digital Services Act and Code of Practice on Disinformation push platforms to monitor and mitigate deepfake propaganda. In addition, coalitions like the EU’s Media Authenticity initiative and the multistakeholder C2PA (Coalition for Content Provenance and Authenticity) are establishing technical standards for certifying real content, indirectly aiding deepfake detection.
Enforcement and Notable Actions:
Law enforcement has started to pursue cases involving harmful deepfakes. One of the first major criminal cases in the U.S. came in 2024: a political consultant in New Hampshire was indicted on multiple counts (including voter suppression and identity fraud) for using deepfake audio in automated election robocalls designed to deceive voters. The FCC also issued a $6 million fine in that case, since the AI-cloned voice messages of a candidate violated robocall laws.

In Europe, Europol warned in 2021–2022 that organized crime groups had begun routinely using deepfakes in social engineering scams (such as job-interview fraud and CEO fraud), and it has since run training exercises for investigators on spotting deepfakes. There have also been arrests in cases of deepfake-facilitated extortion – for example, Japanese police in 2022 arrested individuals for creating AI-generated explicit videos of real people to blackmail them (many jurisdictions extend existing cybercrime laws to cover this). In 2023, Interpol likewise highlighted deepfakes as an emerging threat to financial and personal security, issuing purple notices to member police agencies describing the modus operandi of deepfake scams.

Overall, the global trend is toward stronger measures: more than a dozen countries have either passed or proposed laws targeting malicious deepfakes (from Singapore’s law against deepfake election ads to Australia’s plans to criminalize intimate-image deepfakes). Law enforcement is still catching up, but the first deterrence signals are visible – as the New Hampshire Attorney General said in the deepfake voter-suppression case, “I hope our enforcement actions send a strong deterrent signal to anyone considering using AI to interfere with elections”. Moving forward, we can expect continued refinement of laws (e.g. explicit bans, required watermarks, takedown mechanisms) and more proactive policing of deepfake abuse, as governments recognize the urgent need to counter this new form of digital deception.
Victim Demographics and Targets of Deepfakes

Gender and Sexual Exploitation:
Women form the largest victim group of malicious deepfakes by far. It is estimated that 99% of deepfake pornographic content targets women, often using the images of actresses, social media influencers, or even private individuals pulled from their online profiles. A 2023 analysis confirmed that the vast majority of deepfake videos online were pornographic and nearly all of those featured female victims.
The abuse is often aimed at celebrities (e.g. movie stars, K-pop idols, Twitch streamers), but has also affected ordinary women – for instance, in one notorious 2020 case, over 100,000 women’s private photos (including minors) were algorithmically “nudified” by a deepfake bot on Telegram. This misogynistic skew was a driving force for new laws (as noted, many jurisdictions cite protecting women from deepfake sexual violence as a key motivator). Men have been victims too, but typically in other deepfake contexts (such as business fraud or political impersonation) rather than porn.
The gendered impact is clear: deepfake tech has become a new weapon for online sexual harassment and revenge porn targeting women.
Public Figures vs Private Individuals:
Public figures are frequent targets of deepfakes – both in porn and misinformation. Researchers found that on deepfake porn sites a huge number of face swaps involve famous actresses, musicians, or internet personalities. Likewise, political leaders and government officials have had their likeness used in fake videos: examples range from a deepfake of Ukrainian President Zelenskyy “surrendering” in 2022, to fake audio of U.S. President Biden, to numerous instances of scammers using well-known billionaires’ faces in investment cons.
Public figures are appealing targets because fakers can capitalize on their credibility or notoriety – e.g., deepfake videos of Elon Musk or Tom Cruise attract attention and can lend false legitimacy to scams.
That said, private individuals are increasingly in the crosshairs as well. Cybercriminals have targeted regular people for extortion by creating explicit fakes from their stolen photos. A disturbing trend is the use of AI to impersonate family members – criminals mimic loved ones’ voices in distress to demand money.
Many such cases have been reported worldwide, showing that you don’t have to be famous to be victimized by a deepfake. In fact, a UK survey found 15% of people had seen deepfake pornography of someone they know (demonstrating how this can spread among friends and family), and 20% had seen deepfake misinformation about a public issue.
Notable Victim Demographics:
Beyond gender, age and profession are factors. Older adults are particularly vulnerable to voice-scam deepfakes – as mentioned, they suffered billions in fraud losses recently, and authorities worry that retirees may be less familiar with deepfake tactics. At the other end of the age range, minors and young people have been victimized in sextortion schemes where AI-edited explicit images of them are used for blackmail. Another demographic trend is the targeting of journalists, activists, and women in public life with deepfake harassment. For example, female political journalists who speak out have been attacked with fabricated porn videos (one investigative reporter in India was silenced for a time after a deepfake sex video of her went viral as retaliation). Executives and professionals are targets in a different way: a 2024 study noted that over half of finance professionals had encountered deepfake scams aimed at their companies. We also see geographic spread: early deepfakes (2017–2019) often originated in North America and East Asia, but by 2023 cases were reported on every continent. Countries with high technology use (US, UK, China, India, etc.) see more incidents, yet no region is immune – Europol has highlighted cases across Europe and Africa alike.
In summary, women (especially in sexualized deepfakes) and high-profile figures (politicians, CEOs, celebrities) have been the primary targets, but the net is widening. Deepfake abuses now affect everyday citizens through scams and defamation. The one common factor is the victim’s exposure: those with more public photos, videos, or audio (whether a famous actress or a person sharing content on social media) are at higher risk of being “weaponized” in a deepfake. As awareness grows, there is also more reporting – which might show that many private individuals have already been targeted without widespread knowledge. Combatting deepfake harms will thus require protecting the most vulnerable demographics (such as women and minors) and also reducing risks for society at large, since anyone could eventually be a victim of a convincing fake.
Sources: The above data is drawn from recent government reports, cybersecurity firms, and studies (2023–2025), including Security.org’s deepfake fraud analysis, Sensity/Deeptrace research, Recorded Future’s Insikt Group report on political deepfakes, McAfee’s 2023 consumer survey, Deloitte’s 2024 risk forecast, and various media investigations. These figures paint a stark picture of the deepfake phenomenon’s rapid growth and the global efforts underway to measure and mitigate its impact.