AI-powered online abuse: How AI is amplifying violence against women and what can stop it
We already live in a world where at least one in three women experience physical or sexual violence. Enter a host of extremely powerful AI tools, trained on existing gender biases, now enabling that violence to spread further, faster, and in more complex ways. It’s a perfect storm.
While technology-facilitated violence against women and girls has been intensifying – with studies showing 16 to 58 per cent of women worldwide impacted – artificial intelligence is creating new forms of abuse and amplifying existing ones at alarming rates.
The numbers are stark: one global survey found that 38 per cent of women have personal experience of online violence, and 85 per cent of women online have witnessed digital violence against others. This isn't just about what happens on screens. What happens online easily spills into real life and escalates. AI tools make it easier for perpetrators to access, blackmail, stalk, threaten, and harass women, with significant real-world consequences – physical, psychological, professional, and financial.
Consider this – many deepfake tools, developed by male teams, are not even designed to work on images of a man’s body.
UN Women interviewed feminist activist and author of The New Age of Sexism, Laura Bates, and AI and technology policy expert Paola Gálvez-Callirgos to learn what’s at stake.
What is AI-facilitated violence against women?
AI-facilitated violence against women refers to acts of digital abuse generated and spread by AI technology, resulting in physical, sexual, psychological, social, political, or economic harm, or other infringements of women’s rights and freedoms.
The scale, speed, anonymity and ease of communication in digital spaces create an enabling context for this violence. Perpetrators feel they can get away with it, victims often do not know whether or how they can get help, and legal systems are playing catch-up with the rapid changes in technology.
According to feminist activist and author Laura Bates, the best way to address the risk of digital and AI-powered abuse is “to recognise that the online-offline divide is an illusion.”
“When a domestic abuser uses online tools to track or stalk a victim, when abusive pornographic deepfakes cause a victim to lose her job or access to her children, when online abuse of a young woman results in offline slut-shaming and she drops out of school” – these are just some examples that show how easily and dangerously digital abuse spills into real life.
Is AI creating new forms of violence against women?
Yes. AI is both creating entirely new forms of abuse and dramatically amplifying existing ones. The scale and undetectability of AI create more widespread and significant harm than traditional forms of technology-facilitated violence.
Some new AI-powered forms of abuse against women include:
- Image-based abuse through deepfakes: According to research, 90 to 95 per cent of all online deepfakes are non-consensual pornographic images, with around 90 per cent depicting women. The total number of deepfake videos online in 2023 was 550 per cent higher than in 2019. Deepfake pornography makes up 98 per cent of all deepfake videos online, and 99 per cent of the individuals targeted are women.
- Enhanced impersonation and sextortion: AI enables interactive deepfakes that impersonate humans and strike up online conversations with women and girls who don't know they're interacting with a bot. The practice of "catfishing" on dating sites can now be scaled up and rendered more realistic as AI bots adapt to simulate human conversation, luring women and girls into revealing private information or meeting up offline.
- Sophisticated doxing campaigns: Natural Language Processing tools can identify vulnerable or controversial content in women's posts – such as discussing sexual harassment or calling out misogyny – making them easier targets for doxing campaigns. In some cases, AI is used to craft personalized, threatening messages using a victim's own words and data, escalating psychological abuse. Read Ljubica’s story.
What are deepfakes and why do they target women?
Deepfakes are digitally altered images, audio, or videos created using AI that appear as though someone has said or done something they never actually did. While the technology can be used for entertainment or creative purposes, deepfakes are increasingly misused as a form of digital abuse – for example, to create non-consensual sexual images, spread disinformation, or damage a person’s reputation.
Deepfakes are increasingly and overwhelmingly targeting women. Laura Bates weighed in on why – “In part, this is about the root problem of misogyny – this is an overwhelmingly gendered issue, and what we're seeing is a digital manifestation of a larger offline truth: men target women for gendered violence and abuse.”
“But it's also about how the tools facilitate that abuse”, adds Bates.
AI technology has made the tools user-friendly and one doesn’t need much technical expertise to create and publish a deepfake image or video. In this context, the rise of "sextortion" using deepfakes – in which non-consensual, fabricated images are shared widely on pornographic sites to harass women – is a growing concern.
AI-generated deepfake pornographic images, once disseminated online, can be replicated multiple times, shared and stored on privately-owned devices, making them difficult to locate and remove.
What should someone do in the first 24 hours if a deepfake or doctored image of them appears online?
There is no right or wrong way to respond, and experts stress that it is vital to hold perpetrators accountable – that includes the people who create abusive content, the advertisers and platforms that host it, and the people who use these tools. But if you are a victim of such abuse, experts recommend contacting organizations that have the most up-to-date information on how to help.
Here are some resources, although it’s not an exhaustive list:
- Stop Non-Consensual Intimate Image Abuse (StopNCII.org) helps victims of revenge porn and prevents intimate images from being shared online. If your intimate image is in the hands of someone who could misuse it, StopNCII.org can generate a hash (digital fingerprint) of the image, which participating platforms use to detect and block attempts to share it (see the sketch after this list for how such a fingerprint works).
- Chayn Global Directory offers a curated list of organisations and services that support survivors of gender-based violence, both online and in person.
- The Online Harassment Field Manual – Help Organisations Directory is a specialist directory listing regional and international organisations that help journalists, activists, and others facing online abuse, offering digital safety advice, referrals, and emergency contacts.
- Cybersmile Foundation provides a global service that offers emotional support and signposts users experiencing cyberbullying or online abuse to helpful resources.
- Take It Down assists with removing nude or sexually explicit images of people under 18 from participating online platforms.
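The “hash” mentioned in the StopNCII.org entry above is a digital fingerprint computed from the image itself, so the image never has to leave the victim’s device – only the fingerprint is shared with platforms. As a minimal sketch of the idea (StopNCII.org uses its own on-device hashing scheme; the open-source imagehash library below is an illustrative stand-in, not the actual tool):

```python
# Illustrative only: computing a perceptual hash ("digital fingerprint")
# of an image. Similar images produce similar hashes, so platforms can
# block re-uploads by comparing fingerprints, never the images themselves.
from PIL import Image  # pip install pillow imagehash
import imagehash

def fingerprint(path: str) -> str:
    """Return a short hex digest that near-identical images will share."""
    return str(imagehash.phash(Image.open(path)))

print(fingerprint("photo.jpg"))  # e.g. 'c7c8f8d8c8c8c8c8' (value depends on the image)
```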
For more tips, see our explainer Online safety 101: What every woman and girl should know.
Are there laws that protect women from AI-generated abuse?
Fewer than half of all countries have laws that prosecute online abuse, and where such laws do exist, enforcement is weak.
Additionally, there is limited reporting and access to justice, and tech platforms lack accountability. The transnational nature of AI-generated digital abuse further drives impunity.
UN Women’s call to action for the 16 Days of Activism includes the need for laws and their enforcement to ensure perpetrators’ accountability, together with better support for victims and survivors and digital literacy for women and girls.
Laws are beginning to adapt to emerging trends, although they're struggling to keep pace with rapid developments in generative AI. Some examples include:
- The UK Online Safety Act (passed in 2023) made it illegal to share explicit images or videos that have been digitally manipulated. However, the Act does not prevent the creation of pornographic deepfakes or sharing them where intent to cause distress cannot be proved.
- The EU's AI Act (2024) promotes transparency by requiring the creators of deepfakes to inform the public about the artificial nature of their work and providers of general-purpose AI tools to tag AI-generated content.
- In Mexico, Ley Olimpia recognises and punishes digital violence, and has inspired similar legislation in other countries in the region – Argentina, Panama and Uruguay are expected to follow.
- Australian legislation is being introduced to strengthen laws targeting the creation and non-consensual dissemination of sexually explicit material online, including material created or altered using generative AI and deepfakes.
- One recommended approach is global cooperation and sector-wide regulation mandating that AI tools meet a safety and ethics standard before being rolled out to the public. The Council of Europe's framework convention on artificial intelligence offers a model. The UN Global Digital Compact and the recently established UN High-Level Advisory Body on AI are other examples of such coordinated efforts.
Paola Gálvez-Callirgos, expert in AI and digital technology policy and governance, cautions: “There isn’t a one-size-fits-all model for AI governance. Policymakers must consider that national context and culture matter.”
However, she believes there are some basic measures all countries can take: criminalizing all forms of technology-facilitated violence against women and investing in institutional capacity so that enforcement is possible.
Another loophole she recommends closing through legislation is the lack of content provenance – the ability to trace the history of digital assets. “The producers of synthetic media tools must attach verifiable content credentials (in-file metadata or robust watermark/provenance per C2PA-style standards) that allow platforms and investigators to detect origin and manipulation”, she explains. “This will support automated filtering and make it harder for perpetrators to plausibly deny origin.”
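To make “verifiable content credentials” concrete, here is a deliberately simplified Python sketch of the idea: a record that binds a file’s hash to the tool that generated it, signed so that tampering can be detected. Real C2PA manifests are embedded in the file and use certificate-based signatures; the key, field names, and functions below are illustrative assumptions, not the standard itself.

```python
# Conceptual sketch of content provenance: attach a signed, verifiable
# record to a piece of media so platforms can check where it came from.
import hashlib, hmac, json

SIGNING_KEY = b"tool-vendor-secret"  # hypothetical key held by the AI tool's maker

def make_credential(media_bytes: bytes, generator: str) -> dict:
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the record to this exact file
        "generator": generator,                             # e.g. "acme-image-model-v2"
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes: bytes, record: dict) -> bool:
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    # Both the signature and the file hash must match for the credential to hold.
    return hmac.compare_digest(sig, expected) and \
        claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
```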
Gálvez-Callirgos is part of UN Women’s AI School – a free, invitation-based course currently offered to women’s rights organizations under the ACT to end violence against women programme – where participants learn to use AI tools ethically for advocacy, influence AI policy development, and leverage AI responsibly to prevent and respond to violence against women. The course also includes selected expert talks and innovation labs open to the public.
“At the core of successful AI adoption is trust. Innovation that builds trust, is inclusive, and prevents harm is ultimately more sustainable and widely adopted than innovation that undermines those values”, says Gálvez-Callirgos.
What should technology companies do to prevent AI-powered online abuse?
Technology companies have a critical role to play in preventing and stopping AI-generated digital violence. They should:
- Make pornographic deepfake or "nudify" tools inaccessible to consumers and children.
- Refuse to host images or videos created by these tools.
- Develop clear, easily accessible reporting features, and respond swiftly and effectively when victims report abusive content.
- Implement proactive solutions for identifying falsified content, including auto-checking for algorithmically detectable watermarks (a toy illustration follows this list).
- Mandate tagging or identification of AI-generated content.
- Recruit more women as researchers and builders of technology and work with women’s organizations in designing AI tech.
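What “algorithmically detectable watermark” means in practice: a signal embedded in content that software can check for, even when humans cannot see it. Below is a deliberately minimal Python sketch (using the Pillow imaging library) of the crudest version – a fixed bit pattern written into pixel least-significant bits. Production watermarks are far more robust to cropping and re-encoding; the function names and bit pattern here are assumptions for illustration.

```python
# Toy invisible watermark: write a fixed bit pattern into the least
# significant bit of the red channel of the first few pixels, then
# check for it later. Conceptual only; real schemes survive re-encoding.
from PIL import Image  # pip install pillow

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    px = img.load()
    for i, bit in enumerate(MARK):
        r, g, b = px[i, 0]
        px[i, 0] = ((r & ~1) | bit, g, b)  # overwrite the red channel's lowest bit
    img.save(out_path, "PNG")  # lossless format, so the bits survive saving

def detect(path: str) -> bool:
    px = Image.open(path).convert("RGB").load()
    return [px[i, 0][0] & 1 for i in range(len(MARK))] == MARK
```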
From misogynistic online content to real-life harm: the manosphere, digital abuse, and how to talk to boys and men
“There is massive reinforcement between the explosion of AI technology and the toxic extreme misogyny of the manosphere”, says Laura Bates. “AI tools allow the spread of manosphere content further, using algorithmic tweaking that prioritizes increasingly extreme content to maximize engagement.”
The manosphere is a growing corner of the digital world, comprising a loose network of online communities that claim to address men’s struggles – dating, fitness, or fatherhood, for example – but often promote harmful advice and misogynistic attitudes. This content is gaining traction – two-thirds of young men regularly engage with masculinity influencers online – and manosphere content not only normalizes violence against women and girls but has growing links to radicalization and extremist ideologies. Learn more about the manosphere and what you can do.
Aided by the ease of AI tools, online users can now generate and spread propaganda materials, including false information and statistics, to recruit and groom vulnerable young men into misogynistic and extremist ideologies.
It's vital to start conversations earlier, say experts, because prevention is much more effective than deradicalization. And a big part of prevention is providing young people with digital literacy and teaching them source skepticism – who is saying what, and why, and how to verify the credibility of information.
Some tips for talking to men and boys who are consuming misogynistic and abusive content online:
- Use a calm, supportive, non-judgmental framework.
- Look at potential fears or concerns that may have been stoked by the content.
- Work together to find unbiased information to address those fears.
- Talk about the many significant ways that men and boys are negatively impacted by patriarchy and gender stereotypes.
- Question whether the online personalities who claim to champion men and boys are actually supporting them or contributing to harm.
- Discuss who profits from the content and how.
Male role models have a vital part to play in this work – conversations with a father, uncle, brother, male teacher, sports coach, or youth worker may be more effective in helping boys re-examine this content.
What are three habits that can make everyone safer online today?
Our digital experts recommend the following actions:
- Educate – yourself and others about digital literacy, source skepticism, and the realities of AI-facilitated abuse. Understanding how these tools work and their gendered impact is the first step toward meaningful change.
- Stay safe – Use strong, unique passwords, turn on two-factor authentication on all your accounts (the sketch after this list shows how the one-time codes behind authenticator apps work), use private profiles, and periodically check your privacy settings on all social media platforms and apps to protect your personal information. For more tips on how to spot signs of online abuse, read Online Safety 101.
- Take action – Demand accountability from the tech platforms and companies creating and profiting from AI tools, and follow and support feminist-led campaigns working on the issue. Amplify their content, sign their petitions, or contact your elected representatives to show how much this matters to you.
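For the curious, the six-digit codes that two-factor authentication apps display are time-based one-time passwords (TOTP, standardized in RFC 6238), derived from a secret shared once between you and the service plus the current time. A minimal sketch using the open-source pyotp library (the secret below is a throwaway example):

```python
# How authenticator-app codes work: a time-based one-time password
# derived from a shared secret and the current 30-second time window.
import pyotp  # pip install pyotp

secret = pyotp.random_base32()  # created once; stored by both you and the service
totp = pyotp.TOTP(secret)       # 30-second time steps by default

code = totp.now()               # the code your authenticator app would show right now
print(code)
print(totp.verify(code))        # the service validates it the same way -> True
```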
“The key is to move toward accountability and regulation – creating systems where AI tools must meet safety and ethics standards before being rolled out to the public, where platforms are held accountable for the content they host, and where the responsibility for prevention shifts from potential victims to those creating and profiting from harmful technologies”, concludes Bates.
For those wondering if regulations and governance of AI would make it less innovative, Gálvez-Callirgos has a different take: “The dichotomy of ‘regulation or innovation’ is a falsehood learned from the unregulated evolution of social media over the past decade”, she says.
AI governance doesn’t mean prohibiting invention, but rather putting in place safeguards that channel innovation toward outcomes that are beneficial to the whole society, she concludes.
16 Days of Activism: #NoExcuse for online abuse
Online and digital spaces should empower women and girls. Yet for millions of women and girls, the digital world has become a daily minefield of harassment, abuse, and control.
From 25 November to 10 December, join the UNiTE campaign to learn about digital abuse against women and girls and take action to stop it.