How AI amplifies online violence
(Note that these are ideas plucked from the 16 Days of Activism website, though re-written, at least in part. I try to let you know when it’s me talking and when I’m quoting the website, but if you want to see more stories, arguably better written, go to the website linked above.)
Back in 2016, Microsoft created Tay, “the AI with zero chill”, and released it on Twitter, designed to interact with people. It was meant to be a fun experiment in early artificial intelligence, but it rather quickly went off the rails, as a group of Twitter users began tweeting highly offensive, racist, and sexist messages at Tay.
As Tay was deliberately designed to learn from its interactions with people, it soon began to respond using the same language, and began to deny the Holocaust. Within 16 hours, Tay was turned off.
Madhumita Murgia of The Telegraph described Tay as being "artificial intelligence at its very worst—and it's only the beginning".
While companies have worked to impose guardrails on AI, the fact is, some people see guardrails as a fun challenge to get around, and have been training AI tools to pick up on and amplify existing gender biases.
According to the UN Women website, this isn't just about what happens on screens. “What happens online spills into real life easily and escalates. AI tools target women, enabling access, blackmail, stalking, threats and harassment with significant real-world consequences – physically, psychologically, professionally, and financially.”
“Consider this – developed by male teams, many deepfake tools are not even designed to work on images of a man’s body.”
According to feminist activist and author Laura Bates, the best way to address the risk of digital and AI-powered abuse is “to recognise that the online-offline divide is an illusion.”
“When a domestic abuser uses online tools to track or stalk a victim, when abusive pornographic deepfakes cause a victim to lose her job or access to her children, when online abuse of a young woman results in offline slut-shaming and she drops out of school” – these are just some examples that show how easily and dangerously digital abuse spills into real life.
AI is both creating new forms of abuse and amplifying existing ones. The scale and undetectability of AI create more widespread and significant harm than traditional forms of technology-facilitated violence.
Some new AI-powered forms of abuse against women include, says the UN:
Image-based abuse through deepfakes: According to research, 90 to 95 percent of all online deepfakes are non-consensual pornographic images, with around 90 percent depicting women. The total number of deepfake videos online in 2023 was 550 percent higher than in 2019. Deepfake pornography makes up 98 percent of all deepfake videos online, and 99 percent of the individuals targeted are women.
Enhanced impersonation and sextortion: AI enables the creation of interactive deepfakes impersonating humans and beginning online conversations with women and girls who don't know they're interacting with a bot. The practice of "catfishing" on dating sites can now be scaled and rendered more realistic as AI bots adapt to simulate human conversations, luring women and girls into revealing private information or meeting up offline.
Sophisticated doxing campaigns: Natural Language Processing tools can identify vulnerable or controversial content in women's posts – such as discussing sexual harassment or calling out misogyny – making them easier targets for doxing campaigns. In some cases, AI is used to craft personalized, threatening messages using a victim's own words and data, escalating psychological abuse.
Say Cheese
Deepfakes are digitally altered images, audio, or videos created using AI to make it appear that someone has said or done something they never actually did.
A classic example is Jordan Peele’s Obama Deep Fake.
Deepfakes can be fun, especially when you know what’s happening. The trouble is, things have gotten to a point where it’s hard to tell. The Peele video above is from seven years ago, and you can tell that something isn’t quite right.
In the intervening time, AI has just got better and better. And with over 90 percent of deepfakes being non-consensual sexual images, AI makes it easier to spread disinformation or damage a person’s reputation.
Deepfakes are increasingly and overwhelmingly targeting women. Laura Bates says “In part, this is about the root problem of misogyny – this is an overwhelmingly gendered issue, and what we're seeing is a digital manifestation of larger offline truth: men target women for gendered violence and abuse.”
“But it's also about how the tools facilitate that abuse”, adds Bates.
AI technology has made the tools user-friendly and one doesn’t need much technical expertise to create and publish a deepfake image or video. In this context, the rise of "sextortion" using deepfakes – in which non-consensual, fabricated images are shared widely on pornographic sites to harass women – is a growing concern.
AI-generated deepfake pornographic images, once disseminated online, can be replicated multiple times, shared and stored on privately-owned devices, making them difficult to locate and remove.
If you are a victim of a deepfake, know that there is no right or wrong way to respond. Canadian laws have been moving to prevent deepfakes and help women who are victims, but it can be hard to bring perpetrators to justice. If someone in Russia makes a deepfake of someone in Canada, there are no international laws that can help.
Here are some resources provided by the UN, although it’s not an exhaustive list:
Stop non-consensual image abuse (StopNCII.org) helps victims of revenge porn and prevents intimate images from being shared online. If your intimate image is in the hands of someone who could misuse it, StopNCII.org can generate a hash (digital fingerprint) of the image, which participating platforms can use to detect and block it from being shared.
Chayn Global Directory offers a curated list of organisations and services that support survivors of gender-based violence, both online and in person.
The Online Harassment Field Manual – Help Organisations Directory is a specialist directory listing regional and international organisations that help journalists, activists, and others facing online abuse, offering digital safety advice, referrals, and emergency contacts.
Cybersmile Foundation provides a global service that offers emotional support and signposts users experiencing cyberbullying or online abuse to helpful resources.
Take It Down assists with removing online nude or sexually explicit images of people who were under 18 when the images were taken.
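For the curious, the “digital fingerprint” StopNCII generates is a hash. StopNCII itself uses on-device perceptual hashing (so the image never leaves the victim’s device, and slightly altered copies can still be matched); the basic idea of a fingerprint, though, can be sketched with an ordinary cryptographic hash in Python. The function name here is my own, not StopNCII’s:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest that identifies this exact file.

    A real service like StopNCII uses perceptual hashing, which also
    matches slightly altered copies; a cryptographic hash like SHA-256
    only matches byte-identical files, but it illustrates the key idea:
    the platform stores the fingerprint, never the image itself.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# Identical images produce the same fingerprint...
assert fingerprint(b"same image bytes") == fingerprint(b"same image bytes")
# ...while any change, however small, produces a completely different one.
assert fingerprint(b"same image bytes") != fingerprint(b"same image bytes!")
```

Participating platforms can then compare the fingerprints of uploaded images against the shared list and block matches, without ever receiving the original image.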
Can we stop it at its source?
Technology companies have a critical role to play in preventing and stopping AI-generated digital violence. Again, according to the UN, they should:
Make pornographic deepfake or "nudify" tools inaccessible to consumers and children.
Refuse to host images or videos created by these tools.
Develop clear, easily accessible reporting features for responding to abuse and respond swiftly and effectively when victims report abusive content.
Implement proactive solutions for identifying falsified content, including auto-checking for algorithmically detectable watermarks.
Mandate tagging or identification of AI-generated content.
Recruit more women as researchers and builders of technology and work with women’s organizations in designing AI tech.
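One of the measures above, auto-checking for algorithmically detectable watermarks, can be illustrated with a toy example. Real systems (such as C2PA content credentials or statistical watermarks baked into image generators) are far more robust and survive compression and cropping; this sketch, with made-up pixel values, only shows the basic shape of embed-then-detect:

```python
# Toy least-significant-bit (LSB) watermark: the generator hides a known
# marker in the low bits of the first few pixel values, and a platform
# can automatically check for that marker on upload. Real AI watermarks
# are statistical and much harder to strip; this is only a sketch.

MARKER = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit signature

def embed_watermark(pixels: list[int]) -> list[int]:
    """Overwrite the least significant bit of the first pixels with MARKER."""
    out = pixels.copy()
    for i, bit in enumerate(MARKER):
        out[i] = (out[i] & ~1) | bit
    return out

def has_watermark(pixels: list[int]) -> bool:
    """Auto-check: do the low bits of the first pixels spell out MARKER?"""
    return [p & 1 for p in pixels[:len(MARKER)]] == MARKER

generated = embed_watermark([200, 13, 57, 88, 91, 240, 3, 76])
assert has_watermark(generated)                      # flagged as AI-generated
assert not has_watermark([2, 2, 2, 2, 2, 2, 2, 2])   # unmarked image passes
```

The design point is that detection is cheap and automatic: a platform can run the check on every upload, which is what makes mandatory watermarking of AI-generated content enforceable at scale.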
“There is massive reinforcement between the explosion of AI technology and the toxic extreme misogyny of the manosphere”, says Laura Bates. “AI tools allow the spread of manosphere content further, using algorithmic tweaking that prioritizes increasingly extreme content to maximize engagement.”
It's vital to start conversations earlier, say experts, because prevention is much more effective than deradicalization. And a big part of prevention is providing young people with digital literacy and teaching them source skepticism – who is saying what, and why, and how to verify the credibility of information.
Stopping innovation or preventing abuse?
Many fans of AI argue that putting rules like these into place stifles innovation. But does it? Or does it simply safeguard the most vulnerable in our society, and channel innovation toward being beneficial?
Unfettered capitalism brought us ten-year-old kids working in sweatshops for pennies a day. By creating guardrails in the form of laws, we have arguably built a better society. We can do the same with AI.

