Microsoft is trying to show its commitment to AI safety by amending a lawsuit filed last year to unmask the four developers it alleges evaded guardrails on its AI tools to generate celebrity deepfakes. The company filed the lawsuit back in December, and a court order allowing Microsoft to seize a website associated with the operation helped it identify the individuals.
The four developers are reportedly part of a global cybercrime network called Storm-2139: Arian Yadegarnia aka “Fiz” of Iran; Alan Krysiak aka “Drago” of the United Kingdom; Ricky Yuen aka “cg-dot” of Hong Kong; and Phát Phùng Tấn aka “Asakuri” of Vietnam.
Microsoft says it has identified others involved in the scheme but is withholding their names for now to avoid interfering with an ongoing investigation. The group, according to Microsoft, compromised accounts with access to its generative AI tools and managed to “jailbreak” them to create whatever types of images they desired. The group then sold access to others, who used it to create deepfake nudes of celebrities, among other abuses.
After filing the lawsuit and seizing the group’s website, Microsoft said the defendants went into panic mode. “The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another,” it said on its blog.
Celebrities, including Taylor Swift, have been frequent targets of deepfake pornography, which takes a real person’s face and convincingly superimposes it on a nude body. Back in January 2024, Microsoft had to update its text-to-image models after fake images of Swift appeared across the web. Generative AI makes it incredibly easy to create these images with little technical ability, which has already fueled a wave of deepfake scandals at high schools across the U.S. Recent accounts from victims illustrate that creating the images is not a victimless act just because it happens digitally; the harm is real, leaving targets feeling anxious, afraid, and violated by the knowledge that someone out there is fixated on them enough to do it.
There has been an ongoing debate in the AI community over safety: whether the concerns are real, or whether they mostly serve major players like OpenAI, who gain influence and sell their products by over-hyping the true power of generative artificial intelligence. One camp argues that keeping AI models closed-source can help prevent the worst abuses by limiting users’ ability to turn off safety controls; the open-source camp believes making models free to modify and improve upon is necessary to accelerate the sector, and that abuse can be addressed without hindering innovation. Either way, it all feels like something of a distraction from the more immediate threat: AI has been filling the web with inaccurate information and slop content.
While a lot of fears about AI feel overblown and hypothetical, and generative AI seems nowhere near capable of taking on agency of its own, its misuse to create deepfakes is real. Legal action is one way those abuses can be addressed today. There has already been a slew of arrests across the U.S. of individuals who used AI to generate deepfakes of minors, and the NO FAKES Act introduced in Congress last year would make it a crime to generate images based on someone’s likeness. The United Kingdom already penalizes the distribution of deepfake porn, and soon it will also be a crime to even produce it. Australia recently criminalized the creation and sharing of non-consensual deepfakes.