Microsoft is moving to demonstrate its commitment to the safe use of artificial intelligence (AI). The company recently amended a lawsuit it filed last year in order to name four developers who allegedly bypassed the guardrails on its AI tools to create fake adult content featuring celebrities. The original complaint was filed in December, and Microsoft obtained a court order allowing it to take control of a website associated with the accused developers, which helped the company uncover their identities.
The four individuals are reportedly linked to a global cybercrime group known as Storm-2139. They include Arian Yadegarnia, nicknamed “Fiz,” from Iran; Alan Krysiak, known as “Drago,” from the United Kingdom; Ricky Yuen, or “cg-dot,” from Hong Kong; and Phát Phùng Tấn, who goes by “Asakuri,” from Vietnam. Microsoft claims there are other individuals involved in this scheme but has chosen not to name them yet to avoid interfering with ongoing investigations.
According to Microsoft, the group hijacked accounts that had access to its generative AI tools and found ways to circumvent the built-in safety features. They used that access to create and distribute images, including fake explicit images of celebrities, which they then sold to others. After Microsoft's legal action and the seizure of the website, members of the group reportedly panicked, with some turning against one another.
Deepfakes, particularly fake adult content, have become a significant problem, with celebrities such as Taylor Swift frequently targeted. The technique involves convincingly mapping a real person's face onto a nude body. In January 2024, Microsoft had to update its image-generation models after fake explicit images of Swift circulated online. Because generative AI is so accessible, even people with minimal technical skill can produce such images, which has led to numerous incidents, including scandals in high schools across the United States. Victims of deepfakes have shared their experiences, making clear that these incidents are far from harmless; having one's likeness misused can cause lasting anxiety and fear.
Debates continue within the AI community over safety measures, with some arguing that the concerns are exaggerated and mainly serve the interests of major AI companies such as OpenAI. On one side, critics of open-source AI argue that keeping models closed helps prevent misuse; supporters of open source counter that freely modifiable models spur progress and that abuse can still be managed. Many others view these debates as distractions from a more pressing issue: the spread of misleading information and low-quality content across the internet.
While some concerns about AI may seem exaggerated or hypothetical, the harm caused by AI-generated deepfakes is real, and legal action is one available response. There have already been numerous arrests in the U.S. of people who generated deepfakes involving minors. The NO FAKES Act, proposed in Congress last year, aims to make it illegal to create images based on a person's likeness without consent. In the United Kingdom, distributing deepfake pornography is already punishable by law, and producing such material is set to be criminalized as well. Australia has likewise enacted laws against creating and sharing non-consensual deepfakes.
Overall, as AI technology continues to evolve, its potential for misuse must be managed carefully, with the protection and safety of individuals kept front and center.