The release of Sora 2, a generative AI video tool from OpenAI, ignited intense industry scrutiny after it emerged that the system had generated videos using public figures’ likenesses and voices without consent. Among those affected was actor Bryan Cranston, of Breaking Bad fame, who formally raised the issue through his union and representation, triggering a swift and public response from OpenAI and major talent agencies.
The Controversy Explained
At its core, the controversy highlights a fundamental tension in the digital-media landscape: the power of AI to create novel content, and the danger that such content can replicate real people without their permission.
During its initial soft launch, Sora 2 generated material that depicted Cranston (and reportedly other public figures) in synthetic videos. Despite OpenAI’s stated policy requiring “opt-in” consent for the use of a person’s voice or likeness, these AI-generated outputs slipped through the guardrails.
Recognizing the severity of the backlash, both public and legal, OpenAI moved to reinforce its protections. In a joint statement with SAG-AFTRA and major talent agencies, the company acknowledged the issue, apologized for the unintentional generations, and committed to stronger guardrails against replicating the voice or likeness of anyone who has not opted in.
Key Takeaways
OpenAI’s response highlights several key developments. First, the company’s renewed commitment to an opt-in protocol reaffirms that using any individual’s voice or likeness requires express consent, a principle aligned with existing legal protections such as the right of publicity, as well as guidelines espoused by SAG-AFTRA, the largest union representing actors.
Second, the move signals growing industry momentum behind regulatory solutions, including support for proposed legislation such as the NO FAKES Act, which would create liability for making or distributing unauthorized AI-generated replicas of an individual’s voice or likeness.
Third, the case serves as a cautionary example for all stakeholders (AI developers, content platforms, artists, estates, and rights holders), underscoring the need for robust consent mechanisms, clear licensing frameworks, and proactive rights management before releasing or monetizing synthetic media.
Legal Implications
For legal practitioners and rights holders, the Sora 2 incident raises several urgent questions. How will consent and licensing for AI-generated likenesses be documented? What contractual language is needed to protect creators against misuse? How should estates, agencies, and platforms coordinate to ensure ongoing compliance as AI tools evolve? And what liability may arise if the guardrails fail again?
As AI-generated media becomes more accessible and powerful, navigating these legal questions is no longer optional; it’s essential.
Conclusion
If you represent content creators, public figures, estates, or rights-holding entities, or if you work at a company deploying AI video or generative-media tools, now is the time to evaluate your contracts, licensing frameworks, and consent protocols.
Contact Romano Law today to ensure your use of AI-generated content respects image, voice, and publicity rights, and that you’re protected from downstream liability as this technology rapidly evolves.
Contributions to this blog by Kennedy McKinney.