The Personality Rights Challenge in India’s AI Era
Deepfakes, identity misuse and legal gaps drive celebrities and courts to forge new defences
The rapid arrival of deepfake technology and AI-driven identity manipulation has brought India’s fragile framework for personality rights into sharp focus.
In recent weeks, Bollywood personalities such as Aishwarya Rai Bachchan, Abhishek Bachchan and Karan Johar have sought judicial protection against the unauthorized use of their names, likenesses, voices and personas.
Their cases highlight a broader, systemic challenge: how to guard individual identity in a digital world where imitation is easy and regulation is patchy.
Unlike jurisdictions such as the United States, where “publicity rights” are often codified, India has no standalone statute protecting personality or publicity rights.
Instead, courts have constructed a legal patchwork based on constitutional privacy, defamation, intellectual property and tort law.
As public figures push for stronger safeguards, the uncertainty and inconsistency of India’s legal regime are becoming increasingly exposed.
At the heart of the challenge lies the unique nature of AI-generated impersonation.
Deepfake content can replicate a person’s face, voice, expressions or mannerisms and deploy them in fabricated material produced without the person’s consent or designed to misrepresent their intent.
Traditional doctrines of copyright or trademark struggle to address these harms, especially when no underlying original work or commercial mark is involved.
Legal scholars argue that a broader “right of personality” must evolve to encompass non-commercial misuse, reputational damage, and identity cloning.
Current court interventions are notable but reactive.
The Delhi High Court recently granted interim injunctive relief to block websites misusing the identities of Aishwarya Rai Bachchan and Abhishek Bachchan, ordering platforms to remove unauthorized content and restraining further exploitation of their personas.
These orders reflect a judicial willingness to protect dignity and reputation, but they also underscore the limitations of pursuing enforcement after harm has occurred.
Jurists and academics warn that piecemeal litigation cannot keep pace with technology.
India’s pending regulatory efforts, including proposed amendments to digital and AI laws targeting manipulated content, must set clear legislative standards.
These should cover who bears liability (creators, intermediaries, platforms), how affected persons can seek compensation, and how courts can assess irreparable harm in the digital medium.
Beyond legal reform, technical safeguards are also essential.
Industry and government must invest in detection tools, watermarking, and provenance verification to flag manipulated media.
Public awareness campaigns should teach citizens to recognize synthetic content.
Platforms need stronger notice-and-takedown systems tied to identity misuse claims.
Only a combination of legal, technological, and institutional tools can address the novel threats to identity in the AI age.
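To make the idea of provenance verification concrete, the sketch below is a minimal, illustrative example only: it assumes a platform holds a signing key and uses a simplified HMAC-signed manifest, rather than an embedded, certificate-backed standard such as C2PA that real deployments would rely on. It shows how a platform might record a content hash when media is published and later check whether a file still matches its signed provenance record.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Hypothetical signing key held by the publishing platform (illustrative only).
SIGNING_KEY = b"platform-provenance-key"


def make_manifest(media_path: str, creator: str) -> dict:
    """Create a simple provenance record: a content hash plus an HMAC signature."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    record = {"creator": creator, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_manifest(media_path: str, record: dict) -> bool:
    """Check that the file still matches its signed record, i.e. has not been altered."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    current_hash = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return hmac.compare_digest(signature, expected) and current_hash == claimed["sha256"]


# Example usage with a hypothetical file name:
# record = make_manifest("clip.mp4", creator="Studio A")
# print(verify_manifest("clip.mp4", record))  # True only if the file is unchanged
```

However the details are engineered, the underlying design choice is the same: bind a piece of media to a verifiable claim about its origin, so that altered or wholly synthetic copies fail the check and can be flagged for takedown.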
Celebrities’ battles in court may draw headlines, but the broader stakes encompass all individuals.
In a future where one’s face, voice or persona can be replicated without consent, the very notions of personal autonomy, reputation and dignity are on trial.