
Grammarly Lawsuit Explodes as AI ‘Expert Review’ Faces Class Action Over Unauthorized Impersonation

2026/03/13 01:00


In a landmark legal challenge that strikes at the heart of AI ethics and digital identity, journalist Julia Angwin has filed a class action lawsuit against Grammarly’s parent company Superhuman, alleging the writing assistant platform turned her and hundreds of other experts into unauthorized ‘AI editors’ through its controversial ‘Expert Review’ feature. The lawsuit, filed in federal court, represents a significant escalation in the ongoing debate about AI companies’ use of personal identities without consent.

Grammarly Lawsuit Centers on Unauthorized AI Impersonation

Grammarly released its ‘Expert Review’ feature last week, promising premium users AI-generated feedback that simulated editorial critiques from notable figures including novelist Stephen King, scientist Carl Sagan, and tech journalist Kara Swisher. However, the company failed to secure permission from any of the hundreds of experts whose names and professional identities it used. That failure has triggered immediate legal consequences and widespread criticism across the journalism and technology communities.

The class action lawsuit specifically alleges violations of privacy and publicity rights under both state and federal law. According to court documents, Grammarly’s actions constitute unauthorized commercial use of personal identities for profit. The feature was exclusively available to users paying $144 annually, creating a direct commercial benefit from the unauthorized use of expert names and reputations.

Julia Angwin’s Career-Long Privacy Advocacy

Julia Angwin, the lead plaintiff in the case, brings particular credibility to the lawsuit given her extensive career investigating technology companies’ impacts on privacy. As a Pulitzer Prize-finalist journalist and former investigative reporter for ProPublica, Angwin has authored multiple books on digital surveillance and data privacy. Her statement regarding the lawsuit highlights the personal and professional violation she experienced.

“I have worked for decades honing my skills as a writer and editor,” Angwin stated. “I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.” This sentiment reflects broader concerns among creative professionals about AI systems appropriating their identities without compensation or consent.

AI Ethics Experts Also Targeted Without Consent

The scope of Grammarly’s unauthorized use extends beyond journalists to include prominent AI ethicists and researchers. Timnit Gebru, renowned for her work on algorithmic bias and AI ethics, was included in the ‘Expert Review’ feature without her knowledge or approval. This inclusion creates particular irony given Gebru’s extensive public criticism of unethical AI practices and her advocacy for responsible AI development.

Other affected individuals include Casey Newton, founder and editor of Platformer, who discovered his inclusion when testing the feature. Newton fed one of his own articles into the tool and received feedback from Grammarly’s approximation of Kara Swisher. The generic nature of the feedback raised questions about the feature’s fundamental value proposition.

Grammarly’s imitation of Swisher produced feedback so nonspecific that it failed to demonstrate any meaningful understanding of Swisher’s actual editorial style or expertise. The generated question—“Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?”—was criticized as generic and lacking the incisive quality characteristic of Swisher’s actual work.

Industry Reactions and Legal Precedents

The lawsuit emerges against a backdrop of increasing legal scrutiny of AI companies’ practices. Recent court decisions have begun establishing boundaries for AI training data usage and identity appropriation. The Grammarly case represents one of the first major challenges specifically focused on AI impersonation of living individuals for commercial purposes.

Legal experts note that right of publicity laws, which vary by state but generally protect individuals from unauthorized commercial use of their identity, may provide strong grounds for the plaintiffs. These laws have traditionally applied to celebrity endorsements but are increasingly being tested in digital contexts.

Key Figures in Grammarly ‘Expert Review’ Controversy

Individual    | Profession               | Status in Feature
--------------|--------------------------|------------------
Julia Angwin  | Investigative Journalist | Lead Plaintiff
Kara Swisher  | Tech Journalist          | Unauthorized Use
Timnit Gebru  | AI Ethicist              | Unauthorized Use
Casey Newton  | Platformer Editor        | Unauthorized Use
Stephen King  | Novelist                 | Unauthorized Use

Grammarly’s Response and Feature Removal

Following the lawsuit filing and mounting public criticism, Grammarly has disabled the ‘Expert Review’ feature. Superhuman CEO Shishir Mehrotra announced the removal via LinkedIn, offering an apology while continuing to defend the underlying concept. Mehrotra’s statement attempted to reframe the controversy while acknowledging the execution flaws.

“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” Mehrotra wrote. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”

This defense has been met with skepticism from affected individuals and industry observers. Critics argue that the fundamental problem is not the feature’s execution but the basic ethics of using personal identities without consent. The apology’s conditional nature—defending the idea while regretting the implementation—has done little to assuage those concerns.

Technical Implementation and Quality Concerns

Beyond the ethical and legal issues, technical analysis of the ‘Expert Review’ feature reveals significant quality concerns. The AI-generated feedback consistently failed to capture the distinctive voices or expertise of the individuals it purported to emulate. Instead, it produced generic writing advice that could have been generated without referencing specific experts.

This raises questions about why Grammarly chose to use real names rather than creating fictional expert personas or generic categories. Industry analysts suggest the company may have believed that name recognition would drive premium subscriptions, underestimating the legal and ethical implications of this approach.

The feature’s technical limitations become particularly apparent when comparing its output to actual editorial feedback from the referenced experts. Real editorial critiques typically demonstrate deep subject matter expertise, distinctive voice, and contextual understanding—qualities the AI system failed to replicate meaningfully.

Broader Implications for AI Industry Practices

The Grammarly lawsuit represents a potential turning point for AI ethics and regulation. As AI systems become increasingly capable of simulating human expertise and identity, legal frameworks struggle to keep pace with technological developments. This case may establish important precedents regarding:

  • Consent requirements for using personal identities in AI systems
  • Commercial boundaries for AI-generated impersonations
  • Compensation frameworks for individuals whose expertise trains AI models
  • Transparency standards for AI features that reference real people

Industry observers note that similar issues are emerging across multiple AI applications, from voice synthesis to digital avatars. The Grammarly case provides a concrete example of how these abstract ethical concerns manifest in real products affecting real people.

Historical Context of Technology and Identity Rights

The current controversy continues a long history of technological innovation outpacing legal and ethical frameworks. Similar debates emerged with photography in the 19th century, television advertising in the mid-20th century, and internet privacy in the early 21st century. Each technological leap required society to renegotiate boundaries around personal identity and commercial use.

What distinguishes the current AI era is the scale and sophistication of identity appropriation. Unlike previous technologies that might use a name or image, AI systems can simulate entire patterns of thought, communication style, and expertise. This creates fundamentally new challenges for existing legal frameworks designed for simpler forms of identity use.

Conclusion

The Grammarly lawsuit over its AI ‘Expert Review’ feature represents a critical test case for AI ethics and identity rights in the digital age. As the class action proceeds through the legal system, it will likely establish important precedents regarding consent, compensation, and commercial boundaries for AI systems that reference or simulate real individuals. The case highlights growing tensions between AI innovation and personal rights, with implications extending far beyond Grammarly to the entire technology industry. Ultimately, this legal challenge may force clearer standards for how AI companies can ethically incorporate human expertise and identity into their products.

FAQs

Q1: What exactly is Grammarly being sued for?
Grammarly faces a class action lawsuit for using hundreds of experts’ names in its AI ‘Expert Review’ feature without obtaining their consent, allegedly violating privacy and publicity rights by commercially exploiting their identities.

Q2: Who is leading the lawsuit against Grammarly?
Investigative journalist Julia Angwin is the lead plaintiff, filing on behalf of herself and other affected individuals whose names were used without permission in Grammarly’s premium AI feature.

Q3: Has Grammarly responded to the lawsuit?
Yes, Grammarly has disabled the ‘Expert Review’ feature and CEO Shishir Mehrotra has issued an apology, though he continued to defend the underlying concept of the feature while acknowledging implementation failures.

Q4: What legal principles does this case involve?
The case centers on right of publicity laws, which protect individuals from unauthorized commercial use of their identity, and privacy rights that prevent commercial exploitation of personal attributes without consent.

Q5: How might this lawsuit affect other AI companies?
The outcome could establish important precedents for consent requirements and commercial boundaries when AI systems reference or simulate real people, potentially affecting numerous AI applications beyond writing assistants.

Q6: What was the quality of Grammarly’s AI-generated expert feedback?
According to tests by affected journalists, the feedback was generic and failed to capture the distinctive expertise or editorial style of the referenced individuals, raising questions about the feature’s fundamental value.

This post Grammarly Lawsuit Explodes as AI ‘Expert Review’ Faces Class Action Over Unauthorized Impersonation first appeared on BitcoinWorld.
