Grammarly AI "Expert Review" rapped for unauthorised use of expert identities
Occurred: March 2026
Page published: March 2026
Grammarly faced ethical condemnation for its "Expert Review" feature, which uses the names and professional reputations of real journalists, authors and academics to frame AI-generated writing advice without their consent. The incident raised questions about identity rights, digital impersonation, and the company's governance and ethics.
Grammarly ignited a firestorm when its AI-powered "Expert Review" feature (User Guide) drew fierce condemnation from the experts whose identities and life's work were used without consent to commercially power the tool's writing suggestions.
Grammarly owner Superhuman markets Expert Review as a way to "meet the expectations of your discipline and your project by drawing on insights from subject matter experts and trusted publications." Writers open a document, pick a scholar from the list, and receive AI-generated feedback modelled on that person's research and style. The system can rewrite sections of a paper based on its suggestions.
Wired reported that the feedback is written as though it comes directly from well-known authors, whether those authors are living or not. Historian Verena Krebs discovered that Expert Review users can pick historian David Abulafia as one of the available "experts" to check their paper. Abulafia died in January 2026.
The Verge found that the tool also puts words in the mouths of real journalists, including Nilay Patel, Casey Newton, and writers at The New York Times, The Atlantic, Bloomberg, Gizmodo, and other publications.
None of the people named were involved in building the feature, none were contacted beforehand, and none gave the company permission to use their identities this way. Nor did Superhuman ensure the AI's "advice" aligned with the experts' real-world opinions.
To make matters worse, the system provided broken links to "source" material and used outdated job titles for the people it was mimicking.
The feature appears designed to increase user trust and differentiation by attaching AI-generated feedback to recognisable expert personas rather than anonymous machine output.
Grammarly’s parent company, Superhuman, says it relied on the fact that these experts’ work is “publicly available” to justify using their names and styles, rather than obtaining explicit licences or consent.
The U.S. lacks clear federal rules on creating synthetic versions of real people, leaving a patchwork of state laws, a regulatory gap that companies like Grammarly appear to have exploited.
For "experts", the incident represents the theft of their identities and their commodification. It also associates them with a commercial product or specific advice they did not give, thereby damaging their reputations. The issue is particularly acute in the case of recently deceased individuals, whose estates have had no opportunity to manage or consent to the posthumous use of their work and legacy.
For users, people receiving AI-generated feedback believe they are benefiting from expert insight, but historian C.E. Aubin told Wired: "These are not expert reviews, because there are no 'experts' involved in producing them." The feature is fundamentally misleading about the nature and quality of the advice provided.
For society, it raises alarms about the "dead web" and the erosion of trust; if AI can convincingly masquerade as a specific trusted professional, the value of human critical thinking and creativity is diminished.
For policymakers, the controversy highlights gaps in "Right of Publicity" and personality rights laws, which are currently struggling to keep pace with generative AI's ability to create synthetic versions of real people for commercial gain.
Expert Review
Developer: Superhuman
Country: Multiple
Sector: Media/entertainment/sports/arts
Purpose: Provide writing feedback
Technology: Generative AI
Issue: Appropriation; Authenticity/integrity; Consent; Transparency
August 2025. Grammarly/Superhuman quietly launches the "Expert Review" feature.
January 24, 2026. Professor David Abulafia passes away; his identity continues to be used by the tool.
March 4–6, 2026. Wired and The Verge publish exposés revealing the unauthorised use of dozens of journalists' and authors' identities.
March 7, 2026. Public outcry intensifies on social media and professional forums; experts like Casey Newton publicly denounce the "AI clones."
March 9, 2026. Grammarly executives defend the feature, stating it does not claim "direct participation" despite using the experts' full names and photos.
AIAAIC Repository ID: AIAAIC2241