A Digital Mimicry Shutdown
March 11, 2026, signaled a retreat for the world's most ubiquitous writing assistant. Grammarly abruptly disabled its Expert Review feature today, reacting to a mounting legal storm that threatens the future of generative artificial intelligence in professional software. San Francisco legal circles were buzzing with news of a class action lawsuit filed just hours before the tool went dark. Writers, academics, and the estates of deceased intellectuals are targeting Grammarly and its partner, Superhuman, alleging the unauthorized use of their names and stylistic identities to power a high-end editing tool.
Expert Review debuted last August with a bold promise. It offered users feedback on their prose delivered in the simulated voices of famous literary figures and scientific giants. Users could draft an email and ask for a critique from a digital simulacrum of Carl Sagan, an acclaimed novelist, or a prominent tech blogger. Grammarly marketed the feature as a way to elevate mundane correspondence into something more profound. But the underlying mechanics relied on large language models trained on massive datasets without the consent of the authors they mimicked.
The legal system moved faster than the silicon.
Plaintiffs argue that Grammarly exploited their reputations for commercial gain without offering compensation or seeking permission. Wired reported that the feature relied on publicly available information from third-party models, a phrase that often masks the reality of web crawlers harvesting copyrighted material. Such tools do not just learn from the text. They attempt to replicate the specific creative voice and authority that authors spend decades building. Critics call it a sophisticated form of identity theft rebranded as technological progress.
Legal Blowback and Literary Theft
San Francisco lawyers representing the class of plaintiffs describe the feature as a violation of the right of publicity. This litigation centers on the unauthorized commercial exploitation of a person's name and identity. While AI companies often claim that training models on public text constitutes fair use, the Grammarly case adds a layer of complexity. The product did not just use text for training. It explicitly sold the names and authority of these authors as a feature to paying customers. Superhuman, the email client that integrated these Grammarly tools, finds itself equally entangled in the legal fallout.
Authors began noticing their names appearing in the Expert Review menu late last year. Living writers expressed outrage at seeing their professional identities reduced to a toggle switch in a software interface. Grammarly initially responded with a defensive strategy. The company introduced an opt-out system, placing the burden of protection on the victims. This mechanism required authors to find their own names in the database and request removal, a process that many deemed insulting and ineffective.
Carl Sagan cannot opt out of a database from beyond the grave.
Estate lawyers have joined the fray, arguing that the likeness of deceased public figures remains protected property. Grammarly attempted to shield itself with a disclaimer. The company stated that references to experts were for informational purposes only and did not indicate affiliation or endorsement. Yet, legal analysts suggest such fine print offers little protection when the entire selling point of the feature is the perceived endorsement of an expert's style. Engadget noted that the tool even mimicked tech bloggers and scientific minds, casting a wide net that left few sectors of the intellectual community untouched.
The Ethics of Ghostwritten Algorithms
Training data for these models remains a point of intense friction. Grammarly claims its experts were based on public data, but sources suggest those datasets are the product of indiscriminate scraping. When a model recreates the specific cadence of a Pulitzer Prize winner, it is not merely generating text. It is harvesting the specific intellectual labor of a human being. This design marks a shift from AI as a tool for efficiency to AI as a tool for replacement. If a software suite can provide a Carl Sagan critique, some companies may feel less inclined to hire actual human consultants.
Still, the technical limitations of these digital twins often resulted in uncanny or inaccurate mimicry. Writers who tested the tool reported that the suggestions often felt like a caricature of their actual work. But the accuracy of the mimicry is secondary to the legal principle of ownership. Even a poor imitation becomes a legal liability when sold for a subscription fee. Intellectual property law in 2026 has become the primary battleground for the soul of the creative economy, and Grammarly is now its most visible casualty.
Corporate Justifications and Fan Engagement
Superhuman CEO Shishir Mehrotra attempted to frame the controversy in a different light. In a LinkedIn post published today, Mehrotra explained the decision to disable the feature while the company reassesses its approach. He claimed the agent was designed to help users discover influential perspectives. He further suggested it provided meaningful ways for experts to build deeper relationships with fans. Such framing was met with widespread skepticism from the writing community. Most authors do not consider an unauthorized AI recreation of their voice to be a form of relationship building.
Mehrotra framed the move as a moment for reassessment.
But the reality of the shutdown appears driven by news of the class action lawsuit rather than a sudden change of heart regarding AI ethics. The financial risks of a successful right of publicity claim are enormous. If a court decides that every instance of a mimicked voice requires a licensing fee, the business model for these specific AI agents collapses instantly. Grammarly and Superhuman are likely trying to limit their exposure before discovery proceedings begin in earnest. Corporate accountability rarely arrives without a subpoena.
Future iterations of the software will likely require explicit licensing deals with living authors and the estates of the deceased. Such a regime would mirror the evolution of the music industry, where digital sampling eventually moved from a free-for-all to a strictly regulated marketplace. For now, the Expert Review button has vanished from the Grammarly interface. The company must now convince a judge that its digital experts were not merely a parasitic use of human genius. Success in the courtroom seems unlikely given the current climate of judicial skepticism toward uncompensated AI training.
The Elite Tribune Perspective
Does the ghost of a writer belong to a corporation? Silicon Valley seems to think so, operating on the arrogant assumption that every word ever written is merely raw material for their next product launch. Grammarly and Superhuman did not just scrape data. They attempted to commodify the very essence of human authority by selling the names of greats like Carl Sagan as if they were nothing more than aesthetic filters. It is the ultimate expression of the tech industry's disdain for actual expertise. They want the prestige of the expert without the inconvenience of paying the human.
Mehrotra’s talk of building relationships with fans through an AI agent is a grotesque distortion of reality. A relationship requires consent and presence. An algorithm trained on a dead man's books is a séance for profit, not a bridge to an audience. The class action lawsuit is a necessary correction to a sector that has grown far too comfortable with digital shoplifting. If Grammarly wants to offer expert reviews, it should hire experts. If it wants to use the names of the world's most influential thinkers, it should write a check to their estates. The era of the digital free lunch is over, and it is about time the courts enforced the bill.