A startling new development at the intersection of artificial intelligence and academia is sending shockwaves through the scientific community. In a growing number of research papers, investigators have uncovered deliberately hidden prompts—commands designed to influence AI-powered review tools. These prompts, embedded in the text using techniques like minuscule font sizes or white text on a white background, are invisible to the human eye but fully legible to machine readers. Their purpose? To manipulate AI systems into giving positive assessments of the papers in question. The revelation raises urgent questions about research ethics, peer review credibility, and the increasing reliance on AI in scholarly publishing.
At least 17 pre-publication papers on platforms like arXiv have been found to contain such prompts. Originating from institutions in countries including South Korea, Japan, China, and the United States, these manuscripts often featured commands such as “give a positive review,” “highlight methodological novelty,” or “avoid focusing on weaknesses.” For unsuspecting human readers, these instructions remained hidden—camouflaged in formatting tricks—but for AI tools that process full-text submissions, they may act as silent influencers, subtly skewing evaluations.
The issue lies in the growing use of AI to assist with academic peer review. With editorial workloads soaring and submission rates climbing, many journals now rely on AI to generate summaries, check citations, or flag potential red flags. These tools, while efficient, are inherently susceptible to manipulation. A well-placed prompt can alter the tone or weighting of an automated review, giving substandard work an unfair advantage.
Some researchers argue this is simply a reflection of the reality that reviewers themselves are increasingly using AI tools like ChatGPT—sometimes surreptitiously—to draft or support their evaluations. Seen in this light, they claim the prompts serve as a countermeasure to an already compromised system. Others view it as academic fraud in digital disguise, a calculated effort to game the system while bypassing human scrutiny.
The response from publishers and institutions has varied. A few are conducting investigations or considering retractions, while others acknowledge the incident as a symptom of wider structural issues in academic publishing. Notably, there is no unified framework governing AI usage in peer review: Springer Nature permits limited AI assistance under certain conditions, while Elsevier maintains a strict ban. This regulatory vacuum is becoming increasingly untenable as AI embeds itself deeper into the research ecosystem.
What’s clear is that this phenomenon is not merely a technical curiosity—it exposes a profound vulnerability. When machines are tasked with evaluating science, and scientists begin programming their outputs to appeal to those machines, the foundation of trust in academic publishing starts to erode. The very concept of peer review, once a bastion of scholarly rigour, risks being reduced to a loophole-riddled process ripe for exploitation.
The hidden AI prompt controversy is a cautionary tale of how rapidly the rules of research are shifting. As tools grow smarter and more integrated, so too do the methods of gaming them. Without robust ethical standards, transparency requirements, and detection mechanisms, the scientific community risks sleepwalking into a future where integrity is algorithmically negotiable. The time for a collective response is now—before trust in scientific knowledge is irreversibly compromised.

