Can AI fool itself? That question lies at the heart of an investigation into President Vanya Quiñones’s departure announcement, emailed to the student body at Cal State Monterey Bay (CSUMB) on Wednesday.
Quiñones’s email was flagged by nine of the 13 AI checkers used – including ZeroGPT, Winston AI and GPTZero, regarded as among the most accurate in a study by American software company Zapier. Those nine checkers reported a 61% to 100% probability of AI-generated content. The four “negative” results (those with a detection score below 50%) clustered more tightly, ranging from 0% to 25%. All 13 checkers returned mostly negative results for various emails from the University Communications department, and even for some of Quiñones’s previous announcements.
Similarly, the above photo of Quiñones was flagged by multiple checkers as likely edited using AI — 98%, 83% and 55% from ZeroGPT, Quillbot and Sight Engine, respectively.
“The office of the president has communications professionals who draft and edit messages for internal and external audiences, including campuswide messages, speeches and scripts, among other communications materials, with guidance and input from the leadership team,” said CSUMB spokesperson Walter Ryce in a written statement. Ryce’s statement did not confirm or deny whether AI is used in that process.
The academic integrity policy at CSUMB defines plagiarism as “presenting someone else’s work or ideas or Artificial-Intelligence-generated content as your own without full acknowledgement,” with written words and statements at the top of its list of examples. While it’s unclear whether or how this is enforced among faculty, the policy “is an essential component to the CSUMB learning community and shall be upheld by all members of the university community.”
However, for better or worse, the reality is more complex.
“AI detectors are marketed like lie detectors: paste text in, get a verdict out. But in 2026, that mental model breaks fast,” said Anangsha Alammyan, a freelance journalist who tested over 30 checkers as part of her own investigation. “Most writing isn’t ‘human’ or ‘AI.’ It’s a mix of human intent, AI drafts, human edits, AI polish, and human rewrites. AI detectors struggle most where work actually happens: short, structured, edited content.”
In other words, the accuracy of these detectors breaks down when edits are made or AI is used to supplement writing. So how can students know whether the president of their school is leaning on ChatGPT or Grammarly to make announcements as significant as her upcoming departure? The short answer: barring an official statement from Quiñones herself, we can’t. (When we tried to ask, her office referred us to the communications department, which provided the statement from Ryce above.)
The Turing test was devised in 1950 by British mathematician Alan Turing, widely regarded as a founding father of computer science, to probe whether a machine is capable of thought. The test relies on an “imitation game” in which a remote human interrogator separately questions a computer and another human. The interrogator must identify which is which; if they fail, the computer has passed the test.
An email and a live conversation aren’t the same. Yet the fact remains that humans can no longer reliably distinguish the stilted writing of an academic executive from that of a machine – and neither can AI itself, at least not with consistent accuracy. This could have significant implications for academic integrity across the CSU system, which signed a $17 million deal with OpenAI last year to expand access for students and faculty alike.
