I'd like to 100% prohibit LLMs, but I refuse to police AI use in my academic writing & writing-intensive courses
So, here's what my course policy is, and how it's working.

A colleague at another institution recently asked me how I was handling AI in my courses. He knows I teach writing and writing-intensive courses, so it was a fair bet that I have opinions.
GenAI/LLM concerns
What he perhaps did not know is that this is the stickiest topic in the writing mentorship book I co-wrote with Stephen B. Heard (pre-order now!). While we navigated many philosophical, stylistic, and practical differences, LLMs are the molehill Steve and I could have easily built into a mountain.
Steve has written a fair bit about his (mostly positive) stance on AI/LLM use in writing, STEM training, etc. I have only written one thing until now—and it was a firm critique of ChatGPT’s incompetence. You’ll have to read our book to see how we resolved our differences!
But something we didn’t get into in the book is that I'm on the "avoid LLMs/AI" side of the fence. I have a lot of concerns. Two of them are (a) the theft of other people's intellectual property to train LLMs and (b) the way that theft teaches students to severely devalue the intrinsically human activities of creativity and communication. (And I plan to write more about that at a later date.)
Anti-AI policing
In the meantime, I realize that I can't effectively prohibit students from using LLMs in my classes without policing for said use. I refuse to do that. I didn't go into teaching to be a patroller or adversary to students.1 Moreover, there are loads of false positives in so-called plagiarism checkers, and the equivalents for LLM detection are equally suspect.2 Don’t get me started on using LLMs to provide writing/assignment feedback or correspond with students. 😤
Limited institutional guidance
At the same time, my institution doesn't have a universal policy on AI use in courses. However, Academic Affairs offers instructors four syllabus template options re AI:
AI Technology: We recommend that faculty include a section focused on permitted/unpermitted AI technology use in each of their syllabi, generally in the location of their Student Academic Dishonesty statement. Additionally, it is important that faculty clearly communicate their expectations of course collaboration policies (with other students) in this same area.
We offer the following language as draft material (adapted from University of Delaware) that instructors may want to consider.
Option 1: Use prohibited
Students are not permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course. Each student is expected to complete each assignment without substantive assistance from others, including automated tools.
Option 2: Use only with prior permission
Students are permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course if instructor permission is obtained in advance. Unless given permission to use those tools, each student is expected to complete each assignment without substantive assistance from others, including automated tools.
Option 3: Use only with acknowledgement
Students are permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course if that use is properly documented and credited. For example, text generated using ChatGPT-3 should include a citation such as: “Chat-GPT-3. (YYYY, Month DD of query). “Text of your query.” Generated using OpenAI. https://chat.openai.com/” Material generated using other tools should follow a similar citation convention.
Option 4: Use is freely permitted with no acknowledgement
Students are permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course; no special documentation or citation is required.
None of those felt adequate to me on their own.
My “use of AI in coursework” policy
So, here's my course policy, which I embed in the Academic Integrity section of my syllabi.3 I’m providing it as a link so this post doesn’t get horrendously long. The short story is that I require students to discuss with me in advance any LLM/AI use they’d like to do in the class, and get my permission for it. If they succeed, they’d also need to cite it.4
How’s this policy working?
No students have asked me for permission to use LLMs or any other AI.
I do not run students' work through any sort of "checker" app or program. See above re false positives and not policing students.
Even so, I could tell that several students in spring 2025 did use LLMs/AIs without securing permission. The "tells" included things like:
The super-saturated, "utopia"-style images generated by a lot of the current visual AI things.
Images that were way too perfectly aligned with the student's topic/content to be anything but a custom image5, and I know (and specify) that students don't spend money to complete course projects. So, if they didn't commission an illustrator to create the image, then it clearly came from AI.
Writing that analyzed itself within the text.
Writing that was quite circular or even repetitive, while still being pretty clearly written.
Writing with zero grammatical or spelling errors, but content errors or unclear "thinking." (Usually, a writer of any skill level is going to polish ideas before grammar/spelling. This is particularly true for developing/undergrad writers, who are often dashing out a single, first/rough draft right before a deadline.)6
Writing skill/voice/tone that changed abruptly partway into the semester.
For students who use this technology without securing permission, I am reducing points on their grades for those assignments. (Most of the time, I just do complete/incomplete grades, as I want them to experience the extensive writing in my courses as a skill building process, not an excessive number of "exams.")
I recognize that there are folks who are a lot more AI-permissive and even build it into their assignments, but this is where I land with it after loads of discussions, lots of reading, and 20+ years of teaching experience.
How about you?
Regardless of my stance, I think we should absolutely be talking about this as a key philosophical and applied aspect of being academics. I'm curious: what is your AI/LLM policy in your courses, and how did you settle on it?
This post was first published on my blog at commnatural.com. © 2025, B.G. Merkle, all rights reserved.
One of my favorite, recent-ish books on this theme is Radical Hope: A Teaching Manifesto by Kevin Gannon. It’s a short book very much worth your time.
For one thing, a lot of the LLM “detectors” claim that using an em dash—a long-standing, essential part of English-language writing, and one of my favorites—is a sure sign of LLM writing. That’s preposterous. It’s categorically wrong. And it unhelpfully reduces people’s understandings of the craft of writing to searching for specific types of punctuation. (That—reducing writing to line editing and punctuation policing—is a topic that Steve and I cover repeatedly and at length in our book!)
I just recently came across this model as a tool for coaching students in their discussions/disclosure of using an LLM. I haven’t assigned it yet (I’m on sabbatical, so not teaching for the next two terms), but I probably will work it in next time I teach.
I used to partially make my living as an illustrator, so I know a custom image use-case when I see one.
Recognizing, embracing, and working through the stages of writer development and the development of a single piece of writing are two of the key ideas/tools I provide (with Steve) in our forthcoming book! You can pre-order Teaching and Mentoring Writers in the Sciences from University of Chicago Press now!
Oh, and since you mention that I've written a fair bit about LLMs - for those interested, here's one example: https://scientistseessquirrel.wordpress.com/2023/06/20/how-to-use-chatgpt-in-scientific-writing/. I think I'd put it a bit differently than you did: rather than say I've taken a "(mostly positive) stance on AI/LLM use in writing", I'd say I've taken a "mostly positive stance about there being some appropriate and effective ways to use LLMs in writing". I am (as you know!) very much opposed to "Hey ChatGPT, write my paper for me?" - those who do that deserve every bit of the disdain they get :-)
Great post, and I really like the thoughtful place you've landed on this. I especially like the decision not to police. I wonder, though - it kind of sounds like you ARE policing, because of your point 4: "For students who use this technology without securing permission, I am reducing points on their grades for those assignments". Have you had pushback from students who appeal this penalty, perhaps claiming that they did not actually use an LLM?