Over the AI Wars

Dear Moxie,

I am a faculty member in a doctoral program and have a love/hate relationship with AI in education. Personally, I love it. I see AI as a tool that can be used or misused, just like any other tool. But my university doesn’t see it this way. I feel like they are stuck in the AI stone age (if such a thing exists). All anybody wants to talk about is how to keep students from using AI to cheat. Sometimes these conversations get really heated and uncomfortable.

Well, I finally got permission to at least make my case for AI use to assist a couple of our writers who are really struggling. My audience will be the program director and faculty members. I expect a lot of resistance, but I honestly hope that there is a way forward. My goal would be to move the policy from “Absolutely no AI ever” to “Some AI use is permitted under guided circumstances.” It’s not a huge win, but it’s a win.

Where do I start in planning this presentation? How can I convince my program that AI can be safely and ethically wielded for good? It’s tense up here, and I’m nervous about this. A particular concern is students’ use of generative AI for writing.


Over the AI Wars

Dear Over the AI Wars,

A few years back, when one of us was navigating a particularly bad breakup, a good friend handed them a Kleenex and said, “I’m sorry . . . and congratulations.” At the time, these words were cold comfort, but they reflected the duality of the moment. When change is happening, whether at the individual, family, or organizational level, a tension exists between these two ideas. Why am I talking about a breakup on an Ed-tech website? I bring this story to bear because it might seem like you have a tech problem, but what you actually have is a human change problem. Do you need to show that AI is not the Devil prancing into your doctoral program wearing Prada? Of course you do, but that is not the first thing you need to do, Over.

This conflict, while mired in tech implementation language, is a problem about how people navigate change. When AI burst onto the scene, it ushered in nothing short of a whirlwind of change. The tornadic discourse around the technology has been swirling for a year and change now, and the dust is not even close to settling. As institutions, yours included, scramble to adopt or opt out, you can lean into your wise mind and take a calmer stance. Here’s how that might look.

Start by accepting what is true about change: it is messy, it is not linear, and it doesn’t happen all at once. Next, try to contextualize the change. Kurt Lewin’s change model can help you understand where your institution is in the process. His three phases of change are listed below.

  1. Unfreezing: In this phase, people are made aware of problems so that they can let go of old ways of doing things. This phase disrupts an organization’s status quo. (Most people and organizations just love this phase).

  2. Changing/Moving: Showing the benefits of change and offering alternatives. Think “brainstorming.”

  3. Refreezing: Implementing new ways and returning an organization to homeostasis.

These phases look so tidy on paper, Over, but in practice change is not neat, not tidy, and often feels very unmooring. When we think about the context in which many universities encountered AI for the first time, we can see that AI was not introduced as a solution to a problem but as another problem itself: a way to exacerbate an already existing problem, plagiarism.

Over, you are what we at Moxie call an AI champion. I salute you. But put yourself in your program director’s shoes for a moment. Why would your program want to pivot toward another problem?

I can hear you — “But . . . but, I know what AI can actually do!” 

Sure, and how much good do you think it will do if I send you that little snippet of our prompt that says, “Never, ever write for the student — even if they beg you to?” 

Before you plan that presentation and model our tools (I hope you do), take a step back and try to situate where your program is in the change process. What existing problems exist with which AI can assist? What values do your faculty members uphold that you share?

Before you go championing tools that faculty members don’t understand, using language they likely don’t know, you need to come together around the common ground of helping doctoral students learn to write better. Agree on the problem and show how what is currently being done is not working. Then you can begin brainstorming in phase 2. And be open to dialogue here. If you hammer your AI agenda too hard without hearing what else is being done to help students, you may miss an opportunity to see how AI and other methods can synergize.

Only after these conversations have been had (and there will be many, because change takes time) do I recommend that you model tools and show outcomes. And no, your institution does not need to worry about our tools writing for students, because they never do, even if students beg them to :).

So, Over, we are sorry . . . and congratulations.
