Can you Spot the Bot?

Artificial intelligence, or more commonly GenAI, is everywhere these days, and it's increasingly being used to generate text. From marketing copy to social media posts to even news articles, AI is churning out content at an alarming rate. But how can you tell if what you're reading was written by a human or a machine? Let's put ourselves to the test and find out whether you can spot the GenAI writers.
 
Review the assignment and three submissions below. Which submissions are AI-generated? What makes you think so?
 Please note that GenAI was used in the creation of these submissions, but we won't say which ones just yet!

Personal Insights and Critical Reflections on AI in Education

Artificial intelligence (AI) is transforming education, offering both opportunities and challenges. In this essay, reflect on your personal experiences, perspectives, and knowledge of AI to explore its potential impact on learning. Your essay should address the following:

  1. Personal Experience: Describe a specific instance where you or someone you know encountered AI in education (e.g., using tools like ChatGPT, Grammarly, or learning platforms). How did this interaction influence the learning process?

  2. Critical Analysis: Evaluate both the benefits and limitations of AI in this context. What ethical challenges or risks did this instance reveal? Provide examples to support your analysis.

  3. Original Perspective: Propose a thoughtful and original approach to balancing the use of AI with traditional learning methods. Explain how your suggestion could address challenges while enhancing student learning outcomes.

  4. Transparent Reflection: If you used AI tools in any part of the writing process for this assignment (e.g., brainstorming, editing), describe how you used them and reflect on their impact. Be honest and specific in your response.

Requirements:

  • Length: 500–700 words.
  • Structure: Include an introduction, body paragraphs addressing the prompts, and a conclusion.
  • Formatting: Times New Roman, 12 pt font, double-spaced, with 1-inch margins.
  • Citations: If referencing outside sources, use proper APA formatting.

The night before my final presentation, I sat staring at a blank PowerPoint slide. Panic gripped me. I had procrastinated, of course, and now every moment of indecision felt like failure. In desperation, I opened ChatGPT and typed, “Help me design a slide deck for a marketing presentation on ethical AI use.”

Within seconds, ideas populated my screen—bullet points on the dangers of over-relying on AI, a case study on bias in AI algorithms, and even catchy titles like “Human First: AI as a Tool, Not a Crutch.” Relief washed over me. For the first time that evening, I felt like I could breathe.

But as I read through the suggestions, I couldn’t shake a nagging feeling. None of it felt… me. The bullet points were technically correct, but cold and impersonal. I missed the messy notes I usually scribbled, the moments of inspiration that came when I let myself wrestle with ideas.

In the end, I used some of ChatGPT’s suggestions as a springboard but reshaped the content to include my personality. I made the presentation more conversational, weaving in jokes about how AI probably hates procrastinators like me. It was a success—my professor even commented on how genuine and engaging it felt.

That night, I realized that while AI is a powerful ally, it can’t replace the authenticity that makes us human. The process taught me to see AI as a tool to refine my work, not define it.


Last semester, my friend Sarah decided to use ChatGPT to help her draft a research paper on sustainability. She was juggling a part-time job, three classes, and a volunteer position, and time wasn’t on her side. She entered her topic—“The Role of Green Energy in Urban Planning”—and received a well-structured essay outline almost instantly.

Excited, Sarah took the outline and fleshed it out with her ideas. She researched further, added examples from her city, and shared insights she had learned during a summer internship with an environmental agency. For the first time, she felt ahead of schedule instead of scrambling at the last minute.

But when she submitted the paper, her professor asked for a meeting. He’d flagged the essay because the introduction sounded oddly polished, while the later sections carried a more natural tone. Sarah was upfront—she explained how she’d used AI for the outline but had written the rest herself. The professor appreciated her honesty but warned her about over-reliance on tools that could dilute her voice.

That experience shaped Sarah’s approach to AI in academics. She still uses tools like ChatGPT but only for brainstorming and organization. As she puts it, “AI can point me in the right direction, but the journey is all mine.”


Imagine a classroom where every student has their own digital AI assistant—a kind of academic sidekick. This isn’t your typical ChatGPT session; instead, the AI evolves alongside the student, learning their strengths, weaknesses, and learning style over time.

Picture this: A history class assignment asks students to write a short essay on the American Revolution. The AI offers suggestions based on primary sources, like letters from soldiers or excerpts from the Declaration of Independence. But here’s the twist: the AI also throws in unexpected challenges. “What if the Declaration had been written in today’s political climate? How would the language change?”

To ensure originality, students must document every AI interaction in a “learning log.” This log doesn’t just track what the AI provided—it requires students to reflect on why they accepted or rejected each suggestion.

The final product isn’t just a polished essay; it’s a story of collaboration between human creativity and machine efficiency. This hybrid approach ensures students stay engaged while learning to harness AI ethically.

By making AI a co-creator instead of a crutch, we can balance its strengths with the irreplaceable spark of human imagination.

Did you spot the Bot?

Now that you have read through all three, which ones do you think are AI-generated? Maybe it's none of them, or maybe it's all of them!

Here are the answers! The bold text highlights the reasons why it is a bot.

It’s a Bot!

Here’s what might give it away:

  • The description of ChatGPT generating “bullet points” and “catchy titles” sounds overly structured and generic.
  • The phrase “the bullet points were technically correct, but cold and impersonal” suggests template-like AI output.
  • The statement “my professor even commented on how genuine and engaging it felt” feels fabricated and overly polished.

Here’s the submission again with areas to review in bold.

The night before my final presentation, I sat staring at a blank PowerPoint slide. Panic gripped me. I had procrastinated, of course, and now every moment of indecision felt like failure. In desperation, I opened ChatGPT and typed, “Help me design a slide deck for a marketing presentation on ethical AI use.”

Within seconds, ideas populated my screen—bullet points on the dangers of over-relying on AI, a case study on bias in AI algorithms, and even catchy titles like “Human First: AI as a Tool, Not a Crutch.” Relief washed over me. For the first time that evening, I felt like I could breathe.

But as I read through the suggestions, I couldn’t shake a nagging feeling. None of it felt… me. The bullet points were technically correct, but cold and impersonal. I missed the messy notes I usually scribbled, the moments of inspiration that came when I let myself wrestle with ideas.

In the end, I used some of ChatGPT’s suggestions as a springboard but reshaped the content to include my personality. I made the presentation more conversational, weaving in jokes about how AI probably hates procrastinators like me. It was a success—my professor even commented on how genuine and engaging it felt.

That night, I realized that while AI is a powerful ally, it can’t replace the authenticity that makes us human. The process taught me to see AI as a tool to refine my work, not define it.


It’s a Bot!

Here’s what might give it away:

  • “Received a well-structured essay outline almost instantly” reflects AI’s tendency to create systematic but generic structures.
  • The professor’s feedback about “oddly polished” sections aligns with AI’s overly perfect grammar and lack of nuance.
  • The conclusion, “AI can point me in the right direction, but the journey is all mine,” feels overly polished and formal.

Here’s the submission again with areas to review in bold.

Last semester, my friend Sarah decided to use ChatGPT to help her draft a research paper on sustainability. She was juggling a part-time job, three classes, and a volunteer position, and time wasn’t on her side. She entered her topic—“The Role of Green Energy in Urban Planning”—and received a well-structured essay outline almost instantly.

Excited, Sarah took the outline and fleshed it out with her ideas. She researched further, added examples from her city, and shared insights she had learned during a summer internship with an environmental agency. For the first time, she felt ahead of schedule instead of scrambling at the last minute.

But when she submitted the paper, her professor asked for a meeting. He’d flagged the essay because the introduction sounded oddly polished, while the later sections carried a more natural tone. Sarah was upfront—she explained how she’d used AI for the outline but had written the rest herself. The professor appreciated her honesty but warned her about over-reliance on tools that could dilute her voice.

That experience shaped Sarah’s approach to AI in academics. She still uses tools like ChatGPT but only for brainstorming and organization. As she puts it, “AI can point me in the right direction, but the journey is all mine.”

It’s a Bot!

Here’s what might give it away:

  • The description of an evolving AI assistant “learning their strengths, weaknesses, and learning style” feels speculative and overly idealized.
  • The phrase “throws in unexpected challenges” sounds overly formulaic for AI.
  • The suggestion to reflect on AI interactions in a “learning log” is systematic and template-like.
  • The final paragraph is polished and neutral, with phrases like “balance its strengths with the irreplaceable spark of human imagination” sounding mechanical and lacking genuine emotion.

Here’s the submission again with areas to review in bold.

Imagine a classroom where every student has their own digital AI assistant—a kind of academic sidekick. This isn’t your typical ChatGPT session; instead, the AI evolves alongside the student, learning their strengths, weaknesses, and learning style over time.

Picture this: A history class assignment asks students to write a short essay on the American Revolution. The AI offers suggestions based on primary sources, like letters from soldiers or excerpts from the Declaration of Independence. But here’s the twist: the AI also throws in unexpected challenges. “What if the Declaration had been written in today’s political climate? How would the language change?”

To ensure originality, students must document every AI interaction in a “learning log.” This log doesn’t just track what the AI provided—it requires students to reflect on why they accepted or rejected each suggestion.

The final product isn’t just a polished essay; it’s a story of collaboration between human creativity and machine efficiency. This hybrid approach ensures students stay engaged while learning to harness AI ethically.

By making AI a co-creator instead of a crutch, we can balance its strengths with the irreplaceable spark of human imagination.

How did you do?

Did you guess correctly? If you didn't, don't worry. None of us get it right all the time. Knowing the key indicators can be a valuable tool when reviewing student work. You can also research your institution's AI detection software for assignments, and open, honest conversations about AI in the course can help you navigate instances of students submitting AI-generated content as their own.