Build a foundation for assessment ideation with AI using the 3Ps of Quality Prompting. Then, take your AI assessment brainstorm to the next level with these tips for prompting!
Prep: Setting the Stage with Contextual Information
These techniques focus on providing the AI with the necessary background and context to improve the relevance of its output.
The use of personas (directing the AI to “pretend you are X”) is becoming less important as AI grows more sophisticated but can still be useful. The prompt can include the persona’s background, personality traits, values, and interests/motivations.
Tip: Ask the AI to assume the character of an expert in the subject matter and (online) course design.
Role-specific prompting asks the AI to adopt specific functional roles (e.g., struggling student, non-native speaker) to explore different perspectives and challenges. When combined with bias and ethics considerations and edge case exploration, this approach ensures that the assessment is inclusive, free from bias, and adaptable to a wide variety of learner profiles and unexpected challenges.
Tip: When designing assessments, use role-specific prompting to uncover student challenges, evaluate for potential bias, and explore less obvious scenarios. For example, “Assume the role of a non-native English speaker: What barriers might this assessment present?” Then follow up with “Evaluate this assessment for any implicit bias or ethical concerns” and “What potential pitfalls or edge cases could impact its success for students with different learning needs?”
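The three-step sequence above can be sketched as reusable prompt templates. This is a minimal illustration in Python; the function names and exact wording are assumptions, not a fixed API, and the draft assessment is a made-up example.

```python
def role_prompt(role: str, artifact: str) -> str:
    """Ask the AI to adopt a functional role and critique the draft assessment."""
    return (f"Assume the role of a {role}: "
            f"What barriers might this assessment present?\n\n{artifact}")

def bias_check_prompt(artifact: str) -> str:
    """Follow-up: screen the same draft for bias and ethical concerns."""
    return ("Evaluate this assessment for any implicit bias or ethical "
            f"concerns:\n\n{artifact}")

def edge_case_prompt(artifact: str) -> str:
    """Follow-up: probe edge cases for students with different learning needs."""
    return ("What potential pitfalls or edge cases could impact this "
            "assessment's success for students with different learning "
            f"needs?\n\n{artifact}")

draft = "Write a 500-word essay analyzing the assigned reading."
sequence = [role_prompt("non-native English speaker", draft),
            bias_check_prompt(draft),
            edge_case_prompt(draft)]
```

Sending the three prompts in order (within one conversation) keeps the role critique, the bias check, and the edge-case exploration anchored to the same draft.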
Contextualizing output means asking the AI to align its responses with specific course content, learning objectives, or student demographics. This ensures that the assessments are relevant and tailored to the course context.
Tip: When creating an assessment, provide the AI with course-specific information. For example, “Design an assessment based on the key concepts from Module 3, considering diverse student backgrounds.” This helps ensure that the assessment is not only aligned with course goals but also relevant to the students’ experiences.
Purpose: Defining the Goal and Structure
These techniques ensure the AI understands the instructional goals and objectives of the task, as well as the structure it should follow.
Summarization-based prompting (also called pre-ideation summarization) asks the AI to provide brief summaries of ideas or options before exploring them in detail.
Tip: Apply this technique to quickly generate overviews of possible assessment topics or formats, enabling you to compare and select the most suitable option (or combination) before committing.
Self-interrogative prompting encourages the AI to ask clarifying questions or restate its understanding of the task before beginning. This improves alignment with your goals, enhances response accuracy, and reduces the chance of misinterpretation.
Tip: Before generating an assessment, ask the AI to either restate the task in its own words or raise any clarifying questions. For example, “How do you interpret this task?” or “What additional information would help you create a more accurate assessment?” This ensures the AI fully understands the assignment and can produce relevant, precise output.
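One lightweight way to apply this tip is to wrap every task in a standing instruction to restate and question first. A minimal sketch, assuming a generic prompt string; the wrapper function and its wording are illustrative.

```python
def self_interrogative(task: str) -> str:
    """Wrap a task so the AI restates it and raises questions before answering."""
    return ("Before you begin: restate this task in your own words, then list "
            "any clarifying questions whose answers would help you produce a "
            "more accurate assessment.\n\nTask: " + task)

prompt = self_interrogative("Design a quiz covering the key concepts of Module 3.")
```

You then answer the AI's questions before asking it to generate the assessment itself.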
Self-evaluation prompting asks the AI to critique its own output for accuracy, relevance, and quality, while also reflecting on its process to suggest improvements. This encourages iterative refinement and deeper insights into the generated content.
Tip: After generating an assessment, ask the AI to evaluate its output against specific criteria and reflect on potential areas for improvement. For instance, you could prompt, “Evaluate your assessment based on the rubric and suggest how it could better promote critical thinking.” This ensures a high-quality, thoughtful output aligned with your objectives.
One-shot prompting gives the AI a single example before asking it to complete a task, while two-shot prompting provides two examples, helping refine the AI’s understanding of the task.
Tip: When creating assessments, you might offer the AI a sample rubric or assignment format to clarify your expectations.
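The same template handles both one-shot and two-shot prompting, depending on how many examples you pass in. A sketch with a made-up rubric snippet; the helper name is an assumption.

```python
def few_shot_prompt(task: str, examples: list) -> str:
    """Prefix a task with worked examples (one-shot: one example; two-shot: two)."""
    shots = "\n\n".join(f"Example {i + 1}:\n{ex}" for i, ex in enumerate(examples))
    return f"{shots}\n\nUsing the example(s) above as a model:\n{task}"

rubric_a = "Criteria: thesis clarity (4 pts), evidence (4 pts), mechanics (2 pts)."
rubric_b = "Criteria: accuracy (5 pts), presentation (5 pts)."

one_shot = few_shot_prompt("Draft a rubric for a lab report.", [rubric_a])
two_shot = few_shot_prompt("Draft a rubric for a lab report.", [rubric_a, rubric_b])
```

With two examples, the AI can infer not just the format you want but the range of variation you consider acceptable.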
Positive reinforcement prompting focuses on internal, constructive motivations—encouraging thoughtful, valuable, and meaningful contributions because of their positive impact on learning, decision-making, or problem-solving. The aim is to motivate the AI (or learner) to perform well because the task itself is valuable and leads to beneficial outcomes.
Tip: Emphasize the importance of the task (“help improve understanding” and “lead to better student outcomes”). Reinforce the value of the AI’s contribution (“Your input is valuable”). Focus on a positive outcome rather than pressuring the AI to avoid negative consequences.
Parameters: Setting Boundaries and Guidelines
These techniques define the boundaries for the AI’s output, including constraints, formatting, and step-by-step structure.
Constraint-based prompting involves setting clear limits on scope, workload, and duration to guide the AI’s output within relevant boundaries.
Tip: Specifying scope, workload, and duration when designing online course assessments ensures that the assessment is appropriately challenging, manageable, and time-bound, and sets clear expectations for students.
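Those three constraints can be attached to any design task as an explicit block. A minimal sketch; the parameter names and sample values are assumptions for illustration.

```python
def constrained_prompt(task: str, scope: str, workload: str, duration: str) -> str:
    """Append explicit scope, workload, and duration limits to a design task."""
    return (f"{task}\n\nConstraints:\n"
            f"- Scope: {scope}\n"
            f"- Workload: {workload}\n"
            f"- Duration: {duration}")

prompt = constrained_prompt(
    "Design a formative assessment for an introductory statistics course.",
    scope="Module 2 only: descriptive statistics",
    workload="no more than 10 questions",
    duration="completable in 30 minutes")
```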
Chain-of-thought (or computational thinking) prompting asks the AI to break tasks into logical steps or explain its reasoning. Working through details and parameters step by step improves output quality, transparency, and task structure.
Tip: Use this technique in two ways: to surface the AI’s “thought process” so you can spot and correct misinterpretations more easily, and to guide the AI in structuring complex, multi-step assessments that walk students through the task in the same step-by-step fashion.
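A chain-of-thought instruction can be as simple as a suffix requesting reasoning before the answer. A sketch under that assumption; the wording is one of many that work.

```python
def chain_of_thought(task: str) -> str:
    """Ask the AI to show its reasoning step by step before the final answer."""
    return (task + "\n\nWork step by step: list each design decision in order, "
            "briefly explain the reasoning behind it, and only then present "
            "the final assessment.")

prompt = chain_of_thought("Create a three-part capstone assessment for this course.")
```

Reading the listed decisions before the final assessment is where you catch misinterpretations early.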
Iterative prompting is a technique where the AI is repeatedly prompted with slight modifications to improve its output over several rounds. This allows for refining responses based on feedback or new insights, leading to a more polished final product.
Tip: After generating an initial assessment, ask the AI to refine its output. For example, you could request adjustments to the difficulty level, clarity of instructions, or alignment with learning objectives. Iterating ensures that the assessment becomes more targeted and effective with each revision.
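The refine-and-resubmit loop described above can be sketched as a small driver function. `send_to_ai` is a deliberate placeholder, any callable that takes a prompt string and returns a response will do; the stand-in client below exists only so the sketch runs without a real API.

```python
def iterate(send_to_ai, initial_prompt: str, refinements: list) -> list:
    """Generate a draft, then revise it once per refinement request.

    `send_to_ai` stands in for whatever chat client you use: here it is
    simply any callable mapping a prompt string to a response string.
    """
    outputs = [send_to_ai(initial_prompt)]
    for request in refinements:
        follow_up = (f"Here is the current draft:\n{outputs[-1]}\n\n"
                     f"Revise it as follows: {request}")
        outputs.append(send_to_ai(follow_up))
    return outputs

# Stand-in client for demonstration; swap in a real API call in practice.
fake_ai = lambda prompt: f"[response to {len(prompt)}-char prompt]"
drafts = iterate(fake_ai,
                 "Draft a peer-review assessment for Module 4.",
                 ["lower the reading level of the instructions",
                  "align each task with learning objective 4.2"])
```

Keeping every draft in the returned list lets you compare rounds and revert if a revision makes things worse.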
Self-prompting is a technique where the AI generates its own task prompt to clarify its understanding and ensure better alignment with the intended goal. This helps the AI critically assess the task requirements, leading to more precise and relevant outputs.
Tip: When designing assessments, ask the AI to write its own prompt for the task. This can help clarify any unclear expectations and ensure the AI-generated assessment is aligned with your learning objectives.