ALWAYS generate interactive quizzes using the Quiz component (50 comprehensive questions total): 50 college-level conceptual questions with immediate feedback per question.
Version: 5.0.0 | Consolidates: quiz-answer-redistributor v3.0.0 | Alignment: Constitution v3.1.2, Co-Learning Partnership, "Specs Are the New Syntax" | KEY FEATURES: 50 total questions, automated answer redistribution with intelligent explanation regeneration, 15-20 displayed per batch, immediate feedback per question, randomized batching on retake, no pass/fail threshold, evals-aligned questions
Quiz Component Location: robolearn-interface/src/components/Quiz.tsx (globally registered—no imports needed)
Usage Reference: robolearn-interface/src/components/QUIZ_USAGE.md (example structure + best practices)
Example Quiz: robolearn-interface/src/components/references/example-quiz.md (full working example with all patterns)
Generate high-quality, college-level MCQ quizzes (50 comprehensive questions total) using the globally-registered Quiz component (`<Quiz />`). The component automatically handles batching, shuffling, immediate feedback, progress tracking, and score tracking.
Output is ALWAYS a Quiz component with 50 questions, NEVER static markdown quizzes.
Core Principles:
This skill should be used when creating end-of-chapter assessments. CRITICAL: This skill ALWAYS generates interactive quizzes using the Quiz component—NEVER static markdown quizzes. ALWAYS 50 QUESTIONS TOTAL.
Activate this skill when:
- Creating end-of-chapter assessments with the `<Quiz />` component

Trigger phrases:
MANDATORY REQUIREMENTS:
Shift from Recall to Understanding:
❌ Recall (Avoid):
"What is a Python list?" → Tests memorization
✅ Conceptual (Target):
"Given this code with list operations, what misconception does this error reveal?" → Tests understanding
Why College-Level?
Cognitive Level Target:
Quiz component shows feedback IMMEDIATELY after each answer (not delayed until results page). This enables:
Feedback shows:
Example feedback structure:
✓ CORRECT!
[Explanation of why this is right...]
vs.
✗ INCORRECT
Why your answer was wrong:
You selected: "append() is immutable"
This is incorrect. The correct answer is: "append() modifies the list in place"
Explanation:
Lists are mutable (changeable) in Python. The append() method modifies the original list
in place and returns None (doesn't return a new list). This is different from strings,
which are immutable. Understanding mutability is crucial for debugging...
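The mutability behavior the feedback describes can be verified directly in Python; this quick sketch illustrates what the explanation tells the student:

```python
# append() mutates the original list in place and returns None.
nums = [1, 2, 3]
result = nums.append(4)
print(nums)    # [1, 2, 3, 4] -- original list modified in place
print(result)  # None -- no new list is returned

# Strings are immutable: methods return new objects instead of mutating.
text = "abc"
upper = text.upper()
print(text, upper)  # abc ABC -- original string unchanged
```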
Why 50 questions?
How batching works:
Example flow:
Session 1: User sees questions [3, 47, 12, 28, 5, 11, ...] (15-20 questions)
Completes, sees results: 14/18 correct
User clicks "Try Another Batch" →
Session 2: User sees questions [42, 8, 31, 1, 19, 35, ...] (15-20 DIFFERENT questions)
Takes quiz again: 16/19 correct
Session 3: User clicks "Try Another Batch" →
Component shows completely DIFFERENT shuffle: [25, 9, 48, 2, 18, ...]
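The batching behavior above can be modeled as drawing a fresh random subset per session. This is a simplified sketch, not the actual `Quiz.tsx` implementation (the real logic lives in `robolearn-interface/src/components/Quiz.tsx`):

```python
import random

def draw_batch(question_bank, batch_size=18, seed=None):
    """Return a random batch of question indices from the full bank.

    Simplified model of the Quiz component's shuffling: each session
    shuffles the whole bank and takes the first batch_size indices,
    so retakes see a different mix of questions.
    """
    rng = random.Random(seed)
    indices = list(range(len(question_bank)))
    rng.shuffle(indices)
    return indices[:batch_size]

bank = [f"Q{i + 1}" for i in range(50)]
session1 = draw_batch(bank, batch_size=18, seed=1)
session2 = draw_batch(bank, batch_size=18, seed=2)
print(len(session1), len(set(session1)))  # 18 unique questions per session
```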
THE PROBLEM: Options of unequal length allow test-takers to guess correctly without reading questions. Example:
THE SOLUTION: ALL options within ±3 words of each other across all 50 questions.
Validation Procedure (MANDATORY):
Count words for EVERY option in EVERY question (all 50 questions):
Question: "Why is X important?"
A: "It improves performance quickly" (4 words)
B: "It simplifies code structure" (4 words)
C: "It prevents common bugs" (4 words)
D: "It helps developers work together" (5 words) ✓ All within ±3 range (4-5 is acceptable)
Flag any question failing the ±3 word rule:
❌ FAIL:
A: "Yes" (1 word)
B: "The framework processes requests asynchronously in a single event loop without blocking" (12 words)
→ Difference: 11 words → FAIL (>3 word difference)
✅ PASS:
A: "AI amplifies existing practices" (4 words)
B: "AI fixes all problems" (4 words)
C: "AI changes developer roles" (4 words)
D: "AI prevents errors completely" (4 words)
→ All within 4-4 range → PASS
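The counting procedure above is mechanical, so it can be automated. Here is a hypothetical helper (not part of the skill's bundled scripts) that applies the ±3 word rule across a question bank:

```python
def validate_option_lengths(questions, max_spread=3):
    """Flag questions whose options differ by more than max_spread words.

    questions: list of dicts with an "options" key (list of 4 strings).
    Returns a list of (question_index, min_words, max_words) failures.
    """
    failures = []
    for i, q in enumerate(questions):
        counts = [len(opt.split()) for opt in q["options"]]
        if max(counts) - min(counts) > max_spread:
            failures.append((i, min(counts), max(counts)))
    return failures

quiz = [
    {"options": ["Yes",
                 "The framework processes requests asynchronously "
                 "in a single event loop without blocking",
                 "No", "Maybe"]},                    # fails: 1 vs 12 words
    {"options": ["AI amplifies existing practices",
                 "AI fixes all problems",
                 "AI changes developer roles",
                 "AI prevents errors completely"]},  # passes: all 4 words
]
print(validate_option_lengths(quiz))  # [(0, 1, 12)]
```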
Verify length distribution (no correlation with correctness):
Document validation in handoff:
Option Length Validation Complete:
✓ All 50 questions checked
✓ All options within ±3 word range
✓ No length-correctness correlation
✓ Longest option correct in 6 questions
✓ Shortest option correct in 7 questions
✓ Middle-length correct in 12 questions
Why ±3 words matters:
📖 Reference: option-length-validation.md for detailed examples and verification scripts
Requirements:
Quiz Component Format:
```js
{
  question: "Your question?",
  options: ["Option A", "Option B", "Option C", "Option D"],
  correctOption: 2, // Index 0-3, NOT 1-4!
  explanation: "Why this is correct AND why other options are wrong..."
}
```
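A quick structural check over this format can catch the index and field mistakes the anti-patterns section calls out. This is a hypothetical helper, not part of the skill's bundled scripts:

```python
def check_question(q):
    """Validate one quiz question dict against the required format."""
    errors = []
    if len(q.get("options", [])) != 4:
        errors.append("must have exactly 4 options")
    if q.get("correctOption") not in (0, 1, 2, 3):
        errors.append("correctOption must be a 0-3 index (not 1-4)")
    if not str(q.get("source", "")).startswith("Lesson "):
        errors.append('source must look like "Lesson N: [Lesson Title]"')
    return errors

good = {"question": "Your question?",
        "options": ["A", "B", "C", "D"],
        "correctOption": 2,
        "explanation": "...",
        "source": "Lesson 1: Understanding Mutability"}
bad = {"question": "Q?", "options": ["A", "B", "C", "D"], "correctOption": 4}
print(check_question(good))  # []
print(check_question(bad))   # two errors: bad index, missing source
```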
Workflow: Generate → Redistribute → Validate
LLMs are excellent at content creation but struggle with strict procedural constraints like answer distribution. Therefore, this skill uses a two-step process:
Step 1: Generate Quiz (Content Creation)
Step 2: Automated Redistribution (Procedural Validation)
After quiz generation, automatically fix answer distribution bias using the bundled Python script:
```bash
python .claude/skills/authoring/quiz-generator/scripts/redistribute_answers_v2.py <quiz_file_path> <sequence_letter>
```
Example:
```bash
python .claude/skills/authoring/quiz-generator/scripts/redistribute_answers_v2.py \
  robolearn-interface/docs/04-Python-Fundamentals/14-data-types/quiz.md A
```
Available Sequences (A-H):
What the Redistributor Does:
Critical Features:
Output Report Example:
```text
Successfully re-distributed quiz using Sequence C

Execution Summary:
* Questions Processed: 50
* Options Swapped: 18
* Explanations Updated: 18
* Validation: ALL CHECKS PASSED

Final Distribution:
* Index 0: 12
* Index 1: 12
* Index 2: 13
* Index 3: 13
```
All 50 explanations verified. Each explanation correctly references the corresponding correct answer option.
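Conceptually, the redistributor moves each question's correct option to a balanced target index. This is a simplified sketch of the swap step only; the real `redistribute_answers_v2.py` additionally regenerates explanations so they still reference the correct option text:

```python
def redistribute(questions, target_sequence):
    """Move each question's correct option to its target index.

    Swaps the two option texts and updates correctOption; returns
    the number of questions that required a swap.
    """
    swapped = 0
    for q, target in zip(questions, target_sequence):
        current = q["correctOption"]
        if current != target:
            opts = q["options"]
            opts[current], opts[target] = opts[target], opts[current]
            q["correctOption"] = target
            swapped += 1
    return swapped

quiz = [
    {"options": ["right", "w1", "w2", "w3"], "correctOption": 0},
    {"options": ["w1", "w2", "right", "w3"], "correctOption": 2},
]
# Target sequence spreads correct answers across indices 0-3.
print(redistribute(quiz, [3, 1]))          # 2 swaps performed
print([q["correctOption"] for q in quiz])  # [3, 1]
print(quiz[0]["options"][3])               # right
```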
📖 Reference: answer-distribution.md for verification methods
Quiz component shows explanations immediately after each answer. Comprehensive explanations enable deeper learning:
Good Explanations (100-150 words):
Example:
```js
explanation: "Lists are mutable (changeable) in Python, so append() modifies the original
list in place. This is different from strings, which are immutable—you can't change them
after creation. The extend() method adds multiple items (like appending a list), but
append() adds a single item. Insert() requires both value and position. Understanding
mutability is crucial for debugging variable scope issues and understanding function
side effects. When a function calls append() on a list parameter, it modifies the
original list outside the function—a common source of bugs."
```
The source field links each question to the specific lesson it addresses. This helps students know which lesson material to review.
Format: "Lesson N: [Lesson Title]"
Source Extraction:
- Extract the lesson title (from the `title:` field in YAML frontmatter)

Examples:
source: "Lesson 1: Understanding Mutability"
source: "Lesson 3: Scope and Closures"
source: "Lesson 2: Unit Test Design"
Display in Feedback:
```yaml
question_count: 50                        # Comprehensive bank (50 total questions)
questions_per_batch: 15-20                # Questions displayed per session (component shuffles)
options_per_question: 4                   # Always exactly 4 options
question_format: multiple_choice          # Only MCQ
correct_answer_distribution: random_equal # Indices 0-3 equally distributed (~12-13 per index)
feedback_timing: immediate                # Shown after each answer (not delayed)
passing_score: NONE                       # No pass/fail threshold—just score tracking
file_naming: "##_chapter_##_quiz.md"      # e.g., 05_chapter_02_quiz.md
output_format: Markdown with Quiz component # <Quiz {...} questions={[...50 questions...]} />
component_globally_registered: true       # No imports needed
```
CRITICAL ANTI-PATTERNS:
❌ `##_quiz.md` (use `##_chapter_##_quiz.md`)

```md
---
sidebar_position: X # e.g., 05 (lesson count + 1)
title: "Chapter X: [Topic] Quiz"
---

# Chapter X: [Topic] Quiz

Brief introduction (1-2 sentences describing what students will assess).
```
```jsx
<Quiz
  title="Chapter X: [Topic] Assessment"
  questions={[
    {
      question: "Question 1 text here? (Conceptual, not recall)",
      options: [
        "Option A (specific text for this concept)",
        "Option B (specific text for this concept)",
        "Option C (specific text for this concept) ← CORRECT",
        "Option D (specific text for this concept)"
      ],
      correctOption: 2, // Index 0-3 (NOT 1-4!)
      explanation: "COMPREHENSIVE explanation (100-150 words): Explain why C is correct (2-3 sentences). Then address why each distractor is wrong: Why A is wrong (1-2 sentences). Why B is wrong (1-2 sentences). Why D is wrong (1-2 sentences). Real-world connection or misconception clarification (1-2 sentences). Total should be 100-150 words addressing all four options.",
      source: "Lesson 1: Understanding Mutability"
    },
    {
      question: "Question 2 text?",
      options: [
        "Option A (distinct alternative misconception)",
        "Option B (distinct alternative misconception)",
        "Option C (distinct alternative misconception)",
        "Option D (correct answer) ← CORRECT"
      ],
      correctOption: 3,
      explanation: "Full explanation addressing why D is correct and why A, B, C are incorrect...",
      source: "Lesson 2: Reference vs. Value"
    },
    // ... 48 more questions (total: 50 questions)
    // Quiz component will shuffle and display 15-20 per session
  ]}
  questionsPerBatch={18} // Optional: customize questions per session (default: 15)
/>
```
Key Requirements (CRITICAL):
- `correctOption` uses 0-3 index (NOT 1-4!)
- `source` field REQUIRED for all questions (format: "Lesson N: [Lesson Title]")
- Use `<Quiz />` (globally registered component)
- No `passingScore` prop (removed: no pass/fail threshold)
- `questionsPerBatch` prop (default: 15, can be 15-20)

📖 Reference: file-naming.md for naming conventions | example-quiz.md for complete working example
Chapter Content → Analyze Concepts → Generate 50 Questions →
Design Distractors → Randomize Answers → Write Explanations →
Format Quiz Component → Validate → ##_chapter_##_quiz.md
`<Quiz {...} questions={[...50 questions...]} />` with all 50 in the questions array

📖 Reference: generation-process.md for detailed stage-by-stage workflow
- File named `##_chapter_##_quiz.md`
- `questionsPerBatch` prop available for customization

📖 Reference: quality-checklist.md for complete validation criteria
Fewer than 50 Questions (CRITICAL): Only generating 15-20 questions
Index Confusion (CRITICAL): Using correctOption: 1-4 instead of 0-3
- ❌ `correctOption: 4` → References non-existent 5th option
- ✅ `correctOption: 3` → Correct (last option, 4th item)

Missing Source Field (CRITICAL): Not including `source` field for questions
- ❌ Missing `source` field → Students don't know which lesson the question addresses
- ✅ Fix: `source: "Lesson N: [Lesson Title]"`

Including Passing Score: Adding `passingScore` prop
- ❌ `<Quiz ... passingScore={70} />` → No pass/fail in new version
- ✅ `<Quiz ... />` → Just score tracking, no threshold

Testing Recall: "What is X?" questions → Memorization
Weak Distractors or Incomplete Explanations: Not addressing why each option is right/wrong
Answer Patterns: Obvious distribution patterns in correctOption across 50 questions
Option Length Bias (🚨 CRITICAL - TEST VALIDITY THREAT): Options of unequal length allow test-takers to achieve 60-70%+ accuracy by selecting longest/shortest option WITHOUT reading questions
Impact: Unequal lengths undermine the entire quiz's validity. Student might appear to understand when they're just following a pattern.
Examples:
❌ INVALID: A: "Yes" (1 word), B: "The framework processes requests asynchronously in a single event loop" (11 words), C: "No" (1 word), D: "Maybe" (1 word)
❌ INVALID: Longest option is correct in 35 out of 50 questions (70%)
✅ VALID: All options exactly 4 words: "AI amplifies existing practices", "AI fixes broken processes", "AI prevents all errors", "AI changes developer skill"
Fix (MANDATORY):
Validation Checklist:
📖 Reference: pitfalls-and-solutions.md for all common mistakes
File naming: `##_chapter_##_quiz.md`

Where:
- First `##` = sidebar_position (lesson count + 1)
- Second `##` = chapter number (zero-padded)
- `.md` extension (Quiz component is globally registered, no imports needed)

Examples:
- `05_chapter_02_quiz.md`
- `07_chapter_05_quiz.md`
- `06_chapter_14_quiz.md`

Why this naming:
- `.md` extension (Quiz component handles JSX rendering in markdown)

📖 Reference: file-naming.md for complete guidance

`specs/book/chapter-index.md`

The quiz is ready for human review when:
Content Complete:
Answer Randomization Verified:
Option Length Validation Verified (🚨 MANDATORY):
Explanation Quality Verified (CRITICAL for Immediate Feedback):
Quiz Component Format Valid:
- `source` field present for ALL 50 questions (format: "Lesson N: [Lesson Title]")
- `questionsPerBatch={18}` (or omitted to use default 15)
- `<Quiz />` is globally registered
- Filename `##_chapter_##_quiz.md` (correct numbering)
- `.md` extension

Human Review Checklist:
📖 Reference: quality-checklist.md for complete validation
This skill includes detailed reference documentation:
Use Read tool to access references as needed during quiz generation.
Quiz Generator v5.0.0 ALWAYS creates interactive assessments using the globally-registered Quiz component with 50 COMPREHENSIVE QUESTIONS. NEVER creates static markdown quizzes or fewer than 50 questions. The component automatically displays 15-20 random questions per batch, shuffled differently on each retake. Features immediate feedback per question (correct option + explanation + why wrong if incorrect), no passing/failing threshold (just score tracking), progress tracking, answer validation, color-coded feedback, a retake button, and full theme support.
Every quiz MUST:
- Contain exactly 50 questions with exactly 4 options each
- Use `correctOption` indices 0-3 (never 1-4)
- Include a `source` field on every question ("Lesson N: [Lesson Title]")
- Keep all options within the ±3 word length rule
- Provide 100-150 word explanations addressing all four options
- Omit the `passingScore` prop