Generate measurable learning outcomes aligned with Bloom's taxonomy and CEFR proficiency levels for educational content. Use this skill when educators need to define what students will achieve,...
Enable educators to create measurable, actionable learning objectives aligned with Bloom's taxonomy and CEFR proficiency levels. This skill helps:
Constitution v4.0.1 Alignment: This skill implements evals-first objective design, defining success criteria BEFORE creating learning objectives, integrating CEFR proficiency levels (Principle 5: Progressive Complexity), and incorporating Section IIb (AI Three Roles Framework) co-learning outcomes.
Use this skill when:
CRITICAL WORKFLOW:
Template:
### Objective Design (Evals-First)
**Source**: Chapter spec at `specs/part-X/chapter-Y/spec.md`
**Success Evals from Spec**:
1. 75%+ of learners write a valid specification (business goal: reduce iteration cycles)
2. 80%+ of learners identify vague requirements (business goal: prevent scope creep)
**Learning Objectives Designed to Achieve Evals**:
- LO-001: Write clear specifications → Eval #1
- LO-002: Identify ambiguous requirements → Eval #2
Do NOT create objectives without:
When a request comes in to generate learning objectives, first understand:
If you're not familiar with the specific topic's cognitive levels, read: 📖 reference/blooms-taxonomy-programming.md
This document maps Bloom's 6 levels to programming contexts with:
Read the guidance on prerequisite analysis: 📖 reference/prerequisite-analysis.md
For your objectives, determine:
Based on the Bloom's level of each objective, review appropriate assessment methods: 📖 reference/assessment-methods.md
This guides you to pair objectives with realistic assessment approaches (code exercises for Apply level, code reviews for Evaluate, etc.).
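The Bloom-to-assessment pairing above can be sketched as a simple lookup. This is an illustrative summary of the guidance, not the exact contents of reference/assessment-methods.md:

```python
# Illustrative pairing of Bloom's levels with realistic assessment methods.
# The method strings summarize the guidance; the reference file is authoritative.
ASSESSMENT_BY_BLOOM = {
    "Remember": "quick quiz / matching exercise",
    "Understand": "paraphrase or line-by-line walkthrough",
    "Apply": "code exercise against provided requirements",
    "Analyze": "code analysis comparing two implementations",
    "Evaluate": "design critique or code review with justification",
    "Create": "mini-project with documented design decisions",
}

def suggest_assessment(blooms_level: str) -> str:
    """Return a suggested assessment method for a given Bloom's level."""
    return ASSESSMENT_BY_BLOOM.get(blooms_level, "unknown level")
```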
For each topic, create 3-5 objectives (typically):
Use the template as guidance: 📄 templates/learning-objective-template.yml
Key principle: Each objective should answer:
Align objectives with international proficiency standards (from skills-proficiency-mapper v2.0):
A1 (Beginner - Recognition):
A2 (Elementary - Guided Application):
B1 (Intermediate - Independent Application):
B2 (Upper-Intermediate - Analysis):
C1 (Advanced - Creation/Synthesis):
Proficiency Progression Rule: Lessons should progress A1→A2→B1 within a chapter (not jump from A1 to C1).
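The progression rule can be checked mechanically. A minimal sketch, assuming "no jump" means consecutive lessons never skip a CEFR level upward (the strictness threshold is an assumption):

```python
# Sketch: verify a chapter's lessons climb CEFR levels one step at a time
# (A1 -> A2 -> B1 ...), never jumping e.g. from A1 straight to C1.
CEFR_ORDER = ["A1", "A2", "B1", "B2", "C1"]

def progression_ok(levels: list[str]) -> bool:
    """True if no lesson jumps more than one CEFR level above the previous one."""
    ranks = [CEFR_ORDER.index(lvl) for lvl in levels]
    return all(b - a <= 1 for a, b in zip(ranks, ranks[1:]))
```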
CRITICAL: AI-native learning objectives must include ability to work WITH AI in bidirectional co-learning partnership (per Section IIb forcing functions), not just independently.
Traditional Objective Format:
LO-001: Implement user authentication (independent skill)
AI-Native Objective Format:
LO-001: Implement user authentication working with AI as co-learning partner
- Use AI as Teacher: Learn security patterns from AI suggestions
- Use AI as Student: Refine AI's output through clear specifications
- Use AI as Co-Worker: Iterate toward optimal solution collaboratively
- Validate: Verify AI-generated code meets security requirements
Three-Role Objective Types:
1. AI as Teacher Objectives (Student learns from AI):
2. AI as Student Objectives (Student teaches AI):
3. AI as Co-Worker Objectives (Collaborative iteration):
Example AI-Native Objective Set:
- id: "LO-AUTH-001"
statement: "Implement OAuth authentication working with AI as co-learning partner"
blooms_level: "Apply"
cefr_level: "B1"
three_role_integration:
ai_as_teacher: "Learn refresh token rotation pattern from AI suggestion"
ai_as_student: "Guide AI through security requirements via clear spec"
ai_as_coworker: "Iterate on session management approach together"
assessment_method: "Code + reflection: Show implementation AND what you learned from AI"
success_criteria:
- "OAuth implementation works correctly"
- "Student identifies at least one pattern learned from AI"
- "Student demonstrates validation of AI output"
Objective Balance for AI-Native Content:
Once you've generated objectives, invoke the validation script to check that they're measurable:
python .claude/skills/authoring/learning-objectives/scripts/validate-objectives.py objectives.yml
The script checks:
If validation fails, review the error messages and refine:
Based on validation feedback:
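The real validate-objectives.py is not shown here; as an illustration, one check such a script might run is flagging statements that open with a non-observable verb:

```python
# Illustrative measurability check (a hypothetical sketch, not the actual
# logic of validate-objectives.py): reject statements that open with a verb
# no assessor can directly observe.
VAGUE_VERBS = {"understand", "know", "learn", "appreciate", "be aware of"}

def is_measurable(statement: str) -> bool:
    """Flag objective statements that start with a non-observable verb."""
    s = statement.strip().lower()
    return not any(s.startswith(verb + " ") for verb in VAGUE_VERBS)
```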
Present learning objectives in clear, structured format:
topic: "Topic Name"
objectives:
- id: "LO-001"
statement: "Action verb + what learner will do"
blooms_level: "Remember/Understand/Apply/Analyze/Evaluate/Create"
context: "Specific situation or problem"
prerequisites:
- "Prerequisite 1"
- "Prerequisite 2"
assessment_method: "How this will be assessed"
success_criteria:
- "Criterion 1"
- "Criterion 2"
- [ID: LO-001 | Bloom: L3-Apply]
Statement: Implement X in Y context
Evidence: Quiz Q3, Task T2
User Request: "Create objectives for teaching Python list comprehensions to beginners"
Generated Objectives:
topic: "Python List Comprehensions"
objectives:
- id: "LO-C1-001"
statement: "Define list comprehension and identify its basic syntax"
blooms_level: "Remember"
context: "Introduction to list comprehensions"
prerequisites:
- "Know Python list syntax and indexing"
- "Can write basic for loops"
assessment_method: "Quick quiz: match syntax to purpose"
success_criteria:
- "Can identify list comprehension in code"
- "Can label parts of syntax (brackets, for, if)"
- id: "LO-C1-002"
statement: "Explain how list comprehension replaces for loop patterns"
blooms_level: "Understand"
context: "Comparing traditional loops with comprehensions"
prerequisites:
- "Understand for loop iteration"
assessment_method: "Paraphrase: explain in own words"
success_criteria:
- "Can describe the equivalent for loop for any list comprehension"
- "Can explain what's happening step-by-step"
- id: "LO-C1-003"
statement: "Implement list comprehensions to filter and transform data"
blooms_level: "Apply"
context: "Given various data transformation tasks"
prerequisites:
- "Can explain how comprehensions replace for loop patterns (LO-C1-002)"
- "Understand string/number methods"
assessment_method: "Code exercise: write comprehensions for provided requirements"
success_criteria:
- "Code executes without errors"
- "Output matches expected results"
- "Uses list comprehension (not a for loop)"
- id: "LO-C1-004"
statement: "Compare list comprehension with traditional loops by examining efficiency and readability"
blooms_level: "Analyze"
context: "Choosing approaches for data processing"
prerequisites:
- "Implement list comprehensions fluently"
- "Understand basic time complexity"
assessment_method: "Code analysis: compare two implementations"
success_criteria:
- "Identifies efficiency differences (space/time)"
- "Considers readability and maintainability"
- "Justifies when to use each approach"
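A sample solution for LO-C1-003, with the loop comparison LO-C1-004 asks for (the scores data and the 5-point curve are illustrative):

```python
# Apply-level sample (LO-C1-003): filter and transform data in one expression.
scores = [72, 95, 88, 61, 99]  # illustrative data

# Traditional loop version (what LO-C1-004 asks students to compare against):
passing_loop = []
for s in scores:
    if s >= 70:                      # filter: keep passing scores only
        passing_loop.append(s + 5)   # transform: apply a 5-point curve

# Equivalent list comprehension: same filter and transform, one line.
passing = [s + 5 for s in scores if s >= 70]

assert passing == passing_loop
```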
User Request: "I'm teaching OOP fundamentals. What should students be able to do?"
Generated Objectives:
topic: "Object-Oriented Programming Fundamentals"
objectives:
- id: "LO-OOP-001"
statement: "Define class, instance, method, and attribute and distinguish between them"
blooms_level: "Remember"
context: "Introduction to OOP vocabulary"
prerequisites:
- "Understand functions and variables in Python"
assessment_method: "Matching exercise and short definitions"
success_criteria:
- "Can define each term correctly"
- "Can match definitions to examples in code"
- id: "LO-OOP-002"
statement: "Explain how `__init__` initializes instances and why `self` references the current object"
blooms_level: "Understand"
context: "Understanding object instantiation"
prerequisites:
- "Know class definition syntax"
- "Understand function parameters and returns"
assessment_method: "Walkthrough: explain code line-by-line"
success_criteria:
- "Can trace execution of __init__ method"
- "Can explain what self represents"
- id: "LO-OOP-003"
statement: "Design and implement a class with attributes and methods for a specific domain"
blooms_level: "Apply"
context: "Real-world object requirements (e.g., Student, BankAccount, Car)"
prerequisites:
- "Can write class definitions with __init__"
- "Understand instance vs class scope"
assessment_method: "Code exercise: implement class from requirements"
success_criteria:
- "Class correctly models the domain"
- "Attributes store state appropriately"
- "Methods perform expected behaviors"
- "Code follows PEP 8 naming conventions"
- id: "LO-OOP-004"
statement: "Analyze class hierarchies and identify inheritance relationships"
blooms_level: "Analyze"
context: "Understanding code organization and reuse patterns"
prerequisites:
- "Implement basic classes fluently"
- "Understand inheritance syntax"
assessment_method: "Code analysis exercise: identify class relationships"
success_criteria:
- "Identifies parent/child relationships correctly"
- "Explains why inheritance is used"
- "Identifies methods that are overridden"
- id: "LO-OOP-005"
statement: "Evaluate a class design and justify changes for maintainability and extensibility"
blooms_level: "Evaluate"
context: "Code review of existing class structures"
prerequisites:
- "Analyze inheritance patterns"
- "Understand design principles (DRY, SOLID)"
assessment_method: "Design critique: suggest improvements with justification"
success_criteria:
- "Identifies duplication that could be eliminated"
- "Suggests appropriate inheritance changes"
- "Justifies changes based on maintenance concerns"
- id: "LO-OOP-006"
statement: "Design an OOP solution for a complex domain problem"
blooms_level: "Create"
context: "Building a small system with multiple interacting classes"
prerequisites:
- "Can implement classes fluently (LO-OOP-003)"
- "Can analyze and evaluate class designs (LO-OOP-004, LO-OOP-005)"
assessment_method: "Create a mini-project with multiple classes"
success_criteria:
- "Solution correctly models the domain"
- "Uses inheritance appropriately"
- "Code is well-organized and reusable"
- "Design decisions are documented and justified"
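A sample solution shape for LO-OOP-003, using the BankAccount domain the objective suggests (attribute names and validation rules are illustrative):

```python
# Apply-level sample (LO-OOP-003): a small class modeling a real-world domain.
class BankAccount:
    """Models a simple account: state in attributes, behavior in methods."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner       # attribute: who owns the account
        self.balance = balance   # attribute: current funds

    def deposit(self, amount: float) -> None:
        """Add funds; reject non-positive amounts."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        """Remove funds; reject overdrafts."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
```

A grader can check the success criteria directly: the class models the domain, attributes hold state, and methods perform the expected behaviors.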
Problem: "Understand decorators"
Why it's bad: Not measurable. What does "understand" mean? How will you know if they understand?
Solution: Use specific action verb and add context
Problem: Created 15 objectives for 1-hour lesson
Solution: Reduce to 3-5 objectives. Ask:
Problem: Prerequisites include "Master Python" and "Understand all design patterns"
Solution: Be minimal. Ask: "What's the absolute minimum they need to know to start learning this?"
Problem: Only created Remember and Apply objectives
Solution:
This skill works well with:
→ exercise-designer skill: Once you have learning objectives, use that skill to create practice exercises aligned to each objective's Bloom's level
→ technical-clarity skill: Use to review the clarity of your objective statements
→ code-example-generator skill: Use to create examples that demonstrate Apply-level objectives
As books increasingly cover advanced AI topics, use this skill to define learning objectives for:
Topic: "Building Autonomous AI Agents"
topic: "Agentic AI Fundamentals"
objectives:
- id: "LO-AGENT-001"
statement: "Define autonomous agents, distinguish them from traditional chatbots, and explain key characteristics (goal-seeking, state management, tool use)"
blooms_level: "Understand"
context: "Introduction to agentic AI concepts"
prerequisites:
- "Understand AI basics and language models"
assessment_method: "Short answer or concept mapping"
success_criteria:
- "Can list key differences between agents and chatbots"
- "Can explain goal-directed behavior and autonomy"
- id: "LO-AGENT-002"
statement: "Implement a simple autonomous agent that uses tools to accomplish a specified goal"
blooms_level: "Apply"
context: "Building working agents with Python"
prerequisites:
- "Understand agent architecture and tool-use patterns"
- "Can write Python functions and handle API calls"
assessment_method: "Code exercise: implement agent from requirements"
success_criteria:
- "Agent autonomously reaches its goal"
- "Correctly selects and uses available tools"
- "Handles tool responses and iterates"
- id: "LO-AGENT-003"
statement: "Analyze agent behavior and identify when agents succeed, fail, or enter loops; propose fixes"
blooms_level: "Analyze"
context: "Debugging and improving agent performance"
prerequisites:
- "Can implement basic agents"
- "Understand agent decision-making patterns"
assessment_method: "Debug exercise: analyze agent logs and improve behavior"
success_criteria:
- "Identifies root causes of agent failures"
- "Proposes specific improvements (better prompts, new tools, constraints)"
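The agent loop LO-AGENT-002 targets can be sketched with toy tools. The inc/dec tools and the numeric goal here are hypothetical stand-ins for real tool calls such as API requests:

```python
# Toy sketch of a goal-seeking agent loop: pick a tool, use it, observe the
# new state, and stop when the goal is reached or the step budget runs out.
def run_agent(goal: int, tools: dict, state: int = 0, max_steps: int = 10) -> int:
    """Iterate tool calls until `state` equals `goal` (hypothetical setup)."""
    for _ in range(max_steps):
        if state == goal:          # goal reached: stop autonomously
            break
        # Tool selection: choose whichever tool moves state toward the goal.
        tool = tools["inc"] if state < goal else tools["dec"]
        state = tool(state)        # use the tool and observe the result
    return state

tools = {"inc": lambda s: s + 1, "dec": lambda s: s - 1}
```

The `max_steps` budget is what keeps the agent from the infinite loops LO-AGENT-003 asks students to diagnose.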
Topic: "Integrating MCP into Python Applications"
topic: "Model Context Protocol (MCP) Integration"
objectives:
- id: "LO-MCP-001"
statement: "Explain MCP architecture, the role of servers and clients, and how MCP extends model capabilities"
blooms_level: "Understand"
context: "Introduction to MCP concepts"
prerequisites:
- "Understand Python modules and client-server patterns"
assessment_method: "Explanation exercise: describe MCP client-server relationship"
success_criteria:
- "Can explain what MCP servers provide"
- "Can describe the client-server protocol flow"
- id: "LO-MCP-002"
statement: "Install and configure existing MCP servers and integrate them into a Python application"
blooms_level: "Apply"
context: "Using MCP in real applications"
prerequisites:
- "Understand MCP architecture"
- "Can write Python clients and manage dependencies"
assessment_method: "Code exercise: integrate MCP server into working application"
success_criteria:
- "Correctly configures MCP client"
- "Application successfully calls MCP server tools"
- "Responses are properly handled and integrated"
- id: "LO-MCP-003"
statement: "Design and implement a custom MCP server that provides tools for a specific domain"
blooms_level: "Create"
context: "Building reusable MCP tools"
prerequisites:
- "Can integrate existing MCP servers"
- "Understand server-side architecture and tool definitions"
assessment_method: "Project: implement MCP server with documentation"
success_criteria:
- "Server correctly implements MCP protocol"
- "Tools are well-defined with clear descriptions"
- "Client can successfully call server tools"
- "Code is documented and maintainable"
Key Considerations for Advanced Topics:
Ready to generate objectives? Provide:
Or paste existing objectives and I'll help make them more measurable!