By the end of this session, you can…
- LO 3.1 · Classify each objective in your project against the five-type taxonomy (retrieval, discrimination, procedural fluency, conceptual reasoning, judgment under uncertainty).
- LO 3.2 · Name the signature mechanic family that fits each objective type, and the common mismatches to avoid.
- LO 3.3 · Produce D2 — a crosswalk mapping 2–4 objectives to mechanics with rationale, risks, and an amplifier.
- LO 3.4 · Defend one mechanic choice against a plausible alternative, citing the objective type and a named failure mode.
- LO 3.5 · Mark at least one objective as out of scope for this build — and say why.
A workable taxonomy for game design
Bloom's revised taxonomy has six levels and is faithful to how cognition layers. For game design decisions, six is too many and the vertical metaphor misleads. The five categories below cover the vast majority of educational-game objectives and — crucially — each one responds to a different kind of loop.
| Type | What it is | Tell-tale verbs |
|---|---|---|
| Retrieval | Bring a fact, name, or definition back from memory on demand. | Name, list, recognize, recall |
| Discrimination | Tell two or more similar cases apart based on distinguishing features. | Distinguish, sort, classify, identify |
| Procedural fluency | Execute a known procedure quickly, accurately, and adaptively. | Perform, apply, operate, compute |
| Conceptual reasoning | Use a model to explain or predict; transfer across surface features. | Explain, predict, model, derive |
| Judgment under uncertainty | Choose among partially-informed options with real trade-offs. | Decide, prioritize, triage, weigh |
A good educational game picks a dominant type, supports it with one or two others, and says no to the rest. A game that tries to teach all five is almost certainly teaching none of them well.
Which loops teach which kinds
There is no one-to-one map from objective to mechanic — but there are strong fits, bad fits, and amplifiers. Treat this as a hypothesis generator. Test in prototype (Session 07), not in argument.
| Objective type | Signature mechanic families | Characteristic failure mode |
|---|---|---|
| Retrieval | Spaced recall; timed matching; streak-break loops. | Decorative context. Players memorize the quiz, not the content. |
| Discrimination | Sorting under pressure; progressive pair reveals; same-or-different bursts. | Surface cues leak. Learners pass without seeing the discriminating feature. |
| Procedural fluency | Step sequencing with drift detection; tool-chain puzzles; speed-run with correctness gates. | Procedure becomes mechanical. No transfer to new tools or edge cases. |
| Conceptual reasoning | Sandboxes with lawful constraints; prediction-then-test; scenario ladders that vary one feature. | Sandbox with no friction. Players play, do not reason; model never gets pressure-tested. |
| Judgment under uncertainty | Branching scenarios with delayed feedback; resource trade-off economies; role play with partial info. | Branching that telegraphs the right answer. No real trade-off → no real judgment. |
A one-page instrument you will revise every session
D2 is a four-column table. Each row is one objective. Every downstream deliverable references this artifact — revisit it after every playtest.
| Column | Content | Quality cue |
|---|---|---|
| Objective | Written as an observable verb with a concrete object. | Passes the "what would I score?" test. |
| Type | One of the five types. No hyphenated types. | If you need two, split into two rows. |
| Mechanic (candidate) | A specific loop — not a genre. "Time-pressured triage with delayed consequence," not "a strategy game." | Can be described in one sentence a developer would act on. |
| Rationale & risk | Why this mechanic teaches this type for your learner; the most likely failure mode; an amplifier that would raise quality if budget allows. | The risk sentence names a concrete mechanism, not a feeling. |
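The column rules above are mechanical enough to lint. A minimal sketch in Python — the field names and checks are illustrative, not a required schema:

```python
from dataclasses import dataclass

# The five objective types, exactly as the taxonomy names them.
VALID_TYPES = {
    "Retrieval",
    "Discrimination",
    "Procedural fluency",
    "Conceptual reasoning",
    "Judgment under uncertainty",
}

@dataclass
class D2Row:
    objective: str       # observable verb with a concrete object
    obj_type: str        # exactly one of the five types
    mechanic: str        # a specific loop, not a genre
    rationale_risk: str  # fit, most likely failure mode, amplifier

    def validate(self) -> list[str]:
        """Return a list of quality-cue violations (empty means the row passes)."""
        problems = []
        if self.obj_type not in VALID_TYPES:
            problems.append(f"Type must be one of the five, got {self.obj_type!r}")
        if "-" in self.obj_type or "/" in self.obj_type:
            problems.append("No hyphenated types: split into two rows")
        if self.objective.lower().startswith(("understand", "appreciate", "know")):
            problems.append("Unobservable verb: rewrite before classifying")
        return problems
```

A row like "Understand fractions" typed as "Retrieval-Discrimination" fails three checks at once, which is exactly the feedback the quality cues are meant to give.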
It will change after every playtest. Version it in your repo with a dated changelog; do not edit in place. The reviewer's job in Session 12 is to audit what you changed and why.
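Versioning can be as light as a dated copy plus a changelog line. A Python sketch under that assumption — the filenames are illustrative, and a VCS commit with a dated message does the same job:

```python
from datetime import date
from pathlib import Path
import shutil

def snapshot_d2(crosswalk: Path, archive_dir: Path, changelog: Path, reason: str) -> Path:
    """Copy the crosswalk to a dated archive file and append a changelog entry.

    The archived copies are never edited in place; D2 stays the working file.
    """
    archive_dir.mkdir(exist_ok=True)
    stamp = date.today().isoformat()
    dest = archive_dir / f"{crosswalk.stem}-{stamp}{crosswalk.suffix}"
    shutil.copyfile(crosswalk, dest)
    with changelog.open("a") as log:
        log.write(f"{stamp}  {reason}\n")
    return dest
```

The changelog line is the part the Session 12 reviewer reads: every snapshot carries a reason, so "what changed and why" is answerable without diffing tables.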
From D1 problem to D2 crosswalk
Three objectives; three mechanics; one amplifier; one explicit out-of-scope.
| Objective | Type | Mechanic | Rationale & risk |
|---|---|---|---|
| Name the three tests most likely to discriminate leading diagnoses in an unstable patient, within 60 seconds. | Discrimination | Timed "pick the discriminating test" cards with near-neighbor distractors. | Fit: forces attention to feature differences. Risk: players memorize cards; amplifier — rotate patient contexts so the discriminator changes. |
| Escalate to attending within the indicated time window given five deterioration vignettes. | Judgment under uncertainty | Branching shift simulator with delayed consequence feedback at shift end. | Fit: the real skill is choosing under partial info. Risk: branching telegraphs; mitigation — multiple valid paths with different trade-offs. |
| State a leading diagnosis with three differentials, given an ambiguous vignette. | Conceptual reasoning | Post-shift "whiteboard" reflection with peer-review loop. | Fit: articulation surfaces the model. Risk: social desirability; mitigation — anonymous peer review, rubric-scored. |
| Out of scope: Full pharmacology recall. | — | — | Covered by existing curriculum; including it dilutes the judgment core. Documented and declined. |
Notice that the fourth row exists. Every good D2 names at least one objective that was tempting but rejected — with a reason. If your crosswalk has no out-of-scope row, you have not made a design.
Drafting the crosswalk in minutes, not hours
AI Studio is faster than you at making a first pass and worse than you at getting it right. The D2 crosswalk is the perfect use case — a model can classify objectives against the five-type taxonomy and propose mechanics, but it cannot tell you whether the proposal survives your D1 constraints. That last step is why you are still the designer.
Use case · Classify your objectives against the five-type taxonomy
Gemini 2.5 · temperature 0.3
Paste your 2–4 candidate objectives. The model forces a single type per objective, which is exactly the discipline this session requires. You will still override calls you disagree with — but the disagreements are the useful part.
You are classifying learning objectives for an educational game designer.
Use ONLY these five types:
1. Retrieval
2. Discrimination
3. Procedural fluency
4. Conceptual reasoning
5. Judgment under uncertainty
For each objective I give you, output:
Objective: [verbatim]
Type: [one of the five, exactly]
Confidence: low | mid | high
Why: one sentence citing the verb + cognitive demand.
If it splits: say "SPLIT" and propose two separate objectives,
each a single type.
Rules:
- No hyphenated types.
- No "it depends." Force a call.
- If the objective is unobservable ("understand"), flag it and suggest
an observable rewrite. Do not classify it until rewritten.
Here are my objectives from D1 + pre-work for S3. Classify each.
1. Residents will escalate to attending within the indicated time window
on 5 deterioration vignettes.
2. Residents will name the three tests most likely to discriminate
leading diagnoses in an unstable patient, within 60 seconds.
3. Residents will understand differential diagnosis.
4. Residents will state a leading diagnosis and three differentials,
given an ambiguous vignette.
The model will flag #3 as unobservable and refuse to classify it until you rewrite. That refusal is worth the whole session.
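One payoff of forcing an output shape is that the reply parses mechanically. A sketch, with an invented sample reply written in the prompt's exact field format:

```python
import re

def parse_classifications(reply: str) -> list[dict]:
    """Split a fixed-shape reply into one dict per classified objective."""
    rows = []
    # Each block starts with "Objective:"; fields are one "Key: value" per line.
    for block in re.split(r"\n(?=Objective:)", reply.strip()):
        fields = dict(
            line.split(":", 1) for line in block.splitlines() if ":" in line
        )
        rows.append({k.strip(): v.strip() for k, v in fields.items()})
    return rows

# Invented sample reply, for illustration only.
sample = """\
Objective: Residents will escalate to attending within the indicated time window.
Type: Judgment under uncertainty
Confidence: high
Why: Escalating within a window is a choice under partial information.

Objective: Residents will name the three discriminating tests within 60 seconds.
Type: Discrimination
Confidence: high
Why: Naming the discriminating tests demands telling near-neighbors apart.
"""

rows = parse_classifications(sample)
```

If the model wanders off the format, the parse breaks loudly — which is a feature: it tells you the output-shape rule in the prompt needs tightening.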
Use it when
You have 2–4 draft objectives and are unsure which type each one is. The model is quick and decisive, which is what you need to break analysis paralysis.
Don't use it when
You have not written your D1 yet. A type is meaningless without a learner — AI will classify "students will understand fractions" in isolation, and you will build a crosswalk grounded in nothing.
Use case · Generate three mechanic candidates per objective
Divergent mode · temperature 0.9
Once types are locked, run a divergent-generation pass. The point is not to find the mechanic — it is to see three, so you can tell which one actually fits your D1 context. Often the model's third idea is the best one; the first two are the ones it was trained to expect.
For each objective below, propose three candidate mechanics that fit
its type. Keep in mind these D1 context constraints:
- Played solo on a hospital laptop during down-time.
- 30–45 min sessions.
- No audio.
- Must pause gracefully for pager interruptions.
Format each candidate as:
Name (one sentence verb-phrase)
Fit rationale (one sentence)
Most likely failure mode (one sentence)
Do not label one as "best." Give me three options per row, equally
defended, so I can choose.
One option is a proposal. Two is a binary. Three forces you to articulate what you are actually optimizing for. Never accept a single-option answer from the model on design questions.
Primer · Prompt engineering, 5 rules that matter
If you remember nothing else:
- 01 · System prompt does the heavy lifting. Put role, constraints, output format, and refusal rules in the system prompt, not the user message. The user message is the data.
- 02 · Force an output shape. Models that can wander, will. Name the fields and the exact format — the model will fill them and stop.
- 03 · Write the refusal rule. Tell the model what to do when the input is bad ("if the objective is unobservable, flag it and refuse"). Without a refusal rule, the model smooths over bad input.
- 04 · Temperature low for classification, high for divergence. 0.2–0.4 when you want the same call twice; 0.8–1.0 when you want three different answers.
- 05 · Treat the output as a draft, not an answer. Every AI Studio session ends with you rewriting at least one thing. If you never override, you outsourced your design. Do not do that.
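Rules 01–03 in miniature: a sketch that keeps role, output shape, and refusal rule in a reusable system prompt and passes only your objectives as the user message. The role/content message format is the common chat-API convention, not a specific AI Studio interface:

```python
SYSTEM_PROMPT = """\
You are classifying learning objectives for an educational game designer.
Use ONLY these five types: Retrieval, Discrimination, Procedural fluency,
Conceptual reasoning, Judgment under uncertainty.
For each objective, output fields: Objective, Type, Confidence, Why.
Refusal rule: if an objective is unobservable ("understand"), flag it,
suggest an observable rewrite, and do not classify it until rewritten.
"""

def build_messages(objectives: list[str]) -> list[dict]:
    """System prompt carries role, format, and refusal; user message is pure data."""
    numbered = "\n".join(f"{i}. {o}" for i, o in enumerate(objectives, 1))
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Classify each:\n{numbered}"},
    ]
```

Because the rules live in one constant, every run of the classifier gets the same discipline; only the data changes between sessions.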
Build your crosswalk
| Time | What happens | Facilitator cue |
|---|---|---|
| 00:00–20:00 | Solo — classify each of your 2–4 objectives against the five types. One row per objective. | "Type is a forced choice. If it splits, split the row." |
| 20:00–50:00 | Draft the mechanic column. One specific loop per row. | Circulate; push back on genre labels. |
| 50:00–70:00 | Pair swap — partner names the most likely failure mode for each of your mechanics; you write it down. | "Your partner is not attacking. They are pre-paying the playtest tax." |
| 70:00–90:00 | Add one amplifier per row; mark one objective out of scope. | "If you cut nothing, you designed nothing." |
The crosswalk is a hypothesis, not a commitment. It will be disproven in playtest. Your job today is to make the hypothesis specific enough to be disprovable.
Cases to stress-test your crosswalk
Before committing to a mechanic column, pressure-test your choice against six worked examples. Look at how each case justifies its game form — practice loop, systems sim, role-play, scenario sim, or spatial challenge.
Worked Examples Casebook
Six educational-game concepts annotated with loop, facilitation notes, likely risks, revision ideas, and a Three.js implementation bridge. The Game Form Selection visual is the direct companion to today's crosswalk.
Why this week · Pick the case whose learning problem is closest to yours. Copy its loop one-liner. If you cannot copy it cleanly, your objective type is probably not yet the one you think it is.
Orbit Sum Lab
A practice-loop lab built on a concrete learning objective — estimate, commit, get feedback, repeat. Map it across your crosswalk: what is the objective type, which mechanic column does it sit in, what feedback kind does it emit?
Why this week · Use it as a seventh case against the casebook. If your D2 mechanic column cannot explain what this lab is doing, your column definition is underspecified.
Before next week
Defend one mechanic choice
Pick the crosswalk row you are least sure about. In three sentences: objective type, why this mechanic, what would make you drop it.