Oracle Questions (Delphic / Divinatory)

It is tempting to treat oracles as mere superstition engines, but the historical record paints a more interesting picture. Many questions brought to the oracle at Delphi had already been through multiple rounds of reasoning, debate, and failed consensus before anyone made the expensive trip up the mountain. The oracle was not a first resort but something closer to an arbiter—consulted the way a court system is consulted, after lower agencies and investigations have been exhausted. Xenophon noted that oracles helped people figure out what they should and should not do, and surviving records suggest that consultants did not typically ask about events at specific times. Instead, they presented A/B decisions: a limited and conventional set of problems where two or more options appeared superficially equal, but the oracle might perceive reasons why one was vastly preferable to the other.

Two design features are worth noting. First, oracles had to prove they were unbiased. The testing for the state of enthusiasmos—divine possession—appears to have served, among other things, as a credentialing step: the oracle's authority depended on demonstrating that her answers were not her own. (Compare the way an oungan in Haitian Vodou is tested in ways a non-possessed human could not endure, to confirm that the lwa is genuinely present.) Second, Robert Parker argues that divination of this kind requires resistance. The famously cryptic, riddling speech of the Pythia is an example of that resistance in action. Ambiguity is not a flaw in the system; it is the system. A clear answer would make the oracle a vending machine. A cryptic answer shifts the onus of interpretation back onto the consultant, who must do further reasoning to extract a decision—meaning the oracle's role is to catalyze thought rather than replace it.

This has clear pedagogical parallels. An instructor who answers a student's question with a counter-question or a carefully ambiguous prompt ("What would happen if you tried the other approach?") is functioning as a small oracle: not withholding knowledge out of spite, but applying resistance so the student does the final mile of reasoning themselves. The quiz-design takeaway is that sometimes the most powerful question is one whose answer is itself a riddle.

EXAMPLE: A consultant asks the oracle: "Should we found the colony at Site A or Site B?" The oracle replies: "The ram drinks where the hawk circles." The consultant must now determine which site matches the oracle's imagery. THERE IS NO SINGLE "CORRECT" ANSWER; THE DESIGN POINT IS THAT THE CONSULTANT MUST DO FURTHER INTERPRETIVE WORK

QUIZ QUESTION TYPES (Online Quizzes)

The question types that follow are the bread and butter of online quiz platforms: Moodle, Canvas, Blackboard, Google Forms, and their many cousins. They may seem pedestrian after oracles and Balderdash, but they deserve the same analytical attention. Each type encodes assumptions about what "knowing" looks like, and the constraints of digital delivery—auto-grading, screen size, mouse versus touch—have shaped them in ways that classroom paper tests never had to worry about.

Multiple Choice

The workhorse. One correct answer, several distractors. The format is so ubiquitous that students sometimes treat "multiple choice" as a synonym for "quiz," but designers know that the quality of a multiple-choice item lives or dies in its distractors. A bad distractor is transparently absurd and tells you nothing about the student who avoids it. A good distractor is plausible enough that choosing it reveals a specific gap or misunderstanding—at which point the question starts to overlap with the diagnostic / misconception-probe type discussed earlier.

Online platforms add a wrinkle: instant feedback. In a paper exam, you circle your answer and move on. In a digital quiz, the system can (if configured) tell you immediately that you were wrong, and even explain why. This changes the psychology of the question. It is no longer purely an assessment instrument; it becomes a micro-lesson. Whether that is a benefit or a distraction depends on how the quiz is designed and when it is deployed.

EXAMPLE: What is the chemical symbol for gold? A) Au B) Ag C) Fe D) Go AU

Multiple Response (Select All That Apply)

Multiple response questions look like multiple choice but permit (and require) more than one correct answer. This seemingly small change has outsized consequences. In a standard multiple-choice item, a student who knows nothing can still guess correctly 25% of the time on a four-option question. In a "select all that apply" item with four options, each option can be independently selected or left unselected, so the number of possible answer combinations is 2⁴ = 16, dropping the random-guess probability to about 6%. The format punishes vague recognition and rewards confident, comprehensive knowledge.
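
A few lines of Python make the arithmetic concrete (the function name is illustrative):

```python
from fractions import Fraction

def guess_probability(n_options, multiple_response):
    """Chance that a blind guess earns full credit on an n-option item."""
    if multiple_response:
        # Every subset of the options is a distinct possible answer: 2**n combinations.
        return Fraction(1, 2 ** n_options)
    # Standard multiple choice: exactly one of n options is correct.
    return Fraction(1, n_options)

print(guess_probability(4, multiple_response=False))  # 1/4
print(guess_probability(4, multiple_response=True))   # 1/16
```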

Scoring is its own design problem. Should a student who selects three of four correct answers and no wrong ones receive partial credit? Full credit? Zero? Most LMS platforms allow partial-credit schemes, but each scheme sends a different message about what counts as knowing. In game terms, this is a question about how forgiving the system should be—a balance knob that designers should turn deliberately rather than leaving at the default.

EXAMPLE: Which of the following are programming languages? (Select all that apply) A) Python B) HTML C) Java D) Photoshop PYTHON AND JAVA (HTML IS A MARKUP LANGUAGE; PHOTOSHOP IS SOFTWARE)
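
As a sketch, the contrast between two common partial-credit policies can be written as a scoring function. The scheme names here are invented for illustration, not any LMS's terminology:

```python
def score_multiple_response(selected, correct, scheme="proportional"):
    """Score a select-all-that-apply item under different partial-credit schemes."""
    selected, correct = set(selected), set(correct)
    hits = len(selected & correct)          # correct options chosen
    false_picks = len(selected - correct)   # incorrect options chosen

    if scheme == "all_or_nothing":
        return 1.0 if selected == correct else 0.0
    if scheme == "proportional":
        # Credit per correct pick, penalty per wrong pick, floored at zero.
        return max(hits / len(correct) - false_picks / len(correct), 0.0)
    raise ValueError(f"unknown scheme: {scheme}")

# A student picks 3 of 4 correct answers and nothing wrong:
print(score_multiple_response({"A", "B", "C"}, {"A", "B", "C", "D"}, "all_or_nothing"))  # 0.0
print(score_multiple_response({"A", "B", "C"}, {"A", "B", "C", "D"}, "proportional"))    # 0.75
```

The penalty term is the balance knob mentioned above: remove it and students can shotgun every option; make it too steep and cautious students are punished for honest partial knowledge.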

Choose the Best Response

This is the subtle, slightly ruthless cousin of standard multiple choice. Here, more than one answer may be technically correct, but the student must identify the most correct, the most complete, or the most precise option. It is arguably the only way a closed-ended format can test for precision over vagueness: if the premises are true, a vague statement is also true, so the only way to distinguish a student who really knows from one who sort of knows is to offer both the precise answer and the vague-but-not-wrong answer and see which one gets chosen.

Students often find these questions infuriating, and not without reason. The line between "best" and "also acceptable" can feel arbitrary if the question is poorly written. But when done well, best-response items are a compact way to test depth. Medical board exams rely on them heavily: "All of these treatments might help, but which one should you try first?"

EXAMPLE: What is the primary function of red blood cells? A) Fighting infection B) Transporting oxygen C) Circulating nutrients throughout the body D) Maintaining homeostasis B) TRANSPORTING OXYGEN — THE OTHERS ARE ARGUABLY TRUE OF BLOOD IN GENERAL BUT NOT THE "BEST" ANSWER FOR RED BLOOD CELLS SPECIFICALLY

Matching

Matching questions present two columns and ask the student to pair items across them: terms to definitions, dates to events, authors to works. They test associative knowledge—not just "do you know what mitosis is?" but "can you distinguish mitosis from meiosis, prophase from metaphase, all at once?" The format forces comparison, which is cognitively more demanding than isolated recall.

Online implementations vary. Some platforms use dropdown menus beside each item; others support drag-and-drop. A common design pitfall is making the two columns the same length, which lets students use process of elimination on the last pair. Adding one or two extra options in the right-hand column ("not all options will be used") closes this loophole and makes the question fairer—or harder, depending on your perspective.

EXAMPLE: Match each country to its capital: 1) Japan 2) Egypt 3) Brazil. Options: A) Cairo B) Brasília C) Tokyo D) Lima 1-C, 2-A, 3-B (LIMA IS THE UNUSED EXTRA)
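
A minimal grader for this format shows why unused extras cost nothing to support: options that never appear in the key simply never match. Function and variable names here are hypothetical:

```python
def grade_matching(responses, key):
    """Grade a matching item as the fraction of correct pairs.

    Both arguments map left-column items to right-column option labels;
    extra, unused options in the right column need no special handling.
    """
    correct = sum(1 for item, answer in key.items() if responses.get(item) == answer)
    return correct / len(key)

key = {"Japan": "C", "Egypt": "A", "Brazil": "B"}  # "D" (Lima) is the unused extra
print(grade_matching({"Japan": "C", "Egypt": "A", "Brazil": "D"}, key))  # one pair wrong
```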

Fill in the Blank

Fill-in-the-blank removes the safety net of provided options entirely. There is no distractor to eliminate, no "well, it's probably not D" reasoning. The student must produce the answer from memory, which tests recall rather than recognition—a harder cognitive task by any measure.

The online version introduces a grading headache: what counts as correct? If the answer is "mitochondria," does "Mitochondria" count? "The mitochondria"? "mitocondria"? Most LMS platforms offer case-insensitive matching and wildcard patterns, but a designer who does not configure these carefully will generate a wave of student complaints and manual regrading. Some platforms now use fuzzy matching or accept multiple valid strings, but the fundamental tension between precision and forgiveness remains a design choice, not a technical one.

EXAMPLE: The organelle often called the "powerhouse of the cell" is the ________. MITOCHONDRIA (OR MITOCHONDRION)
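
The matching policies described above can be sketched as follows, using Python's standard difflib for the fuzzy fallback. The normalization rules are one illustrative policy, not any platform's default:

```python
import re
from difflib import SequenceMatcher

def accept_answer(response, accepted, fuzzy_threshold=None):
    """Check a fill-in-the-blank response against a list of accepted strings.

    Normalizes case, surrounding whitespace, and a leading article, then
    optionally falls back to fuzzy matching for near-miss spellings.
    """
    def normalize(s):
        s = s.strip().lower()
        return re.sub(r"^(the|a|an)\s+", "", s)  # drop a leading article

    resp = normalize(response)
    for answer in accepted:
        answer = normalize(answer)
        if resp == answer:
            return True
        if fuzzy_threshold and SequenceMatcher(None, resp, answer).ratio() >= fuzzy_threshold:
            return True
    return False

accepted = ["mitochondria", "mitochondrion"]
print(accept_answer("The Mitochondria", accepted))                  # True
print(accept_answer("mitocondria", accepted))                       # False: exact match only
print(accept_answer("mitocondria", accepted, fuzzy_threshold=0.9))  # True: close enough
```

Note that every choice here (which articles to strip, how high to set the threshold) is exactly the precision-versus-forgiveness knob the paragraph above describes.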

If appropriate, answers to the examples are immediately after the question or clue, in invisible ink (highlight to see)

Yes, like that

CLICK ANY QUESTION TYPE

------------------------------

>> Forward to PAGE 5

<< Back to PAGE 3

Back to the MAIN PAGE