Select from Lists (Inline Dropdown)

Select-from-lists questions embed dropdown menus directly inside a sentence or passage, so that the student reads along and makes choices at each gap without leaving the flow of the text. The effect is something like a choose-your-own-adventure book compressed into a single paragraph: "The [dropdown: legislative / executive / judicial] branch is responsible for interpreting laws."

Compared to drag-the-words, this format is tidier on small screens and more accessible to keyboard users, since dropdowns are a native HTML element that screen readers handle well. The trade-off is that the student sees all the options for each blank, which means recognition is doing more of the work than recall. A thoughtful designer will include plausible distractors in each dropdown rather than padding them with obviously wrong items. The format is especially useful for language instruction, where choosing the correct verb conjugation or preposition from a short list mirrors real-time reading decisions.

EXAMPLE: "Water freezes at [0 / 32 / 100] degrees Celsius and boils at [0 / 32 / 100] degrees Celsius." FREEZES AT 0, BOILS AT 100
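The water example can illustrate how a platform might grade such an item. The sketch below is purely illustrative: the gap/answer data layout and the `grade_select` function are assumptions for exposition, not any real platform's storage format or API.

```python
# Hypothetical storage for the two-dropdown "water" item: each gap
# lists its options and the keyed answer.
water_item = [
    {"options": ["0", "32", "100"], "answer": "0"},    # freezes at
    {"options": ["0", "32", "100"], "answer": "100"},  # boils at
]

def grade_select(item, responses):
    """Return the fraction of dropdowns answered correctly."""
    correct = sum(choice == gap["answer"]
                  for gap, choice in zip(item, responses))
    return correct / len(item)

# grade_select(water_item, ["0", "100"])  -> 1.0
# grade_select(water_item, ["0", "32"])   -> 0.5
```

Partial credit per gap, as above, is the usual design choice; an all-or-nothing item would instead require every dropdown to match before awarding any points.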

Numeric Questions

Numeric questions ask for a number and grade it against an expected value, usually with a configurable tolerance range. "What is the acceleration due to gravity on Earth's surface, in m/s²?" expects something near 9.8, and a well-configured question will accept 9.8, 9.81, and perhaps 9.807, while rejecting 10 or 98.

The tolerance range is the key design decision. Too tight, and students who round differently or use slightly different constants are penalized for trivia rather than understanding. Too loose, and the question cannot distinguish a student who calculated carefully from one who guessed in the right neighborhood. Some platforms support significant-figure grading or unit conversion (accepting both "9.8 m/s²" and "32.2 ft/s²"), which adds sophistication but also complexity. Numeric questions overlap with the Fermi / estimation type discussed earlier, but where Fermi questions celebrate rough reasoning, numeric questions typically demand a specific computation.
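The tolerance decision can be made concrete in a short sketch. The function name and parameters here are hypothetical, not any platform's actual API; the sketch assumes an absolute tolerance by default, with an optional relative mode.

```python
import math

def grade_numeric(response, expected, tol=0.0, rel=False):
    """Accept a typed answer if it parses as a number within tolerance.

    tol is an absolute tolerance by default; with rel=True it is
    treated as a relative tolerance via math.isclose.
    """
    try:
        value = float(response)
    except ValueError:
        return False  # non-numeric input is simply wrong
    if rel:
        return math.isclose(value, expected, rel_tol=tol)
    return abs(value - expected) <= tol

# grade_numeric("9.81", 9.8, tol=0.05)  -> True   (rounding difference OK)
# grade_numeric("10", 9.8, tol=0.05)    -> False  (right neighborhood, wrong value)
```

Significant-figure or unit-aware grading would layer on top of this core comparison, normalizing the response's unit before checking tolerance, which is exactly the added complexity noted above.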

EXAMPLE: A right triangle has legs of length 3 and 4. What is the length of the hypotenuse? 5

Likert Scale

Likert-scale items are not really "questions" in the quiz sense; they are opinion or self-assessment instruments dressed in quiz clothing. "On a scale from Strongly Disagree to Strongly Agree, rate the following statement: I feel confident solving quadratic equations." There is no correct answer. The data is attitudinal, not cognitive.

Why include them in a taxonomy of quiz question types? Because online quiz platforms offer them as a question type, and instructors use them—sometimes embedded alongside graded items in the same quiz, which creates a strange hybrid: a document that is simultaneously a test and a survey. The design risk is that students, accustomed to being graded, may try to give the "right" answer to a Likert item ("I should say I feel confident, because that's what the teacher wants to hear"), which defeats the purpose. When used deliberately, though, Likert items can serve as metacognitive prompts: asking students to rate their own confidence before they see the quiz results and then after can produce illuminating self-calibration data.

EXAMPLE: Rate the following statement: "I can explain the difference between weather and climate to a friend." — Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree NO CORRECT ANSWER; THE ITEM MEASURES SELF-REPORTED CONFIDENCE

Essay Questions

Essay questions are far more common on formal exams and term assessments than on quizzes, and their relative scarcity in the quiz format is itself revealing. Quizzes, historically, evolved as quick, low-stakes check-ins—often handed out as worksheets or warm-ups—and their migration online preserved and even intensified that character. The online quiz is overwhelmingly an unsupervised instrument: taken at home, at odd hours, without a proctor. Essay questions, which require manual grading and are notoriously difficult to police for plagiarism or AI-assisted writing, fit awkwardly into this context. Their infrequency on quizzes supports the broader claim that the quiz format anticipated its own unsupervised nature long before the internet made unsupervised assessment the default.

When essay questions do appear on online quizzes, they tend to be short-response prompts ("In 3–5 sentences, explain...") rather than the multi-paragraph arguments expected on exams. Some platforms flag essay items as "requires manual grading" and exclude them from the auto-generated score until an instructor reviews them, which means the student's experience of the quiz is split: they see a provisional score immediately for the auto-graded items and a pending score for the essay, which arrives days later. This temporal gap can be pedagogically useful (it keeps the conversation going) or frustrating (it feels like unfinished business), depending on how the instructor frames it.

EXAMPLE: In 3–5 sentences, describe the significance of the Treaty of Westphalia (1648) for the modern concept of state sovereignty. ANSWERS WILL VARY; GRADED AGAINST A RUBRIC RATHER THAN A SINGLE CORRECT RESPONSE

Where appropriate, the answer to each example appears immediately after the question or clue, in invisible ink (highlight to see).

Yes, like that
