Sequence questions in an online quiz context ask the student to arrange items in the correct order—chronological, procedural, hierarchical, or otherwise. Unlike the ordering and ranking questions discussed earlier (which may reward argument or interpretation), these are typically auto-graded against a single correct sequence, which means the designer must be certain the order is unambiguous.
The format is well suited to procedural knowledge: "Put these steps of the scientific method in order," or "Arrange these lines of code so the program compiles." It tests not just whether you know the individual items but whether you understand their relationship to each other. On touchscreen devices, sequence questions are often implemented as drag-and-drop reorderings, which makes them more tactile than a paper exam—though accessibility for screen readers remains a persistent design challenge.
EXAMPLE: Arrange these steps of cellular respiration in the correct order: A) Electron Transport Chain B) Glycolysis C) Krebs Cycle CORRECT ORDER: B, C, A
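Auto-grading against a single key is only a few lines of logic. A minimal sketch, with one illustrative partial-credit rule (fraction of items in the correct absolute position) that is an assumption here, not a platform standard:

```python
def grade_sequence(student_order, answer_key):
    """Score a sequence question against a single correct key.

    An exact match earns full credit; otherwise award the fraction
    of items sitting in the correct absolute position (one of several
    possible partial-credit rules -- an illustrative choice).
    """
    if student_order == answer_key:
        return 1.0
    matches = sum(s == k for s, k in zip(student_order, answer_key))
    return matches / len(answer_key)

grade_sequence(["B", "C", "A"], ["B", "C", "A"])  # 1.0
```

Note that this rule gives no credit for a pair that is merely in the right relative order; a pairwise-comparison rule would, and which one is fairer is itself a design decision.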
Hotspot questions present an image and ask the student to click on a specific region: the correct bone in a skeleton, the faulty component in a circuit diagram, the country on an unlabeled map. They are among the few online quiz formats that test spatial knowledge directly rather than translating it into words first.
The design challenge is tolerance. If the correct region is a three-pixel-wide wire on a circuit board, the question is testing mouse precision more than electrical knowledge. Most platforms allow the designer to define a clickable zone (a rectangle, circle, or polygon), and the generosity of that zone is itself a pedagogical decision. A large zone says "I just want to know you're in the right neighborhood"; a small zone says "Precision matters here." Hotspot questions also have a natural affinity for disciplines that rely on visual literacy: anatomy, geography, art history, architecture.
EXAMPLE: [An image of a human skeleton is displayed.] Click on the femur. THE STUDENT CLICKS ON THE LARGE BONE IN THE UPPER LEG / THIGH
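Under the hood, the clickable zone reduces to a point-in-region test. A sketch for circular and rectangular zones, with made-up coordinates standing in for the femur's position:

```python
def in_circle(x, y, cx, cy, radius):
    """True if the click (x, y) lands inside a circular hotspot."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2

def in_rect(x, y, left, top, width, height):
    """True if the click lands inside a rectangular hotspot
    (screen coordinates: y grows downward)."""
    return left <= x <= left + width and top <= y <= top + height

# The same click judged by a generous zone and a strict one --
# the radius is the "right neighborhood vs. precision" knob.
click = (212, 640)
in_circle(*click, cx=200, cy=650, radius=40)  # True
in_circle(*click, cx=200, cy=650, radius=10)  # False
```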
True/false is the simplest closed-ended question type and, in many ways, the most dangerous. A coin flip gives you 50% accuracy, which means a ten-item true/false quiz can be "passed" by a student who knows literally nothing, purely through chance. Paper exams have always had this problem, but online delivery adds a few wrinkles worth noting.
First, auto-grading makes it trivially easy to deploy large banks of true/false items, which tempts quiz designers to pad their assessments with low-effort questions. Second, some platforms allow "true/false/I don't know" or confidence-weighted variants, which partially address the guessing problem by penalizing wrong guesses more than abstentions. Third, the online environment makes it easy to randomize statement order and even to randomly invert statements between students (swapping "true" items for their negations), which complicates the kind of answer-sharing that plagues unsupervised quizzes. The format is at its best when used not as a standalone assessment but as a quick pulse check—three or four items at the start of a module to see whether students arrived with the right assumptions.
EXAMPLE: True or False: The Great Wall of China is visible from space with the naked eye. FALSE
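The confidence-weighted variant can be scored so that a wrong guess costs more than abstaining, which makes blind guessing a losing strategy in expectation. A sketch; the 2x penalty is an illustrative parameter, not a fixed convention:

```python
def score_true_false(answer, key, confidence=1.0, penalty=2.0):
    """Score one confidence-weighted true/false item.

    answer: True, False, or None for "I don't know" (abstention).
    A correct answer earns +confidence; a wrong one loses
    penalty * confidence.
    """
    if answer is None:
        return 0.0
    return confidence if answer == key else -penalty * confidence

# Expected value of a pure coin-flip guess at full confidence:
# 0.5 * (+1.0) + 0.5 * (-2.0) = -0.5, strictly worse than abstaining.
```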
Short-answer questions occupy the middle ground between fill-in-the-blank (one word or phrase) and essay (open-ended prose). They typically expect a sentence or two: enough to demonstrate understanding, not enough to develop an argument. In an online context, they are usually hand-graded, which means they carry a hidden cost—instructor time—that multiple-choice items do not.
Some platforms now offer AI-assisted grading for short answers, comparing student responses against a rubric or model answer, but these tools are still rough enough that most instructors treat them as a first pass rather than a final judgment. The format's strength is that it resists the "process of elimination" strategy that students use on closed-ended items. Its weakness is that grading consistency across a large class is hard to maintain, even with a rubric, because natural language is slippery and two students can say the same correct thing in very different ways.
EXAMPLE: In one or two sentences, explain why the sky appears blue. SUNLIGHT IS SCATTERED BY THE ATMOSPHERE, AND SHORTER (BLUE) WAVELENGTHS ARE SCATTERED MORE THAN LONGER ONES, SO THE SKY APPEARS BLUE TO AN OBSERVER ON THE GROUND (RAYLEIGH SCATTERING)
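The "first pass, human final judgment" workflow can be caricatured with simple keyword coverage against a rubric; real AI graders use far richer comparisons, so treat this purely as a sketch of the triage idea (the term list and threshold are invented):

```python
def triage_short_answer(response, required_terms, threshold=0.6):
    """Flag a short answer for the instructor rather than grade it.

    Counts how many rubric terms appear in the response; anything
    below the (arbitrary) coverage threshold goes to human review.
    """
    text = response.lower()
    coverage = sum(term in text for term in required_terms) / len(required_terms)
    label = "likely_correct" if coverage >= threshold else "needs_review"
    return label, coverage

triage_short_answer(
    "Blue light scatters more in the atmosphere than red light.",
    ["scatter", "atmosphere", "wavelength"],
)
# "scatter" and "atmosphere" hit, "wavelength" does not: 2/3 coverage.
```

Even this toy version shows why consistency is hard: a student who writes "Rayleigh scattering" without the word "atmosphere" is correct but would be flagged, which is exactly the slipperiness the paragraph above describes.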
Drag-and-drop questions ask the student to move items into correct positions: labels onto a diagram, categories into buckets, steps into slots. They are the kinesthetic cousins of matching questions, and on a touchscreen they can feel almost physical—closer to manipulating objects on a desk than to filling in bubbles on a Scantron sheet.
The format is popular for categorization tasks ("Drag each animal into the correct phylum") and for labeling tasks ("Drag the names of these parts onto the engine diagram"). Designers should be aware that drag-and-drop items are among the hardest to make accessible: screen readers struggle with spatial manipulation, and keyboard-only users need an alternative interaction mode. A well-designed drag-and-drop question will always have a non-visual fallback, even if the visual version is the showpiece.
EXAMPLE: Drag each label to the correct part of the plant cell diagram: "Nucleus," "Cell Wall," "Chloroplast," "Vacuole." EACH LABEL IS PLACED ON ITS CORRESPONDING ORGANELLE IN THE DIAGRAM
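Whatever the front-end interaction looks like, grading a labeling task reduces to comparing the student's placements against a key. A dict-based sketch (the slot identifiers are illustrative):

```python
def grade_placements(placed, key):
    """placed and key both map label -> slot id (e.g. a region of
    the diagram). Returns the fraction of labels placed correctly."""
    correct = sum(placed.get(label) == slot for label, slot in key.items())
    return correct / len(key)

key = {"Nucleus": "slot_1", "Cell Wall": "slot_2",
       "Chloroplast": "slot_3", "Vacuole": "slot_4"}
student = {"Nucleus": "slot_1", "Cell Wall": "slot_2",
           "Chloroplast": "slot_4", "Vacuole": "slot_3"}
grade_placements(student, key)  # 0.5 -- two labels swapped
```

This label-to-slot mapping is also a natural basis for the non-visual fallback: a keyboard user can be offered the same key as a set of dropdowns, one per label, and graded identically.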
The word-bank cloze is a close relative of both fill-in-the-blank and drag-and-drop: a passage is displayed with certain words removed, and the student drags words from a word bank into the correct gaps. The format is essentially a digital cloze test, and it inherits the cloze test's strengths (testing contextual understanding, forcing students to read surrounding text carefully) along with its limitations (highly sensitive to how the blanks are chosen).
The word bank is an important design variable. If it contains exactly as many words as there are blanks, the last answer is always free—the student gets it by elimination. Adding extra words (decoys) makes the task harder and fairer. Some implementations allow the same word to be used more than once, which introduces its own kind of ambiguity. The format works best for language-heavy disciplines: grammar exercises, foreign-language vocabulary, reading comprehension, and close analysis of primary sources where the exact word matters.
EXAMPLE: Drag the correct words into the blanks: "The _______ of Independence was signed in _______." Word bank: Declaration, Constitution, 1776, 1789, Philadelphia "THE DECLARATION OF INDEPENDENCE WAS SIGNED IN 1776." (CONSTITUTION, 1789, AND PHILADELPHIA ARE DECOYS—THOUGH PHILADELPHIA IS ARGUABLY ALSO TRUE, THE QUESTION TESTS FOR THE DATE)
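The "free last answer" problem is purely combinatorial: a bank with exactly as many words as blanks collapses into a process of elimination. Building the bank with decoys is trivial; a sketch, where the seed parameter exists only to make the shuffle reproducible:

```python
import random

def build_word_bank(answers, decoys, seed=None):
    """Combine the correct words with decoy words and shuffle, so the
    bank is no longer a one-to-one map onto the blanks and the final
    blank cannot be solved by elimination alone."""
    bank = list(answers) + list(decoys)
    random.Random(seed).shuffle(bank)
    return bank

build_word_bank(["Declaration", "1776"],
                ["Constitution", "1789", "Philadelphia"])
# A 5-word bank for 2 blanks: three decoys survive after both are filled.
```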
Where appropriate, the answer to each example appears immediately after the question or clue, set in capital letters.