Phase 2 · The Wins
Module 5 · Research, Reading & Critical Thinking (Flagship)
Flagship Lesson

The Hallucinated-Citation Trap

Lesson 5.1 · 5 screens · the most-caught failure mode in school

If you only read one lesson in this whole course, read this one.

A college sophomore submitted a 12-page political-science paper. The argument was reasonable. The writing was decent. The bibliography had nine sources, all formatted in correct Chicago style: author names, journal titles, volumes, issues, page numbers. The student got pulled into the prof's office a week later. Of the nine citations, four did not exist.

Not "couldn't be found in the library system." Not "had a typo in the volume number." Did. Not. Exist. The journals were real journals. The author names were real-sounding names that had never been published in those journals. The articles were inventions. The student had asked Claude to suggest sources for the paper, copy-pasted the bibliography Claude produced, and never once tried to open a single one of those sources. The student didn't know AI did this. Now you do.

What Module 5 is for

Module 4 taught you how to write with AI without losing your voice. Module 5 teaches you how to read and research with AI without losing your credibility. The rules of the game shift here. In writing, "did you do the thinking?" is the question. In research, the question is "is what you submitted actually true?" And AI's most confident-sounding outputs are exactly the ones most likely to be wrong. Five lessons, one habit: every claim survives scrutiny.

This whole module is Honest Work Code · Rule 2 in slow motion

Your work survives scrutiny. Module 4 put that lens on writing: does my use of AI hold up if my professor sees the whole process? Module 5 puts the same lens on research, and the test gets sharper because the failure mode is more concrete: can my professor click every citation in my bibliography and find a real source that says what I claimed it says? That's the only question this module is really answering. The prompts, checklists, and workflows all serve the same goal. By Lesson 5.5, "every citation gets clicked" should be muscle memory.

The asymmetry that runs the whole module

AI helps with research in real ways: mapping the territory, speed-reading dense PDFs, comparing arguments across sources, pressure-testing your own thinking. Those wins are real. But there's a short list of places it bites back, and the bites can cost you a class. The cost of an unverified citation isn't proportional to the time saved by the AI workflow.

Why Claude makes up citations (and why they sound real)

Claude doesn't have a "real sources" database to check against. It has the patterns it learned during training, and "produce real-sounding citations" is a pattern it's very good at, sometimes pulling a real paper, sometimes inventing one that fits the shape of a real paper in the field. From the outside, you can't tell which is which. The format is correct. The journal exists. The page numbers are in a valid range. The summaries Claude gives of the "paper" are plausible. If you ask, "are you sure that source exists?" Claude may say "Yes, this is a well-known paper in the field." It isn't. Claude doesn't know that, and asking that way doesn't help.

What makes hallucinated citations especially dangerous

  • They look indistinguishable from real ones. Format, capitalization, journal, page-number range, all clean. Nothing on the surface says "fake."
  • The author names are real-sounding. Often Claude combines a real first name with a real last name from someone who actually publishes in the field, just not the paper Claude is claiming.
  • Claude will defend them when challenged. The "are you sure?" reflex doesn't catch these.
  • They show up alongside real ones. A bibliography Claude produces may have 7 real sources and 2 fake ones. Even if you spot-check one and find a real paper, the next one might be invented.
The four flavors of hallucinated citations

📕 1 · Pure invention

Author, journal, year, pages: all fake. The whole record is a fabrication. Most common when you ask "find me 5 sources on [niche topic]" and Claude obliges by inventing them. Easiest to catch: the article won't exist in any database.

📗 2 · Real author, fake paper

The author is a real scholar in the field who just didn't write this article. Claude attributed a plausible-but-fictional paper to a real name. This is the hardest flavor to catch on a quick search: the author's other work shows up, so you might assume the one you're looking for exists too.

📘 3 · Real paper, wrong details

The paper exists, but Claude has the year wrong, the volume wrong, page numbers wrong, or attributes it to the wrong author. Looks legit at first glance. A professor running a citation check will hit "this article isn't in volume 47" and pull on the thread.

📙 4 · Real paper, fake quote

You ask Claude to summarize a real paper or pull a quote, and Claude invents a quote that "would fit." The article exists; the words are fiction. Most dangerous flavor for upper-level work. If a prof Ctrl-Fs the quote in the original, it isn't there.

The unifying pattern across all four

In every flavor, Claude is producing the shape of an answer that would be true if Claude had actually read the source. Sometimes the shape is filled with truth. Sometimes it's filled with plausible fiction. Claude can't tell you which is which because Claude doesn't know which is which. The only reliable way to know is to verify, every time, before it goes in your paper.

"But I asked it to only use real sources"

You can ask Claude to only cite real sources. You should. It still won't be reliable. Telling Claude "only real sources" reduces hallucinations somewhat, doesn't eliminate them, and gives you a false sense of security that's worse than no instruction at all. The fix isn't a better prompt. The fix is verification. Always.

The one rule that ends the problem forever.

The Citation Rule

Every citation in your bibliography is one you've personally opened in a real database, and every quote you attribute is one you've personally seen in the source.

If you can't open it, it doesn't go in your paper. If you didn't see the quote with your own eyes in the actual article, the quote doesn't go in your paper. AI can help you find sources, summarize sources you've already opened, and format sources correctly. AI cannot supply the verification step. You do that. Always.

Here's the prompt that bakes the rule into every research chat. Paste it at the top of any session where Claude might be tempted to invent. It cuts Claude off at titles + authors (the patterns least likely to be hallucinated) and forces the patterned-text danger zone (volumes, issues, page numbers, DOIs) to get filled in by you, against a real database, where invention isn't possible.

The "don't invent citations" framing prompt: paste at the top of any research-help chat
I'm doing research on [topic] for [class / paper type]. Some ground rules for this whole conversation:

1. When I ask you to suggest sources, give me TITLES and AUTHORS only. Never full citations with volumes, page numbers, or DOIs. I'll find the actual citation details in my library database myself.
2. If you're not at least 90% sure a specific paper exists with the title and author you're suggesting, label it [POSSIBLY INVENTED] and tell me what you'd search for instead to find a real source on this angle.
3. If I paste a paper for you to summarize, only summarize what's actually in the paper. Don't fill in gaps with what you'd expect a paper on this topic to say.
4. If I ask for a quote, only give me one if you can also tell me what page or section it's on, and warn me to verify it in the source before using it.
5. If you don't know, say "I don't know. Here's how I'd find out." Don't guess.

Keep these rules active for the whole chat. Confirm you got them, then I'll start.

What this looks like in practice

  • Claude suggests a source. You write down the title and authors in your notes (not in your bibliography).
  • You open Google Scholar / your library / JSTOR. Search the title or the author. Find the actual paper. Open the PDF.
  • If it doesn't exist: drop it from your list. Don't try to "find a similar one" without the same verification step. (Lesson 5.2 is the full workflow.)
  • If it does exist: skim the abstract. Confirm it's actually about what Claude said it's about. Now it goes in your bibliography.
  • If you're using a quote: Ctrl-F the quote in the actual PDF. Confirm the page number Claude gave you. If the quote isn't there, it's not in your paper.
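
A side note for readers comfortable with a little code: the Ctrl-F step sometimes misses real quotes because PDF text extraction breaks lines mid-sentence or swaps curly quotes for straight ones. A minimal Python sketch of a whitespace-tolerant search (the function names here are invented for illustration; getting the PDF's text out, e.g. with a library such as pypdf, is left to you):

```python
import re

def normalize(s: str) -> str:
    # Lowercase, unify curly quotes and dashes, and collapse whitespace runs,
    # since PDF extraction often splits a quote across line breaks.
    s = s.lower()
    s = s.replace("\u201c", '"').replace("\u201d", '"')
    s = s.replace("\u2018", "'").replace("\u2019", "'")
    s = s.replace("\u2014", "-").replace("\u2013", "-")
    return re.sub(r"\s+", " ", s).strip()

def quote_in_text(quote: str, page_text: str) -> bool:
    """True if the quote appears in the extracted text after normalization."""
    return normalize(quote) in normalize(page_text)
```

A match here means the words are really in the source; no match means go back and read the page yourself before trusting the quote.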

For some research tasks, the right move is to close the chat.

The Citation Rule covers AI in the loop done right. But there's a separate question: when should AI not be in the loop at all? Some assignments are designed to test whether you can do the work yourself: not "produce clean research synthesis on a deadline" but "sit alone with a difficult primary source until it makes sense to you." Those are different skills, and AI breaks the second one even when used carefully.

1 · Primary sources where the assignment IS the reading

"Read this play / poem / speech / treaty / letter and write a 5-page close reading." Any AI summary is already an interpretation. Read Claude's summary first and you're now writing about Claude's reading, not yours. Read the primary source first. Form a take. Then bring AI in for editing.

2 · Reading-response and weekly journals

The whole point is that you're forced to encounter the text under deadline pressure and react in real time. AI's first-pass response is recognizably not yours, and seminar discussion will out you fast. Write it cold, even if it's bad. Improve through the discussion.

3 · Anything where summary distorts the source

Poems. Short stories. Aphorisms. Legal opinions. Mathematical proofs. The work of reading them is in the surfaces, the rhythm, the specific word choice. A "summary of Hamlet's soliloquy" isn't a faster version of reading the soliloquy; it's a different thing.

4 · Closed-book components

"On the in-class essay you'll analyze a passage you haven't seen before with no aids." If you practiced the prep with AI in the loop, you didn't practice the actual skill, which is reading without help. Do at least some prep closed-book. (Module 3.5's rule applies.)

5 · Anything the prof has explicitly disallowed AI on

Syllabus says "no AI on this assignment." The line is the line. "I used Claude to outline" is still using Claude. The line means none, not "the obvious bits."

The hardest case: papers that include both

A 15-page research paper might have a close-reading section (your encounter with a primary source) and a synthesis section (where you put that close reading in conversation with secondary sources). Do them in order: write the close reading cold, then bring Claude in for the synthesis stage (Lesson 5.4), then do a final voice-edit pass (Module 4.5 / 4.6).

Match the tool to the goal

Honest Work Code · Rule 1: learn with it, not instead of it. Some assignments aren't synthesis tests; they're tests of your encounter with a text, and AI is in the way of the assignment by definition. Use Module 5's toolkit on the right assignments. Close the chat on the others.

Try this now: run a hallucination test on your hardest topic.

You're not doing this to test Claude. You're doing it so the next time Claude does it on your real paper, you'll recognize it.

The "make Claude invent a citation" exercise

  1. Pick a niche topic from a class you're actually taking. The narrower the better. ("The economics of municipal water privatization in mid-sized US cities, 2010–2020." Not "water policy.")
  2. Open a fresh Claude chat. Don't paste the framing prompt above. We're testing default behavior, not the safe version.
  3. Ask: "Give me 5 academic sources on [your niche topic], in full Chicago citation format, with page numbers."
  4. Take the answer. Pick any TWO citations from the list. Open Google Scholar. Search the exact title in quotes. If it doesn't show up, search the author name + a couple of distinctive words from the title.
  5. See what you find.

Some citations will probably be real. Some, depending on how niche the topic is, almost certainly won't be.
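
If you'd rather script step 4 than search by hand, one option is Crossref's public REST API (api.crossref.org), which indexes real DOI records. A hedged Python sketch, assuming the documented `works?query.title=` endpoint and that each returned item carries its title as a list of strings; the function names and the 0.9 threshold are choices made here, not an official tool:

```python
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher

def best_title_match(candidate: str, returned_titles: list) -> float:
    """Highest similarity (0..1) between the candidate title and any returned title."""
    c = candidate.lower().strip()
    return max(
        (SequenceMatcher(None, c, t.lower().strip()).ratio() for t in returned_titles),
        default=0.0,
    )

def crossref_titles(candidate: str, rows: int = 5) -> list:
    """Query Crossref's public works API for titles similar to the candidate."""
    url = "https://api.crossref.org/works?rows=%d&query.title=%s" % (
        rows, urllib.parse.quote(candidate))
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    # Crossref returns each title as a list of strings; keep the first of each.
    return [item["title"][0] for item in items if item.get("title")]

# Usage sketch: score = best_title_match(title, crossref_titles(title))
# A score near 1.0 means a record with that title exists; a low score means
# treat the citation as unverified until you find it in a database yourself.
```

A high score is still only step one: open the paper and check it says what Claude claims before it goes anywhere near your bibliography.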

Module 5: what each lesson does

  • 5.1: This lesson. The Citation Trap, the Citation Rule, and when to close the chat.
  • 5.2: Source-Checking Workflow. The 5-Minute Verification Checklist. Run it on every AI-surfaced source before it counts.
  • 5.3: Speed-Reading Dense PDFs. The "give me the 10% I need to read carefully" workflow.
  • 5.4: Synthesizing Across Sources. Comparing arguments, finding gaps, building the synthesis matrix, including how this scales up to long-form research papers and theses.
  • 5.5: The Sparring Partner. Argue the opposite. Skeptical-professor prompts. Module close.

One last thing: annotated bibliographies

If your class assigns annotated bibliographies, there's a clean honest way to do them with AI in the loop (and several wrong ways). The full workflow lives in the Annotated Bibs reference card in Bonuses, pulled out of the core module so it doesn't crowd the lessons that apply to everyone. Use it when you need it.

Up next: the workflow that turns "verify every citation" into a habit.

"Verify every citation" sounds great on paper; in practice you need a workflow that's fast enough to actually do, every time, on every source. Lesson 5.2 is the Verification Checklist: the exact 5-step workflow (Google Scholar → library catalog → DOI lookup → abstract check → quote-check) that takes a Claude-suggested source and either confirms it's safe to cite or rules it out.

Continue to 5.2 → The Source-Checking Workflow