Local AI: Should You Care? (Probably Not.)
"The cloud" vs. "local" in plain English.
Every chat you've had with Claude in this course has been cloud AI, even if nobody called it that. You typed something into claude.ai. Your message went over the internet to Anthropic's servers. The model lives there. It's just somebody else's computer doing the thinking, with your laptop acting as a screen and a keyboard.
A local model is AI that runs entirely on your computer. Your laptop does the thinking. Nothing leaves your machine. No server sees what you typed. The model lives on your hard drive, takes up a few gigabytes, and runs on your laptop's chip the same way GarageBand or Photoshop does.
The short answer
For almost all student work, cloud Claude with Lesson 1.4's privacy settings dialed in is the right answer. Local AI is a real tool, but it's a side tool, not a replacement, and it's only worth installing for a specific kind of task (usually IRB-protected research, a specific personal document, or a genuinely offline situation) where "the data physically does not leave my laptop" is something you actually need. Most students will stay cloud-only.
What you give up by going local.
Six real tradeoffs. The privacy upside is real (physics, not policy). These are the costs.
It's slower
Cloud Claude responds in 1–3 seconds. A local model on a 3-year-old laptop might take 10–60 seconds, sometimes more. On a powerful new Mac it can get close to cloud speed. On the average student laptop, plan for "I can do other things while it thinks," not "feels like a chat."
It's a smaller brain
Local models students can actually run on a laptop are real but junior-varsity compared to cloud Claude. More mistakes, more clichéd prose, more hallucinations, missed subtleties. Fine for "summarize this paragraph, paraphrase this idea, organize these notes." Not where you want to draft a college essay or work through a hard physics problem.
No Coach, no Projects, no scheduled tasks, no connectors
Everything you build in the Coach module and everything the Automations module covers is cloud-Claude infrastructure. A local model is just a chat window: no context files, no custom instructions, no folder-of-chats organization, no Cowork, no calendar / email connectors, no voice mode.
One device only
The model lives on the laptop you installed it on. Not your phone. Not your tablet. Not your roommate's iPad.
Eats disk space and battery
10–15 GB minimum for Ollama plus a couple of models. Running the model spins up your laptop's GPU and chews battery: fine plugged in, painful on a long study session at the library cafe.
Maintenance is on you
Cloud Claude auto-updates. A local install requires you to update the runtime, swap models, and troubleshoot when something breaks. Not hard for a curious student, but it's a thing you have to do, not a thing that happens for you.
How to think about cloud + local.
Local-only (don't do this). You replace cloud Claude entirely. You give up Projects, Coach, voice, scheduled tasks, the whole course. Almost nobody should do this.
Cloud-plus-local (the only setup worth bothering with). You keep cloud Claude as your daily driver and install a local model as a second tool for one specific kind of task: the one you don't want on a server even with privacy settings dialed in. Whenever this card says "go local," it means "add local as a side tool," not "switch."
The short list of students who'd actually benefit.
Four real categories. If you fit one of them clearly, the rest of this card is for you.
1 · Sensitive research data
Thesis or capstone work involving interview transcripts, ethnographic field notes, IRB-approved subject data, or any human-subjects research. You have a real reason, both ethically and often by IRB rule, to keep that material off remote servers. A local model is a legitimate tool here. Also check what your specific IRB says about AI tools; many universities are still writing those rules.
2 · A specific kind of personal-document task
One specific kind of document you keep on your laptop and don't love putting in any cloud: a long-form journal, family medical history, an estranged-parent situation you process by writing about it, immigration documents you're sorting for a sibling. If it's just "I want privacy in general," that's what cloud privacy settings are for. Local is for "this specific document I'd otherwise not run through AI at all."
3 · Internet that genuinely doesn't work
Study abroad in a region with metered data or unreliable connections. A field-station semester. Long international flights with a paper to draft. Rural-area college where the dorm wifi is bad. Local AI works offline. If your "bad wifi" is "the campus center is sometimes slow," cloud is still fine.
4 · Curious tinkerers
You're CS-curious, you like installing things, and a 30-minute install project sounds fun. Once you've run a model on your laptop, what an LLM is stops being abstract. Useful résumé / portfolio data point too. Pure curiosity is a legitimate reason; you don't owe yourself a more practical justification.
The longer list of students who should NOT bother.
Four common reasons that sound like they need local but actually don't. If you're in any of these, your honest move is to confirm cloud-only and move on.
"I don't trust big tech with my data"
A real concern, but not the right tool for it. If the worry is platform power and policy in general, the answer is privacy settings + reading the actual policy + not pasting genuinely sensitive stuff, not running a smaller AI. You're still on Google, your phone, the LMS. A local install is a lot of work that removes only one narrow kind of friction.
"I want AI for essays but not on a server"
Three things wrong with this one. First, the local brain is much weaker, so drafts get worse, not better. Second, your school's academic-integrity policy applies to AI of any kind; privacy doesn't change the rules. Third, cloud privacy settings already cover this for almost all students. Local is for material you genuinely cannot put online, not a workaround for AI detection or policy.
"I'd save money by going local"
You wouldn't. Cloud Claude has a free tier that handles most student use cases. The capstone needs Pro for one specific month. Your time learning to install and maintain a local stack costs more than that. The "free local AI" framing is technically correct and practically misleading.
"I just want to feel more in control"
An honest reason, and not a wrong one. But the cheaper version is to spend 10 minutes in Lesson 1.4's Privacy & Data Settings, set things on purpose, and move on. That gets you most of the "I made a deliberate choice about my data" feeling for none of the install effort.
The decision tree: three branches, one call.
Pick whichever of the three branches matches your situation.
Branch A · Stay cloud-only (most students)
Pick this if: you don't have a specific task you've been avoiding because of where the data would go. Your sensitive material is normal-student-life sensitive (not IRB-protected, not on metered international internet). Your wifi works.
Action item: spend 10 minutes in Lesson 1.4's Privacy & Data Settings, set the toggles on purpose, and continue with the course as planned. The next time someone tells you "you should run AI locally," you can explain, accurately, why you chose not to.
Branch B · Cloud + local as a side tool (the small slice)
Pick this if: you fit one of the four categories above: IRB-protected research data, a specific personal-document task, study-abroad / shaky-internet, or curious tinkerer. You have laptop hardware that passes the reality check below.
Action item: install Ollama via the official docs at ollama.com. Keep cloud Claude as your daily driver. Use local only for the specific task that drove you here. The mental model: cloud for school, local for the one thing you don't put on a server.
Branch C · Upgrade to Pro
Pick this if: the friction you've been feeling isn't actually about local-vs-cloud at all: it's about hitting Claude's free-tier limits during heavy weeks (midterms, application season, capstone month). The fix isn't a different kind of AI; it's a higher tier of the same one.
Action item: revisit Lesson 1.1's "earn the upgrade by needing it" framing. If you keep hitting the cap mid-essay or mid-study-session, Pro is a small bet for a heavy month. You can downgrade any time.
You don't have to pick branch A vs B forever
Most students will stay in Branch A their whole degree. Some pop into Branch B for one semester (the thesis year, the study-abroad term, the curiosity weekend) and then quietly stop using local AI when the use case ends. Uninstalling Ollama is two clicks. Same for Branch C: you can pick up and drop Pro month by month around your actual workload.
If you picked Branch B: the hardware reality check.
Before you commit to installing, do this quick sanity check. Local AI is one of the few things on a student laptop that actually pushes the hardware. The wrong machine will run a model so slowly it's basically unusable. The right machine will surprise you with how snappy it feels.
Apple Silicon Mac (M1, M2, M3, M4)
The dream scenario. Apple's chips are unusually good at running AI models: the same chip handles graphics, the OS, and AI workloads, and unified memory loads the model fast. Even an 8GB M1 MacBook Air will run a small local model fine. If you have any Apple Silicon Mac from the last 4 years, you're good.
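Not sure which chip you have? Apple menu → About This Mac will tell you, or paste this standard one-liner into Terminal:

    uname -m        # prints "arm64" on Apple Silicon, "x86_64" on Intel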
Intel Mac (any year)
It will technically run, but slowly: Intel Macs don't have the AI-specific hardware Apple Silicon does. Doable for occasional use, frustrating for daily use. Honest call: skip the install and stay cloud.
Windows with a gaming GPU
If your laptop has a discrete NVIDIA GeForce GPU (RTX series especially), local AI runs great. Gaming laptops are often the best non-Mac option. Check that you have at least 8GB of dedicated VRAM for a comfortable experience.
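Not sure how much VRAM you have? Assuming the NVIDIA driver is installed (it is on virtually every gaming laptop), its bundled nvidia-smi tool will tell you from PowerShell:

    nvidia-smi      # the Memory-Usage column shows total VRAM, e.g. "0MiB / 8192MiB" = 8GB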
Regular Windows laptop (no gaming GPU)
Will run, slowly, on the smallest models. The CPU does all the work, which is what it sounds like. Doable for tinkering; frustrating for daily use. Honest call: stay cloud unless you're specifically curious to see what running an LLM on a CPU feels like.
Chromebook
Chromebooks largely can't run Ollama natively: they're built around the cloud-AI model. Workarounds exist (Linux dev mode, browser-based tools) but they're for advanced users. Honest call for a Chromebook: stay cloud.
Disk space (any machine)
You need at least 10–15 GB free to install Ollama plus one or two models. If your laptop's disk is already full of class downloads and unused apps, clean up before you install. Running out of space mid-download is the most common newbie failure mode.
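A quick way to check before you download anything, using tools that ship with each OS:

    df -h /          # Mac, in Terminal: the "Avail" column is your free space
    Get-PSDrive C    # Windows, in PowerShell: the "Free (GB)" column is your free space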
Where the install lives
We punt the actual install steps to ollama.com's official docs because their commands change with each version and ours wouldn't stay current. It's a standard installer experience on Mac and Windows. After install, Ollama gives you one line to copy-paste into Terminal (Mac) or PowerShell (Windows) to download your first model, something like ollama run llama3. The terminal-only chat experience gets old fast; consider a free GUI front-end like Open WebUI or LM Studio once you're up and running.
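For flavor, here's roughly what a first session looks like. Treat it as a sketch: exact commands and model names drift between Ollama releases, and ollama.com's docs are the source of truth.

    ollama run llama3     # first run downloads the model (several GB), then opens a chat
    ollama list           # show which models you've downloaded and how much disk they use
    ollama rm llama3      # delete a model and get its disk space back

Type /bye to leave the chat. That's genuinely the whole interface, which is why the GUI front-ends above exist.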
The Honest Work Code · all three rules apply, even here
Rule 1: you stay the author. Where the AI runs doesn't change who's doing the thinking. You are. Cloud or local, the words on your transcript are the words you'd defend in a room.
Rule 2: your work survives scrutiny. If asked "what AI did you use, and where did the data live," you can now answer cleanly.
Rule 3: respect the rules of the room. Course AI policies and IRB rules apply identically to local and cloud. The privacy of where the AI runs is never a permission slip for what the AI is doing.
Pick a branch. Move on.
The point of this card wasn't to convert you to a different kind of AI. The point was that you now understand the architecture of the tool you've been using, and you've made a deliberate choice about it instead of defaulting into one. That's a real piece of digital literacy. Most students will read this and stay cloud-only. That's the right answer.