The rules haven't caught up. Most schools and workplaces are scrambling to write AI policies as AI capabilities evolve faster than any policy can track. Which means right now, in the gap between "AI exists" and "we have clear rules about AI", you're making judgement calls every day.
This lesson doesn't tell you what to do. It gives you the thinking tools to work it out for yourself — and tests them against eight real situations where the answer isn't obvious.
🎯 How this works
For each scenario, you vote: Yes (acceptable), No (not acceptable), or It depends. After you vote, you'll see how other learners responded and get the nuanced analysis. There are no easy right answers here — but there are better and worse ways of thinking about each one.
Eight dilemmas. Vote honestly.
University essay
A student uses ChatGPT to outline their essay, then writes every sentence themselves, substantially revising the structure and adding their own argument. The essay is genuinely their own thinking and writing — the AI only helped them organise their ideas before they started. The assignment says nothing about AI use.
Is this acceptable?
Yes 62% · No 8% · Depends 30%
The analysis
Most people say yes or it depends — and that's the right instinct. Using AI to brainstorm, outline, or check logic is fundamentally similar to talking through your argument with a friend or tutor. The work is yours. The thinking is yours. The argument is yours. This is not plagiarism in any meaningful sense.
The "it depends" voters are right that context matters: if the institution explicitly bans AI assistance of any kind, using it even this way violates policy. The student should check. If the policy is silent, using AI as a thinking tool — not a writing tool — is defensible. The principle: AI that supports your thinking is different from AI that replaces your thinking.
Rule: Use AI as a thinking partner, not a thinking replacement. Outline with it, argue against it, use it to stress-test your logic — then write your own work.
Workplace report
A manager uses Claude to draft a performance review for an employee. The manager reviews and edits the draft, changing roughly 30% of it. The final review captures the manager's genuine assessment. The employee never knows AI was used in the drafting.
Is this acceptable?
Yes 55% · No 12% · Depends 33%
The analysis
This is generally acceptable, with one important caveat. Managers have always used templates, HR frameworks, and colleagues' input when writing reviews. AI is a more capable version of that. The assessment is the manager's; the drafting assistance is a tool.
The caveat: if the AI produced language the manager didn't fully scrutinise — for example, a vague criticism or a generic praise phrase that doesn't reflect reality — that's a problem. Not because AI was used, but because the manager failed to own the output. The disclosure question is interesting: most people wouldn't expect a manager to note "I used Word's spellchecker on this." AI drafting assistance is increasingly in that category. But if the company has a disclosure policy, it applies regardless.
Rule: Own the output completely. If you can't defend every sentence in the review as genuinely reflecting your assessment, that's where the problem lies — not the tool.
Academic submission
A student submits an essay that is 80% written by ChatGPT. They edited it to make it sound more like them, added a few personal examples, and genuinely understand the argument. The institution's policy says "AI may be used as a tool but submitted work must be the student's own."
Is this acceptable?
Yes 9% · No 74% · Depends 17%
The analysis
This is not acceptable, and not a close call. The policy explicitly says submitted work must be the student's own — and 80% AI-generated work is not their own by any reasonable interpretation. The student may have edited and understood it, but the core intellectual labour (developing the argument, structuring it, finding the language) was done by AI.
The "genuinely understands the argument" point is worth addressing directly: understanding the argument you submitted isn't the same as doing the intellectual work of producing it. Assessment is partly about demonstrating that you can perform the thinking, not just that you can recognise good thinking when you see it. Using AI at this level when the policy prohibits it is academic misconduct.
Rule: If the point of the assignment is to demonstrate that you can do the thinking, submitting AI-generated thinking defeats the purpose — regardless of whether you understand it.
Job application
A job applicant uses AI to write their cover letter. The cover letter is truthful, reflects their genuine qualifications, and represents how they would speak about themselves in an interview. They don't disclose that AI helped write it.
Is this acceptable?
Yes 68% · No 11% · Depends 21%
The analysis
Generally acceptable. People have always used professional CV writers, career coaches, and templates to write cover letters. The purpose of a cover letter is to communicate qualifications and fit — not to demonstrate writing ability (unless the role specifically requires writing). Using AI to produce polished, accurate, truthful content is a reasonable extension of tools people have always used.
There is a genuine concern worth naming: if the cover letter dramatically overstates your communication ability and you can't back that up in writing-based parts of the role, that's a problem. But that's about honesty, not about the AI. The principle that matters: the content must be truthful. The tool that produced it is largely beside the point.
Rule: In contexts where writing ability isn't what's being assessed, AI writing assistance is generally comparable to other professional editing tools. The content must be truthful regardless.
Medical education
A medical student uses AI to answer practice case questions during exam preparation. They review the AI's answers carefully and use them to learn. They don't use AI during the actual exam, which is proctored.
Is this acceptable?
Yes 71% · No 7% · Depends 22%
The analysis
Acceptable, and arguably excellent practice. Using AI as a study partner for practice questions is similar to using textbooks, question banks, or a study group. The student is doing the learning — the AI is a richer, more interactive version of a practice answer book.
There is one important caveat for medical education specifically: AI can be wrong about clinical facts, and in medical contexts, learning a wrong answer has downstream patient safety implications. This student should cross-reference AI answers against authoritative clinical sources, not treat AI as the gold standard. Used critically, AI study assistance in medicine is valuable. Used uncritically, it could build dangerous misconceptions. The habit of critical verification is the skill being practised — and that's a good habit to build.
Rule: Using AI as a study tool is legitimate when you're doing the learning. In high-stakes knowledge domains, always verify AI answers against authoritative sources.
Journalism
A journalist uses AI to draft a news article from notes of interviews they conducted. They verify all facts, make substantial edits, and publish under their byline with no AI disclosure. The outlet has no explicit AI policy.
Is this acceptable?
Yes 31% · No 24% · Depends 45%
The analysis
This is genuinely contested and context-dependent — hence the very split response. Journalism sits at the intersection of transparency and craft. The case for "acceptable": the journalist did all the actual reporting (interviews, fact-checking, editing). The AI was a drafting tool, like any other writing software. The case for "problematic": journalism has an implicit contract with readers about authorship. Many readers assume the words in a bylined article were chosen by the journalist. AI disclosure is increasingly seen as a professional norm, even without a formal policy.
The pragmatic answer: the field is moving toward disclosure as standard practice. Getting ahead of that norm, rather than lagging behind it, is both professionally safer and more honest. A note such as "Drafted with AI assistance" has very little downside and maintains reader trust.
Rule: In trust-based professions (journalism, medicine, law), disclose AI assistance proactively rather than reactively. The norm is moving in that direction regardless.
Creative writing class
A student submits a short story. They used AI to generate an opening paragraph they were stuck on, then wrote the rest themselves — about 600 words of original writing following 80 AI-generated words. The story is genuinely theirs in voice and direction. No AI policy exists for the class.
Is this acceptable?
Yes 44% · No 22% · Depends 34%
The analysis
This is one of the most genuinely contested dilemmas in creative fields. Professional writers regularly use first-line prompts, writing exercises, and collaborative input to break through blocks. Plenty of published novels started with a line a friend suggested. The spirit of a creative writing class is developing voice and the ability to tell a story — the student did both.
The counter-argument: the opening line is often where voice is established. If a student can't start a story themselves, the AI opening is doing more than just breaking a block — it's making a creative choice for them. In a class specifically about developing creative skill, that might matter. The honest answer is: transparency with the instructor resolves this entirely. "I was blocked and used an AI-generated opening as a starting point" is a perfectly legitimate conversation to have. The secrecy creates more risk than the use.
Rule: In creative work, transparency with collaborators (including instructors) about AI assistance is usually the move that converts a grey area into a clear one.
Legal practice
A lawyer uses AI to research case law for a client matter. The AI produces a list of relevant cases. The lawyer doesn't independently verify each case, trusting the AI's output, and cites them in a legal brief to the court.
Is this acceptable?
Yes 3% · No 89% · Depends 8%
The analysis
Not acceptable, and this has already played out in court with severe consequences. AI language models hallucinate cases that don't exist: they produce citations that look completely real, with correct formatting, plausible party names, and plausible dates, yet are entirely fabricated. In 2023, multiple lawyers were sanctioned or fined for submitting AI-generated briefs that cited fictional cases they had failed to verify.
This isn't about whether AI should be used in legal research — it can be a valuable starting point. It's about an absolute professional duty: you must verify every case you cite before submitting it to a court. Using AI as a research starting point is reasonable. Treating AI output as verified legal fact is professional negligence and potentially contempt of court. The rule is simple: never cite a case you haven't personally confirmed exists and says what you claim.
Rule: In high-stakes professional contexts, AI output is a starting point, never an endpoint. Verification is non-negotiable — especially when failure has consequences for clients or third parties.
⚡ The three questions that cut through most dilemmas
1. Is the thinking mine? — Did I do the intellectual work, or did AI do it for me?
2. Am I being honest? — Is the work truthful, and would the person receiving it expect it to have been produced this way?
3. What are the stakes of being wrong? — If the AI got something wrong and I didn't catch it, who gets hurt? The higher the stakes, the higher the verification burden.
Key takeaway
The ethical question isn't usually "did I use AI?" It's "did I own the output, verify what needed verifying, and tell the truth?"