Most people's approach to AI ethics is reactive: they figure out how they feel when a situation forces them to. That works until you're under pressure, short on time, or facing something genuinely ambiguous. That's when defaults matter, and defaults are just decisions you made in advance.
This lesson is about making those decisions consciously, before you need them. You'll work through a set of questions that surface what you actually believe, then use Claude to help you shape those beliefs into a clear, personal document you'll keep.
What you'll end up with is a written set of personal principles for how you'll use AI: not someone else's guidelines adapted for you, but the actual positions you've arrived at after thinking seriously about the questions in this track. Something you can return to. Something that's yours.
Work through these before you start building your ethics code. They're designed to surface what you actually think — not what sounds good.
Where is the line between AI supporting your thinking and AI replacing your thinking? It's probably in a different place for different tasks. Where is it for the things you do every day?
In what situations do you believe disclosing AI use is required? In what situations is it not? What principle underlies the difference? "I'd want to know if someone did that to me" is a valid test.
What categories of information will you never put into an AI system? Other people's personal details? Medical information? Confidential client data? Where do you draw that line, and why?
If you use an AI tool to make decisions about people — recommendations, assessments, selections — what's your responsibility to check for bias? What would that check actually look like in practice?
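If you're not sure what that check could look like in concrete terms, here's one minimal sketch in Python. It assumes you can export your AI-assisted decisions as (group, outcome) pairs; the function names are illustrative, and the 0.8 threshold is loosely borrowed from the "four-fifths" heuristic used in US employment law, a starting point rather than a standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, where selected is a bool."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {group: chosen[group] / totals[group] for group in totals}

def flag_disparities(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the highest rate."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate for g, rate in rates.items() if top and rate / top < threshold}

# Example: decisions from an AI-assisted screening step
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(selection_rates(decisions))   # {'group_a': 0.666..., 'group_b': 0.25}
print(flag_disparities(decisions))  # {'group_b': 0.25} -- worth a closer look
```

A check like this won't prove anything is fair, but it turns "I should check for bias" into something you can actually run. The real work is deciding which groups and which outcomes to compare.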
If AI-generated content you share is wrong and causes harm, what's your responsibility? "I didn't write it, the AI did" is not a satisfying answer to most people. Where does your responsibility begin and end?
Answer each question honestly. Short answers are fine; Claude will help shape them into polished principles. Then generate your code, and refine it until it says exactly what you mean.
A written ethics code only matters if you return to it. Three suggestions: (1) Save it somewhere you'll see it — the notes app you use most, pinned in your browser, the first page of your work journal. (2) When you face an ambiguous AI situation, read it before deciding. (3) Review it in six months. Your thinking will evolve as AI evolves — the document should too.
The right ethics code for you today might not be the right one in a year. AI capabilities are changing rapidly. New uses are emerging. New harms are being discovered. The value of having written principles isn't that they're permanent — it's that they make your thinking explicit, so when you change your mind, you know you've changed your mind and why.
You've finished AI Ethics & Safety. You can now spot AI misinformation, you know what data your tools collect, you understand where bias comes from, you can reason through genuinely hard dilemmas — and you have your own written principles to guide you when things get complicated.
That's not nothing. Most people using AI daily have thought about none of this.