Lesson 5 of 5 · Premium Track

Your personal AI ethics code

⏱ 30 min 🤖 Live Claude 📄 Your own document

Most people's approach to AI ethics is reactive — they figure out how they feel when a situation forces them to. That works until you're under pressure, in a hurry, or something is genuinely ambiguous. That's when defaults matter, and defaults are just decisions you made in advance.

This lesson is about making those decisions consciously — before you need them. You'll work through a set of questions that surface what you actually believe, then use Claude to help you articulate those beliefs into a clear, personal document you'll keep.

📌 What you're building

A written set of personal principles for how you'll use AI — not someone else's guidelines adapted for you, but the actual positions you've arrived at after thinking seriously about the questions in this track. Something you can return to. Something that's yours.


Before you write: five questions to sit with

Work through these before you start building your ethics code. They're designed to surface what you actually think — not what sounds good.

Question 1 — The thinking line

Where is the line between AI supporting your thinking and AI replacing your thinking? It's probably in a different place for different tasks. Where is it for the things you do every day?

Question 2 — Your disclosure standard

In what situations do you believe disclosing AI use is required? In what situations is it not? What principle underlies the difference? "I'd want to know if someone did that to me" is a valid test.

Question 3 — The sensitive data line

What categories of information will you never put into an AI system? Other people's personal details? Medical information? Confidential client data? Where do you draw that line, and why?

Question 4 — The AI bias question

If you use an AI tool to make decisions about people — recommendations, assessments, selections — what's your responsibility to check for bias? What would that check actually look like in practice?

Question 5 — The downstream question

If AI-generated content you share is wrong and causes harm, what's your responsibility? "I didn't write it, the AI did" is not a satisfying answer to most people. Where does your responsibility begin and end?


Build your ethics code

Answer each question honestly — short answers are fine; Claude will help shape them into polished principles. Then generate your code, and refine it until it says exactly what you mean.

Personal ethics code builder — define your principles. Answer honestly; Claude will help you articulate them clearly.

Step 1 of 6: What's your name, or how should the code refer to you? (e.g. "Alex", "a student", "a marketing professional" — whatever feels right.)

Step 2 of 6: Where do you draw the line between AI helping your thinking versus replacing it? What tasks should you always do yourself? Where is AI assistance clearly fine?

Step 3 of 6: What's your disclosure principle? When will you tell people you used AI? Think about the different contexts in your life — professional, academic, personal.

Step 4 of 6: What information will you never put into AI tools? Think about other people's privacy, confidential data, and sensitive categories.

Step 5 of 6: What's your commitment when AI affects decisions about people? Hiring, assessing, recommending, or evaluating others using AI assistance.

Step 6 of 6: What's your commitment to accuracy and the downstream impact of what you share? Misinformation, verification, and your responsibility for content you put into the world.

Set your tone preferences, then generate and refine 📄 your AI ethics code.

⚡ How to actually use this

A written ethics code only matters if you return to it. Three suggestions: (1) Save it somewhere you'll see it — the notes app you use most, pinned in your browser, the first page of your work journal. (2) When you face an ambiguous AI situation, read it before deciding. (3) Review it in six months. Your thinking will evolve as AI evolves — the document should too.

🎯 This is a living document

The right ethics code for you today might not be the right one in a year. AI capabilities are changing rapidly. New uses are emerging. New harms are being discovered. The value of having written principles isn't that they're permanent — it's that they make your thinking explicit, so when you change your mind, you know you've changed your mind and why.

🛡️

Track complete.

You've finished AI Ethics & Safety. You can now spot AI misinformation, you know what data your tools collect, you understand where bias comes from, you can reason through genuinely hard dilemmas — and you have your own written principles to guide you when things get complicated.

That's not nothing. Most people using AI daily have thought about none of this.
