AI Safety & Compliance Guidelines

These guidelines are mandatory reading before starting any exercises. They’re not a formality — they’re the foundation that makes AI adoption safe in any organizational setting.

Every exercise in this framework reinforces these principles. Safety isn’t a separate module; it’s embedded in how we teach AI fluency.

The Core Rule

Delegate work to AI, never accountability. You own every output. AI is a thought partner and a tool — never a decision-maker in compliance-sensitive matters, professional judgment calls, or high-stakes actions.

What You CAN Share with AI Tools

  - Publicly available information
  - Hypothetical scenarios and fully anonymized examples
  - Your own drafts and working notes, provided they contain no personal, protected, or confidential data

What You CANNOT Share

  - Personal data about clients, colleagues, or anyone else
  - Protected or regulated data (such as health or financial records)
  - Confidential business information, credentials, or internal documents not cleared for external tools

When in doubt, don’t input it. There is no exercise in this program that requires real personal data or confidential information.

Risks to Understand

| Risk | Impact | Mitigation |
| --- | --- | --- |
| Data privacy violation | Regulatory fines, reputational damage | Never input protected or confidential data; use only hypothetical/anonymized data |
| AI hallucination | Incorrect information acted upon | Always verify AI outputs against authoritative sources |
| Bias in outputs | Inequitable recommendations or decisions | Review outputs critically; don't accept AI suggestions unchallenged |
| Over-reliance | Degraded professional judgment | Use AI to augment, not replace, domain expertise |
| Shadow AI | Untracked, ungoverned AI usage | Use approved tools; log interactions when required by policy |
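The first mitigation above — anonymizing data before it ever reaches an AI tool — can be partially automated. The sketch below is a minimal, hypothetical example: the patterns shown (email, SSN-style, phone) are illustrative placeholders, and a real deployment would need patterns matching your organization's actual data types.

```python
import re

# Hypothetical PII patterns -- illustrative only; extend to cover
# whatever identifiers your organization actually handles.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace matched PII with labeled placeholders before any AI input."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Automated redaction is a backstop, not a substitute for judgment: no pattern list catches every form of sensitive data, so the "when in doubt, don't input it" rule still applies.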

Best Practices

  1. Use approved tools on approved devices. Follow your organization’s IT policies for AI tool access.
  2. Log your AI interactions when your organization requires it. At minimum, note what tool you used and for what purpose.
  3. Verify everything. AI outputs are drafts, not final products. Cross-reference facts, check citations, and apply professional judgment.
  4. Report concerns immediately. If you accidentally input sensitive data, or receive an output that concerns you, follow your organization’s incident reporting process.
  5. Share what you learn. When you discover an effective AI workflow, share it with your team (anonymized). This builds organizational fluency.

For Regulated Industries

Organizations in healthcare, finance, legal, and other regulated sectors face additional compliance requirements:

  - Healthcare: HIPAA restrictions on protected health information (PHI)
  - Finance: confidentiality and record-keeping obligations under rules such as GLBA and SOX
  - Legal: attorney-client privilege and client confidentiality obligations

Customize this section for your organization’s specific regulatory requirements when forking the framework.

For Managers

Model these practices yourself: use approved tools, verify outputs before acting on them, and treat reported incidents as learning opportunities rather than grounds for blame. Teams adopt the safety habits their managers demonstrate.

Quick Reference Card

BEFORE using AI, ask:
  ✓ Does my prompt contain any personal, protected, or confidential data?
  ✓ Am I using an approved tool on an approved device?
  ✓ Could this output influence a high-stakes decision?

AFTER receiving output, ask:
  ✓ Have I verified the facts against a reliable source?
  ✓ Am I comfortable putting my name on this output?
  ✓ Should I log this interaction?
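The first "before" question can be backed by an automated gate. The sketch below is a hypothetical pre-flight check — the patterns shown are illustrative assumptions, and a passing result never replaces the human review the card describes:

```python
import re

# Hypothetical indicators of sensitive content -- tune to your policies.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style identifiers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"(?i)\bconfidential\b"),         # explicitly marked material
]

def safe_to_send(prompt: str) -> bool:
    """Return False if any sensitive pattern appears in the prompt."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)
```

A gate like this catches only what its patterns describe; the remaining questions on the card (approved tool, decision stakes, verification, logging) still require your judgment.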

These guidelines are reinforced in every level of the framework. As you progress, the exercises become more complex — but the safety principles remain constant.
