AI Safety & Compliance Guidelines
These guidelines are mandatory reading before starting any exercises. They’re not a formality — they’re the foundation that makes AI adoption safe in any organizational setting.
Every exercise in this framework reinforces these principles. Safety isn’t a separate module; it’s embedded in how we teach AI fluency.
The Core Rule
Delegate work to AI, never accountability. You own every output. AI is a thought partner and a tool, never the decision-maker in compliance-sensitive matters, questions of professional judgment, or high-stakes actions.
What You CAN Share with AI Tools
- Anonymized, hypothetical scenarios (“A general wellness program for employees aged 30-50”)
- Publicly available information (published guidelines, research, general best practices)
- De-identified operational questions (“How can a 200-person department improve onboarding?”)
- Personal learning and skill-building prompts
- Draft communications that contain no protected or confidential information
What You CANNOT Share
- Protected personal information — names, dates of birth, ID numbers, or any data that could identify individuals (employees, customers, patients, clients)
- Confidential organizational data — financial records, strategic plans, proprietary processes, employee records
- Credentials or access information — passwords, system access details, internal URLs
When in doubt, don't input it. No exercise in this program requires real personal data or confidential information.
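Before pasting anything into an AI tool, a quick automated check can catch obvious slips in addition to reading the prompt yourself. The sketch below is a hypothetical illustration, not part of the framework: the pattern list and the `screen_prompt` helper are assumptions, and a clean result is never proof that a prompt is safe to send.

```python
import re

# Illustrative patterns only -- a real screen would be tuned to your
# organization's data (ID formats, record numbers, internal hostnames).
SUSPECT_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "phone number": r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b",
    "date of birth": r"\b\d{1,2}/\d{1,2}/\d{4}\b",
    "national ID (US SSN format)": r"\b\d{3}-\d{2}-\d{4}\b",
    "internal URL": r"https?://[\w.-]*intranet[\w./-]*",
}

def screen_prompt(prompt: str) -> list[str]:
    """Return warnings for anything that looks like protected or confidential data."""
    return [
        f"Possible {label} detected -- remove or anonymize before sending."
        for label, pattern in SUSPECT_PATTERNS.items()
        if re.search(pattern, prompt, re.IGNORECASE)
    ]

if __name__ == "__main__":
    draft = "Summarize onboarding feedback from jane.doe@example.com (DOB 04/12/1986)."
    for warning in screen_prompt(draft):
        print(warning)
    # A clean result is not proof of safety: names, job titles, and context
    # can still identify someone. When in doubt, don't input it.
```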
Risks to Understand
| Risk | Impact | Mitigation |
|---|---|---|
| Data privacy violation | Regulatory fines, reputational damage | Never input protected or confidential data; use only hypothetical/anonymized data |
| AI hallucination | Incorrect information acted upon | Always verify AI outputs against authoritative sources |
| Bias in outputs | Inequitable recommendations or decisions | Review outputs critically; don’t accept AI suggestions unchallenged |
| Over-reliance | Degraded professional judgment | Use AI to augment, not replace, domain expertise |
| Shadow AI | Untracked, ungoverned AI usage | Use approved tools; log interactions when required by policy |
Best Practices
- Use approved tools on approved devices. Follow your organization’s IT policies for AI tool access.
- Log your AI interactions when your organization requires it. At minimum, note what tool you used and for what purpose; a minimal logging sketch follows this list.
- Verify everything. AI outputs are drafts, not final products. Cross-reference facts, check citations, and apply professional judgment.
- Report concerns immediately. If you accidentally input sensitive data or receive an output that worries you, follow your organization’s incident reporting process.
- Share what you learn. When you discover an effective AI workflow, share an anonymized version with your team. This builds organizational fluency.
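For teams whose policy requires logging, the fields matter more than the tooling. Below is a minimal sketch assuming an append-only JSON Lines file; the `log_interaction` helper and field names are illustrative assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative field set: tool, purpose, whether the output was verified, and
# who owns the result. Adjust to whatever your organization's policy requires.
LOG_FILE = Path("ai_interaction_log.jsonl")

def log_interaction(tool: str, purpose: str, verified: bool, owner: str) -> None:
    """Append one interaction record as a JSON line.

    Never log prompt contents that contain protected or confidential data.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
        "output_verified": verified,
        "owner": owner,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a drafted checklist for a hypothetical scenario, reviewed before use.
log_interaction(
    tool="approved-chat-assistant",
    purpose="Draft onboarding checklist for a hypothetical 200-person department",
    verified=True,
    owner="j.smith",
)
```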
For Regulated Industries
Organizations in healthcare, finance, legal, and other regulated sectors face additional compliance requirements:
- Healthcare: HIPAA governs Protected Health Information (PHI). Never input any of the 18 HIPAA identifiers into AI tools. Ensure Business Associate Agreements (BAAs) are in place where required.
- Finance: SOX, PCI-DSS, and other regulations restrict how financial data and customer information can be processed. AI outputs used in financial reporting must be verified and auditable.
- Legal: Attorney-client privilege and confidentiality rules apply. AI-generated legal analysis must be reviewed by qualified counsel.
Customize this section for your organization’s specific regulatory requirements when forking the framework.
For Managers
- Model safe behavior. Your team watches how you use AI. Demonstrate the guidelines in practice.
- Create psychological safety. Teams won’t report concerns or share learnings if they fear judgment. Encourage experimentation within the guardrails.
- Review team AI usage. Periodically check how your team is using AI tools. Look for both compliance issues and opportunities to share effective patterns.
Quick Reference Card
BEFORE using AI, ask:
✓ Does my prompt contain any personal, protected, or confidential data?
✓ Am I using an approved tool on an approved device?
✓ Could this output influence a high-stakes decision?
AFTER receiving output, ask:
✓ Have I verified the facts against a reliable source?
✓ Am I comfortable putting my name on this output?
✓ Should I log this interaction?
These guidelines are reinforced in every level of the framework. As you progress, the exercises become more complex — but the safety principles remain constant.
| ← Back: Overview | Next: Measuring Success → |