Your team is using AI whether you have a policy or not. This template gives you a starting point — built on an open-source framework, with practical layers we added from real implementation work.
Covers permitted and prohibited uses, a data handling decision tree, ready-to-use disclosure language, and governance guardrails for agentic AI.
This template is built on the Pavilion AI Acceptable Use Policy and Governance Framework (MIT license, v1.0, March 2026). We started there, then added three layers from what we’ve seen in practice: a data handling decision tree, disclosure templates, and agentic AI governance.
The policy is organized around six core principles:
1. Human judgment leads. AI supports decisions. It doesn’t make them.
2. Transparency by default. If AI touched it, be ready to say so.
3. Data protection is non-negotiable. Not every tool deserves every input.
4. You own the output. AI generated it. You’re accountable for it.
5. Legal compliance comes first. Convenience never overrides regulation.
6. Confidentiality over convenience. If you’re not sure, don’t paste it in.
Every AI use case falls into one of three categories. The goal isn’t to lock things down — it’s to make the rules clear enough that people don’t have to guess.
Most policies list data tiers in a table. That’s necessary but not sufficient. People don’t think in tiers — they think in questions. Here’s the decision tree.
| Tier | Examples | What’s allowed |
|---|---|---|
| Enterprise | Tools with BAAs, DPAs, SOC 2, contractual no-training clauses | All data types, including client and regulated data with appropriate controls |
| Professional | Paid tools with privacy policies and data retention controls | Internal data, anonymized client data, business strategy. No PII, no regulated data. |
| Free / Public | Free-tier AI tools, no contractual guarantees | Public information only. No client names, no internal docs, no customer data. |
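If you want the tier table to be enforceable rather than just readable, it can be encoded as a simple lookup that an intake form or pre-flight script can check. The sketch below is hypothetical: the tier names mirror the table above, but the data-category labels, function name, and structure are our illustration, not part of the framework.

```python
# Hypothetical encoding of the tier table as a machine-checkable lookup.
# Tier names come from the table above; category labels are illustrative.
TIER_ALLOWED = {
    "enterprise":   {"public", "internal", "anonymized_client", "client", "regulated"},
    "professional": {"public", "internal", "anonymized_client"},
    "free":         {"public"},
}

def is_use_allowed(tier: str, data_category: str) -> bool:
    """Return True if this data category may be entered into a tool of this tier."""
    # Unknown tiers default to deny, matching the policy's default posture.
    return data_category in TIER_ALLOWED.get(tier, set())
```

The useful property is the default: an unrecognized tier or category is denied, so the safe answer never depends on someone remembering the table.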
The question isn’t whether to disclose — it’s when the situation calls for it. Here are the rules, followed by ready-to-use language.
Disclose when: someone asks directly, AI played a primary drafting role in a deliverable, a client has stated a preference, or regulatory requirements apply.
Don’t disclose: routine productivity use. Using AI to brainstorm, check grammar, or summarize research is analogous to using spell check or a search engine. Nobody discloses that they used Google.
Agents that can read email, write to databases, post to external systems, or execute code need different rules than a chatbot. These five guardrails apply to any agent with access to business systems.
Every agent needs a written scope: what it can access, what actions it can take, and what it explicitly cannot do. No unbounded autonomy. If you can’t write down the boundaries, the agent isn’t ready to deploy.
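A written scope is easiest to enforce when it is also a machine-readable record. The following is a minimal sketch, assuming a default-deny posture; the class and field names (`AgentScope`, `readable`, `writable`, `never`) are illustrative, not a real library.

```python
from dataclasses import dataclass

# Hypothetical sketch: an agent scope as an explicit, checkable record.
# Anything not listed is denied by default.
@dataclass(frozen=True)
class AgentScope:
    name: str
    readable: frozenset   # data sources the agent may read
    writable: frozenset   # systems the agent may write to
    never: frozenset      # actions that are out of bounds no matter the target

    def allows(self, action: str, target: str) -> bool:
        if action in self.never:
            return False
        if action == "read":
            return target in self.readable
        if action == "write":
            return target in self.writable
        return False  # default deny: unknown actions are not permitted

# Illustrative example scope for a ticket-triage agent.
support_agent = AgentScope(
    name="support-triage",
    readable=frozenset({"ticket_queue", "kb_articles"}),
    writable=frozenset({"ticket_queue"}),
    never=frozenset({"send_email", "delete_record"}),
)
```

If you can fill in a record like this, the boundaries exist. If you can't, that is the signal the paragraph above describes: the agent isn't ready to deploy.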
Sending an email, publishing content, deleting records, modifying permissions, spending money — anything that can’t be undone requires a human in the loop. The agent can draft, recommend, and queue. A person approves and executes.
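The draft/approve split above can be sketched in a few lines: the agent executes reversible actions directly, queues irreversible ones, and a named person releases the queue. Everything here (`propose`, `approve`, the action names) is a hypothetical illustration of the pattern, not a production design.

```python
# Hypothetical human-in-the-loop gate for irreversible actions.
IRREVERSIBLE = {"send_email", "publish", "delete", "modify_permissions", "spend"}

pending: list = []  # queue of actions awaiting human sign-off

def propose(action: str, payload: dict) -> str:
    """Agent-side: run reversible actions, queue anything irreversible."""
    if action in IRREVERSIBLE:
        pending.append({"action": action, "payload": payload, "approved_by": None})
        return "queued"
    return "executed"

def approve(index: int, approver: str) -> dict:
    """Human-side: a named person signs off before the action runs."""
    item = pending[index]
    item["approved_by"] = approver
    return item
```

The key invariant: nothing in `IRREVERSIBLE` ever runs without an `approved_by` name attached, which also gives the audit trail a human to point to.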
Every action an agent takes should be logged with a timestamp, what it did, what data it accessed, and what decision it made. If you can’t audit it after the fact, you can’t govern it. Logs are not optional.
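One workable shape for such a log is one JSON line per event, so it can be grepped and replayed after the fact. This is a minimal sketch; the field names are our assumption, not a standard schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical structured audit record: timestamp, actor, action,
# data touched, and decision, serialized as one JSON line per event.
def audit_entry(agent: str, action: str, data_accessed: list, decision: str) -> str:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),  # when it happened
        "agent": agent,                                # which agent acted
        "action": action,                              # what it did
        "data_accessed": data_accessed,                # what data it touched
        "decision": decision,                          # what it decided
    }
    return json.dumps(record)
```

Whatever schema you choose, it should answer the four questions in the paragraph above: when, what, what data, and what decision.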
Before any agent touches live systems, test it against synthetic or sandboxed data. Verify it respects scope boundaries, handles edge cases, and fails gracefully. Do not test in production.
MCP servers, API connections, and plugin architectures expand what an agent can do — and what can go wrong. Evaluate each integration for prompt injection risk, data exfiltration surface, and permission scope before connecting it.
Source attribution: This template builds on the Pavilion AI Acceptable Use Policy and Governance Framework (MIT license, v1.0, March 2026). The foundation layer — core principles, use categories, and data tier structure — comes from that framework. The data handling decision tree, disclosure templates, and agentic governance guardrails are dAIs' additions, built from implementation experience.
Full PDF with the complete policy framework, data handling decision tree, disclosure language, and agentic governance guardrails. Adapt it to your organization.
Download the PDF — free, no strings attached
We build AI automations with governance guardrails built in — compliant by design, not by audit.
Book a Discovery Call