Free Policy Template

AI Acceptable Use Policy Template

Your team is using AI whether you have a policy or not. This template gives you a starting point — built on an open-source framework, with practical layers we added from real implementation work.

Covers permitted and prohibited uses, a data handling decision tree, ready-to-use disclosure language, and governance guardrails for agentic AI.


The Foundation

What this policy covers

This template is built on the Pavilion AI Acceptable Use Policy and Governance Framework (MIT license, v1.0, March 2026). We started there, then added three layers from what we’ve seen in practice: a data handling decision tree, disclosure templates, and agentic AI governance.

The policy is organized around six core principles:

1. Human judgment leads. AI supports decisions. It doesn’t make them.
2. Transparency by default. If AI touched it, be ready to say so.
3. Data protection is non-negotiable. Not every tool deserves every input.
4. You own the output. AI generated it. You’re accountable for it.
5. Legal compliance comes first. Convenience never overrides regulation.
6. Confidentiality over convenience. If you’re not sure, don’t paste it in.

Use Categories

Permitted, restricted, and prohibited

Every AI use case falls into one of three categories. The goal isn’t to lock things down — it’s to make the rules clear enough that people don’t have to guess.

Permitted

  • Drafting and editing internal content
  • Research and summarization
  • Brainstorming and ideation
  • Code generation and debugging
  • Data analysis on non-sensitive data
  • Workflow automation (documented)

Restricted (needs approval)

  • Client data in any AI tool
  • Publishing AI-generated content externally
  • Autonomous agents with system access
  • Regulated data (financial, health, legal)
  • Hiring or performance evaluation inputs

Prohibited

  • Presenting AI-generated output as solely human work when asked directly
  • Inputting confidential data into free-tier tools
  • Using AI to fabricate data or sources
  • Bypassing security controls
  • Deploying agents without scope limits

dAIs Layer: Data Handling

Which data goes where

Most policies list data tiers in a table. That’s necessary but not sufficient. People don’t think in tiers — they think in questions. Here’s the decision tree.

Does this data identify a specific person?
Yes: Never use free-tier tools. Enterprise tools with data processing agreements only. If it includes health, financial, or employment data, get legal signoff first.

Is this data covered by a client NDA or contract?
Yes: Treat it as confidential. Enterprise-tier tools only. Strip client names and identifying details when possible — work with the structure, not the specifics.

Would you be uncomfortable if this input appeared in a training dataset?
Yes: Use only tools with contractual no-training guarantees. If you’re not sure whether a tool trains on inputs, assume it does.

Is this information already publicly available?
Yes: Any tool tier is appropriate. Public data in, public data out. This is where free-tier tools are fine — competitive research, public filings, published articles.
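The decision tree above can be sketched as a triage function. This is a minimal sketch assuming boolean answers to the four questions, checked in the tree's order; the function and parameter names are illustrative, and the returned tier names match the tier table that follows.

```python
def data_tier(identifies_person: bool,
              under_client_nda: bool,
              training_sensitive: bool,
              publicly_available: bool) -> str:
    """Walk the data handling questions in order and return the
    minimum acceptable tool tier for this input."""
    if identifies_person:
        # PII: enterprise tools with data processing agreements only.
        return "enterprise"
    if under_client_nda:
        # Contract-covered data is confidential: enterprise tier only.
        return "enterprise"
    if training_sensitive:
        # Needs a contractual no-training guarantee.
        return "enterprise"
    if publicly_available:
        # Public data in, public data out: free-tier tools are fine.
        return "free"
    # Internal but non-sensitive data defaults to professional tier.
    return "professional"
```

For example, a public competitor filing (`data_tier(False, False, False, True)`) lands in the free tier, while anything identifying a person short-circuits to enterprise.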

The three tiers

Enterprise — tools with BAAs, DPAs, SOC 2 attestations, and contractual no-training clauses. Allowed: all data types, including client and regulated data with appropriate controls.

Professional — paid tools with privacy policies and data retention controls. Allowed: internal data, anonymized client data, business strategy. No PII, no regulated data.

Free / Public — free-tier AI tools with no contractual guarantees. Allowed: public information only. No client names, no internal docs, no customer data.

dAIs Layer: Disclosure

When and how to disclose AI use

The question isn’t whether to disclose — it’s when the situation calls for it. Here are the rules, followed by ready-to-use language.

Disclose when: someone asks directly, AI played a primary drafting role in a deliverable, a client has stated a preference, or regulatory requirements apply.

Don’t disclose: routine productivity use. Using AI to brainstorm, check grammar, or summarize research is analogous to using spell check or a search engine. Nobody discloses that they used Google.

Client deliverable
“This deliverable was developed using AI-assisted tools for drafting and research. All strategic recommendations, analysis, and final content reflect our professional judgment and have been reviewed for accuracy.”
Internal tooling
“This workflow uses AI to [specific function — e.g., classify inbound requests, draft initial responses, flag anomalies]. A team member reviews all outputs before they reach a customer or go on record.”
When asked directly
“Yes, I use AI tools as part of my workflow. They help with research, drafting, and analysis. I review and edit all outputs, and I’m accountable for the final work product.”

dAIs Layer: Agentic AI Governance

Guardrails for autonomous agents

Agents that can read email, write to databases, post to external systems, or execute code need different rules than a chatbot. These five guardrails apply to any agent with access to business systems.

01

Define the scope before deployment

Every agent needs a written scope: what it can access, what actions it can take, and what it explicitly cannot do. No unbounded autonomy. If you can’t write down the boundaries, the agent isn’t ready to deploy.
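A written scope can be as small as one data structure checked on every action. This is a sketch assuming a deny-by-default check; the class, field names, and the example agent are illustrative, not part of the framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """Written deployment scope: what the agent can access, what it
    can do, and what it explicitly cannot do."""
    name: str
    allowed_resources: tuple  # systems the agent may read
    allowed_actions: tuple    # actions it may take
    forbidden_actions: tuple  # explicit out-of-bounds list

    def permits(self, action: str) -> bool:
        # Deny by default: an action must be explicitly allowed
        # and not explicitly forbidden.
        return (action in self.allowed_actions
                and action not in self.forbidden_actions)

# Hypothetical inbox-triage agent: it may classify and draft replies,
# but never send or delete anything.
scope = AgentScope(
    name="inbox-triage",
    allowed_resources=("shared-inbox",),
    allowed_actions=("classify", "draft_reply"),
    forbidden_actions=("send_email", "delete_message"),
)
```

If you cannot fill in those three tuples, the agent is not ready to deploy.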

02

Human approval for irreversible actions

Sending an email, publishing content, deleting records, modifying permissions, spending money — anything that can’t be undone requires a human in the loop. The agent can draft, recommend, and queue. A person approves and executes.
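One way to enforce the loop is a gate that refuses to run irreversible actions without a named approver. A sketch under assumed names: the action list and function signature are illustrative, not prescribed by the framework.

```python
from typing import Optional

# Actions that cannot be undone and therefore require a human.
IRREVERSIBLE = {"send_email", "publish_content", "delete_record",
                "modify_permissions", "spend_money"}

def execute(action: str, payload: dict,
            approved_by: Optional[str] = None) -> dict:
    """Queue irreversible actions unless a human has approved them;
    everything else runs immediately."""
    if action in IRREVERSIBLE and approved_by is None:
        # The agent may draft, recommend, and queue -- not execute.
        return {"status": "queued", "action": action}
    return {"status": "executed", "action": action,
            "approved_by": approved_by}
```

The agent calls `execute` without an approver and gets a queued item; a person re-submits with their name attached.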

03

Log everything

Every action an agent takes should be logged with a timestamp, what it did, what data it accessed, and what decision it made. If you can’t audit it after the fact, you can’t govern it. Logs are not optional.
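A minimal audit record covers exactly the four things named above: timestamp, action, data accessed, and decision. A sketch emitting JSON lines; the field names are illustrative.

```python
import json
from datetime import datetime, timezone

def log_agent_action(agent: str, action: str,
                     data_accessed: list, decision: str) -> str:
    """Serialize one audit record per agent action as a JSON line."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),  # when
        "agent": agent,                                # which agent
        "action": action,                              # what it did
        "data_accessed": data_accessed,                # what it touched
        "decision": decision,                          # what it decided
    }
    return json.dumps(record)
```

One line per action, appended to an append-only store, is enough to reconstruct what an agent did after the fact.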

04

Test with synthetic data first

Before any agent touches live systems, test it against synthetic or sandboxed data. Verify it respects scope boundaries, handles edge cases, and fails gracefully. Do not test in production.

05

Review third-party integrations

MCP servers, API connections, and plugin architectures expand what an agent can do — and what can go wrong. Evaluate each integration for prompt injection risk, data exfiltration surface, and permission scope before connecting it.

Source attribution: This template builds on the Pavilion AI Acceptable Use Policy and Governance Framework (MIT license, v1.0, March 2026). The foundation layer — core principles, use categories, and data tier structure — comes from that framework. The data handling decision tree, disclosure templates, and agentic governance guardrails are dAIs additions built from implementation experience.

Download the Policy Template

Full PDF with the complete policy framework, data handling decision tree, disclosure language, and agentic governance guardrails. Adapt it to your organization.

Download the PDF

PDF — free, no strings

Need help implementing AI responsibly?

We build AI automations with governance guardrails built in — compliant by design, not by audit.

Book a Discovery Call