[Docs index](/docs.md) / [Agent Skills](/docs/agent-skills/overview.md) / Playbook: Build a Team Prompt Standardization Skill

---

# Playbook: Build a Team Prompt Standardization Skill

Three people on the support team have their own version of "summarize this escalation." Two of them paste in the customer's full name. One of them tells the model to promise a fix timeline. Sales has four variations of the discovery-call summary prompt, and you can spot which rep wrote which one by the phrasing. Operations has a weekly reporting prompt that started as one person's experiment and now lives in six different Slack message histories.

These prompts are not bad. They are useful enough that the team keeps using them. They are just unowned — there is no canonical version, no review, no way to fix a problem in one place. This playbook walks you through building an Agent Skill that turns a family of scattered prompts into one reviewed Assist workflow: preserve what works, strip what shouldn't be there, and produce a version the whole team can use, attached to the operational agent that runs the work.

## What you will build

By the end of this playbook, you will have:

- A workspace Agent Skill named **Team Prompt Standardization** with a `SKILL.md`, a review-criteria policy, an approved-prompt format, source-prompt examples, and good and bad output examples.
- An approved version attached to an AI Enablement or Operations agent.
- A repeatable process: feed the agent the scattered prompts for one job, and get back a single Assist-ready workflow with the cleanup decisions documented.
- A template for using that workflow to seed new job-specific Agent Skills (Support Escalation Summary, Discovery Call Summary, Weekly Operations Update, etc.).

## What you need before you start

- An Assist workspace where you can create workspace-scoped Agent Skills.
- A workspace admin who can review and approve versions.
- An operational agent in your workspace dedicated to enablement or operations work. If you do not have one, [create a subagent](../subagents/creating-a-subagent.md).
- One concrete prompt family to start with. Pick something real and irritating: the prompt your support leads are tired of seeing five versions of, or the discovery-call summary nobody can find the canonical version of.
- Three to six examples of the source prompts. Get them from people's chat history, docs, or Slack threads. Real text, not summaries.

## Step 1: Create the draft skill

Open Assist and go to **Workspace > Agent Skills**. Click **New Skill** and set:

- **Name:** Team Prompt Standardization
- **Slug:** team-prompt-standardization
- **Scope:** Workspace
- **Description:** Helps an operator compare scattered team prompts for the same job, preserve what works, remove private data and unsupported claims, and produce a single reviewed Assist workflow.

You can also scaffold it from chat:

> "Create a workspace Agent Skill called Team Prompt Standardization. Slug: team-prompt-standardization. It compares the variations of a recurring team prompt and turns them into one reviewed Assist workflow. Start a draft `SKILL.md` and create folders for `examples`, `templates`, and `policies`."

The skill is created as a draft. Nothing is mounted to any agent yet. You can author every file safely.
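
The playbook works toward a small, predictable file tree. Here is a sketch of the layout, assuming the folder names from the scaffold prompt above; the individual files are written in Steps 2 and 3:

```text
team-prompt-standardization/            # draft; not mounted to any agent yet
├── SKILL.md                            # trigger line + standardization procedure (Step 2)
├── templates/
│   └── approved-prompt-format.md       # strict structure for the standardized workflow (Step 3)
├── policies/
│   └── review-rules.md                 # rules the agent must enforce during cleanup (Step 3)
└── examples/
    ├── source-prompts.md               # sanitized examples of scattered input prompts
    ├── good-output.md                  # one fully worked standardization with cleanup notes
    └── bad-output.md                   # polish-without-cleanup counterexample
```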

## Step 2: Write the main procedure

Open `/SKILL.md`. The first line tells Assist when to use the skill:

> Use this skill when a user asks to compare, clean up, or standardize repeated team prompts into one Assist-ready workflow.

Then write the standardization process the agent must follow. A useful procedure looks like this:

1. Ask the operator which job the prompt performs (escalation summary, discovery summary, weekly update, etc.) and how many source variations they have. Refuse to proceed with fewer than two — if there is only one prompt, there is nothing to standardize, only a single version to promote.
2. Read each source prompt and identify the business job it is trying to do.
3. List the useful instructions that appear in more than one source. Those are the strong signals — what the team has independently converged on.
4. Apply the review criteria in `/policies/review-rules.md`. Remove private customer or employee data, unsupported claims, instructions that bind the company to commitments, and shortcuts that depend on context the agent will not have at runtime.
5. Produce the standardized workflow in the format defined by `/templates/approved-prompt-format.md`.
6. Document the cleanup explicitly: what was preserved, what was changed, what was removed, and why.
7. Recommend whether the workflow is ready to become its own Agent Skill (`Support Escalation Summary`, for example), needs more cleanup first, or should stay as a manual reference.

The cleanup transparency is the point. Reviewers should be able to see why the new workflow is better than the scattered source prompts. Without that, the team just thinks you replaced their prompts with a different person's favorite version.
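
Put together, a minimal `SKILL.md` might look like the sketch below. The trigger line and the procedure come straight from this step; adapt the wording to your team.

```markdown
Use this skill when a user asks to compare, clean up, or standardize
repeated team prompts into one Assist-ready workflow.

## Procedure

1. Ask which job the prompt performs and how many source variations exist.
   Refuse to proceed with fewer than two sources.
2. Read each source prompt and identify the business job it is trying to do.
3. List the useful instructions that appear in more than one source.
4. Apply /policies/review-rules.md: remove private data, unsupported claims,
   commitments, and shortcuts that depend on missing runtime context.
5. Produce the workflow in the format from /templates/approved-prompt-format.md.
6. Document the cleanup: what was preserved, changed, removed, and why.
7. Recommend a next step: new Agent Skill, more cleanup, or manual reference.
```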

## Step 3: Add the supporting files

The supporting files are what give the agent something to apply, not just say. Create:

- `/templates/approved-prompt-format.md` — the exact structure the standardized workflow must use. Required fields: job, when to use, required inputs, instructions, output format, review conditions. A strict format produces standardized prompts that can be reviewed in batches.
- `/policies/review-rules.md` — the rules the agent must enforce. Examples: no customer names in examples; no commitments to delivery timelines without a human review; no instructions to "be confident" or "sound authoritative"; required inputs must be named explicitly.
- `/examples/source-prompts.md` — three or four sanitized examples of the kind of scattered prompts the agent will see as input. Real phrasing, real messiness. The agent learns what "scattered" looks like by reading actual scattered text.
- `/examples/good-output.md` — one fully worked example of a standardized workflow, complete with the cleanup commentary.
- `/examples/bad-output.md` — one example of what to avoid: a "standardized" prompt that just polished the language without removing risk. This stops the agent from prioritizing tone over substance.

Drive the work from chat if you want:

> "Draft `/policies/review-rules.md` for the Team Prompt Standardization skill. Eight to ten rules the agent must enforce when cleaning up scattered team prompts. Cover sensitive data, unsupported claims, commitments, required inputs, and human-review triggers."

Review every file before submitting. The point of an Agent Skill is that the file is the contract — what the agent reads is what the agent will do.
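
For the format template specifically, the required fields listed above translate into a skeleton roughly like this; treat it as a sketch to adapt, not a fixed schema:

```markdown
# <Workflow name>

**Job:** <the business job this workflow performs>
**When to use:** <the situation that triggers the workflow>

## Required inputs
- <input 1, named explicitly>
- <input 2>

## Instructions
<the standardized instructions, one numbered step per action>

## Output format
<the exact structure the output must follow>

## Review conditions
<when a person must review the output before it is used>
```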

## Step 4: Use the review criteria that actually decide

Before you submit, make sure the review criteria in your skill are operational. A workflow that passes the criteria can become its own Agent Skill. One that fails goes back to draft. Add this table to the skill so the agent applies it consistently:

| Category | Question |
|----------|----------|
| Repeated value | Is this workflow used often enough by enough people to standardize? |
| Business owner | Does a manager or team own the workflow and the future updates? |
| Input clarity | Does the prompt say exactly what information the agent needs? |
| Output quality | Is the expected output format clear and reviewable? |
| Data safety | Does the prompt avoid unnecessary sensitive data? |
| Human review | Does it state when a person must review the output? |

If a workflow cannot pass these six checks, keep it as a draft and improve it. Standardization is not the same as approval.

## Step 5: Submit for review and approve

From the skill detail page, click **Submit for review**. The version moves to pending. A workspace admin walks through the files:

- Does the procedure actually produce standardized prompts, or just polished ones?
- Are the review rules realistic for the kinds of prompts your team writes?
- Does the good-output example clear the bar? Does the bad-output example demonstrate the failure mode?

When the admin approves, the version becomes available to mount. If the admin rejects, the feedback comes back as comments on the version, and you author a new draft. Treat a rejection seriously — the review step is what turns scattered AI use into shared company infrastructure rather than one person's preferences pushed across the team.

## Step 6: Attach the skill to the right agent

Pick the operational agent that will run this work. Good candidates: `AI Enablement Agent`, `Operations Agent`, `Knowledge Operations Agent`, `Team Workflow Review Agent`. Attach the approved version with a readable mount path:

`/skills/team-prompt-standardization`

The agent is now pinned to this version of the skill. Future approved versions do not silently swap in — you upgrade the mount deliberately when you are ready.
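
Assuming the draft layout from Step 1, the agent now reads the approved files under that mount path, roughly:

```text
/skills/team-prompt-standardization/
├── SKILL.md
├── templates/
│   └── approved-prompt-format.md
├── policies/
│   └── review-rules.md
└── examples/
    ├── source-prompts.md
    ├── good-output.md
    └── bad-output.md
```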

## Step 7: Run the first standardization with real prompts

Pick a real prompt family. Support escalation summaries are usually a good first run — the prompts exist in multiple places, the output goes to customers or internal handoff partners, and the team feels the inconsistency. Start with prompts like:

> "Standardize these three support escalation prompts into one Assist workflow. The job is summarizing a support ticket for a Tier 2 handoff. Here are the three sources..."

> "Review these four sales discovery-call summary prompts. Produce one standardized workflow and tell me what you removed and why."

> "Convert this operations weekly-update prompt into a managed Assist workflow. The output goes to leadership every Monday."

Watch what the agent produces. Two things matter:

- Did the cleanup section actually explain what changed, or did it just say "improved clarity"? "Improved clarity" is a smell. You want statements like "Removed instruction to promise resolution within 24 hours — that's a commitment we cannot make without human review."
- Does the output match the strict format in your template? If the agent drifts from the template, the review process collapses — every standardized workflow looks different and reviewers cannot compare.

Iterate in chat:

> "The cleanup section is too vague. Rewrite it as specific before/after pairs: for each instruction you changed or removed, show the source phrase and the new phrase."

> "Be stricter about the required-inputs section. List every piece of information the agent must have to produce a good output. Don't include optional context."

That iteration feedback should go into the next draft of the skill itself, not just this conversation. Once an issue appears in two consecutive runs, fix the skill.
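
For reference, "specific before/after pairs" in a cleanup section looks roughly like the sketch below; the source phrases are invented for illustration:

```markdown
## Cleanup

| Source phrase | Standardized as | Why |
|---------------|-----------------|-----|
| "Promise the customer a fix within 24 hours" | removed | A commitment we cannot make without human review |
| "Paste the whole ticket, including customer details" | "Paste the ticket body with names replaced by placeholders" | No customer data in prompts |
| "Summarize the issue for the Tier 2 engineer" | kept as-is | Appears in all three sources; this is the core job |
```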

## Step 8: Turn standardized workflows into their own Agent Skills

The standardized workflow that comes out of this process is the seed of a new, narrower Agent Skill. Take the standardized "Support Escalation Summary" output and create a workspace Agent Skill named exactly that. The `SKILL.md` is the standardized procedure. The supporting files become `/templates/escalation-summary.md`, `/policies/customer-data.md`, `/examples/good-summary.md`.
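
Assuming the file names above, the derived skill might be laid out like this (the slug is illustrative):

```text
support-escalation-summary/
├── SKILL.md                        # the standardized procedure from the cleanup run
├── templates/
│   └── escalation-summary.md       # the output format the workflow defined
├── policies/
│   └── customer-data.md            # the customer-data rule carried over
└── examples/
    └── good-summary.md             # the good-output example carried over
```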

> "Use the standardized Support Escalation Summary workflow we just produced as the basis for a new workspace Agent Skill called Support Escalation Summary. Use the cleaned-up procedure as `SKILL.md`. Carry over the customer-data rule and the good-output example."

Submit the new skill for review. When it is approved, attach it to the support team's operational agent. Now the support team has one canonical workflow, not five copies in five chat histories.

## What you built

You have an Agent Skill that does what no individual prompt doc can do: it produces standardized workflows with an explicit record of what was preserved, changed, and removed. The skill itself includes:

- A specific procedure that compares multiple source prompts and produces one reviewed workflow.
- Review rules that catch the failure modes scattered prompts usually have: leaked sensitive data, unsupported claims, missing inputs, missing human-review triggers.
- A strict output format that reviewers can compare across runs.
- Worked examples of both good and bad outcomes, so the agent does not optimize for polish.

More importantly, you have a way to produce job-specific Agent Skills that the company actually owns. Every standardized workflow becomes a candidate for its own approved skill, attached to the agent the team already uses. The support team stops asking "which version of this prompt should I copy?" — they just ask the support agent, which already has the reviewed version mounted.

The team's collected experience of what works in a prompt becomes shared company infrastructure instead of personal preference. That is what bringing scattered AI use under one platform actually looks like at the prompt level.

## Where to go from here

- **Pick the next prompt family.** Once support escalations are standardized, move to discovery-call summaries, weekly operations updates, customer onboarding checklists, or recruiting screen summaries. Run the same playbook.
- **Roll out the new skills deliberately.** Each standardized workflow that becomes its own Agent Skill needs to be attached to an operational agent and validated on real work. Use the [Department AI Rollout](playbook-department-ai-rollout.md) playbook.
- **Keep the cleanup commentary.** Save the change records from standardization. They become the audit trail when leadership asks "what is the company AI policy for support output?" The answer is the policy in your approved skill plus the cleanup notes that show what was caught and removed.
- **Run a quarterly drift check.** Periodically re-collect the prompts people are actually using and run them through the standardization skill. Drift happens — people add personal tweaks. Catching drift early keeps the approved workflow useful.

## Related guides

- [Authoring and approving Agent Skills](authoring-and-approving-agent-skills.md)
- [Attaching skills to operational agents](attaching-skills-to-operational-agents.md)
- [Governing Agent Skill versions](governing-agent-skill-versions.md)
- [Troubleshooting](troubleshooting.md)

---

## Navigation

### In this section: Agent Skills

- [Agent Skills](/docs/agent-skills/overview.md)
- [Use Cases and Playbooks](/docs/agent-skills/use-cases.md)
- [Troubleshooting Agent Skills](/docs/agent-skills/troubleshooting.md)
- [Attaching Agent Skills to Operational Agents](/docs/agent-skills/attaching-skills-to-operational-agents.md)
- [Authoring and Approving Agent Skills](/docs/agent-skills/authoring-and-approving-agent-skills.md)
- [Creating a Managed AI Workflow Skill](/docs/agent-skills/creating-a-managed-ai-workflow-skill.md)
- [Governing Agent Skill Versions](/docs/agent-skills/governing-agent-skill-versions.md)
- [Importing a Team AI Workflow Skill](/docs/agent-skills/importing-a-team-ai-workflow-skill.md)

#### Playbooks

- [Playbook: Build a Department AI Rollout Skill](/docs/agent-skills/playbook-department-ai-rollout.md)
- [Playbook: Build a Team AI Usage Cleanup Skill](/docs/agent-skills/playbook-ai-usage-inventory.md)
- **Playbook: Build a Team Prompt Standardization Skill** (current)

### Other sections

- [MCP Servers](/docs/mcp-servers/overview.md)
- [Tool Creation](/docs/tool-creation/overview.md)
- [Agent Filesystem](/docs/agent-filesystem/overview.md)
- [Chat Sharing](/docs/chat-sharing/overview.md)
- [Scheduled Triggers](/docs/scheduled-triggers/overview.md)
- [Sandcastles](/docs/sandcastles/overview.md)
- [Subagents](/docs/subagents/overview.md)
- [Workspace Permissions](/docs/workspace-permissions/overview.md)

[Back to docs index](/docs.md)
