Playbook: Build a Monthly Reporting Agent With Templates and an Archive
Every organization has someone whose calendar is blocked the first week of every month because "reports are due." They log into an analytics system or an ERP, run the same set of queries, copy the numbers into a document, write two paragraphs of commentary, and send the report to the leadership team. The format drifts month to month because each iteration is done by hand. Last month's report is buried in an email thread. Comparing this month's numbers to three months ago requires digging up three separate files.
This playbook walks you through building a subagent with a filesystem that turns the monthly reporting cycle into something that runs on its own. The template lives as a file. The completed reports live as dated files. The agent reads the template, fills in the numbers from real data sources, writes the new report, and updates an index the whole team can browse.
What you will build
By the end of this playbook, you will have:
- A scheduled subagent that runs on the second business day of every month.
- Custom tools that pull the monthly figures from your analytics system, ERP, or warehouse database.
- A template file stored in the agent's filesystem that defines the report's structure.
- A dated report archive — one file per month, all in the same place, all in the same format.
- An index file the agent maintains automatically so anyone can see what has been generated.
- A sandcastle dashboard that renders the archive as a browsable report history with a one-click comparison between any two months.
What you need before you start
- An Assist workspace with your AI client connected via MCP.
- Editor access to create agents, schedule them, and turn their filesystems on.
- Credentials for whatever system holds the numbers — analytics database, BI tool, ERP module, or internal data warehouse.
- An example of the report your team currently generates by hand. You will convert it into a template, so have the most recent version handy.
This playbook uses a fictional analytics API at https://analytics.internal.company.com/api and a monthly operations report with sections for revenue, shipment volume, on-time delivery rate, and exceptions. Substitute your real systems and sections.
Step 1: Create the agent and its filesystem
From your AI client:
"Create a new subagent called 'Monthly Operations Report Agent'. Its purpose is to generate the monthly operations report by reading a template, pulling numbers from the analytics system, and writing the completed report to its filesystem. It should never email the report automatically — it writes the file and returns the path."
Accept the agent creation, then open Agents in Assist, click the agent, and turn Filesystem Volume on. Start a fresh chat with the agent and confirm the file tools are available.
Pause before moving on. The filesystem is empty and the template does not exist yet. The next step populates it.
Step 2: Write the template file
Open a conversation with the agent and hand it the current report in plain text:
"I am going to paste last month's operations report below. Read it, then save it as `/template.md` — but replace every specific number and date with a placeholder like `{{revenue}}`, `{{shipment_count}}`, `{{otd_rate}}`, `{{top_exception}}`, and so on. Also replace the month name in the title with `{{month}}`. Keep the structure, headings, prose, and commentary patterns exactly as they are. The result should be a reusable skeleton we can fill in every month."
Paste the old report. The agent will produce a templated version and save it. Ask it to read /template.md back to you. Check that:
- Every number from the old report is a placeholder.
- The headings match exactly.
- The commentary sections have placeholders for things like "month-over-month change" and "top exception category."
Iterate with the agent until the template is clean. This is a one-time investment that pays off every month.
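A quick mechanical check complements that review. A minimal sketch in Python (the function name and the leftover-number heuristic are illustrative, not part of the agent):

```python
import re

def check_template(text: str) -> dict:
    """Sanity-check a templated report: list its placeholders and flag
    any raw numbers the templating pass may have missed."""
    placeholders = re.findall(r"\{\{(\w+)\}\}", text)
    # Digits left behind are suspects: dates, totals, percentages that
    # should have become placeholders.
    leftover_numbers = re.findall(r"\d[\d,.]*%?", text)
    return {"placeholders": placeholders, "leftover_numbers": leftover_numbers}

template = "# {{month}} Operations Report\nRevenue: {{revenue}} (OTD {{otd_rate}})"
print(check_template(template))
```

Run it on `/template.md` after each editing pass; an empty `leftover_numbers` list is a good sign the skeleton is clean.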
Step 3: Build read tools for the source data
The agent needs to actually pull the numbers, not make them up. Create tools for each metric in the template:
"Create a custom tool called `get_monthly_revenue` that takes a month in `YYYY-MM` format, calls our analytics API at `https://analytics.internal.company.com/api/revenue?month={month}` with bearer auth, and returns the total revenue as a number plus the month-over-month change as a percentage."
"Create a custom tool called `get_monthly_shipments` that takes a month, calls the analytics API at `/api/shipments?month={month}`, and returns the total shipment count, on-time delivery rate, and the average order cycle time."
"Create a custom tool called `get_monthly_exceptions` that takes a month, calls `/api/exceptions?month={month}`, and returns the top five exception categories by count, with a description and resolution rate for each."
Test each tool against the most recent completed month. Verify the numbers match what the hand-made report showed. Any discrepancy means either the API is measuring something different from what the template assumes or the template documentation is wrong — fix whichever it is before continuing.
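For reference, the first of these tools might look like the following sketch against the playbook's fictional analytics API. The response fields (`total`, `mom_change_pct`) are assumptions to be matched to your real API, and `ANALYTICS_TOKEN` is a hypothetical environment variable:

```python
import json
import os
import urllib.request

API_BASE = "https://analytics.internal.company.com/api"  # fictional API from this playbook

def revenue_url(month: str) -> str:
    # month is in YYYY-MM form, e.g. "2024-05"
    return f"{API_BASE}/revenue?month={month}"

def get_monthly_revenue(month: str) -> dict:
    # ANALYTICS_TOKEN is a hypothetical env var; the response fields
    # ("total", "mom_change_pct") are assumptions -- rename them to match
    # whatever your real analytics API actually returns.
    req = urllib.request.Request(
        revenue_url(month),
        headers={"Authorization": f"Bearer {os.environ['ANALYTICS_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    return {"revenue": data["total"], "mom_change_pct": data["mom_change_pct"]}
```

The other two tools follow the same shape with their own endpoints and return fields.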
Step 4: Run the template end-to-end for one month
Before scheduling anything, walk through a full generation manually:
"Let's generate the operations report for last month. Read `/template.md`, call each of the three tools for the correct month, fill in every placeholder with real data, write commentary for each section based on the actual numbers, and save the result as `/reports/{{YYYY-MM}}-operations-report.md`. Use the actual month in the filename and in the title. Also update `/index.md` to add this report at the top of the list, with the month and a one-line summary."
The agent reads the template, calls the tools, writes the completed file, and updates the index. Ask it to read the new report back and confirm it looks right. Compare it against the human-generated version from last month. Differences are either:
- Tool numbers do not match the human numbers — either the tool is pulling the wrong thing or the human made a mistake. Investigate.
- Commentary is generic — tighten the template. Add more specific placeholder prompts like `{{why_otd_changed}}` and instruct the agent to explain the month-over-month change in the commentary section.
- Formatting drifted — tighten the template. Use explicit markdown that the agent will preserve.
Iterate until a generated report is as good as or better than the human-made one. This is when you know the workflow is ready.
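The fill step itself is plain string substitution. A hedged sketch of how the placeholders could be resolved, failing loudly on any placeholder without a value (a silent blank in a monthly report is worse than a crashed run):

```python
import re

def fill_template(template: str, values: dict) -> str:
    """Replace every {{placeholder}} with its value; raise if any
    placeholder in the template has no corresponding value."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"template placeholder {{{{{key}}}}} has no value")
        return str(values[key])
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

report = fill_template(
    "# {{month}} Operations Report\nRevenue: {{revenue}}",
    {"month": "2024-05", "revenue": "$1.2M"},
)
print(report)
```

The agent does this implicitly when it rewrites the file; the sketch is just the contract you want it to honor — every placeholder filled, none skipped.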
Step 5: Schedule the agent
In your AI client:
"Schedule the Monthly Operations Report Agent to run at 9 AM UTC on the second business day of every month. When it runs, it should generate the report for the previous completed month using the template workflow we just built. On completion, it should post a short Slack notification to #operations with a summary and a link to the generated file."
The AI client will set up the cron schedule and the Slack hook. The agent now runs every month without anyone having to remember.
Verify the schedule by looking at the agent's scheduled run list. Confirm the next execution time is what you expect.
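"Second business day" is worth pinning down precisely, since most cron syntaxes cannot express it directly; one common workaround is a daily cron job that exits unless today matches the computed date. A minimal sketch (holidays are ignored here — add a holiday calendar if your reporting window needs one):

```python
import datetime

def second_business_day(year: int, month: int) -> datetime.date:
    """Second weekday (Mon-Fri) of the given month, ignoring holidays."""
    day = datetime.date(year, month, 1)
    seen = 0
    while True:
        if day.weekday() < 5:  # 0..4 are Monday..Friday
            seen += 1
            if seen == 2:
                return day
        day += datetime.timedelta(days=1)

print(second_business_day(2024, 6))  # June 2024 starts on a Saturday -> 2024-06-04
```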
Step 6: Build the archive dashboard
Now make the archive visible to the whole team:
"Build a sandcastle app called 'Monthly Operations Report Archive' that shows every file under `/reports/` managed by the Monthly Operations Report Agent. The app should:
- Call a read-only tool that lists files matching `/reports/*.md` and returns each file's path plus the top of the file for a preview.
- Display them as cards sorted newest first, with the month prominent and a short preview of the commentary.
- Clicking a card opens a full view of the report, rendered as markdown.
- Include a 'Compare' button that lets you pick two months and shows a side-by-side diff of the commentary and a table of the numbers from both.
- Store the last three compared pairs per user as persistent state so people can jump back to recent comparisons.
Make it read-only. Generation only happens through the scheduled agent."
Iterate on the app through conversation: "make the compare view show the delta between the two months next to each number." "Add a filter for the current year." "Highlight months where on-time delivery dropped more than 5 points."
Share the app with the operations leadership team. They now have a running archive of every report the organization has ever generated, all in the same format, all comparable.
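The read-only listing tool behind the app can be as simple as the following sketch, which leans on the `YYYY-MM` filename prefix so that lexical sort order equals date order (the function name and preview length are illustrative):

```python
from pathlib import Path

def list_reports(reports_dir: str, preview_lines: int = 3) -> list[dict]:
    """One entry per *.md report file, newest month first, with a short
    preview taken from the top of the file."""
    entries = []
    # reverse lexical sort == newest first, thanks to the YYYY-MM prefix
    for path in sorted(Path(reports_dir).glob("*.md"), reverse=True):
        text = path.read_text(encoding="utf-8")
        entries.append({
            "path": str(path),
            "month": path.name[:7],  # the "YYYY-MM" prefix of the filename
            "preview": "\n".join(text.splitlines()[:preview_lines]),
        })
    return entries
```

This is also why the filename convention in Step 4 matters: the archive stays sortable and comparable with no metadata beyond the files themselves.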
Step 7: Add a template evolution process
Templates need to change over time. The business adds new metrics, drops old ones, and reshapes commentary. Because the template is a file in the agent's filesystem, the update flow is:
"Open `/template.md`, add a new section after 'On-Time Delivery' called 'Customer Satisfaction Score' with placeholders for `{{csat_score}}` and `{{csat_mom_change}}`. Update `/changelog.md` to log this change with today's date."
The agent updates both files. The next scheduled run uses the new template and every subsequent report includes the new section. The changelog file gives future readers a record of what changed and when — invaluable when someone asks "why did we start tracking CSAT in March?"
This is the pattern for every future template change. Treat the template like source code. Version it in the filesystem. Log changes.
What you built
You now have:
- A scheduled agent that runs every month without human intervention.
- A templated report workflow with the template, the archive, and a change log all living as files the agent manages.
- A faithful archive where every past report is preserved in the exact same format, making month-over-month comparisons trivial.
- A browsable dashboard that gives leadership direct access to the archive without going through the person who used to generate reports.
- A template evolution process that handles "we need to start tracking X" by editing one file.
What used to cost one person a day every month is now one scheduled run. What used to be buried in email is now a searchable archive. The filesystem is what makes this durable — the agent's memory of the template, the history, and the process lives on disk, not in a prompt someone has to remember to type.
Natural extensions
- Quarterly and annual rollups: a second scheduled agent that reads the monthly reports and writes a quarterly summary and a yearly recap.
- Anomaly detection: the monthly agent compares this month's numbers against a rolling three-month average and highlights any metric that moved more than one standard deviation.
- Per-audience versions: the template can have sections tagged for different readers. A secondary agent reads the full report, extracts only the sections tagged "executive summary," and writes a shorter version for the leadership email.
- Decision log integration: when a report flags an exception, an operations manager can respond with "add a note in the decisions log that we are changing our pickup schedule in response to this." The agent writes a new file under `/decisions/` and links it from the report.
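The anomaly-detection extension above is mostly arithmetic. A minimal sketch of the rule it describes (function name and defaults are illustrative):

```python
import statistics

def is_anomalous(previous: list[float], current: float, n_sigma: float = 1.0) -> bool:
    """Flag a metric when it moves more than n_sigma standard deviations
    from the mean of the prior months. Needs at least two prior months."""
    mean = statistics.mean(previous)
    stdev = statistics.stdev(previous)
    return abs(current - mean) > n_sigma * stdev

# e.g. OTD rates for the last three months vs. this month's figure
print(is_anomalous([100.0, 102.0, 98.0], 105.0))  # mean 100, stdev 2 -> True
```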
Each extension is a new prompt. The filesystem keeps it all connected.