Testing a Tool

This guide covers how to test your tool before publishing it.

Before you begin

  • The tool must be saved (draft or committed). You cannot test unsaved changes -- save first.
  • You need to know what input your tool expects so you can provide valid test data.

Steps

1. Open the tool editor

Navigate to Tools, click the tool, then click Edit on the version you want to test.

2. Open the Test tab

In the right panel, click the Test tab. You see a JSON input field and a Run Test button.

If the Run Test button is disabled, you have unsaved changes. Save your code first.

3. Enter test input

Type your test input as JSON in the text area. This is the data your tool will receive in event.data. For example:

If your tool expects an order number:

{"order_number": "WH-2024-8834"}

If your tool expects a search query:

{"query": "acme corp", "limit": 10}
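In code terms, the JSON you type here reaches your tool as the event's data field. A minimal sketch, assuming the tool exports an async handler and that event.data holds the parsed JSON (the handler name and signature are illustrative -- the exact shape depends on your runtime):

```typescript
interface ToolEvent {
  // The JSON typed into the Test tab, already parsed.
  data: Record<string, unknown>;
}

// Hypothetical handler -- name and signature are illustrative.
export async function handler(event: ToolEvent): Promise<{ order_number: string }> {
  // {"order_number": "WH-2024-8834"} in the Test tab arrives as event.data.
  const orderNumber = event.data["order_number"];
  if (typeof orderNumber !== "string") {
    throw new Error("order_number is required");
  }
  return { order_number: orderNumber };
}
```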

4. Click Run Test

Click Run Test. The button shows a loading indicator while the tool runs.

5. Read the results

After the test completes, the results panel shows:

  • Status -- A green "Success" chip or a red "Error" chip
  • Duration -- How long the execution took (for example, "145ms" or "2.5s")
  • Output -- The data your tool returned, formatted as JSON
  • Errors -- If the tool failed, the error message appears in red. Look here for details about what went wrong.
  • stderr -- Any warnings or diagnostic output from your code, shown in pink
  • stdout -- Any console.log output from your code
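The Output, stdout, and stderr fields map to different channels in your code. A sketch of that mapping, assuming diagnostic output written with console.error is routed to stderr (the handler itself is illustrative):

```typescript
// Illustrative handler showing where each kind of output lands in the results panel.
export async function handler(event: { data: Record<string, unknown> }) {
  console.log("looking up", event.data);    // appears under stdout
  console.error("cache miss, calling API"); // diagnostic output, assumed to appear under stderr
  return { found: true };                   // the return value, shown under Output as JSON
}
```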

Reading error messages

Errors fall into two categories:

Build errors -- The code could not compile or the environment could not be set up. Common causes:

  • Syntax errors in your TypeScript code
  • An imported package that does not exist
  • A network domain in your code that is not in the permissions list

Execution errors -- The code compiled but failed while running. Common causes:

  • The external API returned an error (wrong credentials, bad request)
  • A variable or field you referenced does not exist in the response
  • The tool timed out (exceeded 30 seconds)
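Errors in the second category -- referencing a field that does not exist in the response -- can often be prevented with explicit checks before you read nested data. A sketch using optional chaining and a fallback (the response shape here is invented for illustration):

```typescript
// Invented response shape for illustration.
interface OrderResponse {
  order?: {
    status?: string;
  };
}

// Optional chaining (?.) returns undefined instead of throwing when a
// level is missing; ?? supplies a fallback value.
function readStatus(body: OrderResponse): string {
  return body.order?.status ?? "unknown";
}
```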

Iterating on your tool

The typical testing workflow is:

  1. Write or edit code
  2. Save
  3. Run test
  4. Read the results
  5. If there are errors, fix the code and repeat from step 2
  6. Once tests pass, commit and publish

You can change the test input between runs. Try different inputs to verify your tool handles edge cases: missing fields, empty results, invalid values.
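One way to make those edge cases fail loudly instead of silently is to validate the input up front. A sketch for the search example from step 3 (the validator and its default limit are illustrative, not part of the product):

```typescript
interface SearchInput {
  query: string;
  limit: number;
}

// Reject a missing or empty query; fall back to a default limit of 10.
function normalizeInput(data: Record<string, unknown>): SearchInput {
  const query = data["query"];
  if (typeof query !== "string" || query.trim() === "") {
    throw new Error("query is required and must be a non-empty string");
  }
  const limit = data["limit"];
  return {
    query,
    limit: typeof limit === "number" && limit > 0 ? Math.floor(limit) : 10,
  };
}
```

Running the test with inputs like `{}` or `{"query": ""}` should then show a clear error under Errors rather than an empty or confusing result.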

Viewing execution history

Every test run is logged. On the tool's detail page, the Execution History panel shows all past executions with their status, duration, and source. Click any execution to see its full details in a side panel.

Related guides