Class ReadWorkingMemoryTool

Tool that lets the agent explicitly read its persistent working memory. The memory is also injected into the system prompt automatically, but this tool is useful when the agent wants to reason about its memory before deciding what to update.

Implements

  • ITool<Record<string, never>, ReadOutput>

Constructors

Methods

  • execute (Async)

    Executes the core logic of the tool with the provided arguments and execution context. This method is asynchronous and should encapsulate the tool's primary functionality. Implementations should handle their own internal errors gracefully and package them into the ToolExecutionResult object.

    Parameters

    • _args: Record<string, never>

      The input arguments for the tool. These arguments are expected to have been validated against the tool's inputSchema by the calling system (e.g., ToolExecutor).

    • _context: ToolExecutionContext

      The execution context, providing information about the GMI, user, current session, and any correlation IDs for tracing.

    Returns Promise<ToolExecutionResult<ReadOutput>>

    A promise that resolves with a ToolExecutionResult object, which contains the success status, the output data (if successful), or an error message (if failed).

    Throws

    While tools should ideally capture errors and return them in ToolExecutionResult.error, critical, unrecoverable, or unexpected system-level failures during execution might still result in a thrown exception. The ToolExecutor should be prepared to catch these.
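The contract above can be sketched in TypeScript. The type definitions below are simplified stand-ins inferred from this page, not the library's actual declarations; the memory-reading line is a placeholder for the real lookup.

```typescript
// Simplified stand-ins for the real types; field names are assumptions
// based on this documentation page, not the actual library definitions.
type ToolExecutionContext = { sessionId?: string; correlationId?: string };
type ToolExecutionResult<T> = {
  success: boolean;
  output?: T;
  error?: string;
};
type ReadOutput = { memory: string };

// A minimal execute() in the spirit described above: capture internal
// errors and package them into the result object rather than throwing.
async function execute(
  _args: Record<string, never>,
  _context: ToolExecutionContext
): Promise<ToolExecutionResult<ReadOutput>> {
  try {
    const memory = "placeholder working memory"; // stand-in for the real read
    return { success: true, output: { memory } };
  } catch (err) {
    // Internal failures become a structured error, per the contract.
    return { success: false, error: String(err) };
  }
}
```

A `ToolExecutor` calling this function would still wrap the call in its own `try`/`catch`, since unexpected system-level failures may escape as thrown exceptions.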

Properties

id: "read-working-memory-v1" = 'read-working-memory-v1'

A globally unique identifier for this specific tool (e.g., "web-search-engine-v1.2", "stock-price-fetcher"). This ID is used for internal registration, management, and precise identification. It's recommended to use a namespaced, versioned format (e.g., vendor-toolname-version).

name: "read_working_memory" = 'read_working_memory'

The functional name of the tool, as it should be presented to and used by an LLM in a tool call request (e.g., "searchWeb", "executePythonCode", "getWeatherForecast"). This name must be unique among the set of tools made available to a given GMI/LLM at any time. It should be concise, descriptive, and typically in camelCase or snake_case.
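An LLM tool-call request referencing this tool by its functional name might look like the following sketch; the exact request shape depends on the LLM provider and is illustrative here.

```typescript
// Hypothetical shape of an LLM tool-call request; the envelope's field
// names are illustrative, not a specific provider's wire format.
const toolCall = {
  name: "read_working_memory", // must match the tool's functional name
  arguments: {},               // this tool takes no arguments
};
```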

displayName: "Read Working Memory" = 'Read Working Memory'

A concise, human-readable title or display name for the tool. Used in user interfaces, logs, or when presenting tool options to developers or users.

Example

"Web Search Engine", "Advanced Python Code Interpreter"

description: string = ...

A detailed, natural language description of what the tool does, its primary purpose, typical use cases, and any important considerations or limitations for its use. This description is critical for an LLM to understand the tool's capabilities and make informed decisions about when and how to invoke it. It should be comprehensive enough for the LLM to grasp the tool's semantics.

category: "memory" = 'memory'

Optional. A category or group to which this tool belongs (e.g., "data_analysis", "communication", "file_system", "image_generation"). This is useful for organizing tools, for filtering in UIs or registries, and potentially for aiding an LLM in selecting from a large set of tools.

hasSideEffects: false = false

Optional. Indicates if the tool might have side effects on external systems (e.g., writing to a database, sending an email, making a purchase, modifying a file). Defaults to false if not specified. LLMs or orchestrators might handle tools with side effects with greater caution, potentially requiring explicit user confirmation.
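The default-to-false semantics can be made concrete with a small sketch of how an orchestrator might gate side-effecting tools; the `ToolMeta` shape and function name are assumptions for illustration.

```typescript
// Hypothetical orchestrator helper: side-effecting tools require
// explicit user confirmation before execution.
type ToolMeta = { name: string; hasSideEffects?: boolean };

function needsConfirmation(tool: ToolMeta): boolean {
  // hasSideEffects defaults to false when unspecified, per the docs.
  return tool.hasSideEffects ?? false;
}
```

Under this scheme, `read-working-memory-v1` (with `hasSideEffects: false`) would run without a confirmation prompt.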

inputSchema: JSONSchemaObject = ...

The JSON schema defining the structure, types, and constraints of the input arguments object that this tool expects. This schema is used by:

  1. LLMs, to construct valid argument objects when requesting a tool call.
  2. The ToolExecutor, to validate the arguments before invoking the tool's execute method.

The schema should follow the JSON Schema specification.
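Since this tool's argument type is `Record<string, never>`, its schema plausibly describes an empty object; the actual schema is defined in the tool's source, so the literal below is an assumption.

```typescript
// Hypothetical inputSchema for a tool that accepts no arguments.
// The real schema lives in the tool's source; this is a sketch.
const inputSchema = {
  type: "object",
  properties: {},              // no arguments are defined
  additionalProperties: false, // reject any unexpected keys
} as const;
```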