Async generator that streams chunks during a single LLM inference pass.
Must return a LoopOutput as its generator return value (the value
carried on the final { done: true } result from .next()).
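A minimal sketch of this contract, assuming hypothetical Chunk and LoopOutput shapes (the real types are defined elsewhere):

```typescript
// Hypothetical shapes for illustration only.
type Chunk = { text: string };
type LoopOutput = { finishReason: string };

async function* generateStream(): AsyncGenerator<Chunk, LoopOutput> {
  yield { text: "Hello" };
  yield { text: ", world" };
  // The generator's `return` value becomes the `value` carried on the
  // final { done: true } result from .next().
  return { finishReason: "stop" };
}

// How a caller recovers both the streamed chunks and the LoopOutput.
async function drain(): Promise<LoopOutput> {
  const gen = generateStream();
  let result = await gen.next();
  while (!result.done) {
    // result.value is a streamed Chunk here.
    result = await gen.next();
  }
  // Once done is true, result.value is the LoopOutput return value.
  return result.value;
}
```

Note that a plain `for await (const chunk of gen)` loop discards the return value, which is why the caller drives .next() manually.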
Execute a single tool call and return its result.
Implementations should never throw. Failures are reported by returning
a result with success: false and a populated error field.
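One way to satisfy the never-throw contract is to wrap the whole body in a try/catch and convert exceptions into failure results. A sketch, assuming a hypothetical ToolResult shape and a single "echo" tool:

```typescript
// Hypothetical discriminated result shape; field names are assumptions.
type ToolResult =
  | { success: true; output: string }
  | { success: false; error: string };

async function executeTool(name: string, args: unknown): Promise<ToolResult> {
  try {
    if (name !== "echo") throw new Error(`unknown tool: ${name}`);
    return { success: true, output: JSON.stringify(args) };
  } catch (e) {
    // Never propagate: surface failures as data so the loop can feed the
    // error back to the model instead of crashing.
    return { success: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```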
Feed tool results back into the conversation so the next generateStream
call has access to them. Typically appends tool messages to the message list.
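The "typically appends tool messages" behavior might look like the following sketch, where the Message shape and the "tool" role name are assumptions:

```typescript
// Hypothetical message shape; real implementations use provider-specific
// tool message formats.
type Message = { role: "user" | "assistant" | "tool"; content: string };

// Append one tool message per result so the next generateStream call
// sees them in the conversation history.
function provideToolResults(messages: Message[], results: string[]): Message[] {
  for (const content of results) {
    messages.push({ role: "tool", content });
  }
  return messages;
}
```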
Execution context provided to the LoopController by the caller. Abstracts
away the underlying LLM/GMI implementation so the loop logic remains
provider-agnostic.
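Putting the pieces together, the context might be a single interface like the sketch below. All type and method names other than generateStream are assumptions; the stub implementation exists only to illustrate the shape:

```typescript
// Hypothetical supporting types.
type Chunk = { text: string };
type LoopOutput = { finishReason: string };
type ToolCall = { name: string; args: unknown };
type ToolResult = { success: boolean; output?: string; error?: string };

// Sketch of the provider-agnostic execution context described above.
interface LoopContext {
  // Streams chunks for one inference pass; returns a LoopOutput.
  generateStream(): AsyncGenerator<Chunk, LoopOutput>;
  // Runs one tool call; reports failures as data rather than throwing.
  executeTool(call: ToolCall): Promise<ToolResult>;
  // Feeds tool results back into the conversation for the next pass.
  provideToolResults(results: ToolResult[]): void;
}

// A trivial in-memory stub showing that the interface is implementable
// without any real LLM backend.
const stub: LoopContext = {
  async *generateStream() {
    yield { text: "hi" };
    return { finishReason: "stop" };
  },
  async executeTool(call) {
    return { success: true, output: call.name };
  },
  provideToolResults() {},
};
```

Because the loop only ever talks to this interface, swapping providers means supplying a different LoopContext, with no change to the loop logic itself.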