Conversation

@alexandrudanpop (Contributor) commented Jan 28, 2026

Fixes OPS-3497

Copilot AI review requested due to automatic review settings January 28, 2026 14:35
Copilot AI left a comment


Pull request overview

This PR fixes an infinite retry loop that occurred when backend/MCP tools returned data exceeding the LLM's token limit. The solution replaces the generic lastAssistantMessageIsCompleteWithToolCalls function with a custom hasCompletedUIToolCalls that only triggers auto-send for UI tools (prefixed with 'ui-'), preventing automatic retries for backend tool responses that may be too large.

Changes:

  • Added hasCompletedUIToolCalls function to filter tool calls by UI prefix before auto-sending
  • Replaced lastAssistantMessageIsCompleteWithToolCalls with hasCompletedUIToolCalls in the chat hook
  • Added comprehensive test suite covering UI/backend tool scenarios and the specific "Prompt is too long" edge case
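Based on the behavior described above, the new guard can be sketched roughly as follows. This is a hypothetical illustration, not the actual code in chat-utils.ts: the `UIMessage`/`MessagePart` shapes are simplified stand-ins for the AI SDK types, and the `tool-` part-type convention is inferred from the message JSON quoted later in this thread.

```typescript
// Hypothetical sketch of the fix: only auto-send when the last assistant
// message finished calling UI tools (names prefixed with 'ui-'). Completed
// backend/MCP tool calls never trigger an auto-send, so an oversized tool
// output cannot be re-sent in a loop.
const UI_TOOL_PREFIX = 'ui-';

interface MessagePart {
  type: string;   // e.g. 'tool-ui-showTable', 'tool-OpenOps_Documentation', 'text'
  state?: string; // e.g. 'output-available' when a tool call has finished
}

interface UIMessage {
  role: string;
  parts: MessagePart[];
}

function hasCompletedUIToolCalls(messages: UIMessage[]): boolean {
  const message = messages[messages.length - 1];
  if (!message || message.role !== 'assistant') {
    return false;
  }
  // Keep only tool parts whose tool name carries the UI prefix.
  const uiToolParts = message.parts.filter((part) =>
    part.type.startsWith(`tool-${UI_TOOL_PREFIX}`),
  );
  // Auto-send only if there is at least one UI tool call and all of them
  // have produced output.
  return (
    uiToolParts.length > 0 &&
    uiToolParts.every((part) => part.state === 'output-available')
  );
}
```

With this shape, a backend tool finishing with a huge payload simply leaves the chat idle instead of resending the whole conversation.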

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

File | Description

packages/react-ui/src/app/features/ai/lib/tests/chat-utils.test.ts — New test file (572 lines) covering all scenarios for the UI tool detection logic
packages/react-ui/src/app/features/ai/lib/chat-utils.ts — Implements the hasCompletedUIToolCalls function with UI tool prefix filtering
packages/react-ui/src/app/features/ai/lib/assistant-ui-chat-hook.ts — Updates sendAutomaticallyWhen to use the new hasCompletedUIToolCalls function


    sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
@alexandrudanpop (author) commented Jan 28, 2026


The AI SDK documentation contains many references recommending sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls.

However, this function seems to have an issue:
vercel/ai#12099

https://ai-sdk.dev/docs/ai-sdk-ui/chatbot-tool-usage

For example, when the last message is the one below, lastAssistantMessageIsCompleteWithToolCalls returns true and the chat auto-sends again, even though the oversized tool output already exceeds the model's context, so every retry fails with "Prompt is too long" and the loop never ends.

    {
      "id": "PQ65e1VsSDZxYuS9",
      "role": "assistant",
      "parts": [
        {
          "type": "reasoning",
          "text": "Searching OpenOps documentation for available AWS templates...",
          "state": "done"
        },
        {
          "type": "step-start"
        },
        {
          "type": "tool-OpenOps_Documentation",
          "toolCallId": "toolu_vrtx_014xu4P2LtCC84JMGmciw6X6",
          "state": "output-available",
          "input": {
            "query": "AWS templates"
          },
          "output": {
            "success": true,
            "query": "AWS templates",
            "queryResult": "... very large data truncated"
          }
        },
        {
          "type": "text",
          "text": "Prompt is too long",
          "state": "done"
        }
      ]
    }
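Running both predicates against the message above makes the bug concrete. This is a minimal illustration with loosened types, not the SDK's own implementation: the generic completeness check is approximated as "every tool part has output", and the fix as "at least one 'ui-'-prefixed tool part exists".

```typescript
// Minimal reproduction of the decision on the message quoted above.
type Part = { type: string; state?: string; text?: string };

const lastMessage: { role: string; parts: Part[] } = {
  role: 'assistant',
  parts: [
    { type: 'reasoning', state: 'done' },
    { type: 'step-start' },
    { type: 'tool-OpenOps_Documentation', state: 'output-available' },
    { type: 'text', text: 'Prompt is too long', state: 'done' },
  ],
};

const toolParts = lastMessage.parts.filter((p) => p.type.startsWith('tool-'));

// Approximation of the generic check: all tool calls have completed.
const genericComplete =
  toolParts.length > 0 && toolParts.every((p) => p.state === 'output-available');

// Approximation of the fix: only UI tools ('tool-ui-...') count.
const uiComplete = toolParts.some((p) => p.type.startsWith('tool-ui-'));

console.log(genericComplete); // true  -> auto-send fires, loop continues
console.log(uiComplete);      // false -> no auto-send, loop broken
```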

linear bot commented Jan 28, 2026

    const UI_TOOL_PREFIX = 'ui-';

    export function hasCompletedUIToolCalls(messages: UIMessage[]): boolean {
      const message = messages[messages.length - 1];
A collaborator commented:

[nitpick] Suggested change:

    - const message = messages[messages.length - 1];
    + const message = messages.at(-1);
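The suggestion above swaps the manual last-index lookup for Array.prototype.at, which returns undefined on an empty array and so keeps a following null-guard meaningful. A quick illustration with hypothetical values:

```typescript
// .at(-1) reads the last element; on an empty array it yields undefined
// instead of an out-of-bounds index computation.
const empty: string[] = [];
const letters = ['a', 'b', 'c'];

console.log(empty.at(-1));   // undefined
console.log(letters.at(-1)); // 'c'
```

Note that `.at(-1)` is typed as `T | undefined`, so TypeScript also forces callers to handle the empty-history case explicitly.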

@alexandrudanpop alexandrudanpop merged commit c365c29 into main Jan 29, 2026
25 checks passed
@alexandrudanpop alexandrudanpop deleted the fix/llm-error-loop branch January 29, 2026 09:49