Portal Agent Chat — Manual Test Checklist
End-to-end smoke test for the per-agent Chat tab on the agent detail page. Exercises the full loop: AI config → conversation design → chat turns → stage transitions → chat management.
This is a manual checklist, not an automated suite. Run through it after any change to agent chat, the Chat API, or the conversation engine.
Prerequisites
- An organization you are an admin of.
- An agent with:
  - A chatbot system memory attached (role `read-write`).
  - At least one non-setup conversation defined under `conversations:*` with a real starting stage and prompt. A setup-only system memory will hide the Chat tab.
- An API key for one of: Anthropic, OpenAI, or GLM.
1. AI config (Settings tab)
- Open the agent detail page → Settings tab.
- Confirm the AI configuration section shows "No AI config yet" when `hasAiApiKey` is false.
- Click Edit → pick a provider, enter a model, paste an API key.
- Save → row refreshes, `hasAiApiKey` flips to true, provider/model is shown, and the key is masked.
- Test → expect `✓ Works` with a one-sentence reply from the model.
- Enter a bad key → Save → Test → expect a clear provider error message, not a stack trace.
- Clear → confirmation → config is removed, Chat tab disappears.
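The "key is masked" check above can be compared against a helper like the following. This is an illustrative sketch only; the checklist does not specify the portal's actual masking format, so `maskApiKey` and its output shape are assumptions.

```typescript
// Hypothetical masking helper: keep a short prefix/suffix, hide the rest.
// The exact format the portal uses is not specified in this checklist.
function maskApiKey(key: string): string {
  if (key.length <= 8) return "****";
  return `${key.slice(0, 4)}...${key.slice(-4)}`;
}

console.log(maskApiKey("sk-abcdefghijkl")); // short prefix, ellipsis, short suffix
console.log(maskApiKey("tiny")); // fully masked
```

Whatever the real format, verify that no contiguous run of the key long enough to be useful survives in the UI or in API responses.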
2. Chat tab visibility gate
The Chat tab should appear only when all three are true:
- `hasSystemMemory` (agent has a chatbot system memory)
- `hasAiApiKey` (AI config saved)
- `hasNonSetupConversations` (at least one `conversations:<name>` node with `isSetup !== true`)
Toggle each condition off individually and verify the tab hides.
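The gate above is a simple conjunction of the three flags. A minimal sketch, assuming the flag names from the checklist (the interface and function are illustrative, not the portal's actual implementation):

```typescript
// Flags named in the checklist; grouped here for illustration.
interface ChatTabFlags {
  hasSystemMemory: boolean;
  hasAiApiKey: boolean;
  hasNonSetupConversations: boolean;
}

// The Chat tab renders only when every flag is true.
function isChatTabVisible(flags: ChatTabFlags): boolean {
  return (
    flags.hasSystemMemory &&
    flags.hasAiApiKey &&
    flags.hasNonSetupConversations
  );
}

const allOn: ChatTabFlags = {
  hasSystemMemory: true,
  hasAiApiKey: true,
  hasNonSetupConversations: true,
};
console.log(isChatTabVisible(allOn)); // true
console.log(isChatTabVisible({ ...allOn, hasAiApiKey: false })); // false
```

When testing, flip exactly one flag at a time (detach the system memory, clear the AI config, mark all conversations `isSetup`) so a regression in any single condition is caught.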
3. Start a new chat
- Open the Chat tab. Sidebar shows "No chats yet".
- Click New chat.
- Expect:
  - A new chat row appears in the sidebar, selected.
  - The assistant greets you with the conversation's welcome prompt (served by `/api/agent-chat/start`).
  - No user bubble precedes the welcome.
  - Stage toast shows the initial stage, if the conversation defines one.
4. Send messages
- Type a message → Enter (or click Send).
- User bubble appears immediately (optimistic).
- Assistant reply streams in once the LLM responds.
- On error (kill network, wrong key), the optimistic user bubble rolls back and `turnError` is shown below the input.
- Shift+Enter inserts a newline; Enter alone submits.
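The optimistic-update-with-rollback behavior above can be sketched as follows. The message shape and the `sendTurn` callback are assumptions for illustration; only the append/rollback/`turnError` flow comes from the checklist.

```typescript
// Illustrative message shape; the portal's real type is not shown here.
interface ChatMessage {
  role: "user" | "assistant";
  text: string;
}

// Append the user bubble immediately; on failure, return the original
// list (rollback) plus a turnError to display below the input.
async function sendWithOptimisticRollback(
  messages: ChatMessage[],
  text: string,
  sendTurn: (text: string) => Promise<ChatMessage>,
): Promise<{ messages: ChatMessage[]; turnError: string | null }> {
  const optimistic: ChatMessage[] = [...messages, { role: "user", text }];
  try {
    const reply = await sendTurn(text);
    return { messages: [...optimistic, reply], turnError: null };
  } catch (err) {
    // Roll back: drop the optimistic user bubble, surface the error.
    return { messages, turnError: String(err) };
  }
}
```

The key assertion when testing: after a failed turn, the transcript must look exactly as it did before Send was pressed, with only `turnError` indicating what happened.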
5. Stage transitions
- Drive the conversation through a prompt designed to transition stages (e.g., answer the question the stage is gating on).
- Expect a stage toast when `next_stage` is returned.
- Subsequent assistant messages should reflect the new stage's prompt.
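The toast condition above is worth pinning down: a toast should fire only when `next_stage` is present and differs from the current stage. A minimal sketch, assuming a hypothetical turn-result shape (only the `next_stage` field name comes from the checklist):

```typescript
// Hypothetical shape of one turn's result; only next_stage is attested.
interface TurnResult {
  message: string;
  next_stage?: string;
}

// Toast text to display, or null when the stage is unchanged/absent.
function stageToastFor(currentStage: string, result: TurnResult): string | null {
  if (result.next_stage && result.next_stage !== currentStage) {
    return `Stage: ${result.next_stage}`;
  }
  return null;
}

console.log(stageToastFor("intro", { message: "ok", next_stage: "qualify" })); // Stage: qualify
console.log(stageToastFor("intro", { message: "ok" })); // null
```

A turn that echoes the current stage back should not re-toast; check that explicitly, since it is an easy regression.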
6. Resume an existing chat
- Start a second chat so there are at least two in the sidebar.
- Reload the page.
- Open the Chat tab → sidebar lists both chats, sorted by `lastMessageAt` (most recent first).
- Click an older chat → messages load via `/api/agent-chat/resume`.
- Send another message → `lastMessageAt` updates and the row jumps to the top of the sidebar.
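The sidebar ordering above is a plain descending sort on `lastMessageAt`; bumping that timestamp on send is what makes the row "jump to the top" after a re-sort. A sketch with an assumed row shape (only `lastMessageAt` is named in the checklist):

```typescript
// Assumed sidebar row shape for illustration.
interface ChatRow {
  id: string;
  title: string;
  lastMessageAt: number; // epoch millis
}

// Most recent first. Sorting a copy keeps the input list untouched.
function sortSidebar(rows: ChatRow[]): ChatRow[] {
  return [...rows].sort((a, b) => b.lastMessageAt - a.lastMessageAt);
}

const rows: ChatRow[] = [
  { id: "a", title: "older", lastMessageAt: 1_000 },
  { id: "b", title: "newer", lastMessageAt: 2_000 },
];
console.log(sortSidebar(rows).map((r) => r.id)); // [ 'b', 'a' ]
```

When verifying the "jumps to the top" step, confirm the order is driven by the updated timestamp and not by creation order.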
7. Rename + delete
- Hover a chat row → click rename → edit title → Enter to commit, Escape to cancel.
- Click delete → confirmation modal → confirm → chat disappears from the sidebar; the underlying `Chat` is soft-deleted.
- Cancel on the modal keeps the chat.
8. Per-user scoping
- As user A, create a chat in agent X.
- Log in as user B (same org, same agent) → Chat tab.
- Expect an empty sidebar — user A's chats must not leak.
- Starting a chat as B provisions a fresh per-user memory (`userMemoryOfAgentId`) and does not touch A's memory.
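The scoping rule above amounts to filtering chats on the (agent, user) pair, never on the agent alone. A minimal sketch with assumed field names (the checklist attests the behavior, not these exact shapes):

```typescript
// Assumed chat record shape for illustration.
interface AgentChat {
  id: string;
  agentId: string;
  userId: string;
}

// Chats are scoped per (agentId, userId): user B must never see user A's
// chats even within the same agent and org.
function chatsVisibleTo(chats: AgentChat[], agentId: string, userId: string): AgentChat[] {
  return chats.filter((c) => c.agentId === agentId && c.userId === userId);
}

const all: AgentChat[] = [
  { id: "1", agentId: "x", userId: "A" },
  { id: "2", agentId: "x", userId: "B" },
];
console.log(chatsVisibleTo(all, "x", "B").map((c) => c.id)); // [ '2' ]
```

The empty-sidebar expectation for user B is exactly this filter returning nothing when B has no chats of their own.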
9. Multi-provider sanity
Run sections 3–5 once per provider you support:
- Anthropic (`claude-*`)
- OpenAI (`gpt-*`)
- GLM (`glm-*`)
Tool-call forcing must succeed for all three: the LLM should always call `respond({ message, data, next_stage })`, never return plain text. If a provider returns plain text, treat it as a bug in `agentChatLlm.ts`.
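When triaging a suspected tool-call bug, it helps to validate the payload shape independently of the provider. A minimal validator for the `respond` arguments named above; the field names come from the checklist, while the optionality of `data` and `next_stage` is an assumption:

```typescript
// Shape of the forced respond(...) tool call, per the checklist.
// data/next_stage optionality is assumed for illustration.
interface RespondPayload {
  message: string;
  data?: Record<string, unknown>;
  next_stage?: string;
}

// Narrow an unknown provider payload to RespondPayload.
function isRespondPayload(value: unknown): value is RespondPayload {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.message !== "string") return false;
  if (v.next_stage !== undefined && typeof v.next_stage !== "string") return false;
  return true;
}

console.log(isRespondPayload({ message: "hi", next_stage: "qualify" })); // true
console.log(isRespondPayload("plain text from a misbehaving provider")); // false
```

A provider that returns a bare string (or a payload missing `message`) fails this check, which is the "plain text instead of tool call" bug class described above.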
10. Welcome latency
The welcome message is served synchronously by `/api/agent-chat/start`, which runs a full LLM turn before returning. Expect a 1–3 s delay after clicking New chat. This is a known limitation; streaming is not yet implemented.
Known non-goals (do not test)
- Password-based encryption of API keys (deferred; JWT only for now).
- Streaming responses.
- Station chat retirement — legacy station chat is untouched.
- Billing / usage metering for agent chat turns.