# Streaming Chat
Streaming Chat enables your agent to receive model responses incrementally, as they are generated, instead of waiting for the full response.
This is ideal for:
- Live agent interactions
- Interactive UIs with typing indicators
- Reducing latency for long responses
## Current Status

Streaming is not yet supported in Telex AI. This page describes the planned behavior so agents can prepare for upcoming updates.
## 🔗 Endpoint

`POST /telexai/chat`
### Required Headers

- `X-AGENT-API-KEY`: Your API key
- The model can be specified via the `X-Model` header, a `model` query parameter, or a `model` field in the request body (see the sketch below)
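As a rough illustration of these options, the TypeScript sketch below builds the same request configuration three ways. The host placeholder and the model name are assumptions for illustration only; substitute your real Telex host and a model your deployment supports.

```typescript
// Sketch only: <your-telex-host> is a placeholder, and "example-model"
// is a hypothetical model name, not a documented default.
const BASE_URL = "https://<your-telex-host>";
const apiKey = "YOUR_AGENT_API_KEY"; // placeholder credential

// Option 1: model via the X-Model header
const headers: Record<string, string> = {
  "Content-Type": "application/json",
  "X-AGENT-API-KEY": apiKey,
  "X-Model": "example-model",
};

// Option 2: model via the query string
const urlWithQuery = `${BASE_URL}/telexai/chat?model=example-model`;

// Option 3: model as a field in the JSON request body
const bodyWithModel = JSON.stringify({
  model: "example-model",
  messages: [{ role: "user", content: "Hello" }],
});
```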
## Planned Example

```json
{
  "messages": [
    { "role": "user", "content": "Write a poem about the stars." }
  ],
  "stream": true
}
```
Planned Response: a Server-Sent Events (SSE) stream of message chunks.

```
[data stream begins]
{"role": "assistant", "content": "The stars whisper softly..."}
{"role": "assistant", "content": "... in the midnight sky."}
...
[data stream ends]
```
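A minimal consumer sketch, assuming the planned behavior above holds: POST with `"stream": true`, then read the response body line by line and parse each JSON chunk. The line-delimited framing, the defensive `data:` prefix handling, and the begin/end markers are assumptions about the eventual SSE format.

```typescript
// Hedged sketch of a future streaming consumer. Assumes each line of the
// body is a standalone JSON chunk as in the sample above; a real SSE frame
// may carry a "data: " prefix, which this strips defensively.
async function streamChat(
  apiKey: string,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch("https://<your-telex-host>/telexai/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-AGENT-API-KEY": apiKey,
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Write a poem about the stars." }],
      stream: true,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Emit complete lines; keep any partial trailing line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      const payload = line.replace(/^data:\s*/, "").trim();
      if (!payload || payload.startsWith("[")) continue; // skip stream markers
      const chunk = JSON.parse(payload) as { role: string; content: string };
      onChunk(chunk.content);
    }
  }
}
```

A caller could pass `(text) => process.stdout.write(text)` in Node, or append each chunk to a DOM node in the browser, to produce a live typing effect.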
## Usage Notes

- Set `"stream": true` in the request body to enable streaming
- The future response will use SSE for compatibility with browsers and real-time apps
- Until streaming is officially supported, fall back to standard synchronous responses (see the fallback sketch below)
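One hedged way to prepare for that transition is a wrapper that requests streaming and degrades to a synchronous call when the server rejects it. The rejection check below (any non-OK status) is an assumption; the API may signal unsupported streaming differently.

```typescript
// Hypothetical fallback wrapper: try streaming first, and if the server
// rejects the request (streaming not yet supported), retry synchronously.
// The non-OK-status check is an assumed rejection signal.
async function chatWithFallback(
  apiKey: string,
  messages: Array<{ role: string; content: string }>,
): Promise<Response> {
  const url = "https://<your-telex-host>/telexai/chat"; // placeholder host
  const headers = {
    "Content-Type": "application/json",
    "X-AGENT-API-KEY": apiKey,
  };

  const streamed = await fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify({ messages, stream: true }),
  });
  if (streamed.ok) {
    // The request succeeded (streamed, or synchronous if the flag was
    // ignored); hand the Response to a reader like streamChat above.
    return streamed;
  }

  // Fall back to a standard synchronous request.
  return fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify({ messages, stream: false }),
  });
}
```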
## Next
- Return to Chat Overview
- Explore Contextual Chat for multi-turn conversations
- See System & Developer Instructions to shape agent behavior