How It Works
The /chat endpoint enables your agents to interact with Telex AI models by sending messages and receiving AI-generated responses.
Agents can:
- Ask one-time questions (stateless)
- Hold multi-turn conversations (contextual)
- Shape model behavior via instructions
- Receive responses as a live stream (coming soon)
The flexibility of this endpoint makes it ideal for building advanced agentic systems integrated with Telex.
🔗 Endpoint
POST /chat
Required Headers
- X-AGENT-API-KEY: Your API key
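For example, a minimal one-turn request could look like the sketch below (TypeScript). The base URL, the environment variable name, and the response handling are assumptions for illustration, not part of this reference.

```typescript
// Minimal sketch of a POST /chat call with the required header.
// BASE_URL and TELEX_AGENT_API_KEY are placeholders — substitute your real values.
const BASE_URL = "https://your-telex-api-host";

async function askOnce(question: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-AGENT-API-KEY": process.env.TELEX_AGENT_API_KEY ?? "",
    },
    // Model selection is omitted here — see "How to Specify a Model" below.
    body: JSON.stringify({
      messages: [{ role: "user", content: question }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`Chat request failed with status ${res.status}`);
  return res.json(); // the response body shape is not covered in this section
}
```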
How to Specify a Model
Select a model by specifying its model ID in one of the following ways:
- Using the request headers
- Using the query parameter
- Using the request body
Using the request header
Example
```
X-AGENT-API-KEY: your-key
X-Model: deepseek/deepseek-r1-0528:free
```
Using the query parameter
Example
```
POST /chat?model=deepseek/deepseek-r1-0528:free
```
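In agent code, the model ID can simply be appended as a query string. A small sketch, reusing the placeholder BASE_URL from the earlier example; note that the `/` and `:` in the model ID are percent-encoded, which the server is expected to decode:

```typescript
// Sketch: select the model via the query string.
const chatUrl = new URL("/chat", BASE_URL);
chatUrl.searchParams.set("model", "deepseek/deepseek-r1-0528:free");
// chatUrl.toString() -> ".../chat?model=deepseek%2Fdeepseek-r1-0528%3Afree"
```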
Using the request body
Example
```json
{
  "model": "deepseek/deepseek-r1-0528:free",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "stream": false
}
```
Agents should dynamically select models using the Model Discovery endpoint.
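As a rough sketch of that pattern (the `/models` path and the `{ id }` field below are assumptions — consult the Model Discovery reference for the actual contract):

```typescript
// Hypothetical sketch: discover a model at runtime, then pass it in the
// chat request body. Reuses BASE_URL and the API key from the earlier sketch.
async function chatWithDiscoveredModel(prompt: string): Promise<unknown> {
  const apiKey = process.env.TELEX_AGENT_API_KEY ?? "";

  // Assumed Model Discovery call — path and response shape are illustrative.
  const modelsRes = await fetch(`${BASE_URL}/models`, {
    headers: { "X-AGENT-API-KEY": apiKey },
  });
  const models: Array<{ id: string }> = await modelsRes.json();
  const model = models[0]?.id ?? "deepseek/deepseek-r1-0528:free"; // fallback

  const chatRes = await fetch(`${BASE_URL}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-AGENT-API-KEY": apiKey },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  return chatRes.json();
}
```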
Ways to Use This Endpoint
There are multiple ways to interact with /chat, depending on your use case:
- Single message / one-turn conversation
- Multi-turn conversations with history
- System & Developer Instructions: guide model behavior with roles like `system` or `developer` (see the sketch after this list)
- Live, incremental responses (planned)
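The multi-turn and instruction-driven styles can be combined by keeping the message history on the agent side and resending it each turn. A sketch under the same assumptions as the earlier examples; the `assistant` role for model replies and the `reply` response field follow common chat-API conventions and are assumptions here:

```typescript
// Sketch: multi-turn conversation steered by a system instruction.
// The agent owns the history and resends the whole array every turn.
type ChatMessage = {
  role: "system" | "developer" | "user" | "assistant";
  content: string;
};

const history: ChatMessage[] = [
  { role: "system", content: "You are a concise assistant for Telex agents." },
];

async function sendTurn(userInput: string): Promise<string> {
  history.push({ role: "user", content: userInput });

  const res = await fetch(`${BASE_URL}/chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-AGENT-API-KEY": process.env.TELEX_AGENT_API_KEY ?? "",
      "X-Model": "deepseek/deepseek-r1-0528:free", // model selected via header
    },
    body: JSON.stringify({ messages: history, stream: false }),
  });

  const data = await res.json();
  const reply: string = data?.reply ?? ""; // "reply" field name is an assumption
  history.push({ role: "assistant", content: reply });
  return reply;
}
```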
Next Steps
Before sending chat requests:
- Discover available models → Model Discovery
- Set up secure access → Authentication