Model Discovery
Before an agent can send messages or perform AI tasks, it must first discover which models are available to it. The /models endpoint allows your agent to dynamically retrieve the list of models it can access.
This enables agents to:
- Select the right model for their purpose.
- Adapt to newly added models without requiring code changes.
- Respect user subscriptions and permissions.
Endpoint
GET /models
Example Request
curl -X GET https://api.telex.im/api/v1/telexai/models \
-H "X-AGENT-API-KEY: tlx-agent-abc123xyz"
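The same request can be made from code. Below is a minimal sketch using only Python's standard library; the base URL and header come from the curl example above, and the API key shown is the same placeholder, not a real credential.

```python
# Sketch: calling GET /models from Python with the standard library only.
# The API key is the placeholder from the docs; substitute your agent's key.
import json
import urllib.request

BASE_URL = "https://api.telex.im/api/v1/telexai"

def list_models(api_key):
    """Call GET /models and return the model entries under data.data."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"X-AGENT-API-KEY": api_key},
        method="GET",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
    # The response envelope nests the model list under data.data.
    return body["data"]["data"]

# Usage (performs a live request):
#   for model in list_models("tlx-agent-abc123xyz"):
#       print(model["id"])
```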
Response Format
{
  "status": "success",
  "status_code": 200,
  "message": "all models listed successfully",
  "data": {
    "data": [
      {
        "id": "sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b",
        "hugging_face_id": "SentientAGI/Dobby-Mini-Unhinged-Plus-Llama-3.1-8B",
        "name": "SentientAGI: Dobby Mini Plus Llama 3.1 8B",
        "created": 1748885619,
        "description": "Dobby-Mini-Leashed-Llama-3.1-8B and Dobby-Mini-Unhinged-Llama-3.1-8B are language models fine-tuned from Llama-3.1-8B-Instruct. Dobby models have a strong conviction towards personal freedom, decentralization, and all things crypto – even when coerced to speak otherwise. \n\nDobby-Mini-Leashed-Llama-3.1-8B and Dobby-Mini-Unhinged-Llama-3.1-8B have their own unique, uhh, personalities. The two versions are being released to be improved using the community's feedback, which will steer the development of a 70B model.\n\n",
        "context_length": 131072,
        "architecture": {
          "modality": "text->text",
          "input_modalities": ["text"],
          "output_modalities": ["text"],
          "tokenizer": "Other",
          "instruct_type": null
        },
        "pricing": {
          "prompt": "0.0000002",
          "completion": "0.0000002",
          "request": "0",
          "image": "0",
          "web_search": "0",
          "internal_reasoning": "0"
        },
        "top_provider": {
          "context_length": 131072,
          "max_completion_tokens": null,
          "is_moderated": false
        },
        "per_request_limits": null,
        "supported_parameters": ["max_tokens", "temperature", "top_p", "stop", "..."]
      },
      {
        "id": "deepseek/deepseek-r1-distill-qwen-7b",
        "hugging_face_id": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
        "name": "DeepSeek: R1 Distill Qwen 7B",
        "created": 1748628237,
        "description": "DeepSeek-R1-Distill-Qwen-7B is a 7 billion parameter dense language model distilled from DeepSeek-R1, leveraging reinforcement learning-enhanced reasoning data generated by DeepSeek's larger models. The distillation process transfers advanced reasoning, math, and code capabilities into a smaller, more efficient model architecture based on Qwen2.5-Math-7B. This model demonstrates strong performance across mathematical benchmarks (92.8% pass@1 on MATH-500), coding tasks (Codeforces rating 1189), and general reasoning (49.1% pass@1 on GPQA Diamond), achieving competitive accuracy relative to larger models while maintaining smaller inference costs.",
        "context_length": 131072,
        "architecture": {
          "modality": "text->text",
          "input_modalities": ["text"],
          "output_modalities": ["text"],
          "tokenizer": "Qwen",
          "instruct_type": "deepseek-r1"
        },
        "pricing": {
          "prompt": "0.0000001",
          "completion": "0.0000002",
          "request": "0",
          "image": "0",
          "web_search": "0",
          "internal_reasoning": "0"
        },
        "top_provider": {
          "context_length": 131072,
          "max_completion_tokens": null,
          "is_moderated": false
        },
        "per_request_limits": null,
        "supported_parameters": ["max_tokens", "temperature", "top_p", "reasoning", "include_reasoning", "seed"]
      },
      ...
    ]
  }
}
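Once the response is parsed, an agent can filter on fields like supported_parameters, context_length, and pricing to pick a model for the task at hand. The sketch below is one possible selection strategy (cheapest model meeting the requirements), not a prescribed one; the sample payload is a trimmed copy of the example response above, and pick_model is a hypothetical helper name.

```python
# Sketch: selecting a model from the /models payload based on task needs.
# "sample" is a trimmed version of the documented example response.
sample = {
    "data": {
        "data": [
            {
                "id": "sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b",
                "context_length": 131072,
                "pricing": {"prompt": "0.0000002", "completion": "0.0000002"},
                "supported_parameters": ["max_tokens", "temperature", "top_p", "stop"],
            },
            {
                "id": "deepseek/deepseek-r1-distill-qwen-7b",
                "context_length": 131072,
                "pricing": {"prompt": "0.0000001", "completion": "0.0000002"},
                "supported_parameters": ["max_tokens", "temperature", "top_p",
                                         "reasoning", "include_reasoning", "seed"],
            },
        ]
    }
}

def pick_model(payload, needs, min_context=0):
    """Return the id of the cheapest model (by prompt price) that supports
    every required parameter and meets the context-length floor, or None."""
    candidates = [
        m for m in payload["data"]["data"]
        if needs <= set(m["supported_parameters"])
        and m["context_length"] >= min_context
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda m: float(m["pricing"]["prompt"]))["id"]

# A reasoning task requires the "reasoning" parameter; only the DeepSeek
# model in the sample supports it.
print(pick_model(sample, {"reasoning"}))  # deepseek/deepseek-r1-distill-qwen-7b
```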
Usage Notes
- Agents should not hardcode model IDs; instead, use /models to always fetch the latest supported options.
- Each model may have specific capabilities; agents should select models dynamically based on task requirements.
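Fetching /models on every request is unnecessary; one hedged way to follow the notes above without hardcoding IDs is a short-lived cache that re-fetches the list after a TTL. The class and TTL value below are illustrative, not part of the API.

```python
# Sketch: a small TTL cache so an agent periodically refreshes the model
# list instead of hardcoding IDs. fetch_models stands in for GET /models.
import time

class ModelCache:
    def __init__(self, fetch_models, ttl_seconds=300.0):
        self._fetch = fetch_models   # callable returning the model list
        self._ttl = ttl_seconds
        self._models = None
        self._fetched_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._models is None or now - self._fetched_at > self._ttl:
            self._models = self._fetch()   # re-fetch after TTL expiry
            self._fetched_at = now
        return self._models

# Demonstration with a fake fetcher: two reads, one upstream call.
calls = 0
def fake_fetch():
    global calls
    calls += 1
    return [{"id": "example/model"}]

cache = ModelCache(fake_fetch, ttl_seconds=60)
cache.get()
cache.get()
print(calls)  # 1 -- the second read is served from the cache
```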
Next Steps
Once your agent has selected a model, you can begin interacting via:
- Chat Interaction