Streaming
Pass "stream": true and the gateway returns a Server-Sent Events (SSE) stream. OpenAI SDK streaming mode works out of the box.
Example
```sh
curl https://llmtr.com/v1/chat/completions \
  -H "Authorization: Bearer sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'
```

```python
from openai import OpenAI

client = OpenAI(base_url="https://llmtr.com/v1", api_key="sk_your_key")

stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a short poem."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    print(delta, end="", flush=True)
```

```ts
import OpenAI from "openai";

const client = new OpenAI({ baseURL: "https://llmtr.com/v1", apiKey: "sk_your_key" });

const stream = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Write a short poem." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```

Chunk format
Each chunk is a data: {json} line:
```
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hel"}}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"delta":{"content":"lo"}}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"delta":{},"finish_reason":"stop"}]}
data: [DONE]
```

Usage field
By default the last chunk does not include usage. Set stream_options.include_usage: true on supported models to get token counts in the final chunk.
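For example, a request body that enables the usage chunk would look like this (same fields as the curl example above, plus stream_options):

```json
{
  "model": "openai/gpt-4o",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": true,
  "stream_options": {"include_usage": true}
}
```

Following the OpenAI API convention, the extra final chunk then carries a usage object (prompt_tokens, completion_tokens, total_tokens) alongside an empty choices array.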
Important notes
- Tokens consumed during a dropped connection are still billed.
- Any proxy in front of the gateway must not buffer SSE responses (nginx: proxy_buffering off).
- The stream is a long-lived HTTP connection, not a WebSocket.
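Because the wire format is plain SSE over HTTP, the data: lines shown above can also be decoded without an SDK. A minimal sketch of a line parser (the function name is hypothetical, not part of the gateway):

```python
import json

def parse_sse_line(line: str):
    """Decode one SSE line into a chunk dict.

    Returns None for the [DONE] sentinel and for non-data lines
    (SSE comments and blank keep-alive lines).
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    return json.loads(payload)
```

Feeding each line of the response body through this parser and concatenating choices[0].delta.content (when present) rebuilds the full completion.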