## Documentation Index

Fetch the complete documentation index at: https://docs.dolphy.chat/llms.txt
Use this file to discover all available pages before exploring further.
## Endpoint

The API is OpenAI-compatible, so you can reuse the openai SDK by changing `baseURL`.
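A minimal sketch of that setup, assuming the standard `openai` npm package; the base URL below is a placeholder, since this excerpt does not state the endpoint, and the environment variable name is hypothetical:

```typescript
import OpenAI from "openai";

// Point the official SDK at the compatible endpoint instead of api.openai.com.
const client = new OpenAI({
  baseURL: "<your-dolphy-base-url>", // placeholder: substitute the real endpoint
  apiKey: process.env.DOLPHY_API_KEY, // hypothetical variable name
});

const completion = await client.chat.completions.create({
  model: "venice-uncensored",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```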
## Request body

| Field | Type | Notes |
|---|---|---|
| model | string | Default `venice-uncensored` |
| messages | array | Up to 200 messages, OpenAI shape |
| temperature | number | 0–2 |
| top_p | number | 0–1 |
| max_tokens | integer | Up to 8192 |
| stream | boolean | SSE streaming if true |
| stop | string \| string[] | Stop sequences |
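The limits in the table can be checked client-side before sending a request. A sketch of such a validator, with field names taken from the table (the validator itself is illustrative, not part of the API):

```typescript
// Shape of the request body per the table above.
type ChatRequest = {
  model?: string;
  messages: { role: string; content: string }[];
  temperature?: number;
  top_p?: number;
  max_tokens?: number;
  stream?: boolean;
  stop?: string | string[];
};

// Returns a list of constraint violations; empty means the body looks valid.
function validateRequest(req: ChatRequest): string[] {
  const errors: string[] = [];
  if (req.messages.length === 0 || req.messages.length > 200)
    errors.push("messages must contain 1-200 entries");
  if (req.temperature !== undefined && (req.temperature < 0 || req.temperature > 2))
    errors.push("temperature must be in [0, 2]");
  if (req.top_p !== undefined && (req.top_p < 0 || req.top_p > 1))
    errors.push("top_p must be in [0, 1]");
  if (req.max_tokens !== undefined && (req.max_tokens < 1 || req.max_tokens > 8192))
    errors.push("max_tokens must be in [1, 8192]");
  return errors;
}
```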
## Streaming

Set `stream: true` to receive token-by-token SSE chunks in the OpenAI format: `data: {…}\n\n` events, ending with `data: [DONE]`.
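The event framing described above can be parsed like so. This is a sketch over a buffered string for illustration (a real client would parse incrementally as bytes arrive); the chunk shape assumes the OpenAI delta format:

```typescript
// Events are separated by blank lines; each payload line starts with
// "data: "; the stream terminates with "data: [DONE]".
function parseSseChunks(buffer: string): any[] {
  const payloads: any[] = [];
  for (const event of buffer.split("\n\n")) {
    const line = event.trim();
    if (!line.startsWith("data: ")) continue;
    const data = line.slice("data: ".length);
    if (data === "[DONE]") break; // end-of-stream sentinel
    payloads.push(JSON.parse(data));
  }
  return payloads;
}

const sample =
  'data: {"choices":[{"delta":{"content":"Hel"}}]}\n\n' +
  'data: {"choices":[{"delta":{"content":"lo"}}]}\n\n' +
  "data: [DONE]\n\n";
const text = parseSseChunks(sample)
  .map((c) => c.choices[0].delta.content)
  .join("");
```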
## Billing

1 credit per 10,000 tokens (input + output), with a minimum of 1 credit per call. For streamed responses, the final chunk includes the `usage` block used to compute the bill.
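The cost rule above can be sketched as a small function. One assumption: partial blocks of 10,000 tokens are rounded up, which this page does not state explicitly:

```typescript
// 1 credit per 10,000 tokens (input + output combined), minimum 1 per call.
// Rounding up on partial blocks is an assumption, not documented here.
function creditsFor(inputTokens: number, outputTokens: number): number {
  const total = inputTokens + outputTokens;
  return Math.max(1, Math.ceil(total / 10_000));
}
```

For streamed calls, the token counts would come from the `usage` block in the final chunk.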
## Errors

| Status | When |
|---|---|
| 401 | Invalid or revoked API key |
| 402 | Insufficient credits |
| 422 | Content policy rejection from upstream |
| 429 | Rate limit (60/min per key) |
| 502 | Upstream provider error |
| 503 | Provider temporarily unavailable |