LLM Gateway

Create Video

Creates a new asynchronous video generation job using an OpenAI-compatible request format.

POST
/v1/videos
Authorization: Bearer <token>

Bearer token authentication using API keys

In: header

model? string

The video generation model to use. Supports current Veo and Sora video models, including provider-prefixed variants like openai/sora-2 or avalanche/veo-3.1-generate-preview.

Default: "veo-3.1-generate-preview"
prompt string

Text prompt describing the video to generate.

Length: 1 <= length
size? string

Output resolution in OpenAI widthxheight format. Supported values depend on the selected model and provider mapping.

callback_url? string

LLMGateway extension. When set, a signed webhook is delivered after the job reaches a terminal state.

Format: uri
callback_secret? string

LLMGateway extension. Shared secret used to sign webhook deliveries with HMAC-SHA256.

Length: 1 <= length
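When callback_url and callback_secret are set, deliveries can be verified by recomputing the HMAC-SHA256 of the raw webhook body with the shared secret. A minimal sketch, assuming a hex-encoded signature over the raw body (the header that carries the signature, and the exact encoding, are assumptions, not documented here):

```python
import hashlib
import hmac


def verify_webhook(secret: str, body: bytes, signature: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare it
    to the received signature in constant time."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


# Example: a delivery signed with the shared callback_secret.
secret = "my-callback-secret"  # value sent as callback_secret
body = b'{"id": "vid_123", "status": "completed"}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(secret, body, sig))  # True
```

Always verify against the raw bytes of the request body; re-serializing parsed JSON can change key order or whitespace and break the comparison.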
input_reference? string
last_frame? string
seconds integer

Output duration in seconds. Supported values depend on the selected model and provider mapping.

Range: 1 <= value
audio? boolean

Whether the generated video should include audio. Support depends on the selected model and provider mapping.

Default: true
n? integer
image? string
reference_images? array

One to three reference images for provider-specific asset or material-guided video generation.

Items: 1 <= items <= 3

Response Body

application/json

curl -X POST "https://api.llmgateway.io/v1/videos" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A cinematic drone shot flying through a neon-lit futuristic city at night",
    "seconds": 8
  }'
{
  "id": "string",
  "object": "video",
  "model": "string",
  "status": "queued",
  "progress": 100,
  "created_at": 0,
  "completed_at": 0,
  "expires_at": 0,
  "error": {
    "code": "string",
    "message": "string",
    "details": null
  },
  "content": [
    {
      "type": "video",
      "url": "http://example.com",
      "mime_type": "string"
    }
  ]
}
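Because the job is asynchronous, clients typically inspect the returned object's status, error, and content fields to decide what to do next. A minimal sketch of that handling, assuming "completed" and "failed" are among the terminal status values (the schema above only shows "queued"):

```python
import json


def summarize_job(raw: str) -> str:
    """Turn a create-video response body into a one-line summary,
    based on the documented id/status/progress/error/content fields."""
    job = json.loads(raw)
    if job.get("error"):
        # A populated error object signals a terminal failure.
        return f'{job["id"]}: failed ({job["error"]["message"]})'
    if job.get("status") == "completed" and job.get("content"):
        # content holds the generated video asset(s).
        return f'{job["id"]}: ready at {job["content"][0]["url"]}'
    # Still in flight (e.g. "queued"); report progress.
    return f'{job["id"]}: {job["status"]} ({job.get("progress", 0)}%)'


response = '{"id": "vid_123", "object": "video", "status": "queued", "progress": 0, "created_at": 0}'
print(summarize_job(response))  # vid_123: queued (0%)
```

If callback_url is set, the same terminal-state check applies to the webhook payload instead of a polled response.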

