# openai-oxide
Feature-complete OpenAI client for Rust, Node.js, and Python.
openai-oxide implements the full Responses API, Chat Completions, and 20+ other endpoints with persistent WebSockets, hedged requests, streaming with early-parse, and type-safe Structured Outputs.
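The streaming endpoints ride on server-sent events (SSE). As a rough, crate-independent sketch of what the parser has to do (the function name here is illustrative, not openai-oxide's actual API), each event is blank-line delimited and carries its JSON chunk on a `data:` line, with `[DONE]` as the end-of-stream sentinel:

```rust
// Minimal sketch of SSE framing: events are separated by a blank line,
// each `data:` line carries one JSON chunk, and `[DONE]` ends the stream.
// Illustrative only -- not this crate's real parser.
fn parse_sse(buffer: &str) -> Vec<&str> {
    buffer
        .split("\n\n")                          // events are blank-line separated
        .flat_map(|event| event.lines())        // an event may span several lines
        .filter_map(|line| line.strip_prefix("data: "))
        .filter(|payload| *payload != "[DONE]") // end-of-stream sentinel
        .collect()
}

fn main() {
    let raw = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n\n";
    let chunks = parse_sse(raw);
    assert_eq!(chunks, vec!["{\"delta\":\"Hel\"}", "{\"delta\":\"lo\"}"]);
    println!("{} chunks", chunks.len());
}
```

A real implementation additionally has to handle events split across network reads, which is where the per-chunk parsing speed claimed above matters.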
## Why openai-oxide?
- Streaming — SSE parser with anti-buffering headers, 2.5x faster per-chunk vs official JS SDK
- WebSocket Mode — Persistent `wss://` connections, 29-44% faster on multi-turn benchmarks
- Stream FC Early Parse — Execute tools the moment `arguments.done` arrives
- Structured Outputs — `parse::<T>()` auto-generates a JSON schema from Rust types
- Hedged Requests — Send redundant requests, cancel the slower (technique from Google’s “The Tail at Scale”)
- WASM First-Class — Full streaming in Cloudflare Workers and browsers
- 1100+ Types — Auto-synced from the Python SDK via the `openai-types` crate
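The hedging idea from “The Tail at Scale” can be sketched with nothing but the standard library: launch the same work on two paths and keep whichever answer arrives first. The sleeps below stand in for network latency; a real client races two HTTP calls and cancels the loser. All names are illustrative, not this crate's API.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Sketch of request hedging: run the same request twice, return the first
// result. The channel's unused second message is simply dropped.
fn hedged<F>(request: F) -> String
where
    F: Fn(Duration) -> String + Clone + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    for latency in [Duration::from_millis(80), Duration::from_millis(10)] {
        let tx = tx.clone();
        let request = request.clone();
        thread::spawn(move || {
            let _ = tx.send(request(latency)); // the loser's send is ignored
        });
    }
    rx.recv().expect("at least one attempt completes") // first result wins
}

fn main() {
    let answer = hedged(|latency| {
        thread::sleep(latency); // stand-in for a network round-trip
        format!("replied after {:?}", latency)
    });
    println!("{answer}");
}
```

In practice the second request is usually sent only after the first exceeds a latency percentile, so the extra load stays small while tail latency drops.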
## Packages
| Package | Registry | Install |
|---|---|---|
| `openai-oxide` | crates.io | `cargo add openai-oxide` |
| `openai-types` | crates.io | `cargo add openai-types` |
| `openai-oxide` | npm | `npm install openai-oxide` |
| `openai-oxide` | PyPI | `pip install openai-oxide` |
## OpenAI Compatibility
Parameter names match the official Python SDK exactly. If the OpenAI docs show `model="gpt-5.4"`, use `.model("gpt-5.4")` in Rust or `{model: "gpt-5.4"}` in Node.js.
See the OpenAI Docs Mapping for a complete cross-reference.
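To make the naming convention concrete, here is a toy builder (a hypothetical stand-in, not openai-oxide's real request type) showing how a Python SDK call transliterates field-for-field:

```rust
// Toy builder illustrating the convention: every method keeps the official
// Python SDK's snake_case parameter name. Hypothetical types, for
// illustration only.
#[derive(Default)]
struct Request {
    model: String,
    max_output_tokens: Option<u32>,
}

impl Request {
    fn model(mut self, model: &str) -> Self {
        self.model = model.to_string();
        self
    }
    fn max_output_tokens(mut self, n: u32) -> Self {
        self.max_output_tokens = Some(n);
        self
    }
}

fn main() {
    // Python: client.responses.create(model="gpt-5.4", max_output_tokens=256)
    let req = Request::default().model("gpt-5.4").max_output_tokens(256);
    assert_eq!(req.model, "gpt-5.4");
    assert_eq!(req.max_output_tokens, Some(256));
}
```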