Chat Completions
Send a list of messages to a GPT model and receive a generated completion in response. This is the most commonly used API for conversational AI.
See the official Chat Completions guide and API reference.
```rust
//! Basic chat completion example.
//!
//! Run with: `OPENAI_API_KEY=sk-... cargo run --example chat`

use openai_oxide::OpenAI;
use openai_oxide::types::chat::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = OpenAI::from_env()?;

    let request = ChatCompletionRequest::new(
        "gpt-4o-mini",
        vec![
            ChatCompletionMessageParam::System {
                content: "You are a helpful assistant.".into(),
                name: None,
            },
            ChatCompletionMessageParam::User {
                content: UserContent::Text("What is the capital of France?".into()),
                name: None,
            },
        ],
    );

    let response = client.chat().completions().create(request).await?;

    for choice in &response.choices {
        println!(
            "[{}] {}",
            choice.finish_reason,
            choice.message.content.as_deref().unwrap_or("")
        );
    }

    if let Some(usage) = &response.usage {
        println!(
            "\nTokens: {} prompt + {} completion = {} total",
            usage.prompt_tokens.unwrap_or(0),
            usage.completion_tokens.unwrap_or(0),
            usage.total_tokens.unwrap_or(0),
        );
    }

    Ok(())
}
```
Run: `OPENAI_API_KEY=sk-... cargo run --example chat`
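Under the hood, a chat completion call is an HTTP POST of a JSON body to `https://api.openai.com/v1/chat/completions`. As a rough sketch of the wire format the request above corresponds to (the `chat_request_body` helper below is illustrative only, not part of openai_oxide, and skips the JSON escaping a real client performs):

```rust
/// Illustrative only: builds the JSON body that the Chat Completions
/// endpoint (POST https://api.openai.com/v1/chat/completions) expects
/// for one system message and one user message. A real client would
/// serialize with a JSON library to get proper string escaping.
fn chat_request_body(model: &str, system: &str, user: &str) -> String {
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"system","content":"{system}"}},{{"role":"user","content":"{user}"}}]}}"#
    )
}

fn main() {
    let body = chat_request_body(
        "gpt-4o-mini",
        "You are a helpful assistant.",
        "What is the capital of France?",
    );
    println!("{body}");
}
```

The crate handles this serialization (plus the `Authorization: Bearer $OPENAI_API_KEY` header) for you; the sketch is only meant to show what actually goes over the wire.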
Next Steps
- Streaming — Stream chat completion tokens as they arrive
- Function Calling — Let the model call your functions
- Structured Output — Get JSON responses matching a schema