OpenAI API & ChatGPT
Complete guide to OpenAI's API platform, ChatGPT models, and enterprise integration strategies
Overview
The OpenAI API provides access to powerful language models, including GPT-4, GPT-3.5 Turbo, and specialized models for specific tasks. It lets developers integrate advanced AI features into their applications through robust API endpoints backed by comprehensive documentation.
GPT-4 Turbo
Latest model with 128K context window and improved instruction following
Fine-tuning API
Customize models for specific domains and use cases with proprietary data
Assistants API
Build conversational AI agents with persistent threads and file search
Getting Started
API Key Setup
- Create an account at OpenAI Platform
- Navigate to API Keys section and create a new secret key
- Set up usage limits and billing information
- Secure your API key using environment variables
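The last step above can be sketched as a small Python helper that reads the key from an environment variable and fails fast when it is missing (the helper name is illustrative, not part of the SDK):

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Read the API key from the environment; never hard-code it in source."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before starting the app")
    return key
```

The returned value can be passed straight to the client constructor, e.g. `OpenAI(api_key=load_api_key())`.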
Platform Features
- Playground: Interactive environment for testing prompts and models
- Usage Dashboard: Monitor API usage and costs in real-time
- Documentation: Comprehensive guides and API references
- Community: Active developer community and support forums
Available Models
GPT-4 Turbo
- Context: 128K tokens
- Use Cases: Complex reasoning, advanced applications
- Pricing: $10/1M input, $30/1M output tokens
- Features: JSON mode, reproducible outputs
GPT-4
- Context: 8K tokens
- Use Cases: Advanced reasoning, creative tasks
- Pricing: $30/1M input, $60/1M output tokens
- Features: Strong reasoning capabilities
GPT-3.5 Turbo
- Context: 16K tokens
- Use Cases: Cost-effective applications, chatbots
- Pricing: $0.50/1M input, $1.50/1M output tokens
- Features: Fast response times, reliable performance
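The per-token prices above translate directly into per-request cost. A rough estimator in Python, using the rates from the table (prices change over time, so treat these constants as a snapshot):

```python
# Price in dollars per 1M tokens (input, output), taken from the table above
PRICING = {
    "gpt-4-turbo": (10.00, 30.00),
    "gpt-4": (30.00, 60.00),
    "gpt-3.5-turbo": (0.50, 1.50),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the dollar cost of a single request for a given model."""
    input_rate, output_rate = PRICING[model]
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
```

For example, `estimate_cost("gpt-3.5-turbo", 1000, 500)` returns 0.00125, i.e. about an eighth of a cent, which is why GPT-3.5 Turbo is the usual choice for cost-sensitive workloads.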
API Integration
JavaScript Implementation
// Using the official OpenAI package
npm install openai

// Basic chat completion
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant."
      },
      {
        role: "user",
        content: "Explain machine learning in simple terms"
      }
    ],
    model: "gpt-3.5-turbo",
    max_tokens: 500,
    temperature: 0.7,
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
Python Implementation
# Install OpenAI package
pip install openai

# Basic implementation
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are the benefits of renewable energy?"}
    ],
    max_tokens=500,
    temperature=0.7
)
print(response.choices[0].message.content)

# Streaming response
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
Advanced Features
Function Calling
// Function calling example
// (functions / function_call are the legacy parameters; newer SDK
// versions prefer the equivalent tools / tool_choice parameters)
const functions = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA"
        },
        unit: {
          type: "string",
          enum: ["celsius", "fahrenheit"]
        }
      },
      required: ["location"]
    }
  }
];

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{role: "user", content: "What's the weather in London?"}],
  functions: functions,
  function_call: "auto"
});
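When the model chooses to call the function, the response carries the function name and its arguments as a JSON string; your code must parse the arguments and dispatch to a real implementation. A minimal dispatcher sketch in Python (`get_current_weather` here is a stand-in, not a real weather API):

```python
import json

def get_current_weather(location, unit="celsius"):
    # Stand-in implementation; a real app would query a weather service
    return {"location": location, "temperature": 18, "unit": unit}

# Map function names the model may emit to local handlers
HANDLERS = {"get_current_weather": get_current_weather}

def dispatch_function_call(function_call):
    """Route a model function call (name + JSON-encoded args) to a handler."""
    handler = HANDLERS[function_call["name"]]
    args = json.loads(function_call["arguments"])
    return handler(**args)
```

The handler's return value would then be sent back to the model in a follow-up message so it can compose its final answer.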
Assistants API
// Create and manage assistants
const assistant = await openai.beta.assistants.create({
  name: "Math Tutor",
  instructions: "You are a personal math tutor. Help solve problems and explain concepts.",
  tools: [{type: "code_interpreter"}],
  model: "gpt-4-1106-preview"
});

// Create a thread
const thread = await openai.beta.threads.create();

// Add a message to the thread
await openai.beta.threads.messages.create(
  thread.id,
  {role: "user", content: "Explain the Pythagorean theorem"}
);

// Run the assistant on the thread to generate a response
const run = await openai.beta.threads.runs.create(
  thread.id,
  {assistant_id: assistant.id}
);
Best Practices
Cost Optimization
- Use GPT-3.5 Turbo for cost-sensitive applications
- Implement caching for repeated queries
- Set maximum token limits appropriately
- Monitor usage through the dashboard regularly
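The caching point above can be sketched as a small in-memory memo keyed by model and prompt, so identical queries never hit the API twice (a production deployment would more likely use a shared store such as Redis with a TTL):

```python
# In-memory cache keyed by (model, prompt); lives for the process lifetime
_cache = {}

def cached_completion(client_call, model, prompt):
    """Return a cached answer for repeated (model, prompt) pairs.

    client_call is any function (model, prompt) -> str, e.g. a thin
    wrapper around client.chat.completions.create.
    """
    key = (model, prompt)
    if key not in _cache:
        _cache[key] = client_call(model, prompt)
    return _cache[key]
```

Note that caching only pays off for deterministic-enough use cases (FAQ bots, classification); with high temperature settings, callers may actually want fresh responses.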
Error Handling
async function robustAPICall(prompt, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [{role: "user", content: prompt}],
        max_tokens: 500
      });
      return response;
    } catch (error) {
      if (error.status === 429) {
        // Rate limit - wait with exponential backoff, then retry
        await new Promise(resolve =>
          setTimeout(resolve, Math.pow(2, attempt) * 1000)
        );
        continue;
      }
      if (error.status === 503) {
        // Service unavailable - wait a fixed 5 s, then retry
        await new Promise(resolve => setTimeout(resolve, 5000));
        continue;
      }
      throw error;
    }
  }
  throw new Error("Max retries exceeded");
}
Use Cases
Content Creation
Automated article writing, email generation, social media content, and creative writing assistance
Customer Support
Intelligent chatbots, ticket classification, and automated response systems
Code Generation
Code completion, bug fixing, documentation, and programming assistance
Data Analysis
Text summarization, sentiment analysis, and data extraction from unstructured text