OpenAI Backend
The openai backend integrates with OpenAI’s GPT models and Azure OpenAI deployments.
Configuration
OpenAI API
Set the OPENAI_API_KEY environment variable:
export OPENAI_API_KEY="sk-..."
Or store it in ~/.config/kanoa/.env:
mkdir -p ~/.config/kanoa
echo "OPENAI_API_KEY=your-api-key-here" >> ~/.config/kanoa/.env
Initialize with:
from kanoa import AnalyticsInterpreter
interpreter = AnalyticsInterpreter(
    backend="openai",
    model="gpt-4o",  # or 'gpt-4-turbo', 'gpt-3.5-turbo'
)
Azure OpenAI
For Azure OpenAI deployments:
interpreter = AnalyticsInterpreter(
    backend="openai",
    api_base="https://your-resource.openai.azure.com/openai/deployments/your-deployment",
    api_key="your-azure-key",
    api_version="2024-02-01",
)
Or configure via environment variables:
# ~/.config/kanoa/.env
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
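With those variables in place, the inline credentials are no longer needed. A minimal sketch, assuming kanoa resolves the AZURE_OPENAI_* variables from the environment or ~/.config/kanoa/.env at startup:

from kanoa import AnalyticsInterpreter

# Endpoint, deployment, and key are read from AZURE_OPENAI_ENDPOINT,
# AZURE_OPENAI_DEPLOYMENT, and AZURE_OPENAI_API_KEY.
interpreter = AnalyticsInterpreter(backend="openai")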
Supported Models
Recommended Models
gpt-4o: Latest and most capable multimodal model
gpt-4-turbo: Fast, high-quality responses with vision support
gpt-4: Most capable for complex reasoning tasks
gpt-3.5-turbo: Cost-effective for simpler tasks
The gpt-4o and gpt-4-turbo models support vision capabilities (they can process matplotlib figures); gpt-3.5-turbo is text-only.
Features
Vision Capabilities
The vision-capable GPT-4 models (gpt-4o and gpt-4-turbo) support multimodal inputs. kanoa automatically converts matplotlib figures to images and sends them to the model.
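For example, a figure produced in the current session can be passed directly. This sketch uses the interpret() call shown in the Cost Tracking section below; the figure and context string are purely illustrative:

import matplotlib.pyplot as plt
from kanoa import AnalyticsInterpreter

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [2, 4, 8, 16])
ax.set_title("Weekly signups")

interpreter = AnalyticsInterpreter(backend="openai", model="gpt-4o")
# The figure is rendered to an image and sent alongside the prompt context.
result = interpreter.interpret(fig=fig, context="Describe the growth trend")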
Knowledge Base
The OpenAI backend supports both Text and PDF knowledge bases:
# Text knowledge base
interpreter = interpreter.with_kb(kb_path="data/docs")  # Auto-detects file types

# PDF knowledge base (same call; PDF files in the directory are detected
# automatically; the path here is illustrative)
interpreter = interpreter.with_kb(kb_path="data/papers")
Cost Tracking
kanoa automatically tracks token usage and costs for OpenAI models:
result = interpreter.interpret(fig=plt.gcf(), context="Analysis")
print(f"Cost: ${result.usage.total_cost:.4f}")
print(f"Input tokens: {result.usage.prompt_tokens}")
print(f"Output tokens: {result.usage.completion_tokens}")
Pricing is based on current OpenAI API rates and updated regularly.
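Because usage is attached to every result, per-session totals can be accumulated directly. A minimal sketch using only the fields shown above; the analyses list is a stand-in for your own figures and contexts:

total_cost = 0.0
for fig, context in analyses:  # analyses: illustrative list of (figure, context) pairs
    result = interpreter.interpret(fig=fig, context=context)
    total_cost += result.usage.total_cost
print(f"Session total: ${total_cost:.4f}")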
Advanced Configuration
Custom Parameters
interpreter = AnalyticsInterpreter(
    backend="openai",
    model="gpt-4o",
    temperature=0.7,
    max_tokens=2048,
    top_p=0.9,
)
Streaming Responses
# Streaming is not yet supported in the current version
# Coming in a future release