Ship AI in Minutes
Airbolt is the only way to integrate AI into your app with zero backend code.
Currently in public beta.
Get Started Fast
Get your AI app running in a few simple steps. No complex infrastructure or backend code required.
Create an Airbolt project
Sign up, create a project, and add your OpenAI API key. No credit card required.
Set up SDK
Install with your favorite package manager and initialize the SDK in just a few lines of code.
Start building
Make your first API call in minutes. Toggle LLM settings in the dashboard with just a few clicks.
That's It. Really.
Here's all the code you need to start using AI in your app:
import { ChatInterface } from '@airbolt/sdk/react';

function App() {
  return (
    <ChatInterface
      projectId="proj_your_project_id"
    />
  );
}
Built for Builders
Everything you need to integrate AI into your applications, without the complexity.
Zero Backend
Just add our SDK to any client app and start building. We handle all the backend complexity for you.
Secure by Default
Short-lived tokens and per-user rate limits built in. Zero configuration needed.
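The per-user rate limiting described here can be pictured as a fixed-window counter keyed by user ID. This is a rough illustrative sketch of the concept only, not Airbolt's actual implementation:

```typescript
// Illustrative fixed-window, per-user rate limiter.
// A sketch of the idea only — not Airbolt's implementation.
type Window = { count: number; resetAt: number };

class PerUserRateLimiter {
  private windows = new Map<string, Window>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if this user's request fits in the current window.
  allow(userId: string, now: number = Date.now()): boolean {
    const w = this.windows.get(userId);
    if (!w || now >= w.resetAt) {
      // First request, or the previous window expired: start a new one.
      this.windows.set(userId, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count < this.limit) {
      w.count++;
      return true;
    }
    return false; // Limit reached for this user in this window.
  }
}

// Example: 2 requests per second, tracked per user.
const limiter = new PerUserRateLimiter(2, 1000);
console.log(limiter.allow('alice', 0));    // true
console.log(limiter.allow('alice', 10));   // true
console.log(limiter.allow('alice', 20));   // false (alice hit her limit)
console.log(limiter.allow('bob', 20));     // true  (separate user)
console.log(limiter.allow('alice', 1001)); // true  (new window)
```

The point of keying the window by user is that one chatty client can't exhaust the quota for everyone else.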
Bring Your Own LLM Keys
Your OpenAI keys stay encrypted on our servers, never exposed to browsers. More providers coming soon.
Flexible Integration
Drop in our React chat component, or build custom UIs with hooks and JavaScript/TypeScript APIs. Streaming, state management, and error handling work out of the box.
Coming soon: Multi-provider support • Bring your own auth • iOS/Android SDKs • Dynamic model routing • Tool use • And more
Skip The Boilerplate
Focus on building your app, not writing LLM boilerplate.
Without Airbolt
// components/Chat.tsx
import { useState } from 'react';

type Message = { id: string; role: 'user' | 'assistant'; content: string };

export default function Chat() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  async function sendMessage(e: React.FormEvent) {
    e.preventDefault();
    if (!input.trim()) return;

    const userMessage: Message = {
      id: String(Date.now()),
      role: 'user',
      content: input,
    };
    setMessages(prev => [...prev, userMessage]);
    setInput('');
    setLoading(true);

    try {
      const res = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          messages: [...messages, userMessage].map(m => ({ role: m.role, content: m.content })),
        }),
      });
      if (!res.ok) throw new Error('Request failed');
      const data = await res.json();
      setMessages(prev => [
        ...prev,
        { id: String(Date.now() + 1), role: 'assistant', content: data.reply || '' },
      ]);
    } catch {
      setMessages(prev => [
        ...prev,
        { id: String(Date.now() + 2), role: 'assistant', content: 'Sorry, something went wrong.' },
      ]);
    } finally {
      setLoading(false);
    }
  }

  return (
    <div className="chat">
      <div className="messages">
        {messages.map(m => (
          <div key={m.id} className={m.role === 'user' ? 'user' : 'assistant'}>
            {m.content}
          </div>
        ))}
      </div>
      <form onSubmit={sendMessage} className="input">
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Ask something..."
          disabled={loading}
        />
        <button type="submit" disabled={loading || !input.trim()}>
          {loading ? 'Sending...' : 'Send'}
        </button>
      </form>
    </div>
  );
}
// pages/api/chat.ts
import type { NextApiRequest, NextApiResponse } from 'next';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') return res.status(405).end();
  try {
    const { messages } = req.body as {
      messages: { role: 'user' | 'assistant' | 'system'; content: string }[];
    };
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages,
    });
    res.status(200).json({
      reply: completion.choices[0]?.message?.content ?? '',
    });
  } catch {
    res.status(500).json({ error: 'OpenAI request failed' });
  }
}
With Airbolt
import { ChatInterface } from '@airbolt/sdk/react';

function App() {
  return (
    <ChatInterface
      projectId="proj_your_project_id"
    />
  );
}
Questions? We've Got Answers
Do I need to manage my own API keys?
You bring your own OpenAI API key. We store provider keys encrypted at rest and use them only to fulfill your requests.
Which AI models are supported?
Currently, we support only OpenAI models, but multi-provider support will be added in an upcoming release.
How does pricing work?
Using Airbolt is currently free. Bring your own OpenAI API keys and pay for token usage as you normally would.
Can I use Airbolt in production?
Yes. Airbolt is actively developed and the SDK follows semantic versioning; pin your SDK version in production.
Ready to Ship AI-Powered Features?
Start building your app in minutes, or take a look at our docs first.