Why One AI Model Isn't Enough
Different AI models excel at different things. Claude is exceptional at reasoning and complex architecture work. GPT-4o is fast and reliable. DeepSeek offers incredible value. Groq delivers blazing inference speed. Mistral handles European languages beautifully.
So why would you lock yourself into just one?
With VULK's BYOM (Bring Your Own Model) system, you don't have to choose. Connect your own API keys and use whatever model fits your project — without spending a single VULK credit.
Supported Providers
VULK supports 8 AI providers natively, plus any OpenAI-compatible endpoint:
| Provider | Key Models | Best For |
|----------|------------|----------|
| OpenAI | GPT-4o, GPT-4o-mini, o3 | General-purpose, fast iteration |
| DeepSeek | DeepSeek V3, DeepSeek R1 | Cost-effective, strong reasoning |
| Groq | Llama 3.3 70B, Mixtral | Ultra-fast inference (500+ tok/s) |
| Mistral | Mistral Large, Codestral | European languages, code generation |
| Together AI | Llama 3.1 405B, Qwen 2.5 | Large open-source models |
| Fireworks | FireFunction, Llama 3.3 | Production-grade, low latency |
| Ollama | Any local model | Privacy-first, offline development |
| Custom | Any OpenAI-compatible API | Self-hosted, enterprise setups |
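Because every provider above speaks (or can be fronted by) the OpenAI chat-completions wire format, switching providers is mostly a matter of swapping the base URL and model name. A minimal sketch in Python — the Groq and Ollama base URLs are those providers' documented OpenAI-compatible endpoints, while the request-building helper itself is illustrative, not VULK's actual code:

```python
# Sketch: one request builder for any OpenAI-compatible provider.
# The helper is illustrative, not VULK internals.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "groq": "https://api.groq.com/openai/v1",
    "ollama": "http://localhost:11434/v1",  # local model, no cloud round-trip
}

def build_chat_request(provider: str, model: str, prompt: str) -> dict:
    """Return the URL, headers, and JSON body for a chat-completions call."""
    return {
        "url": f"{PROVIDERS[provider]}/chat/completions",
        "headers": {"Authorization": "Bearer YOUR_API_KEY"},  # your key, not VULK's
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request("groq", "llama-3.3-70b-versatile", "Hello")
```

The same builder works for the Custom row: any self-hosted gateway that accepts this payload shape can be added as one more entry in the provider map.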
How It Works
Setting up BYOM takes about 30 seconds:
1. Go to Settings → API Keys in your VULK dashboard
2. Select your provider from the dropdown
3. Paste your API key — it's validated instantly
4. Choose your model from the auto-populated list
5. Start building — all generations route through your key
Your API key is encrypted with AES-256-GCM before storage. We never log your requests or responses. The key is decrypted only at the moment the API call is made.
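The encrypt-at-rest, decrypt-at-call-time pattern described above can be sketched with the widely used `cryptography` package. This is a simplified illustration, not VULK's implementation — in particular, a real deployment would load the master key from a KMS or environment secret rather than generating it inline:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(plaintext: str, master_key: bytes) -> bytes:
    """AES-256-GCM with a fresh 96-bit nonce per encryption, prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(master_key).encrypt(nonce, plaintext.encode(), None)

def decrypt_api_key(blob: bytes, master_key: bytes) -> str:
    """Decrypt only at call time; GCM's auth tag check fails loudly if the blob was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None).decode()

master = AESGCM.generate_key(bit_length=256)  # simplified: real code fetches this from a KMS
stored = encrypt_api_key("sk-example-123", master)
assert decrypt_api_key(stored, master) == "sk-example-123"
```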
Zero Credits, Full Power
When you use BYOM, your generations go directly to the provider's API. VULK doesn't charge credits for BYOM generations — you only pay the provider's token costs.
This means:
- No credit limits on generation length or complexity
- No throttling based on your VULK plan
- Full model selection — use any model your provider offers
- Same VULK features — preview, deployment, file management all work the same
The Smart Routing Architecture
VULK's model router is intelligent about how it uses your key:
```
User Prompt → Intent Analysis → Model Router
                                  ├── Config files  → Fast model (saves cost)
                                  ├── UI components → Medium model (balanced)
                                  └── Auth/Security → Best model (accuracy)
```
Even with BYOM, VULK's smart routing can optimize your costs by using lighter models for simpler files — if you enable it in settings.
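The routing above boils down to a tier lookup keyed on what kind of file is being generated. A sketch under stated assumptions — the file-kind buckets and tier names are hypothetical, and which concrete model each tier maps to would come from your provider's catalog:

```python
# Hypothetical intent-to-tier table; categories and tiers are
# illustrative, not VULK's actual routing rules.
ROUTES = {
    "config": "fast",      # boilerplate files: cheapest model is fine
    "ui": "medium",        # balanced cost vs. quality
    "auth": "best",        # security-sensitive: accuracy first
    "security": "best",
}

def pick_tier(file_kind: str, smart_routing: bool = True) -> str:
    """Return the model tier for a generated file; use the best model
    when smart routing is disabled or the file kind is unrecognized."""
    if not smart_routing:
        return "best"
    return ROUTES.get(file_kind, "best")

assert pick_tier("config") == "fast"
assert pick_tier("auth") == "best"
assert pick_tier("ui", smart_routing=False) == "best"
```

The defensive fallback to the best tier is the key design choice: an unknown file kind should cost a little more, never silently degrade quality.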
Key Validation
Before saving your key, VULK makes a lightweight validation request to confirm:
- The key is valid and active
- The key has sufficient permissions
- The selected model is accessible
If validation fails, you get a clear error message explaining what to fix — no guessing.
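A validation pass like this typically amounts to one cheap authenticated request (for example, listing available models) and mapping the failure modes to actionable messages. A sketch with hypothetical status handling — the status codes follow common HTTP conventions, and the wording is illustrative rather than VULK's actual copy:

```python
def explain_validation(status: int, model_available: bool = True) -> str:
    """Map a provider's validation response to an actionable message.
    Hypothetical sketch: codes follow common HTTP conventions."""
    if status == 401:
        return "Key rejected: check that you pasted the full key for this provider."
    if status == 403:
        return "Key lacks permission: enable API access for this key."
    if status == 429:
        return "Provider rate limit hit: wait a moment and retry validation."
    if status != 200:
        return f"Provider returned HTTP {status}: check the provider's status page."
    if not model_available:
        return "Key is valid, but the selected model is not enabled for this account."
    return "Key validated."

assert explain_validation(200) == "Key validated."
```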
Who Is BYOM For?
- Power users who generate dozens of apps per day and want predictable costs
- Enterprise teams with existing AI contracts and volume discounts
- Privacy-conscious developers who want to use local models via Ollama
- Open-source enthusiasts who prefer running Llama, Qwen, or Mixtral
- Cost optimizers who found a provider with better pricing for their usage
Getting Started
BYOM is available on all VULK plans, including Free. Head to Settings → API Keys, connect your provider, and start building with any AI model you want.
Your keys. Your models. Your apps. Zero VULK credits.
