About Inference Hub

A free, open directory for comparing AI inference pricing.

What is Inference Hub?

Inference Hub is a comprehensive directory that helps developers, researchers, and teams compare AI inference API providers, consumer AI platforms, and GPU cloud pricing — all in one place. We track 42+ providers, 200+ models, and 700+ pricing entries so you can find the best option for your use case and budget.

Why we built this

The AI inference landscape is fragmented and fast-moving. New providers launch weekly, pricing changes constantly, and comparing options across different billing models is tedious. We built Inference Hub to make it easy to find the cheapest, fastest, or most capable inference option — whether you're looking for an LLM API, image generation, video generation, or audio models.

What we cover

API Providers

Inference API providers like OpenRouter, Together AI, Fireworks, Groq, DeepInfra, fal.ai, Replicate, and more.

Consumer Platforms

AI platforms like ChatGPT, Midjourney, HeyGen, Runway, and other tools built for end users.

GPU Clouds

GPU cloud providers for self-hosting and fine-tuning, with per-GPU-type pricing compared across providers.

Models

200+ models across LLM, image, video, and audio categories, with per-provider pricing comparisons.

Get in touch

Have a suggestion or correction, or want to list your provider? Reach out at info@miranext.net.