Hub

The fastest way to fork and deploy open-source AI.

Customize, launch, and contribute to open-source packages, from GitHub to production.

Built for open source.

Discover, fork, and contribute to community-driven projects.

One-click deployment.

Skip the setup—launch any package straight from GitHub.

Autoscaling endpoints.

Deploy autoscaling endpoints from community templates.
How it Works

From code to cloud.

Deploy, scale, and manage your entire stack
in one streamlined workflow.
Public endpoints

Access ready-to-use public AI endpoints.

Test, integrate, and deploy without provisioning your own infrastructure.
deep-cogito / Deep Cogito v2 Llama 70B
The Deep Cogito v2 Llama 70B model is part of a groundbreaking family of open-source hybrid reasoning LLMs developed under a novel AI paradigm.
Text to Text
qwen / Qwen Image
An image generation foundation model in the Qwen series that achieves significant advances in complex text rendering and precise image editing.
Text to Image
qwen / Qwen3 32B AWQ
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Text to Text
minimax / Minimax Speech 02 HD
MiniMax Speech 02 HD is a high-definition text-to-speech model.
Text to Audio
qwen / Qwen Image LoRA
An image generation foundation model in the Qwen series that achieves significant advances in complex text rendering and precise image editing, with LoRA support.
Text to Image
qwen / Qwen Image Edit
The image editing version of Qwen-Image. Qwen-Image-Edit extends Qwen-Image's unique text rendering capabilities to image editing tasks, enabling precise text editing.
Image to Image
Bytedance / Seedance 1.0 pro
A high-performance video generation model featuring multi-shot storytelling, strong instruction-following, and semantic understanding.
Image to Video
Bytedance / Seedream 3.0
Seedream 3.0 is a native high-resolution bilingual (Chinese-English) image generation foundation model.
Text to Image
Alibaba / Wan 2.2 I2V 720p
Wan 2.2 is an open-source AI video generation model that uses a diffusion transformer architecture and a novel 3D spatio-temporal VAE (Wan-VAE) for image-to-video generation.
Image to Video
Alibaba / Wan 2.2 T2V 720p
Wan 2.2 is an open-source AI video generation model that uses a diffusion transformer architecture and a novel 3D spatio-temporal VAE (Wan-VAE).
Text to Video
black-forest-labs / FLUX.1 Kontext [dev]
FLUX.1 Kontext [dev] is a 12-billion-parameter rectified flow transformer capable of editing images based on text instructions.
Image to Image
Alibaba / Wan 2.1 I2V 720p
Wan 2.1 is an open-source AI video generation model that uses a diffusion transformer architecture and a novel 3D spatio-temporal VAE (Wan-VAE) for image-to-video generation.
Image to Video
Alibaba / Wan 2.1 T2V 720p
Wan 2.1 is an open-source AI video generation model that uses a diffusion transformer architecture and a novel 3D spatio-temporal VAE (Wan-VAE).
Text to Video
black-forest-labs / FLUX.1 Schnell
The fastest and most lightweight FLUX model, ideal for local development, prototyping, and personal use.
Text to Image
black-forest-labs / FLUX.1 [dev]
Offers exceptional prompt adherence, high visual fidelity, and rich image detail.
Text to Image
Community

Join the community.

Build, share, and connect with thousands of developers.
casper_hansen_
Why is Huggingface not adding RunPod as a serverless provider? RunPod is 10-15x cheaper for serverless deployment than AWS and GCP
qtnx_
1.3k spent on the training run, this latest release would not have been possible without runpod
SaaS Wiz
I love runpod
Dwayne
Just discovered @runpod_io 🤯🤯🤯 Per second billing for serverless GPU capacity?! Infinitely scalable?! Whaaaat
rachel
thank u runpod i was doing a training run for work when GCP and cloudflare died 🙏🙏 i appreciate u staying online it finished successfully
Dean Jones
Runpod has great prices as well
YuvrajS9886
Introducing SmolLlama! An effort to make a mini-ChatGPT from scratch! Its based on the Llama (123 M) structure I coded and pre-trained on 10B tokens (10k steps) from the FineWeb dataset from scratch using DDP (torchrun) in PyTorch. Used 2xH100 (SXM) 80GB VRAM from Runpod
SuperHumanEpoch
I have been testing work with @runpod_io last 2 weeks and I've to say the service is pretty amazing. Super awesome UX and DevEX (and plenty of GPU backend choices). It's about ~20% pricier than Lambda labs, but worth it IMO given all the harness and workflow they provide that Lambda doesn't. I'm not associated with them in any way or manner, btw. Just a very happy customer.
dfranke
Shoutout to @runpod_io as I work through my first non-trivial machine learning experiment. They have exactly what you need if you're a hobbyist and their prices are about a fifth of the big cloud providers.
othocs
@runpod_io is so goated, first time trying it today and it's super easy to setup + their ai helper on discord was very helpful If you ever need cpus/gpus I recommend it!
DataEatsWorld
Thanks @runpod_io, loving all of the updates! 👀
skypilot_org
🏃 RunPod is now available on SkyPilot! ✈️ Get high-end GPUs (3x cheaper) with great availability: sky launch --gpus H100 Great thanks to @runpod_io for contributing this integration to join the Sky!
SkotiVi
For anyone annoyed with Amazon's (and Azure's and Google's) gatekeeping on their cloud GPU VMs, I recommend @runpod_io None of the 'prove you really need this much power' bs from the majors Just great pricing, availability, and an intuitive UI
abacaj
Runpod support > lambdalabs support. For on demand GPUs runpod still works the best ime
Mascobot
Apparently, we got a Kaggle silver medal in the @arcprize for being in position 17th out of 1430 teams 🙃 I wish I had more time to spend on it; we worked on it for a couple of weeks for fun with limited compute (HUGE thanks to @runpod_io!)
berliangor
i'm a big fan of @runpod_io they're most reliable GPU provider for training and running your models at scale
DrRogerThomp
Trained a 7B parameter model in just 90 minutes for $0.80 using LoRA + Runpod. Yes, it's possible—and no, you don't need enterprise hardware.
jzlegion
ai engineering is just tweaking config values in a notebook until you run out of runpod credits
AlicanKiraz0
Runpod > Sagemaker, VertexAi, AzureML
Yoeven
The @runpod_io event was amazing! One reason we can boast about fast speeds at @jigsawstack is because the cold boot on runpod GPUs is basically nonexistent!
winglian
Axolotl works out of the box with @runpod_io's Instant Clusters. It's as easy as running this on each node using the Docker images that we ship.
oliviawells
Needed a GPU for a quick job, didn't want to commit to anything long-term. RunPod was perfect for that. Love that I can just spin one up and shut it down after.
Pauline_Cx
I'm proud to be part of the GPU Elite, awarded by @runpod_io 😍
FAQs

Questions? Answers.

Runpod Hub explained.
What is Runpod Hub?
Runpod Hub is a centralized catalog of preconfigured AI repositories that you can browse, deploy, and share. All repos are optimized for Runpod’s Serverless infrastructure, so you can go from discovery to a running endpoint in minutes.
Is Runpod Hub production-ready?
No—the Hub is currently in beta. We’re actively adding features and fixing bugs. Join our Discord if you’d like to give feedback or report issues.
Why should I use Runpod Hub instead of deploying my own containers manually?
One-click deployment: All Hub repos come with prebuilt Docker images and Serverless handlers. You don’t have to write Dockerfiles or manage dependencies.

Configuration UI: We expose common parameters (environment variables, model paths, precision settings, etc.) so you can tweak a repo without touching code.

Built-in testing: Every repo in the Hub has automated build-and-test pipelines. You can trust that the code runs properly on Runpod before you click “Deploy.”

Save time: Instead of cloning a repo, installing dependencies, and debugging runtime issues, you can launch a vetted endpoint in minutes.
Who benefits from using the Hub?
End users/Developers: Quickly find and run popular AI models (LLMs, Stable Diffusion, OCR, etc.) without setup headaches. Customize inputs via a simple form instead of editing code.

Hub creators: Showcase your open-source work to the Runpod community. Every new GitHub release triggers an automated build/test cycle in our pipeline, ensuring your repo stays up to date.

Enterprises/Teams: Adopt standardized, production-ready AI endpoints without reinventing infrastructure. Onboard developers faster by pointing them to Hub listings rather than internal deployment docs.
How do I deploy a repo from the Hub?
In the Runpod console, go to the Hub page.

Browse or search for a repo that matches your needs.

Click on the repo to view details—check hardware requirements (CPU vs. GPU, disk size) and any exposed configuration options.

Click Deploy (or choose an older version via the dropdown).

Click Create Endpoint. Within minutes, you’ll have a live Serverless endpoint you can call via API.

For more details, check out the docs: https://docs.runpod.io/hub/overview
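Once the endpoint is live, you call it over HTTP with your API key. The sketch below, using only the Python standard library, builds a synchronous request against the endpoint's `/runsync` route; the endpoint ID, API key, and payload shape are placeholders you would replace with your own (check the Runpod API docs for your endpoint's exact input schema).

```python
import json
import urllib.request

# Placeholders -- substitute your own endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "YOUR_RUNPOD_API_KEY"

def build_request(payload: dict) -> urllib.request.Request:
    """Builds a POST to the endpoint's synchronous /runsync route."""
    url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
    return urllib.request.Request(
        url,
        data=json.dumps({"input": payload}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

# To actually call the endpoint:
# with urllib.request.urlopen(build_request({"prompt": "Hello"})) as resp:
#     print(json.load(resp))
```

For long-running jobs, the `/run` route queues the job asynchronously and you poll `/status/{job_id}` instead of waiting on `/runsync`.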
How do I share my own AI repo in the Hub?
Prepare a working Serverless implementation in your GitHub repo. You’ll need a handler.py (or equivalent), a Dockerfile, and a README.md.

Add a .runpod/hub.json file with metadata (title, description, category, hardware settings, environment variables, presets).

Add a .runpod/tests.json file that defines one or more test cases to exercise your endpoint (each test should return HTTP 200).

Create a GitHub Release (the Hub indexes releases rather than commits).

In the Runpod console, go to the Hub and click Get Started under "Add your repo." Enter your GitHub URL and follow the prompts.

Once submitted, our build pipeline will automatically scan, build, and test your repo. After it passes, our team will manually review it. If approved, your repo appears live in the Hub.

For more details, check out the docs: https://docs.runpod.io/hub/publishing-guide.
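The handler.py mentioned in step 1 is the function Runpod's Serverless worker invokes for each job. The sketch below is a minimal, hypothetical example (the echo logic is illustrative, not from the docs); the `runpod.serverless.start` call is the SDK's entry point inside the worker image, and the import is guarded so the handler can also be exercised locally.

```python
# Minimal sketch of a Serverless handler (hypothetical example;
# see the Runpod publishing guide for the authoritative interface).

def handler(job):
    """Receives a job dict with an "input" key and returns a JSON-serializable result."""
    prompt = job.get("input", {}).get("prompt", "")
    return {"echo": prompt.upper()}

if __name__ == "__main__":
    try:
        import runpod  # available inside the Runpod worker image
        runpod.serverless.start({"handler": handler})
    except ImportError:
        pass  # running outside a Runpod worker, e.g. local testing
```

Your .runpod/tests.json cases then exercise this handler with sample inputs; each case must come back with HTTP 200 for the build pipeline to pass.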
Clients

Trusted by today's leaders, built for tomorrow's pioneers.

Engineered for teams building the future.

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.

You’ve unlocked a referral bonus!

Sign up today and you’ll get a random credit bonus between $5 and $500 when you spend your first $10 on Runpod.