Blog

Runpod Blog.

Our team’s insights on building better
and scaling smarter.
10 billion Serverless requests and counting

Join us as we celebrate fielding our 10 billionth serverless request.
Product Updates
A note to the developers who built Runpod with us

Runpod surpasses $120M ARR, now serving over 500,000 developers worldwide. Founder Zhen reflects on the journey from basement GPU rigs to AI-first cloud infrastructure powering startups, research labs, and Fortune 500 teams.
Founder Updates
New update to GitHub integration: release rollback!

Deploy Serverless endpoints directly from GitHub and roll back instantly if needed. Runpod’s improved GitHub integration lets you revert to previous builds without rebuilding Docker images, enabling faster, safer, and more confident deployments.
Product Updates
Runpod AI field notes: December 2025

In early December 2025, Mistral AI released Mistral Large 3 and Devstral 2, two open models under the Apache 2.0 license. Mistral Large 3 targets frontier-scale reasoning and long-context workloads, while Devstral 2 focuses on developer use cases like coding and agent workflows. At the same time, Nvidia expanded its open source footprint with the release of the Nemotron 3 model family and the acquisition of SchedMD, reinforcing a broader shift toward open models and open infrastructure that teams can run, benchmark, and scale on their own GPU platforms.
Learn AI
Faster GitHub Builds: Major Performance Improvements to Our Automated Integration

Runpod has significantly improved the performance and reliability of its automated GitHub integration by fixing a bottleneck in the container image upload pipeline that caused slow or timed-out builds. By rewriting key components of the registry image uploader and optimizing layer transfers, GitHub-triggered container builds now complete faster, more consistently, and with fewer deployment failures.
AI Infrastructure
How to Run Serverless AI and ML Workloads on Runpod

Learn how to train, deploy, and scale AI/ML models using Runpod Serverless. This guide covers real-world examples, deployment best practices, and how serverless is unlocking new possibilities like real-time video generation.
Product Updates
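As a taste of the workflow the guide covers, here is a minimal Serverless worker sketch assuming the `runpod` Python SDK; the handler logic and payload shape are illustrative stand-ins for real model inference:

```python
# Minimal Runpod Serverless worker sketch. The handler body is a
# placeholder for real inference code; the payload shape is illustrative.

def handler(job):
    # Runpod delivers the request payload under job["input"].
    prompt = job["input"].get("prompt", "")
    # Swap this echo for your model call (text generation, image synthesis, ...).
    return {"output": f"processed: {prompt}"}

def serve():
    # Imported lazily so the handler can be exercised without the SDK installed.
    import runpod
    runpod.serverless.start({"handler": handler})
```

In a real worker, `serve()` would be the container entrypoint; locally you can call `handler` directly with a dict-shaped job to test your logic before deploying.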
Transcribe and translate audio files with Faster Whisper

Learn how to deploy Faster Whisper, an optimized reimplementation of OpenAI’s Whisper, on Runpod Serverless to transcribe and translate audio up to four times faster and at a fraction of the cost, using Python for efficient, scalable speech-to-text automation.
Learn AI
