Compare GPU Performance on AI Workloads
H100 SXM vs. H100 SXM
LLM Benchmarks
Benchmarks were run on RunPod GPUs using vLLM. For more details on vLLM, check out the vLLM GitHub repository.
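Output throughput here means generated tokens per second across a batch of requests. A minimal sketch of how that figure can be computed from a run is below; the model name and prompt count are hypothetical, and the vLLM calls (shown commented out, since they require a GPU) use the public `LLM`/`SamplingParams` API:

```python
import time

def output_throughput(total_output_tokens: int, elapsed_seconds: float) -> float:
    """Output throughput in tokens per second."""
    return total_output_tokens / elapsed_seconds

# A minimal vLLM run would look roughly like this (requires a GPU;
# the model name is a placeholder):
#
# from vllm import LLM, SamplingParams
# llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
# params = SamplingParams(max_tokens=128)
# start = time.perf_counter()
# outputs = llm.generate(prompts, params)
# elapsed = time.perf_counter() - start
# total_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)

# Example: 8 prompts, each producing 128 output tokens, in 2.0 seconds:
tput = output_throughput(8 * 128, 2.0)
print(f"{tput:.1f} tok/s")  # → 512.0 tok/s
```

This is only the headline metric; real benchmark harnesses also track latency percentiles and time-to-first-token.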
Output Throughput (tok/s) — measured with 128 input tokens and 128 output tokens per request.
Get started with RunPod today.
We handle millions of GPU requests a day. Scale your machine learning workloads while keeping costs low with RunPod.
Get Started