Compare GPU Performance on AI Workloads
RTX A6000 48GB
vs.
A40 48GB
LLM Benchmarks
Benchmarks were run on RunPod GPUs using vLLM. For more details on vLLM, check out the vLLM GitHub repository.
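The headline metric below, output token throughput, is just the number of generated tokens divided by wall-clock time. A minimal sketch of that calculation (the function name and numbers are illustrative, not RunPod's actual benchmark harness):

```python
def output_token_throughput(total_output_tokens: int, elapsed_seconds: float) -> float:
    """Output token throughput in tokens per second (tok/s).

    In a vLLM run, total_output_tokens would typically be summed from the
    generated token counts of each completed request; elapsed_seconds is the
    wall-clock duration of the benchmark.
    """
    return total_output_tokens / elapsed_seconds

# Example: 12,000 generated tokens over a 10-second run.
print(output_token_throughput(12_000, 10.0))  # 1200.0 tok/s
```

Higher is better: at the same batch of prompts, the GPU that sustains more tokens per second finishes the workload sooner and costs less per generated token.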
[Chart: Output Token Throughput (tok/s), Llama 8B Instruct]
Get started with RunPod today.
We handle millions of GPU requests a day. Scale your machine learning workloads while keeping costs low with RunPod.
Get Started