
Cost Per GPU Hour

The universal metric for comparing GPU cloud pricing. Normalizes multi-GPU instances, different billing units, and provider-specific pricing into one comparable number.

What it is

Cost per GPU hour is the effective price of renting one GPU for one hour. It's the universal comparison metric for GPU cloud pricing.

Why do we need it? Because providers price instances differently:

  • Lambda sells 1×H100 for $2.49/hr → cost per GPU hour = $2.49
  • AWS sells 8×H100 for $98.32/hr → cost per GPU hour = $98.32 ÷ 8 = $12.29
  • Some providers bill per-minute or per-second

Without normalization, you'd compare $2.49 to $98.32 and think AWS is 40× more expensive. In reality, it's about 5× more per GPU — still significant, but a completely different story.

The formula

cost_per_gpu_hour = (on_demand / gpu.count) × unit_multiplier

where unit_multiplier:
  per-hour   → 1
  per-minute → 60
  per-second → 3600
  per-month  → 1/730

Worked example — AWS p5.48xlarge

on_demand = $98.32/hr
gpu.count = 8
billing_unit = per-hour → multiplier = 1

cost_per_gpu_hour = ($98.32 / 8) × 1 = $12.29

Worked example — hypothetical per-minute provider

on_demand = $0.05/min
gpu.count = 1
billing_unit = per-minute → multiplier = 60

cost_per_gpu_hour = ($0.05 / 1) × 60 = $3.00
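
The same arithmetic as a small Python sketch. This is not the GIS reference implementation; the function name and the billing_unit strings are illustrative, and only the formula and multipliers come from above.

# Minimal sketch of the normalization formula above.
# Names and billing_unit strings are illustrative, not part of GIS.

UNIT_MULTIPLIER = {
    "per-hour": 1,
    "per-minute": 60,
    "per-second": 3600,
    "per-month": 1 / 730,  # ~730 billable hours in a month
}

def cost_per_gpu_hour(on_demand, gpu_count, billing_unit="per-hour"):
    """Effective price of renting one GPU for one hour."""
    return (on_demand / gpu_count) * UNIT_MULTIPLIER[billing_unit]

# Reproduces the worked examples:
print(cost_per_gpu_hour(98.32, 8))               # 12.29 (AWS p5.48xlarge)
print(cost_per_gpu_hour(0.05, 1, "per-minute"))  # 3.0   (per-minute provider)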

What it doesn't capture

Cost per GPU hour is a starting point, not the full picture:

  • Different GPU models — $2.49/hr for an H100 vs $1.10/hr for an A100 isn't apples-to-apples. Use cost_per_tflop_hour to compare across GPU models (see the sketch after this list).
  • Hidden costs — egress fees, storage costs, networking charges aren't included.
  • Spot pricing — the metric uses on-demand pricing. Spot can be 50-80% cheaper.
  • Performance differences — same GPU model from different providers can have different CPU/RAM/network specs that affect real-world performance.
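
To make the first caveat concrete, here is a rough sketch of a per-TFLOP comparison. Two assumptions to flag: that cost_per_tflop_hour is simply cost_per_gpu_hour divided by a peak-TFLOPS figure, and the approximate dense FP16 Tensor Core numbers used for the H100 and A100; neither is specified on this page.

# Rough sketch only. Assumes cost_per_tflop_hour = cost_per_gpu_hour / peak TFLOPS,
# using approximate dense FP16 Tensor Core figures (my assumption, not GIS-defined).

PEAK_TFLOPS = {"H100": 989, "A100": 312}

def cost_per_tflop_hour(cost_per_gpu_hour, gpu_model):
    return cost_per_gpu_hour / PEAK_TFLOPS[gpu_model]

print(round(cost_per_tflop_hour(2.49, "H100"), 5))  # 0.00252
print(round(cost_per_tflop_hour(1.10, "A100"), 5))  # 0.00353

Under those assumptions, the $1.10/hr A100 actually comes out pricier per TFLOP-hour than the $2.49/hr H100, which is exactly the kind of difference the headline per-GPU-hour number hides.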

How it appears in GIS

{
  "normalized": {
    "cost_per_gpu_hour": 2.49,
    "cost_per_tflop_hour": 0.00252,
    "vram_per_dollar": 32.13
  }
}

The normalized.cost_per_gpu_hour field is pre-computed in every GIS document. You don't need to calculate it yourself — just read the field and compare across providers.
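
In practice, comparing providers is just a sort on that field. A minimal sketch in Python, assuming the GIS documents are already loaded as dicts; the provider and instance keys are illustrative, and only the normalized.cost_per_gpu_hour path comes from the document format shown above.

# Minimal sketch: rank already-loaded GIS documents by cost per GPU hour.
# The "provider" and "instance" keys are illustrative placeholders.

docs = [
    {"provider": "lambda", "instance": "1xH100",      "normalized": {"cost_per_gpu_hour": 2.49}},
    {"provider": "aws",    "instance": "p5.48xlarge", "normalized": {"cost_per_gpu_hour": 12.29}},
]

for doc in sorted(docs, key=lambda d: d["normalized"]["cost_per_gpu_hour"]):
    print(f'{doc["provider"]:8} {doc["instance"]:14} ${doc["normalized"]["cost_per_gpu_hour"]:.2f}/GPU-hr')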

See GIS Normalization Explained for the full algorithm reference.

Key takeaways

  • cost_per_gpu_hour = on_demand price ÷ gpu.count (adjusted for billing unit)
  • It's the single most important metric for price comparison
  • Always normalize before comparing — raw instance prices are misleading
  • Current range: ~$1.00 (Vast.ai A100) to ~$12.29 (AWS H100)
  • In GIS: normalized.cost_per_gpu_hour