The open specification for describing GPU cloud offerings. Vendor-neutral. Machine-readable. The standard layer underneath every comparison tool, marketplace, and AI agent.
Think OpenAPI for GPU cloud. One JSON document = one GPU instance, fully described.
1{ 2 "$schema": "https://computespec.dev/gpu/v1/schema.json", 3 "provider": "lambda-labs", 4 "instance": "gpu_1x_h100_sxm5", 5 "gpu": { 6 "model": "nvidia-h100", 7 "variant": "sxm5", 8 "count": 1, 9 "vram_gb": 80, 10 "tflops_fp16": 989.5, 11 "memory_bandwidth_tbps": 3.35, 12 "interconnect": "nvlink", 13 "architecture": "hopper" 14 }, 15 "compute": { 16 "vcpus": 26, 17 "ram_gb": 200, 18 "storage_gb": 512, 19 "storage_type": "nvme-ssd", 20 "network_gbps": 25 21 }, 22 "pricing": { 23 "currency": "USD", 24 "billing_unit": "per-hour", 25 "billing_granularity": "per-second", 26 "on_demand": 2.49, 27 "spot": null, 28 "reserved_1yr": null, 29 "reserved_3yr": null 30 }, 31 "availability": { 32 "regions": [ 33 "us-west-1", 34 "us-east-1" 35 ], 36 "type": "on-demand", 37 "sla_uptime": 0.999 38 }, 39 "normalized": { 40 "cost_per_gpu_hour": 2.49, 41 "cost_per_tflop_hour": 0.00252, 42 "vram_per_dollar": 32.13 43 }, 44 "meta": { 45 "last_updated": "2026-02-08T00:00:00Z", 46 "source_url": "https://lambdalabs.com/service/gpu-cloud", 47 "verified": true, 48 "spec_version": "1.0.0" 49 } 50}
The problem
Every comparison tool, procurement system, and AI agent independently scrapes, normalizes, and structures the same data — producing incompatible representations that cannot be composed, validated, or versioned. N providers × M consumers × 0 standards.
AWS says p5.48xlarge. Lambda says gpu_1x_h100_sxm5. Same H100 SXM5 silicon, two names. RunPod's a100-80gb-sxm follows a third scheme entirely. No shared vocabulary.
Per-hour, per-second, per-minute. On-demand, spot, reserved. With storage, without. Apples-to-oranges by default.
Some list TFLOPS. Some don't. Some show bandwidth. Some hide it. Every comparison site re-invents normalization.
The spec
Minimal, flat, nullable, extensible. Only fields that 80%+ of providers can populate.
gpu: model · variant · count · vram_gb · tflops_fp16 · memory_bandwidth_tbps · interconnect · architecture
compute: vcpus · ram_gb · storage_gb · storage_type · network_gbps
pricing: currency · billing_unit · billing_granularity · on_demand · spot · reserved_1yr · reserved_3yr
availability: regions · type · sla_uptime
normalized: cost_per_gpu_hour · cost_per_tflop_hour · vram_per_dollar
meta: last_updated · source_url · verified · spec_version
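The shape is small enough for consumers to type out directly. An illustrative TypeScript rendering, derived from the example document above (the type name is ours, not part of the spec; field names come from v1):

```ts
// Illustrative shape of a v1 GIS document, derived from the example above.
interface GpuInstanceSpec {
  $schema: string;
  provider: string;            // e.g. "lambda-labs"
  instance: string;            // the provider's own instance identifier
  gpu: {
    model: string;             // canonical, e.g. "nvidia-h100"
    variant: string;           // e.g. "sxm5"
    count: number;
    vram_gb: number;           // 80 in the example; per GPU on our reading
    tflops_fp16: number;
    memory_bandwidth_tbps: number;
    interconnect: string;
    architecture: string;
  };
  compute: {
    vcpus: number;
    ram_gb: number;
    storage_gb: number;
    storage_type: string;
    network_gbps: number;
  };
  pricing: {
    currency: string;
    billing_unit: string;
    billing_granularity: string;
    on_demand: number | null;  // nullable: not every provider offers every tier
    spot: number | null;
    reserved_1yr: number | null;
    reserved_3yr: number | null;
  };
  availability: {
    regions: string[];
    type: string;
    sla_uptime: number;
  };
  normalized: {
    cost_per_gpu_hour: number;
    cost_per_tflop_hour: number;
    vram_per_dollar: number;
  };
  meta: {
    last_updated: string;      // ISO 8601
    source_url: string;
    verified: boolean;
    spec_version: string;
  };
}
```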
Live data
Same format, every provider. Sort by any metric. No spreadsheet required.
| Provider | Instance | GPU | VRAM | FP16 TFLOPS (per GPU) | On-Demand | $/GPU/hr ↑ | $/TFLOP/hr | VRAM per $/hr |
|---|---|---|---|---|---|---|---|---|
| Vast.ai | A100 80GB | A100 | 80 GB | 312 | $1.10/hr | $1.10 | $0.00353 | 72.7 GB |
| RunPod | A100 80GB SXM | A100 | 80 GB | 312 | $1.64/hr | $1.64 | $0.00526 | 48.8 GB |
| CoreWeave | H100 SXM | H100 | 80 GB | 989.5 | $2.23/hr | $2.23 | $0.00225 | 35.9 GB |
| Lambda Labs | 1×H100 SXM5 | H100 | 80 GB | 989.5 | $2.49/hr | $2.49 | $0.00252 | 32.1 GB |
| AWS | p5.48xlarge | 8×H100 | 640 GB (8×80) | 989.5 | $98.32/hr | $12.29 | $0.01242 | 6.5 GB |
Prices as of Feb 2026. Full dataset coming soon at computespec.dev/gpu/explore
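The normalized block is pure arithmetic over the raw fields, so every consumer computes the same numbers. A sketch, reusing the illustrative `GpuInstanceSpec` type from above and assuming `vram_gb` is per GPU (the reading that reproduces the table):

```ts
// Derivation of the normalized block. A sketch: rounding is left to the
// consumer and is not mandated by the spec.
function normalize(spec: GpuInstanceSpec) {
  const { count, vram_gb, tflops_fp16 } = spec.gpu;
  const hourly = spec.pricing.on_demand;
  if (hourly === null) return null; // no published on-demand price

  const costPerGpuHour = hourly / count;
  return {
    cost_per_gpu_hour: costPerGpuHour,                 // AWS: 98.32 / 8 = 12.29
    cost_per_tflop_hour: costPerGpuHour / tflops_fp16, // 12.29 / 989.5 ≈ 0.01242
    vram_per_dollar: (vram_gb * count) / hourly,       // 640 / 98.32 ≈ 6.5 GB per $/hr
  };
}
```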
Why a specification
We don't compete with comparison sites, marketplaces, or price indices. We define the format they could all adopt.
Every competitor builds a product on top of unstructured data. None builds the format the data should be in. That gap is GIS, the GPU Instance Specification.
The spec is free. The data catalog, historical trends, and API are where the value compounds. If a competitor adopts the GIS format, we've already won.
Roadmap
- GPU Instance Specification · v1.0 · CC BY 4.0
- CPU Instance Specification · Trigger: 500+ GIS stars
- TPU/ASIC Specification · Trigger: community demand
- Spot Pricing Specification · Trigger: data shows need
- SLA Definition Specification · Trigger: enterprise interest
The model
The paid tier, Pro, is coming soon. Early adopters get founding rates.
Star the repo. Read the spec. Build on top.
Get notified when Pro launches