
Tutorial: Benchmark a Backend

Measure the performance of a quantum backend using standardized benchmark suites, and generate evidence-backed PDF reports.

Time: 15 minutes · Prerequisites: API key configured


Step 1 — Discover Available Backends


from softqcos_sdk import QCOSClient

client = QCOSClient()

backends = client.backends.list()
for b in backends:
    print(f"{b['name']:20s} | {b['num_qubits']} qubits | {b['status']}")

Or with the CLI:

softqcos backends list
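
If you only want candidates that are actually ready to run, you can filter the listing before benchmarking. A minimal sketch built on the list() call above; it assumes "online" is the status value for an available backend, which this page does not confirm:

# Keep only backends that are up and have at least 5 qubits.
# The "online" status string is an assumption, not documented here.
candidates = [
    b for b in client.backends.list()
    if b["status"] == "online" and b["num_qubits"] >= 5
]
print(f"{len(candidates)} backend(s) ready to benchmark")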

Step 2 — Check Available Benchmark Suites

suites = client.bench.suites()
for s in suites:
    print(f"{s['suite_id']:15s} | {s['description']}")

Or with the CLI:

softqcos bench suites

Available suites include:

  • standard — General quantum computing performance (20 circuits)
  • volumetric — Quantum volume and related metrics
  • application — Real-world algorithm benchmarks
  • noise — Noise characterization
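
Before launching a run, it can be worth confirming that the suite you want exists on the service. A short sketch using only the suites() call shown above:

# Look up one suite by id and fail early if it is missing.
wanted = "standard"
suite = next((s for s in client.bench.suites() if s["suite_id"] == wanted), None)
if suite is None:
    raise ValueError(f"Suite {wanted!r} is not available")
print(f"Running {wanted}: {suite['description']}")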

Step 3 — Run the Benchmark

report = client.bench.run(
    backend="aer_simulator",
    suite_id="standard",
    shots=4096,
    repetitions=5
)

print(f"Benchmark ID: {report['benchmark_id']}")
print(f"Score: {report['score']:.2f}")
print(f"Fidelity: {report['fidelity']:.4f}")
print(f"Depth ratio: {report['depth_ratio']:.2f}")

Or with the CLI:

softqcos bench run aer_simulator --suite standard --shots 4096 --reps 5
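
The returned metrics can feed directly into pass/fail gating, for example in CI. A minimal sketch using only the fields printed above; the thresholds are illustrative, not SoftQCOS recommendations, and the score's scale is not documented on this page:

# Fail the run if the backend dips below illustrative quality bars.
MIN_SCORE = 80.0      # illustrative threshold only
MIN_FIDELITY = 0.95   # illustrative threshold only

if report["score"] < MIN_SCORE or report["fidelity"] < MIN_FIDELITY:
    raise SystemExit(
        f"Benchmark below threshold: score={report['score']:.2f}, "
        f"fidelity={report['fidelity']:.4f}"
    )
print("Benchmark passed quality gate")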

Step 4 — Download the Report

pdf = client.bench.download(report["benchmark_id"])
with open("benchmark_report.pdf", "wb") as f:
    f.write(pdf)
print("Report saved to benchmark_report.pdf")

Or with the CLI:

softqcos bench download bench_abc123 --output benchmark_report.pdf
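
If you benchmark regularly, naming each file after its benchmark ID keeps downloads from overwriting one another. A sketch using only the download call shown above plus the standard library:

from pathlib import Path

# Name the file after the benchmark so repeated downloads don't collide.
out = Path("reports") / f"{report['benchmark_id']}.pdf"
out.parent.mkdir(parents=True, exist_ok=True)
out.write_bytes(client.bench.download(report["benchmark_id"]))
print(f"Report saved to {out}")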

Step 5 — Verify Benchmark Integrity

Benchmark results are evidence-backed. Verify a report's integrity with:

verification = client.bench.verify(report["benchmark_id"])
print(f"Valid: {verification['valid']}")
print(f"Hash: {verification['hash']}")

Or with the CLI:

softqcos bench verify bench_abc123
# Output: ✓ Benchmark evidence valid
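
In automated settings you will usually want verification to be a hard requirement rather than something printed and forgotten. A small sketch wrapping the verify() call from above:

def require_valid(benchmark_id: str) -> str:
    """Return the evidence hash, raising if verification fails."""
    v = client.bench.verify(benchmark_id)
    if not v["valid"]:
        raise RuntimeError(f"Benchmark {benchmark_id} failed verification")
    return v["hash"]

evidence_hash = require_valid(report["benchmark_id"])
print(f"Verified, evidence hash: {evidence_hash}")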

Comparing Multiple Backends

Benchmark multiple backends and compare:

backends_to_test = ["aer_simulator", "ibm_brisbane", "ionq_harmony"]
reports = []

for backend in backends_to_test:
    report = client.bench.run(
        backend=backend,
        suite_id="standard",
        shots=4096,
        repetitions=3
    )
    reports.append(report)
    print(f"{backend:20s} | Score: {report['score']:.2f} | Fidelity: {report['fidelity']:.4f}")

# Sort by score, highest first
reports.sort(key=lambda r: r["score"], reverse=True)
print(f"\nBest: {reports[0]['backend']} (score: {reports[0]['score']:.2f})")

What's Next?