Node Providers — Quickstart

This guide gets you from zero to a running multi-node quantum circuit in under 5 minutes, using only local simulation (no cloud credentials needed).


Prerequisites

  • Python 3.10+
  • QCOS installed: pip install qcos-sdk (or from source)
  • Qiskit + Aer: pip install qiskit qiskit-aer

Step 1: Discover available providers

```python
from network.node_providers import NodeProviderRegistry

# List all built-in providers
for name, display in NodeProviderRegistry.list().items():
    print(f"  {name:20s} → {display}")
```

Output:

```
  aws_braket           → AWS Braket
  azure_quantum        → Azure Quantum
  custom_rest          → Custom REST Endpoint
  ibm_quantum          → IBM Quantum
  local_cpu            → Local CPU (Qiskit Aer)
  local_gpu_aer        → Local GPU — Aer Statevector (CUDA)
  local_gpu_cuquantum  → Local GPU — cuQuantum (NVIDIA)
  local_gpu_mps        → Local GPU — Aer MPS (tensor-network)
  lumi_amd_gpu         → LUMI AMD MI250X (ROCm)
```

Step 2: Build a local cluster

No credentials needed. This creates 12 virtual GPU nodes:

```python
from network.node_providers import ClusterBuilder

registry = (
    ClusterBuilder()
    # 8 statevector nodes: exact simulation, max 28 qubits each
    .add("local_gpu_aer", n_nodes=8, max_qubits=28)
    # 4 MPS nodes: approximate but handles large circuits
    .add("local_gpu_mps", n_nodes=4, max_qubits=500)
    .build()
)

# Inspect what we built
report = registry.status_report()
print(f"Total nodes:  {report['total_nodes']}")
print(f"Total qubits: {report['total_physical_qubits']}")
```

Output:

```
Total nodes:  12
Total qubits: 2224
```
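As a sanity check, the reported totals are consistent with a straight sum over the node groups passed to the builder (an assumption here, matching the output above):

```python
# Node groups from the ClusterBuilder call above: (n_nodes, max_qubits)
groups = [(8, 28), (4, 500)]

total_nodes = sum(n for n, _ in groups)
total_qubits = sum(n * q for n, q in groups)

print(total_nodes)   # 12
print(total_qubits)  # 2224  (8*28 + 4*500)
```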

Step 3: Run a quantum circuit

```python
from network.distributed_qvm import DistributedQVM, VirtualCircuitSpec, QVMMode
from qiskit import QuantumCircuit

# Create the QVM
qvm = DistributedQVM(registry, mode=QVMMode.EMULATED, shots=2048)

# Build a 10-qubit GHZ circuit
qc = QuantumCircuit(10)
qc.h(0)
for i in range(1, 10):
    qc.cx(0, i)
qc.measure_all()

# Run
result = qvm.run(VirtualCircuitSpec(
    n_logical_qubits=10,
    circuit_object=qc,
    shots=2048,
))

# Inspect results
print(f"Shards:    {result.n_shards}")
print(f"Mode:      {result.mode_label}")
print(f"Exec time: {result.total_exec_ms:.1f} ms")

# Top 3 outcomes
for bitstring, count in sorted(result.counts.items(), key=lambda x: -x[1])[:3]:
    pct = 100 * count / sum(result.counts.values())
    print(f"  |{bitstring}⟩ → {count:5d} shots ({pct:.1f}%)")
```

Output:

```
Shards:    1
Mode:      EMULATED - No physical quantum resources
Exec time: 8.3 ms

  |0000000000⟩ →  1032 shots (50.4%)
  |1111111111⟩ →  1016 shots (49.6%)
```
The GHZ state produces only |00…0⟩ and |11…1⟩ with equal probability — exactly what quantum mechanics predicts. ✅
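You can verify that prediction without any simulator backend: a GHZ circuit only ever has two nonzero amplitudes, so a sparse statevector fits in a plain dict. The helper below is a hypothetical pure-Python sketch, not part of the QCOS API:

```python
from math import sqrt

# Sparse statevector: map basis bitstring -> amplitude.
def ghz_amplitudes(n_qubits):
    # H on qubit 0: |00…0⟩ -> (|00…0⟩ + |10…0⟩)/√2
    state = {"0" * n_qubits: 1 / sqrt(2),
             "1" + "0" * (n_qubits - 1): 1 / sqrt(2)}
    # CX(0, i): flip bit i in every branch whose bit 0 is 1
    for i in range(1, n_qubits):
        new_state = {}
        for bits, amp in state.items():
            if bits[0] == "1":
                bits = bits[:i] + ("1" if bits[i] == "0" else "0") + bits[i + 1:]
            new_state[bits] = amp
        state = new_state
    return state

probs = {b: a * a for b, a in ghz_amplitudes(10).items()}
print(probs)  # two outcomes, |00…0⟩ and |11…1⟩, each with probability ≈ 0.5
```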


Step 4: Scale up — 100-qubit circuit via MPS

MPS (Matrix Product State) simulation handles large qubit counts efficiently for low-entanglement circuits:

```python
from network.node_providers import make_local_cluster

# Build a cluster optimized for large circuits
registry = make_local_cluster(
    n_gpu_sv_nodes=4,   # exact simulation nodes
    n_gpu_mps_nodes=4,  # MPS nodes for large circuits
    mps_max_qubits=500,
)

qvm = DistributedQVM(registry, mode=QVMMode.EMULATED, shots=1024)

# 100-qubit circuit
qc100 = QuantumCircuit(100)
qc100.h(range(100))  # 100 Hadamard gates
qc100.measure_all()

result = qvm.run(VirtualCircuitSpec(
    n_logical_qubits=100,
    circuit_object=qc100,
    shots=1024,
))

print(f"Ran {result.n_logical_qubits}-qubit circuit in {result.total_exec_ms:.1f} ms")
print(f"Number of unique outcomes: {len(result.counts)}")
```
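With 100 Hadamards the circuit samples uniformly from 2^100 bitstrings, so the 1024 shots are essentially guaranteed to be 1024 distinct outcomes. The birthday bound makes this concrete:

```python
# Upper bound on the probability that any two of n uniform samples from
# d outcomes collide (union bound over n*(n-1)/2 pairs, each 1/d).
n, d = 1024, 2 ** 100
p_collision_upper = n * (n - 1) / 2 / d
print(p_collision_upper)  # ≈ 4.1e-25, so expect len(result.counts) == 1024
```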

Step 5: Check cluster capacity

```python
report = qvm.capacity_report()

print("Logical qubit capacity:")
for mode, n in report["logical_qubits"].items():
    print(f"  {mode:15s}: {n:,} qubits")
```

Output:

```
Logical qubit capacity:
  no_qec         : 2,224 qubits
  surface_d3     : 246 qubits
  surface_d5     : 88 qubits
  qldpc_d3       : 222 qubits
  qldpc_d5       : 111 qubits
```
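The drop from 2,224 physical to 88 logical qubits under surface_d5 reflects error-correction overhead. As a rough rule of thumb (an assumption here, not the exact QCOS accounting, which may also floor per node), a distance-d surface-code patch costs on the order of d² physical qubits per logical qubit:

```python
# Rough logical capacity under a distance-d surface code, assuming
# ~d^2 physical qubits per logical qubit (a simplification; real
# schedulers also account for ancillas and per-node boundaries).
def logical_capacity(physical_qubits, d):
    return physical_qubits // (d * d)

print(logical_capacity(2224, 5))  # 88, matching surface_d5 above
print(logical_capacity(2224, 3))  # 247, close to the reported 246
```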

Common Patterns

Pattern A — Pure local simulation (zero cost)

```python
from network.node_providers import make_local_cluster
from network.distributed_qvm import DistributedQVM, QVMMode

registry = make_local_cluster(n_gpu_sv_nodes=8, n_gpu_mps_nodes=4)
qvm = DistributedQVM(registry, mode=QVMMode.EMULATED, shots=4096)
```

Pattern B — LUMI supercomputer

```python
from network.node_providers import make_lumi_cluster

registry = make_lumi_cluster(n_nodes=4, gpus_per_node=8)  # 32 MI250X GCDs
qvm = DistributedQVM(registry, mode=QVMMode.EMULATED, shots=4096)
```

See LUMI GPU Guide for SSH configuration.

Pattern C — Azure cheapest QPU

```python
import os
from network.node_providers import make_azure_cheap_cluster

registry = make_azure_cheap_cluster(
    azure_resource_id=os.environ["AZURE_QUANTUM_RESOURCE_ID"],
)
qvm = DistributedQVM(registry, mode=QVMMode.HYBRID, shots=1024)
```

See Azure Quantum Guide for setup.

Pattern D — Full hybrid

```python
import os
from network.node_providers import make_hybrid_cluster

registry = make_hybrid_cluster(
    lumi_nodes=2,
    azure_resource_id=os.environ.get("AZURE_QUANTUM_RESOURCE_ID", ""),
    ibm_token=os.environ.get("IBM_QUANTUM_TOKEN", ""),
    n_local_sv=8,
    n_local_mps=4,
)
qvm = DistributedQVM(registry, mode=QVMMode.HYBRID, shots=2048)
```

Next Steps

  • ClusterBuilder: full reference for the fluent builder API
  • LUMI GPU Guide: connect LUMI MI250X clusters
  • Azure Quantum Guide: use real QPUs at minimal cost
  • Custom Providers: write your own provider
  • API Reference: complete class documentation