
Provider API Reference

Complete reference for all classes and functions in src/network/node_providers.py.


NodeProvider (ABC)

Abstract base class for all providers. Import from network.node_providers.

from network.node_providers import NodeProvider

Abstract methods

build_nodes(**kwargs) → List[QPUNodeSpec]

Build and return the list of nodes this provider manages.

| Parameter | Type | Description |
| --- | --- | --- |
| **kwargs | any | Provider-specific keyword arguments passed via ClusterBuilder.add() |

Returns: List[QPUNodeSpec] — must contain at least one node.

Raises: ValueError if required kwargs are missing or invalid.


Optional methods

execute(circuit, shots, node_spec) → Optional[Dict[str, int]]

Custom circuit execution. Override this to run circuits on your hardware.

| Parameter | Type | Description |
| --- | --- | --- |
| circuit | QuantumCircuit | Qiskit circuit to execute |
| shots | int | Number of measurement repetitions |
| node_spec | QPUNodeSpec | The node selected for this shard |

Returns: Dict[str, int] counts (e.g. {"0000": 512, "1111": 512}). Return None or raise NotImplementedError to fall back to the built-in Aer executor.


health_check() → bool

Liveness check called during ClusterBuilder.build().

Returns: True if the provider is reachable, False to mark all nodes offline.

Default: Returns True (always healthy).
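Putting the three hooks together, here is a minimal sketch of a custom provider. MyEchoProvider and its all-zeros counts are purely illustrative, and the stub base class and spec below only mirror the documented interface so the sketch runs standalone (real code would instead import NodeProvider from network.node_providers and QPUNodeSpec from network.distributed_qvm):

```python
# Stand-in stubs so this sketch runs without the real package installed.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class QPUNodeSpec:                      # stub mirroring the documented fields
    node_id: str
    node_type: str                      # real field uses the NodeType enum
    max_qubits: int
    tags: List[str] = field(default_factory=list)
    credentials: Dict[str, Any] = field(default_factory=dict)
    online: bool = True
    metadata: Dict[str, Any] = field(default_factory=dict)

class NodeProvider(ABC):                # stub base with the documented hooks
    @abstractmethod
    def build_nodes(self, **kwargs) -> List[QPUNodeSpec]: ...
    def execute(self, circuit, shots, node_spec) -> Optional[Dict[str, int]]:
        return None                     # None -> fall back to built-in Aer executor
    def health_check(self) -> bool:
        return True                     # default: always healthy

class MyEchoProvider(NodeProvider):
    """Hypothetical provider: CPU nodes that report all-zero counts."""
    def build_nodes(self, **kwargs) -> List[QPUNodeSpec]:
        n = kwargs.get("n_nodes", 1)
        if n < 1:
            raise ValueError("n_nodes must be >= 1")
        return [QPUNodeSpec(node_id=f"echo-{i}", node_type="CPU_NUMPY",
                            max_qubits=kwargs.get("max_qubits", 20))
                for i in range(n)]
    def execute(self, circuit, shots, node_spec):
        # Toy execution: every shot "measures" the all-zeros bitstring.
        return {"0" * node_spec.max_qubits: shots}

nodes = MyEchoProvider().build_nodes(n_nodes=2, max_qubits=4)
counts = MyEchoProvider().execute(None, 1024, nodes[0])
```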


NodeProviderRegistry

Global registry for provider classes. All methods are classmethods.

from network.node_providers import NodeProviderRegistry

Methods

register(name, cls) → None

Register a provider class under a string key.

| Parameter | Type | Description |
| --- | --- | --- |
| name | str | Unique string key (e.g. "my_gpu") |
| cls | Type[NodeProvider] | Provider class (not instance) |

NodeProviderRegistry.register("my_gpu", MyGPUProvider)

get(name) → Type[NodeProvider]

Look up a provider class by name.

| Parameter | Type | Description |
| --- | --- | --- |
| name | str | Previously registered key |

Returns: Provider class.

Raises: KeyError if name not found.


list() → Dict[str, str]

Return all registered providers as {name: DISPLAY_NAME}.

providers = NodeProviderRegistry.list()
# {"local_cpu": "Local CPU (Aer)", "lumi_amd_gpu": "LUMI AMD MI250X", ...}

instantiate(name, **kwargs) → Tuple[NodeProvider, List[QPUNodeSpec]]

Instantiate a provider and call build_nodes(**kwargs) in one step.

| Parameter | Type | Description |
| --- | --- | --- |
| name | str | Provider key |
| **kwargs | any | Forwarded to build_nodes() |

Returns: (provider_instance, node_list) tuple.


ClusterBuilder

Fluent builder for QPURegistry with multi-provider support.

from network.node_providers import ClusterBuilder

Methods

add(provider_name, **kwargs) → ClusterBuilder

Add a provider slot to the builder. Returns self for chaining.

| Parameter | Type | Description |
| --- | --- | --- |
| provider_name | str | Registered provider key |
| **kwargs | any | Passed to provider.build_nodes() |

builder = ClusterBuilder().add("local_cpu", n_nodes=4).add("lumi_amd_gpu", n_nodes=2)

build() → QPURegistry

Execute all providers and build the final registry.

For each slot:

  1. Instantiates the provider
  2. Calls health_check() — marks nodes offline if False
  3. Calls build_nodes(**kwargs) — adds nodes to registry
  4. Binds provider instance to each node via registry.register(spec, provider)

Returns: QPURegistry ready for use with DistributedQVM.
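The four steps above can be sketched as a loop over the queued slots (toy stand-ins for illustration; the real build() returns a QPURegistry rather than a list):

```python
class ToyProvider:
    """Stand-in provider with the two hooks build() calls."""
    def health_check(self):
        return True
    def build_nodes(self, **kwargs):
        return [f"node-{i}" for i in range(kwargs.get("n_nodes", 1))]

def build(slots):
    """slots: (provider_cls, kwargs) pairs, as queued by add()."""
    registry = []
    for provider_cls, kwargs in slots:
        provider = provider_cls()                       # 1. instantiate provider
        healthy = provider.health_check()               # 2. liveness check
        for spec in provider.build_nodes(**kwargs):     # 3. build the node specs
            registry.append((spec, provider, healthy))  # 4. bind provider to node
    return registry

registry = build([(ToyProvider, {"n_nodes": 2})])
```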


describe() → str

Return a human-readable cluster plan without executing any provider.

print(ClusterBuilder()
      .add("lumi_amd_gpu", n_nodes=2, gpus_per_node=8)
      .add("local_cpu", n_nodes=4)
      .describe())

Output:

Cluster Plan:
  [0] lumi_amd_gpu — n_nodes=2, gpus_per_node=8
  [1] local_cpu — n_nodes=4
  Estimated nodes: 22 (16 SV + 2 MPS + 4 CPU fallback)

QPUNodeSpec fields

QPUNodeSpec is a dataclass defined in distributed_qvm.py. All built-in providers construct specs with these fields:

| Field | Type | Description |
| --- | --- | --- |
| node_id | str | Unique identifier (e.g. "lumi-n00-gcd0") |
| node_type | NodeType | Enum: GPU_AER, GPU_TENSOR, CPU_AER, CPU_NUMPY, QPU_IBM, QPU_IONQ, QPU_AZURE, QPU_BRAKET |
| max_qubits | int | Maximum circuit width this node can handle |
| tags | List[str] | Searchable labels (e.g. ["lumi", "rocm"]) |
| credentials | Dict[str, Any] | Provider-specific auth data |
| online | bool | Set by ClusterBuilder based on health_check() |
| metadata | Dict[str, Any] | Extra info (pricing, region, etc.) |

Built-in provider kwargs reference

local_cpu

.add("local_cpu", n_nodes=4)
| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| n_nodes | int | 1 | Number of CPU simulation nodes |
| max_qubits | int | 30 | Max qubits per node |

local_gpu_aer

.add("local_gpu_aer", n_nodes=2)
| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| n_nodes | int | 1 | Number of CUDA statevector nodes |
| max_qubits | int | 32 | Max qubits per node |

local_gpu_mps

.add("local_gpu_mps", n_nodes=1, max_qubits=4000)
| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| n_nodes | int | 1 | Number of MPS tensor nodes |
| max_qubits | int | 4096 | Max qubits per node |

local_gpu_cuquantum

.add("local_gpu_cuquantum", n_nodes=2)
| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| n_nodes | int | 1 | Number of cuQuantum nodes |
| max_qubits | int | 36 | Max qubits per node (limited by cuQuantum memory model) |

lumi_amd_gpu

.add("lumi_amd_gpu",
     n_nodes=4, gpus_per_node=8,
     max_qubits_sv=34, max_qubits_mps=2000,
     lumi_host="lumi", lumi_project="project_465002463")

| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| n_nodes | int | required | Number of LUMI nodes |
| gpus_per_node | int | 8 | GCDs per node (max 8 on LUMI) |
| max_qubits_sv | int | 34 | Qubits per SV GCD (64 GB HBM2e) |
| max_qubits_mps | int | 2000 | Qubits per MPS node |
| lumi_host | str | "lumi" | SSH alias in ~/.ssh/config |
| lumi_project | str | "" | LUMI project ID for paths |

Registers: n_nodes × gpus_per_node SV nodes + n_nodes MPS nodes.
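Under that formula the registered node count is straightforward to compute; a tiny helper (hypothetical, for illustration only) makes the arithmetic explicit:

```python
def lumi_node_count(n_nodes: int, gpus_per_node: int = 8) -> int:
    # n_nodes * gpus_per_node statevector (SV) nodes, plus n_nodes MPS nodes
    return n_nodes * gpus_per_node + n_nodes

total = lumi_node_count(n_nodes=2, gpus_per_node=8)  # 16 SV + 2 MPS -> 18
```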


azure_quantum

.add("azure_quantum",
     azure_resource_id="...", subscription_id="...",
     targets=["rigetti.qpu.ankaa-3"], location="eastus")

| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| azure_resource_id | str | required | Full Azure Quantum workspace resource ID |
| subscription_id | str | required | Azure subscription ID |
| targets | List[str] | All available | Subset of targets to register |
| location | str | "eastus" | Azure region |

Available targets: rigetti.qpu.ankaa-3, quantinuum.qpu.h2-1, ionq.qpu.forte-1, microsoft.estimator


ibm_quantum

.add("ibm_quantum", ibm_token="...", instance="ibm-q/open/main")
| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| ibm_token | str | required | IBM Quantum API token |
| instance | str | "ibm-q/open/main" | IBM hub/group/project |
| backends | List[str] | All available | Subset of backends to register |

aws_braket

.add("aws_braket",
     aws_region="us-east-1",
     s3_bucket="my-bucket", s3_prefix="qcos-results",
     devices=["arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1"])

| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| aws_region | str | "us-east-1" | AWS region |
| s3_bucket | str | required | S3 bucket for job results |
| s3_prefix | str | "qcos" | S3 key prefix |
| devices | List[str] | All available | Braket device ARNs to register |

custom_rest

.add("custom_rest",
     endpoints=["http://host1:8888", "http://host2:8888"],
     max_qubits=30, api_key="secret", timeout_s=60)

| kwarg | Type | Default | Description |
| --- | --- | --- | --- |
| endpoints | List[str] | required | List of base URLs (POST /run-circuit expected) |
| max_qubits | int | 30 | Max qubits per endpoint |
| api_key | str | "" | Bearer token for Authorization header |
| timeout_s | int | 60 | HTTP request timeout in seconds |

Convenience factory functions

make_local_cluster

from network.node_providers import make_local_cluster

registry = make_local_cluster(
    n_cpu_nodes=4,
    n_gpu_sv_nodes=2,
    n_gpu_mps_nodes=1,
)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| n_cpu_nodes | int | 4 | local_cpu nodes |
| n_gpu_sv_nodes | int | 0 | local_gpu_aer nodes |
| n_gpu_mps_nodes | int | 0 | local_gpu_mps nodes |

make_lumi_cluster

from network.node_providers import make_lumi_cluster

registry = make_lumi_cluster(
    n_nodes=4,
    gpus_per_node=8,
    lumi_host="lumi",
    lumi_project="project_465002463",
    include_local_fallback=True,
)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| n_nodes | int | required | LUMI compute nodes |
| gpus_per_node | int | 8 | GCDs per node |
| lumi_host | str | "lumi" | SSH alias |
| lumi_project | str | "" | Project ID |
| include_local_fallback | bool | True | Add 4 local CPU nodes |

make_azure_cheap_cluster

from network.node_providers import make_azure_cheap_cluster

registry = make_azure_cheap_cluster(
    azure_resource_id="...",
    subscription_id="...",
    include_local_mps=True,
)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| azure_resource_id | str | required | Azure workspace resource ID |
| subscription_id | str | required | Azure subscription ID |
| include_local_mps | bool | True | Add 1 local MPS node (2000q fallback) |

Registers: Rigetti Ankaa-3 + Quantinuum H2-1 + Microsoft Estimator + optional MPS.


make_hybrid_cluster

from network.node_providers import make_hybrid_cluster

registry = make_hybrid_cluster(
    lumi_nodes=4,
    azure_resource_id="...",
    subscription_id="...",
    ibm_token="...",
    include_local=True,
)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| lumi_nodes | int | 2 | LUMI nodes |
| azure_resource_id | str | "" | Azure workspace resource ID |
| subscription_id | str | "" | Azure subscription ID |
| ibm_token | str | "" | IBM Quantum token |
| include_local | bool | True | Add local CPU + MPS fallback |

All Azure/IBM/LUMI params are optional — omit any to skip that provider.


NodeType enum

from network.distributed_qvm import NodeType

NodeType.GPU_AER # Local CUDA statevector (Aer)
NodeType.GPU_TENSOR # MPS tensor-network (Aer)
NodeType.CPU_AER # Local CPU statevector (Aer)
NodeType.CPU_NUMPY # Minimal numpy simulation
NodeType.QPU_IBM # IBM Quantum hardware
NodeType.QPU_IONQ # IonQ hardware
NodeType.QPU_AZURE # Azure Quantum targets
NodeType.QPU_BRAKET # AWS Braket devices