Provider API Reference
Complete reference for all classes and functions in src/network/node_providers.py.
NodeProvider (ABC)
Abstract base class for all providers. Import from network.node_providers.
from network.node_providers import NodeProvider
Abstract methods
build_nodes(**kwargs) → List[QPUNodeSpec]
Build and return the list of nodes this provider manages.
| Parameter | Type | Description |
|---|---|---|
| **kwargs | any | Provider-specific keyword arguments passed via ClusterBuilder.add() |
Returns: List[QPUNodeSpec] — must contain at least one node.
Raises: ValueError if required kwargs are missing or invalid.
Optional methods
execute(circuit, shots, node_spec) → Optional[Dict[str, int]]
Custom circuit execution. Override this to run circuits on your hardware.
| Parameter | Type | Description |
|---|---|---|
| circuit | QuantumCircuit | Qiskit circuit to execute |
| shots | int | Number of measurement repetitions |
| node_spec | QPUNodeSpec | The node selected for this shard |
Returns: Dict[str, int] counts (e.g. {"0000": 512, "1111": 512}). Return None or raise NotImplementedError to fall back to the built-in Aer executor.
health_check() → bool
Liveness check called during ClusterBuilder.build().
Returns: True if the provider is reachable, False to mark all nodes offline.
Default: Returns True (always healthy).
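A minimal sketch of a custom provider subclass, using simplified stand-ins for NodeProvider and QPUNodeSpec so the example runs on its own (in real code, import NodeProvider from network.node_providers and QPUNodeSpec from network.distributed_qvm; the stand-ins carry only a subset of the documented fields):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List

# Simplified stand-ins so this sketch is self-contained; the real
# classes live in network.node_providers / network.distributed_qvm.
@dataclass
class QPUNodeSpec:
    node_id: str
    max_qubits: int
    tags: List[str] = field(default_factory=list)

class NodeProvider(ABC):
    @abstractmethod
    def build_nodes(self, **kwargs) -> List[QPUNodeSpec]: ...

    def health_check(self) -> bool:
        return True  # default: always healthy

class MyGPUProvider(NodeProvider):
    def build_nodes(self, **kwargs) -> List[QPUNodeSpec]:
        n_nodes = kwargs.get("n_nodes", 1)
        if n_nodes < 1:
            # build_nodes must return at least one node
            raise ValueError("n_nodes must be >= 1")
        return [
            QPUNodeSpec(node_id=f"my-gpu-{i}", max_qubits=32, tags=["my_gpu"])
            for i in range(n_nodes)
        ]

nodes = MyGPUProvider().build_nodes(n_nodes=2)
print([n.node_id for n in nodes])  # ['my-gpu-0', 'my-gpu-1']
```

The node_id scheme and max_qubits value here are illustrative, not prescribed by the API.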
NodeProviderRegistry
Global registry for provider classes. All methods are classmethods.
from network.node_providers import NodeProviderRegistry
Methods
register(name, cls) → None
Register a provider class under a string key.
| Parameter | Type | Description |
|---|---|---|
| name | str | Unique string key (e.g. "my_gpu") |
| cls | Type[NodeProvider] | Provider class (not instance) |
NodeProviderRegistry.register("my_gpu", MyGPUProvider)
get(name) → Type[NodeProvider]
Look up a provider class by name.
| Parameter | Type | Description |
|---|---|---|
| name | str | Previously registered key |
Returns: Provider class.
Raises: KeyError if name not found.
list() → Dict[str, str]
Return all registered providers as {name: DISPLAY_NAME}.
providers = NodeProviderRegistry.list()
# {"local_cpu": "Local CPU (Aer)", "lumi_amd_gpu": "LUMI AMD MI250X", ...}
instantiate(name, **kwargs) → Tuple[NodeProvider, List[QPUNodeSpec]]
Instantiate a provider and call build_nodes(**kwargs) in one step.
| Parameter | Type | Description |
|---|---|---|
| name | str | Provider key |
| **kwargs | any | Forwarded to build_nodes() |
Returns: (provider_instance, node_list) tuple.
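The register / get / instantiate flow can be sketched with a minimal stand-in registry (this mirrors the documented behavior but is not the real NodeProviderRegistry implementation):

```python
from typing import Dict

# Minimal stand-in mirroring the NodeProviderRegistry flow.
class ProviderRegistry:
    _providers: Dict[str, type] = {}

    @classmethod
    def register(cls, name: str, provider_cls: type) -> None:
        cls._providers[name] = provider_cls

    @classmethod
    def get(cls, name: str) -> type:
        # Raises KeyError if name was never registered
        return cls._providers[name]

    @classmethod
    def instantiate(cls, name: str, **kwargs):
        provider = cls.get(name)()          # look up and construct
        return provider, provider.build_nodes(**kwargs)

# Hypothetical provider used only for this demo
class EchoProvider:
    def build_nodes(self, **kwargs):
        return [f"node-{i}" for i in range(kwargs.get("n_nodes", 1))]

ProviderRegistry.register("echo", EchoProvider)
provider, nodes = ProviderRegistry.instantiate("echo", n_nodes=3)
print(nodes)  # ['node-0', 'node-1', 'node-2']
```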
ClusterBuilder
Fluent builder for QPURegistry with multi-provider support.
from network.node_providers import ClusterBuilder
Methods
add(provider_name, **kwargs) → ClusterBuilder
Add a provider slot to the builder. Returns self for chaining.
| Parameter | Type | Description |
|---|---|---|
| provider_name | str | Registered provider key |
| **kwargs | any | Passed to provider.build_nodes() |
builder = ClusterBuilder().add("local_cpu", n_nodes=4).add("lumi_amd_gpu", n_nodes=2)
build() → QPURegistry
Execute all providers and build the final registry.
For each slot:
- Instantiates the provider
- Calls health_check(); marks the slot's nodes offline if it returns False
- Calls build_nodes(**kwargs); adds the returned nodes to the registry
- Binds the provider instance to each node via registry.register(spec, provider)
Returns: QPURegistry ready for use with DistributedQVM.
describe() → str
Return a human-readable cluster plan without executing any provider.
print(ClusterBuilder()
.add("lumi_amd_gpu", n_nodes=2, gpus_per_node=8)
.add("local_cpu", n_nodes=4)
.describe())
Cluster Plan:
[0] lumi_amd_gpu — n_nodes=2, gpus_per_node=8
[1] local_cpu — n_nodes=4
Estimated nodes: 22 (16 SV + 2 MPS + 4 CPU fallback)
QPUNodeSpec fields
QPUNodeSpec is a dataclass defined in distributed_qvm.py. All built-in providers construct specs with these fields:
| Field | Type | Description |
|---|---|---|
| node_id | str | Unique identifier (e.g. "lumi-n00-gcd0") |
| node_type | NodeType | Enum: GPU_AER, GPU_TENSOR, CPU_AER, CPU_NUMPY, QPU_IBM, QPU_IONQ, QPU_AZURE, QPU_BRAKET |
| max_qubits | int | Maximum circuit width this node can handle |
| tags | List[str] | Searchable labels (e.g. ["lumi", "rocm"]) |
| credentials | Dict[str, Any] | Provider-specific auth data |
| online | bool | Set by ClusterBuilder based on health_check() |
| metadata | Dict[str, Any] | Extra info (pricing, region, etc.) |
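Constructing a spec with these fields looks roughly like the following. The dataclass and enum here are stand-ins mirroring the table so the sketch is self-contained; in real code import QPUNodeSpec and NodeType from network.distributed_qvm. The field defaults (empty collections, online=True) are assumptions for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List

# Stand-ins mirroring the documented fields; the real definitions
# live in distributed_qvm.py. Only a few NodeType members are shown.
class NodeType(Enum):
    GPU_AER = "gpu_aer"
    GPU_TENSOR = "gpu_tensor"
    CPU_AER = "cpu_aer"

@dataclass
class QPUNodeSpec:
    node_id: str
    node_type: NodeType
    max_qubits: int
    tags: List[str] = field(default_factory=list)
    credentials: Dict[str, Any] = field(default_factory=dict)
    online: bool = True  # assumed default; set by ClusterBuilder
    metadata: Dict[str, Any] = field(default_factory=dict)

spec = QPUNodeSpec(
    node_id="lumi-n00-gcd0",
    node_type=NodeType.GPU_AER,
    max_qubits=34,
    tags=["lumi", "rocm"],
    metadata={"region": "eu-north"},
)
print(spec.online)  # True (until a failed health_check() flips it)
```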
Built-in provider kwargs reference
local_cpu
.add("local_cpu", n_nodes=4)
| kwarg | Type | Default | Description |
|---|---|---|---|
| n_nodes | int | 1 | Number of CPU simulation nodes |
| max_qubits | int | 30 | Max qubits per node |
local_gpu_aer
.add("local_gpu_aer", n_nodes=2)
| kwarg | Type | Default | Description |
|---|---|---|---|
| n_nodes | int | 1 | Number of CUDA statevector nodes |
| max_qubits | int | 32 | Max qubits per node |
local_gpu_mps
.add("local_gpu_mps", n_nodes=1, max_qubits=4000)
| kwarg | Type | Default | Description |
|---|---|---|---|
| n_nodes | int | 1 | Number of MPS tensor nodes |
| max_qubits | int | 4096 | Max qubits per node |
local_gpu_cuquantum
.add("local_gpu_cuquantum", n_nodes=2)
| kwarg | Type | Default | Description |
|---|---|---|---|
| n_nodes | int | 1 | Number of cuQuantum nodes |
| max_qubits | int | 36 | Max qubits per node (limited by cuQuantum memory model) |
lumi_amd_gpu
.add("lumi_amd_gpu",
n_nodes=4, gpus_per_node=8,
max_qubits_sv=34, max_qubits_mps=2000,
lumi_host="lumi", lumi_project="project_465002463")
| kwarg | Type | Default | Description |
|---|---|---|---|
| n_nodes | int | required | Number of LUMI nodes |
| gpus_per_node | int | 8 | GCDs per node (max 8 on LUMI) |
| max_qubits_sv | int | 34 | Qubits per SV GCD (64 GB HBM2e) |
| max_qubits_mps | int | 2000 | Qubits per MPS node |
| lumi_host | str | "lumi" | SSH alias in ~/.ssh/config |
| lumi_project | str | "" | LUMI project ID for paths |
Registers: n_nodes × gpus_per_node SV nodes + n_nodes MPS nodes.
azure_quantum
.add("azure_quantum",
azure_resource_id="...", subscription_id="...",
targets=["rigetti.qpu.ankaa-3"], location="eastus")
| kwarg | Type | Default | Description |
|---|---|---|---|
| azure_resource_id | str | required | Full Azure Quantum workspace resource ID |
| subscription_id | str | required | Azure subscription ID |
| targets | List[str] | All available | Subset of targets to register |
| location | str | "eastus" | Azure region |
Available targets: rigetti.qpu.ankaa-3, quantinuum.qpu.h2-1, ionq.qpu.forte-1, microsoft.estimator
ibm_quantum
.add("ibm_quantum", ibm_token="...", instance="ibm-q/open/main")
| kwarg | Type | Default | Description |
|---|---|---|---|
| ibm_token | str | required | IBM Quantum API token |
| instance | str | "ibm-q/open/main" | IBM hub/group/project |
| backends | List[str] | All available | Subset of backends to register |
aws_braket
.add("aws_braket",
aws_region="us-east-1",
s3_bucket="my-bucket", s3_prefix="qcos-results",
devices=["arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1"])
| kwarg | Type | Default | Description |
|---|---|---|---|
| aws_region | str | "us-east-1" | AWS region |
| s3_bucket | str | required | S3 bucket for job results |
| s3_prefix | str | "qcos" | S3 key prefix |
| devices | List[str] | All available | Braket device ARNs to register |
custom_rest
.add("custom_rest",
endpoints=["http://host1:8888", "http://host2:8888"],
max_qubits=30, api_key="secret", timeout_s=60)
| kwarg | Type | Default | Description |
|---|---|---|---|
| endpoints | List[str] | required | List of base URLs (POST /run-circuit expected) |
| max_qubits | int | 30 | Max qubits per endpoint |
| api_key | str | "" | Bearer token for Authorization header |
| timeout_s | int | 60 | HTTP request timeout in seconds |
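A minimal sketch of an endpoint a custom_rest slot could talk to. Only the POST /run-circuit route and Bearer auth are documented above; the JSON request/response shape used here (a "shots" field in, a "counts" map out) is an assumption for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CircuitHandler(BaseHTTPRequestHandler):
    API_KEY = "secret"  # matches the api_key kwarg passed to .add()

    def do_POST(self):
        if self.path != "/run-circuit":
            self.send_error(404)
            return
        # custom_rest sends the api_key as a Bearer token
        if self.headers.get("Authorization", "") != f"Bearer {self.API_KEY}":
            self.send_error(401)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Run the circuit here; this stub just echoes all shots as "00".
        counts = {"00": payload.get("shots", 1024)}
        body = json.dumps({"counts": counts}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve:
# HTTPServer(("0.0.0.0", 8888), CircuitHandler).serve_forever()
```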
Convenience factory functions
make_local_cluster
from network.node_providers import make_local_cluster
registry = make_local_cluster(
n_cpu_nodes=4,
n_gpu_sv_nodes=2,
n_gpu_mps_nodes=1,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| n_cpu_nodes | int | 4 | local_cpu nodes |
| n_gpu_sv_nodes | int | 0 | local_gpu_aer nodes |
| n_gpu_mps_nodes | int | 0 | local_gpu_mps nodes |
make_lumi_cluster
from network.node_providers import make_lumi_cluster
registry = make_lumi_cluster(
n_nodes=4,
gpus_per_node=8,
lumi_host="lumi",
lumi_project="project_465002463",
include_local_fallback=True,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| n_nodes | int | required | LUMI compute nodes |
| gpus_per_node | int | 8 | GCDs per node |
| lumi_host | str | "lumi" | SSH alias |
| lumi_project | str | "" | Project ID |
| include_local_fallback | bool | True | Add 4 local CPU nodes |
make_azure_cheap_cluster
from network.node_providers import make_azure_cheap_cluster
registry = make_azure_cheap_cluster(
azure_resource_id="...",
subscription_id="...",
include_local_mps=True,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| azure_resource_id | str | required | Azure workspace resource ID |
| subscription_id | str | required | Azure subscription ID |
| include_local_mps | bool | True | Add 1 local MPS node (2000q fallback) |
Registers: Rigetti Ankaa-3 + Quantinuum H2-1 + Microsoft Estimator + optional MPS.
make_hybrid_cluster
from network.node_providers import make_hybrid_cluster
registry = make_hybrid_cluster(
lumi_nodes=4,
azure_resource_id="...",
subscription_id="...",
ibm_token="...",
include_local=True,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| lumi_nodes | int | 2 | LUMI nodes |
| azure_resource_id | str | "" | Azure workspace resource ID |
| subscription_id | str | "" | Azure subscription ID |
| ibm_token | str | "" | IBM Quantum token |
| include_local | bool | True | Add local CPU + MPS fallback |
All Azure/IBM/LUMI params are optional — omit any to skip that provider.
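The omit-to-skip behavior can be sketched as a plain function: a provider slot is added only when its credential or count is truthy. plan_hybrid and its slot tuples are illustrative stand-ins, not the real make_hybrid_cluster internals:

```python
# Illustrative sketch of the omit-to-skip pattern; not the real
# make_hybrid_cluster implementation.
def plan_hybrid(lumi_nodes=0, azure_resource_id="", ibm_token=""):
    slots = []
    if lumi_nodes:
        slots.append(("lumi_amd_gpu", {"n_nodes": lumi_nodes}))
    if azure_resource_id:
        slots.append(("azure_quantum", {"azure_resource_id": azure_resource_id}))
    if ibm_token:
        slots.append(("ibm_quantum", {"ibm_token": ibm_token}))
    return slots

# Azure omitted, so only LUMI and IBM slots are planned:
print([name for name, _ in plan_hybrid(lumi_nodes=2, ibm_token="tok")])
# ['lumi_amd_gpu', 'ibm_quantum']
```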
NodeType enum
from network.distributed_qvm import NodeType
NodeType.GPU_AER # Local CUDA statevector (Aer)
NodeType.GPU_TENSOR # MPS tensor-network (Aer)
NodeType.CPU_AER # Local CPU statevector (Aer)
NodeType.CPU_NUMPY # Minimal numpy simulation
NodeType.QPU_IBM # IBM Quantum hardware
NodeType.QPU_IONQ # IonQ hardware
NodeType.QPU_AZURE # Azure Quantum targets
NodeType.QPU_BRAKET # AWS Braket devices