Distributed Training Dashboard

Decoded Quantum Interferometry (DQI) with Dicke States

Distributed Quantum Training Pipeline

Real-time monitoring of distributed DQI training across multiple quantum nodes.
Based on: "Towards solving industrial integer linear programs with Decoded Quantum Interferometry" (BMW Group & Boston Consulting Group, arXiv:2509.08328)

TRAINING STATUS
85% complete

ACTIVE NODES
4/4 distributed workers

DQI CONFIGURATION
n=30 variables, m=20 constraints, ℓ=8

TOTAL RESOURCES
370 qubits across 4 nodes (70–110 per node)

DQI Configuration

Variables (n): 30
Constraints (m): 20
Dicke Weight (ℓ): 8
Sparsity (ℓ/n): 26.7%
Search Space C(n, ℓ): 5.9M
Reduction Factor 2^n / C(n, ℓ): 183x

Node 1 - Vehicle Option Pricing (ILP)

Progress: 100% • Constraints: 15/15 • Qubits: 70
CNOT Gates: 5,000 • Toffoli Gates: 750 • Circuit Depth: 355 • Sparsity Advantage: 67x

Node 2 - max-XORSAT Decoding

Progress: 100% • Constraints: 20/20 • Qubits: 100
CNOT Gates: 6,300 • Toffoli Gates: 880 • Circuit Depth: 773 • Sparsity Advantage: 183x

Node 3 - BP1 Decoder Circuit

Progress: 100% • Constraints: 18/18 • Qubits: 90
CNOT Gates: 5,800 • Toffoli Gates: 820 • Circuit Depth: 520 • Sparsity Advantage: 125x

Node 4 - LDPC Code Structure

Progress: 100% • Constraints: 22/22 • Qubits: 110
CNOT Gates: 7,200 • Toffoli Gates: 950 • Circuit Depth: 890 • Sparsity Advantage: 210x

Average Fraction of Satisfied Constraints
[Chart: performance comparison of DQI vs Gurobi vs random sampling; from arXiv:2509.08328, Figure 11]

Quantum Resource Scaling
[Chart: qubits and gate counts vs problem size m·n; from arXiv:2509.08328, Section 7.2]

Quantum Resources

Total Qubits: 370
Total CNOTs: 24,300
Total Toffolis: 3,400
Avg Circuit Depth: 635
Avg Sparsity Advantage: 146x

Technical Summary: DQI Algorithm (arXiv:2509.08328)

Algorithm Overview: Decoded Quantum Interferometry (DQI) converts optimization problems into decoding problems using quantum interference patterns and classical decoding techniques. The algorithm leverages the quantum Fourier transform to amplify probabilities of high-quality solutions.

Pipeline (steps 1–3 are sketched classically after the list):

  1. Dicke State Preparation: Initialize the message register in the Dicke superposition |D^m_≤ℓ⟩ over all m-bit strings with Hamming weight ≤ ℓ
  2. Phase Encoding: Apply Z gates that imprint problem-specific phases determined by the target vector v
  3. Syndrome Encoding: Compute Bᵀy into the syndrome register (n qubits), where B is the m×n parity-check matrix
  4. Decoding: Coherently uncompute the message register using the binary belief propagation (BP1) decoder
  5. Hadamard Transform: Apply Hadamard gates to the syndrome register to create interference
  6. Measurement: Measure and post-process, keeping only instances where the message register is |0⟩
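
A minimal classical sketch of steps 1–3 (B, v, and all sizes here are made-up toy values; the actual circuit holds this data in superposition over the m message qubits rather than enumerating it):

```python
# Classical enumeration of the state after pipeline steps 1-3.
# Toy instance; the dashboard's configuration uses m=20, n=30, l=8.
import itertools
import numpy as np

m, n, ell = 6, 4, 2
rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(m, n))    # made-up constraint/parity-check matrix
v = rng.integers(0, 2, size=m)         # made-up target vector

support = []                           # one (y, phase, syndrome) per branch
for k in range(ell + 1):               # step 1: Hamming weight 0..l (Dicke support)
    for ones in itertools.combinations(range(m), k):
        y = np.zeros(m, dtype=int)
        y[list(ones)] = 1
        phase = (-1) ** int(y @ v % 2) # step 2: phase (-1)^(v.y)
        syndrome = (B.T @ y) % 2       # step 3: B^T y in the n-qubit register
        support.append((y, phase, syndrome))

print(f"Dicke support: {len(support)} of {2**m} patterns")  # 22 of 64
```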

Problem Transformation: Industrial 0-1 Integer Linear Programs (ILPs) are transformed to max-XORSAT instances (Bx = v mod 2), then mapped to LDPC codes for decoding. The parity-check matrix B defines the code structure.
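
To make the max-XORSAT objective concrete, here is a short sketch (with a made-up 3-variable instance, not one of the paper's industrial ILPs) that counts the satisfied parity constraints of Bx = v (mod 2):

```python
# Evaluate a max-XORSAT instance: count rows of Bx = v (mod 2) that hold.
import numpy as np

B = np.array([[1, 1, 0],   # constraint 1: x0 + x1      = v0
              [0, 1, 1],   # constraint 2:      x1 + x2 = v1
              [1, 0, 1]])  # constraint 3: x0 +      x2 = v2
v = np.array([1, 0, 1])

def satisfied(x):
    """Number of parity constraints satisfied by assignment x."""
    return int(np.sum((B @ x) % 2 == v))

print(satisfied(np.array([1, 0, 0])))  # -> 3 (all constraints hold)
```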

Key Innovation: This paper provides the first detailed quantum circuit implementation of binary belief propagation (BP1) as a coherent decoder within the DQI framework, enabling end-to-end quantum optimization for industrial ILPs.

Why Dicke States Benefit BP1/BP2 Decoders

1. Exponential Search Space Reduction:

The Dicke state |D^m_≤ℓ⟩ restricts the search to bit strings with Hamming weight ≤ ℓ. Instead of searching all 2^m possible error patterns, we only consider Σ_{k=0}^{ℓ} C(m,k) patterns. For m=20, ℓ=5 this reduces 1,048,576 states to 21,700 states, a 48× reduction.
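
These counts are plain binomial arithmetic, reproducible in a few lines:

```python
# Search-space reduction from the Dicke weight cap: sum_{k<=l} C(m,k) vs 2^m.
from math import comb

m, ell = 20, 5
restricted = sum(comb(m, k) for k in range(ell + 1))
print(restricted)             # 21,700 weight-<=5 patterns
print(2**m / restricted)      # ~48.3x smaller than 1,048,576
```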

2. Sparsity Prior Encodes Problem Structure:

The parameter ℓ encodes domain knowledge: "I expect at most ℓ constraints to be violated" or "at most ℓ features to be active." This sparsity prior aligns with real-world problems where solutions are naturally sparse (few active constraints/features). BP decoders perform better when the error patterns they need to decode are sparse.

3. Qubit Distribution Enables Parallel Decoding:

  • Message Register (m qubits): Encodes all possible error patterns y simultaneously in superposition. Each qubit corresponds to one constraint in the max-XORSAT problem.
  • Syndrome Register (n qubits): Stores the computed syndrome Bᵀy for each error pattern. Each qubit corresponds to one variable in the optimization problem.
  • Parallel Processing: The quantum superposition allows BP1/BP2 to decode all error patterns in parallel, rather than sequentially. The decoder operates coherently on the entire superposition.

4. How BP1/BP2 Leverage the Dicke Structure:

  • BP1 (Binary Hard-Decision): Operates on the syndrome register to recover error patterns. The Dicke constraint ensures it only needs to decode sparse errors (≤ ℓ bit flips), which is easier than decoding dense errors (a classical analogue is sketched after this list).
  • BP2 (Soft Belief Propagation): Uses probability information, but still benefits from the sparsity prior—it can focus computational resources on the most likely sparse error patterns.
  • Decoding Success: The paper shows that "increasing ℓ improves the number of satisfied constraints" because larger ℓ allows considering more error patterns, but only if the decoder can reliably decode that many errors (decoder success rate decreases with ℓ).
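
The BP1 decoder in the paper is a coherent belief-propagation circuit; purely as a classical intuition pump, here is a minimal hard-decision bit-flipping decoder (a simpler technique than BP, chosen for brevity) that recovers a sparse error from its syndrome:

```python
# Classical hard-decision bit-flipping decoder: intuition for why sparse
# (low-weight) errors are easy to decode. NOT the paper's BP1 circuit.
import numpy as np

def bit_flip_decode(H, s, max_iters=50):
    """Greedy syndrome decoding: repeatedly flip the bit involved in the
    most currently-unsatisfied checks until the syndrome s is matched."""
    y = np.zeros(H.shape[1], dtype=int)
    for _ in range(max_iters):
        residual = (H @ y + s) % 2           # checks still unsatisfied
        if not residual.any():
            return y                         # syndrome matched
        votes = H.T @ residual               # unsatisfied checks per bit
        y[int(np.argmax(votes))] ^= 1        # flip the worst offender
    return None                              # decoder failure

# Hamming(7,4) check matrix: column j is the binary expansion of j.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
y_true = np.array([0, 0, 0, 0, 1, 0, 0])     # sparse error, weight 1
print(bit_flip_decode(H, (H @ y_true) % 2))  # recovers y_true
```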

5. Resource Efficiency:

The Dicke state preparation doesn't require additional qubits—it's just a specific superposition over the existing m message qubits. The qubit count scales as O(m + n) where m = number of constraints and n = number of variables. This is independent of ℓ, making it resource-efficient compared to exhaustive search.
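
A trivial bookkeeping check (the decoder's ancilla workspace, which the per-node qubit counts above include, is ignored here):

```python
# Register widths in DQI: m message qubits + n syndrome qubits. The Dicke
# weight l changes the prepared state, not the number of qubits.
def register_qubits(m: int, n: int) -> int:
    return m + n   # message register + syndrome register

for ell in (2, 5, 8):
    print(ell, register_qubits(20, 30))   # 50 qubits regardless of l
```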

6. Quantum Interference Amplification:

After decoding, the Hadamard transform creates constructive interference on solutions that satisfy many constraints. The Dicke state ensures this interference happens over a reduced, structured search space, making high-quality solutions more likely to be measured.
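
A brute-force illustration, assuming an idealized (always-successful) decoder and the simplest non-trivial case ℓ = 1 with weights w(|y|=0) = m and w(|y|=1) = 1 (the general algorithm optimizes the weights over larger ℓ): the post-Hadamard amplitude of each assignment x is proportional to Σ_y w(|y|)·(−1)^{y·(v ⊕ Bx)}, which for this weight choice works out to 2·f(x), so measurement probability grows as f(x)².

```python
# Brute-force the post-Hadamard DQI amplitudes on a toy max-XORSAT instance
# (idealized decoder, l=1, weights w0=m and w1=1). The interference sum
# equals 2*f(x), so probability mass concentrates on high-quality x.
import itertools
import numpy as np

m, n = 8, 5
rng = np.random.default_rng(3)
B = rng.integers(0, 2, size=(m, n))          # made-up instance
v = rng.integers(0, 2, size=m)

ys = np.vstack([np.zeros((1, m), dtype=int),
                np.eye(m, dtype=int)])       # all y with |y| <= 1
w = np.array([m] + [1] * m)                  # weights w(|y|)

seen = set()
for x in itertools.product((0, 1), repeat=n):
    e = (B @ np.array(x) + v) % 2            # 1 where a constraint is violated
    amp = int(np.sum(w * (-1) ** (ys @ e % 2)))  # interference over Dicke support
    seen.add((m - int(e.sum()), amp))        # (f(x), amplitude)

for f, amp in sorted(seen):
    print(f, amp)                            # amp == 2*f for every x
```

With optimized weights and larger ℓ, the amplitude becomes a degree-ℓ polynomial in f(x), which is what lets DQI bias measurement outcomes beyond what simple sampling achieves.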

Key Insight: The Dicke state formulation transforms an unstructured optimization problem into a structured decoding problem with explicit sparsity. BP decoders excel at decoding sparse errors in LDPC codes, and the Dicke state ensures the errors are sparse by construction. This synergy between problem structure (sparsity) and decoder capability (LDPC decoding) is what enables quantum advantage.

Training Log (Real-time)

Connecting to distributed training process...
