Interactive Design & Performance Visualization Tool
- Detection time comparison: classical vs quantum
  - Option A: same detection probability, 10× faster
  - Option B: 60% → 95% detection probability improvement
- Detection probability (Pd) and miss probability (Pm = 1 − Pd) vs squeezing level (1 dB increments)
- Quantum enhancement factor as a function of detection range
- Number of chirps required for reliable detection: classical vs quantum
This section describes how Artemis (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver) enables electromagnetic modeling of terahertz waves for quantum LIDAR system development. Artemis provides the physical foundation for understanding wave propagation, waveguide design, target scattering, and photonic component optimization.
Value Proposition: Quantum LIDAR achieves classical long-range performance (like SILC's 9km with 16× lines) with 8× lower cost and power through waveform utilization efficiency (QFI/GQI), not brute-force scaling. This enables cost-effective, power-efficient automotive LIDAR systems capable of detecting rare objects at extended ranges.
Artemis Performance: With 59× GPU speedup and excellent scaling, Artemis can handle these workloads on Perlmutter's 6,144 NVIDIA A100 GPUs, enabling full quantum LIDAR system simulation in hours rather than months.
The following sensor specifications drive the computational requirements for Artemis FDTD simulation:
| Parameter | Ultra-High-Res (1mm) | Automotive (10mm) |
|---|---|---|
| Wavelength | 1550 nm (193.4 THz) | 1550 nm (193.4 THz) |
| Range Resolution | 1 mm | 10 mm |
| Chirp Bandwidth | 150 GHz | 15 GHz |
| Sweep Rate | 30 kHz | 5 kHz |
| Maximum Range | 50-100 m | 100-200 m |
| Spatial Modes | 1000 range bins | 100 range bins |
| Data Rate | ~30 GB/s | ~500 MB/s |
| Use Case | Medical imaging, industrial inspection, security screening | Automotive, robotics, navigation |
Ultra-High-Res (1 mm) simulation requirements:

| Parameter | Value |
|---|---|
| Grid Cells/Dimension | 65M cells |
| Points/Wavelength | 100 pts (10× finer) |
| 3D Grid Total | 275 zetta-cells |
| Time Steps | 0.1 fs × 1M = 100 ps |
| Memory | ~200 TB |
| Compute/Sim | 275 exa-ops |
| Parameter Sweeps | 10k combinations |
| Total Compute | 2.75 yotta-ops |
Automotive (10 mm) simulation requirements:

| Parameter | Value |
|---|---|
| Grid Cells/Dimension | 6.5M cells |
| Points/Wavelength | 10 pts |
| 3D Grid Total | 275 exa-cells |
| Time Steps | 1 fs × 100k = 100 ps |
| Memory | ~2 TB |
| Compute/Sim | 27.5 peta-ops |
| Parameter Sweeps | 10k combinations |
| Total Compute | 275 exa-ops |
10 mm resolution enables detection of small targets (cyclists, pedestrians) at automotive ranges (20-200m)
100-200m range requires 6.5M cells per dimension for accurate FDTD simulation
100 range bins (modes) enable quantum-enhanced detection through spatial multiplexing
10,000 combinations (squeezing × efficiency × modes × ranges) require quantum-accelerated optimization
Summary: The FMCW LIDAR sensor operates at 1550 nm wavelength with 10 mm range resolution and 100-200m maximum range. These specifications directly map to Artemis FDTD simulation requirements: 100m range × 10 points/wavelength = 6.5M cells per dimension, resulting in 275 exa-cells for full 3D simulation. With 100 spatial modes (range bins) and parameter sweeps across squeezing levels, detection efficiency, mode counts, and ranges, the total computational requirement reaches roughly 275 exa-operations. Artemis's GPU-accelerated architecture on Perlmutter (6,144 NVIDIA A100 GPUs) enables these simulations to complete in hours rather than months, making full quantum LIDAR system design optimization practical.
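The sizing chain above is plain multiplication, so it can be reproduced with a short script. The inputs below are taken from the automotive (10 mm) column of the table; the script only performs the unit conversions.

```python
# Reproduce the automotive (10 mm) FDTD sizing from the table above.
cells_per_dim = 6.5e6               # grid cells along one dimension
total_cells = cells_per_dim ** 3    # full 3D FDTD grid
ops_per_sim = 27.5e15               # stated compute per simulation (27.5 peta-ops)
sweeps = 10_000                     # squeezing x efficiency x modes x ranges
total_ops = ops_per_sim * sweeps    # full parameter-sweep cost

print(f"3D grid:    {total_cells / 1e18:.0f} exa-cells")   # ~275
print(f"sweep cost: {total_ops / 1e18:.0f} exa-ops")
```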
Quantum LIDAR systems require squeezed light generation implemented on photonic integrated circuits (PICs). Each chipset implements squeezing operations through specific photonic components:
Chipset Implementation: Single TFLN chip (~5×5 mm) with integrated OPO resonator, pump coupler, and output waveguide.
Chipset Implementation: Multi-mode waveguide network on single TFLN chip, supporting 100+ spatial modes (range bins).
Chipset Implementation: Integrated homodyne detection chip (~3×3 mm) with balanced photodiodes, phase modulators, and readout electronics.
| Component | Chipset Size | Squeezing Operations | Modes Supported |
|---|---|---|---|
| OPO Squeezing Source | 5×5 mm | 1 OPO resonator; pump: 1550 nm; squeezing: 0-10 dB | Single mode output |
| Waveguide Network | 10×10 mm | Mode multiplexing; dispersion compensation; loss minimization | 100+ spatial modes |
| Homodyne Detection | 3×3 mm | 100× balanced detectors; phase modulators; signal processing | 100 range bins |
| Complete System | 20×20 mm | Full squeezing chain; multi-mode operation; real-time processing | 100 modes simultaneously |
The hybrid optimization stack combines classical electromagnetic simulation (Artemis) with quantum-accelerated parameter optimization to find optimal photonic circuit designs.
Classical FDTD Processing - Fully solvable with classical computing
Output: Loss coefficients, mode matching, detection efficiency, SNR
Hybrid Processing - Mix of classical and quantum-accelerated
Hybrid Approach: Parameter extraction is classical, but optimization across millions of combinations benefits from quantum-accelerated search algorithms (QAOA, VQE) running on quantum hardware (IonQ, Rigetti) to find optimal waveguide dimensions, mode matching configurations, and detection parameters.
Output: JSON file with EM parameters for quantum simulation
The hybrid optimization stack requires quantum hardware to solve combinatorial optimization problems that are intractable for classical computers. Important: These are logical qubit requirements for gate-based systems (IonQ, Rigetti).
| Optimization Task | Parameter Space | Logical Qubits | Physical Qubits Required | Quantum Hardware |
|---|---|---|---|---|
| Waveguide Dimensions | Width: 0.5-2.0 μm (30 steps); height: 0.3-1.0 μm (20 steps) | 10-15 logical (log₂(600)) | 100-300 (IonQ); 1,000-3,000 (Rigetti) | IonQ Aria (20 phys) |
| Mode Matching Configuration | 100 modes × 10 gap positions × 5 phase settings | 12-16 logical (log₂(5000)) | 120-320 (IonQ); 1,200-3,200 (Rigetti) | Rigetti Aspen (80 phys) |
| OPO Resonator Design | Length: 1-10 mm (50 steps); coupling: 0.01-0.1 (20 steps) | 10-13 logical (log₂(1000)) | 100-260 (IonQ); 1,000-2,600 (Rigetti) | IonQ Aria (20 phys) |
| Complete System Optimization | All parameters combined: 600 × 5000 × 1000 = 3×10⁹ | 32-36 logical (log₂(3×10⁹)) | 320-720 (IonQ); 3,200-7,200 (Rigetti) | IonQ Forte (32 phys) |
| Multi-Objective Optimization | Pareto front search: 10 objectives × 1000 points | 14-18 logical (log₂(10,000)) | 140-360 (IonQ); 1,400-3,600 (Rigetti) | Rigetti Aspen (80 phys) |
Current Hardware Status:
Reality Check: For complete system optimization (32-36 logical qubits), gate-based systems need error-corrected logical qubits, requiring 100-1000× more physical qubits than currently available. Current hardware can handle smaller sub-problems, with full system optimization requiring future quantum hardware with improved error correction.
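The logical-qubit counts in the table above follow from the size of each search space: encoding N candidate configurations requires at least ⌈log₂ N⌉ qubits, with ancilla and algorithmic overhead accounting for the upper ends of the quoted ranges. A minimal check of the lower bounds:

```python
import math

def min_logical_qubits(space_size: int) -> int:
    """Qubits needed just to index every point of a discrete search space."""
    return math.ceil(math.log2(space_size))

waveguide = 30 * 20               # width steps x height steps = 600
mode_matching = 100 * 10 * 5      # modes x gap positions x phase settings
full_system = 600 * 5000 * 1000   # combined space, 3x10^9

print(min_logical_qubits(waveguide))      # 10
print(min_logical_qubits(mode_matching))  # 13
print(min_logical_qubits(full_system))    # 32
```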
Does the quantum optimization layer solve the full Artemis FDTD electromagnetic simulation?
No. The quantum optimization layer does not replace or solve the Artemis FDTD electromagnetic simulation itself. Instead, it optimizes the parameters that Artemis simulates.
Key Distinction: Artemis performs the classical FDTD simulation (solving Maxwell's equations), while quantum optimization searches the design parameter space to find optimal configurations that Artemis then validates.
Key Insight: Artemis solves the physics (Maxwell's equations), quantum optimization solves the design space (parameter combinations). They work together: quantum finds optimal parameters, Artemis validates them with realistic physics.
Bottom Line: The quantum optimization layer optimizes parameters FOR Artemis simulations. Artemis remains the classical FDTD solver that models the electromagnetic physics. The hybrid approach combines quantum optimization (design space search) with classical simulation (physics validation).
Hardware: NVIDIA A100 GPUs on Perlmutter
Time: ~1 ms per simulation
Hardware: IonQ Forte / Rigetti Aspen
Time: ~20 μs per optimization
Integration: Classical layer generates cost functions from EM simulations, quantum layer optimizes parameter combinations, classical layer validates results. Iterative process converges to optimal photonic circuit design.
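A minimal sketch of that propose-validate-refine loop, with illustrative stand-ins for both layers (the cost surface, candidate ranges, and the search function are placeholders, not the production Artemis or QAOA/VQE interfaces):

```python
import random

def em_cost(params):
    """Stand-in for an Artemis FDTD evaluation (~1 ms on GPU):
    a toy loss with an optimum at width 1.2 um, coupling 0.05."""
    width, coupling = params
    return (width - 1.2) ** 2 + (coupling - 0.05) ** 2

def quantum_search(cost, candidates):
    """Placeholder for the quantum optimization layer (~20 us per call):
    here simply the best of the sampled candidates."""
    return min(candidates, key=cost)

random.seed(0)
best = None
for _ in range(5):  # propose (quantum) -> validate (classical) -> repeat
    candidates = [(random.uniform(0.5, 2.0), random.uniform(0.01, 0.1))
                  for _ in range(100)]
    proposal = quantum_search(em_cost, candidates)
    if best is None or em_cost(proposal) < em_cost(best):
        best = proposal
```

In the real stack the candidate set would be encoded in qubit registers and the argmin replaced by QAOA/VQE; the converge-and-validate structure is the same.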
Scaling quantum LIDAR to TB/s data rates with 1mm range resolution at multiple km and sub-200μm resolution at 100m requires quantum-enhanced ML models for real-time physical AI performance.
THz LIDAR generates massive data streams:
Classical ML bottleneck: 100GB models with 500ms inference cannot keep up with this data rate.
| Phase | Qubits | Capability | LIDAR Application | Timeline |
|---|---|---|---|---|
| Phase 1 | 20 logical | Waveform optimization | Real-time stack selection | 2025 (Now) |
| Phase 2 | 100 logical | ML model compression | 100GB → 10GB models (QSVD) | 2025-2026 |
| Phase 3 | 300-500 logical | Quantum ML training | Hybrid quantum-classical networks | 2026-2027 |
| Phase 4 | 1000+ logical | Fleet optimization | Multi-vehicle coordination | 2027-2029 |
Original Model: 90GB, 25 billion parameters, 500ms inference
Quantum Compression: QSVD on 100-qubit system (IonQ Forte) → 10GB model, 50ms inference, 95% accuracy
Impact: Enables real-time processing of 9.6 Tbit/s LIDAR data stream
Pipeline:
Training Advantage: Classical only: 3 weeks → Hybrid quantum-classical: 3 days (7× speedup)
Problem: 1000 vehicles with quantum LIDAR need coordinated optimization
Classical ML Pipeline:
Quantum-Enhanced Pipeline:
| Qubits | Capability | LIDAR Application | Data Rate |
|---|---|---|---|
| 20 | Waveform optimization | Real-time stack selection | Current (15 GHz) |
| 100 | Model compression | 100GB → 10GB models | 1-5 Tbit/s |
| 300 | Quantum ML layers | Point cloud classification | 5-10 Tbit/s |
| 500 | Multi-modal fusion | LIDAR + camera + radar | 10+ Tbit/s |
| 1000+ | Fleet optimization | 1000-vehicle coordination | City-scale |
A 10 Tbit/s data rate demands processing whose cost grows steeply with problem size. Classical ML hits a wall at these scales; for suitable subroutines (search, optimization, compression), quantum approaches scale polynomially or better.
Classical distributed ML has achieved remarkable efficiency through innovations like DeepSeek's architecture. However, quantum computing requires fundamentally different distribution strategies—workloads must be distributed across qubits via quantum networks and distributed quantum ML protocols, not simply parallelized across GPUs.
Reference: arXiv:2412.19437 - DeepSeek-V3 demonstrates how to efficiently distribute large-scale model training across classical hardware.
Distribution Innovations:
Scale Achieved:
Key Insight: DeepSeek achieves 20× cost reduction through efficient classical parallelism— data parallelism, tensor parallelism, pipeline parallelism, and expert parallelism all running simultaneously.
Classical distribution copies data freely—quantum mechanics forbids this. Quantum workloads must be entangled across processors, not replicated.
| Aspect | Classical (DeepSeek) | Quantum Equivalent |
|---|---|---|
| Data Distribution | Copy tensors to all GPUs | Distribute entangled qubits (no-cloning) |
| Gradient Sync | AllReduce across nodes | Teleportation-based parameter updates |
| Communication | InfiniBand/NVLink (~400 GB/s) | Bell pair distribution (~1-100 kHz current) |
| Memory | HBM (80-192 GB/GPU) | Quantum memory (T₁ ~ 100 µs - 1 ms) |
| Pipeline Overlap | DualPipe (compute ↔ comm) | Parallel entanglement distillation |
| Error Handling | ECC memory, checkpoints | Quantum error correction (surface codes) |
┌─────────────────────────────────────────────────────────────────────────────────┐
│ DISTRIBUTED QUANTUM ML vs CLASSICAL DISTRIBUTED ML │
├─────────────────────────────────────────────────────────────────────────────────┤
│ │
│ CLASSICAL (DeepSeek) QUANTUM EQUIVALENT │
│ ═══════════════════ ══════════════════ │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Data Parallel │ │ Entanglement │ │
│ │ (copy batches) │ → │ Distribution │ │
│ └────────┬────────┘ │ (Bell pairs) │ │
│ │ └────────┬────────┘ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Tensor Parallel │ │ Distributed │ │
│ │ (split layers) │ → │ Quantum Gates │ │
│ └────────┬────────┘ │ (teleportation) │ │
│ │ └────────┬────────┘ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Pipeline │ │ Coherent │ │
│ │ Parallel │ → │ State Transfer │ │
│ │ (stage overlap) │ │ (MW-optical) │ │
│ └────────┬────────┘ └────────┬────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Expert Parallel │ │ Quantum MoE │ │
│ │ (MoE routing) │ → │ (superposition │ │
│ └────────┬────────┘ │ of experts) │ │
│ │ └────────┬────────┘ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ AllReduce │ │ Distributed │ │
│ │ Gradients │ → │ Measurement & │ │
│ │ (DualPipe) │ │ Classical Sync │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────────────┤
│ KEY DIFFERENCE: Quantum cannot copy states—must use entanglement + teleport │
└─────────────────────────────────────────────────────────────────────────────────┘
Reference: Orca Computing (2024) - Demonstrated hybrid quantum-classical ML with significant energy advantages.
>5× Energy Reduction
on automotive ML workloads using Orca PT-1 photonic processor
Architecture:
Why Energy Efficient:
Quantum Computing Inc (QCI) demonstrates another hybrid approach using photonic reservoir computing for time-series prediction—directly applicable to LIDAR trajectory forecasting.
Reservoir Computing Principles:
Academic References:
| Approach | Hardware | Task | Demonstrated Advantage |
|---|---|---|---|
| Orca + Toyota | PT-1 Photonic | Classification | >5× energy reduction |
| QCI Reservoir | EmuCore Photonic | Time-series | Real-time trajectory prediction |
| Teraq Q-ViT | MPS-based | Image classification | 50× parameter reduction |
| Combined Stack | VIO + Photonic | LIDAR processing | Target: 10× throughput, 5× energy |
Two fundamentally different paradigms exist for quantum machine learning acceleration. Understanding their differences is critical for choosing the right architecture for specific ML workloads and porting hybrid models effectively.
| Aspect | Circuit-Based (Gate Model) | Reservoir Computing |
|---|---|---|
| Core Principle | Sequence of discrete unitary gates applied to qubits | Fixed nonlinear dynamical system with trainable readout |
| Trainable Parameters | Gate rotation angles (θ, φ) throughout circuit | Only output layer weights (linear regression) |
| Training Method | Variational (parameter shift rule, gradient descent) | Ridge regression on reservoir states |
| Hardware Control | Precise pulse sequences, error correction | Minimal control—exploits natural dynamics |
| Coherence Requirements | High (must maintain during full circuit) | Lower (can exploit decoherence as feature) |
| Best For | Classification, optimization, structured problems | Time-series, temporal patterns, chaotic systems |
| Example Systems | Teraq Q-ViT, Orca GBS, IonQ, Quantinuum | QCI EmuCore, photonic delay loops |
|ψ⟩ → U₁(θ₁) → U₂(θ₂) → ... → Uₙ(θₙ) → Measure
How It Works:
Advantages:
Input → [Fixed Nonlinear Dynamics] → Readout(W) → Output
How It Works:
Advantages:
The two paradigms offer fundamentally different speedup profiles. Circuit-based approaches target exponential speedups for specific problem classes, while reservoir computing offers consistent polynomial speedups with simpler implementation.
| Metric | Circuit-Based | Reservoir Computing |
|---|---|---|
| Inference Speedup (Theoretical) | Exponential for specific structured problems; quadratic for unstructured search (Grover, O(√N)) | Polynomial: O(N) → O(log N) for temporal tasks |
| Training Speedup | Limited by barren plateaus; O(2ⁿ) gradient evaluations possible | O(N²) → O(N) ridge regression only |
| Photonic Processing Rate | ~MHz (limited by measurement) | ~GHz - THz (speed of light) |
| Memory Capacity | 2ⁿ amplitudes in n qubits | Fading memory (~10-100 time steps) |
| Practical Speedup (Current) | 1-10× (NISQ limitations) | 10-1000× for time-series |
| Energy Efficiency | Cryogenic overhead (superconducting) | >5× demonstrated (Orca/Toyota) |
Key Insight: Reservoir computing provides more consistent speedups for temporal/sequential tasks due to natural dynamics at optical speeds, while circuit-based approaches offer potential exponential speedups but face practical challenges (barren plateaus, decoherence, error correction overhead).
Summary from Literature: Quantum reservoir computing shows 10-1000× speedups for time-series prediction with current hardware (Martínez-Peña 2021). Circuit-based approaches face barren plateau challenges (McClean 2018), but these can be mitigated through layer-wise training, local cost functions, and structured ansätze. For LIDAR: reservoir for trajectory prediction; circuit-based Q-ViT for spatial features.
The barren plateau problem is not insurmountable. Several proven strategies exist:
Reference: Cerezo et al., "Cost function dependent barren plateaus," Nature Communications 12, 1791 (2021). DOI: 10.1038/s41467-021-21728-w
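The parameter-shift rule referenced above is exact for gates whose generator has eigenvalues ±1/2, such as RY. A single-qubit sketch, using the closed form ⟨Z⟩ = cos θ in place of hardware shot estimates:

```python
import numpy as np

def expval_z(theta):
    """<Z> after RY(theta)|0> -- cos(theta) in closed form.
    On hardware this would be estimated from measurement shots."""
    return np.cos(theta)

def parameter_shift(f, theta):
    """Exact gradient via the parameter-shift rule (shift of pi/2)."""
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.7
grad = parameter_shift(expval_z, theta)
print(np.isclose(grad, -np.sin(theta)))  # matches the analytic derivative
```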
When porting classical ML models to quantum-accelerated architectures, the choice between circuit-based and reservoir computing depends on the model structure and data characteristics.
| Classical Model | Circuit-Based Porting | Reservoir Porting | Recommendation |
|---|---|---|---|
| Vision Transformer (ViT) | Q-ViT: MPS attention layers, variational encoding | Image patches → reservoir → linear classifier | Circuit (attention structure) |
| LSTM / GRU | Variational quantum RNN, difficult gradient flow | Natural fit: temporal dynamics in reservoir | Reservoir (temporal) |
| CNN (ResNet, DenseNet) | Quantum convolution kernels, entangling layers | Spatial features → reservoir readout | Circuit (spatial hierarchy) |
| Transformer (LLM) | Quantum attention, MPS-based token mixing | Token embeddings → reservoir → output | Hybrid (circuit attention + reservoir memory) |
| Time-Series (ARIMA, Prophet) | Limited benefit, not naturally suited | Echo state network replacement | Reservoir (natural fit) |
| GNN (Graph Neural Network) | Quantum walks, graph encoding in circuits | Graph structure in reservoir connectivity | Circuit (graph structure) |
| LIDAR Point Cloud | Q-ViT for spatial features, object detection | Trajectory prediction, temporal tracking | Hybrid: Circuit (spatial) + Reservoir (temporal) |
# Classical ViT → Q-ViT Porting
1. EMBEDDING LAYER
Classical: Linear(768, 768)
Quantum: Amplitude encoding (log₂(768) qubits)
2. ATTENTION MECHANISM
Classical: QKV projection + softmax
Quantum: MPS tensor contraction
Bond dim χ controls expressivity
3. MLP LAYERS
Classical: Linear → GELU → Linear
Quantum: Variational circuit (RY, RZ, CNOT)
Trainable rotation angles
4. OUTPUT
Classical: Linear classifier
Quantum: Measurement → classical softmax
# Classical LSTM → Quantum Reservoir
1. INPUT ENCODING
Classical: Embedding lookup
Quantum: Modulate laser intensity/phase
2. RECURRENT PROCESSING
Classical: Gates (forget, input, output)
Quantum: Fixed photonic cavity dynamics
Natural nonlinearity + memory
3. STATE EXTRACTION
Classical: Hidden state h_t
Quantum: Sample reservoir at times
[t, t+Δ, t+2Δ, ...]
4. OUTPUT
Classical: Dense layer + softmax
Quantum: W·reservoir_states
(train W only via ridge regression)
Key Insight: For LIDAR processing, use circuit-based Q-ViT for per-frame spatial feature extraction and reservoir computing for multi-frame trajectory prediction. This hybrid approach leverages the strengths of both paradigms.
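The reservoir side of this split is easy to emulate classically. A minimal echo-state sketch in NumPy, with a toy sine input standing in for trajectory features: the reservoir weights stay fixed, and only the linear readout W is fit by ridge regression, exactly as in the LSTM-to-reservoir outline above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 100, 500

# Fixed random reservoir (never trained) -- the classical stand-in
# for the photonic cavity's natural dynamics.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state condition

u = np.sin(0.2 * np.arange(T + 1))               # toy input sequence
x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])       # fixed nonlinear update
    states.append(x.copy())
X = np.array(states)
y = u[1 : T + 1]                                  # one-step-ahead target

# The only trained component: ridge regression on reservoir states.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
```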
┌─────────────────────────────────────────────────────────────────────────────────────────┐
│ HYBRID QUANTUM ML FOR LIDAR PROCESSING │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ LIDAR Point Cloud (Frame t) │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ CIRCUIT-BASED: Q-ViT Feature Extraction │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Amplitude │ → │ MPS Attention │ → │ Variational │ → Feature Vector │ │
│ │ │ Encoding │ │ (χ = 64) │ │ Classifier │ f_t ∈ ℝ^256 │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ │ Hardware: IonQ, Quantinuum, Orca GBS, QuantWare VIO │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ │ f_t (per-frame features) │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ RESERVOIR-BASED: Temporal Trajectory Prediction │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Feature │ → │ Photonic │ → │ Linear │ → Trajectory │ │
│ │ │ Injection │ │ Reservoir │ │ Readout (W) │ Prediction │ │
│ │ │ (optical) │ │ (fixed H) │ │ (trained) │ [x,y,z,v]_{t+1} │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ │ Hardware: QCI EmuCore, Photonic delay lines, Room temperature │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ OUTPUT: Object Detection + Trajectory + Collision Risk │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ PERFORMANCE TARGETS: │
│ • Spatial (Q-ViT): 30 Hz frame rate, <33 ms latency, 50× param reduction │
│ • Temporal (Reservoir): Multi-frame tracking, 100+ object trajectories │
│ • Combined: Real-time autonomous driving inference │
└─────────────────────────────────────────────────────────────────────────────────────────┘
Processing THz LIDAR datasets requires quantum resources scaled to the sensor resolution and data rates. Higher resolution (1mm) generates significantly more data, requiring proportionally more qubits for real-time quantum ML inference.
| Parameter | Ultra-High-Res (1mm) | Automotive (10mm) |
|---|---|---|
| Points per Frame | 1,000,000 pts | 100,000 pts |
| Frame Rate | 30 Hz | 10 Hz |
| Data Rate | ~30 GB/s | ~500 MB/s |
| Q-ViT Input Qubits | 20 qubits (2²⁰ = 1M) | 17 qubits (2¹⁷ ≈ 131k) |
| MPS Bond Dimension | χ = 64-128 | χ = 32-64 |
| Total Circuit Qubits | 50-100 logical | 30-50 logical |
| Inference Latency Target | <33 ms (30 Hz) | <100 ms (10 Hz) |
| Processing Stage | 1mm Resolution | 10mm Resolution | Purpose |
|---|---|---|---|
| Feature Extraction (Q-ViT) | 50-100 qubits | 30-50 qubits | Point cloud encoding & attention |
| Object Detection | 20-40 qubits | 15-25 qubits | Bounding box regression |
| Trajectory Prediction | 30-60 qubits | 20-40 qubits | Temporal sequence modeling |
| Sensor Fusion | 40-80 qubits | 25-50 qubits | Multi-sensor integration |
| Full Pipeline | 150-300 logical qubits | 100-200 logical qubits | End-to-end LIDAR processing |
Physical Qubits: With surface code error correction (distance d=7), each logical qubit requires ~98 physical qubits. Full 1mm pipeline: 15,000-30,000 physical qubits | Full 10mm pipeline: 10,000-20,000 physical qubits
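The physical-qubit figures follow from the standard surface-code overhead. The text's ~98 per logical qubit corresponds to a rotated surface code at d=7, which uses 2d² − 1 = 97 qubits (conventions vary slightly between code layouts):

```python
def physical_per_logical(d):
    """Rotated surface code at distance d: d^2 data + d^2 - 1 ancilla qubits."""
    return 2 * d * d - 1

d = 7
per_logical = physical_per_logical(d)                  # 97 (~98 in the text)
pipeline_1mm = (150 * per_logical, 300 * per_logical)  # ~15,000-30,000
pipeline_10mm = (100 * per_logical, 200 * per_logical) # ~10,000-20,000
print(per_logical, pipeline_1mm, pipeline_10mm)
```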
Training quantum ML models on THz LIDAR datasets requires additional qubits for gradient computation and parameter optimization.
| Training Phase | 1mm Resolution | 10mm Resolution | Notes |
|---|---|---|---|
| Forward Pass | 150-300 qubits | 100-200 qubits | Same as inference |
| Gradient Estimation | +50-100 qubits | +30-60 qubits | Parameter shift rule |
| QAOA Optimization | +100-200 qubits | +50-100 qubits | Hyperparameter search |
| Total Training | 300-600 logical qubits | 200-400 logical qubits | Full training pipeline |
Training Timeline: With QuantWare VIO-40K (10,000+ qubits by 2028), full 1mm THz LIDAR model training becomes feasible. Current VIO-400 (2026) enables 10mm automotive model training and 1mm inference-only deployments.
Recent work on Random Circuit Sampling (RCS) quantum advantage provides crucial insights for validating quantum-enhanced LIDAR systems. The noise thresholds and verification methodologies developed for computational quantum advantage directly inform how we assess metrological quantum advantage in sensing applications.
A critical insight from RCS experiments: quantum advantage exhibits a sharp phase transition based on noise levels. In the weak-noise regime (ε < cA/n), quantum advantage is robust and verifiable. In the strong-noise regime, classical "spoofers" can fake quantum performance.
| RCS Concept | LIDAR Equivalent | Implication for Quantum LIDAR |
|---|---|---|
| Fidelity F(C) | Squeezed state purity | Must maintain high-purity squeezed states through optical path |
| XEB proxy | SNR improvement ratio | Measured SNR gain must track actual QFI enhancement |
| Weak-noise regime | Low-loss optical system | Total loss must be below squeezing threshold (~3 dB) |
| Phase transition | Classical-quantum crossover | Sharp threshold where squeezed light outperforms classical |
| Spoofers | Classical post-processing | Must verify quantum source, not just final SNR numbers |
| Multiple extrapolations | Independent verification tests | Multiple measurement methods should converge on same QFI |
The DTU quantum LIDAR experiment should be assessed using the same rigor applied to RCS quantum advantage claims:
Key Insight: Just as RCS experiments maintain ~0.1% fidelity as qubit count increases, quantum LIDAR must demonstrate that QFI enhancement persists as system complexity scales (more modes, longer range, higher data rates). The DTU protocol's multimode scaling is analogous to RCS's qubit scaling—both must stay in the "weak-noise regime" to claim genuine quantum advantage.
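The ~3 dB loss threshold in the table follows from the standard lossy-channel model for squeezed light: a channel with transmission η mixes in vacuum noise, so the output variance is V_out = ηV_in + (1 − η) (variances relative to vacuum). A quick sketch of how fast squeezing degrades:

```python
import math

def squeezing_after_loss(s_db, transmission):
    """Effective squeezing (dB) after a lossy channel.
    V_out = eta * V_in + (1 - eta), variances relative to vacuum."""
    v_in = 10 ** (-s_db / 10)
    v_out = transmission * v_in + (1 - transmission)
    return -10 * math.log10(v_out)

# Even 10 dB at the source survives 3 dB of optical loss (eta = 0.5)
# as only ~2.6 dB of effective squeezing:
print(round(squeezing_after_loss(10.0, 0.5), 1))
```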
Achieving TB/s physical AI performance requires distributed quantum computing across multiple processors connected via quantum networks. This section documents the superconducting qubit foundry roadmap (QuantWare VIO) and quantum networking protocols (QBlox) needed for city-scale quantum LIDAR systems.
| Generation | Control Lines | Ships | Key Features | Network Capability |
|---|---|---|---|---|
| Planar | 176 | Now | Proven fabrication, T₁ > 100 µs | 50-100 qubits/processor |
| VIO-400 | 400 | 2026 | Enhanced routing, lower crosstalk | Quantum LAN nodes (5-10 processors) |
| VIO-1K | 1,000 | 2027 | 3D architecture, negligible kinetic inductance | 500-1,000 qubits (modular) |
| VIO-40K | 40,000 | 2028 | Full-scale quantum networking | 10,000-20,000 qubits |
Solution: Electro-optomechanical transduction
Microwave (GHz) → Phonon (MHz) → Optical (THz)
[piezoelectric] [radiation pressure]
| Metric | Current (2025) | Needed for VIO-40K | Reference |
|---|---|---|---|
| Conversion efficiency | 25-50% | >80% | Caltech/Chicago arXiv:2404.10358 |
| Added noise (photons) | N_add ~ 5-10 | N_add < 1 | SRF cavity platforms |
| Bandwidth | 1-10 MHz | >100 MHz | Piezo-optomechanical |
| Transduction fidelity | 75-85% | >95% | Harvard/Riverlane arXiv:2310.16155 |
| Bell pair generation rate | 1-10 kHz | 100 kHz - 1 MHz | Quantum network requirement |
| Link distance | 1-10 km | 100-1000 km | City-scale networks |
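The Bell-pair-rate and link-distance rows are coupled through fiber attenuation. An idealized link budget, assuming the standard 0.2 dB/km figure for 1550 nm telecom fiber (detector efficiency and the transduction fidelity from the table would multiply in on top):

```python
def fiber_transmission(length_km, atten_db_per_km=0.2):
    """Photon survival probability through telecom fiber at 1550 nm."""
    return 10 ** (-atten_db_per_km * length_km / 10)

def heralded_bell_rate(attempt_rate_hz, length_km):
    """Idealized entanglement rate: attempts x photon survival
    (ignores detection efficiency and transducer fidelity)."""
    return attempt_rate_hz * fiber_transmission(length_km)

# 1 MHz attempt rate over 100 km -> 10 kHz; over 500 km the
# exponential loss makes quantum repeaters unavoidable.
print(f"{heralded_bell_rate(1e6, 100):.0f} Hz")
print(f"{heralded_bell_rate(1e6, 500):.1e} Hz")
```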
Reference: QBlox provides scalable quantum control electronics for superconducting qubit systems, enabling precise pulse generation, readout, and real-time feedback essential for quantum networking.
| Product | Description | Key Specs | Network Role |
|---|---|---|---|
| Cluster | Modular control system | Up to 20 modules/cluster, scalable to 1000s of qubits | Central control for QPU nodes |
| QCM (Control Module) | Arbitrary waveform generator | 1 GSPS, 16-bit, 4 output channels | Qubit drive pulses |
| QRM (Readout Module) | Signal acquisition & processing | 1 GSPS ADC, real-time demodulation, 2 I/O channels | Qubit state readout |
| QCM-RF | RF control module | 2-18 GHz, integrated IQ mixer, 2 RF outputs | Direct microwave control |
| QRM-RF | RF readout module | 2-18 GHz receive, low noise, 1 RF I/O | Dispersive readout |
| SPI Rack | DC bias & flux control | Ultra-low noise DACs, 16+ channels/module | Flux tuning for networking |
Timing & Synchronization:
Network-Critical Features:
Reference: SeQUeNCe (Simulator of QUantum Network Communication) - Open-source quantum network simulator developed by Argonne National Laboratory, compatible with QBlox hardware simulation.
SeQUeNCe Capabilities:
Integration with QBlox:
Citation: X. Wu et al., "SeQUeNCe: A Customizable Discrete-Event Simulator of Quantum Networks," Quantum Science and Technology 6, 045027 (2021). DOI: 10.1088/2058-9565/ac22f6
Advantages over polarization:
Critical 2024 Result:
Demonstrated entanglement source interfacing telecom wavelength time-bin qubits with GHz superconducting qubits via dual-rail encoding in piezo-optomechanical transducers.
┌─────────────────────────────────────────────────────────────────────────────────────────┐
│ QUANTUM NETWORK CONTROL ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ APPLICATION LAYER │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ SeQUeNCe (ORNL) │ Quantify-scheduler │ Custom Q1ASM Programs │ │
│ │ Network simulation │ Pulse optimization │ Real-time sequences │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ CONTROL LAYER (QBlox Cluster) │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ QCM-RF │ │ QRM-RF │ │ QCM │ │ SPI Rack │ │
│ │ Qubit drive │ │ Readout │ │ Baseband │ │ DC bias │ │
│ │ 2-18 GHz │ │ 2-18 GHz │ │ control │ │ Flux tune │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │ │
│ └─────────────────┴─────────────────┴─────────────────┘ │
│ │ │
│ ┌──────────────┴──────────────┐ │
│ │ Sync Distribution (PPS) │ │
│ │ <100 ps cluster-wide │ │
│ └──────────────┬──────────────┘ │
│ │ │
│ HARDWARE LAYER ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ QuantWare VIO Processor │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Data Qubits │ │ Ancilla │ │ Interface │ │ Transducers │ │ │
│ │ │ (compute) │ │ (QEC) │ │ (network) │ │ (MW↔optical)│ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ └──────┬──────┘ │ │
│ └─────────────────────────────────────────────────────────────┼────────────────────┘ │
│ │ │
│ NETWORK LAYER ▼ │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ Optical Fiber Network (1550 nm) │ │
│ │ Time-bin encoded qubits, Bell pair distribution │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────────────────┘
Information density per photon:
| Encoding | Channel Capacity |
|---|---|
| Qubit (d=2) | log₂(2) = 1 bit/transmission |
| Qutrit (d=3) | log₂(3) ≈ 1.58 bits/transmission |
| Ququart (d=4) | log₂(4) = 2 bits/transmission |
Transmon qudit capabilities:
┌─────────────────────────────────────────────────────────────────────────────────────────┐
│ DISTRIBUTED QUANTUM LIDAR PROCESSING ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ QuantWare VIO-40K Processor QuantWare VIO-40K Processor │
│ (40,000 control lines → 10,000 qubits) (40,000 control lines → 10,000 qubits) │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ Interface Qubits │ │ Interface Qubits │ │
│ │ (dedicated for │ │ (dedicated for │ │
│ │ networking) │ │ networking) │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ Microwave Resonators│ │ Microwave Resonators│ │
│ │ (time-bin encoding) │ │ (time-bin encoding) │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ Electro-opto- │ │ Electro-opto- │ │
│ │ mechanical │ │ mechanical │ │
│ │ Transducers │ │ Transducers │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
│ │ │ │
│ └───────────────────┬───────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────────────────────────┐ │
│ │ OPTICAL FIBER NETWORK │ │
│ │ (Room Temperature, 100-1000 km) │ │
│ │ • Time-bin encoded photons │ │
│ │ • Telecom wavelength (1550 nm) │ │
│ │ • Existing infrastructure │ │
│ └───────────────────────────────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ APPLICATIONS FOR PHYSICAL AI: │
│ │
│ • City-scale autonomous vehicle coordination (100+ vehicles) │
│ • Distributed LIDAR point cloud processing (TB/s data rates) │
│ • Real-time sensor fusion across edge nodes │
│ • Federated quantum machine learning for fleet optimization │
│ │
└─────────────────────────────────────────────────────────────────────────────────────────┘
Key figures:

- 10,000+ qubits per node
- 1,000 km distribution
- 2× channel capacity
- 1 MHz generation rate
Combined capability: Distribute high-dimensional entangled states between 10,000-qubit processors over 100-1,000 km using existing telecom infrastructure, supporting 100+ logical qubits in surface-code processors for city-scale quantum LIDAR processing.
In classical machine learning, we take the "orchestration layer"—MLOps, reproducible environments, experiment tracking, and hardware-aware baselines—for granted. In quantum computing, this layer is conspicuously absent, yet the need for it is acute. The following methodologies address the critical gap between functional prototypes and benchmarked solutions for hybrid quantum-classical ML optimization.
The moment you switch backends (e.g., from a local statevector simulator to a QPU), you do not just change the execution speed—you often change the fundamental optimization landscape. This requires a distinct orchestration strategy to manage metadata, noise profiles, and error-mitigation pipeline reproducibility.
These methodologies were validated at the City of London Quantum Hackathon and the Bradford Quantum Hackathon.
Key Insight: Moving to hardware (or realistic noise models) forces fallback to sampling-based techniques like Sample-Based Quantum Diagonalization (SQD). This is not merely a hyperparameter change—it requires completely different orchestration strategies.
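A sketch of what that orchestration branch might look like in code. The backend tags and strategy names here are illustrative assumptions, not a real qhybrid or Qiskit API:

```python
def select_optimizer_strategy(backend_name: str) -> str:
    """Pick an optimization strategy based on the target backend.

    Exact-statevector simulators admit analytic gradients; hardware
    (or realistic noise models) forces a fallback to sampling-based
    techniques such as SQD. The tags below are assumptions for the
    sketch, not an established naming convention.
    """
    exact_tags = ("statevector", "cuquantum")
    if any(tag in backend_name for tag in exact_tags):
        return "analytic-gradient"
    # Hardware or noisy simulation: sampling-based diagonalization
    return "sampling-sqd"

# The same circuit routed through different backends gets a
# fundamentally different optimization pipeline:
print(select_optimizer_strategy("aer_simulator_statevector"))  # analytic-gradient
print(select_optimizer_strategy("ibm_brisbane"))               # sampling-sqd
```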
To address this orchestration gap, qhybrid is a Rust-accelerated toolkit that bridges fast experimentation and rigorous benchmarking. Benchmarking analysis is currently underway on H200 GPUs and QPUs.
| Benchmark Target | Focus Area | Key Metrics |
|---|---|---|
| qhybrid Rust Kernels | Python-Rust interop overhead | FFI latency, memory bandwidth |
| Qiskit Aer (GPU) | cuStateVec scaling efficiency | 30+ qubit circuits, NVIDIA cuQuantum |
| H200 GPU Performance | Statevector simulation throughput | TFLOPS, memory utilization |
| IBM QPU Backends | Hardware execution fidelity | Gate error rates, decoherence |
| tket Compilation | Optimization pass overhead | Compilation time vs circuit depth/2Q gates |
Just as we track FLOPs and memory bandwidth in HPC, we must rigorously track circuit depth, shot counts, and backend-specific transpilation latencies in a unified metadata schema.
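A minimal standard-library sketch of one record in such a unified schema, with a wrapper that logs transpilation latency alongside circuit metrics. The field names are illustrative assumptions chosen to mirror the schema diagram, not an existing qhybrid data model:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class CircuitRunRecord:
    """One row of the unified metadata schema (illustrative fields)."""
    experiment_id: str
    backend: str
    depth: int
    two_qubit_gates: int
    shots: int
    transpile_ms: float

def timed_transpile(record_kwargs, transpile_fn, circuit):
    """Wrap any transpilation callable and log its latency into the record."""
    start = time.perf_counter()
    out = transpile_fn(circuit)
    elapsed_ms = (time.perf_counter() - start) * 1e3
    record = CircuitRunRecord(transpile_ms=elapsed_ms, **record_kwargs)
    print(json.dumps(asdict(record)))
    return out, record

# Usage with a stand-in transpiler (identity pass, for the sketch):
_, rec = timed_transpile(
    dict(experiment_id="qvit-lidar-2024-01", backend="ibm_brisbane",
         depth=45, two_qubit_gates=128, shots=8192),
    lambda c: c,
    circuit="placeholder",
)
```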
Ecosystem Recognition: This gap is being recognized across the industry. Anastasia Marchenkova (Marqov) is building similar orchestration tools. The next leap in QC utility will come not just from better qubits, but from better tooling to manage the complexity of hybrid workflows.
┌─────────────────────────────────────────────────────────────────────────────────────────┐
│ QUANTUM MLOPS ORCHESTRATION SCHEMA │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ EXPERIMENT TRACKING │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ experiment_id: "qvit-lidar-2024-01" │ │
│ │ ├── backend: "ibm_brisbane" | "aer_simulator_statevector" | "cuquantum_h200" │ │
│ │ ├── noise_profile: { t1: 150µs, t2: 80µs, readout_error: 0.02 } │ │
│ │ └── connectivity: "heavy_hex_127q" │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ CIRCUIT METADATA │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ circuit_metrics: │ │
│ │ ├── depth: 45 ├── 2q_gates: 128 ├── t_count: 0 │ │
│ │ ├── width: 16 qubits ├── cx_count: 128 ├── shots: 8192 │ │
│ │ └── parameters: 256 └── measurement_count: 16 │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ TRANSPILATION LOG │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ passes: [ │ │
│ │ { "tket.FullPeepholeOptimise": { time_ms: 234, depth_reduction: 15% } }, │ │
│ │ { "qiskit.SabreLayout": { time_ms: 456, swap_overhead: 1.3x } }, │ │
│ │ { "routing": { algorithm: "sabre", added_swaps: 42 } } │ │
│ │ ] │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ERROR MITIGATION CONFIG │
│ ┌─────────────────────────────────────────────────────────────────────────────────┐ │
│ │ mitigation: { │ │
│ │ method: "ZNE" | "PEC" | "M3", │ │
│ │ noise_factors: [1.0, 1.5, 2.0, 3.0], │ │
│ │ extrapolation: "Richardson", │ │
│ │ sampling_overhead: 3.2x │ │
│ │ } │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────────────────┘
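The ZNE block in the schema above can be made concrete: given expectation values measured at the listed noise factors, Richardson extrapolation is polynomial interpolation evaluated at zero noise amplification. A self-contained sketch (the synthetic linear noise model is an assumption for illustration):

```python
def richardson_extrapolate(noise_factors, expectations):
    """Zero-noise extrapolation via Lagrange interpolation at lambda = 0.

    Fits the unique degree-(n-1) polynomial through the measured
    (noise_factor, expectation) points and evaluates it at zero noise.
    """
    estimate = 0.0
    for i, (xi, yi) in enumerate(zip(noise_factors, expectations)):
        weight = 1.0
        for j, xj in enumerate(noise_factors):
            if j != i:
                weight *= (0.0 - xj) / (xi - xj)
        estimate += weight * yi
    return estimate

# Synthetic check: if noise depresses <Z> linearly as 1 - 0.1*lambda,
# the zero-noise limit recovers 1.0 from the config's noise factors.
factors = [1.0, 1.5, 2.0, 3.0]
values = [1 - 0.1 * f for f in factors]
print(round(richardson_extrapolate(factors, values), 6))  # 1.0
```

Higher-order Richardson fits amplify shot noise as well as signal, which is why the schema tracks the `sampling_overhead` alongside the extrapolation method.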
Based on hackathon validation and benchmarking experience, the following methodologies are prime candidates for optimization within the Teraq distributed quantum ML framework:
| Methodology | Best For | Backend Sensitivity | MLOps Priority |
|---|---|---|---|
| QCBM | Time-series, generative | High (gradient → sampling) | Critical |
| Quixer (Q-Transformer) | Sequence prediction | High (attention circuits deep) | Critical |
| Q-ViT (MPS) | Image classification | Medium (shallow circuits) | High |
| QD-HMC | Bayesian inference | High (MCMC → SQD) | Critical |
| VQE/QAOA | Optimization | Very High (barren plateaus) | Critical |
| Quantum Reservoir | Temporal data | Low (fixed dynamics) | Medium |