Compute Resources

Overview of EC2 instances, quantum computing backends, and infrastructure resources

EC2 Instances

Teraq Backend

Status: Running
Instance Type: t3.2xlarge
Hostname: ec2-13-223-206-81.compute-1.amazonaws.com
IP Address: 13.223.206.81
vCPUs: < 40
SSH Access: ssh -i Teraq.pem ec2-user@ec2-13-223-206-81.compute-1.amazonaws.com
API Server Port: 8000
Purpose: Classical & Quantum Training
Services: QTinyLlama, QRoBERTa, and QVIT training APIs; PostgreSQL
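
A minimal sketch of calling the training API server on port 8000 from a workstation. The /health route is a hypothetical placeholder, since the concrete endpoints exposed by the QTinyLlama/QRoBERTa/QVIT training APIs are not listed here.

    import requests

    # Base URL for the Teraq Backend API server (port 8000, see card above).
    TERAQ_API = "http://13.223.206.81:8000"

    # Hypothetical health-check route; substitute a real route exposed by the
    # training APIs before relying on this.
    resp = requests.get(f"{TERAQ_API}/health", timeout=10)
    resp.raise_for_status()
    print(resp.status_code, resp.text)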

48vCPU Training Instance

Status: Running (Training Active)
Hostname: ec2-204-236-243-64.compute-1.amazonaws.com
IP Address: 204.236.243.64
vCPUs: 48
Current Process: train_tinyllama_48vcpu.py
Note: Do not run additional training jobs on this instance.
Usage: Large-scale training of TinyLlama-1.1B (medical domain)
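
Before touching this instance, it helps to confirm the training process is still running. The sketch below does that over SSH; it assumes the same Teraq.pem key used by the other instances, since no key is documented for this host.

    import subprocess

    # Hypothetical check: is train_tinyllama_48vcpu.py still running on the
    # 48 vCPU instance? The Teraq.pem key path is an assumption; adjust as needed.
    host = "ec2-user@ec2-204-236-243-64.compute-1.amazonaws.com"
    cmd = ["ssh", "-i", "Teraq.pem", host, "pgrep -af train_tinyllama_48vcpu.py"]

    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.stdout.strip():
        print("Training still active -- do not start another job:")
        print(result.stdout)
    else:
        print("No training process found.")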

Summit Backend Models

Status: Running (EC2)
Hostname: ec2-34-204-191-199.compute-1.amazonaws.com
IP Address: 34.204.191.199
SSH Access: ssh -i Teraq.pem ec2-user@ec2-34-204-191-199.compute-1.amazonaws.com
Storage: 100 GB (56 GB used, 45 GB available)
Purpose: Model Inference & Training
Services: TinyLlama API (Port 8100), QRoBERTa API (Port 8101), Model Training, Model Storage
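
A hedged example of querying the two inference APIs on this instance. The /generate and /classify paths and the JSON payloads are assumptions for illustration, not documented routes.

    import requests

    SUMMIT = "http://34.204.191.199"

    # Hypothetical routes and payloads -- replace with the real ones exposed
    # by the TinyLlama (8100) and QRoBERTa (8101) services.
    tinyllama = requests.post(f"{SUMMIT}:8100/generate",
                              json={"prompt": "Summarize the patient note:"}, timeout=60)
    qroberta = requests.post(f"{SUMMIT}:8101/classify",
                             json={"text": "Patient reports chest pain."}, timeout=60)

    print(tinyllama.json())
    print(qroberta.json())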

Forms API Server

Status: Running (EC2)
Hostname: ec2-18-205-155-235.compute-1.amazonaws.com
IP Address: 18.205.155.235
API Port: 8600
API Endpoint: http://18.205.155.235:8600
Purpose: Forms Processing & Batch Management
Services: FastAPI Forms Service, PostgreSQL, Batch Review, Training Logs
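
A minimal sketch of talking to the Forms API. The /batches route is hypothetical; only the base endpoint (http://18.205.155.235:8600) is documented, so check the FastAPI /docs page for the real paths.

    import requests

    FORMS_API = "http://18.205.155.235:8600"

    # Hypothetical route for listing batches under review; substitute the real
    # path from the FastAPI service's /docs page.
    resp = requests.get(f"{FORMS_API}/batches", timeout=10)
    resp.raise_for_status()
    for batch in resp.json():
        print(batch)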

Teraq Backend (Alternative)

Status: Running (EC2)
Hostname: ec2-54-242-218-168.compute-1.amazonaws.com
IP Address: 54.242.218.168
Purpose: Reference Architecture
Services: Quantum Pipelines, Training Templates, PostgreSQL + pgvector

Quantum Computing Resources

Qiskit Quantum Simulators

QASM Simulator
Type: Quantum Assembly Language Simulator
Usage: General-purpose quantum circuit simulation
Qubits: Up to ~30 (simulated)
Access: from qiskit import Aer; backend = Aer.get_backend('qasm_simulator')
Applications: QTinyLlama quantum circuits, QRoBERTa MPS layers, general VQC training
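
A small, self-contained circuit run on the QASM simulator. Note that on Qiskit 1.x the Aer provider is imported from qiskit_aer rather than qiskit, so adjust the import to match the installed version.

    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import Aer  # on older Qiskit versions: from qiskit import Aer

    backend = Aer.get_backend('qasm_simulator')

    # 2-qubit Bell-state circuit with measurement, sampled with shots.
    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])

    job = backend.run(transpile(qc, backend), shots=1024)
    print(job.result().get_counts())  # e.g. {'00': ~512, '11': ~512}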
Statevector Simulator
Type: Full statevector simulation
Usage: Exact quantum state evolution (no measurement noise)
Qubits: Up to 25-30 qubits (memory-limited)
Access: backend = Aer.get_backend('statevector_simulator')
Applications: Quantum gradient computation, exact expectation values, debugging quantum circuits
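
A sketch of exact statevector simulation, useful for debugging circuits and checking expectation values without shot noise (same version caveat on the Aer import as above).

    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import Aer

    backend = Aer.get_backend('statevector_simulator')

    # Bell-state circuit without measurement: the simulator returns exact amplitudes.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)

    state = backend.run(transpile(qc, backend)).result().get_statevector()
    print(state)  # amplitudes ~0.707 on |00> and |11>, exactly zero elsewhere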
Matrix Product State (MPS) Simulator
Type: Tensor network simulator (cost roughly linear in qubit count for a fixed bond dimension)
Usage: Efficient simulation of low-entanglement quantum states
Qubits: 100+ (depending on bond dimension)
Bond Dimension: Configurable (default: 32-64)
Access: backend = Aer.get_backend('matrix_product_state')
Applications: QRoBERTa quantum layers, QVIT quantum attention, large-scale quantum ML
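
A sketch of running the MPS method directly through AerSimulator with a capped bond dimension; the 32-64 default mentioned above corresponds to the matrix_product_state_max_bond_dimension option.

    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    # MPS backend with an explicit bond-dimension cap; low-entanglement circuits
    # with many qubits stay cheap as long as the bond dimension stays small.
    backend = AerSimulator(method='matrix_product_state',
                           matrix_product_state_max_bond_dimension=64)

    n = 50  # far beyond what a dense statevector could hold in memory
    qc = QuantumCircuit(n, n)
    qc.h(0)
    for i in range(n - 1):
        qc.cx(i, i + 1)          # GHZ chain: many qubits, low entanglement
    qc.measure(range(n), range(n))

    counts = backend.run(transpile(qc, backend), shots=256).result().get_counts()
    print(list(counts.items())[:2])  # all-zeros and all-ones bitstrings dominate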

Real Quantum Hardware (Available)

IBM Quantum Hardware
Provider: IBM Quantum Network
Available Systems: ibm_brisbane (127 qubits), ibm_kyoto (127 qubits), ibm_osaka (127 qubits)
Access: Requires IBM Quantum account and API token
Setup: from qiskit_ibm_runtime import QiskitRuntimeService; service = QiskitRuntimeService()
Queue Time: Minutes to hours (depending on system load)
Cost: $1-10 per job (varies by system)
Applications: Production quantum ML, real quantum advantage validation
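
A hedged setup sketch for the IBM systems listed above. It assumes an IBM Quantum account whose token was saved beforehand with QiskitRuntimeService.save_account, and the mode= keyword reflects recent qiskit-ibm-runtime releases; least_busy simply picks whichever real device currently has the shortest queue.

    from qiskit import QuantumCircuit, transpile
    from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

    # Assumes credentials were saved beforehand with
    # QiskitRuntimeService.save_account(channel="ibm_quantum", token="<API_TOKEN>")
    service = QiskitRuntimeService()
    backend = service.least_busy(operational=True, simulator=False)
    # or pick a specific system: backend = service.backend("ibm_brisbane")
    print("Selected:", backend.name)

    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure_all()

    sampler = Sampler(mode=backend)
    job = sampler.run([transpile(qc, backend)])
    print("Job ID:", job.job_id())  # results arrive after the queue, possibly hours later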
D-Wave Quantum Annealing
Provider: D-Wave Systems
Qubits: 5000+ qubits (Advantage systems)
Type: Quantum annealing (optimization-focused)
Access: pip install dwave-ocean-sdk
Best For: QUBO/QUBOOST optimization, combinatorial optimization
Speedup: ~100x on 100k-10M file-processing workloads
Applications: Q_Physical_AI models, large-scale optimization problems
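
A small QUBO sketch against a D-Wave Advantage system via the Ocean SDK. It assumes a Leap account with an API token already configured (e.g. via dwave config create); the toy QUBO only illustrates the submission pattern.

    from dwave.system import DWaveSampler, EmbeddingComposite

    # Toy QUBO: minimize x0 + x1 - 2*x0*x1 (ground states are 00 and 11).
    Q = {(0, 0): 1, (1, 1): 1, (0, 1): -2}

    # Requires a D-Wave Leap API token (configured with `dwave config create`).
    sampler = EmbeddingComposite(DWaveSampler())
    sampleset = sampler.sample_qubo(Q, num_reads=100)

    print(sampleset.first.sample, sampleset.first.energy)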

Quantum Backend Selection Guide

Backend | Best For | Qubits | Speed
QASM Simulator | General circuits, testing | ~30 | Medium
Statevector | Exact gradients, debugging | ~25 | Fast (small circuits)
MPS Simulator | QRoBERTa, QVIT, large circuits | 100+ | Very Fast
IBM Quantum | Real hardware, production | 127-433 | Queue-dependent
D-Wave | QUBO optimization | 5000+ | Very Fast
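
The guide above can be collapsed into a simple helper. The thresholds below are illustrative assumptions drawn from the table, not hard limits.

    def pick_backend(num_qubits: int, needs_exact_state: bool = False,
                     is_qubo: bool = False, needs_real_hardware: bool = False) -> str:
        """Rough backend choice following the selection guide above."""
        if is_qubo:
            return "dwave"                  # annealer for QUBO/combinatorial problems
        if needs_real_hardware:
            return "ibm_quantum"            # queue-dependent, 127+ qubit devices
        if needs_exact_state and num_qubits <= 25:
            return "statevector_simulator"  # exact amplitudes, no shot noise
        if num_qubits <= 30:
            return "qasm_simulator"         # general-purpose sampling
        return "matrix_product_state"       # large, low-entanglement circuits

    print(pick_backend(20, needs_exact_state=True))   # statevector_simulator
    print(pick_backend(80))                           # matrix_product_state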

Training Infrastructure

Quantum Training

QTinyLlama, QRoBERTa, and QVIT training pipelines run on the Teraq Backend instance (ec2-13-223-206-81), using Qiskit quantum simulators for circuit execution.

Classical Training

Large-scale classical training (TinyLlama-1.1B) runs on the 48 vCPU instance (ec2-204-236-243-64) for higher training throughput.

Data Storage

PostgreSQL databases on multiple instances store training data, batch information, and model metadata. The pgvector extension is used to store and query model embeddings.
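
A hedged sketch of the pgvector usage pattern for model embeddings. The table name, embedding dimension, and connection parameters are placeholders, since the actual schema and credentials are not documented in this section.

    import psycopg2

    # Placeholder connection parameters -- the real host, database, and
    # credentials are not documented here.
    conn = psycopg2.connect(host="localhost", dbname="training", user="postgres")
    cur = conn.cursor()

    # pgvector: enable the extension, define an embeddings table, and query by
    # L2 distance (<->). The 384-dim size is an illustrative placeholder.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS model_embeddings (
            id SERIAL PRIMARY KEY,
            model_name TEXT,
            embedding vector(384)
        );
    """)
    query_vec = "[" + ",".join(["0.0"] * 384) + "]"
    cur.execute(
        "SELECT model_name FROM model_embeddings "
        "ORDER BY embedding <-> %s::vector LIMIT 5;",
        (query_vec,),
    )
    print(cur.fetchall())
    conn.commit()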