Code Repositories

Overview of GitHub repositories for quantum training, vision models, and platform services

Frontend Repositories

Teraq Website

Production

Main frontend repository for the Teraq platform. Contains all public-facing HTML pages, serverless functions, and Vercel configuration. Auto-deploys to https://www.teraq.ai via Vercel.

Platform: Vercel
Deployment: Auto-deploy
Status: Active
Used For: Platform Dashboard, Training Management, API Documentation, Tutorials

Platform Web

Submodule

Frontend submodule containing platform web pages, forms review interfaces, ML training management, and analytics dashboards. Integrated as a git submodule in the main Teraq website repository.

Type: Git Submodule
Integration: Embedded
Status: Active
Used For: Forms Review, Batch Review, ML Training, Analytics

Backend Repositories

Teraq Backend

Production

Main backend repository for Teraq platform services. Contains quantum pipeline implementations, training APIs, model management, and PostgreSQL integration. Deployed on EC2 instances.

Deployment: EC2 Instances
Services: FastAPI
Database: PostgreSQL
Used For: QTinyLlama, QRoBERTa, QVIT, Training APIs, Model Management

Summit Backend Models

Production

Backend repository for Summit Health platform. Contains trained models, training scripts, quantum pipelines (QRoBERTa, QTinyLlama), and medical AI APIs. Includes model inference services and training orchestration.

Deployment: EC2 Instance
Services: FastAPI
Models: TinyLlama, QRoBERTa
Used For: QRoBERTa Pipeline, QTinyLlama Pipeline, Medical Training, Model Inference

Quantum Training Repositories

QTinyLlama

Quantum Distillation

Quantum knowledge distillation pipeline for compressing large language models. Uses a 29-qubit variational quantum circuit to achieve a 10x parameter reduction while retaining 90-96% of the teacher model's performance.

Architecture: 29-Qubit VQC
Compression: 10x (1.1B → 110M)
Retention: 90-96%
Key Files: train_1b_quantum_distillation.py, train_challenging.py, aws_launcher.py
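
The exact objective lives in train_1b_quantum_distillation.py. As rough orientation only, the sketch below shows the standard knowledge-distillation loss such a pipeline typically optimizes; the 29-qubit VQC student layers are elided, and any module producing logits can stand in for the quantum student.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        """Blend hard-label cross-entropy with a softened teacher match.

        Sketch of a generic distillation objective, not the repo's exact
        loss; the quantum (VQC) student layers are elided.
        """
        # Hard-label term: student vs. ground-truth labels
        hard = F.cross_entropy(student_logits, labels)
        # Soft-label term: KL against the teacher's softened distribution
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        return alpha * hard + (1 - alpha) * soft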

QRoBERTa

Quantum-Enhanced

Quantum-enhanced RoBERTa model with a Matrix Product State (MPS) quantum layer. Achieves a 32% parameter reduction (125M → 85M) while improving medical accuracy by 5 percentage points (89% → 94%).

Quantum Layer: MPS (bond dimension 32)
Compression: 768 → 256 dim
Accuracy: 94% (up from 89%)
Key Files: roberta_quantum_WORKING_MPS_fixed.py, configs/training_config.yaml, requirements.txt
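
The working layer lives in roberta_quantum_WORKING_MPS_fixed.py. The sketch below is a hypothetical tensor-train (MPS) stand-in, not the repository's implementation; it shows how a 768 → 256 projection can be factored through bond dimension 32, cutting that projection from roughly 197K dense parameters to under 70K.

    import torch
    import torch.nn as nn

    class MPSLinear(nn.Module):
        """Hypothetical MPS/tensor-train factorization of 768 -> 256.

        Three small cores chained through bond dimension 32 replace the
        dense weight matrix (8*8*12 input modes, 8*8*4 output modes).
        """
        def __init__(self, in_modes=(8, 8, 12), out_modes=(8, 8, 4), bond=32):
            super().__init__()
            self.in_modes = in_modes
            i1, i2, i3 = in_modes
            o1, o2, o3 = out_modes
            self.g1 = nn.Parameter(torch.randn(i1, o1, bond) * 0.1)
            self.g2 = nn.Parameter(torch.randn(bond, i2, o2, bond) * 0.1)
            self.g3 = nn.Parameter(torch.randn(bond, i3, o3) * 0.1)

        def forward(self, x):                       # x: (batch, 768)
            b = x.shape[0]
            x = x.view(b, *self.in_modes)           # (b, 8, 8, 12)
            # Contract input modes through the bond indices, core by core
            h = torch.einsum('bijk,iar->bjkar', x, self.g1)
            h = torch.einsum('bjkar,rjcs->bkacs', h, self.g2)
            h = torch.einsum('bkacs,skd->bacd', h, self.g3)
            return h.reshape(b, -1)                 # (b, 256)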

QVIT (Quantum Vision Transformer)

Vision Models

Quantum Vision Transformer with MPS data augmentation based on Aaronson's quantum supremacy approach. Integrates quantum attention mechanisms and quantum feed-forward networks for enhanced image processing.

Base Model: ViT-Base
Augmentation: MPS Tensor Network
Quantum Layers: Attention + FFN
Key Features: Quantum Attention, MPS Augmentation, Quantum FFN, Patch Encoding
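
As a structural sketch only, the block below shows where QVIT's quantum pieces would slot into a standard pre-norm ViT encoder block. The attention and FFN here are classical stand-ins, not the repository's quantum implementations; any module with matching shapes could replace them.

    import torch.nn as nn

    class QuantumViTBlock(nn.Module):
        """Pre-norm ViT encoder block with pluggable attention/FFN slots."""
        def __init__(self, dim=768, heads=12):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            # Stand-in slot for QVIT's quantum attention
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            # Stand-in slot for QVIT's quantum feed-forward network
            self.ffn = nn.Sequential(
                nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

        def forward(self, x):                       # x: (batch, patches, dim)
            h = self.norm1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            return x + self.ffn(self.norm2(x))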

Training & Infrastructure

Platform (Main Repository)

Development

Main development repository containing training scripts, evaluation tools, diagnostic scripts, deployment configurations, and platform infrastructure code. Includes both quantum and classical training implementations.

Type: Monorepo
Languages: Python, Shell
Status: Active Development
Contains: Training Scripts, Evaluation Tools, API Services, Deployment Configs, Diagnostic Tools

Repository Usage Guide

Frontend Development

For frontend changes to the Teraq platform:

  1. Clone the teraq-website repository
  2. Make changes to the HTML files in the public/ directory
  3. Update the serverless functions in the api/ directory if needed (see the sketch after this list)
  4. Commit and push to the main branch
  5. Vercel automatically deploys the changes
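
Assuming the functions use Vercel's Python runtime, a minimal serverless function for step 3 might look like the sketch below; the api/health.py path and payload are hypothetical, but the handler class shape is what Vercel expects for api/*.py files.

    # api/health.py -- hypothetical endpoint for illustration
    import json
    from http.server import BaseHTTPRequestHandler

    class handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(json.dumps({'status': 'ok'}).encode())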

Backend Development

For backend API and training services:

  1. Clone the Teraq-Backend or summit-backend repository
  2. Update the FastAPI endpoints in dashboard/forms_api.py (see the sketch after this list)
  3. Deploy to the EC2 instance using the deployment scripts
  4. Restart the Docker containers for the changes to take effect
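
A new endpoint in step 2 follows the usual FastAPI pattern. The route and schema below are hypothetical, not the actual contents of dashboard/forms_api.py:

    from fastapi import APIRouter
    from pydantic import BaseModel

    router = APIRouter(prefix="/forms")

    class FormReview(BaseModel):
        form_id: str
        status: str          # e.g. "approved" or "rejected"

    @router.post("/review")
    async def review_form(review: FormReview):
        # Persist the review decision (PostgreSQL layer elided)
        return {"form_id": review.form_id, "status": review.status}

    # app.include_router(router) wires this into the FastAPI app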

Quantum Training

For quantum model training (QTinyLlama, QRoBERTa, QVIT):

  1. Use the training APIs documented on the Tutorials page (see the example call after this list)
  2. Training scripts are deployed on the EC2 instances
  3. Monitor training via the ML Training Management page
  4. Check logs using the training terminal interface
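
A training-API call from step 1 might look like the sketch below; the base URL, endpoint path, and payload fields are assumptions, so check the Tutorials page for the real contract.

    import requests

    resp = requests.post(
        "https://api.teraq.ai/training/jobs",          # assumed base URL and path
        json={"pipeline": "qtinyllama", "epochs": 3},  # assumed payload
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. a job id to watch in ML Training Management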