Rare Satellite Object Detection

Advanced AI-powered detection of rare objects in satellite imagery

1. Select Data Source

Drop files here or click to browse

Supports: GeoTIFF, PNG, JPEG (Max 50MB)
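The upload constraints above can be sketched as a small validator. This is an illustrative helper, not the app's actual upload code; the function name and return shape are assumptions.

```python
import os

# Hypothetical validator mirroring the stated upload rules:
# GeoTIFF / PNG / JPEG only, 50 MB size limit.
ALLOWED_EXTENSIONS = {".tif", ".tiff", ".png", ".jpg", ".jpeg"}
MAX_SIZE_BYTES = 50 * 1024 * 1024  # 50 MB

def validate_upload(filename: str, size_bytes: int) -> tuple:
    """Return (ok, reason) for a candidate upload."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"unsupported format: {ext or 'none'}"
    if size_bytes > MAX_SIZE_BYTES:
        return False, f"file exceeds 50MB limit ({size_bytes} bytes)"
    return True, "ok"
```

A server-side check like this is worth keeping even when the browser filters file types, since the file picker can be bypassed.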

2. Select Detection Model

Model Architecture Details

Total Models Available: 5
• 2 Hybrid Classical-Quantum models (Prithvi-100M or CNN backbone + quantum layers)
• 2 Classical models (Prithvi-100M, ResNet-50)
• 1 Quantum CNN model (identical to CNNQuantumHybrid)

Hybrid Classical-Quantum Models (2 models):

1. PrithviQuantumHybrid (Prithvi-100M + Quantum):
   • Architecture: Prithvi-100M (frozen backbone) → QuantumMPS(768→64, bond_dim=16) → MLP(64→32→1)
   • Model Size: 100M (frozen) + ~15K trainable = ~100M total
   • Trainable Parameters: ~15K (quantum layer + classifier only)
   • Disk Size: ~380 MB (compressed model weights)
   • Performance: AUC 0.703 (+0.022 AUC over classical Prithvi-100M)
2. CNNQuantumHybrid (CNN + Quantum):
   • Architecture: CNN backbone → QuantumMPS(12544→64, bond_dim=16) → MLP(64→32→1)
   • Model Size: ~587K total (all parameters trainable)
   • Trainable Parameters: ~587K (CNN backbone + quantum layer + classifier)
   • Disk Size: ~8 MB (compressed model weights)
   • Note: the "Quantum CNN" option is the same model
⚛️ Hybrid Model Advantage: Combines proven classical feature extraction (Prithvi/CNN backbones) with quantum decision boundaries. The quantum layer operates on compressed feature vectors (64–768 dimensions), making it feasible for real quantum hardware while maintaining superior rare-object detection performance.
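The parameter economy of the quantum layer can be illustrated with a classical low-rank stand-in. This is not the actual QuantumMPS implementation; the factorization, names, and tanh nonlinearity are illustrative. Routing the 768→64 map through a bond dimension of 16 cuts the weight count from ~49K (dense) to ~13K, consistent with the ~15K trainable parameters quoted above.

```python
import numpy as np

# Illustrative stand-in for the bond_dim=16 quantum layer: a low-rank
# factorization of the 768 -> 64 projection, compared with a dense layer.
IN_DIM, OUT_DIM, BOND_DIM = 768, 64, 16

# Dense 768 -> 64 projection: 768 * 64 = 49,152 weights.
dense_params = IN_DIM * OUT_DIM

# Low-rank chain 768 -> 16 -> 64: 12,288 + 1,024 = 13,312 weights.
A = np.random.randn(IN_DIM, BOND_DIM) * 0.01   # 768 x 16
B = np.random.randn(BOND_DIM, OUT_DIM) * 0.01  # 16 x 64
factored_params = A.size + B.size

def forward(features: np.ndarray) -> np.ndarray:
    """Frozen-backbone features (batch, 768) -> compressed (batch, 64)."""
    return np.tanh(features @ A @ B)

print(dense_params, factored_params)  # 49152 13312
```

The same bottleneck idea is why the quantum layer's 64-dimensional output is small enough to target real quantum hardware.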

Classical Models (2 models):

1. PrithviClassifier (Prithvi-100M + MLP):
   • Architecture: Prithvi-100M (frozen backbone) → MLP(768→256→64→1)
   • Model Size: 100M (frozen) + ~200K trainable = ~100M total
   • Trainable Parameters: ~200K (classifier head only)
   • Disk Size: ~380 MB (compressed model weights)
   • Performance: AUC 0.681 (NASA/IBM foundation model baseline)
2. ResNet-50 (Fully Trainable):
   • Architecture: ResNet-50 (full architecture, all layers trainable)
   • Model Size: 25.6M total (all parameters trainable)
   • Trainable Parameters: 25.6M (entire network)
   • Disk Size: ~98 MB (compressed model weights)
   • Performance: AUC 0.668 (classical baseline; ~44× more trainable parameters than CNNQuantumHybrid)

Performance Comparison

| Model                | Type      | AUC-ROC | Total Size | Trainable | Latency | Training |
|----------------------|-----------|---------|------------|-----------|---------|----------|
| PrithviQuantumHybrid | Hybrid    | 0.703   | ~100M      | ~15K      | 60 ms   | 277 min  |
| CNNQuantumHybrid     | Hybrid    | 0.696   | ~587K      | ~587K     | 60 ms   | 277 min  |
| Prithvi-100M         | Classical | 0.681   | ~100M      | ~200K     | 60 ms   | 277 min  |
| ResNet-50            | Classical | 0.668   | 25.6M      | 25.6M     | 170 ms  | 780 min  |
📊 Model Size Breakdown:
Hybrid Models: 2 models (PrithviQuantumHybrid: ~100M total, CNNQuantumHybrid: ~587K total)
Classical Models: 2 models (Prithvi-100M: ~100M, ResNet-50: 25.6M)
Key Insight: Hybrid models achieve better performance with ~13× fewer trainable parameters than Prithvi-100M (~15K vs ~200K) and ~1,700× fewer than ResNet-50 (~15K vs 25.6M)
⚛️ Quantum Advantages:
• +0.022 AUC over Prithvi-100M | +0.035 AUC over ResNet-50
• ~13× fewer trainable parameters than Prithvi-100M | ~1,700× fewer than ResNet-50
• ~2.8× faster inference than ResNet-50 (60 ms vs 170 ms)
• Outperforms the classical baselines even with randomly initialized quantum layers
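The headline comparisons can be recomputed directly from the table values as a sanity check (all figures below are taken from the Performance Comparison table):

```python
# AUC, trainable-parameter, and latency figures from the table above.
auc = {"PrithviQuantumHybrid": 0.703, "CNNQuantumHybrid": 0.696,
       "Prithvi-100M": 0.681, "ResNet-50": 0.668}
trainable = {"PrithviQuantumHybrid": 15_000, "Prithvi-100M": 200_000,
             "ResNet-50": 25_600_000}
latency_ms = {"PrithviQuantumHybrid": 60, "ResNet-50": 170}

auc_gain_vs_prithvi = auc["PrithviQuantumHybrid"] - auc["Prithvi-100M"]           # +0.022 AUC
param_ratio_prithvi = trainable["Prithvi-100M"] / trainable["PrithviQuantumHybrid"]   # ~13x
param_ratio_resnet = trainable["ResNet-50"] / trainable["PrithviQuantumHybrid"]       # ~1,700x
speedup = latency_ms["ResNet-50"] / latency_ms["PrithviQuantumHybrid"]                # ~2.8x
```

Working from trainable-parameter counts, the ratios come out to roughly 13× versus Prithvi-100M and roughly 1,700× versus ResNet-50.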

Select Trained Models

Select one or more trained models to use for inference. Models are sorted by performance metrics.

Processing Detection...

Analyzing imagery with selected model

Detection Results

Centralized Training Process & EKS Kubeflow Integration

Centralized Training Architecture

All satellite detection training processes are centralized through AWS SageMaker, ensuring synchronized execution, cost tracking, and model export. Training runs are automatically linked to billing for daniel.richart@teraq.ai and can be executed on GPU or Trainium instances for optimized performance.

Infrastructure: AWS SageMaker
Compute: Trainium, GPU, CPU support
Billing: Linked to daniel.richart@teraq.ai

Synchronized Training Pipeline

1. Billing Registration: Automatically registers the training run with the billing system for daniel.richart@teraq.ai
2. Dataset Preparation: Downloads FMoW or Sentinel-2 datasets from S3 and preprocesses them for training
3. Training Execution: Runs on SageMaker managed instances (GPU or Trainium) with automatic scaling
4. Cost Tracking: Automatically tracks AWS costs via Cost Explorer and updates the billing system
5. Model Export: Exports trained models to ONNX/TensorFlow/PyTorch for customer deployment
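The five pipeline steps can be sketched as a simple orchestrator. This is a hypothetical dry-run skeleton, not the production pipeline: all side effects are stubbed, and a real run would call the SageMaker and billing APIs inside each step.

```python
from dataclasses import dataclass, field

# Hypothetical dry-run orchestrator for the five-step pipeline above.
# Step names mirror the document; all side effects are stubbed out.
@dataclass
class TrainingRun:
    billing_user: str
    dataset: str          # e.g. "fmow" or "sentinel-2"
    instance_type: str    # e.g. "ml.trn1.2xlarge"
    completed_steps: list = field(default_factory=list)

    def _step(self, name: str) -> None:
        self.completed_steps.append(name)

    def execute(self) -> list:
        self._step("billing_registration")   # link run to billing_user
        self._step("dataset_preparation")    # download + preprocess from S3
        self._step("training_execution")     # SageMaker managed training job
        self._step("cost_tracking")          # Cost Explorer query per run
        self._step("model_export")           # ONNX / TF / PyTorch artifacts
        return self.completed_steps

run = TrainingRun("daniel.richart@teraq.ai", "sentinel-2", "ml.trn1.2xlarge")
steps = run.execute()
```

Keeping the steps in one ordered method is what makes the "synchronized" guarantee checkable: a run that skips cost tracking or export simply never reaches those steps.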

Trainium Instance Support

Satellite detection training can run on Trainium instances (ml.trn1.2xlarge) or GPU instances (ml.g4dn.xlarge); SageMaker manages the underlying infrastructure and scaling automatically.
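The Trainium-vs-GPU choice boils down to one field in the training job's resource configuration. A minimal sketch, assuming the two instance types named above (the helper name and the 100 GB volume default are illustrative; a real submission would pass this dict to SageMaker's `create_training_job`):

```python
# Hypothetical helper for choosing between the two instance types
# named in the text: Trainium (ml.trn1.2xlarge) vs GPU (ml.g4dn.xlarge).
INSTANCE_TYPES = {
    "trainium": "ml.trn1.2xlarge",
    "gpu": "ml.g4dn.xlarge",
}

def resource_config(accelerator: str, instance_count: int = 1) -> dict:
    """Build the ResourceConfig block for a SageMaker training job."""
    if accelerator not in INSTANCE_TYPES:
        raise ValueError(f"unknown accelerator: {accelerator}")
    return {
        "InstanceType": INSTANCE_TYPES[accelerator],
        "InstanceCount": instance_count,
        "VolumeSizeInGB": 100,  # illustrative default
    }
```

Because the rest of the pipeline is instance-agnostic, switching a run to Trainium is a one-argument change here.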

Trainium Benefits

  • ✓ Optimized for ML training workloads
  • ✓ Lower cost per training hour
  • ✓ BF16 precision support
  • ✓ Automatic Neuron SDK integration

Porting Process

  • ✓ Existing training scripts compatible
  • ✓ Automatic SSH execution
  • ✓ Cost tracking maintained
  • ✓ Model export unchanged

Centralized Cost Management

All training processes are automatically linked to the centralized billing system:

Default Billing User: daniel.richart@teraq.ai
Organization: TERAQ
Cost Tracking: Automatic via AWS Cost Explorer
Billing Dashboard: View Costs

• EKS Costs: Tracked via the Cost Explorer API
• Trainium Costs: Tracked per training run
• Storage Costs: S3 dataset/model storage
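The per-service cost breakdown above maps onto a Cost Explorer query grouped by service. A sketch of assembling that request (the helper only builds the parameters; an actual call would be `boto3.client("ce").get_cost_and_usage(**build_cost_query())`, and the 7-day window is an illustrative default):

```python
from datetime import date, timedelta

# Hypothetical builder for a Cost Explorer get_cost_and_usage request.
def build_cost_query(days: int = 7) -> dict:
    end = date.today()
    start = end - timedelta(days=days)
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        # Group by service to separate EKS, SageMaker/Trainium, and S3 costs.
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }
```

Grouping by the SERVICE dimension is what lets the dashboard attribute EKS, Trainium, and storage spend separately from one query.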

Start Satellite Detection Training

Launch training on AWS SageMaker with GPU/Trainium support and centralized billing