This fork of Earth2Studio is currently in active development. We support the following versions with security updates:
| Version | Supported | Notes |
|---|---|---|
| Latest main branch | ✅ | Active development |
| Tagged releases | ✅ | Security fixes backported when critical |
| Older commits | ❌ | Please update to latest |
As a fork of NVIDIA/earth2studio, we inherit security considerations from the upstream project. Please also check the upstream repository for security advisories.
If you discover a security vulnerability that is specific to this fork (not present in upstream), please report it privately:
DO NOT open a public issue for security vulnerabilities.
- Email: [Configure your email address here]
- Subject Line: `[SECURITY] Brief description of vulnerability`
- Include:
  - Description of the vulnerability
  - Steps to reproduce
  - Potential impact
  - Suggested fix (if you have one)
  - Your name/handle (if you want credit)

What to expect after reporting:

- Initial Response: Within 48 hours of report
- Assessment: Within 7 days
- Fix Timeline: Depends on severity
  - Critical: 24-48 hours
  - High: 1 week
  - Medium: 2 weeks
  - Low: Next planned release

If the vulnerability exists in the upstream NVIDIA/earth2studio project:
- Report to NVIDIA following their security policy
- Also notify us so we can coordinate updates
- We will sync the fix from upstream once available

Security best practices when using this fork:

- Verify Sources: Only download model weights from official sources:
  - NGC (NVIDIA GPU Cloud)
  - Official HuggingFace repositories
  - Official AWS S3 buckets
  - Verified upstream sources

- Checksum Verification: When available, verify checksums of downloaded models:

  ```bash
  sha256sum model_checkpoint.pt  # Compare with official checksum
  ```

- Isolated Environment: Run Earth2Studio in isolated environments:

  ```bash
  # Use virtual environments
  python -m venv earth2studio-env

  # Or use Docker containers
  docker run --gpus all -it earth2studio:latest
  ```

- API Keys: Never commit API keys or credentials to version control:

  ```python
  # BAD - don't do this
  api_key = "sk-abc123..."

  # GOOD - use environment variables
  import os
  api_key = os.environ.get('DATA_API_KEY')
  ```

- Data Validation: Validate external data before use:

  ```python
  import numpy as np

  # Check for NaN or infinite values
  assert not np.any(np.isnan(data))
  assert not np.any(np.isinf(data))

  # Validate expected ranges
  assert data.min() >= expected_min
  assert data.max() <= expected_max
  ```

- Secure Connections: When fetching data, use secure connections:

  ```python
  # Ensure HTTPS is used
  import requests

  response = requests.get(url, verify=True)  # Verify SSL certificates
  ```

- Input Validation: Validate all user inputs:

  ```python
  # Validate file paths
  from pathlib import Path

  file_path = Path(user_input).resolve()
  assert file_path.is_relative_to(allowed_directory)

  # Validate dates
  from datetime import datetime

  try:
      time = datetime.fromisoformat(user_time_input)
  except ValueError:
      raise ValueError("Invalid time format")
  ```

- Resource Limits: Set limits to prevent resource exhaustion:

  ```python
  import torch

  # Limit GPU memory
  torch.cuda.set_per_process_memory_fraction(0.8)

  # Set timeouts for operations
  import signal

  signal.alarm(3600)  # 1 hour timeout
  ```

- Dependency Management: Keep dependencies updated:

  ```bash
  # Check for known vulnerabilities
  pip install safety
  safety check

  # Update dependencies regularly
  pip install --upgrade earth2studio
  ```

- Untrusted Models: Be cautious with models from untrusted sources (see the sketch after this list):
  - Models can contain malicious code in pickle files
  - Use the safetensors format when possible
  - Inspect model files before loading

- Sandboxing: Run untrusted inference in isolated environments:

  ```bash
  # Use containers with limited permissions
  docker run --gpus all --security-opt=no-new-privileges \
      --read-only earth2studio:latest
  ```

- Output Validation: Validate model outputs:

  ```python
  # Check for anomalous outputs
  output = model(input)
  if output.abs().max() > threshold:
      raise ValueError("Model output exceeds expected range")
  ```

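As referenced in the Untrusted Models item above, the following is a minimal sketch of loading a checkpoint without executing arbitrary pickled code. The file names are placeholders, `safetensors` is an optional extra dependency, and `weights_only` requires a recent PyTorch release.

```python
import torch
from safetensors.torch import load_file  # requires the `safetensors` package

# Option 1: safetensors files contain raw tensors only, never executable code
state_dict = load_file("model_checkpoint.safetensors")

# Option 2: restrict torch.load to plain tensor data instead of full pickle
state_dict = torch.load("model_checkpoint.pt", weights_only=True)

# model.load_state_dict(state_dict)  # `model` is your instantiated network
```
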
Many PyTorch models use pickle for serialization, which can execute arbitrary code during deserialization.
Mitigation:
- Only load models from trusted sources
- Use `torch.jit` or the safetensors format when possible
- Consider implementing custom model loaders

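To illustrate the last point, a custom loader can allow-list what pickle is permitted to resolve. This is only a sketch under a hypothetical allow-list; real checkpoints also reference tensor classes, so the safetensors or `weights_only=True` routes shown earlier are usually simpler.

```python
import builtins
import io
import pickle

# Illustrative allow-list: extend it only with types you explicitly trust
_SAFE_BUILTINS = {"dict", "list", "tuple", "set", "str", "int", "float", "bool"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Resolve only harmless builtins; refuse to import anything else
        if module == "builtins" and name in _SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"Blocked unpickling of {module}.{name}")

def restricted_loads(data: bytes):
    """Deserialize untrusted pickle bytes with the allow-list applied."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```
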
ONNX Runtime has had security vulnerabilities in the past.
Mitigation:
- Keep ONNX Runtime updated to latest version
- Monitor ONNX Runtime security advisories
- Use containerized deployments
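
One way to enforce the first mitigation is to fail fast when the installed ONNX Runtime is older than a version you have vetted. The minimum version below is a placeholder, not a recommendation.

```python
from importlib.metadata import version

from packaging.version import Version  # third-party `packaging` library

MIN_ONNXRUNTIME = Version("1.17.0")  # placeholder: set this after your own review

installed = Version(version("onnxruntime"))
if installed < MIN_ONNXRUNTIME:
    raise RuntimeError(
        f"onnxruntime {installed} is older than the vetted minimum {MIN_ONNXRUNTIME}"
    )
```
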
Data sources that fetch from external APIs or cloud storage may be vulnerable to man-in-the-middle attacks.
Mitigation:
- Always verify SSL certificates
- Use authenticated endpoints
- Validate data integrity with checksums
- Cache data locally after verification
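
A minimal sketch combining these mitigations: fetch over HTTPS with certificate verification, compare the payload against a known SHA-256 digest, and cache it only after the check passes. The URL, digest, and cache path below are placeholders.

```python
import hashlib
from pathlib import Path

import requests

DATA_URL = "https://example.com/sample_forecast.nc"  # placeholder source
EXPECTED_SHA256 = "<published sha256 digest>"         # placeholder digest
CACHE_PATH = Path("cache/sample_forecast.nc")

response = requests.get(DATA_URL, timeout=60, verify=True)  # verify TLS certs
response.raise_for_status()

digest = hashlib.sha256(response.content).hexdigest()
if digest != EXPECTED_SHA256:
    raise ValueError("Checksum mismatch: refusing to cache downloaded data")

CACHE_PATH.parent.mkdir(parents=True, exist_ok=True)
CACHE_PATH.write_bytes(response.content)  # cache only after verification
```
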
Malicious inputs could cause out-of-memory crashes.
Mitigation:
- Set memory limits with `torch.cuda.set_per_process_memory_fraction()`
- Validate input shapes before processing
- Implement timeouts for long-running operations
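
A minimal sketch of validating an input tensor before inference. The expected rank and element cap are illustrative limits, not Earth2Studio defaults.

```python
import torch

MAX_ELEMENTS = 64_000_000  # illustrative cap to keep one input from exhausting memory
EXPECTED_NDIM = 4          # e.g. (batch, variable, lat, lon)

def validate_input(x: torch.Tensor) -> torch.Tensor:
    """Reject inputs that are malformed or large enough to risk OOM."""
    if x.ndim != EXPECTED_NDIM:
        raise ValueError(f"Expected a {EXPECTED_NDIM}-D tensor, got {x.ndim}-D")
    if x.numel() > MAX_ELEMENTS:
        raise ValueError(f"Input has {x.numel()} elements; the limit is {MAX_ELEMENTS}")
    if not torch.isfinite(x).all():
        raise ValueError("Input contains NaN or infinite values")
    return x
```
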
File path inputs could be used for path traversal attacks.
Mitigation:
```python
from pathlib import Path

def safe_path(user_path, base_dir):
    """Ensure path is within the allowed directory"""
    path = Path(base_dir) / Path(user_path)
    path = path.resolve()
    if not path.is_relative_to(Path(base_dir).resolve()):
        raise ValueError("Path traversal attempt detected")
    return path
```

We use automated tools to scan for vulnerabilities:

```bash
# Using pip-audit
pip install pip-audit
pip-audit

# Using safety
pip install safety
safety check --json
```

Security updates are applied on the following schedule:

- Critical Security Updates: Applied immediately
- High Severity: Within 1 week
- Medium/Low Severity: Next scheduled release
- Breaking Changes: Evaluated case-by-case
Our CI/CD pipeline includes:
- Dependency Scanning: Automated vulnerability scanning
- Code Analysis: Static analysis for security issues
- License Compliance: Checking for problematic licenses
- Container Scanning: Docker image vulnerability scanning
We follow coordinated disclosure:
- Report received and acknowledged
- Assessment and fix development (private)
- Testing and validation (private)
- Public disclosure after fix is available
- Credit given to reporter (if desired)

Public disclosure timing depends on severity:

- Critical: 7 days after fix is available
- High: 14 days after fix is available
- Medium/Low: 30 days after fix is available
Published security advisories will be available at:
- GitHub Security Advisories (this repository)
- KNOWN_ISSUES.md (for workarounds)
- Release notes (for fixed issues)
We recognize security researchers who responsibly disclose vulnerabilities:
- TBD

Recommended security tools:

- pip-audit - Python dependency auditing
- safety - Check Python dependencies for security issues
- bandit - Python code security analysis
- trivy - Container security scanning
Monitor upstream security via the NVIDIA/earth2studio repository's GitHub Security Advisories.

For non-security questions, please use:
- GitHub Issues for bugs
- GitHub Discussions for questions
- Documentation for usage help
For security concerns, always use the private reporting methods above.
Last Updated: 2026-01-28
Next Review: TBD