High-performance, privacy-focused web analytics engine written in Rust. Built for speed, security, and scalability.
Rush Analytics provides real-time web analytics without compromising user privacy:
- High-Throughput Ingestion: Buffered event processing with async batch persistence
- Privacy First: No PII stored. Daily rotating visitor hashing with cryptographic salts
- Production Ready: Dead letter queue, exponential backoff, graceful shutdown
- Live Metrics: Real-time visitor tracking with Redis/memory hybrid cache
- Multi-Tenant: CRUD API for managing multiple sites programmatically
- Observable: Health checks, Prometheus metrics, structured JSON logging
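The privacy model above can be pictured with a small sketch: the visitor ID is derived from a salt that rotates daily, so raw IPs and user agents are never stored and IDs cannot be linked across days. This is an illustration using std's `DefaultHasher`; the actual engine uses cryptographic salts, and every name here is assumed:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive an anonymous visitor ID from (daily salt, site, IP, user agent).
/// Because the salt rotates every day, the same visitor gets a fresh ID
/// each day, and the raw IP/UA are never persisted. Sketch only: a real
/// build would use a keyed cryptographic hash, not DefaultHasher.
fn visitor_id(daily_salt: &str, site_id: &str, ip: &str, user_agent: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (daily_salt, site_id, ip, user_agent).hash(&mut h);
    h.finish()
}

fn main() {
    let today = visitor_id("2026-01-26-salt", "site-1", "203.0.113.9", "Mozilla/5.0");
    let tomorrow = visitor_id("2026-01-27-salt", "site-1", "203.0.113.9", "Mozilla/5.0");
    // New salt -> new pseudonymous ID; the visitor can't be tracked across days.
    assert_ne!(today, tomorrow);
    println!("ok");
}
```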
Stack: Rust (Axum, Tokio, SQLx) + PostgreSQL + Redis (optional)
Design: Hexagonal architecture with clear separation:
- src/api: HTTP handlers, DTOs, middleware
- src/core: Domain models, business logic, ports (traits)
- src/infra: Database repositories, config, state
- src/workers: Background jobs (flusher, cleanup)
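The core/infra split means the domain depends only on traits (ports) that the infrastructure implements. A hypothetical example of such a port; the trait and type names are illustrative, not taken from this codebase:

```rust
// core/ports: the domain depends only on this trait, never on SQLx directly.
trait EventRepository {
    fn save_batch(&mut self, events: &[PageView]) -> Result<usize, String>;
}

#[derive(Debug, Clone)]
struct PageView {
    path: String,
}

// infra: an in-memory adapter; production would provide a SQLx-backed repo
// implementing the same trait, swappable without touching core logic.
struct InMemoryRepo {
    stored: Vec<PageView>,
}

impl EventRepository for InMemoryRepo {
    fn save_batch(&mut self, events: &[PageView]) -> Result<usize, String> {
        self.stored.extend_from_slice(events);
        Ok(events.len())
    }
}

fn main() {
    let mut repo = InMemoryRepo { stored: Vec::new() };
    let n = repo.save_batch(&[PageView { path: "/blog".into() }]).unwrap();
    assert_eq!(n, 1);
    println!("{:?}", repo.stored);
}
```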
- Rust 1.75+
- PostgreSQL 15+
- Redis 7+ (optional, for live visitors)
- Clone and configure:

  ```bash
  git clone <repo-url>
  cd analytics
  cp .env.example .env
  ```

- Edit `.env`:

  ```bash
  DATABASE_URL=postgres://user:pass@localhost:5432/analytics
  REDIS_URL=redis://localhost:6379
  AUTH_SECRET=your_32_char_secret_key_min_length
  ADMIN_SECRET=admin_32_char_secret_key_min_length
  ```

- Run migrations:

  ```bash
  sqlx migrate run
  ```

- Start the server:

  ```bash
  cargo run --release
  ```

- Verify:

  ```bash
  curl http://localhost:3000/health
  ```
Before ingesting events, create a site via the admin API:
```bash
curl -X POST http://localhost:3000/admin/sites \
  -H "Authorization: Bearer your_admin_secret_here" \
  -H "Content-Type: application/json" \
  -d '{
    "domain": "example.com",
    "name": "My Website"
  }'
```

Response:

```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "domain": "example.com",
  "name": "My Website",
  "created_at": "2026-01-26T10:00:00Z"
}
```

Save the `id` - you'll need it for ingestion.
Track pageviews from your website:
```bash
curl -X POST http://localhost:3000/ingest \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{
    "site_id": "550e8400-e29b-41d4-a716-446655440000",
    "session_id": "unique-session-id",
    "path": "/blog/post-1",
    "referrer": "https://google.com",
    "utm_source": "newsletter"
  }'
```

Returns `202 Accepted` - the event is buffered for async processing.
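The `202 Accepted` reflects the buffered pipeline: the handler enqueues the event and responds immediately, while a flusher worker persists full batches in the background. A stand-alone sketch of that shape using std channels and threads (the real engine uses Tokio tasks and SQLx; all names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

/// Sketch of the ingest pipeline: the handler side enqueues events and
/// returns 202 immediately; a flusher thread persists them in batches.
/// Returns the number of events "persisted" (a stand-in for DB inserts).
fn run_pipeline(events: Vec<String>, buffer_size: usize) -> usize {
    let (tx, rx) = mpsc::channel::<String>();

    // Flusher worker: collect events and flush each full batch.
    let flusher = thread::spawn(move || {
        let mut batch = Vec::new();
        let mut flushed = 0usize;
        for event in rx {
            batch.push(event);
            if batch.len() >= buffer_size {
                flushed += batch.len(); // stand-in for an async batch insert
                batch.clear();
            }
        }
        flushed + batch.len() // final flush when the channel closes (shutdown)
    });

    for e in events {
        tx.send(e).unwrap(); // handler enqueues, then responds 202 Accepted
    }
    drop(tx); // graceful shutdown: close the channel so the worker drains
    flusher.join().unwrap()
}

fn main() {
    // With a buffer of 3, seven events flush as 3 + 3 + a final partial 1.
    let events = (0..7).map(|i| format!("/blog/post-{i}")).collect();
    assert_eq!(run_pipeline(events, 3), 7); // nothing lost across batches
    println!("ok");
}
```

Dropping the sender on shutdown is what lets the worker drain the remaining partial batch, mirroring the graceful-shutdown behavior listed in the features.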
Retrieve analytics data:
```bash
curl "http://localhost:3000/stats/550e8400-e29b-41d4-a716-446655440000?period=7d" \
  -H "Authorization: Bearer your_auth_secret_here"
```

Response:

```json
{
  "site_id": "550e8400-e29b-41d4-a716-446655440000",
  "from": "2026-01-19",
  "to": "2026-01-26",
  "summary": {
    "total_visitors": 1523,
    "total_pageviews": 4891,
    "avg_bounce_rate": 42.5,
    "live_visitors": 12
  },
  "chart_data": [...],
  "top_pages": [...],
  "top_sources": [...]
}
```

Site Management (Admin):

- `GET /admin/sites` - List all sites
- `GET /admin/sites/:id` - Get site details
- `PUT /admin/sites/:id` - Update site
- `DELETE /admin/sites/:id` - Delete site
Operations:
- `GET /health` - System health check
- `POST /admin/dlq/replay` - Replay failed events from the DLQ
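Conceptually, the DLQ holds events that exhausted their flush retries so they can be resubmitted later via the replay endpoint rather than dropped. A minimal sketch of that parking logic (all names illustrative, not from the codebase):

```rust
#[derive(Debug, Clone)]
struct FailedEvent {
    path: String,
    attempts: u32,
}

/// Park events that exhausted their retries in the dead letter queue
/// instead of dropping them; POST /admin/dlq/replay can resubmit later.
fn park_failed(events: Vec<FailedEvent>, max_retries: u32, dlq: &mut Vec<FailedEvent>) {
    for e in events {
        if e.attempts >= max_retries {
            dlq.push(e);
        }
    }
}

fn main() {
    let mut dlq = Vec::new();
    park_failed(
        vec![
            FailedEvent { path: "/a".into(), attempts: 3 }, // exhausted -> parked
            FailedEvent { path: "/b".into(), attempts: 1 }, // will be retried
        ],
        3, // FLUSHER_MAX_RETRIES
        &mut dlq,
    );
    assert_eq!(dlq.len(), 1);
    println!("{:?}", dlq);
}
```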
For complete API documentation, request/response schemas, and error codes, see:
- OpenAPI Spec: Coming soon
- Integration Guide: Check the `docs/` directory
```bash
# All tests (unit + integration + e2e)
cargo test

# E2E flows only
cargo test --test e2e_flows

# With logs
cargo test -- --nocapture

# Format check
cargo fmt --check

# Clippy (strict mode)
cargo clippy -- -D warnings
```

Production build:

```bash
docker compose up -d
# Access at http://localhost:3000
```

All environment variables with defaults:
```bash
# Database (required)
DATABASE_URL=postgres://user:pass@localhost:5432/analytics

# Cache (optional)
REDIS_URL=redis://localhost:6379

# Server
PORT=3000

# Security (min 32 chars each)
AUTH_SECRET=your_secret_key_here
ADMIN_SECRET=admin_secret_key_here

# CORS (comma-separated)
ALLOWED_ORIGINS=https://example.com,https://www.example.com

# Buffer & Performance
BUFFER_SIZE=1000               # Events buffered before flush
INGEST_RATE_LIMIT_PER_SEC=100  # Requests/sec
INGEST_BURST_SIZE=200          # Burst allowance

# Flusher Worker
FLUSHER_MAX_RETRIES=3
FLUSHER_INITIAL_BACKOFF_MS=100
FLUSHER_MAX_BACKOFF_MS=5000
```
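The three flusher settings define an exponential backoff schedule: the delay doubles from the initial value on each retry and is capped at the maximum. A sketch of that computation (the function name is assumed, not from the codebase):

```rust
/// Compute the flusher's retry delay: double the initial backoff on each
/// attempt, capped at the configured maximum. Sketch, not the actual code.
fn backoff_ms(attempt: u32, initial_ms: u64, max_ms: u64) -> u64 {
    initial_ms
        .saturating_mul(1u64 << attempt.min(63)) // 2^attempt, overflow-safe
        .min(max_ms)
}

fn main() {
    // With FLUSHER_INITIAL_BACKOFF_MS=100 and FLUSHER_MAX_BACKOFF_MS=5000,
    // the delays grow 100, 200, 400, 800, ... until the 5000 ms cap.
    let delays: Vec<u64> = (0..8).map(|a| backoff_ms(a, 100, 5000)).collect();
    println!("{:?}", delays);
    assert_eq!(delays.last(), Some(&5000));
}
```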
```bash
# Retention
DATA_RETENTION_DAYS=90   # Auto-cleanup old partitions
CLEANUP_DRY_RUN=false    # Set true to test cleanup
```

Check system health:

```bash
curl http://localhost:3000/health
```

Returns:
- System uptime
- Database latency
- Redis status
- Worker health
- Buffer capacity
Coming in v0.2.0. Will expose:
- Request rates by endpoint
- P50/P95/P99 latencies
- Buffer utilization
- Flush success/failure rates
Error: could not connect to database

Fix: Verify the DATABASE_URL format and that PostgreSQL is running:

```bash
psql $DATABASE_URL -c "SELECT 1"
```

Error: AUTH_SECRET must be at least 32 characters

Fix: Generate a secure secret:

```bash
openssl rand -base64 32
```

If you see sqlx-data.json errors:

```bash
# Regenerate query metadata
cargo sqlx prepare --database-url $DATABASE_URL
```

Error: Address already in use (os error 98)

Fix: Change the port in `.env` or kill the existing process:

```bash
lsof -ti:3000 | xargs kill -9
```

Deployment checklist:

- Strong secrets (32+ chars, random)
- CORS origins configured
- PostgreSQL with SSL enabled
- Redis persistence enabled
- Log aggregation configured
- Health checks monitored
- Auto-restart on crash
- Database: Use connection pooling (default: 10)
- Buffer: Tune `BUFFER_SIZE` based on traffic (100-10000)
- Redis: Enable AOF persistence for live visitor data
- Partitions: Monitor the `partition_drops` table for cleanup logs
MIT - See LICENSE file.
See CONTRIBUTING.md for guidelines.
- Issues: GitHub Issues
- Docs: `docs/` directory
- Changelog: CHANGELOG.md