A high-performance, real-time object detection application for iOS, leveraging YOLOv3, CoreML, and Apple's Vision framework.
- Live Detection: Real-time object recognition using the YOLOv3 model trained on the COCO dataset.
- Neural Acceleration: Explicitly optimized for the Apple Neural Engine (ANE) for maximum speed and battery efficiency (see the loading sketch below).
- Glassmorphic UI: High-end SwiftUI overlay with a real-time confidence-threshold control.
- Visual Feedback: Dynamic bounding boxes that scale correctly with the camera's field of view (see the coordinate sketch below).
- Safety: Built with Swift 6 strict concurrency to ensure heavy ML inference never creates data races.
- Accuracy: Uses `ContinuousClock` to measure inference latency with nanosecond precision, providing transparent performance metrics (see the loading sketch below).
- Throughput: Implements a "Smart Back-Pressure" system that prioritizes the most recent camera frame, keeping the UI at 60+ FPS even during heavy detection (sketched after the framework list below).
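The neural acceleration and timing points above can be illustrated together. The snippet below is a minimal sketch, not the app's actual code: it assumes the bundled model generates a `YOLOv3` class (the name Xcode derives from the `.mlmodel` file) and shows how `MLModelConfiguration.computeUnits` lets Core ML schedule work on the Neural Engine while `ContinuousClock` times each Vision request.

```swift
import CoreML
import Vision

// Minimal sketch: load the model with Neural Engine access enabled and
// time one Vision request with ContinuousClock. `YOLOv3` is the class
// Xcode generates from the bundled .mlmodel; the exact name is an assumption.
func makeRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // let Core ML use the ANE when available
    let coreMLModel = try YOLOv3(configuration: config).model
    return VNCoreMLRequest(model: try VNCoreMLModel(for: coreMLModel))
}

func timedDetection(_ request: VNCoreMLRequest, on pixelBuffer: CVPixelBuffer) throws -> Duration {
    let clock = ContinuousClock()
    // ContinuousClock reports Duration values with nanosecond resolution.
    return try clock.measure {
        try VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    }
}
```

Once `perform` returns, the predictions are available in `request.results`, typically as `VNRecognizedObjectObservation` values for an object-detection model.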
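For the bounding-box overlay, Vision reports normalized rectangles with a bottom-left origin, so a conversion along these lines maps them into SwiftUI's top-left coordinate space. This is a hypothetical helper, assuming the camera preview fills the view without letterboxing:

```swift
import Vision
import CoreGraphics

// Hypothetical helper: map a normalized Vision bounding box (origin at
// bottom-left) into a view-space CGRect (origin at top-left).
func overlayRect(for observation: VNDetectedObjectObservation, in viewSize: CGSize) -> CGRect {
    let rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                            Int(viewSize.width),
                                            Int(viewSize.height))
    // Flip the Y axis for SwiftUI/UIKit coordinates.
    return CGRect(x: rect.minX,
                  y: viewSize.height - rect.maxY,
                  width: rect.width,
                  height: rect.height)
}
```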
- Swift 6
- SwiftUI
- CoreML / Vision
- AVFoundation
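As a rough illustration of how AVFoundation feeds the Vision pipeline, the sketch below (the type and callback names are assumptions, not the app's real API) drops incoming camera frames whenever one is still in flight, which is the back-pressure behavior described above; `alwaysDiscardsLateVideoFrames` adds a second safety net at the AVFoundation level.

```swift
import AVFoundation

// Hypothetical capture delegate sketching the "Smart Back-Pressure" idea:
// a frame is ignored while the previous one is still being processed, so
// the pipeline always works on the freshest image.
final class FrameStream: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var isProcessing = false
    var onFrame: ((CVPixelBuffer) -> Void)?

    func configure(_ output: AVCaptureVideoDataOutput) {
        // AVFoundation-level safety net: never queue stale frames.
        output.alwaysDiscardsLateVideoFrames = true
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isProcessing,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        isProcessing = true
        defer { isProcessing = false }
        onFrame?(pixelBuffer)   // run the Vision request synchronously here
    }
}
```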
- `isProcessing`: Prevents frame-stacking and ensures the pipeline stays reactive.
- `confidenceThreshold`: Real-time filtering of ML predictions via `@Observable` (see the state sketch below).
- `inferenceTime`: Live performance tracking for hardware benchmarking.
- Xcode 16.0+ (Required for Swift 6 features)
- iOS 17.0+ (Required for modern `@Observable` and Vision performance)
- Physical Device: Required for Neural Engine (ANE) and camera testing.
Created by Jonni Åkesson.
Open for educational and portfolio use.