add torchmetrics wrapper for evaluate_boxes #1256
Conversation
Force-pushed from 02bc30a to ce69a72
Codecov Report ❌ Patch coverage is …
Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1256      +/-   ##
==========================================
- Coverage   87.73%   87.52%   -0.22%
==========================================
  Files          20       21       +1
  Lines        2716     2782      +66
==========================================
+ Hits         2383     2435      +52
- Misses        333      347      +14
Force-pushed from 3cefab1 to 4f6d7db
Force-pushed from bf2ffef to 911173c
@jveitchmichaelis can you please update this PR?
Force-pushed from 1942e78 to e048c3b
I might revert some of the test changes here, to keep the PR a bit more focused. We can initialize the metrics in …
Force-pushed from e048c3b to 8a74338
Description
- Use `torchmetrics` to collect results and call `evaluate_boxes` during validation (a sketch of such a wrapper follows this list).
- … n=1. Metrics are always reset regardless.
- … `validation_step` … main. Non-loggable metrics are dropped in the torchmetric class.
- Use `config_args` to set up test fixtures instead of post-init overwriting the config. This is good practice; doing it the other way around causes undefined behaviour if certain aspects of the deepforest class aren't also updated in sync.
- … `evaluate` under the hood.
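A minimal sketch of the kind of wrapper the first bullet describes, not the PR's actual code: the class name `BoxEvaluation`, the state layout, and the exact `evaluate_boxes` signature are illustrative assumptions.

```python
# Hedged sketch: a torchmetrics Metric that accumulates per-batch
# predictions/targets and defers box evaluation to compute().
import pandas as pd
from torchmetrics import Metric

from deepforest.evaluate import evaluate_boxes  # assumed import path


class BoxEvaluation(Metric):  # hypothetical class name
    def __init__(self, iou_threshold: float = 0.4):
        super().__init__()
        self.iou_threshold = iou_threshold
        # List states are cleared by reset(); storing DataFrames works on a
        # single device but would need custom handling for distributed sync.
        self.add_state("predictions", default=[], dist_reduce_fx=None)
        self.add_state("targets", default=[], dist_reduce_fx=None)

    def update(self, predictions: pd.DataFrame, targets: pd.DataFrame) -> None:
        self.predictions.append(predictions)
        self.targets.append(targets)

    def compute(self) -> dict:
        preds = pd.concat(self.predictions, ignore_index=True)
        ground = pd.concat(self.targets, ignore_index=True)
        results = evaluate_boxes(preds, ground, iou_threshold=self.iou_threshold)
        # Keep only scalar entries so everything returned can be logged;
        # this mirrors "non-loggable metrics are dropped" above.
        return {k: v for k, v in results.items() if isinstance(v, (int, float))}
```

In a LightningModule, `update()` would be called from `validation_step` and `compute()` once per epoch; calling `reset()` every epoch, even when the evaluation itself is skipped, matches the "metrics are always reset regardless" note above.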
Related Issue(s)
- #901 (step towards this)
- #1254
- #1245
- Supports #1253
AI-Assisted Development
AI tools used (if applicable):
Claude Code to do some tedious rewriting of the config_args.
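For the `config_args` point in the description, a hedged sketch of the fixture pattern: `main.deepforest` accepts a `config_args` override dict at construction, but the specific keys shown here are illustrative, not the ones used in this PR's tests.

```python
# Preferred: pass overrides through config_args at construction, so anything
# derived from the config inside __init__ stays consistent. Keys are
# illustrative assumptions.
import pytest
from deepforest import main


@pytest.fixture()
def m():
    return main.deepforest(config_args={"batch_size": 2, "workers": 0})


# Discouraged: post-init overwriting. The config dict changes, but state the
# class already derived from the old values is not updated in sync.
def make_model_the_old_way():
    model = main.deepforest()
    model.config["batch_size"] = 2  # may leave dependent state stale
    return model
```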