Problem
perceptionmetrics/cli/computational_cost.py already measures inference
latency for CLI users. However, tabs/evaluator.py runs inference with no
timing instrumentation, so GUI users have no way to see FPS or per-image latency.
For robotics deployment, accuracy alone is insufficient. A model with 0.85 mAP
at 2 FPS is unusable; one at 0.78 mAP at 30 FPS is deployable. The GUI gives
zero signal on this axis despite the CLI having full support.
Proposed Solution
- Add perceptionmetrics/utils/latency_profiler.py — lightweight LatencyReport
class recording per-image wall-clock time (stdlib only: time + statistics)
- Instrument tabs/evaluator.py inference call with time.perf_counter()
- Render mean latency, FPS, P95, P99 as st.metric() row in the evaluator tab
No new dependencies. Consistent with existing computational_cost.py approach.
I plan to submit a PR for this.