
[fix] AutoQuant: clamp instead of use fp64 in auto quant score #1156

Open

Fridah-nv wants to merge 2 commits into main from fridah/fix-aq2

Conversation

Fridah-nv (Contributor) commented Apr 1, 2026

What does this PR do?

Replaces the explicit float64 cast in the AutoQuant score computation with clamping of intermediate values to [-1e10, 1e10], preventing overflow when squaring and summing.

Type of change: Bug fix

Usage

# Add a code snippet demonstrating how to use this
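A minimal sketch of how the affected code path is exercised, assuming the public mtq.auto_quantize entry point; the toy model, data loader, constraint value, loss function, and candidate formats below are illustrative assumptions, not taken from this PR:

```python
import torch
import modelopt.torch.quantization as mtq

# Illustrative wiring only: the toy model, calibration loader, effective-bits
# constraint, and candidate formats are assumptions for this sketch.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
calib_loader = [torch.randn(4, 16) for _ in range(8)]

# auto_quantize searches per-layer quantization formats; its internal scoring
# function is where this PR's clamping change applies.
model, search_state = mtq.auto_quantize(
    model,
    constraints={"effective_bits": 4.8},
    data_loader=calib_loader,
    forward_step=lambda m, batch: m(batch),
    loss_func=lambda output, batch: output.sum(),  # stand-in scoring loss
    quantization_formats=["FP8_DEFAULT_CFG", "INT4_AWQ_CFG"],
)
```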

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

  • Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes
    • Improved numeric stability in quantization score calculations by introducing saturation bounds, preventing overflow of intermediate values and making the computation robust across a wider range of inputs.

Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Fridah-nv requested a review from a team as a code owner April 1, 2026 16:56
Fridah-nv requested reviews from meenchen and realAsma April 1, 2026 16:56
coderabbitai bot commented Apr 1, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 784080a9-f6d0-4a88-8d98-bd2bf1355425

📥 Commits

Reviewing files that changed from the base of the PR and between 09b3c0b and d1f9390.

📒 Files selected for processing (1)
  • modelopt/torch/quantization/algorithms.py

📝 Walkthrough

Modified the _get_auto_quantize_score function in the quantization algorithms module to clamp intermediate values to the range [-1e10, 1e10] before squaring and summing, replacing the previous approach of explicit float64 conversion.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Numeric Handling Update: modelopt/torch/quantization/algorithms.py | Modified _get_auto_quantize_score to clamp intermediate values x to [-1e10, 1e10] before computation instead of relying on explicit float64 casting. Maintains identical function signature and outputs. |
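The fix can be pictured with the following simplified stand-in for the scoring step (the actual _get_auto_quantize_score in modelopt/torch/quantization/algorithms.py has more surrounding logic; the function body here is an assumption for illustration):

```python
import torch

def _get_auto_quantize_score(diff: torch.Tensor) -> torch.Tensor:
    # Saturate intermediate values so that squaring cannot overflow fp32:
    # 1e10 ** 2 = 1e20, comfortably below the fp32 max of ~3.4e38.
    diff = diff.clamp(min=-1e10, max=1e10)
    # Previous approach (per the walkthrough): cast to float64 before
    # squaring/summing, e.g. diff = diff.double().
    return diff.square().sum()
```

Clamping keeps the computation in the original dtype, so it bounds the squared terms without materializing an fp64 copy of the tensor, while preserving the function's signature and outputs.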

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately describes the main change: replacing fp64 usage with clamping in the AutoQuant score calculation, matching the modification in modelopt/torch/quantization/algorithms.py. |
| Security Anti-Patterns | ✅ Passed | The PR makes a purely numerical improvement to AutoQuant scoring, with no unsafe deserialization, hardcoded remote-code flags, eval/exec on untrusted input, nosec comments, or problematic dependencies. |


github-actions bot commented Apr 1, 2026

PR Preview Action v1.8.1

🚀 View preview at https://NVIDIA.github.io/Model-Optimizer/pr-preview/pr-1156/

Built to branch gh-pages at 2026-04-01 21:56 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

codecov bot commented Apr 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.95%. Comparing base (de55e8a) to head (9fdebce).

Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1156       +/-   ##
===========================================
+ Coverage   54.53%   74.95%   +20.41%     
===========================================
  Files         348      349        +1     
  Lines       39766    39834       +68     
===========================================
+ Hits        21686    29857     +8171     
+ Misses      18080     9977     -8103     
| Flag | Coverage Δ |
| --- | --- |
| examples | 40.40% <0.00%> (?) |
| gpu | 57.09% <0.00%> (?) |
| unit | 54.53% <100.00%> (-0.01%) ⬇️ |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
