
Fix omni model test CI issue#1667

Open
lvliang-intel wants to merge 3 commits into main from lvl/fix_omni_ci_issue

Conversation

@lvliang-intel
Contributor

Description

https://dev.azure.com/lpot-inc/neural-compressor/_build/results?buildId=56973&view=logs&j=44c25250-aab3-5e31-d6d7-8ba2147b1266&t=262f41be-8379-5409-f492-e6c716395db9&s=883af604-c69a-512d-c028-4ffa383c1da9
The test intermittently produces NaN values because it constructs the model without a fixed random seed or explicit weight initialization, so the weights come from nondeterministic (effectively uninitialized) state that varies between CI runs.
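A minimal standard-library sketch of why the test was flaky, assuming the root cause described above; `random.Random` stands in as an analogy for PyTorch's default random weight initialization, and `init_weights` is a hypothetical helper, not code from this repository:

```python
import random

def init_weights(n, seed=None):
    # Analogy for model construction: each parameter is drawn from an
    # RNG. With seed=None the RNG is seeded from system entropy, so
    # every run (every CI job) gets different "weights".
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Unseeded construction: two runs almost surely produce different weights,
# so a test comparing values against a tolerance can pass or fail by chance.
a = init_weights(4)
b = init_weights(4)

# Seeded construction: identical weights on every run, making the
# comparison in the test deterministic.
c = init_weights(4, seed=42)
d = init_weights(4, seed=42)
assert c == d
```

The same reasoning applies to PyTorch: without `torch.manual_seed`, each CI run initializes the model from a different RNG state.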

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Performance improvement
  • Code refactoring
  • Other (please specify):

Related Issues

Fixes or relates to #

Checklist Before Submitting

  • My code has been tested locally.
  • Documentation has been updated as needed.
  • New or updated tests are included where applicable.

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Copilot AI review requested due to automatic review settings April 7, 2026 14:13
Contributor

Copilot AI left a comment


Pull request overview

This PR stabilizes the Qwen3-Omni-MoE weight fidelity unit test by making model initialization deterministic, addressing intermittent CI failures attributed to non-deterministic random initialization.

Changes:

  • Seed PyTorch RNG before constructing the tiny Qwen3-Omni-MoE model in the weight fidelity test.
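The change above can be sketched as follows; `build_tiny_model` and `torch.nn.Linear` are stand-ins for the actual tiny Qwen3-Omni-MoE construction in the weight fidelity test, not the PR's exact code:

```python
import torch

def build_tiny_model() -> torch.nn.Module:
    # Stand-in for the tiny Qwen3-Omni-MoE test model; torch.nn.Linear
    # draws its initial weights from PyTorch's global RNG, just as the
    # real model's parameter initializers do.
    return torch.nn.Linear(8, 8)

# Seed the global PyTorch RNG *before* constructing the model so every
# CI run starts from identical weights.
torch.manual_seed(0)
model_a = build_tiny_model()

torch.manual_seed(0)
model_b = build_tiny_model()

# Identical seeds yield bit-identical parameters, so weight-fidelity
# comparisons in the test become deterministic.
assert torch.equal(model_a.weight, model_b.weight)
```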

@lvliang-intel
Contributor Author

/azp run Unit-Test-CUDA-AutoRound

@azure-pipelines

Azure Pipelines could not run because the pipeline triggers exclude this branch/path.
