[MAX] Add shift_terminal support for Qwen-Image#7

Draft
jglee-sqbits wants to merge 2 commits into main from add/qwen-image/scheduler

Conversation

@jglee-sqbits
Collaborator

@jglee-sqbits jglee-sqbits commented Mar 10, 2026

Summary

  • add shift_terminal support to FlowMatchEulerDiscreteScheduler
  • keep existing scheduler behavior unchanged when shift_terminal is unset

Testing

  • ./bazelw run format
  • ./bazelw run lint

Checklist

  • The PR is small and focused on one thing.
  • The code was formatted.
  • The code was tested.

@jglee-sqbits changed the title from "[MAX] Add shift_terminal support to FlowMatchEulerDiscreteScheduler" to "[MAX] Add shift_terminal support for Qwen-Image" on Mar 10, 2026
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the FlowMatchEulerDiscreteScheduler by adding support for the shift_terminal parameter. This allows users to stretch shifted sigmas, influencing the terminal value of the diffusion process. The changes ensure that the existing behavior of the scheduler is preserved when the new parameter is not utilized, providing a backward-compatible enhancement.

Highlights

  • New Feature: Introduced shift_terminal support to the FlowMatchEulerDiscreteScheduler.
  • Parameter Addition: Added a shift_terminal parameter to the scheduler's __init__ method, allowing for stretching shifted sigmas.
  • Behavior Preservation: Ensured that the existing scheduler behavior remains unchanged when shift_terminal is not set.
  • Sigma Stretching: Implemented logic to stretch sigmas so that the last sigma value equals shift_terminal if it's set and greater than 0.
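The stretching described above can be sketched as a small standalone function. This is a minimal sketch of the linear rescaling, not the scheduler's actual implementation; the function name `stretch_to_terminal` is illustrative.

```python
import numpy as np

def stretch_to_terminal(sigmas: np.ndarray, shift_terminal: float) -> np.ndarray:
    """Linearly rescale sigmas so the final value equals shift_terminal.

    Maps 1 -> 1 and sigmas[-1] -> shift_terminal by scaling (1 - sigma).
    """
    one_minus_z = 1.0 - sigmas
    scale_factor = one_minus_z[-1] / (1.0 - shift_terminal)
    return (1.0 - one_minus_z / scale_factor).astype(np.float32)

# A 5-step schedule from 1.0 down to 0.0, stretched to end at 0.02:
sigmas = np.linspace(1.0, 0.0, 5, dtype=np.float32)
stretched = stretch_to_terminal(sigmas, 0.02)
# stretched[0] stays 1.0; stretched[-1] becomes 0.02
```

Note that the rescale is affine in `sigma`, so the spacing of intermediate steps is preserved up to a constant factor.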


Changelog
  • max/python/max/pipelines/lib/diffusion_schedulers/scheduling_flow_match_euler_discrete.py
    • Added shift_terminal parameter and logic to stretch sigmas.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page. Here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds shift_terminal support to the FlowMatchEulerDiscreteScheduler to allow stretching of sigma values. However, shift_terminal is not validated: a value of exactly 1.0 produces a division by zero in the scale-factor computation, which can crash the application (Denial of Service).

Comment on lines +156 to +160
one_minus_z = 1.0 - sigmas
scale_factor = one_minus_z[-1] / (1.0 - self.shift_terminal)
sigmas = (1.0 - (one_minus_z / scale_factor)).astype(np.float32)


Severity: medium (security)

The code performs a division by (1.0 - self.shift_terminal) on line 157. If self.shift_terminal is 1.0, this will cause a ZeroDivisionError, leading to an application crash (Denial of Service). This is a critical vulnerability as scheduler parameters are often exposed to users. It is recommended to add validation in the __init__ method to ensure shift_terminal is within a safe range, such as (0, 1). Additionally, consider handling this edge case explicitly by checking for self.shift_terminal being close to 1.0 using np.isclose and setting sigmas to 1.0 in that scenario.

Suggested change:

if np.isclose(self.shift_terminal, 1.0):
    sigmas = np.ones(sigmas.shape, dtype=np.float32)
else:
    one_minus_z = 1.0 - sigmas
    scale_factor = one_minus_z[-1] / (1.0 - self.shift_terminal)
    sigmas = (1.0 - (one_minus_z / scale_factor)).astype(np.float32)
