
[MAX] Add Qwen-Image-Edit pipeline #12

Draft
jglee-sqbits wants to merge 1 commit into main from add/qwen-image/edit

Conversation

@jglee-sqbits
Collaborator

@jglee-sqbits jglee-sqbits commented Mar 10, 2026

Summary

  • add the dedicated Qwen image edit architecture and pipeline
  • add shared multimodal prompt encoding for Qwen image edit
  • wire edit-specific context and registry handling
  • keep the implementation on the module-v2 path
  • clarify edit-specific conditioning inputs and reduce avoidable reshape recompiles in edit pipeline glue code
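The recompile reduction in the last bullet is commonly done by caching compiled graphs keyed by the varying shape dimension. A minimal illustrative sketch of that pattern (class and names here are hypothetical, not the PR's actual code):

```python
# Hypothetical sketch: cache compiled graphs keyed by the token-count
# dimension so repeated calls with the same shape reuse a graph instead
# of triggering a recompile.
class ShapeKeyedCompiler:
    def __init__(self, build_fn):
        self._build_fn = build_fn  # builds ("compiles") a graph for a shape
        self._cache = {}           # num_noise_tokens -> compiled graph

    def get(self, num_noise_tokens):
        graph = self._cache.get(num_noise_tokens)
        if graph is None:
            graph = self._build_fn(num_noise_tokens)
            self._cache[num_noise_tokens] = graph
        return graph

builds = []
compiler = ShapeKeyedCompiler(lambda n: builds.append(n) or f"graph[{n}]")
compiler.get(4096)
compiler.get(4096)   # cache hit: no second build
compiler.get(1024)   # new shape: one more build
print(builds)        # [4096, 1024]
```

Only distinct `num_noise_tokens` values trigger a build, so resubmitting the same resolution never recompiles.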

Testing

  • ./bazelw run format
  • ./bazelw run lint
  • validated end-to-end Qwen-Image-Edit generation locally
  • validated single-image edit generation locally at 512x512 and 1024x1024

Checklist

  • The PR is small and focused on one thing.
  • The code was formatted.
  • The code was tested.

@jglee-sqbits jglee-sqbits changed the title [MAX] Add Qwen image edit pipeline [MAX] Add Qwen-Image-Edit pipeline Mar 10, 2026
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the Qwen-Image-Edit pipeline, significantly enhancing the system's image generation and editing capabilities. It integrates a sophisticated multimodal prompt encoder that processes both text and image inputs, allowing for more nuanced and controlled image conditioning. The pipeline also incorporates true classifier-free guidance for improved output quality and provides specialized model configurations, making it a powerful tool for complex image manipulation tasks.
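The "true classifier-free guidance" mentioned above usually means running the denoiser twice per step, once with the positive and once with the negative prompt embedding, then extrapolating between the two predictions. A minimal NumPy sketch of that combination step (not the PR's actual implementation):

```python
import numpy as np

# Illustrative true-CFG combination: move the negative-prompt noise
# prediction toward the positive-prompt one by the guidance scale.
def true_cfg_combine(pred_pos, pred_neg, true_cfg_scale):
    return pred_neg + true_cfg_scale * (pred_pos - pred_neg)

pos = np.array([1.0, 2.0])   # noise prediction with positive prompt
neg = np.array([0.0, 0.0])   # noise prediction with negative prompt
print(true_cfg_combine(pos, neg, 4.0))  # [4. 8.]
```

Note that a scale of 1.0 reduces to the positive-prompt prediction alone, which is why true CFG is typically only enabled when a negative prompt is supplied.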

Highlights

  • Qwen-Image-Edit Pipeline Integration: Added the dedicated Qwen-Image-Edit architecture and pipeline, including 'QwenImageEditPipeline' and 'QwenImageEditPlusPipeline', to enable advanced image editing capabilities.
  • Multimodal Prompt Encoding: Implemented shared multimodal prompt encoding for Qwen image editing, allowing the model to process both text prompts and input images for conditioning.
  • Enhanced Argument Parsing and Context Handling: Updated the offline generation script to support Qwen-specific guidance scales and multiple input images, and refined the PixelContext to manage new image conditioning fields.
  • Module-V2 Path Adherence: Ensured the new Qwen-Image-Edit implementation is consistent with the module-v2 path, maintaining architectural alignment.


Changelog
  • max/examples/diffusion/simple_offline_generation.py
    • Defined constants for Qwen image architecture names and default guidance scales.
    • Modified argument parsing to support 'guidance-scale' and 'true-cfg-scale' with Qwen-specific defaults.
    • Updated 'input-image' argument to allow multiple inputs.
    • Adjusted assertions for 'guidance-scale' and 'true-cfg-scale' to handle 'None' defaults.
    • Extended tokenizer 'max_length' logic to include Qwen image architectures.
    • Refactored image loading to handle multiple input images and convert them to data URIs.
    • Implemented logic to determine 'guidance_scale' and 'true_cfg_scale' based on Qwen image edit family and negative prompt presence.
    • Updated 'UserMessage' content to support multiple 'InputImageContent' items.
    • Passed 'true_cfg_scale' to 'PixelGenerationParams'.
    • Modified warmup image loading to use the first input image if multiple are provided.
    • Updated parameter logging to include 'true_cfg_scale'.
  • max/python/max/pipelines/architectures/__init__.py
    • Imported 'qwen_image_edit_arch' and 'qwen_image_edit_plus_arch'.
    • Registered the new Qwen image edit architectures.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/__init__.py
    • Added 'Qwen25VLEncoderModel' and 'Qwen25VLMultimodalEncoderModel' to '__all__'.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/multimodal_encoder.py
    • Added 'Qwen25VLMultimodalEncoderModel' for multimodal prompt encoding.
    • Implemented vision encoder compilation and image processing.
    • Provided 'encode' method to combine text and vision embeddings.
  • max/python/max/pipelines/architectures/qwen_image_edit/__init__.py
    • Added 'QwenImageEditPipeline', 'qwen_image_edit_arch', and 'qwen_image_edit_plus_arch' to '__all__'.
  • max/python/max/pipelines/architectures/qwen_image_edit/arch.py
    • Defined 'qwen_image_edit_arch' and 'qwen_image_edit_plus_arch' as 'SupportedArchitecture' instances for pixel generation.
  • max/python/max/pipelines/architectures/qwen_image_edit/model.py
    • Added 'QwenImageEditTransformerModel' with lazy graph compilation for 'num_noise_tokens'.
  • max/python/max/pipelines/architectures/qwen_image_edit/pipeline_qwen_image_edit.py
    • Added 'QwenImageEditPipeline' for QwenImage image editing.
    • Implemented multimodal prompt encoding, VAE image-conditioning, and true CFG.
    • Defined methods for preparing inputs, building graph components, and executing the denoising loop.
  • max/python/max/pipelines/core/context.py
    • Updated 'input_image' to 'input_images' (list of numpy arrays).
    • Added 'prompt_images' and 'vae_condition_images' fields to 'PixelContext'.
    • Modified 'update' method to accept 'latents' instead of 'image'.
    • Adjusted 'to_generation_output' to use 'latents' for output.
  • max/python/max/pipelines/lib/interfaces/diffusion_pipeline.py
    • Removed 'default_num_inference_steps'.
    • Updated 'CompileTarget' type alias.
    • Modified '_load_sub_models' to conditionally pass 'session' to component constructors.
    • Updated 'PixelModelInputs' to use 'input_images' (list of Any) instead of 'input_image' (single PIL Image).
    • Refactored 'CompileWrapper' to remove 'Module' specific compilation and simplify 'call' method.
  • max/python/max/pipelines/lib/registry.py
    • Updated 'primary_max_length' calculation for tokenizer based on pipeline class name, specifically for QwenImage pipelines.
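The changelog entries for simple_offline_generation.py describe guidance defaults chosen from the model family and the presence of a negative prompt. A sketch of that kind of resolution logic (the constants and exact rules here are assumptions, not the PR's values):

```python
# Assumed defaults for illustration only.
QWEN_DEFAULT_GUIDANCE_SCALE = 1.0
QWEN_DEFAULT_TRUE_CFG_SCALE = 4.0

def resolve_scales(guidance_scale, true_cfg_scale, negative_prompt):
    """Fill in None arguments with Qwen-specific defaults."""
    if guidance_scale is None:
        guidance_scale = QWEN_DEFAULT_GUIDANCE_SCALE
    if true_cfg_scale is None:
        # True CFG requires a second (negative-prompt) denoiser pass, so
        # only enable it when a negative prompt was actually provided.
        true_cfg_scale = (
            QWEN_DEFAULT_TRUE_CFG_SCALE if negative_prompt else 1.0
        )
    return guidance_scale, true_cfg_scale

print(resolve_scales(None, None, "blurry"))  # (1.0, 4.0)
print(resolve_scales(None, None, None))      # (1.0, 1.0)
```

Keeping the CLI defaults as None and resolving them late makes the assertions mentioned in the changelog ("handle 'None' defaults") straightforward.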


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the Qwen-Image-Edit pipeline, including its architecture, a dedicated multimodal prompt encoder, and updates to the example generation script. The changes are extensive and well-structured, adding support for image editing capabilities. My review focuses on improving code clarity, maintainability, and robustness. I've identified a few areas for improvement: local imports should be moved to the top of the file for better style, and in PixelContext, the repurposing of the latents field for the final image is confusing and could be made clearer. Additionally, a validity check for the generated output seems to have been removed and should be restored to prevent potential errors.

I am having trouble creating individual review comments, so my feedback is inlined below.

max/python/max/pipelines/core/context.py (760-762)

high

The use of the name latents for both the parameter in this method and the class field self.latents is confusing, as it now seems to store the final decoded image data rather than the intermediate latent representation from the diffusion model. This can be misleading for future maintenance and debugging. Consider renaming the parameter to something like output_image to better reflect its content. If repurposing self.latents to store the final image is intentional, this should be clearly documented in the field's docstring.

    def update(self, output_image: npt.NDArray[Any]) -> None:
        """Update the context with newly generated latents/image data."""
        self.latents = output_image

max/python/max/pipelines/core/context.py (764-770)

high

The check to ensure the output image is valid before creating the GenerationOutput has been removed. Previously, there was a check for self.image is None. While self.latents will have a default value (an empty array), it's important to verify that it contains a valid generated image before attempting to convert it. An explicit check for content validity should be restored to prevent potential errors if generation fails or is incomplete.
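A sketch of the restored check the comment asks for; the names follow the review text, but the definition of "valid" (non-empty array) is an assumption:

```python
import numpy as np

# Sketch of a restored validity check: verify the stored array actually
# holds generated data before converting it to a generation output.
def to_generation_output(latents):
    if latents is None or np.asarray(latents).size == 0:
        raise RuntimeError("generation produced no image data")
    return {"image": np.asarray(latents)}

try:
    to_generation_output(np.empty((0,)))
except RuntimeError as e:
    print(e)  # generation produced no image data
```

Failing loudly here surfaces an incomplete generation at the conversion boundary rather than downstream in image encoding.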

max/python/max/pipelines/architectures/qwen_image_edit/pipeline_qwen_image_edit.py (159)

medium

This file contains several local imports within functions (e.g., load_weights on this line, AutoTokenizer on line 165, and float32_to_bfloat16_as_uint16 on lines 508 and 597). According to PEP 8, imports should usually be at the top of the file. Moving these imports to the top would improve code readability and consistency, and avoid repeated import overhead.

