
[MAX] Add Qwen2.5-VL encoder for Qwen-Image#9

Draft
jglee-sqbits wants to merge 1 commit into main from add/qwen-image/encoder

Conversation


@jglee-sqbits jglee-sqbits commented Mar 10, 2026

Summary

  • Add the shared Qwen2.5-VL encoder used by the Qwen image pipelines
  • Keep the encoder on the module-v2 path with Buffer-based interfaces
  • Keep edit-only multimodal helpers out of this branch

Testing

  • ./bazelw run format
  • ./bazelw run lint

Checklist

  • The PR is small and focused on one thing.
  • The code was formatted.
  • The code was tested.

@jglee-sqbits jglee-sqbits changed the title [MAX] Add Qwen2.5-VL encoder [MAX] Add Qwen2.5-VL encoder for Qwen-Image Mar 10, 2026
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the Qwen2.5-VL encoder, a foundational component for processing text within the Qwen-Image multimodal pipelines. The implementation prioritizes smooth integration into the existing module-v2 framework, ensuring compatibility with Buffer-based interfaces. This work establishes the necessary text encoding infrastructure, paving the way for future enhancements to Qwen-Image capabilities.

Highlights

  • Qwen2.5-VL Encoder Addition: Added the Qwen2.5-VL encoder, a critical component for the Qwen image pipelines, enabling text processing capabilities.
  • Module-v2 Integration: Ensured the new encoder integrates seamlessly with the existing module-v2 architecture, utilizing Buffer-based interfaces for consistent operation.
  • Scope Management: Maintained a focused scope by explicitly excluding edit-only multimodal helper functions from this branch.


Changelog
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/__init__.py
    • Added module exports for Qwen25VLEncoderModel and Qwen25VLMultimodalEncoderModel.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/layers/__init__.py
    • Added module export for Qwen25VLEncoderAttention.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/layers/attention.py
    • Implemented the Qwen25VLEncoderAttention class, providing encoder-only attention with bias support for Qwen2.5-VL.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/model.py
    • Defined the Qwen25VLEncoderModel as a ComponentModel wrapper, managing the loading and execution of the Qwen2.5-VL text encoder.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/model_config.py
    • Created Qwen25VLTextEncoderConfig and Qwen25VLTextEncoderConfigBase to define and generate configuration parameters for the Qwen2.5-VL text encoder.
  • max/python/max/pipelines/architectures/qwen2_5vl/encoder/qwen25vl.py
    • Implemented the core transformer components, including Qwen25VLMLP, Qwen25VLEncoderTransformerBlock, and Qwen25VLTextEncoderTransformer, for the Qwen2.5-VL text encoder.
Activity
  • The author confirmed the PR is small and focused on one thing.
  • The code was formatted according to project standards.
  • The author indicated that the code was not yet tested, as per the checklist.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds the Qwen2.5-VL encoder. The implementation is well-structured, but I have identified two potential high-severity issues that could affect model correctness. One concern is the reuse of a weight name mapping from Llama3 for the Qwen model, which might lead to incorrect weight loading. The second issue is a potential misconfiguration of the RotaryEmbedding layer's dim parameter, which could cause incorrect calculations in the attention mechanism. I've provided specific feedback and suggestions for these points.

from max.nn.embedding import Embedding
from max.nn.layer import Module
from max.pipelines.architectures.llama3.weight_adapters import (
    LLAMA_SAFETENSOR_MAPPING as QWEN_SAFETENSOR_MAP,


Severity: high

Reusing LLAMA_SAFETENSOR_MAPPING for a Qwen model is risky. While there might be similarities in layer naming, differences between model architectures could lead to incorrect weight loading or hard-to-debug errors. For clarity and safety, it's better to define a specific QWEN_SAFETENSOR_MAP for this architecture. If the mapping is indeed identical, adding a comment to clarify this would be beneficial for future maintenance.
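If a dedicated map is introduced, it could be sketched along these lines. The key prefixes below are illustrative placeholders, not the real Qwen2.5-VL checkpoint keys, which would need to be verified against the actual safetensors file:

```python
# Hypothetical mapping from checkpoint weight-name prefixes to internal names.
# The real Qwen2.5-VL keys must be checked against the checkpoint before use.
QWEN_SAFETENSOR_MAP: dict[str, str] = {
    "model.": "language_model.",              # illustrative prefix rewrite
    "lm_head.": "language_model.lm_head.",    # illustrative prefix rewrite
}

def remap_weight_name(name: str, mapping: dict[str, str]) -> str:
    """Apply the first matching prefix rewrite to a checkpoint weight name."""
    for before, after in mapping.items():
        if name.startswith(before):
            return after + name[len(before):]
    return name  # unmapped names pass through unchanged
```

Keeping the map architecture-specific (even if its contents start out identical to the Llama3 one) makes future divergence a local edit instead of a cross-model change.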

device = config.device

self.rope = RotaryEmbedding(
    dim=config.hidden_size,


Severity: high

The RotaryEmbedding is initialized with dim=config.hidden_size. However, it's applied to query and key tensors that have been reshaped to (..., num_heads, head_dim). The dim parameter for RotaryEmbedding should typically match the feature dimension it operates on, which is head_dim in this case. Using hidden_size is likely incorrect and could lead to shape mismatches or incorrect application of rotary embeddings during attention computation.

Suggested change
- dim=config.hidden_size,
+ dim=config.head_dim,
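To illustrate the reviewer's point: rotary embeddings rotate channel pairs within each head, so the frequency table is sized by head_dim (hidden_size divided by the number of heads), not by hidden_size. A small self-contained sketch, using Qwen2.5-7B-style numbers purely for illustration:

```python
def head_dim(hidden_size: int, num_attention_heads: int) -> int:
    """Per-head feature dimension after reshaping (batch, seq, hidden)
    into (batch, seq, heads, head_dim)."""
    assert hidden_size % num_attention_heads == 0
    return hidden_size // num_attention_heads

def rope_inv_freq(dim: int, theta: float = 10000.0) -> list[float]:
    """Inverse frequencies for rotary embeddings: one entry per rotated
    pair of channels, hence dim // 2 entries."""
    return [theta ** (-2.0 * i / dim) for i in range(dim // 2)]

# Illustrative Qwen2.5-7B-style shapes: hidden_size=3584, 28 heads.
hd = head_dim(3584, 28)    # 128 channels per head
freqs = rope_inv_freq(hd)  # 64 frequency pairs
```

Passing hidden_size where head_dim is expected would build a frequency table for the full model width and misapply the rotation to the per-head tensors, which is exactly the mismatch the suggested change fixes.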

@jglee-sqbits jglee-sqbits force-pushed the add/qwen-image/encoder branch from e6b389e to 88b352b on March 24, 2026 at 08:34
@jglee-sqbits jglee-sqbits force-pushed the add/qwen-image/encoder branch from 88b352b to 6eb37d4 on March 24, 2026 at 08:40
