feat: add tract_moe_ffn operator for Mixture-of-Experts FFN #2084

Open

JulienBalianSonos wants to merge 2 commits into main from feat/moe-ffn-operator

Conversation

@JulienBalianSonos
Collaborator

Implements the tract_moe_ffn operator in the tract_transformers extension, enabling inference of MoE-based models (Mixtral, GPT-OSS, Qwen MoE) exported via torch_to_nnef.

The operator encapsulates the full MoE FFN block:

- Router: x @ wg.T -> top-k expert selection with softmax gating
- Token grouping: batch tokens per expert for efficient GEMM
- Expert FFN: SwiGLU (silu(x@w1) * (x@w3)) @ w2 with BLAS-backed matmul
- Weighted scatter-add of expert outputs

Real conditional compute: unused experts are fully skipped. Handles both 2D [T,D] and 3D [B,S,D] input shapes.
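
To make the data flow concrete, here is a minimal, self-contained sketch of that pipeline in plain Rust. It is illustrative only, not the operator's actual code: flat `Vec<f32>` row-major matrices stand in for tract tensors, a naive `matvec` stands in for the BLAS-backed matmul, and the names (`MoeFfn`, `Expert`, `forward`) are made up for this sketch.

```rust
/// One expert's SwiGLU FFN weights (row-major, flattened).
struct Expert {
    w1: Vec<f32>, // gate projection, [hidden, d_model]
    w3: Vec<f32>, // up projection,   [hidden, d_model]
    w2: Vec<f32>, // down projection, [d_model, hidden]
}

struct MoeFfn {
    d_model: usize,
    hidden: usize,
    top_k: usize,
    wg: Vec<f32>, // router weights, [n_experts, d_model]
    experts: Vec<Expert>,
}

fn silu(x: f32) -> f32 {
    x / (1.0 + (-x).exp())
}

/// y = m @ x, with m a [rows, cols] row-major matrix and x a [cols] vector.
fn matvec(m: &[f32], rows: usize, cols: usize, x: &[f32]) -> Vec<f32> {
    (0..rows)
        .map(|r| m[r * cols..(r + 1) * cols].iter().zip(x).map(|(a, b)| a * b).sum())
        .collect()
}

impl MoeFfn {
    /// tokens: [t, d_model], flattened row-major. A 3D [B, S, D] input is
    /// handled by flattening to [B*S, D] first. Returns [t, d_model].
    fn forward(&self, tokens: &[f32], t: usize) -> Vec<f32> {
        let n_experts = self.experts.len();
        // Router: logits = x @ wg.T, keep the top-k experts per token, and
        // softmax over just those k logits to get the gate weights (this is
        // mathematically the same as full softmax + top-k + renormalize).
        // routing[e] collects (token index, gate) pairs per expert, so each
        // expert's tokens can be batched together.
        let mut routing: Vec<Vec<(usize, f32)>> = vec![Vec::new(); n_experts];
        for i in 0..t {
            let x = &tokens[i * self.d_model..(i + 1) * self.d_model];
            let logits = matvec(&self.wg, n_experts, self.d_model, x);
            let mut order: Vec<usize> = (0..n_experts).collect();
            order.sort_by(|&a, &b| logits[b].partial_cmp(&logits[a]).unwrap());
            let top = &order[..self.top_k];
            let max = top.iter().map(|&e| logits[e]).fold(f32::NEG_INFINITY, f32::max);
            let exps: Vec<f32> = top.iter().map(|&e| (logits[e] - max).exp()).collect();
            let denom: f32 = exps.iter().sum();
            for (j, &e) in top.iter().enumerate() {
                routing[e].push((i, exps[j] / denom));
            }
        }
        // Per-expert SwiGLU FFN, then weighted scatter-add into the output.
        // An expert with no routed tokens is skipped outright: this is the
        // conditional compute the description mentions.
        let mut out = vec![0.0f32; t * self.d_model];
        for (e, assigned) in routing.iter().enumerate() {
            if assigned.is_empty() {
                continue; // unused expert: zero work
            }
            let ex = &self.experts[e];
            for &(i, gate) in assigned {
                let x = &tokens[i * self.d_model..(i + 1) * self.d_model];
                let g = matvec(&ex.w1, self.hidden, self.d_model, x);
                let u = matvec(&ex.w3, self.hidden, self.d_model, x);
                let h: Vec<f32> = g.iter().zip(&u).map(|(a, b)| silu(*a) * b).collect();
                let y = matvec(&ex.w2, self.d_model, self.hidden, &h);
                let dst = &mut out[i * self.d_model..(i + 1) * self.d_model];
                for (o, v) in dst.iter_mut().zip(&y) {
                    *o += gate * v;
                }
            }
        }
        out
    }
}
```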

Verified bit-exact against PyTorch on TitanML/tiny-mixtral (8 experts, top-2, 246M params).
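
For illustration, a toy driver for the sketch above, echoing the 8-expert/top-2 layout at small, entirely made-up dimensions and deterministic pseudo-random weights (not the actual test setup):

```rust
fn main() {
    let (d_model, hidden, n_experts, top_k, t) = (16, 32, 8, 2, 5);
    // Deterministic filler values so the example runs without an RNG crate.
    let fill = |n: usize, seed: f32| -> Vec<f32> {
        (0..n).map(|i| (i as f32 * 0.37 + seed).sin() * 0.1).collect()
    };
    let experts = (0..n_experts)
        .map(|e| Expert {
            w1: fill(hidden * d_model, e as f32),
            w3: fill(hidden * d_model, e as f32 + 0.5),
            w2: fill(d_model * hidden, e as f32 + 0.9),
        })
        .collect();
    let moe = MoeFfn { d_model, hidden, top_k, wg: fill(n_experts * d_model, 3.0), experts };
    let tokens = fill(t * d_model, 7.0);
    let y = moe.forward(&tokens, t);
    assert_eq!(y.len(), t * d_model);
    println!("first output row: {:?}", &y[..d_model]);
}
```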
