vllm.model_executor.layers.quantization.utils.flashinfer_utils ¶
_shuffle_deepseek_fp8_moe_weights ¶
Preprocess DeepSeek FP8 block-scale weights for the FlashInfer TRT-LLM kernel using the shuffle + BlockMajorK layout variant.
Returns 4D weight tensors in BlockMajorK layout with shape (E, K/block_k, Mn, block_k).
Source code in vllm/model_executor/layers/quantization/utils/flashinfer_utils.py
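The BlockMajorK shape can be illustrated with a plain reshape/permute. The sketch below (hypothetical shapes; this is not the kernel's actual shuffle, which also applies a hardware-specific permutation) shows how an (E, Mn, K) weight maps onto (E, K/block_k, Mn, block_k):

import torch

E, Mn, K, block_k = 2, 8, 256, 128  # hypothetical sizes
w = torch.randn(E, Mn, K)

# Split K into K/block_k blocks of width block_k, then move the block
# index ahead of the row dimension so K-blocks become the major axis.
w_block_major = (
    w.reshape(E, Mn, K // block_k, block_k)  # (E, Mn, K/block_k, block_k)
    .permute(0, 2, 1, 3)                     # (E, K/block_k, Mn, block_k)
    .contiguous()
)
assert w_block_major.shape == (E, K // block_k, Mn, block_k)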
_shuffle_mxfp8_moe_weights ¶
_shuffle_mxfp8_moe_weights(
w13: Tensor,
w2: Tensor,
w13_scale: Tensor,
w2_scale: Tensor,
is_gated: bool,
) -> tuple[Tensor, Tensor, Tensor, Tensor]
Preprocess MXFP8 weights and scales for the FlashInfer TRT-LLM kernel.
Following flashinfer/tests/moe/test_trtllm_gen_fused_moe.py:
1. reorder_rows_for_gated_act_gemm (interleave gate/up rows)
2. shuffle_matrix_a (weight data layout shuffle)
3. shuffle_matrix_sf_a (scale factor layout shuffle)
Source code in vllm/model_executor/layers/quantization/utils/flashinfer_utils.py
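As a rough illustration of step 1, the sketch below interleaves gate and up rows in plain PyTorch (hypothetical shapes; FlashInfer's reorder_rows_for_gated_act_gemm may interleave at a different granularity):

import torch

E, I, H = 2, 4, 8  # hypothetical: experts, intermediate size, hidden size
w13 = torch.randn(E, 2 * I, H)  # gate rows stacked above up rows
gate, up = w13[:, :I, :], w13[:, I:, :]

# Stack along a new axis and flatten it: gate0, up0, gate1, up1, ...
interleaved = torch.stack([gate, up], dim=2).reshape(E, 2 * I, H)
assert torch.equal(interleaved[:, 0], gate[:, 0])
assert torch.equal(interleaved[:, 1], up[:, 0])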
align_fp4_moe_weights_for_fi ¶
align_fp4_moe_weights_for_fi(
w13: Tensor,
w13_scale: Tensor,
w2: Tensor,
w2_scale: Tensor,
is_act_and_mul: bool,
min_alignment: int = 16,
) -> tuple[Tensor, Tensor, Tensor, Tensor, int]
Pad intermediate size so FlashInfer kernels' alignment constraints hold.
Some FlashInfer FP4 MoE kernels require the intermediate size used for GEMM to be divisible by a small alignment value. When this is not satisfied (e.g. with certain tensor-parallel sizes), we pad the gate/up and down projection weights along the intermediate dim.
Source code in vllm/model_executor/layers/quantization/utils/flashinfer_utils.py
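A minimal sketch of the padding described above, with hypothetical shapes (the real function also pads the scale tensors and returns the padded intermediate size):

import torch
import torch.nn.functional as F

E, I, H, min_alignment = 2, 100, 64, 16  # hypothetical sizes
w13 = torch.randn(E, 2 * I, H)  # gate/up projection
w2 = torch.randn(E, H, I)       # down projection

# Round the intermediate size up to the next multiple of min_alignment.
I_pad = (I + min_alignment - 1) // min_alignment * min_alignment  # 112

# Zero-pad the gate and up halves separately so each half stays contiguous.
gate, up = w13[:, :I, :], w13[:, I:, :]
gate = F.pad(gate, (0, 0, 0, I_pad - I))   # pad rows (dim -2)
up = F.pad(up, (0, 0, 0, I_pad - I))
w13_padded = torch.cat([gate, up], dim=1)  # (E, 2 * I_pad, H)

# w2 consumes the intermediate activations, so pad its input columns.
w2_padded = F.pad(w2, (0, I_pad - I))      # (E, H, I_pad)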
align_fp8_moe_weights_for_fi ¶
align_fp8_moe_weights_for_fi(
w13: Tensor,
w2: Tensor,
is_act_and_mul: bool,
min_alignment: int = 16,
) -> tuple[Tensor, Tensor, int]
Pad intermediate size so FlashInfer kernels' alignment constraints hold.
Some FlashInfer FP8 MoE kernels require the (gated) intermediate size used for GEMM to be divisible by a small alignment value. When this is not satisfied (e.g. with certain tensor-parallel sizes), we pad the gate/up and down projection weights along the intermediate dim.
Source code in vllm/model_executor/layers/quantization/utils/flashinfer_utils.py
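The alignment arithmetic itself is just a round-up to the next multiple; a quick worked example (hypothetical sizes):

def _round_up(x: int, align: int) -> int:
    # Smallest multiple of `align` that is >= x.
    return (x + align - 1) // align * align

assert _round_up(1432, 16) == 1440  # needs 8 rows of padding
assert _round_up(2048, 16) == 2048  # already aligned: no padding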
convert_moe_weights_to_flashinfer_trtllm_block_layout ¶
convert_moe_weights_to_flashinfer_trtllm_block_layout(
cache_permute_indices: dict[Size, Tensor],
w13_weight: Tensor,
w2_weight: Tensor,
) -> tuple[Tensor, Tensor]
Convert expert weights to FlashInfer's block layout.
This reorders W13 and W2 into the expected epilogue-tiled block layout and returns the shuffled weight tensors.
Source code in vllm/model_executor/layers/quantization/utils/flashinfer_utils.py
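A hypothetical call site, based on the signature above. Given the dict[Size, Tensor] annotation, the cache appears to memoize permutation indices by tensor shape, so layers with identical shapes reuse them (shapes, dtype, and device below are assumptions; FlashInfer must be installed):

import torch
from vllm.model_executor.layers.quantization.utils.flashinfer_utils import (
    convert_moe_weights_to_flashinfer_trtllm_block_layout,
)

cache_permute_indices: dict[torch.Size, torch.Tensor] = {}
w13 = torch.randn(8, 2048, 512, dtype=torch.bfloat16, device="cuda")
w2 = torch.randn(8, 512, 1024, dtype=torch.bfloat16, device="cuda")

# The same cache dict can be passed for every expert layer in the model.
w13_blk, w2_blk = convert_moe_weights_to_flashinfer_trtllm_block_layout(
    cache_permute_indices, w13, w2
)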
prepare_fp8_moe_layer_for_fi ¶
prepare_fp8_moe_layer_for_fi(
layer: Module,
w13: Tensor,
w2: Tensor,
w13_scale: Tensor,
w13_input_scale: Tensor | None,
w2_scale: Tensor,
w2_input_scale: Tensor | None,
is_trtllm: bool = False,
) -> tuple[Tensor, Tensor, Tensor, Tensor]
Convert FP8 MoE weights to the FlashInfer kernel format.
Note that for TRT-LLM, we update the model state dict with the scale format needed by these kernels.
Note that for per-tensor quantization, we update the layer's intermediate size if the weights needed padding.
Source code in vllm/model_executor/layers/quantization/utils/flashinfer_utils.py
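A hypothetical call site, based on the signature above (all shapes and the bare nn.Module stand-in are assumptions; real callers pass the FusedMoE layer, and the TRT-LLM path may require non-None input scales):

import torch
import torch.nn as nn
from vllm.model_executor.layers.quantization.utils.flashinfer_utils import (
    prepare_fp8_moe_layer_for_fi,
)

E, I, H = 8, 1024, 512  # hypothetical sizes
layer = nn.Module()  # stand-in; real callers pass the FusedMoE layer
w13 = torch.randn(E, 2 * I, H).to(torch.float8_e4m3fn)
w2 = torch.randn(E, H, I).to(torch.float8_e4m3fn)
w13_scale = torch.ones(E)  # per-tensor scales, one per expert
w2_scale = torch.ones(E)
w13_input_scale = torch.ones(E)
w2_input_scale = torch.ones(E)

w13, w2, w13_scale, w2_scale = prepare_fp8_moe_layer_for_fi(
    layer, w13, w2,
    w13_scale, w13_input_scale,
    w2_scale, w2_input_scale,
    is_trtllm=True,
)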
rotate_weights_for_fi_trtllm_fp8_per_tensor_moe ¶
rotate_weights_for_fi_trtllm_fp8_per_tensor_moe(
gemm1_weights: Tensor,
gemm2_weights: Tensor,
is_gated_activation: bool,
)
Shuffle weights into the FlashInfer TRT-LLM format.