vllm.model_executor.models.molmo2 ¶
AdapterConfig dataclass ¶
Config for a ViT-LLM adapter
Source code in vllm/model_executor/models/molmo2.py
ImagePoolingAttention ¶
Bases: Module
Multi-head attention used for image pooling
Source code in vllm/model_executor/models/molmo2.py
ImageProjectorMLP ¶
Bases: Module
MLP used for the image projector
Source code in vllm/model_executor/models/molmo2.py
LanguageModelMLP ¶
Bases: Module
Molmo2's LLM MLP.
Source code in vllm/model_executor/models/molmo2.py
Molmo2Attention ¶
Bases: Module
Molmo2's LLM Attention.
Source code in vllm/model_executor/models/molmo2.py
Molmo2ForConditionalGeneration ¶
Bases: Module, SupportsMultiModal, SupportsPP, SupportsLoRA, SupportsQuant
Source code in vllm/model_executor/models/molmo2.py
get_mm_mapping ¶
Get the module prefix in multimodal models
Source code in vllm/model_executor/models/molmo2.py
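In vLLM this method typically returns a MultiModelKeys object naming which module prefixes belong to the language model, the multimodal connector, and the vision tower, so LoRA and weight-mapping logic can tell them apart. A minimal sketch of such a return value follows; the prefix strings are illustrative assumptions, not necessarily the module names molmo2.py actually uses.

from vllm.model_executor.models.module_mapping import MultiModelKeys

def get_mm_mapping_sketch() -> MultiModelKeys:
    # Prefixes below are assumed for illustration only.
    return MultiModelKeys.from_string_field(
        language_model="model",                       # assumed LLM backbone prefix
        connector="vision_backbone.image_projector",  # assumed projector prefix
        tower_model="vision_backbone",                # assumed vision tower prefix
    )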
Molmo2ImageInputs ¶
Bases: TensorSchema
Dimensions
- nc: The total number of crops (dynamic)
- np: The total number of patches per crop
- cps: Number of channels * patch_size * patch_size
- npp: Number of pooled patches (dynamic)
- pp: pooling_size * pooling_size
- ni: Number of images
- nt: Number of image tokens (dynamic)
Source code in vllm/model_executor/models/molmo2.py
token_pooling instance-attribute ¶
token_pooling: Annotated[Tensor, TensorShape(npp, pp)]
An index tensor that maps image features to their corresponding patch tokens before pooling.
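Given the dimension names above, token_pooling holds, for each of the npp pooled output tokens, the pp indices of the source patches that are pooled into it. A minimal sketch of how such an index tensor could be applied, assuming flattened per-patch features of hidden size d; mean pooling stands in for the model's ImagePoolingAttention:

import torch

def pool_patch_features(features: torch.Tensor,
                        token_pooling: torch.Tensor) -> torch.Tensor:
    # features:      (num_patches, d)  flattened per-patch image features
    # token_pooling: (npp, pp)         indices of the pp patches pooled into
    #                                  each of the npp output tokens
    grouped = features[token_pooling]   # (npp, pp, d)
    return grouped.mean(dim=1)          # (npp, d); the real model pools with attention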
Molmo2ProcessorWrapper ¶
Wraps Molmo2Processor so that it can be called directly.
Source code in vllm/model_executor/models/molmo2.py
Molmo2VideoInputs ¶
Bases: TensorSchema
Dimensions
- nc: The total number of frames (dynamic)
- np: The total number of patches per frame
- cps: Number of channels * patch_size * patch_size
- npp: Number of pooled patches (dynamic)
- pp: pooling_size * pooling_size
- nv: Number of videos
- nt: Number of video tokens (dynamic)
Source code in vllm/model_executor/models/molmo2.py
token_pooling instance-attribute ¶
token_pooling: Annotated[Tensor, TensorShape(npp, pp)]
An index tensor that maps image features to their corresponding patch tokens before pooling.
Molmo2VisionBackbone ¶
Bases: Module, SupportsQuant
Source code in vllm/model_executor/models/molmo2.py
encode_image ¶
Parameters: images: a tensor of shape (batch_size, num_crops, num_patch, n_pixels)
Source code in vllm/model_executor/models/molmo2.py
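Only the input layout is documented here; as a rough illustration of those dimensions, with arbitrary values and assuming RGB crops split into 14x14 pixel patches so that n_pixels = 3 * 14 * 14:

import torch

batch_size, num_crops, num_patch = 2, 5, 576  # illustrative values
n_pixels = 3 * 14 * 14                        # channels * patch_size * patch_size (assumed)

# Dummy tensor matching the documented shape; encode_image would map this
# to per-patch image features for each crop.
images = torch.randn(batch_size, num_crops, num_patch, n_pixels)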
Molmo2VisionBlock ¶
Bases: Module
Residual attention block used in Vision Transformer.
Source code in vllm/model_executor/models/molmo2.py
Molmo2VisionBlockCollection ¶
Bases: Module
Collection of residual attention blocks used in Vision Transformer.
Source code in vllm/model_executor/models/molmo2.py
Molmo2VisionTransformer ¶
Bases: Module
Vision Transformer used in Vision Backbone.
Source code in vllm/model_executor/models/molmo2.py
forward ¶
Parameters: x: a tensor of shape (batch_size, num_patch, n_pixels)
Source code in vllm/model_executor/models/molmo2.py
TextConfig dataclass ¶
Configuration for a text model transformer
Source code in vllm/model_executor/models/molmo2.py
additional_vocab_size class-attribute instance-attribute ¶
additional_vocab_size: int = 128
Number of additional tokens to provide input embeddings for.
head_dim class-attribute instance-attribute ¶
head_dim: int = 128
The head dimensionality for the attention mechanism.
hidden_act class-attribute instance-attribute ¶
hidden_act: str = 'silu'
The activation function to use within the MLP layers.
hidden_size class-attribute instance-attribute ¶
hidden_size: int = 3584
The hidden size of the model.
intermediate_size class-attribute instance-attribute ¶
intermediate_size: int = 18944
The hidden size for the MLP.
layer_norm_eps class-attribute instance-attribute ¶
layer_norm_eps: float = 1e-06
Epsilon for layer norms.
max_position_embeddings class-attribute instance-attribute ¶
max_position_embeddings: int = 4096
Max positional embeddings to use in RoPE cache
norm_after class-attribute instance-attribute ¶
norm_after: bool = False
If True, apply layer norm after the attention and MLP blocks rather than before.
num_attention_heads class-attribute instance-attribute ¶
num_attention_heads: int = 28
The number of self-attention heads.
num_hidden_layers class-attribute instance-attribute ¶
num_hidden_layers: int = 48
The number of layers/blocks.
num_key_value_heads class-attribute instance-attribute ¶
num_key_value_heads: int = 4
The number of heads to use for keys and values.
qk_norm_type class-attribute instance-attribute ¶
qk_norm_type: str = 'olmo'
The type of layer norm to use for the keys and queries. Can be "olmo" or "qwen3".
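A plausible reading of the two options, sketched with torch's RMSNorm (the exact normalization used in molmo2.py may differ): OLMo 2 style normalizes the full query/key projection before it is split into heads, while Qwen3 style applies a per-head norm of size head_dim after the head split.

import torch
from torch import nn

heads, head_dim, seq = 28, 128, 16
q = torch.randn(1, seq, heads * head_dim)

# "olmo"-style: one RMSNorm over the whole query projection, before the head split.
q_olmo = nn.RMSNorm(heads * head_dim)(q)

# "qwen3"-style: per-head RMSNorm of size head_dim, after reshaping into heads.
q_qwen3 = nn.RMSNorm(head_dim)(q.view(1, seq, heads, head_dim))

Keys would be normalized the same way, with num_key_value_heads in place of heads.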
rope_scaling_layers class-attribute instance-attribute ¶
RoPE scaling layers.
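Taken together, the defaults documented above describe a grouped-query attention layout. A quick sanity check using only those documented values, assuming the dataclass can be constructed from its defaults alone:

from vllm.model_executor.models.molmo2 import TextConfig

cfg = TextConfig()  # assumes every field has a default, as the entries above suggest
assert cfg.hidden_size == 3584 and cfg.intermediate_size == 18944
assert cfg.num_attention_heads == 28 and cfg.num_key_value_heads == 4
# Grouped-query attention: 28 query heads share 4 KV heads, i.e. 7 heads per KV group.
assert cfg.num_attention_heads // cfg.num_key_value_heads == 7
# head_dim is set explicitly to 128, which matches hidden_size / num_attention_heads
# (3584 / 28 = 128) for these defaults.
assert cfg.head_dim == 128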
ViTMLP ¶
Bases: Module
MLP used in Vision Transformer.
Source code in vllm/model_executor/models/molmo2.py
ViTMultiHeadDotProductAttention ¶
Bases: Module
Multi-head attention used in Vision Transformer.
Source code in vllm/model_executor/models/molmo2.py
VitConfig dataclass ¶
Config for a vision transformer
Source code in vllm/model_executor/models/molmo2.py
get_candidate_target_fps ¶
get_candidate_target_fps(
video_fps: int | float,
sampling_fps: int | float,
max_fps: int | float = _MAX_VIDEO_FPS,
) -> list[float]
Return the subset of video_fps factors that remain multiples of sampling_fps.
Examples:
>>> get_candidate_target_fps(video_fps=6, sampling_fps=2)
[2, 6]
>>> get_candidate_target_fps(video_fps=5, sampling_fps=1)
[1, 5]
>>> get_candidate_target_fps(video_fps=2, sampling_fps=2)
[2]
>>> get_candidate_target_fps(video_fps=5, sampling_fps=2)
Traceback (most recent call last):
...
ValueError: sampling_fps=2 must divide video_fps=5 to produce
consistent frame steps.
Source code in vllm/model_executor/models/molmo2.py
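A minimal reimplementation sketch consistent with the docstring and the doctest examples above; the real function in molmo2.py caps candidates at _MAX_VIDEO_FPS and may treat non-integer rates differently (max_fps=60 below is a placeholder, not the actual default).

def candidate_target_fps_sketch(video_fps: int, sampling_fps: int,
                                max_fps: int = 60) -> list[float]:
    if video_fps % sampling_fps != 0:
        raise ValueError(f"{sampling_fps=} must divide {video_fps=} to produce "
                         "consistent frame steps.")
    # Keep the divisors of video_fps that are themselves multiples of sampling_fps.
    return [float(fps) for fps in range(1, min(video_fps, max_fps) + 1)
            if video_fps % fps == 0 and fps % sampling_fps == 0]

assert candidate_target_fps_sketch(6, 2) == [2, 6]
assert candidate_target_fps_sketch(5, 1) == [1, 5]
assert candidate_target_fps_sketch(2, 2) == [2]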
get_target_fps ¶
get_target_fps(
video_fps: float,
max_frames: int,
total_frames: int,
frame_sample_mode: str,
candidate_target_fps: list[float],
) -> float | None
Get the target fps that best spans the video while sampling the most frames.
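A hypothetical usage sketch combining the two helpers above. The selection rule shown here (the highest candidate fps whose sampled frame count still fits within max_frames) only approximates "best spans the video with the most frames", and it ignores frame_sample_mode, which the real function also consults.

video_fps, total_frames, sampling_fps, max_frames = 30, 900, 2, 64  # illustrative values

candidates = get_candidate_target_fps(video_fps, sampling_fps)  # e.g. [2, 6, 10, 30]
target = None
for fps in sorted(candidates):
    if total_frames * fps / video_fps <= max_frames:
        target = fps  # keep the highest fps whose frames still span the whole video
# Sampling at `target` fps then yields at most max_frames evenly spaced frames.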