[Feature Request] Multi NUMA CPU Tensor Parallel #3303

@rankaiyx

Feature Description

By splitting the weights between NUMA nodes and then running tensor parallel across those nodes, bandwidth utilization can be significantly improved on multi-socket systems, circumventing the problem of overloading the inter-socket link that arises when weights are shared globally. This was also recently implemented by sglang: https://lmsys.org/blog/2025-07-14-intel-xeon-optimization/#multi-numa-parallelism
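
For illustration, here is a minimal sketch of the idea, assuming Linux with libnuma (compile with `-lnuma`). The rows of a weight matrix are copied into node-local memory and each socket computes its own slice of a matvec, so only the small activation vector crosses the inter-socket link. All names and the structure are hypothetical; this is not the sglang or llama.cpp implementation.

```cpp
// Sketch of NUMA-aware tensor parallelism for a row-split matvec (y = W * x).
// Assumes Linux + libnuma; link with -lnuma. Illustrative only.
#include <numa.h>
#include <algorithm>
#include <cstdio>
#include <cstring>
#include <thread>
#include <vector>

// Each NUMA node owns a contiguous block of rows of W, allocated in its
// local memory, and computes the matching slice of the output vector.
struct NodeShard {
    int    node;      // NUMA node id
    float *w;         // node-local copy of this shard's rows (rows x cols)
    int    row_begin; // first global row owned by this shard
    int    rows;      // number of rows owned by this shard
};

static void shard_matvec(const NodeShard &s, const float *x, int cols, float *y) {
    // Pin the worker to its NUMA node so weight reads stay on local memory
    // instead of going over the inter-socket link.
    numa_run_on_node(s.node);
    for (int r = 0; r < s.rows; ++r) {
        const float *row = s.w + (size_t)r * cols;
        float acc = 0.0f;
        for (int c = 0; c < cols; ++c) acc += row[c] * x[c];
        y[s.row_begin + r] = acc;
    }
}

int main() {
    if (numa_available() < 0) { fprintf(stderr, "no NUMA support\n"); return 1; }
    const int rows = 4096, cols = 4096;
    const int nodes = numa_num_configured_nodes();

    // Dummy full weight matrix and input for the demo.
    std::vector<float> w_full((size_t)rows * cols, 0.001f);
    std::vector<float> x(cols, 1.0f), y(rows, 0.0f);

    // Split the rows of W across NUMA nodes; copy each slice into node-local memory.
    std::vector<NodeShard> shards(nodes);
    const int per = (rows + nodes - 1) / nodes;
    for (int n = 0; n < nodes; ++n) {
        const int begin = n * per;
        const int cnt   = std::min(per, rows - begin);
        const size_t bytes = (size_t)cnt * cols * sizeof(float);
        float *local = (float *)numa_alloc_onnode(bytes, n);
        memcpy(local, w_full.data() + (size_t)begin * cols, bytes);
        shards[n] = {n, local, begin, cnt};
    }

    // One worker per node: each computes its slice of y from local weights,
    // so only the small activation vector x is shared between sockets.
    std::vector<std::thread> workers;
    for (const auto &s : shards)
        workers.emplace_back([&s, &x, &y, cols] { shard_matvec(s, x.data(), cols, y.data()); });
    for (auto &t : workers) t.join();

    printf("y[0] = %f\n", y[0]);
    for (auto &s : shards) numa_free(s.w, (size_t)s.rows * cols * sizeof(float));
    return 0;
}
```

In a real implementation the same row or column split would be applied per layer, with a reduction or concatenation of the partial results between layers rather than a single matvec.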

Motivation

Likely faster on multi-socket CPU systems, since each socket reads weights from its local memory instead of saturating the inter-socket link.
