[Neuron] Add an option to build with neuron #2065
Conversation
Hi @liangfu, apologies for the late review and thanks for the PR! I like this PR in that you didn't submit a big PR at once but instead split it into small parts. :)
Overall, I think moving the import statements is not a good idea. Considering the architecture you showed last time, I think we can just skip loading the modules that try to import custom ops. WDYT?
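A minimal sketch of the suggested approach, guarding the import instead of moving it. The module name `vllm._C` and the helper are illustrative assumptions, not necessarily what the codebase uses:

```python
# Sketch: tolerate the absence of compiled custom ops (e.g. on a Neuron
# build) instead of relocating import statements. `vllm._C` is illustrative.
try:
    from vllm import _C  # compiled custom ops, present on CUDA builds only
except ImportError:
    _C = None  # Neuron build: callers fall back to non-custom-op paths

def custom_ops_available() -> bool:
    """Return True when the compiled custom-op extension was importable."""
    return _C is not None
```

With a guard like this, modules that depend on custom ops can be skipped at load time on Neuron builds rather than failing on import.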
@liangfu Apologies for the late review and thanks for addressing my comments! Left some very minor comments on styles. Looking forward to the next PRs!
Co-authored-by: Woosuk Kwon <[email protected]>
This PR adds an option that sets up vLLM to build with the Neuron toolchain (including neuronx-cc and transformers-neuronx).
This would help us build a Neuron-tagged vLLM package, where the Neuron version suffix comes from the compiler version (neuronx-cc 2.12).
This is part of the effort to add support for accelerating LLM inference with Trainium/Inferentia (see #1866).
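As a rough illustration of how a build option could derive such a suffix from the installed compiler, assuming the pip package is named `neuronx-cc`; the helper name and suffix format are assumptions, not the PR's actual code:

```python
# Sketch (in the spirit of a setup.py change): derive a "+neuronXY" local
# version suffix from the installed neuronx-cc compiler version.
from importlib.metadata import version as pkg_version, PackageNotFoundError

def neuron_version_suffix() -> str:
    """Return e.g. '+neuron212' when neuronx-cc 2.12.x is installed, else ''."""
    try:
        compiler_version = pkg_version("neuronx-cc")  # e.g. "2.12.54.0"
    except PackageNotFoundError:
        return ""  # not a Neuron build
    major, minor = compiler_version.split(".")[:2]
    return f"+neuron{major}{minor}"

# Illustrative usage: full_version = "0.2.6" + neuron_version_suffix()
```

Deriving the suffix from the compiler rather than hardcoding it keeps the package version in sync with whatever Neuron toolchain is present at build time.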