
Conversation

@mgoin (Member) commented Jul 10, 2025

Purpose

Skip the known failing test (reported in #20723) for now while it is investigated.
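
For reference, the skip uses pytest's built-in skip marker. A minimal sketch (the decorator mirrors the diff shown later in this thread; the test's real signature and body are elided):

import pytest

# Temporarily skip the known-failing Tensorizer + LoRA test while the
# root cause is investigated.
@pytest.mark.skip(reason=("Skipping this test as tensorizer is not "
                          "working with LoRA as of #19619"))
def test_tp2_serialize_and_deserialize_lora():
    ...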

Test Plan

Test Result

@mgoin requested a review from jeejeelee as a code owner July 10, 2025 00:30
@github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which executes a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @mgoin, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request temporarily disables a specific integration test that is currently causing CI failures. The test, test_tp2_serialize_and_deserialize_lora, is known to be problematic when combining Tensorizer and LoRA, and its underlying issue is being investigated separately. By skipping this test, the CI pipeline can proceed without being blocked, allowing other changes to be integrated.

Highlights

  • CI Stability: This pull request addresses a known CI failure by temporarily skipping a specific test case to unblock continuous integration pipelines.
  • Test Skipping: The test_tp2_serialize_and_deserialize_lora function, which involves Tensorizer and LoRA functionalities, is now skipped due to an ongoing issue documented in #19619.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., commenting on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
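
For example, to request a fresh review after pushing new commits, post an issue comment on the pull request containing just:

/gemini review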

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
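
For instance, a minimal custom style guide might look like the sketch below (the file name .gemini/styleguide.md is an assumption here; see the linked instructions for the exact conventions):

  # Code Review Style Guide (contents of .gemini/styleguide.md)
  - Follow PEP-8 for all Python code.
  - Keep test skip reasons explicit and link them to a tracking issue.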

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in your IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@mergify bot added the llama (Related to Llama models) label Jul 10, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request disables a failing test related to Tensorizer and LoRA, which is a good temporary measure to keep the CI green. I've left one comment regarding a discrepancy in the referenced issue number in the skip reason to improve clarity for future maintenance. Otherwise, the change is correct and follows the PR's intent.

Comment on lines +154 to +155
@pytest.mark.skip(reason=("Skipping this test as tensorizer is not "
"working with LoRA as of #19619"))
@gemini-code-assist bot (Contributor)

Severity: medium

Thanks for disabling this failing test. There's a small discrepancy in the issue number referenced. The PR description points to issue #20723, while the skip reason in the code mentions #19619.

To ensure clarity for future developers, it's best to reference the most relevant issue tracking this specific test failure. Assuming #20723 is the correct one, I've suggested an updated and more concise reason.

@pytest.mark.skip(reason="Skipping failing test for tensorizer with LoRA (see #20723).")

generate_and_test(llm, sql_lora_files)


@pytest.mark.skip(reason=("Skipping this test as tensorizer is not "
                          "working with LoRA as of #19619"))
@sangstar (Collaborator) commented Jul 10, 2025

Please note that this test looks to be failing due to a now-unsupported initialization pattern for TensorizerConfig instances; it does not imply that Tensorizer support with LoRA is broken, so the skip reason given here isn't accurate. The same CI run mentioned in the related issue passed the Tensorizer LoRA test from test_tensorizer_entrypoint.py, which serializes a LoRA adapter, deserializes it, serves it alongside its target model, and performs a completion with the adapter.
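
Given that context, a skip reason reflecting the actual failure mode (a sketch based on the explanation above, not code proposed in this PR) might read:

@pytest.mark.skip(reason=("Test uses a now-unsupported initialization "
                          "pattern for TensorizerConfig; see #19619"))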

@mgoin (Member, Author)

I see, thanks for the context. I'll still land this temporary patch to unblock other PRs in flight, but please do work on a proper fix when you can.

@mgoin added the ready (ONLY add when PR is ready to merge/full CI is needed) label Jul 10, 2025
@mgoin merged commit be1e128 into vllm-project:main Jul 10, 2025
57 checks passed
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 27, 2025

Labels

llama: Related to Llama models
ready: ONLY add when PR is ready to merge/full CI is needed


4 participants