
Conversation

brian-dellabetta
Collaborator

@brian-dellabetta brian-dellabetta commented Sep 24, 2025

SUMMARY:
A user reported in #1848 that our version pin for datasets is incompatible with lm_eval, a dev dependency that is often used in tandem with llm-compressor to eval compressed models.

(I have observed that llm-compressor and lm-evaluation-harness seem to have conflicting dependencies, specifically with the datasets library. This requires running them in separate Python environments.)

After testing with datasets 3.6.0, we found that datasets<4.0 breaks tests with the following error:

ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']

so I will instead raise an issue in lm_eval. This PR now just loosens our dev pin on the lm_eval version, which is necessary for new tasks like arc_challenge_llama.
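
For reference, the error above can be reproduced outside of the test suite. A minimal sketch, assuming the failure comes from deserializing dataset metadata that references the 'List' feature type introduced in datasets 4.0 (the feature spec below is hypothetical):

    # Minimal repro sketch: 'List' is only a registered feature type in
    # datasets>=4.0, so deserializing metadata that references it fails on 3.x.
    from datasets import Features

    feature_spec = {
        "tokens": {
            "feature": {"dtype": "string", "_type": "Value"},
            "_type": "List",  # hypothetical column spec using the v4-only type
        }
    }

    # Under datasets<4.0 this raises:
    #   ValueError: Feature type 'List' not found. Available feature types: [...]
    # Under datasets>=4.0 it builds a Features object as expected.
    features = Features.from_dict(feature_spec)
    print(features)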

TEST PLAN:
n/a


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

Contributor

Summary of Changes

Hello @brian-dellabetta, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a reported dependency conflict between llm-compressor and lm_eval, specifically concerning the datasets library. By adjusting the version constraints for both datasets and lm_eval in the project's setup.py, the PR aims to enable these tools to function together within the same Python environment, improving developer experience and workflow compatibility.

Highlights

  • Dependency Compatibility: The datasets dependency range has been broadened to allow v3.6.0 (previously v4.0.0) to resolve an incompatibility with lm_eval.
  • Loosened lm_eval Pin: The lm_eval development dependency has been updated from a strict pin (==0.4.5) to a minimum version (>=0.4.5), allowing greater flexibility (see the sketch below).
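
Illustratively, the change described in these highlights would take roughly the following shape in setup.py. The exact version bounds, the dev extras key, and the surrounding entries are assumptions, not the repository's actual file:

    # Sketch of the dependency change described above (not the actual setup.py;
    # package list trimmed, exact version bounds assumed for illustration).
    from setuptools import setup

    setup(
        name="llmcompressor",
        install_requires=[
            # broadened so that datasets 3.x (e.g. 3.6.0) also resolves,
            # where previously only the 4.0 series was allowed
            "datasets>=3.0.0,<4.1.0",
        ],
        extras_require={
            "dev": [
                # loosened from the strict pin lm_eval==0.4.5
                "lm_eval>=0.4.5",
            ],
        },
    )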

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request loosens the version constraints for datasets and lm_eval to resolve a dependency conflict. While the changes achieve the immediate goal, they introduce potential long-term maintenance and stability risks. My review includes a high-severity comment about the datasets dependency spanning multiple major versions, which is a significant maintenance concern. I've also added a medium-severity comment suggesting a safer version constraint for the lm_eval development dependency to improve build stability, along with a code suggestion.
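
The bot's concrete code suggestion is not reproduced in this thread; one common shape for a "safer" dev constraint is an upper-bounded range, for example (illustrative only, the bound is an assumption, not taken from the PR):

    # Hypothetical bounded alternative to an open-ended ">=" dev pin:
    # allow new 0.4.x releases while excluding the next minor series.
    dev_requirements = [
        "lm_eval>=0.4.5,<0.5",
    ]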

@dhuangnm
Collaborator

Thanks Brian. We used to hit errors with datasets<4.0.0, which is why we pinned it originally. Let me run the tests again with this PR to see if we still hit those issues.

fynnsu previously approved these changes Sep 24, 2025
@dhuangnm
Collaborator

@brian-dellabetta I ran the nightly, e2e, and lm_eval tests with the PR (using datasets 3.6.0), and the tests look fine except for some known failures, so I think the changes look good.

The runs are here:
https://github.com/neuralmagic/llm-compressor-testing/actions/runs/17981209012/job/51147473913
https://github.com/neuralmagic/llm-compressor-testing/actions/runs/17981239901
https://github.com/neuralmagic/llm-compressor-testing/actions/runs/17982663322/job/51152673599

dhuangnm previously approved these changes Sep 24, 2025
@brian-dellabetta brian-dellabetta added the ready When a PR is ready for review label Sep 24, 2025
@brian-dellabetta
Collaborator Author

brian-dellabetta commented Sep 24, 2025

Thanks @dhuangnm for testing it out! It does look like there are some errors related to datasets -- https://github.com/neuralmagic/llm-compressor-testing/actions/runs/17981209012/job/51147473913#step:12:21470

ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']

I actually ran into this today running lm_eval with datasets<4.0. So I think we need to stick to datasets >=4.0. I will raise a ticket on lm_eval to see if they can support datasets>=4.0 instead.

@dhuangnm
Collaborator

You're right @brian-dellabetta, good catch on the error. Sorry I missed those last few failures in the nightly job; strangely, when I originally searched for errors in the job they didn't show up. I do remember seeing this error before, when we had to pin datasets in the last release.

Collaborator

@dhuangnm dhuangnm left a comment


Let's switch back to "datasets>=4.0.0" due to the error.

Signed-off-by: Brian Dellabetta <[email protected]>
@brian-dellabetta brian-dellabetta dismissed stale reviews from fynnsu and dhuangnm via bd4e792 September 25, 2025 14:51
@brian-dellabetta brian-dellabetta changed the title [Dependencies] allow datasets v3 for lm_eval version compatibility [Dependencies] update lm_eval version pin Sep 25, 2025
@brian-dellabetta
Collaborator Author

Let's switch back to "datasets>=4.0.0" due to the error.

Thanks @dhuangnm, I updated this PR so that it only loosens the lm_eval dev pin, bringing it in line with the other dev version pins. I will instead try to resolve the user issue by raising a ticket on lm_eval to support datasets>=4.0.
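
As a quick sanity check that the loosened pin lets both tools resolve in a single environment, something like the following could be run after installing the dev extras (the extras name and distribution names here are assumptions):

    # Sketch: confirm datasets and lm_eval resolve in the same environment,
    # e.g. after `pip install llmcompressor[dev]` (extras name assumed).
    from importlib.metadata import version

    print("datasets:", version("datasets"))  # expected to remain on the 4.x series
    print("lm_eval:", version("lm_eval"))    # expected >= 0.4.5 with the loosened pin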

Collaborator

@dhuangnm dhuangnm left a comment


Thanks Brian, LGTM!
