Only run test job after basic checks are green to reduce wasted CI work. #249
Conversation
PyPy 3.8 and NumPy 1.21.1 work locally for me using the official PyPy binaries, so my guess is that there is some dependency missing in the GitHub images which I have installed locally.
This requires removing the Poetry lockfiles as it appears impossible to have a single lock spanning such a diverse set of Python interpreters.
@kngwyu @davidhewitt So what started as simply trying to avoid a few wasted CPU cycles sprawled into wasting quite an amount of CPU cycles by trying to add PyPy to our CI... :-\ It does seem to work now, but I had to remove the Poetry lockfiles. Using Poetry also seems pretty slow, as it will resolve NumPy versions which have to be built from source instead of downloading a pre-built wheel. I tried to avoid that by manually setting up a venv and installing our dependencies without version specifications, but I ran into trouble on macOS and had issues handling the differences between Windows and POSIX environments manually. In addition, building NumPy for PyPy 3.8 failed for unclear reasons on Windows, and both Windows and macOS were considerably slower than Linux. Hence I decided to only add PyPy on Linux as a CI target.
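For illustration only, a rough sketch of what Linux-only PyPy entries could look like in the test matrix; the job layout, runner labels, and matrix keys here are assumptions, not the exact workflow:

```yaml
# Sketch: CPython runs on all platforms, PyPy interpreters are added
# only for the Linux runner via matrix includes.
test:
  runs-on: ${{ matrix.os }}
  strategy:
    fail-fast: false
    matrix:
      os: [ubuntu-latest, macos-latest, windows-latest]
      python-version: [3.7, 3.8, 3.9]
      include:
        - os: ubuntu-latest
          python-version: pypy-3.7
        - os: ubuntu-latest
          python-version: pypy-3.8
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
```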
davidhewitt left a comment
👍 looks good to me, yeah I had a lot of similar fun when I first added PyPy support for setuptools-rust and PyO3 in CI!
Strange that numpy doesn't have prebuilt wheels for PyPy on Windows and macOS?
Re: the Poetry issues, do you think it's worth trying tox? It has worked nicely in PyO3 for its examples.
  fail-fast: false
  matrix:
-   python-version: [3.6, 3.7, 3.8, 3.9]
+   python-version: [3.7, 3.8, 3.9]
Consider adding 3.10, and 3.11-dev (if numpy supports them?)
I did, but it seems that this does not work without a Git dependency on PyO3 until 0.16 is released.
It is the older NumPy versions that are missing pre-built wheels for newer PyPy versions. But Poetry's version resolution seems to always select these older versions based on our declared Python version range (not on the actual Python version in use). This is why just installing NumPy into a virtualenv using pip is much faster: it selects the newest stable version, which usually also has a pre-built wheel available.
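To make that concrete, a minimal sketch (the step name and commands are illustrative, not our actual workflow): an unpinned pip install resolves against the interpreter that is actually running the job, so it picks the newest NumPy release and a pre-built wheel where one exists.

```yaml
- name: Install NumPy via pip (sketch)
  run: |
    python -m pip install --upgrade pip
    # no version constraint: picks the latest release with a wheel
    # for the current interpreter instead of an old source build
    python -m pip install numpy
```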
Have not used it so far, but will give it a try...
This has certainly simplified things, but NumPy really does not seem to provide pre-built wheels for PyPy on macOS and Windows, hence I left them disabled. I also switched our
kngwyu left a comment
Thanks, looks nicely shaped.
About PyPy: I think only linux is OK for now, to avoid compiling numpy.
  cargo_toml["dependencies"]["numpy"]["path"] = "../rust-numpy"
  cargo_toml["dependencies"]["ndarray"] = "0.13.1"
  cargo_toml["dependencies"]["num-complex"] = "0.2.4"
  cargo_toml["workspace"] = {}
I like this 👍🏼
Me too, but it was almost a necessity due to caching: the caching action would fail if its working directory was not present when run. Building within an existing directory therefore simplified things significantly compared to re-ordering the steps so that the copy is created first and the caching action is triggered afterwards.
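To sketch that ordering constraint (step names, paths, and the cache key below are assumptions, not the actual workflow): a step whose working-directory does not exist yet fails immediately, so patching the dependencies inside the already-present example directory avoids having to create a copy before the cache-related steps run.

```yaml
- name: Restore cargo cache
  uses: actions/cache@v2
  with:
    path: examples/simple-extension/target
    key: msrv-${{ hashFiles('**/Cargo.toml') }}
- name: Pin dependencies for the MSRV build
  # the directory referenced here must already exist when this step starts
  working-directory: examples/simple-extension
  run: python pin_deps.py  # hypothetical helper performing the toml edits shown above
```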
-  cargo update -p $(cargo pkgid -p ndarray 2>&1 >/dev/null | grep 0.15 | sed -e 's/^[ \t]*//') --precise 0.13.1
   working-directory: ../simple-extension-msrv
   - name: Test Example
+  import toml
+  import subprocess
+  cargo_lock = toml.load("Cargo.lock")
+  for pkg in cargo_lock["package"]:
+      if pkg["name"] == "ndarray" and pkg["version"] != "0.13.1":
+          pkg_id = pkg["name"] + ":" + pkg["version"]
+          subprocess.run(["cargo", "update", "--package", pkg_id, "--precise", "0.13.1"], check=True)
I personally like the previous shell hack, but this version also looks OK.
I have to admit that I am not very fond of shell one-liners, especially in this case where a human-readable error message has to be ignored and the current ndarray version 0.15 was hard-coded. And since we have Python at hand in any case, it felt nicer to uniformly handle all the Cargo manipulations using the toml package.
   runs-on: ${{ matrix.platform.os }}
+  needs: [lint, check-msrv, linalg-example]
   strategy:
-    max-parallel: 16
Are jobs still parallelized without this? I haven't used actions recently so sorry if I'm asking something outdated
Yes, they still run in parallel. But I would not have known, I had to try.
While this makes CI runs a bit slower, it avoids wasting CPU time and thereby energy on the full test matrix when a PR has more fundamental issues.
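Roughly, the gating looks like this (the job names are the ones from the diff; everything else is abbreviated):

```yaml
# Sketch: the full test matrix only starts once the cheaper jobs succeed.
test:
  needs: [lint, check-msrv, linalg-example]
  # ... runs-on, strategy.matrix, and the test steps stay as before
```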
Also adds PyPy on Linux to our test targets.
Fixes #219