
Rebuild for PyPy3.8 and PyPy3.9 #204

Merged

Conversation

regro-cf-autotick-bot
Contributor

This PR has been triggered in an effort to update pypy38.

Notes and instructions for merging this PR:

  1. Please merge the PR only after the tests have passed.
  2. Feel free to push to the bot's branch to update this PR if needed.

Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself, don't close this PR until your rebuild has been merged.

If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase @conda-forge-admin, please rerun bot in a PR comment to have the conda-forge-admin add it for you.

This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. Feel free to drop us a line if there are any issues! This PR was generated by https://github.com/regro/autotick-bot/actions/runs/2365798351, please use this URL for debugging.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@h-vetinari
Member

Same situation as #202...

@h-vetinari
Member

From #202:

@mattip: This may be a JIT bug in register allocation on windows, see PyPy issue 3753

Awesome to hear that a culprit has been found! :)

Should we backport https://foss.heptapod.net/pypy/pypy/-/commit/42b67db19ee1aee748cbf350b90001db54daefa6 on the pypy feedstock?

@mattip
Contributor

mattip commented May 31, 2022

I was trying to confirm that the fix works. Tests fail to start without cython/cython#4764, which has not made it into a cython release yet. When I use the new meson build (which needs mesonbuild/meson#10138), I am seeing a segfault after 38% of the testing. Trying to debug locally led me to open scipy/scipy#16322, since the build now only uses the r-package version of gcc on windows. The debug output is unusable; there are thousands of debug print statements like

warning: 3364:4084 @ 654583796 - LdrpCallTlsInitializers - INFO: Calling TLS callback 00007FFFD9D71840 for DLL "d:\pypy_stuff\scipy\build\Lib\site-packages\scipy\_lib\_fpumode.pypy38-pp73-win_amd64.pyd" at 00007FFFD9D70000

I could not find how to get rid of them. So for now I gave up and am trying the older setup.py-based build.

@h-vetinari
Member

@conda-forge-admin, please rerender

@maresb

maresb commented Jul 24, 2022

The Windows `win_64_numpy1.19python3.8.____cpythonpython_implcpython` build failed on the "Download Miniforge" step with "decryption failed or bad record mac". I presume this is one of those transient errors that I occasionally see.

Regarding such errors, see my PR at conda-forge/conda-smithy#1639 😉

@h-vetinari
Member

@mattip - great news, the windows failures in quadpack are gone! :)

The remaining failures largely come down to errors that also exist on unix (and did already last time):

=========================== short test summary info ===========================
FAILED sparse/tests/test_construct.py::TestConstructUtils::test_bmat - Assert...
FAILED stats/tests/test_sampling.py::TestTransformedDensityRejection::test_bad_pdf[<lambda>-TypeError-must be real number, not list]
FAILED stats/tests/test_sampling.py::TestTransformedDensityRejection::test_bad_dpdf[<lambda>-TypeError-must be real number, not list]
FAILED stats/tests/test_sampling.py::TestNumericalInversePolynomial::test_bad_pdf[<lambda>-TypeError-must be real number, not list]
= 4 failed, 34998 passed, 3217 skipped, 104 xfailed, 11 xpassed, 21 warnings in 2016.52s (0:33:36) =

@mattip
Contributor

mattip commented Jul 25, 2022

Cool. The last 3 seem to be the difference between the expected error message and the actual one and could probably be handled in a condition in the test. The first one is a little trickier: is it due to the order of printing a set or is it a real error?
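A minimal sketch of the kind of tolerant check suggested here: accept either interpreter's TypeError wording instead of pinning one exact message. The PyPy wording is taken from the failing test names above; the CPython wording varies by version, so the pattern matches loosely. This is illustrative, not scipy's actual test code.

```python
import re

# Accept either PyPy's message ("must be real number, not list") or
# CPython's float() conversion wording, which differs between versions.
expected = r"must be real number, not list|float\(\) argument must be"

try:
    float([1.0])  # converting a list to float raises TypeError on any interpreter
except TypeError as exc:
    assert re.search(expected, str(exc)), str(exc)
```

In a pytest suite the same idea would go into the `match=` argument of `pytest.raises`.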

@h-vetinari
Member

> Cool. The last 3 seem to be the difference between the expected error message and the actual one and could probably be handled in a condition in the test. The first one is a little trickier: is it due to the order of printing a set or is it a real error?

The last three sound to me like something is going wrong in the wrapper itself, not necessarily the test.

Regarding the last one, I don't think that the order of dimensions is arbitrary; numpy is normally pretty sensitive about that.

@mattip
Contributor

mattip commented Jul 25, 2022

I fixed the error message discrepancy in PyPy for the next release. Here is a possible patch for the test_sampling.py test:
patch.txt

@mattip
Contributor

mattip commented Jul 25, 2022

Whoops, sorry, this is the correct patch. I actually tried it this time.
patch.txt

Now for the test_bmat failure.

@mattip
Contributor

mattip commented Jul 25, 2022

The difference is here, where scipy does `set(b.shape[other_axis] for b in blocks)`. This boils down to `set([2, 1])`, which on PyPy prints as `{2, 1}` and on CPython as `{1, 2}`. It seems like the test is being over-specific.
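The difference can be reproduced in a few lines. This sketch mirrors the check described above (variable names are illustrative, not scipy's exact code); the set's contents are identical on any interpreter, only its string rendering differs, and sorting before formatting would make the message deterministic.

```python
block_shapes = [(2, 3), (1, 3)]  # two blocks mismatching along axis 0
other_axis = 0
dims = set(s[other_axis] for s in block_shapes)

# The set itself compares equal regardless of interpreter...
assert dims == {1, 2}

# ...but str(dims) depends on iteration order: CPython happens to print
# '{1, 2}' while PyPy prints '{2, 1}'. Sorting removes the ambiguity:
msg = f"Mismatching dimensions along axis 0: {sorted(dims)}"
```

With `sorted(dims)` the error message would read `[1, 2]` on every implementation.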

@mattip
Contributor

mattip commented Jul 25, 2022

This patch, which makes the regex more permissive, makes the test pass:

```diff
diff --git a/scipy/sparse/tests/test_construct.py b/scipy/sparse/tests/test_construct.py
index cfc9d0a06..9fc46ae05 100644
--- a/scipy/sparse/tests/test_construct.py
+++ b/scipy/sparse/tests/test_construct.py
@@ -421,7 +421,7 @@ class TestConstructUtils:
 
         with assert_raises(ValueError) as excinfo:
             construct.bmat([[A.tocsc()], [B.tocsc()]])
-        excinfo.match(r'Mismatching dimensions along axis 1: {1, 2}')
+        excinfo.match(r'Mismatching dimensions along axis 1: {[12], [12]}')
 
         with assert_raises(ValueError) as excinfo:
             construct.bmat([[A, C]])
@@ -429,7 +429,7 @@ class TestConstructUtils:
 
         with assert_raises(ValueError) as excinfo:
             construct.bmat([[A.tocsr(), C.tocsr()]])
-        excinfo.match(r'Mismatching dimensions along axis 0: {1, 2}')
+        excinfo.match(r'Mismatching dimensions along axis 0: {[12], [12]}')
 
         with assert_raises(ValueError) as excinfo:
             construct.bmat([[A.tocsc(), C.tocsc()]])
```

Co-Authored-By: Matti Picus <matti.picus@gmail.com>
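The loosened pattern can be verified standalone: the `[12]` character classes accept the set printed in either order, so the match succeeds on both CPython's and PyPy's rendering.

```python
import re

# The relaxed pattern from the patch above: '{' and '}' are literal,
# each [12] matches either digit, so both orderings pass.
pattern = r'Mismatching dimensions along axis 0: {[12], [12]}'

assert re.search(pattern, 'Mismatching dimensions along axis 0: {1, 2}')  # CPython
assert re.search(pattern, 'Mismatching dimensions along axis 0: {2, 1}')  # PyPy
```

The trade-off is that the pattern is slightly too permissive (it would also match `{1, 1}`), which is acceptable for an error-message check.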
@h-vetinari
Member

Thanks for the analysis and the patches @mattip! 🙃

@mattip
Contributor

mattip commented Jul 26, 2022

aarch64 is failing/timing out.

@h-vetinari h-vetinari merged commit dcb89d7 into conda-forge:main Jul 26, 2022
@regro-cf-autotick-bot regro-cf-autotick-bot deleted the rebuild-pypy38-0-1_h0f7bf7 branch July 26, 2022 04:50
@mattip
Contributor

mattip commented Jul 26, 2022

@h-vetinari thanks!

@h-vetinari
Member

> aarch64 is failing/timing out.

Hm, I thought these were just regular timeouts, but unfortunately it seems the crash reproduces.

@mattip
Contributor

mattip commented Jul 26, 2022

It is strange that aarch64 is soooo much slower for PyPy than CPython. The difference for the other architectures is not so extreme

@h-vetinari
Member

At least the pypy3.9 build ran through now. This also gives us some indication of where the tests seem to be wasting the most time:

============================= slowest 50 durations =============================
724.97s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-wrapcauchy-arg106]
498.02s call     optimize/_trustregion_constr/tests/test_report.py::test_gh12922
333.99s call     _lib/tests/test_import_cycles.py::test_modules_importable
228.37s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-truncnorm-arg97]
205.86s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-truncnorm-arg98]
203.49s call     special/tests/test_cython_special.py::test_cython_api[elliprj]
203.18s call     optimize/tests/test__differential_evolution.py::TestDifferentialEvolutionSolver::test_L1
189.30s call     stats/tests/test_hypotests.py::TestPermutationTest::test_randomized_test_against_exact_samples
185.39s call     optimize/tests/test__differential_evolution.py::TestDifferentialEvolutionSolver::test_L4
172.03s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-ncx2-arg76]
154.01s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-beta-arg4]
142.12s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[multiquadric]
135.25s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[thin_plate_spline]
129.63s call     optimize/tests/test__basinhopping.py::TestBasinHopping::test_all_nograd_minimizers
126.50s call     stats/tests/test_distributions.py::TestFitMethod::test_fshapes[MM]
126.25s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[inverse_quadratic]
125.47s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[cubic]
122.84s call     signal/tests/test_filter_design.py::TestBessel::test_fs_param
120.07s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[gaussian]
119.77s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[inverse_multiquadric]
119.16s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[linear]
118.71s call     interpolate/tests/test_rbfinterp.py::test_conditionally_positive_definite[quintic]
115.42s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-gengamma-arg31]
114.90s call     optimize/tests/test__differential_evolution.py::TestDifferentialEvolutionSolver::test_L2
111.36s call     signal/tests/test_signaltools.py::test_filtfilt_gust
104.89s call     linalg/tests/test_basic.py::TestLstsq::test_random_complex_exact
102.79s call     optimize/tests/test__differential_evolution.py::TestDifferentialEvolutionSolver::test_impossible_constraint
99.52s call     spatial/tests/test_distance.py::TestCdist::test_cdist_calling_conventions

In comparison, for cpython (on a slow agent):

============================= slowest 50 durations =============================
591.52s call     linalg/tests/test_basic.py::TestLstsq::test_random_complex_exact
400.54s call     linalg/tests/test_basic.py::TestLstsq::test_random_exact
311.48s call     optimize/_trustregion_constr/tests/test_report.py::test_gh12922
302.65s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-wrapcauchy-arg106]
192.15s call     linalg/tests/test_decomp.py::TestOrdQZWorkspaceSize::test_decompose
137.94s call     _lib/tests/test_import_cycles.py::test_modules_importable
125.41s call     stats/tests/test_distributions.py::TestStudentizedRange::test_cdf_against_tables
106.98s call     optimize/tests/test_lsq_linear.py::TestTRF::test_sparse_bounds
102.55s call     signal/tests/test_signaltools.py::test_filtfilt_gust
99.66s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-truncnorm-arg97]

and cpython on a fast agent

============================= slowest 50 durations =============================
192.03s call     stats/tests/test_continuous_basic.py::test_cont_basic[500-200-wrapcauchy-arg106]
162.81s call     optimize/_trustregion_constr/tests/test_report.py::test_gh12922
101.15s call     _lib/tests/test_import_cycles.py::test_modules_importable
91.06s call     linalg/tests/test_basic.py::TestLstsq::test_random_complex_exact

The emulation jobs are also really sensitive to the agent being used (I guess how powerful, how much memory, or which CPU instruction sets) - on CPython, the aarch builds vary between 1.5 and 2.5h. Still, it's pretty clear that pypy is about a factor of 3 slower there for some reason.

@h-vetinari
Member

> It is strange that aarch64 is soooo much slower for PyPy than CPython. The difference for the other architectures is not so extreme

It's also worth noting that pypy runs with label='fast' everywhere (whereas cpython runs with label='full'), except on aarch, where cpython also uses label='fast'. So it might well be that pypy is quite a bit slower everywhere, but it only shows up on aarch so far, because that's the only apples-to-apples comparison.

@mattip
Contributor

mattip commented Jul 26, 2022

> ...it might well be that pypy is quite a bit slower everywhere, but it only shows up on aarch so far

Makes sense. I was wondering why the pypy times looked so good on other platforms.
