Add tests for grib idx & reinflate #528
base: main
Conversation
Well that's a lot .... :)
All part of my plan to become top contributor...
...ib_idx_fixtures/gfs.pgrb2.0p25/reinflate/best_available/u/instant/isobaricInPa/u_chunks.json
Force-pushed 6f9a188 to 194da4d
Force-pushed 65ff274 to 261ee2f
Force-pushed b6cc52c to 72f684e
tests/test__grib_idx.py (Outdated)
@@ -284,6 +291,10 @@ def test_build_idx_grib_mapping(self):
        )
        expected = pd.read_parquet(kindex_test_path)

        expected = expected.assign(
            step=lambda x: x.step.astype("timedelta64[ns]")
@martindurant These errors in the CI system are really strange... These typically only happen when using an old version of numpy/pandas that has a different default time type.
The last one, the sync error... that seems like maybe I need to convert from unittest subtest to pytest?
It claims to have numpy 2.2.0, but there is this:
which suggests maybe there is a conda-forge/defaults/pip crossover? We do have ...
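For reference, a minimal, self-contained sketch (not code from this PR) of the kind of dtype normalization being discussed: a timedelta column can round-trip through parquet at a different resolution depending on the installed pandas/pyarrow, so both sides are coerced to nanoseconds before comparison. The fixture path and column name here are illustrative.

```python
import pandas as pd
from pandas.testing import assert_frame_equal

# Build a small frame with a timedelta "step" column and round-trip it
# through parquet. Depending on pandas/pyarrow versions, the column can
# come back as timedelta64[us] instead of timedelta64[ns], which is enough
# to fail a strict frame comparison.
original = pd.DataFrame({"step": pd.to_timedelta([0, 3, 6], unit="h")})
original.to_parquet("step_fixture.parquet")  # illustrative fixture path

loaded = pd.read_parquet("step_fixture.parquet")

# Coerce both sides to a single resolution so the assertion does not depend
# on which pandas/pyarrow happens to be installed in CI.
original = original.assign(step=lambda df: df.step.astype("timedelta64[ns]"))
loaded = loaded.assign(step=lambda df: df.step.astype("timedelta64[ns]"))

assert_frame_equal(original, loaded)
```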
Welp - I dumped as much version info as I can think of and I don't see any smoking gun.
Force-pushed 9b452ff to 84de0be
Okay - I can repro locally now after installing anaconda python.
Force-pushed 84de0be to 14807de
Okay - fixed the dtype on the step column. I am not sure what the remaining ...
Looks like installing the head of fsspec was breaking this test. The pd.read_parquet requiring ...
Okay - tests are green, but I don't see an easy way to convert the heavy use of unittest subtest to the pytest parametrize mark. The current behavior, when run with subtest, gives really nice error messages when things go wrong for a particular set of subtests. I forced a failure for some subtests by adding:
Now, when I run
Pytest will run all these cases... but it doesn't give any of the subtest context on which part failed:
I found a subtest package on PyPI, but I am not sure you want the extra dependency. Any clever ideas on how to restructure the tests without a total rewrite?
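For anyone reading along, a toy, self-contained illustration (not the PR's actual tests) of the subTest pattern in question: every failing parameter combination is reported with its own label, which is the context that gets lost when the loop is run as a single plain pytest test.

```python
import unittest


class TestSubtestPattern(unittest.TestCase):
    def test_many_cases(self):
        # Each (method, step) combination is reported individually on
        # failure, with the keyword arguments echoed in the test output.
        for method in ("best_available", "complete"):
            for step in (0, 3, 6):
                with self.subTest(method=method, step=step):
                    # Stand-in assertion; the real tests compare grib idx
                    # DataFrames for each combination.
                    self.assertGreaterEqual(step, 0)


if __name__ == "__main__":
    unittest.main()
```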
Sorry, I have never had to do any complex unittest->pytest refactoring. When you use parametrize and ... I think adding helper packages for the sake of saner test run output is totally fine. Test-time dependencies are easier to justify than runtime ones.
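If the helper package in question is pytest-subtests (an assumption on my part), usage would look roughly like the sketch below: the plugin exposes a `subtests` fixture and also makes unittest-style `self.subTest` failures report individually under pytest. Function and parameter names here are illustrative.

```python
# Hypothetical sketch assuming the pytest-subtests plugin is installed
# (pip install pytest-subtests).
def test_reinflate_cases(subtests):
    for method in ("best_available", "complete"):
        for step in (0, 3, 6):
            # Each failing combination is reported separately, with the
            # keyword arguments echoed in the failure header.
            with subtests.test(method=method, step=step):
                assert step >= 0  # stand-in for the real grib idx assertions
```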
Add chunky test files and copy over unit test approach.
Builds on #523
TODO: