Replies: 2 comments
-
@jedbrown HDF5 currently works around the MPI limitations using a typical derived datatype approach, so there shouldn't be any need for users to work around the issue at this point. That said, moving toward use of MPI-4.0 functions could be handy in cleaning up a lot of this code. Currently, we only have an MPI-3.0 requirement (which isn't necessarily well-documented but allowed us to clean up a lot of things), but I can certainly see this being on the list of things to explore in the future. We've recently been discussing moving MPI functionality further down in the layers of the library to isolate it and potentially make space for other communications library interfaces. I'd say that would be a good time to look into implementing support for these.
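For context, the derived-datatype approach mentioned above generally looks something like the sketch below. This is a minimal illustration of the technique, not HDF5's actual internal code; the function name and the 1 GiB chunk size are made up for illustration, and it assumes the request is a whole number of chunks.

```c
/* Minimal sketch of the pre-MPI-4.0 derived-datatype workaround for the
 * int-valued count parameter of MPI I/O calls. NOT HDF5's actual internal
 * code; the function name and 1 GiB chunk size are illustrative.
 * For brevity it assumes nbytes is a multiple of the chunk size. */
#include <limits.h>
#include <mpi.h>

static int read_large(MPI_File fh, MPI_Offset offset, void *buf, MPI_Offset nbytes)
{
    if (nbytes <= INT_MAX)   /* count still fits in an int: no workaround needed */
        return MPI_File_read_at_all(fh, offset, buf, (int)nbytes, MPI_BYTE,
                                    MPI_STATUS_IGNORE);

    /* Pack many bytes into one derived datatype so the element count
     * passed to MPI stays below INT_MAX. */
    const MPI_Offset chunk = (MPI_Offset)1 << 30;   /* 1 GiB per derived element */
    MPI_Datatype bigtype;
    MPI_Type_contiguous((int)chunk, MPI_BYTE, &bigtype);
    MPI_Type_commit(&bigtype);
    int rc = MPI_File_read_at_all(fh, offset, buf, (int)(nbytes / chunk),
                                  bigtype, MPI_STATUS_IGNORE);
    MPI_Type_free(&bigtype);
    return rc;
}
```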
-
Thanks, it looks like this support requires HDF5 1.14, while we had been picking up 1.12 at the HPC facility; updating to 1.14 appears to resolve the issue. And I totally get the lack of eagerness to rush a hard dependency on MPI-4.0.
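As an aside for anyone checking which HDF5 their build actually picked up on a system with multiple modules installed, the library's own version query can confirm it. A small sketch, not part of the original thread:

```c
/* Print the HDF5 version the program compiled and linked against, useful
 * when a facility provides both 1.12 and 1.14 modules. H5_VERS_INFO is the
 * compile-time header version string; H5get_libversion queries the runtime
 * library. */
#include <stdio.h>
#include <hdf5.h>

int main(void)
{
    unsigned maj, min, rel;
    H5get_libversion(&maj, &min, &rel);
    printf("headers: %s\nlinked library: %u.%u.%u\n", H5_VERS_INFO, maj, min, rel);
    return 0;
}
```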
-
MPI-4.0 added large count (MPI_Count) interfaces that would avoid the need for workarounds when working with large files from a small number of ranks. These functions have names ending in _c, like MPI_File_read_at_all_c. I don't see any activity in issues or pull requests, but this seems valuable to users, and use cases like HDF5 are why the _c interfaces were added to MPI. Is there a plan, or are there specific technical obstacles to implementing this?
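To make the ask concrete, here is a sketch (not existing HDF5 code; the function name is made up for illustration) of what the same read looks like once an MPI-4.0 large-count binding is available:

```c
/* With MPI-4.0, the _c bindings take MPI_Count (typically 64-bit) for the
 * count argument, so reads larger than INT_MAX elements need no
 * derived-datatype chunking. Requires an MPI implementation that provides
 * the MPI-4.0 *_c interfaces. */
#include <mpi.h>

static int read_large_mpi4(MPI_File fh, MPI_Offset offset, void *buf, MPI_Count nbytes)
{
    /* Single collective call; the count is MPI_Count rather than int. */
    return MPI_File_read_at_all_c(fh, offset, buf, nbytes, MPI_BYTE,
                                  MPI_STATUS_IGNORE);
}
```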