Error when saving TensorFlowModelDataset as partition #759

Comments
Hi @anabelchuinard, thanks for opening this issue and sorry for the delay. It will take us some time, but I'm labeling this issue so we don't lose track of it.
Hi @anabelchuinard, do you still need help fixing this issue?
@merelcht I found a non-kedronic workaround for this, but would love to know if there is now a kedronic way of batch-saving those models.
**Cause of the issue**

The issue is in how we implement partitioned dataset lazy saving. To postpone data loading, we require a Callable to be returned (see kedro-plugins/kedro-datasets/kedro_datasets/partitions/partitioned_dataset.py, lines 313 to 314 at be99fec). When saving the data, we check whether the partition data is callable and, if it is, we call it to obtain the object to save. TensorFlow models are themselves callable, so the model gets called instead of being saved, which produces the error.

**Current workaround**

@anabelchuinard - wrap each model in a lambda so that the lambda, not the model itself, is what gets called at save time:

```python
# Fails: the loaded model objects are themselves callable, so the
# lazy-saving check calls them instead of saving them
save_dict = {
    "tensorflow_model_32": models["tensorflow_model_32"](),
    "tensorflow_model_64": models["tensorflow_model_64"](),
}

# Tensorflow model can be wrapped with lambda, to avoid calling it when saving
save_dict = {
    "tensorflow_model_32": lambda: models["tensorflow_model_32"](),
    "tensorflow_model_64": lambda: models["tensorflow_model_64"](),
}
```

**Suggested fix**

Make lazy saving accept only lambda functions rather than any Callable, so that callable objects such as models are not invoked at save time. Following PR to update docs.
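The callable check described above can be illustrated with a small, self-contained sketch. This is not the actual kedro source; `FakeModel` and `save_partition` are stand-ins that mimic the lazy-save behaviour:

```python
# Minimal sketch of why lazy saving breaks for callable partition data
# such as TensorFlow models. Not the real kedro implementation.

class FakeModel:
    """Stands in for a tf.keras model, which is callable like model(x)."""
    def __init__(self):
        self.was_called = False

    def __call__(self, *args):
        self.was_called = True
        return "inference result"

def save_partition(data):
    # PartitionedDataset-style lazy-save check: if the partition data
    # is callable, call it to obtain the object to save.
    if callable(data):
        data = data()
    return data  # in kedro this would be passed to the underlying dataset

model = FakeModel()
saved = save_partition(model)            # the model gets *called*, not saved
assert model.was_called and saved == "inference result"

model2 = FakeModel()
saved2 = save_partition(lambda: model2)  # lambda wrapper: model survives intact
assert saved2 is model2 and not model2.was_called
```

The lambda workaround succeeds because the lazy-save check consumes the lambda, leaving the model object itself untouched for the underlying dataset to serialise.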
To me this seems to be a niche case, and changing PartitionedDataset to accept only lambdas is a bigger breaking change: any useful callable will likely be more complicated than a simple lambda. Maybe we can allow lazy loading/saving to be disabled (enabled by default) when specified?
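One way the opt-out described here could look is sketched below. Note that `save_lazily` is a hypothetical parameter name invented for illustration, not an existing kedro API:

```python
# Hypothetical sketch of an opt-out flag for lazy saving.
# "save_lazily" is an invented parameter name, not current kedro API.

class PartitionedDatasetSketch:
    def __init__(self, save_lazily: bool = True):
        self._save_lazily = save_lazily

    def _unwrap(self, data):
        # Only treat callables as lazy-save wrappers when the flag is on.
        if self._save_lazily and callable(data):
            return data()
        return data

eager = PartitionedDatasetSketch(save_lazily=False)
loader = lambda: "model object"          # callable partition data
assert eager._unwrap(loader) is loader   # left untouched: saved as-is

lazy = PartitionedDatasetSketch()        # default: lazy saving enabled
assert lazy._unwrap(lambda: "model object") == "model object"
```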
I see the point, but I think the issue is a little broader than this case. In particular, I don't think it's right to call any callable object and use that check to decide whether we apply lazy saving. This affects all the ML-model cases (TensorFlow, PyTorch, scikit-learn, etc.) and can potentially execute unwanted code implemented in `__call__`. In the suggested solution I tried to narrow these cases down from any callable to lambdas, so there's less chance of hitting them. As an alternative, we can consider making lazy saving a default behaviour, so we internally wrap and unwrap objects automatically. But then the question is whether we make it the only option (as it is for lazy loading) or provide some interface to disable it.
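Narrowing the check from any callable to lambdas only could, for example, rely on the `__name__` attribute CPython gives lambdas. This is a sketch of the idea, not the PR's actual implementation:

```python
import types

def is_lambda(obj) -> bool:
    # In CPython, lambdas are plain functions whose __name__ is "<lambda>".
    return isinstance(obj, types.LambdaType) and obj.__name__ == "<lambda>"

class CallableModel:
    """Stands in for an ML model with __call__."""
    def __call__(self):
        return "inference"

def named_loader():
    return "model"

assert is_lambda(lambda: "model")      # lazy-save wrapper: would be invoked
assert not is_lambda(CallableModel())  # model object: saved as-is
assert not is_lambda(named_loader)     # caveat: named functions also excluded
```

The last assertion shows the trade-off raised in this thread: restricting to lambdas also rules out named functions as lazy-save wrappers, which is the breaking-change concern.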
Thanks for the investigation and PR, @ElenaKhaustova! I agree with @noklam that relying solely on lambda functions for lazy saving doesn't seem like a generic solution. While it is a breaking change, it's hard to determine how much it would impact users. In my opinion, it would be better to avoid treating all Callables as participants in lazy saving by default, although that would also be a breaking change. As a simpler alternative, we could provide an option to disable lazy saving, as you suggested.
Description
Cannot save TensorFlowModelDataset objects as partitions.
Context
I am dealing with a project where I have to train several models concurrently. I started writing my code using PartitionedDataset, where each partition corresponds to the data for one training run. When I try to save the resulting TensorFlow models as a partition, I get an error. I wonder if this has to do with the fact that those inherit from AbstractVersionedDataset instead of AbstractDataset. If so, I would like to know whether there is any workaround for batch-saving them.
This is the instance of my catalog corresponding to the models I want to save:
Note: Saving one model (not as partition) works.
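The reporter's actual catalog entry was not preserved in this scrape. For context, a hypothetical sketch of what a PartitionedDataset entry for TensorFlow models typically looks like is shown below; the dataset name, path, and suffix are illustrative, not taken from the issue:

```yaml
# Illustrative only -- the reporter's actual entry was not preserved.
trained_models:
  type: PartitionedDataset
  path: data/06_models/trained_models
  dataset:
    type: tensorflow.TensorFlowModelDataset
  filename_suffix: ".hdf5"
```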
Steps to Reproduce
Expected Result
Should save one .hdf5 file per partition, with the file name being the associated dictionary key.
Actual Result
Getting this error:
Your Environment
- Kedro version used (`pip show kedro` or `kedro -V`): kedro, version 0.18.12
- Python version used (`python -V`): 3.9.16