From e1365339b712666b3ca7a0c706f33ce22a2d2bbf Mon Sep 17 00:00:00 2001
From: maxkazmsft
Date: Wed, 20 May 2020 13:27:19 -0400
Subject: [PATCH] V0.1.2 (#307)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Merged PR 42: Python package structure
  Created Python package structure
* Merged PR 50: Röth-Tarantola generative model for velocities
  - Created Python package structure for generative models for velocities
  - Implemented the [Röth-Tarantola model](https://doi.org/10.1029/93JB01563)
* Merged PR 51: Isotropic AWE forward modelling using Devito
  Implemented forward modelling for the isotropic acoustic wave equation using [Devito](https://www.devitoproject.org/) (see the Devito sketch after this block of merge notes)
* Merged PR 52: PRNG seed
  Exposed the PRNG seed in generative models for velocities
* Merged PR 53: Docs update
  - Updated LICENSE
  - Added Microsoft Open Source Code of Conduct
  - Added Contributing section to README
* Merged PR 54: CLI for velocity generators
  Implemented CLI for velocity generators
* Merged PR 69: CLI subpackage using Click
  Reimplemented the CLI as a subpackage using Click
* Merged PR 70: VS Code settings
  Added VS Code settings
* Merged PR 73: CLI for forward modelling
  Implemented CLI for forward modelling
* Merged PR 76: Unit fixes
  - Changed to use km/s instead of m/s for velocities
  - Fixed CLI interface
* Merged PR 78: Forward modelling CLI fix
* Merged PR 85: Version 0.1.0
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* adding cgmanifest to staging
* adding a yml file with CG build task
* added prelim NOTICE file
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Merged PR 126: updated notice file with previously excluded components
* Updates
* 3D SEG: restyled batch file, moving onto others.
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py
  This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* Merged PR 209: changes to section loaders in data.py
  Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created a separate function to load section and patch loaders
  - SectionLoader now swaps the H and W dims, so when loading test data in patch experiments this line can be removed (and tested) from test.py:
    h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
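Relating to the isotropic acoustic forward modelling introduced in PR 51 above, here is a minimal, hypothetical Devito sketch. The grid size, constant velocity, initial condition and time step are illustrative only and are not the repository's actual defaults:

```python
# Minimal sketch: constant-velocity 2D acoustic wave propagation with Devito.
# All numbers below are illustrative, not the repo's configuration.
from devito import Grid, TimeFunction, Eq, Operator, solve

grid = Grid(shape=(101, 101), extent=(1.0, 1.0))        # 1 km x 1 km domain
u = TimeFunction(name="u", grid=grid, time_order=2, space_order=4)
c = 1.5                                                 # velocity in km/s (cf. PR 76 unit change)

pde = u.dt2 - c**2 * u.laplace                          # isotropic acoustic wave equation
stencil = Eq(u.forward, solve(pde, u.forward))          # explicit update for u at the next time step

u.data[0, 50, 50] = 1.0                                 # crude point disturbance as an initial condition
Operator([stencil]).apply(time_M=500, dt=0.002)         # dt chosen well inside the CFL limit for these values
```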
* Merged PR 211: Fixes issues left over from changes to data.py
* Merged PR 220: Adds Horovod and fixes
  - Adds Horovod training script
  - Updates dependencies in Horovod docker file
  - Removes hard-coding of path in data.py
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* Merged PR 236: Cleaned up dutchf3 data loaders
  Please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments:
  ```
  train_set = TrainPatchLoader(…)
  patches = train_set.patches[train_set.split]
  ```
  or
  ```
  train_set = TrainSectionLoader(…)
  sections = train_set.sections[train_set.split]
  ```
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing
  This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model
  Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set
  Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file.
  Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation
  Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script
  Related work items: #17681
* Merged PR 315: Removing voxel exp
  Related work items: #17702
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
  Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script.
  Related work items: #18264
* Merged PR 405: minor mods to notebook, more documentation
  A very small PR - just a few more lines of documentation in the notebook, to improve clarity.
  Related work items: #17432
* Merged PR 368: Adds penobscot
  Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators
  Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml
  Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb; all other changes are due to trivial reruns.
  Related work items: #18346
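As a companion to PR 407 (running Devito in AzureML Estimators) and the later switch to ACR-hosted images, a rough sketch of what such a submission might look like with the Azure ML SDK v1 Estimator API of that era. The workspace config, registry details, cluster name, experiment name and entry script are all hypothetical placeholders:

```python
# Hypothetical AzureML SDK v1 submission sketch (cf. PR 407 and the ACR work below); not the repo's notebook code.
from azureml.core import Workspace, Experiment
from azureml.core.container_registry import ContainerRegistry
from azureml.train.estimator import Estimator

ws = Workspace.from_config()                              # assumes a local config.json for the workspace

acr = ContainerRegistry()                                 # placeholder ACR credentials
acr.address = "myregistry.azurecr.io"
acr.username = "myregistry"
acr.password = "<acr-password>"

est = Estimator(
    source_directory=".",
    entry_script="train.py",                              # hypothetical training script
    compute_target=ws.compute_targets["gpu-cluster"],     # hypothetical AmlCompute cluster
    custom_docker_image="myregistry.azurecr.io/devito:latest",
    image_registry_details=acr,
    use_gpu=True,
)

run = Experiment(ws, "fwi-devito").submit(est)
run.wait_for_completion(show_output=True)
```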
* Merged PR 512: Pre-commit hooks for formatting and style checking
  Opening this PR to start the discussion - I added the required dotfiles and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion:
  - Do you want to change any of the default settings in these files, such as the line lengths or the error codes we exclude or include?
  - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
  - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?
  Thanks!
  Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite
  Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph
  Changes:
  1) Updated demo notebook with the 3D visualization
  2) Formatting changes due to new black/flake8 git hook
  Related work items: #17432
* Merged PR 569: Minor PR: change to pre-commit configuration files
  Related work items: #18350
* Merged PR 586: Purging unused files and experiments
  Related work items: #20499
* Merged PR 601: Fixes to penobscot experiments
  A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite
  Related work items: #19550
* added CELA copyright headers to all non-empty .py files (#3)
* switched to ACR instead of docker hub (#4)
* sdk.v1.0.69, plus switched to ACR push; ACR pull coming next
* full ACR use, push and pull, and use in Estimator
* temp fix for docker image bug
* fixed the az acr login --username and --password issue
* full switch to ACR for docker image storage
* Vapaunic/metrics (#1)
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* train and test script for section based training/testing
* removing experiments from deep_seismic, following the new struct
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
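To illustrate the section-based splitting work above (as opposed to patch-based splits), a purely hypothetical sketch of writing split files that list inline section indices. The volume size, split fractions and file format are invented for illustration and do not reflect the real prepare_data.py interface:

```python
# Hypothetical illustration of section-based train/val splits (cf. the prepare_data.py work above).
import numpy as np

n_inlines = 401                                    # e.g. number of inline sections in a volume
rng = np.random.default_rng(seed=42)               # fixed seed for reproducible splits
inlines = rng.permutation(n_inlines)

val_fraction = 0.1
n_val = int(val_fraction * n_inlines)
splits = {"val": np.sort(inlines[:n_val]), "train": np.sort(inlines[n_val:])}

for name, idx in splits.items():
    # one "i_<inline>" entry per line, mirroring a simple section-split file layout (assumed, not the repo's)
    with open(f"section_{name}.txt", "w") as f:
        f.write("\n".join(f"i_{i}" for i in idx))
```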
* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* modified metrics with ignore_index
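The ignite-based metrics work above builds on a confusion matrix with an ignore index; a small self-contained sketch of that pattern follows. The class count, ignored class and stand-in model are made up for illustration:

```python
# Sketch of confusion-matrix-based segmentation metrics with ignite (cf. the metrics commits above).
import torch
from ignite.engine import Engine
from ignite.metrics import ConfusionMatrix, mIoU

num_classes = 6                                   # hypothetical number of facies classes
model = torch.nn.Conv2d(1, num_classes, 1)        # stand-in for the real segmentation model

def eval_step(engine, batch):
    images, masks = batch                         # images: (N, 1, H, W), masks: (N, H, W) class indices
    with torch.no_grad():
        return model(images), masks               # ConfusionMatrix expects (logits, targets)

evaluator = Engine(eval_step)
cm = ConfusionMatrix(num_classes=num_classes)
mIoU(cm, ignore_index=0).attach(evaluator, "mIoU")   # mean IoU derived from the matrix, skipping class 0

# dummy run on one random batch, just to show the wiring
batch = [(torch.randn(2, 1, 64, 64), torch.randint(0, num_classes, (2, 64, 64)))]
state = evaluator.run(batch)
print(state.metrics["mIoU"])
```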
* Merged PR 341: Tests for cv_lib/metrics
  This PR is dependent on the tests created in the previous branch !333; that's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.
  Related work items: #16955
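In the spirit of the pytest-based metric tests described in PR 341, a tiny hypothetical example of what such a test might look like. The pixelwise_accuracy helper is invented here for illustration and is not the repository's actual API:

```python
# Hypothetical pytest-style test for a simple pixelwise-accuracy metric (cf. PR 341).
import numpy as np
import pytest


def pixelwise_accuracy(pred, target, ignore_index=None):
    """Fraction of pixels where pred == target, optionally skipping an ignored label."""
    mask = np.ones_like(target, dtype=bool) if ignore_index is None else target != ignore_index
    return (pred[mask] == target[mask]).mean()


def test_perfect_prediction_is_one():
    target = np.array([[0, 1], [2, 1]])
    assert pixelwise_accuracy(target.copy(), target) == pytest.approx(1.0)


def test_ignore_index_excludes_pixels():
    pred = np.array([[0, 1], [0, 1]])
    target = np.array([[0, 1], [2, 1]])           # one mismatching pixel, labelled 2
    assert pixelwise_accuracy(pred, target, ignore_index=2) == pytest.approx(1.0)
```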
* merged tests into this branch
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* BUILD: added build setup files. (#5)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
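For the numpy dataset loader added in (#7) above, a minimal sketch of the general idea: a PyTorch Dataset serving patches from an in-memory .npy seismic volume. Class names, patch logic and data shapes are illustrative only, not the repository's loader:

```python
# Illustrative numpy-backed patch dataset (cf. the numpy data loader in #7); not the repo's implementation.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class NumpyPatchDataset(Dataset):
    def __init__(self, volume, labels, patch_size=64):
        self.volume, self.labels, self.ps = volume, labels, patch_size
        self.n_inlines = volume.shape[0]

    def __len__(self):
        return self.n_inlines

    def __getitem__(self, idx):
        # take the top-left patch of each inline section; a real loader would sample many patches
        img = self.volume[idx, : self.ps, : self.ps].astype(np.float32)
        lbl = self.labels[idx, : self.ps, : self.ps].astype(np.int64)
        return torch.from_numpy(img)[None], torch.from_numpy(lbl)   # add a channel dim to the image


volume = np.random.randn(10, 128, 128)                  # stand-in for np.load("seismic.npy")
labels = np.random.randint(0, 6, (10, 128, 128))        # stand-in for np.load("labels.npy")
loader = DataLoader(NumpyPatchDataset(volume, labels), batch_size=4, shuffle=True)
images, masks = next(iter(loader))                      # images: (4, 1, 64, 64), masks: (4, 64, 64)
```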
* Log config file now experiment specific (#8)
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
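A hedged sketch of the kind of per-experiment logging helper described above (load a logging config file sitting next to the experiment and fall back to a basic config if it is missing). The function and file names are assumptions, not necessarily the repository's utility:

```python
# Hypothetical per-experiment logging helper (cf. "created util logging function" above).
import logging
import logging.config
from os import path


def load_log_configuration(log_config_file="logging.conf"):
    """Configure logging from a config file in the experiment directory, if present."""
    if path.exists(log_config_file):
        logging.config.fileConfig(log_config_file, disable_existing_loggers=False)
    else:
        logging.basicConfig(level=logging.INFO,
                            format="%(asctime)s %(name)s %(levelname)s %(message)s")
    return logging.getLogger(__name__)


logger = load_log_configuration()
logger.info("logging configured for this experiment")
```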
* DOC: forking disclaimer and new build names. (#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic
  Add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* Update README.md
* added pytest to environment, and pytest job to the main build (#18)
* Update main_build.yml for Azure Pipelines
* minor stylistic changes (#19)
* Update main_build.yml for Azure Pipelines
  Added template for integration tests for scripts and experiments; added setup and env; increased job timeout; added complete set of tests
* BUILD: placeholder for Azure Pipelines notebooks build; added notebooks job placeholders; added GitHub badges for notebook builds
* CLEANUP: moved non-release items to contrib (#20)
* Updates HRNet notebook 🚀 (#25)
* Modifies pre-commit hook to modify output
* Modifies the HRNet notebook to use the Penobscot dataset; adds parameters to limit iterations; adds parameters meta tag for papermill
* Fixing merge peculiarities
* Updates environment.yaml (#21)
* Pins main libraries; adds cudatoolkit version based on issues faced during workshop
* removing files
* Updates Readme (#22)
* Adds model instructions to readme
* Update README.md (#24)
  I have collected pointers to all of our BP repos in this central place. We are trying to create links between everything to draw people from one to the other. Can we please add a pointer here to the readme? I have spoken with Max and will be adding Deep Seismic there once you have gone public.
* CONTRIB: cleanup for imaging. (#28)
* Create Unit Test Build.yml (#29)
  Adding Unit Test Build.
* Update README.md
* Update README.md
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* TESTS: added notebook integration tests. (#65)
* TEST: typo in env name
* Addressing a number of minor issues with README and broken links (#67)
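Relating to the notebook integration tests added in (#65) above, a rough sketch of the common papermill pattern for executing a notebook inside a pytest job. The notebook path, parameter names and kernel are placeholders and assume the notebook has a papermill-tagged parameters cell:

```python
# Hypothetical notebook integration test using papermill (cf. TESTS in #65).
import papermill as pm


def test_hrnet_demo_notebook_runs(tmp_path):
    # execute the notebook end-to-end with small, test-friendly parameters
    pm.execute_notebook(
        "examples/interpretation/notebooks/HRNet_demo_notebook.ipynb",   # placeholder path
        str(tmp_path / "output.ipynb"),
        parameters={"max_iterations": 1, "max_epochs": 1},               # assumed parameter names
        kernel_name="python3",
    )
```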
* fix for segyviewer and mkdir splits in README + broken link in F3 notebook
* issue edits to README
* download complete message
* Added Yacs info to README.md (#69)
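Since (#69) documents the YACS-based experiment configuration in the README, a small generic YACS example showing the usual define-defaults-then-merge pattern. The option names below are illustrative, not the repository's actual config keys:

```python
# Generic YACS usage pattern (cf. the Yacs info added to the README in #69); keys are illustrative.
from yacs.config import CfgNode as CN

_C = CN()
_C.TRAIN = CN()
_C.TRAIN.BATCH_SIZE_PER_GPU = 16
_C.TRAIN.MAX_EPOCHS = 300
_C.DATASET = CN()
_C.DATASET.ROOT = "/data/dutchf3"


def get_cfg_defaults():
    """Return a clone so callers never mutate the module-level defaults."""
    return _C.clone()


cfg = get_cfg_defaults()
# cfg.merge_from_file("configs/hrnet.yaml")         # overlay an experiment-specific YAML (path assumed)
cfg.merge_from_list(["TRAIN.MAX_EPOCHS", 1])        # CLI-style overrides
cfg.freeze()
print(cfg.TRAIN.MAX_EPOCHS)
```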
* Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(); I created a separate function to load section and patch loaders. - SectionLoader now swaps the H and W dims. When loading test data in patch scripts, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new structure * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Adds cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train/test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. The same applies to test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training/testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432
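To make the PR 209 and PR 236 notes above concrete, a hedged before/after sketch of what they ask patch experiments to change; the helper and attribute names come from the notes themselves, while the exact call sites are assumptions:
```
# Illustrative before/after only; not the repo's actual train.py / test.py.

# PR 209 - patch experiments switch to the patch-specific loader helper:
#   before: train_loader = get_train_loader(config)
#   after:  train_loader = get_patch_loader(config)
# and, since SectionLoader now swaps H and W itself, test.py can drop:
#   h, w = img.shape[-2], img.shape[-1]  # height and width

# PR 236 - a loader now only holds the split it was built for, so (assumed) the
# split-keyed indexing becomes a plain attribute access:
#   before: patches = train_set.patches[train_set.split]
#   after:  patches = train_set.patches
```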
* Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, such as the line lengths or the errors we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350
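The "metrics now based on ignite.metrics" and "modified metrics with ignore_index" items above amount to building segmentation scores from an ignite ConfusionMatrix while excluding an ignore label. A small sketch of one way to do that masking, assuming ignored pixels carry a sentinel label value; the transform and label value are assumptions, not the repo's implementation:
```
# Hypothetical sketch: excluding an ignore_index before updating an ignite ConfusionMatrix.
# IGNORE_INDEX and the masking transform are assumptions, not the repo's code.
import torch
from ignite.metrics import ConfusionMatrix

IGNORE_INDEX = 255  # sentinel label value to leave out of the scores

def drop_ignored(output):
    y_pred, y = output                         # y_pred: (B, C, H, W) logits; y: (B, H, W) labels
    keep = y != IGNORE_INDEX                   # boolean mask of pixels that count
    y_pred = y_pred.permute(0, 2, 3, 1)[keep]  # -> (N_kept, C)
    return y_pred, y[keep]                     # -> (N_kept,)

# attached to an engine, the transform runs automatically on each batch
cm = ConfusionMatrix(num_classes=2, output_transform=drop_ignored)

# standalone check with toy data
logits = torch.randn(1, 2, 4, 4)
labels = torch.randint(0, 2, (1, 4, 4))
labels[0, 0, 0] = IGNORE_INDEX
cm.update(drop_ignored((logits, labels)))      # update() itself does not apply the transform
print(cm.compute())
```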
* Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333, which is why the PR merges the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on these tests, or the structure. As agreed, I'm using pytest. Related work items: #16955
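Since PR 341 above introduces pytest-based unit tests for the cv_lib metrics, here is a minimal sketch of what such a test can look like; the pixelwise_accuracy function is a local stand-in defined inside the snippet, not the actual cv_lib metric under test:
```
# Hedged sketch of a pytest-style metrics test; the metric here is a stand-in,
# not the cv_lib implementation covered by PR 341.
import numpy as np
import pytest

def pixelwise_accuracy(pred, label):
    """Fraction of pixels whose predicted class matches the label."""
    return (np.asarray(pred) == np.asarray(label)).mean()

def test_perfect_prediction_scores_one():
    label = np.array([[0, 1], [2, 3]])
    assert pixelwise_accuracy(label.copy(), label) == pytest.approx(1.0)

def test_half_correct_prediction_scores_half():
    label = np.zeros((2, 2), dtype=int)
    pred = np.array([[0, 0], [1, 1]])
    assert pixelwise_accuracy(pred, label) == pytest.approx(0.5)
```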
* added info on yacs files * MODEL.PRETRAINED key missing in default.py (#70)
* added MODEL.PRETRAINED key to default.py * Update README.md (#59) * Update README.md (#58) * MINOR: addressing broken F3 download link (#73)
* fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74)
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. 
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in the prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators (see the submission sketch below) Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346
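For context on the two AzureML items above, a minimal Estimator submission with SDK 1.0.x looks roughly like the sketch below; the workspace config, compute target, folder, script, and experiment names are placeholders rather than the notebooks' actual values.

```python
# Hedged sketch of running a training script through an AzureML Estimator,
# as referenced above. All names below are placeholders, not the values used
# in the 0xx notebooks.
from azureml.core import Experiment, Workspace
from azureml.train.estimator import Estimator

ws = Workspace.from_config()                           # reads local config.json
compute_target = ws.compute_targets["gpu-cluster"]     # placeholder cluster name

est = Estimator(
    source_directory="src",                            # placeholder folder
    entry_script="train.py",                           # placeholder script
    compute_target=compute_target,
    pip_packages=["devito"],                           # forward-modelling dependency
    use_gpu=True,
)

run = Experiment(ws, "fwi-demo").submit(est)           # placeholder experiment name
run.wait_for_completion(show_output=True)
```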
* Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since the metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and a top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest (see the example test below). Related work items: #16955
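Purely to illustrate the pytest style mentioned in PR 341 (a hand-rolled example, not the actual cv_lib test file), a confusion-matrix-backed IoU can be checked against hand-computed values:

```python
# Illustrative pytest-style check of an ignite ConfusionMatrix/IoU metric,
# in the spirit of the cv_lib/metrics tests described above (hand-rolled
# example, not the repository's actual test file).
import pytest
import torch
from ignite.metrics import ConfusionMatrix, IoU


def test_iou_matches_hand_computed_values():
    cm = ConfusionMatrix(num_classes=2)
    iou = IoU(cm)

    # ground truth [0, 1, 1, 0]; logits argmax to predictions [0, 1, 0, 0]
    y = torch.tensor([0, 1, 1, 0])
    y_pred = torch.tensor([[10.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 0.0]])
    cm.update((y_pred, y))

    values = iou.compute()
    # class 0: intersection 2, union 3 -> 2/3; class 1: intersection 1, union 2 -> 1/2
    assert values[0].item() == pytest.approx(2 / 3)
    assert values[1].item() == pytest.approx(1 / 2)
```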
* fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74)
* Adds premium storage (#79) * Adds premium storage method * update test.py for section based approach to use command line arguments (#76) * added README documentation per bug bash feedback (#78) * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * https://github.com/microsoft/DeepSeismic/issues/71 (#80) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic
Add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* Update README.md
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* minor md formatting
* addressing multiple issues from first bug bash (#81)
* added README documentation per bug bash feedback
* DOC: added HRNET download info to README
* added hrnet download script and tested it
* added legal headers to a few scripts.
* changed /data to ~data in the main README
* added Troubleshooting section to the README
* Dciborow/build bug (#68)
* Update unit_test_steps.yml
* Update environment.yml
* Update setup_step.yml
* Update setup_step.yml
* Update unit_test_steps.yml
* Update setup_step.yml
* Adds AzureML libraries (#82)
* Adds azure dependencies
* Adds AzureML components
* Fixes download script (#84)
* Fixes download script
* Updates readme
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings #89
* Add Troubleshooting section for DSVM warnings, plus typo #89
* modified hrnet notebook, addressing bug bash issues (#95)
* Update environment.yml (#93)
* Update environment.yml
* Update environment.yml
* tested both yml conda env and docker; updated conda yml to have docker sdk
* tested both yml conda env and docker; updated conda yml to have docker sdk; added
* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* notebook integration tests complete (#106)
* added README documentation per bug bash feedback
* HRNet notebook works with tests now
* removed debug material from the notebook
* corrected duplicate build names
* conda init fix
* changed setup deps
* fixed F3 notebook - merge conflict and pytorch bug
* main and notebook builds have functional setup now
* Mat/test (#105)
* added README documentation per bug bash feedback
* Modifies scripts to run for only a few iterations when in debug/test mode (see the sketch after this list)
* Updates training scripts and build
* Making names unique
* Fixes conda issue
* HRNet notebook works with tests now
* removed debug material from the notebook
* corrected duplicate build names
* conda init fix
* Adds docstrings to training script
* Testing something out
* testing
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* adds seresnet
* Modifies to work outside of git env
* test
* test
* Fixes typo in DATASET
* reducing steps
* test
* test
* fixes the argument
* Altering batch size to fit k80
* reducing batch size further
* test
* test
* test
* test
* fixes distributed
* test
* test
* adds missing import
* Adds further tests
* test
* updates
* test
* Fixes section script
* test
* testing everything once through
* Final run for badge
* changed setup deps, fixed F3 notebook
* Adds missing tests (#111)
* added missing tests
* Adding fixes for test
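The sketch referenced above: a minimal, hedged illustration of capping a training run to a few iterations in debug/test mode so builds finish quickly (the flag name, iteration cap, and loop are assumptions, not the repo's actual scripts):

```
# Illustrative only: limiting a training run to a handful of iterations
# when a debug/test flag is set. The flag name, the cap, and the training
# loop below are assumed for this sketch.
import argparse

MAX_DEBUG_ITERATIONS = 3  # assumed small cap for smoke tests


def train(num_iterations, debug=False):
    if debug:
        num_iterations = min(num_iterations, MAX_DEBUG_ITERATIONS)
    for iteration in range(num_iterations):
        # ... one optimisation step would go here ...
        print(f"iteration {iteration + 1}/{num_iterations}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--debug", action="store_true", help="run only a few iterations")
    parser.add_argument("--iterations", type=int, default=1000)
    args = parser.parse_args()
    train(args.iterations, debug=args.debug)
```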
* reinstating all tests
* Maxkaz/issues (#110)
* added README documentation per bug bash feedback
* added missing tests
* closing out multiple post bug bash issues with single PR
* Addressed comments
* minor change
* Adds Readme information to experiments (#112)
* Adds readmes to experiments
* Updates instructions based on feedback
* Update README.md
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* train and test script for section based training/testing
* removing experiments from deep_seismic, following the new struct
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* modified metrics with ignore_index
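As a hedged sketch of the ignore_index idea in the last entry above (the helper and the 255 label value are illustrative assumptions, not the repo's metrics code):

```
# Illustrative only: excluding pixels labelled with an ignore_index from a
# simple accuracy computation, mirroring the "modified metrics with
# ignore_index" entry above. The helper and the 255 value are assumptions.
import numpy as np


def masked_pixel_accuracy(pred, target, ignore_index=255):
    """Pixel accuracy computed only over positions not equal to ignore_index."""
    valid = target != ignore_index
    if valid.sum() == 0:
        return 0.0
    return float((pred[valid] == target[valid]).sum()) / int(valid.sum())


pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 255], [1, 1]])  # one pixel is ignored
print(masked_pixel_accuracy(pred, target))  # 2 of 3 valid pixels match -> 0.666...
```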
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. 
A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extented readme * removed reference to imaging * minor md formatting * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; udated conda yml to have docker sdk * tested both yml conda env and docker; udated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2) * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. 
The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). 
I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. 
* Merged PR 405: minor mods to notebook, more documentation
A very small PR - just a few more lines of documentation in the notebook, to improve clarity.
Related work items: #17432

* Merged PR 368: Adds penobscot
Adds for penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard
Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators
Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml
Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns.
Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking
Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
- .pre-commit-config.yaml - defines the git hooks to be installed
- .flake8 - settings for the flake8 linter
- pyproject.toml - settings for the black formatter
The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?
Thanks!
Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite
Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph
Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to the new black/flake8 git hook
Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics
This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest (see the sketch below).
Related work items: #16955
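A minimal sketch of a pytest-style metric test in the spirit of PR 341; `pixelwise_accuracy` is a hypothetical stand-in rather than an actual cv_lib function:
```
# Hypothetical test in the style described in PR 341. `pixelwise_accuracy`
# is a stand-in for a cv_lib metric; the real tests target the repo's own
# metric functions.
import numpy as np
import pytest


def pixelwise_accuracy(pred, label, ignore_index=None):
    """Fraction of pixels where pred == label, optionally skipping a label value."""
    mask = np.ones_like(label, dtype=bool)
    if ignore_index is not None:
        mask = label != ignore_index
    return (pred[mask] == label[mask]).mean()


def test_pixelwise_accuracy_ignores_masked_pixels():
    label = np.array([[0, 1], [255, 1]])
    pred = np.array([[0, 1], [0, 0]])
    # The 255-labelled pixel is excluded: 2 correct out of 3 counted pixels.
    assert pixelwise_accuracy(pred, label, ignore_index=255) == pytest.approx(2 / 3)
```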
* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files
Related work items: #18350

* Merged PR 586: Purging unused files and experiments
Related work items: #20499

* moved prepare data under scripts

* removed untested model configs

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments
A few changes:
- Instructions in README on how to download and process the Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in the Penobscot data loader
- fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works if train.py is run without a config file
Related work items: #20694

* Merged PR 605: added common metrics to Waldeland model in Ignite
Related work items: #19550

* Removed redundant extract_metric_from

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment (see the sketch below)
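A minimal sketch of the kind of per-experiment logging utility described above, assuming a `setup_logging` helper and a log file written inside the experiment directory (the function name, path, and layout are illustrative assumptions, not the repo's actual implementation):
```
# Illustrative utility in the spirit of "created util logging function, and
# moved logging file to each experiment". Function name, file layout, and the
# example path are assumptions; the repo's actual helper may differ.
import logging
import os


def setup_logging(experiment_dir, name="train", level=logging.INFO):
    """Configure a logger that writes to <experiment_dir>/<name>.log and stdout."""
    os.makedirs(experiment_dir, exist_ok=True)
    log_path = os.path.join(experiment_dir, f"{name}.log")  # relative to the experiment
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
        handlers=[logging.FileHandler(log_path), logging.StreamHandler()],
    )
    return logging.getLogger(name)


logger = setup_logging("example_experiment", name="train")  # placeholder directory
logger.info("logging configured")
```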
* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic: add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)

* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* added system requirements to readme

* sdk 1.0.76; tested conda env vs docker image; extended readme

* removed reference to imaging

* minor md formatting

* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83

* Add Troubleshooting section for DSVM warnings #89

* Add Troubleshooting section for DSVM warnings, plus typo #89

* tested both yml conda env and docker; updated conda yml to have docker sdk

* added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment

* Update README.md

* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3)

* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* merge upstream into my fork (#1)

* MINOR: addressing broken F3 download link (#73)

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) (see the sketch below)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)
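The numpy dataset loader and demo pipeline entries above (#7) can be pictured as a small array-backed PyTorch dataset; the class, file names, and shapes below are assumptions rather than the repo's actual reader:
```
# Illustrative numpy-backed dataset in the spirit of the "Adds dataloader for
# numpy datasets" entry above. The class, file names, and shapes are
# assumptions, not the repo's actual implementation.
import numpy as np
import torch
from torch.utils.data import Dataset


class NumpySectionDataset(Dataset):
    """Serves (section, label) pairs from two .npy arrays of shape (N, H, W)."""

    def __init__(self, image_path, label_path):
        self.images = np.load(image_path)
        self.labels = np.load(label_path)
        assert self.images.shape == self.labels.shape

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = torch.from_numpy(self.images[idx]).float().unsqueeze(0)  # (1, H, W)
        label = torch.from_numpy(self.labels[idx]).long()                # (H, W)
        return image, label


# Usage sketch (paths are placeholders):
# dataset = NumpySectionDataset("train_images.npy", "train_labels.npy")
# loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)
```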
* fixed link for F3 download

* MINOR: python version fix to 3.6.7 (#72)

* Adding system requirements in README (#74)
* Remove related projects on AI Labs

* Added a reference to Azure machine learning (#115)
Added a reference to Azure Machine Learning to show how folks can get started with using Azure Machine Learning

* update fork from upstream (#4)

* fixed merge conflict resolution in LICENSE
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. 
The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). 
I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
* training/testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. 
* Merged PR 253: Waldeland based voxel loaders and TextureNet model. Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set. Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation. Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script. Related work items: #17681
* Merged PR 315: Removing voxel exp. Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
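As context for the metrics bullets above, the sketch below shows what an ignite.metrics confusion-matrix setup with ignore_index masking could look like; it is an illustration only, not the repo's implementation, and the output transform, ignore label, and class count are assumptions.

```python
# Hedged sketch of ignite.metrics-based segmentation metrics with ignore_index
# masking, in the spirit of the bullets above; not the repo's actual code.
from ignite.metrics import ConfusionMatrix, mIoU

IGNORE_INDEX = 255          # assumed "void" label
NUM_CLASSES = 6             # assumed class count for the F3 facies labels


def masked_output_transform(output):
    """Drop pixels labelled IGNORE_INDEX before the confusion-matrix update."""
    y_pred, y = output                                   # (B, C, H, W), (B, H, W)
    n_classes = y_pred.shape[1]
    keep = (y != IGNORE_INDEX).reshape(-1)
    y_pred = y_pred.permute(0, 2, 3, 1).reshape(-1, n_classes)[keep]
    y = y.reshape(-1)[keep]
    return y_pred, y


cm = ConfusionMatrix(num_classes=NUM_CLASSES, output_transform=masked_output_transform)
mean_iou = mIoU(cm)   # e.g. mean_iou.attach(evaluator, "mIoU") on an ignite Engine
```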
* Merged PR 405: minor mods to notebook, more documentation. A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard. Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb; all other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter. The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR is dependent on the tests created in the previous branch !333; that's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955
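To make the intended test layout concrete, a minimal pytest-style test in the spirit of PR 341 might look like the sketch below; pixelwise_accuracy is a hypothetical stand-in, not a function from cv_lib.

```python
# Hedged illustration of the pytest-style metric tests described in PR 341.
# `pixelwise_accuracy` is a made-up stand-in; the real tests target cv_lib's
# own metric functions.
import numpy as np
import pytest


def pixelwise_accuracy(pred, target):
    """Toy metric: fraction of pixels where the prediction matches the label."""
    return float((pred == target).mean())


def test_perfect_prediction_scores_one():
    labels = np.array([[0, 1], [2, 3]])
    assert pixelwise_accuracy(labels, labels) == pytest.approx(1.0)


def test_half_correct_prediction_scores_half():
    pred = np.array([[0, 1], [2, 3]])
    target = np.array([[0, 1], [0, 0]])
    assert pixelwise_accuracy(pred, target) == pytest.approx(0.5)
```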
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file. Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names. (#9)
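The logging bullets above describe moving log files out of the repo root and into each experiment; a minimal sketch of such a utility, with the function name, log location, and format string assumed for illustration, could look like this:

```python
# Hedged sketch of a per-experiment logging helper in the spirit of the
# bullets above; the name and defaults are assumptions, not the repo's
# actual utility.
import logging
import os


def setup_experiment_logging(log_dir="logs", filename="experiment.log"):
    """Write logs next to the experiment rather than at the repo root."""
    os.makedirs(log_dir, exist_ok=True)
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s: %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(os.path.join(log_dir, filename)),
        ],
    )
    return logging.getLogger("experiment")
```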
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings #89
* Add Troubleshooting section for DSVM warnings, plus typo #89
* tested both yml conda env and docker; updated conda yml to have docker sdk; added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* Update README.md
* Remove related projects on AI Labs
* Added a reference to Azure machine learning (#115) - shows how folks can get started with using Azure Machine Learning
* Update README.md
* Update AUTHORS.md (#117)
* Update AUTHORS.md (#118)
* pre-release items (#119)
* added README documentation per bug bash feedback
* added missing tests
* closing out multiple post bug bash issues with single PR
* new badges in README
* cleared notebook output
* notebooks links
* fixed bad merge
* forked branch name is misleading. (#116)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2)
A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
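As an illustration of that testing style, a minimal self-contained pytest check for a toy pixel-accuracy helper is sketched below; the helper is defined inline for the example and does not reflect the actual cv_lib/metrics API:
```
import numpy as np
import pytest


def pixel_accuracy(pred: np.ndarray, label: np.ndarray) -> float:
    """Fraction of pixels where the prediction equals the label."""
    assert pred.shape == label.shape
    return float((pred == label).mean())


def test_pixel_accuracy_perfect_match():
    label = np.array([[0, 1], [2, 2]])
    assert pixel_accuracy(label.copy(), label) == pytest.approx(1.0)


def test_pixel_accuracy_half_match():
    pred = np.array([[0, 1], [0, 0]])
    label = np.array([[0, 1], [2, 2]])
    assert pixel_accuracy(pred, label) == pytest.approx(0.5)
```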
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build stat… * Minor fix: broken links in README (#120) * fully-run notebooks links and fixed contrib voxel models (#123) * added README documentation per bug bash feedback * added missing tests * - added notebook links - made sure original voxel2pixel code runs * update ignite port of texturenet * resolved merge conflict * formatting change * Adds reproduction instructions to readme (#122) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others.
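Referring back to the numpy dataset loader mentioned above (#7): a minimal sketch of what such a PyTorch dataset could look like is shown below; the class name, file layout, and array shapes are assumptions made for illustration, not the loader that was actually added:
```
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class NumpySegmentationDataset(Dataset):
    """Wraps an (N, H, W) image volume and a matching label volume stored as .npy files."""

    def __init__(self, image_path: str, label_path: str):
        self.images = np.load(image_path)  # float seismic amplitudes, shape (N, H, W)
        self.labels = np.load(label_path)  # integer class labels, shape (N, H, W)
        assert self.images.shape == self.labels.shape

    def __len__(self) -> int:
        return self.images.shape[0]

    def __getitem__(self, idx: int):
        image = torch.from_numpy(self.images[idx]).unsqueeze(0).float()  # (1, H, W)
        label = torch.from_numpy(self.labels[idx]).long()                # (H, W)
        return image, label


# Example usage (file paths are placeholders):
# loader = DataLoader(NumpySegmentationDataset("images.npy", "labels.npy"), batch_size=8, shuffle=True)
```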
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @ , @ , @ , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. 
The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
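As a companion illustration for the Ignite-based training and metrics work referenced above (for example, PR 513 and the move to ignite.metrics), the sketch below wires a placeholder segmentation model to Ignite's ConfusionMatrix-derived IoU/mIoU metrics; the model, class count, and commented-out loaders are assumptions, not the repo's actual train scripts:
```
import torch
from ignite.engine import create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import ConfusionMatrix, IoU, Loss, mIoU

N_CLASSES = 6  # assumption: number of facies classes; adjust to the dataset at hand

model = torch.nn.Conv2d(1, N_CLASSES, kernel_size=1)  # stand-in for a real segmentation network
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Segmentation metrics built on top of a shared confusion matrix
cm = ConfusionMatrix(num_classes=N_CLASSES)
metrics = {"loss": Loss(criterion), "IoU": IoU(cm), "mIoU": mIoU(cm)}

trainer = create_supervised_trainer(model, optimizer, criterion)
evaluator = create_supervised_evaluator(model, metrics=metrics)

# Example usage (train_loader / val_loader would come from the experiment's data loaders):
# trainer.run(train_loader, max_epochs=10)
# evaluator.run(val_loader); print(evaluator.state.metrics["mIoU"])
```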
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * Updates notebook to use itkwidgets for interactive visualisation * Further updates * Fixes merge conflicts * removing files * Adding reproduction experiment instructions to readme * checking in ablation study from ilkarman (#124) tests pass but final 
results aren't communicated to github. No way to trigger another commit other than to do a dummy commit * minor bug in 000 nb; sdk.v1.0.79; FROM continuumio/miniconda3:4.7.12 (#126) * Added download script for dutch F3 dataset. Also adding Sharat/WH as authors. (#129) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * Using env variable during dutchF3 splits * Improvements to dutchf3 (#128) * Adds padding to distributed training pipeline * Adds exception if supplied weights file is not found * Fixes hrnet location * Removes unecessary config * Ghiordan/azureml devito04 (#130) * exported conda env .yml file for AzureML control plane * both control plane and experimentation docker images use azure_ml sdk 1.0.81 * making model snapshots more verbose / friendly (#152) * added scripts which reproduce results * build error fix * modified all local training runs to use model_dir for model name * extended model naming to distributed setup as well * added pillow breakage fix too * removing execution scripts from this PR * upgrading pytorch version to keep up with torchvision to keep up with Pillow * reduced validation batch size for deconvnets to combat OOM with pyTorch 1.4.0 * notebook enhancements from sharatsc (#153) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes Co-authored-by: Sharat Chikkerur * Fixes a few typos, and links to the troubleshooting section when running the conda command * Readme update Fixes a few typos, and links to the troubleshooting section when running the conda command (#160) * scripts to reproduce model results (#155) * added scripts which reproduce results * build error fix * modified all local training runs to use model_dir for model name * extended model naming to distributed setup as well * added pillow breakage fix too * removing execution scripts from this PR * upgrading pytorch version to keep up with torchvision to keep up with Pillow * initial checkin of the run scripts to reproduce results * edited version of run_all to run all jobs to match presentation/github results * fixed typos in main run_all launch script * final version of the scripts which reproduce repo results * added README description which reproduces the results * Fix data path in the README (#167) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * Adding readme text for the notebooks and checking if config is correctly setup * fixing prepare script example Co-authored-by: maxkazmsft * Update F3_block_training_and_evaluation_local.ipynb (#163) Minor fix to figure axes Co-authored-by: maxkazmsft * Maxkaz/test fixes (#168) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * fixed test links in the README * addressed PR comments Co-authored-by: Sharat Chikkerur * add data tests for download and preprocessing; resolve preprocessing bugs (#175) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * added dataset download and preprocessing tests * changed data dir to not break master, added separate data prep script for builds * modified README to reflect code changes; added license header * adding fixes to data download script for the builds * forgot to ass the readme fix to data preprocessing script * fixes Co-authored-by: Sharat Chikkerur * Adding content to interpretation README (#171) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * Adding readme text for the notebooks and checking if config is correctly setup * fixing prepare script example * Adding more content to interpretation README * Update README.md * Update HRNet_Penobscot_demo_notebook.ipynb Co-authored-by: maxkazmsft * added utility to validate paths * better handling of AttributeErrors * fixing paths in configs to match those in readme.md * minor formatting improvements * add validate_config_paths to notebooks * adding generic and absolute paths in the config + minor cleanup * better format for validate_config_paths() * added dummy path in hrnet config * modified HRNet notebook * added missing validate_config_paths() * Updates to prepare dutchf3 (#185) * updating patch to patch_size when we are using it as an integer * modifying the range function in the prepare_dutchf3 script to get all of our data * updating path to logging.config so the script can locate it * manually reverting back log path to troubleshoot build tests * updating patch to patch_size for testing on preprocessing scripts * updating patch to patch_size where applicable in ablation.sh * reverting back changes on ablation.sh to validate build pass * update patch to patch_size in ablation.sh (#191) Co-authored-by: Sharat Chikkerur * closes https://github.com/microsoft/seismic-deeplearning/issues/181 (#187) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * addded ability to load pretrained HRNet model on the build server from custom location * fixed build failure * another fix Co-authored-by: Sharat Chikkerur * read parameters from papermill * fixes to test_all script to reproduce model results (#201) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * updated model test script to work on master and staging branches to reproduce results Co-authored-by: Sharat Chikkerur * Solves issue #54: check/validate config to make sure datapath and model path are valid (#198) Co-authored-by: maxkazmsft * Adding dockerfile to solve issue #146 (#204) * Fixes a few typos, and links to the troubleshooting section when running the conda command * added draft dockerfile and readme * fixes to dockerfile * minor improvements * add code to download the datasets * Update Dockerfile * use miniconda * activate jupyter kernel * comment out code to download data * Updates to Dockerfile to activate conda env * updating the Dockerfile * change branch to staging (bug fix) * download the datasets * Update README.md * final modifications to Dockerfile * Updated the README file * Updated the README to use --mount instead of --volume --volume has the disadvantage of requiring the mount point to exist prior to running the docker image. Otherwise, it will create an empty directory. --mount however allows us to mount files directly to any location. * Update the hrnet.yml to match the mount point in the docker image * Update the dutchf3 paths in the config files * Update the dockerfile to prepare the datasets for training * support for nvidia-docker * fix gitpython bug * fixing the "out of shared memory" bug Co-authored-by: maxkazmsft * remove duplicate code for validation (#208) * Update README.md * updating readme metrics; adding runtimes (#210) * adds ability to specify cwd for notebook tests (#207) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * updated model test script to work on master and staging branches to reproduce results * enables ability to change notebook execution dir * fix Co-authored-by: Sharat Chikkerur * 138 download pretrained models to hrnet notebook (#213) * removed duplicate code in the notebooks * initial draft * done with download_pretrained_model * updated notebook and utils * updating model dir in config * updates to util * update to notebook * model download fixes and HRNet pre-trained model demo run * fix to non-existent model_dir directory on the build server * typo Co-authored-by: maxkazmsft * 226 (#231) * added ability to use pre-trained models on Dutch F3 dataset * moved black notebook formatter instructions to README * finished Dutch F3 notebook training - pre-trained model runtime is down to 1 minute; starting on test set performance * finished dutch f3 notebook * fixed Docker not running out-of-the-box with the given parameters * cleaned up other notebooks and files which are not scoped for this release * tweaks to notebook from Docker * fixed Docker instructions and port 9000 for TB * notebook build fixes * small Dockerfile fix * notebook build fixes * increased max_iterations in tests * finished tweaking the notebook to get the tests to pass * more fixes for build tests * dummy commit to re-trigger the builds * addressed PR comments * reverting back data.Subset to toolz.take * added docker image test build (#242) * added docker image test build * increased Docker image build timeout * update notebook seeds (#247) * re-wrote experiment test builds to run in parallel on single 4-GPU VM (#246) * re-wrote experiment test builds to run in parallel on single 4-GPU VM * fixed yaml typo * fixed another yaml typo * added more descriptive build names * fixed another yaml typo * changed build names and added tee log splitting * added wait -n * added wait termination condition * fixed path typo * added code to manually block on PIDs * added ADO fixes to collect PIDs for wait; changed component governance build pool * added manual handling of return codes * fixed parallel distributed tests * build typo * correctness branch setup (#251) * created correctness branch, trimmed experiments to Dutch F3 only * trivial change to re-trigger build * dummy PR to re-trigger malfunctioning builds * reducing scope further (#258) * created correctness branch, trimmed experiments to Dutch F3 only * trivial change to re-trigger build * dummy PR to re-trigger malfunctioning builds * reducing scope of the correctness branch further * added branch triggers * hotfixing correctness - broken DropBox download link * 214 Ignite 0.3.0 upgrade (#261) * upgraded to Ignite 0.3.0 and fixed upgrade compatibility * added seeds and modified notebook for ignite 0.3.0 * updated code and tests to work with ignite 0.3.0 * made code consistent with Ignite 0.3.0 as much as possible * fixed iterator epoch_length bug by subsetting validation set * applied same fix to the notebook * bugfix in distributed train.py * increased distributed tests to 2 batches - hoping for one batch per GPU * resolved rebase
conflict * added seeds and modified notebook for ignite 0.3.0 * updated code and tests to work with ignite 0.3.0 * made code consistent with Ignite 0.3.0 as much as possible * fixed iterator epoch_length bug by subsetting validation set * applied same fix to the notebook * bugfix in distributed train.py * increased distributed tests to 2 batches - hoping for one batch per GPU * update docker readme (#262) Co-authored-by: maxkazmsft * tagged all TODOs with issues on github (and created issues) (#278) * created correctness branch, trimmed experiments to Dutch F3 only * trivial change to re-trigger build * dummy PR to re-trigger malfunctioning builds * resolved merge conflict * flagged all non-contrib TODO with github issues * resolved rebase conflict * resolved merge conflict * cleaned up archaic voxel code * Refactoring train.py, removing OpenCV, adding training results to Tensorboard, bug fixes (#264) I think moving forward, we'll use smaller PRs. But here are the changes in this one: Fixes issue #236 that involves rewriting a big portion of train.py such that: All the tensorboard event handlers are organized in tensorboard_handlers.py and only called in train.py to log training and validation results in Tensorboard. The code logs the same results for training and validation. It also adds the class IoU score. All single-use functions (e.g. _select_max, _tensor_to_numpy, _select_pred_and_mask) are lambda functions now. The code is organized into more meaningful "chunks", e.g. all the optimizer-related code should be together if possible, same thing for logging, configuration, loaders, tensorboard, etc. In addition: Fixed a visualization bug where the seismic images were not normalized correctly. This solves Issue #217. Fixed a visualization bug where the predictions were not masked where the input image was padded. This improves the ability to visually inspect and evaluate the results. This solves Issue #230. Fixes a potential issue where Tensorboard can crash when a large training batch size is used. Now the number of images visualized in Tensorboard from every batch has an upper limit. Completely removed OpenCV as a dependency from the DeepSeismic Repo. It was only used in a small part of the code where it wasn't really necessary, and OpenCV is a huge library. Fixes Issue #218 where the epoch number for the images in Tensorboard was always logged as 1 (therefore, not allowing us to see the epoch number of the different results in Tensorboard).
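As a rough illustration of the Tensorboard image cap described above, a helper along the lines of the sketch below could write at most a fixed number of images per batch; the function name and cap value are placeholders, not the repo's actual tensorboard_handlers.py:
```
import torch
from torch.utils.tensorboard import SummaryWriter
from torchvision.utils import make_grid

MAX_IMAGES_PER_BATCH = 8  # assumption: a small fixed cap


def log_images(writer: SummaryWriter, tag: str, batch: torch.Tensor, step: int) -> None:
    """Write at most MAX_IMAGES_PER_BATCH images from `batch` (N, C, H, W) to TensorBoard."""
    grid = make_grid(batch[:MAX_IMAGES_PER_BATCH], normalize=True)
    writer.add_image(tag, grid, global_step=step)
```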
Removes the HorovodLRScheduler class since its no longer used Removes toolz.take from Debug mode, and uses PyTorch's native Subset() dataset class Changes default patch size for the HRNet model to 256 In addition to several other minor changes Co-authored-by: Yazeed Alaudah Co-authored-by: Ubuntu Co-authored-by: Max Kaznady * Fixes training/validation overlap #143, #233, #253, and #259 (#282) * Correctness single GPU switch (#290) * resolved rebase conflict * resolved merge conflict * resolved rebase conflict * resolved merge conflict * reverted multi-GPU builds to run on single GPU * 249r3 (#283) * resolved rebase conflict * resolved merge conflict * resolved rebase conflict * resolved merge conflict * wrote the bulk of checkerboard example * finished checkerboard generator * resolved merge conflict * resolved rebase conflict * got binary dataset to run * finished first implementation mockup - commit before rebase * made sure rebase went well manually * added new files * resolved PR comments and made tests work * fixed build error * fixed build VM errors * more fixes to get the test to pass * fixed n_classes issue in data.py * fixed notebook as well * cleared notebook run cell * trivial commit to restart builds * addressed PR comments * moved notebook tests to main build pipeline * fixed checkerboard label precision * relaxed performance tests for now * resolved merge conflict * resolved merge conflict * fixed build error * resolved merge conflicts * fixed another merge mistake * enabling development on docker (#291) * 289: correctness metrics and tighter tests (#293) * resolved rebase conflict * resolved merge conflict * resolved rebase conflict * resolved merge conflict * wrote the bulk of checkerboard example * finished checkerboard generator * resolved merge conflict * resolved rebase conflict * got binary dataset to run * finished first implementation mockup - commit before rebase * made sure rebase went well manually * added new files * resolved PR comments and made tests work * fixed build error * fixed build VM errors * more fixes to get the test to pass * fixed n_classes issue in data.py * fixed notebook as well * cleared notebook run cell * trivial commit to restart builds * addressed PR comments * moved notebook tests to main build pipeline * fixed checkerboard label precision * relaxed performance tests for now * resolved merge conflict * resolved merge conflict * fixed build error * resolved merge conflicts * fixed another merge mistake * resolved rebase conflict * resolved rebase 2 * resolved merge conflict * resolved merge conflict * adding new logging * added better logging - cleaner - debugged metrics on checkerboard dataset * resolved rebase conflict * resolved merge conflict * resolved merge conflict * resolved merge conflict * resolved rebase 2 * resolved merge conflict * updated notebook with the changes * addressed PR comments * addressed another PR comment * uniform colormap and correctness tests (#295) * correctness code good for PR review * addressed PR comments * V0.2 release README update (#300) * updated readme for v0.2 release * bug fix (#296) Co-authored-by: Gianluca Campanella Co-authored-by: msalvaris Co-authored-by: Vanja Paunic Co-authored-by: Vanja Paunic Co-authored-by: Mathew Salvaris Co-authored-by: George Iordanescu Co-authored-by: vapaunic <15053814+vapaunic@users.noreply.github.com> Co-authored-by: Sharat Chikkerur Co-authored-by: Wee Hyong Tok Co-authored-by: Daniel Ciborowski Co-authored-by: George Iordanescu Co-authored-by: Sharat Chikkerur 
Co-authored-by: Ubuntu Co-authored-by: Yazeed Alaudah Co-authored-by: Yazeed Alaudah Co-authored-by: kirasoderstrom Co-authored-by: yalaudah Co-authored-by: Ubuntu --- AUTHORS.md | 4 +- NOTICE.txt | 2 +- README.md | 194 ++- cgmanifest.json | 2 +- conftest.py | 0 .../distributed/configs/hrnet.yaml | 103 ++ .../distributed/configs/patch_deconvnet.yaml | 59 + .../configs/patch_deconvnet_skip.yaml | 59 + .../distributed/configs/seresnet_unet.yaml | 59 + .../distributed/configs/unet.yaml | 63 + .../dutchf3_patch/distributed/default.py | 107 ++ .../dutchf3_patch/distributed/logging.conf | 34 + .../dutchf3_patch/distributed/run.sh | 0 .../dutchf3_patch/distributed/train.py | 341 +++++ .../dutchf3_patch/distributed/train.sh | 3 + .../interpretation/dutchf3_section/README.md | 25 + .../local/configs/section_deconvnet_skip.yaml | 45 + .../dutchf3_section/local/default.py | 93 ++ .../dutchf3_section/local/logging.conf | 37 + .../dutchf3_section/local/test.py | 205 +++ .../dutchf3_section/local/train.py | 294 +++++ .../dutchf3_voxel/configs/texture_net.yaml | 2 +- .../interpretation/dutchf3_voxel/default.py | 4 +- .../interpretation/dutchf3_voxel/train.py | 2 +- .../interpretation/penobscot/README.md | 27 + .../penobscot/local/configs/hrnet.yaml | 108 ++ .../local/configs/seresnet_unet.yaml | 64 + .../interpretation/penobscot/local/default.py | 122 ++ .../penobscot/local/logging.conf | 34 + .../interpretation/penobscot/local/test.py | 288 +++++ .../interpretation/penobscot/local/test.sh | 2 + .../interpretation/penobscot/local/train.py | 293 +++++ .../interpretation/penobscot/local/train.sh | 2 + ..._GeophysicsTutorial_FWI_Azure_devito.ipynb | 120 +- ..._GeophysicsTutorial_FWI_Azure_devito.ipynb | 228 ++-- ..._GeophysicsTutorial_FWI_Azure_devito.ipynb | 33 +- ..._GeophysicsTutorial_FWI_Azure_devito.ipynb | 273 ++-- contrib/scripts/ablation.sh | 8 +- cv_lib/cv_lib/event_handlers/__init__.py | 3 +- .../cv_lib/event_handlers/logging_handlers.py | 61 +- .../event_handlers/tensorboard_handlers.py | 79 +- .../cv_lib/segmentation/dutchf3/__init__.py | 0 cv_lib/cv_lib/segmentation/dutchf3/utils.py | 6 - .../cv_lib/segmentation/models/seg_hrnet.py | 10 +- cv_lib/cv_lib/segmentation/utils.py | 30 - cv_lib/cv_lib/utils.py | 53 + docker/Dockerfile | 40 +- docker/README.md | 2 +- environment/anaconda/local/environment.yml | 14 +- environment/docker/apex/dockerfile | 2 +- environment/docker/horovod/dockerfile | 2 +- examples/interpretation/README.md | 9 +- ..._patch_model_training_and_evaluation.ipynb | 1103 +++++++++++++++++ .../interpretation/notebooks/utilities.py | 213 +++- .../dutchf3_patch/local/configs/hrnet.yaml | 17 +- .../local/configs/patch_deconvnet.yaml | 6 +- .../local/configs/patch_deconvnet_skip.yaml | 5 +- .../local/configs/seresnet_unet.yaml | 2 +- .../dutchf3_patch/local/configs/unet.yaml | 2 +- .../dutchf3_patch/local/default.py | 12 +- .../dutchf3_patch/local/test.py | 176 +-- .../dutchf3_patch/local/test.sh | 2 +- .../dutchf3_patch/local/train.py | 286 ++--- .../dutchf3/data.py | 636 +++++----- .../dutchf3/tests/test_dataloaders.py | 325 +++++ .../dutchf3/utils/batch.py | 114 -- .../models/texture_net.py | 1 + .../penobscot/metrics.py | 4 +- scripts/gen_checkerboard.py | 197 +++ scripts/logging.conf | 34 + scripts/prepare_dutchf3.py | 357 ++++-- scripts/prepare_penobscot.py | 2 +- scripts/run_all.sh | 112 ++ scripts/run_distributed.sh | 59 + scripts/test_all.sh | 201 +++ tests/cicd/aml_build.yml | 54 + tests/cicd/component_governance.yml | 6 +- tests/cicd/main_build.yml | 485 +++++--- 
tests/cicd/src/check_performance.py | 97 ++ tests/cicd/src/conftest.py | 9 + tests/cicd/src/notebook_integration_tests.py | 12 +- tests/cicd/src/scripts/get_data_for_builds.sh | 53 + tests/test_prepare_dutchf3.py | 427 +++++++ 83 files changed, 6953 insertions(+), 1706 deletions(-) create mode 100644 conftest.py create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/configs/hrnet.yaml create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet.yaml create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet_skip.yaml create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/configs/seresnet_unet.yaml create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/configs/unet.yaml create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/default.py create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/logging.conf create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/run.sh create mode 100644 contrib/experiments/interpretation/dutchf3_patch/distributed/train.py create mode 100755 contrib/experiments/interpretation/dutchf3_patch/distributed/train.sh create mode 100644 contrib/experiments/interpretation/dutchf3_section/README.md create mode 100644 contrib/experiments/interpretation/dutchf3_section/local/configs/section_deconvnet_skip.yaml create mode 100644 contrib/experiments/interpretation/dutchf3_section/local/default.py create mode 100644 contrib/experiments/interpretation/dutchf3_section/local/logging.conf create mode 100644 contrib/experiments/interpretation/dutchf3_section/local/test.py create mode 100644 contrib/experiments/interpretation/dutchf3_section/local/train.py create mode 100644 contrib/experiments/interpretation/penobscot/README.md create mode 100644 contrib/experiments/interpretation/penobscot/local/configs/hrnet.yaml create mode 100644 contrib/experiments/interpretation/penobscot/local/configs/seresnet_unet.yaml create mode 100644 contrib/experiments/interpretation/penobscot/local/default.py create mode 100644 contrib/experiments/interpretation/penobscot/local/logging.conf create mode 100644 contrib/experiments/interpretation/penobscot/local/test.py create mode 100755 contrib/experiments/interpretation/penobscot/local/test.sh create mode 100644 contrib/experiments/interpretation/penobscot/local/train.py create mode 100755 contrib/experiments/interpretation/penobscot/local/train.sh create mode 100644 cv_lib/cv_lib/segmentation/dutchf3/__init__.py create mode 100644 examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb create mode 100644 interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py create mode 100644 scripts/gen_checkerboard.py create mode 100644 scripts/logging.conf create mode 100755 scripts/run_all.sh create mode 100755 scripts/run_distributed.sh create mode 100755 scripts/test_all.sh create mode 100644 tests/cicd/aml_build.yml create mode 100644 tests/cicd/src/check_performance.py create mode 100755 tests/cicd/src/scripts/get_data_for_builds.sh create mode 100644 tests/test_prepare_dutchf3.py diff --git a/AUTHORS.md b/AUTHORS.md index c0011f3e..b903ddb4 100644 --- a/AUTHORS.md +++ b/AUTHORS.md @@ -9,14 +9,16 @@ Contributors (sorted alphabetically) ------------------------------------- To contributors: please add your name to the list when you submit a patch to the project. 
+* Yazeed Alaudah * Ashish Bhatia +* Sharat Chikkerur * Daniel Ciborowski * George Iordanescu * Ilia Karmanov * Max Kaznady * Vanja Paunic * Mathew Salvaris - +* Wee Hyong Tok ## How to be a contributor to the repository This project welcomes contributions and suggestions. Most contributions require you to agree to a diff --git a/NOTICE.txt b/NOTICE.txt index 6dc34351..fe7884f1 100755 --- a/NOTICE.txt +++ b/NOTICE.txt @@ -1949,7 +1949,7 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ------------------------------------------------------------------- -olivesgatech/facies_classification_benchmark 12102683a1ae78f8fbc953823c35a43b151194b3 - MIT +yalaudah/facies_classification_benchmark 12102683a1ae78f8fbc953823c35a43b151194b3 - MIT Copyright (c) 2017 Meet Pragnesh Shah Copyright (c) 2010-2018 Benjamin Peterson diff --git a/README.md b/README.md index 65b53002..eb89c926 100644 --- a/README.md +++ b/README.md @@ -3,32 +3,33 @@ This repository shows you how to perform seismic imaging and interpretation on Azure. It empowers geophysicists and data scientists to run seismic experiments using state-of-art DSL-based PDE solvers and segmentation algorithms on Azure. -The repository provides sample notebooks, data loaders for seismic data, utilities, and out-of-the box ML pipelines, organized as follows: +The repository provides sample notebooks, data loaders for seismic data, utilities, and out-of-the-box ML pipelines, organized as follows: - **sample notebooks**: these can be found in the `examples` folder - they are standard Jupyter notebooks which highlight how to use the codebase by walking the user through a set of pre-made examples -- **experiments**: the goal is to provide runnable Python scripts which train and test (score) our machine learning models in `experiments` folder. The models themselves are swappable, meaning a single train script can be used to run a different model on the same dataset by simply swapping out the configuration file which defines the model. Experiments are organized by model types and datasets - for example, "2D segmentation on Dutch F3 dataset", "2D segmentation on Penobscot dataset" and "3D segmentation on Penobscot dataset" are all different experiments. As another example, if one is swapping 2D segmentation models on Dutch F3 dataset, one would just point the train and test scripts to a different configuration file within the same experiment. +- **experiments**: the goal is to provide runnable Python scripts that train and test (score) our machine learning models in the `experiments` folder. The models themselves are swappable, meaning a single train script can be used to run a different model on the same dataset by simply swapping out the configuration file which defines the model. - **pip installable utilities**: we provide `cv_lib` and `deepseismic_interpretation` utilities (more info below) which are used by both sample notebooks and experiments mentioned above -DeepSeismic currently focuses on Seismic Interpretation (3D segmentation aka facies classification) with experimental code provided around Seismic Imaging. +DeepSeismic currently focuses on Seismic Interpretation (3D segmentation aka facies classification) with experimental code provided around Seismic Imaging in the contrib folder. ### Quick Start +Our repo is Docker-enabled and we provide a Docker file which you can use to quickly demo our codebase. 
If you are in a hurry and just can't wait to run our code, follow the [Docker README](https://github.com/microsoft/seismic-deeplearning/blob/master/docker/README.md) to build and run our repo from [Dockerfile](https://github.com/microsoft/seismic-deeplearning/blob/master/docker/Dockerfile). + +For developers, we offer a more hands-on Quick Start below. + +#### Dev Quick Start There are two ways to get started with the DeepSeismic codebase, which currently focuses on Interpretation: -- if you'd like to get an idea of how our interpretation (segmentation) models are used, simply review the [HRNet demo notebook](https://github.com/microsoft/DeepSeismic/blob/master/examples/interpretation/notebooks/HRNet_Penobscot_demo_notebook.ipynb) -- to actually run the code, you'll need to set up a compute environment (which includes setting up a GPU-enabled Linux VM and downloading the appropriate Anaconda Python packages) and download the datasets which you'd like to work with - detailed steps for doing this are provided in the next `Interpretation` section below. +- if you'd like to get an idea of how our interpretation (segmentation) models are used, simply review the [HRNet demo notebook](https://github.com/microsoft/seismic-deeplearning/blob/master/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb) +- to run the code, you'll need to set up a compute environment (which includes setting up a GPU-enabled Linux VM and downloading the appropriate Anaconda Python packages) and download the datasets which you'd like to work with - detailed steps for doing this are provided in the next `Interpretation` section below. If you run into any problems, chances are your problem has already been solved in the [Troubleshooting](#troubleshooting) section. -### Pre-run notebooks - -Notebooks stored in the repository have output intentionally displaced - you can find full auto-generated versions of the notebooks here: -- **HRNet Penobscot demo**: [[HTML](https://deepseismicstore.blob.core.windows.net/shared/HRNet_Penobscot_demo_notebook.html)] [[.ipynb](https://deepseismicstore.blob.core.windows.net/shared/HRNet_Penobscot_demo_notebook.ipynb)] -- **Dutch F3 dataset**: [[HTML](https://deepseismicstore.blob.core.windows.net/shared/F3_block_training_and_evaluation_local.html)] [[.ipynb](https://deepseismicstore.blob.core.windows.net/shared/F3_block_training_and_evaluation_local.ipynb)] +The notebook is designed to be run in demo mode by default using a pre-trained model in under 5 minutes on any reasonable Deep Learning GPU such as nVidia K80/P40/P100/V100/TitanV. ### Azure Machine Learning [Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/) enables you to train and deploy your machine learning models and pipelines at scale, ane leverage open-source Python frameworks, such as PyTorch, TensorFlow, and scikit-learn. If you are looking at getting started with using the code in this repository with Azure Machine Learning, refer to [Azure Machine Learning How-to](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml) to get started. ## Interpretation -For seismic interpretation, the repository consists of extensible machine learning pipelines, that shows how you can leverage state-of-the-art segmentation algorithms (UNet, SEResNET, HRNet) for seismic interpretation, and also benchmarking results from running these algorithms using various seismic datasets (Dutch F3, and Penobscot). 
+For seismic interpretation, the repository consists of extensible machine learning pipelines that show how you can leverage state-of-the-art segmentation algorithms (UNet, SEResNET, HRNet). To run examples available on the repo, please follow instructions below to: 1) [Set up the environment](#setting-up-environment) @@ -37,7 +38,7 @@ To run examples available on the repo, please follow instructions below to: ### Setting up Environment -Follow the instruction bellow to read about compute requirements and install required libraries. +Follow the instructions below to read about compute requirements and install required libraries. #### Compute environment @@ -55,9 +56,9 @@ To install packages contained in this repository, navigate to the directory wher ```bash conda env create -f environment/anaconda/local/environment.yml ``` -This will create the appropriate conda environment to run experiments. +This will create the appropriate conda environment to run experiments. If you run into problems with this step, see the [troubleshooting section](#Troubleshooting). -Next you will need to install the common package for interpretation: +Next, you will need to install the common package for interpretation: ```bash conda activate seismic-interpretation pip install -e interpretation @@ -79,45 +80,25 @@ from the root of DeepSeismic repo. ### Dataset download and preparation -This repository provides examples on how to run seismic interpretation on two publicly available annotated seismic datasets: [Penobscot](https://zenodo.org/record/1341774) and [F3 Netherlands](https://github.com/olivesgatech/facies_classification_benchmark). Their respective sizes (uncompressed on disk in your folder after downloading and pre-processing) are: -- **Penobscot**: 7.9 GB -- **Dutch F3**: 2.2 GB +This repository provides examples of how to run seismic interpretation on the publicly available annotated [Dutch F3](https://github.com/yalaudah/facies_classification_benchmark) seismic dataset, which is about 2.2 GB in size. Please make sure you have enough disk space to download the dataset. -We have experiments and notebooks which use either one dataset or the other. Depending on which experiment/notebook you want to run you'll need to download the corresponding dataset. We suggest you start by looking at [HRNet demo notebook](https://github.com/microsoft/DeepSeismic/blob/master/examples/interpretation/notebooks/HRNet_Penobscot_demo_notebook.ipynb) which requires the Penobscot dataset. - -#### Penobscot -To download the Penobscot dataset run the [download_penobscot.sh](scripts/download_penobscot.sh) script, e.g. - -``` -data_dir="$HOME/data/penobscot" -mkdir -p "$data_dir" -./scripts/download_penobscot.sh "$data_dir" -``` - -Note that the specified download location should be configured with appropriate `write` permissions. On some Linux virtual machines, you may want to place the data into `/mnt` or `/data` folder so you have to make sure you have write access. - -To make things easier, we suggested you use your home directory where you might run out of space. If this happens on an [Azure Data Science Virtual Machine](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/) you can resize the disk quite easily from [Azure Portal](https://portal.azure.com) - please see the [Troubleshooting](#troubleshooting) section at the end of this README regarding [how to do this](#how-to-resize-data-science-virtual-machine-disk).
+The experiments and notebooks in this repository use the Dutch F3 dataset, so you will need to download it before running them. We suggest you start by looking at the [HRNet demo notebook](https://github.com/microsoft/seismic-deeplearning/blob/master/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb). -To prepare the data for the experiments (e.g. split into train/val/test), please run the following script (modifying arguments as desired): - -``` -python scripts/prepare_penobscot.py split_inline --data-dir="$HOME/data/penobscot" --val-ratio=.1 --test-ratio=.2 -``` - -#### F3 Netherlands +#### Dutch F3 Netherlands dataset prep To download the F3 Netherlands dataset for 2D experiments, please follow the data download instructions at -[this github repository](https://github.com/yalaudah/facies_classification_benchmark) (section Dataset). - -Once you've downloaded the data set, make sure to create an empty `splits` directory, under the downloaded `data` directory; you can re-use the same data directory as the one for Penobscot dataset created earlier. This is where your training/test/validation splits will be saved. +[this github repository](https://github.com/yalaudah/facies_classification_benchmark) (section Dataset). Alternatively, you can use the [download script](scripts/download_dutch_f3.sh): ``` -cd data -mkdir splits +data_dir="$HOME/data/dutch" +mkdir -p "${data_dir}" +./scripts/download_dutch_f3.sh "${data_dir}" ``` -At this point, your `data` directory tree should look like this: +Download scripts also automatically create any subfolders in `${data_dir}` which are needed for the data preprocessing scripts. + +At this point, your `${data_dir}` directory should contain a `data` folder, which should look like this: ``` data @@ -135,13 +116,15 @@ data To prepare the data for the experiments (e.g. split into train/val/test), please run the following script: ``` -# For section-based experiments -python scripts/prepare_dutchf3.py split_train_val section --data-dir=/home/username/data/dutch/data - +# change working directory to scripts folder +cd scripts # For patch-based experiments -python scripts/prepare_dutchf3.py split_train_val patch --data-dir=/home/username/data/dutch/data --stride=50 --patch=100 +python prepare_dutchf3.py split_train_val patch --data_dir=${data_dir} --label_file=train/train_labels.npy --output_dir=splits \ +--stride=50 --patch_size=100 --split_direction=both +# go back to repo root +cd .. ``` Refer to the script itself for more argument options. @@ -157,14 +140,21 @@ Make sure to run the notebooks in the conda environment we previously set up (`s python -m ipykernel install --user --name seismic-interpretation ``` +__Optional__: if you plan to develop a notebook, you can install the Black formatter with the following commands: +```bash +conda activate seismic-interpretation +jupyter nbextension install https://github.com/drillan/jupyter-black/archive/master.zip --user +jupyter nbextension enable jupyter-black-master/jupyter-black +``` + +This adds a Black formatter button to your notebook; clicking it automatically formats the notebook cell you're in. + #### Experiments -We also provide scripts for a number of experiments we conducted using different segmentation approaches. These experiments are available under `experiments/interpretation`, and can be used as examples. 
Within each experiment start from the `train.sh` and `test.sh` scripts under the `local/` (single GPU) and `distributed/` (multiple GPUs) directories, which invoke the corresponding python scripts, `train.py` and `test.py`. Take a look at the experiment configurations (see Experiment Configuration Files section below) for experiment options and modify if necessary. +We also provide scripts for a number of experiments we conducted using different segmentation approaches. These experiments are available under `experiments/interpretation`, and can be used as examples. Within each experiment start from the `train.sh` and `test.sh` scripts under the `local/` directory, which invoke the corresponding python scripts, `train.py` and `test.py`. Take a look at the experiment configurations (see Experiment Configuration Files section below) for experiment options and modify if necessary. -Please refer to individual experiment README files for more information. -- [Penobscot](experiments/interpretation/penobscot/README.md) +This release currently supports Dutch F3 local execution - [F3 Netherlands Patch](experiments/interpretation/dutchf3_patch/README.md) -- [F3 Netherlands Section](experiments/interpretation/dutchf3_section/README.md) #### Configuration Files We use [YACS](https://github.com/rbgirshick/yacs) configuration library to manage configuration options for the experiments. There are three ways to pass arguments to the experiment scripts (e.g. train.py or test.py): @@ -186,13 +176,17 @@ We use [YACS](https://github.com/rbgirshick/yacs) configuration library to manag ### Pretrained Models -#### HRNet +There are two types of pre-trained models used by this repo: +1. pre-trained models trained on non-seismic Computer Vision datasets which we fine-tune for the seismic domain through re-training on seismic data +2. models which we already trained on seismic data - these are downloaded automatically by our code if needed (again, please see the notebook for a demo above regarding how this is done). + +#### HRNet ImageNet weights model -To achieve the same results as the benchmarks above you will need to download the HRNet model [pretrained](https://github.com/HRNet/HRNet-Image-Classification) on ImageNet. We are specifically using the [HRNet-W48-C](https://1drv.ms/u/s!Aus8VCZ_C_33dKvqI6pBZlifgJk) pre-trained model; other HRNet variants are also available [here](https://github.com/HRNet/HRNet-Image-Classification) - you can navigate to those from the [main HRNet landing page](https://github.com/HRNet/HRNet-Object-Detection) for object detection. +To enable training from scratch on seismic data and to achieve the same results as the benchmarks quoted below you will need to download the HRNet model [pretrained](https://github.com/HRNet/HRNet-Image-Classification) on ImageNet. We are specifically using the [HRNet-W48-C](https://1drv.ms/u/s!Aus8VCZ_C_33dKvqI6pBZlifgJk) pre-trained model; other HRNet variants are also available [here](https://github.com/HRNet/HRNet-Image-Classification) - you can navigate to those from the [main HRNet landing page](https://github.com/HRNet/HRNet-Object-Detection) for object detection. -Unfortunately the OneDrive location which is used to host the model is using a temporary authentication token, so there is no way for us to scipt up model download. 
There are two ways to upload and use the pre-trained HRNet model on DS VM: +Unfortunately, the OneDrive location which is used to host the model is using a temporary authentication token, so there is no way for us to script up model download. There are two ways to upload and use the pre-trained HRNet model on DS VM: - download the model to your local drive using a web browser of your choice and then upload the model to the DS VM using something like `scp`; navigate to Portal and copy DS VM's public IP from the Overview panel of your DS VM (you can search your DS VM by name in the search bar of the Portal) then use `scp local_model_location username@DS_VM_public_IP:./model/save/path` to upload -- alternatively you can use the same public IP to open remote desktop over SSH to your Linux VM using [X2Go](https://wiki.x2go.org/doku.php/download:start): you can basically open the web browser on your VM this way and download the model to VM's disk +- alternatively, you can use the same public IP to open remote desktop over SSH to your Linux VM using [X2Go](https://wiki.x2go.org/doku.php/download:start): you can basically open the web browser on your VM this way and download the model to VM's disk ### Viewers (optional) @@ -220,52 +214,28 @@ segyviewer "${HOME}/home/username/data/dutch/data.segy" #### Dense Labels -This section contains benchmarks of different algorithms for seismic interpretation on 3D seismic datasets with densely-annotated data. - -Below are the results from the models contained in this repo. To run them check the instructions in folder. Alternatively take a look in for how to run them on your own dataset - -#### Netherlands F3 - -| Source | Experiment | PA | FW IoU | MCA | -|------------------|-----------------------------------|-------------|--------------|------------| -| Alaudah et al.| Section-based | 0.905 | 0.817 | .832 | -| | Patch-based | 0.852 | 0.743 | .689 | -| DeepSeismic | Patch-based+fixed | .869 | .761 | .775 | -| | SEResNet UNet+section depth | .917 | .849 | .834 | -| | HRNet(patch)+patch_depth | .908 | .843 | .837 | -| | HRNet(patch)+section_depth | .928 | .871 | .871 | - -#### Penobscot +This section contains benchmarks of different algorithms for seismic interpretation on 3D seismic datasets with densely-annotated data. We currently only support single-GPU Dutch F3 dataset benchmarks with this release. -Trained and tested on full dataset. Inlines with artefacts were left in for training, validation and testing. -The dataset was split 70% training, 10% validation and 20% test. The results below are from the test set +#### Dutch F3 -| Source | Experiment | PA | IoU | MCA | -|------------------|-------------------------------------|-------------|--------------|------------| -| DeepSeismic | SEResNet UNet + section depth | 1.0 | .98 | .99 | -| | HRNet(patch) + section depth | 1.0 | .97 | .98 | +| Source | Experiment | PA | FW IoU | MCA | V100 (16GB) training time | +| -------------- | --------------------------- | ----- | ------ | ---- | ------------------------- | +| Alaudah et al. 
| Section-based | 0.905 | 0.817 | .832 | N/A | +| | Patch-based | 0.852 | 0.743 | .689 | N/A | +| DeepSeismic | Patch-based+fixed | .875 | .784 | .740 | 08h 54min | +| | SEResNet UNet+section depth | .910 | .841 | .809 | 55h 02min | +| | HRNet(patch)+patch_depth | .884 | .795 | .739 | 67h 41min | +| | HRNet(patch)+section_depth | .900 | .820 | .767 | 55h 08min | -![Best Penobscot SEResNet](assets/penobscot_seresnet_best.png "Best performing inlines, Mask and Predictions from SEResNet") -![Worst Penobscot SEResNet](assets/penobscot_seresnet_worst.png "Worst performing inlines Mask and Predictions from SEResNet") #### Reproduce benchmarks -In order to reproduce the benchmarks you will need to navigate to the [experiments](experiments) folder. In there each of the experiments -are split into different folders. To run the Netherlands F3 experiment navigate to the [dutchf3_patch/local](experiments/dutchf3_patch/local) folder. In there is a training script [([train.sh](experiments/dutchf3_patch/local/train.sh)) +In order to reproduce the benchmarks, you will need to navigate to the [experiments](experiments) folder. In there, each of the experiments are split into different folders. To run the Netherlands F3 experiment navigate to the [dutchf3_patch/local](experiments/dutchf3_patch/local) folder. In there is a training script [([train.sh](experiments/dutchf3_patch/local/train.sh)) which will run the training for any configuration you pass in. Once you have run the training you will need to run the [test.sh](experiments/dutchf3_patch/local/test.sh) script. Make sure you specify the path to the best performing model from your training run, either by passing it in as an argument or altering the YACS config file. -To reproduce the benchmarks -for the Penobscot dataset follow the same instructions but navigate to the [penobscot](penobscot) folder. - -#### Scripts -- [parallel_training.sh](scripts/parallel_training.sh): Script to launch multiple jobs in parallel. Used mainly for local hyperparameter tuning. Look at the script for further instructions - -- [kill_windows.sh](scripts/kill_windows.sh): Script to kill multiple tmux windows. Used to kill jobs that parallel_training.sh might have started. - - ## Contributing -This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. +This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [https://cla.opensource.microsoft.com](https://cla.opensource.microsoft.com). ### Submitting a Pull Request @@ -276,14 +246,12 @@ When you submit a pull request, a CLA bot will automatically determine whether y This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. 
## Build Status -| Build | Branch | Status | -| --- | --- | --- | +| Build | Branch | Status | +| -------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Legal Compliance** | staging | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.ComponentGovernance%20(seismic-deeplearning)?branchName=staging)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=124&branchName=staging) | -| **Legal Compliance** | master | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.ComponentGovernance%20(seismic-deeplearning)?branchName=master)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=124&branchName=master) | -| **Tests** | staging | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.Notebooks%20(seismic-deeplearning)?branchName=staging)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=125&branchName=staging) | -| **Tests** | master | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.Notebooks%20(seismic-deeplearning)?branchName=master)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=125&branchName=master) | -| **Notebook Tests** | staging | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.Tests%20(seismic-deeplearning)?branchName=staging)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=126&branchName=staging) | -| **Notebook Tests** | master | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.Tests%20(seismic-deeplearning)?branchName=master)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=126&branchName=master) | +| **Legal Compliance** | master | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.ComponentGovernance%20(seismic-deeplearning)?branchName=master)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=124&branchName=master) | +| **Core Tests** | staging | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.Tests%20(seismic-deeplearning)?branchName=staging)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=126&branchName=staging) | +| **Core Tests** | master | [![Build Status](https://dev.azure.com/best-practices/deepseismic/_apis/build/status/microsoft.Tests%20(seismic-deeplearning)?branchName=master)](https://dev.azure.com/best-practices/deepseismic/_build/latest?definitionId=126&branchName=master) | # Troubleshooting @@ -297,7 +265,7 @@ A typical output will be: someusername@somevm:/projects/DeepSeismic$ which python /anaconda/envs/py35/bin/python ``` -which will indicate that anaconda folder is __/anaconda__. We'll refer to this location in instructions below, but you should update the commands according to your local anaconda folder. +which will indicate that anaconda folder is `__/anaconda__`. We'll refer to this location in the instructions below, but you should update the commands according to your local anaconda folder.
Data Science Virtual Machine conda package installation errors @@ -315,7 +283,7 @@ which will indicate that anaconda folder is __/anaconda__. We'll refer to this l
Data Science Virtual Machine conda package installation warnings - It could happen that while creating the conda environment defined by environment/anaconda/local/environment.yml on an Ubuntu DSVM, one can get multiple warnings like so: + It could happen that while creating the conda environment defined by `environment/anaconda/local/environment.yml` on an Ubuntu DSVM, one can get multiple warnings like so: ``` WARNING conda.gateways.disk.delete:unlink_or_rename_to_trash(140): Could not remove or rename /anaconda/pkgs/ipywidgets-7.5.1-py_0/site-packages/ipywidgets-7.5.1.dist-info/LICENSE. Please remove this file manually (you may need to reboot to free file handles) ``` @@ -326,20 +294,20 @@ which will indicate that anaconda folder is __/anaconda__. We'll refer to this l sudo chown -R $USER /anaconda ``` - After these command completes, try creating the conda environment in __environment/anaconda/local/environment.yml__ again. + After these commands complete, try creating the conda environment defined by `environment/anaconda/local/environment.yml` again.
Model training or scoring is not using GPU - To see if GPU is being using while your model is being trained or used for inference, run + To see if GPU is being used while your model is being trained or used for inference, run ```bash nvidia-smi ``` - and confirm that you see you Python process using the GPU. + and confirm that you see your Python process using the GPU. - If not, you may want to try reverting to an older version of CUDA for use with pyTorch. After the environment has been setup, run the following command (by default we use CUDA 10) after running `conda activate seismic-interpretation` to activate the conda environment: + If not, you may want to try reverting to an older version of CUDA for use with PyTorch. After the environment has been set up, run the following command (by default we use CUDA 10) after running `conda activate seismic-interpretation` to activate the conda environment: ```bash conda install pytorch torchvision cudatoolkit=9.2 -c pytorch ``` @@ -360,7 +328,7 @@ which will indicate that anaconda folder is __/anaconda__. We'll refer to this l torch.cuda.is_available() ``` - Output should say "True" this time. If it does, you can make the change permanent by adding + The output should say "True" this time. If it does, you can make the change permanent by adding ```bash export CUDA_VISIBLE_DEVICES=0 ``` @@ -371,11 +339,11 @@ which will indicate that anaconda folder is __/anaconda__. We'll refer to this l
GPU out of memory errors - You should be able to see how much GPU memory your process is using by running + You should be able to see how much GPU memory your process is using by running: ```bash nvidia-smi ``` - and seeing if this amount is close to the physical memory limit specified by the GPU manufacturer. + and see if this amount is close to the physical memory limit specified by the GPU manufacturer. If we're getting close to the memory limit, you may want to lower the batch size in the model configuration file. Specifically, `TRAIN.BATCH_SIZE_PER_GPU` and `VALIDATION.BATCH_SIZE_PER_GPU` settings. @@ -386,17 +354,13 @@ which will indicate that anaconda folder is __/anaconda__. We'll refer to this l 1. Go to the [Azure Portal](https://portal.azure.com) and find your virtual machine by typing its name in the search bar at the very top of the page. - 2. In the Overview panel on the left hand side, click Stop button to stop the virtual machine. + 2. In the Overview panel on the left-hand side, click the Stop button to stop the virtual machine. - 3. Next, select Disks in the same panel on the left hand side. + 3. Next, select Disks in the same panel on the left-hand side. - 4. Click the Name of the OS Disk - you'll be navigated to the Disk view. From this view, select Configuration on the left hand side and then increase Size in GB and hit the Save button. + 4. Click the Name of the OS Disk - you'll be navigated to the Disk view. From this view, select Configuration on the left-hand side and then increase Size in GB and hit the Save button. 5. Navigate back to the Virtual Machine view in Step 2 and click the Start button to start the virtual machine.
- - - - diff --git a/cgmanifest.json b/cgmanifest.json index d647c543..d83c6bfd 100644 --- a/cgmanifest.json +++ b/cgmanifest.json @@ -3,7 +3,7 @@ "component": { "type": "git", "git": { - "repositoryUrl": "https://github.com/olivesgatech/facies_classification_benchmark", + "repositoryUrl": "https://github.com/yalaudah/facies_classification_benchmark", "commitHash": "12102683a1ae78f8fbc953823c35a43b151194b3" } }, diff --git a/conftest.py b/conftest.py new file mode 100644 index 00000000..e69de29b diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/hrnet.yaml b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/hrnet.yaml new file mode 100644 index 00000000..fe3995f6 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/hrnet.yaml @@ -0,0 +1,103 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 +OPENCV_BORDER_CONSTANT: 0 + + +DATASET: + NUM_CLASSES: 6 + ROOT: /mnt/dutchf3 + CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + + +MODEL: + NAME: seg_hrnet + IN_CHANNELS: 3 + PRETRAINED: '/mnt/hrnet_pretrained/image_classification/hrnetv2_w48_imagenet_pretrained.pth' + EXTRA: + FINAL_CONV_KERNEL: 1 + STAGE2: + NUM_MODULES: 1 + NUM_BRANCHES: 2 + BLOCK: BASIC + NUM_BLOCKS: + - 4 + - 4 + NUM_CHANNELS: + - 48 + - 96 + FUSE_METHOD: SUM + STAGE3: + NUM_MODULES: 4 + NUM_BRANCHES: 3 + BLOCK: BASIC + NUM_BLOCKS: + - 4 + - 4 + - 4 + NUM_CHANNELS: + - 48 + - 96 + - 192 + FUSE_METHOD: SUM + STAGE4: + NUM_MODULES: 3 + NUM_BRANCHES: 4 + BLOCK: BASIC + NUM_BLOCKS: + - 4 + - 4 + - 4 + - 4 + NUM_CHANNELS: + - 48 + - 96 + - 192 + - 384 + FUSE_METHOD: SUM + +TRAIN: + BATCH_SIZE_PER_GPU: 16 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.001 + MAX_LR: 0.02 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "section" #"patch" # Options are none, patch, and section + STRIDE: 50 + PATCH_SIZE: 100 + AUGMENTATIONS: + RESIZE: + HEIGHT: 200 + WIDTH: 200 + PAD: + HEIGHT: 256 + WIDTH: 256 + MEAN: 0.0009997 # 0.0009996710808862074 + STD: 0.20977 # 0.20976548783479299 + MODEL_DIR: "models" + + +VALIDATION: + BATCH_SIZE_PER_GPU: 32 + +TEST: + MODEL_PATH: "/data/home/mat/repos/DeepSeismic/interpretation/experiments/segmentation/dutchf3/local/output/mat/exp/ccb7206b41dc7411609705e49d9f4c2d74c6eb88/seg_hrnet/Aug30_141919/models/seg_hrnet_running_model_18.pth" + TEST_STRIDE: 10 + SPLIT: 'Both' # Can be Both, Test1, Test2 + INLINE: True + CROSSLINE: True + POST_PROCESSING: + SIZE: 128 # + CROP_PIXELS: 14 # Number of pixels to crop top, bottom, left and right diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet.yaml b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet.yaml new file mode 100644 index 00000000..fa1d6add --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet.yaml @@ -0,0 +1,59 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 + +DATASET: + NUM_CLASSES: 6 + ROOT: /mnt/dutchf3 + CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +MODEL: + NAME: patch_deconvnet_skip + IN_CHANNELS: 1 + + +TRAIN: + BATCH_SIZE_PER_GPU: 64 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.001 + MAX_LR: 0.02 + 
MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "none" #"patch" # Options are none, patch, and section + STRIDE: 50 + PATCH_SIZE: 99 + AUGMENTATIONS: + RESIZE: + HEIGHT: 99 + WIDTH: 99 + PAD: + HEIGHT: 99 + WIDTH: 99 + MEAN: 0.0009997 # 0.0009996710808862074 + STD: 0.20977 # 0.20976548783479299 + MODEL_DIR: "models" + +VALIDATION: + BATCH_SIZE_PER_GPU: 512 + +TEST: + MODEL_PATH: "" + TEST_STRIDE: 10 + SPLIT: 'Both' # Can be Both, Test1, Test2 + INLINE: True + CROSSLINE: True + POST_PROCESSING: + SIZE: 99 # + CROP_PIXELS: 0 # Number of pixels to crop top, bottom, left and right + diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet_skip.yaml b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet_skip.yaml new file mode 100644 index 00000000..fa1d6add --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/patch_deconvnet_skip.yaml @@ -0,0 +1,59 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 + +DATASET: + NUM_CLASSES: 6 + ROOT: /mnt/dutchf3 + CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +MODEL: + NAME: patch_deconvnet_skip + IN_CHANNELS: 1 + + +TRAIN: + BATCH_SIZE_PER_GPU: 64 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.001 + MAX_LR: 0.02 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "none" #"patch" # Options are none, patch, and section + STRIDE: 50 + PATCH_SIZE: 99 + AUGMENTATIONS: + RESIZE: + HEIGHT: 99 + WIDTH: 99 + PAD: + HEIGHT: 99 + WIDTH: 99 + MEAN: 0.0009997 # 0.0009996710808862074 + STD: 0.20977 # 0.20976548783479299 + MODEL_DIR: "models" + +VALIDATION: + BATCH_SIZE_PER_GPU: 512 + +TEST: + MODEL_PATH: "" + TEST_STRIDE: 10 + SPLIT: 'Both' # Can be Both, Test1, Test2 + INLINE: True + CROSSLINE: True + POST_PROCESSING: + SIZE: 99 # + CROP_PIXELS: 0 # Number of pixels to crop top, bottom, left and right + diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/seresnet_unet.yaml b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/seresnet_unet.yaml new file mode 100644 index 00000000..9bc10d34 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/seresnet_unet.yaml @@ -0,0 +1,59 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 + + +DATASET: + NUM_CLASSES: 6 + ROOT: /mnt/dutchf3 + CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +MODEL: + NAME: resnet_unet + IN_CHANNELS: 3 + +TRAIN: + BATCH_SIZE_PER_GPU: 16 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.001 + MAX_LR: 0.02 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "section" # Options are none, patch, and section + STRIDE: 50 + PATCH_SIZE: 100 + AUGMENTATIONS: + RESIZE: + HEIGHT: 200 + WIDTH: 200 + PAD: + HEIGHT: 256 + WIDTH: 256 + MEAN: 0.0009997 # 0.0009996710808862074 + STD: 0.20977 # 0.20976548783479299 + MODEL_DIR: "models" + + +VALIDATION: + BATCH_SIZE_PER_GPU: 32 + +TEST: + MODEL_PATH: "/data/home/mat/repos/DeepSeismic/interpretation/experiments/segmentation/dutchf3/local/output/mat/exp/dc2e2d20b7f6d508beb779ffff37c77d0139e588/resnet_unet/Sep01_125513/models/resnet_unet_snapshot1model_52.pth" + TEST_STRIDE: 10 + SPLIT: 'Both' # 
Can be Both, Test1, Test2 + INLINE: True + CROSSLINE: True + POST_PROCESSING: + SIZE: 128 + CROP_PIXELS: 14 # Number of pixels to crop top, bottom, left and right \ No newline at end of file diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/unet.yaml b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/unet.yaml new file mode 100644 index 00000000..3fe5f439 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/configs/unet.yaml @@ -0,0 +1,63 @@ +# UNet configuration + +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 + + +DATASET: + NUM_CLASSES: 6 + ROOT: /mnt/dutchf3 + CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +MODEL: + NAME: resnet_unet + IN_CHANNELS: 3 + + +TRAIN: + BATCH_SIZE_PER_GPU: 16 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.001 + MAX_LR: 0.02 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "section" # Options are none, patch, and section + STRIDE: 50 + PATCH_SIZE: 100 + AUGMENTATIONS: + RESIZE: + HEIGHT: 200 + WIDTH: 200 + PAD: + HEIGHT: 256 + WIDTH: 256 + MEAN: 0.0009997 # 0.0009996710808862074 + STD: 0.20977 # 0.20976548783479299 + MODEL_DIR: "models" + + +VALIDATION: + BATCH_SIZE_PER_GPU: 32 + +TEST: + MODEL_PATH: "" + TEST_STRIDE: 10 + SPLIT: 'Both' # Can be Both, Test1, Test2 + INLINE: True + CROSSLINE: True + POST_PROCESSING: + SIZE: 128 + CROP_PIXELS: 14 # Number of pixels to crop top, bottom, left and right + diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/default.py b/contrib/experiments/interpretation/dutchf3_patch/distributed/default.py new file mode 100644 index 00000000..34d3c4d3 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/default.py @@ -0,0 +1,107 @@ +# ------------------------------------------------------------------------------ +# Copyright (c) Microsoft +# Licensed under the MIT License. 
+# ------------------------------------------------------------------------------ + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +from yacs.config import CfgNode as CN + +_C = CN() + +_C.OUTPUT_DIR = "output" +_C.LOG_DIR = "log" +_C.GPUS = (0,) +_C.WORKERS = 4 +_C.PRINT_FREQ = 20 +_C.AUTO_RESUME = False +_C.PIN_MEMORY = True +_C.LOG_CONFIG = "logging.conf" +_C.SEED = 42 +_C.OPENCV_BORDER_CONSTANT = 0 + +# Cudnn related params +_C.CUDNN = CN() +_C.CUDNN.BENCHMARK = True +_C.CUDNN.DETERMINISTIC = False +_C.CUDNN.ENABLED = True + + +# DATASET related params +_C.DATASET = CN() +_C.DATASET.ROOT = "" +_C.DATASET.NUM_CLASSES = 6 +_C.DATASET.CLASS_WEIGHTS = [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +# common params for NETWORK +_C.MODEL = CN() +_C.MODEL.NAME = "patch_deconvnet" +_C.MODEL.IN_CHANNELS = 1 +_C.MODEL.PRETRAINED = "" +_C.MODEL.EXTRA = CN(new_allowed=True) + + +# training +_C.TRAIN = CN() +_C.TRAIN.MIN_LR = 0.001 +_C.TRAIN.MAX_LR = 0.01 +_C.TRAIN.MOMENTUM = 0.9 +_C.TRAIN.BEGIN_EPOCH = 0 +_C.TRAIN.END_EPOCH = 484 +_C.TRAIN.BATCH_SIZE_PER_GPU = 32 +_C.TRAIN.WEIGHT_DECAY = 0.0001 +_C.TRAIN.SNAPSHOTS = 5 +_C.TRAIN.MODEL_DIR = "models" +_C.TRAIN.AUGMENTATION = True +_C.TRAIN.STRIDE = 50 +_C.TRAIN.PATCH_SIZE = 99 +_C.TRAIN.MEAN = 0.0009997 # 0.0009996710808862074 +_C.TRAIN.STD = 0.21 # 0.20976548783479299 +_C.TRAIN.DEPTH = "none" # Options are: none, patch, and section +# None adds no depth information and the num of channels remains at 1 +# Patch adds depth per patch so is simply the height of that patch from 0 to 1, channels=3 +# Section adds depth per section so contains depth information for the whole section, channels=3 +_C.TRAIN.AUGMENTATIONS = CN() +_C.TRAIN.AUGMENTATIONS.RESIZE = CN() +_C.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT = 200 +_C.TRAIN.AUGMENTATIONS.RESIZE.WIDTH = 200 +_C.TRAIN.AUGMENTATIONS.PAD = CN() +_C.TRAIN.AUGMENTATIONS.PAD.HEIGHT = 256 +_C.TRAIN.AUGMENTATIONS.PAD.WIDTH = 256 + + +# validation +_C.VALIDATION = CN() +_C.VALIDATION.BATCH_SIZE_PER_GPU = 32 + +# TEST +_C.TEST = CN() +_C.TEST.MODEL_PATH = "" +_C.TEST.TEST_STRIDE = 10 +_C.TEST.SPLIT = "Both" # Can be Both, Test1, Test2 +_C.TEST.INLINE = True +_C.TEST.CROSSLINE = True +_C.TEST.POST_PROCESSING = CN() # Model output postprocessing +_C.TEST.POST_PROCESSING.SIZE = 128 # Size to interpolate to in pixels +_C.TEST.POST_PROCESSING.CROP_PIXELS = 14 # Number of pixels to crop top, bottom, left and right + + +def update_config(cfg, options=None, config_file=None): + cfg.defrost() + + if config_file: + cfg.merge_from_file(config_file) + + if options: + cfg.merge_from_list(options) + + cfg.freeze() + + +if __name__ == "__main__": + import sys + + with open(sys.argv[1], "w") as f: + print(_C, file=f) diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/logging.conf b/contrib/experiments/interpretation/dutchf3_patch/distributed/logging.conf new file mode 100644 index 00000000..56334fc4 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/logging.conf @@ -0,0 +1,34 @@ +[loggers] +keys=root,__main__,event_handlers + +[handlers] +keys=consoleHandler + +[formatters] +keys=simpleFormatter + +[logger_root] +level=INFO +handlers=consoleHandler + +[logger___main__] +level=INFO +handlers=consoleHandler +qualname=__main__ +propagate=0 + +[logger_event_handlers] +level=INFO +handlers=consoleHandler +qualname=event_handlers +propagate=0 + +[handler_consoleHandler] +class=StreamHandler +level=INFO 
+formatter=simpleFormatter +args=(sys.stdout,) + +[formatter_simpleFormatter] +format=%(asctime)s - %(name)s - %(levelname)s - %(message)s + diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/run.sh b/contrib/experiments/interpretation/dutchf3_patch/distributed/run.sh new file mode 100644 index 00000000..e69de29b diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/train.py b/contrib/experiments/interpretation/dutchf3_patch/distributed/train.py new file mode 100644 index 00000000..34f1157a --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/train.py @@ -0,0 +1,341 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# +# To Run on 2 GPUs +# python -m torch.distributed.launch --nproc_per_node=2 train.py --cfg "configs/hrnet.yaml" +# +# To Test: +# python -m torch.distributed.launch --nproc_per_node=2 train.py TRAIN.END_EPOCH 1 TRAIN.SNAPSHOTS 1 --cfg "configs/hrnet.yaml" --debug +# +# /* spell-checker: disable */ +"""Train models on Dutch F3 dataset + +Trains models using PyTorch DistributedDataParallel +Uses a warmup schedule that then goes into a cyclic learning rate + +Time to run on two V100s for 300 epochs: 2.5 days +""" + +import logging +import logging.config +import os +from os import path + +import fire +import numpy as np +import torch +from albumentations import Compose, HorizontalFlip, Normalize, PadIfNeeded, Resize +from ignite.contrib.handlers import ConcatScheduler, CosineAnnealingScheduler, LinearCyclicalScheduler +from ignite.engine import Events +from ignite.metrics import Loss +from ignite.utils import convert_tensor +from toolz import compose, curry +from torch.utils import data + +from cv_lib.event_handlers import SnapshotHandler, logging_handlers, tensorboard_handlers +from cv_lib.event_handlers.logging_handlers import Evaluator +from cv_lib.event_handlers.tensorboard_handlers import create_image_writer, create_summary_writer +from cv_lib.segmentation import extract_metric_from, models +from cv_lib.segmentation.dutchf3.engine import create_supervised_evaluator, create_supervised_trainer +from cv_lib.segmentation.dutchf3.utils import current_datetime, generate_path, git_branch, git_hash, np_to_tb +from cv_lib.segmentation.metrics import class_accuracy, class_iou, mean_class_accuracy, mean_iou, pixelwise_accuracy +from cv_lib.utils import load_log_configuration +from deepseismic_interpretation.dutchf3.data import decode_segmap, get_patch_loader +from default import _C as config +from default import update_config + + +def prepare_batch(batch, device=None, non_blocking=False): + x, y = batch + return ( + convert_tensor(x, device=device, non_blocking=non_blocking), + convert_tensor(y, device=device, non_blocking=non_blocking), + ) + + +@curry +def update_sampler_epoch(data_loader, engine): + data_loader.sampler.epoch = engine.state.epoch + + +def run(*options, cfg=None, local_rank=0, debug=False): + """Run training and validation of model + + Notes: + Options can be passed in via the options argument and loaded from the cfg file + Options from default.py will be overridden by options loaded from cfg file + Options passed in via options argument will override option loaded from cfg file + + Args: + *options (str,int ,optional): Options used to overide what is loaded from the + config. To see what options are available consult + default.py + cfg (str, optional): Location of config file to load. Defaults to None. 
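+        local_rank (int, optional): Local process rank supplied by torch.distributed.launch
+                                    when running in distributed mode. Defaults to 0.
+        debug (bool, optional): If True, train and validate on a small subset of the data
+                                with a shortened schedule for quick testing. Defaults to False.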
+ """ + update_config(config, options=options, config_file=cfg) + + # we will write the model under outputs / config_file_name / model_dir + config_file_name = "default_config" if not cfg else cfg.split("/")[-1].split(".")[0] + + # Start logging + load_log_configuration(config.LOG_CONFIG) + logger = logging.getLogger(__name__) + logger.debug(config.WORKERS) + silence_other_ranks = True + world_size = int(os.environ.get("WORLD_SIZE", 1)) + distributed = world_size > 1 + + if distributed: + # FOR DISTRIBUTED: Set the device according to local_rank. + torch.cuda.set_device(local_rank) + + # FOR DISTRIBUTED: Initialize the backend. torch.distributed.launch will + # provide environment variables, and requires that you use init_method=`env://`. + torch.distributed.init_process_group(backend="nccl", init_method="env://") + + epochs_per_cycle = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS + torch.backends.cudnn.benchmark = config.CUDNN.BENCHMARK + + torch.manual_seed(config.SEED) + if torch.cuda.is_available(): + torch.cuda.manual_seed_all(config.SEED) + np.random.seed(seed=config.SEED) + # Setup Augmentations + basic_aug = Compose( + [ + Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1), + PadIfNeeded( + min_height=config.TRAIN.PATCH_SIZE, + min_width=config.TRAIN.PATCH_SIZE, + border_mode=config.OPENCV_BORDER_CONSTANT, + always_apply=True, + mask_value=255, + ), + Resize( + config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True, + ), + PadIfNeeded( + min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT, + min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH, + border_mode=config.OPENCV_BORDER_CONSTANT, + always_apply=True, + mask_value=255, + ), + ] + ) + if config.TRAIN.AUGMENTATION: + train_aug = Compose([basic_aug, HorizontalFlip(p=0.5)]) + val_aug = basic_aug + else: + train_aug = val_aug = basic_aug + + TrainPatchLoader = get_patch_loader(config) + + train_set = TrainPatchLoader( + config.DATASET.ROOT, + split="train", + is_transform=True, + stride=config.TRAIN.STRIDE, + patch_size=config.TRAIN.PATCH_SIZE, + augmentations=train_aug, + ) + + val_set = TrainPatchLoader( + config.DATASET.ROOT, + split="val", + is_transform=True, + stride=config.TRAIN.STRIDE, + patch_size=config.TRAIN.PATCH_SIZE, + augmentations=val_aug, + ) + + logger.info(f"Validation examples {len(val_set)}") + n_classes = train_set.n_classes + + if debug: + val_set = data.Subset(val_set, range(config.VALIDATION.BATCH_SIZE_PER_GPU)) + train_set = data.Subset(train_set, range(config.TRAIN.BATCH_SIZE_PER_GPU*2)) + + logger.info(f"Training examples {len(train_set)}") + logger.info(f"Validation examples {len(val_set)}") + + train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, num_replicas=world_size, rank=local_rank) + + train_loader = data.DataLoader( + train_set, batch_size=config.TRAIN.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS, sampler=train_sampler, + ) + + val_sampler = torch.utils.data.distributed.DistributedSampler(val_set, num_replicas=world_size, rank=local_rank) + + val_loader = data.DataLoader( + val_set, batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS, sampler=val_sampler, + ) + + model = getattr(models, config.MODEL.NAME).get_seg_model(config) + + device = "cpu" + if torch.cuda.is_available(): + device = "cuda" + model = model.to(device) # Send to GPU + + optimizer = torch.optim.SGD( + model.parameters(), + lr=config.TRAIN.MAX_LR, + momentum=config.TRAIN.MOMENTUM, + 
weight_decay=config.TRAIN.WEIGHT_DECAY, + ) + + # weights are inversely proportional to the frequency of the classes in + # the training set + class_weights = torch.tensor(config.DATASET.CLASS_WEIGHTS, device=device, requires_grad=False) + + criterion = torch.nn.CrossEntropyLoss(weight=class_weights, ignore_index=255, reduction="mean") + + model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device], find_unused_parameters=True) + + snapshot_duration = epochs_per_cycle * len(train_loader) if not debug else 2*len(train_loader) + + warmup_duration = 5 * len(train_loader) + + warmup_scheduler = LinearCyclicalScheduler( + optimizer, + "lr", + start_value=config.TRAIN.MAX_LR, + end_value=config.TRAIN.MAX_LR * world_size, + cycle_size=10 * len(train_loader), + ) + cosine_scheduler = CosineAnnealingScheduler( + optimizer, + "lr", + config.TRAIN.MAX_LR * world_size, + config.TRAIN.MIN_LR * world_size, + cycle_size=snapshot_duration, + ) + + scheduler = ConcatScheduler(schedulers=[warmup_scheduler, cosine_scheduler], durations=[warmup_duration]) + + trainer = create_supervised_trainer(model, optimizer, criterion, prepare_batch, device=device) + + trainer.add_event_handler(Events.ITERATION_STARTED, scheduler) + # Set to update the epoch parameter of our distributed data sampler so that we get + # different shuffles + trainer.add_event_handler(Events.EPOCH_STARTED, update_sampler_epoch(train_loader)) + + if silence_other_ranks & local_rank != 0: + logging.getLogger("ignite.engine.engine.Engine").setLevel(logging.WARNING) + + def _select_pred_and_mask(model_out_dict): + return (model_out_dict["y_pred"].squeeze(), model_out_dict["mask"].squeeze()) + + evaluator = create_supervised_evaluator( + model, + prepare_batch, + metrics={ + "nll": Loss(criterion, output_transform=_select_pred_and_mask, device=device), + "pixa": pixelwise_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "cacc": class_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "mca": mean_class_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "ciou": class_iou(n_classes, output_transform=_select_pred_and_mask, device=device), + "mIoU": mean_iou(n_classes, output_transform=_select_pred_and_mask, device=device), + }, + device=device, + ) + + # Set the validation run to start on the epoch completion of the training run + + trainer.add_event_handler(Events.EPOCH_COMPLETED, Evaluator(evaluator, val_loader)) + + if local_rank == 0: # Run only on master process + + trainer.add_event_handler( + Events.ITERATION_COMPLETED, + logging_handlers.log_training_output(log_interval=config.TRAIN.BATCH_SIZE_PER_GPU), + ) + trainer.add_event_handler(Events.EPOCH_STARTED, logging_handlers.log_lr(optimizer)) + + try: + output_dir = generate_path( + config.OUTPUT_DIR, + git_branch(), + git_hash(), + config_file_name, + config.TRAIN.MODEL_DIR, + current_datetime(), + ) + except TypeError: + output_dir = generate_path(config.OUTPUT_DIR, config_file_name, config.TRAIN.MODEL_DIR, current_datetime(),) + + summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR)) + logger.info(f"Logging Tensorboard to {path.join(output_dir, config.LOG_DIR)}") + trainer.add_event_handler( + Events.EPOCH_STARTED, tensorboard_handlers.log_lr(summary_writer, optimizer, "epoch"), + ) + trainer.add_event_handler( + Events.ITERATION_COMPLETED, tensorboard_handlers.log_training_output(summary_writer), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + 
logging_handlers.log_metrics( + "Validation results", + metrics_dict={ + "nll": "Avg loss :", + "mIoU": " Avg IoU :", + "pixa": "Pixelwise Accuracy :", + "mca": "Mean Class Accuracy :", + }, + ), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + tensorboard_handlers.log_metrics( + summary_writer, + trainer, + "epoch", + metrics_dict={"mIoU": "Validation/IoU", "nll": "Validation/Loss", "mca": "Validation/MCA",}, + ), + ) + + def _select_max(pred_tensor): + return pred_tensor.max(1)[1] + + def _tensor_to_numpy(pred_tensor): + return pred_tensor.squeeze().cpu().numpy() + + transform_func = compose(np_to_tb, decode_segmap(n_classes=n_classes), _tensor_to_numpy) + transform_pred = compose(transform_func, _select_max) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, create_image_writer(summary_writer, "Validation/Image", "image"), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Validation/Mask", "mask", transform_func=transform_func), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Validation/Pred", "y_pred", transform_func=transform_pred,), + ) + + def snapshot_function(): + return (trainer.state.iteration % snapshot_duration) == 0 + + checkpoint_handler = SnapshotHandler( + output_dir, config.MODEL.NAME, extract_metric_from("mIoU"), snapshot_function, + ) + evaluator.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler, {"model": model}) + logger.info("Starting training") + + if debug: + trainer.run( + train_loader, + max_epochs=config.TRAIN.END_EPOCH, + epoch_length=config.TRAIN.BATCH_SIZE_PER_GPU * 2, + seed=config.SEED, + ) + else: + trainer.run( + train_loader, max_epochs=config.TRAIN.END_EPOCH, epoch_length=len(train_loader), seed=config.SEED + ) + + +if __name__ == "__main__": + fire.Fire(run) diff --git a/contrib/experiments/interpretation/dutchf3_patch/distributed/train.sh b/contrib/experiments/interpretation/dutchf3_patch/distributed/train.sh new file mode 100755 index 00000000..e9394ecd --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_patch/distributed/train.sh @@ -0,0 +1,3 @@ +#!/bin/bash +export PYTHONPATH=/data/home/mat/repos/DeepSeismic/interpretation:$PYTHONPATH +python -m torch.distributed.launch --nproc_per_node=8 train.py --cfg configs/hrnet.yaml \ No newline at end of file diff --git a/contrib/experiments/interpretation/dutchf3_section/README.md b/contrib/experiments/interpretation/dutchf3_section/README.md new file mode 100644 index 00000000..66d3cfcd --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_section/README.md @@ -0,0 +1,25 @@ +## F3 Netherlands Section Experiments +In this folder are training and testing scripts that work on the F3 Netherlands dataset. +You can run one model on this dataset: +* [SectionDeconvNet-Skip](local/configs/section_deconvnet_skip.yaml) + +This model takes 2D sections as input from the dataset whether these be inlines or crosslines and provides predictions for whole section. + +To understand the configuration files and the dafault parameters refer to this [section in the top level README](../../../README.md#configuration-files) + +### Setup + +Please set up a conda environment following the instructions in the top-level [README.md](../../../README.md#setting-up-environment) file. +Also follow instructions for [downloading and preparing](../../../README.md#f3-Netherlands) the data. 
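+For convenience, a condensed sketch of those setup steps is shown below (run from the repository root; the top-level README remains the authoritative reference and the exact package set may differ):
+
+```bash
+# create and activate the conda environment
+conda env create -f environment/anaconda/local/environment.yml
+conda activate seismic-interpretation
+# install the shared interpretation and computer vision utility packages
+pip install -e interpretation
+pip install -e cv_lib
+```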
+ +### Running experiments + +Now you're all set to run training and testing experiments on the F3 Netherlands dataset. Please start from the `train.sh` and `test.sh` scripts under the `local/` directory, which invoke the corresponding python scripts. Take a look at the project configurations in (e.g in `default.py`) for experiment options and modify if necessary. + +### Monitoring progress with TensorBoard +- from the this directory, run `tensorboard --logdir='output'` (all runtime logging information is +written to the `output` folder +- open a web-browser and go to either vmpublicip:6006 if running remotely or localhost:6006 if running locally +> **NOTE**:If running remotely remember that the port must be open and accessible + +More information on Tensorboard can be found [here](https://www.tensorflow.org/get_started/summaries_and_tensorboard#launching_tensorboard). diff --git a/contrib/experiments/interpretation/dutchf3_section/local/configs/section_deconvnet_skip.yaml b/contrib/experiments/interpretation/dutchf3_section/local/configs/section_deconvnet_skip.yaml new file mode 100644 index 00000000..9ce3937e --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_section/local/configs/section_deconvnet_skip.yaml @@ -0,0 +1,45 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 + +DATASET: + NUM_CLASSES: 6 + ROOT: /mnt/dutchf3 + CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +MODEL: + NAME: section_deconvnet_skip + IN_CHANNELS: 1 + +TRAIN: + BATCH_SIZE_PER_GPU: 16 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.001 + MAX_LR: 0.02 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "none" # Can be None, Patch and Section + MEAN: 0.0009997 # 0.0009996710808862074 + STD: 0.20977 # 0.20976548783479299 + MODEL_DIR: "models" + +VALIDATION: + BATCH_SIZE_PER_GPU: 32 + +TEST: + MODEL_PATH: "" + TEST_STRIDE: 10 + SPLIT: 'Both' # Can be Both, Test1, Test2 + INLINE: True + CROSSLINE: True \ No newline at end of file diff --git a/contrib/experiments/interpretation/dutchf3_section/local/default.py b/contrib/experiments/interpretation/dutchf3_section/local/default.py new file mode 100644 index 00000000..2b4888d2 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_section/local/default.py @@ -0,0 +1,93 @@ +# ------------------------------------------------------------------------------ +# Copyright (c) Microsoft +# Licensed under the MIT License. 
+# ------------------------------------------------------------------------------ + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +from yacs.config import CfgNode as CN + +_C = CN() + + +_C.OUTPUT_DIR = "output" # Base directory for all output (logs, models, etc) +_C.LOG_DIR = "" # This will be a subdirectory inside OUTPUT_DIR +_C.GPUS = (0,) +_C.WORKERS = 4 +_C.PRINT_FREQ = 20 +_C.AUTO_RESUME = False +_C.PIN_MEMORY = True +_C.LOG_CONFIG = "./logging.conf" # Logging config file relative to the experiment +_C.SEED = 42 +_C.OPENCV_BORDER_CONSTANT = 0 + +# Cudnn related params +_C.CUDNN = CN() +_C.CUDNN.BENCHMARK = True +_C.CUDNN.DETERMINISTIC = False +_C.CUDNN.ENABLED = True + +# DATASET related params +_C.DATASET = CN() +_C.DATASET.ROOT = "/mnt/dutchf3" +_C.DATASET.NUM_CLASSES = 6 +_C.DATASET.CLASS_WEIGHTS = [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] + +# common params for NETWORK +_C.MODEL = CN() +_C.MODEL.NAME = "section_deconvnet_skip" +_C.MODEL.IN_CHANNELS = 1 +_C.MODEL.PRETRAINED = "" +_C.MODEL.EXTRA = CN(new_allowed=True) + +# training +_C.TRAIN = CN() +_C.TRAIN.MIN_LR = 0.001 +_C.TRAIN.MAX_LR = 0.01 +_C.TRAIN.MOMENTUM = 0.9 +_C.TRAIN.BEGIN_EPOCH = 0 +_C.TRAIN.END_EPOCH = 100 +_C.TRAIN.BATCH_SIZE_PER_GPU = 16 +_C.TRAIN.WEIGHT_DECAY = 0.0001 +_C.TRAIN.SNAPSHOTS = 5 +_C.TRAIN.MODEL_DIR = "models" # This will be a subdirectory inside OUTPUT_DIR +_C.TRAIN.AUGMENTATION = True +_C.TRAIN.MEAN = 0.0009997 # 0.0009996710808862074 +_C.TRAIN.STD = 0.20977 # 0.20976548783479299 +_C.TRAIN.DEPTH = "none" # Options are: none, patch, and section +# None adds no depth information and the num of channels remains at 1 +# Patch adds depth per patch so is simply the height of that patch from 0 to 1, channels=3 +# Section adds depth per section so contains depth information for the whole section, channels=3 + +# validation +_C.VALIDATION = CN() +_C.VALIDATION.BATCH_SIZE_PER_GPU = 16 + +# TEST +_C.TEST = CN() +_C.TEST.MODEL_PATH = "" +_C.TEST.TEST_STRIDE = 10 +_C.TEST.SPLIT = "Both" # Can be Both, Test1, Test2 +_C.TEST.INLINE = True +_C.TEST.CROSSLINE = True + + +def update_config(cfg, options=None, config_file=None): + cfg.defrost() + + if config_file: + cfg.merge_from_file(config_file) + + if options: + cfg.merge_from_list(options) + + cfg.freeze() + + +if __name__ == "__main__": + import sys + + with open(sys.argv[1], "w") as f: + print(_C, file=f) diff --git a/contrib/experiments/interpretation/dutchf3_section/local/logging.conf b/contrib/experiments/interpretation/dutchf3_section/local/logging.conf new file mode 100644 index 00000000..a67fb7a2 --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_section/local/logging.conf @@ -0,0 +1,37 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. 
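`default.py` above exposes `update_config`, and the `run()`/`test()` entry points pass their positional Fire arguments straight into it as `options`. The sketch below shows roughly what that amounts to; it assumes you run it from this experiment's `local/` directory so `default.py` is importable, and the override values and YAML path are made up for illustration.

```python
# Sketch of how CLI overrides reach the config; values and the YAML path are assumptions.
from default import _C as config
from default import update_config

# Roughly equivalent to: python train.py TRAIN.END_EPOCH 2 TRAIN.SNAPSHOTS 1 --cfg configs/section_deconvnet_skip.yaml
update_config(
    config,
    options=["TRAIN.END_EPOCH", 2, "TRAIN.SNAPSHOTS", 1],
    config_file="configs/section_deconvnet_skip.yaml",
)

# merge_from_file runs first and merge_from_list second, so command-line options win
# over both the defaults in default.py and the values in the YAML file.
print(config.TRAIN.END_EPOCH, config.TRAIN.SNAPSHOTS)  # 2 1
```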
+ +[loggers] +keys=root,__main__,event_handlers + +[handlers] +keys=consoleHandler + +[formatters] +keys=simpleFormatter + +[logger_root] +level=INFO +handlers=consoleHandler + +[logger___main__] +level=INFO +handlers=consoleHandler +qualname=__main__ +propagate=0 + +[logger_event_handlers] +level=INFO +handlers=consoleHandler +qualname=event_handlers +propagate=0 + +[handler_consoleHandler] +class=StreamHandler +level=INFO +formatter=simpleFormatter +args=(sys.stdout,) + +[formatter_simpleFormatter] +format=%(asctime)s - %(name)s - %(levelname)s - %(message)s + diff --git a/contrib/experiments/interpretation/dutchf3_section/local/test.py b/contrib/experiments/interpretation/dutchf3_section/local/test.py new file mode 100644 index 00000000..3bc4cabb --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_section/local/test.py @@ -0,0 +1,205 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# commitHash: c76bf579a0d5090ebd32426907d051d499f3e847 +# url: https://github.com/yalaudah/facies_classification_benchmark + +""" +Modified version of the Alaudah testing script +# TODO: Needs to be improved. Needs to be able to run across multiple GPUs and better +# factoring around the loader +# issue: https://github.com/microsoft/seismic-deeplearning/issues/268 +""" + +import logging +import logging.config +import os +from os import path + +import fire +import numpy as np +import torch +from albumentations import Compose, Normalize +from cv_lib.utils import load_log_configuration +from cv_lib.segmentation import models + +from deepseismic_interpretation.dutchf3.data import get_test_loader +from default import _C as config +from default import update_config +from torch.utils import data +from toolz import take + + +_CLASS_NAMES = [ + "upper_ns", + "middle_ns", + "lower_ns", + "rijnland_chalk", + "scruff", + "zechstein", +] + + +class runningScore(object): + def __init__(self, n_classes): + self.n_classes = n_classes + self.confusion_matrix = np.zeros((n_classes, n_classes)) + + def _fast_hist(self, label_true, label_pred, n_class): + mask = (label_true >= 0) & (label_true < n_class) + hist = np.bincount(n_class * label_true[mask].astype(int) + label_pred[mask], minlength=n_class ** 2,).reshape( + n_class, n_class + ) + return hist + + def update(self, label_trues, label_preds): + for lt, lp in zip(label_trues, label_preds): + self.confusion_matrix += self._fast_hist(lt.flatten(), lp.flatten(), self.n_classes) + + def get_scores(self): + """Returns accuracy score evaluation result. 
+ - overall accuracy + - mean accuracy + - mean IU + - fwavacc + """ + hist = self.confusion_matrix + acc = np.diag(hist).sum() / hist.sum() + acc_cls = np.diag(hist) / hist.sum(axis=1) + mean_acc_cls = np.nanmean(acc_cls) + iu = np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist)) + mean_iu = np.nanmean(iu) + freq = hist.sum(axis=1) / hist.sum() # fraction of the pixels that come from each class + fwavacc = (freq[freq > 0] * iu[freq > 0]).sum() + cls_iu = dict(zip(range(self.n_classes), iu)) + + return ( + { + "Pixel Acc: ": acc, + "Class Accuracy: ": acc_cls, + "Mean Class Acc: ": mean_acc_cls, + "Freq Weighted IoU: ": fwavacc, + "Mean IoU: ": mean_iu, + "confusion_matrix": self.confusion_matrix, + }, + cls_iu, + ) + + def reset(self): + self.confusion_matrix = np.zeros((self.n_classes, self.n_classes)) + + +def _evaluate_split(split, section_aug, model, device, running_metrics_overall, config, debug=False): + logger = logging.getLogger(__name__) + + TestSectionLoader = get_test_loader(config) + test_set = TestSectionLoader( + data_dir=config.DATASET.ROOT, split=split, is_transform=True, augmentations=section_aug, + ) + + n_classes = test_set.n_classes + + test_loader = data.DataLoader(test_set, batch_size=1, num_workers=config.WORKERS, shuffle=False) + if debug: + logger.info("Running in Debug/Test mode") + test_loader = take(1, test_loader) + + running_metrics_split = runningScore(n_classes) + + # testing mode: + with torch.no_grad(): # operations inside don't track history + model.eval() + total_iteration = 0 + for i, (images, labels) in enumerate(test_loader): + logger.info(f"split: {split}, section: {i}") + total_iteration = total_iteration + 1 + + outputs = model(images.to(device)) + + pred = outputs.detach().max(1)[1].cpu().numpy() + gt = labels.numpy() + running_metrics_split.update(gt, pred) + running_metrics_overall.update(gt, pred) + + # get scores + score, class_iou = running_metrics_split.get_scores() + + # Log split results + logger.info(f'Pixel Acc: {score["Pixel Acc: "]:.3f}') + for cdx, class_name in enumerate(_CLASS_NAMES): + logger.info(f' {class_name}_accuracy {score["Class Accuracy: "][cdx]:.3f}') + + logger.info(f'Mean Class Acc: {score["Mean Class Acc: "]:.3f}') + logger.info(f'Freq Weighted IoU: {score["Freq Weighted IoU: "]:.3f}') + logger.info(f'Mean IoU: {score["Mean IoU: "]:0.3f}') + running_metrics_split.reset() + + +def _write_section_file(labels, section_file): + # define indices of the array + irange, xrange, depth = labels.shape + + if config.TEST.INLINE: + i_list = list(range(irange)) + i_list = ["i_" + str(inline) for inline in i_list] + else: + i_list = [] + + if config.TEST.CROSSLINE: + x_list = list(range(xrange)) + x_list = ["x_" + str(crossline) for crossline in x_list] + else: + x_list = [] + + list_test = i_list + x_list + + file_object = open(section_file, "w") + file_object.write("\n".join(list_test)) + file_object.close() + + +def test(*options, cfg=None, debug=False): + update_config(config, options=options, config_file=cfg) + n_classes = config.DATASET.NUM_CLASSES + + # Start logging + load_log_configuration(config.LOG_CONFIG) + logger = logging.getLogger(__name__) + device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + log_dir, _ = os.path.split(config.TEST.MODEL_PATH) + + # load model: + model = getattr(models, config.MODEL.NAME).get_seg_model(config) + model.load_state_dict(torch.load(config.TEST.MODEL_PATH), strict=False) + model = model.to(device) # Send to GPU if available + + running_metrics_overall 
= runningScore(n_classes) + + # Augmentation + section_aug = Compose([Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1,)]) + + splits = ["test1", "test2"] if "Both" in config.TEST.SPLIT else [config.TEST.SPLIT] + + for sdx, split in enumerate(splits): + labels = np.load(path.join(config.DATASET.ROOT, "test_once", split + "_labels.npy")) + section_file = path.join(config.DATASET.ROOT, "splits", "section_" + split + ".txt") + _write_section_file(labels, section_file) + _evaluate_split(split, section_aug, model, device, running_metrics_overall, config, debug=debug) + + # FINAL TEST RESULTS: + score, class_iou = running_metrics_overall.get_scores() + + logger.info("--------------- FINAL RESULTS -----------------") + logger.info(f'Pixel Acc: {score["Pixel Acc: "]:.3f}') + for cdx, class_name in enumerate(_CLASS_NAMES): + logger.info(f' {class_name}_accuracy {score["Class Accuracy: "][cdx]:.3f}') + logger.info(f'Mean Class Acc: {score["Mean Class Acc: "]:.3f}') + logger.info(f'Freq Weighted IoU: {score["Freq Weighted IoU: "]:.3f}') + logger.info(f'Mean IoU: {score["Mean IoU: "]:0.3f}') + + # Save confusion matrix: + confusion = score["confusion_matrix"] + np.savetxt(path.join(log_dir, "confusion.csv"), confusion, delimiter=" ") + + +if __name__ == "__main__": + fire.Fire(test) diff --git a/contrib/experiments/interpretation/dutchf3_section/local/train.py b/contrib/experiments/interpretation/dutchf3_section/local/train.py new file mode 100644 index 00000000..484bbb4f --- /dev/null +++ b/contrib/experiments/interpretation/dutchf3_section/local/train.py @@ -0,0 +1,294 @@ +# Copyright (c) Microsoft Corporation. +# # Licensed under the MIT License. +# # /* spell-checker: disable */ + +import logging +import logging.config +from os import path + +import fire +import numpy as np +import torch +from albumentations import Compose, HorizontalFlip, Normalize + +from deepseismic_interpretation.dutchf3.data import decode_segmap, get_section_loader +from cv_lib.utils import load_log_configuration +from cv_lib.event_handlers import ( + SnapshotHandler, + logging_handlers, + tensorboard_handlers, +) +from cv_lib.event_handlers.logging_handlers import Evaluator +from cv_lib.event_handlers.tensorboard_handlers import ( + create_image_writer, + create_summary_writer, +) +from cv_lib.segmentation import models, extract_metric_from +from cv_lib.segmentation.dutchf3.engine import ( + create_supervised_evaluator, + create_supervised_trainer, +) +from cv_lib.segmentation.metrics import ( + pixelwise_accuracy, + class_accuracy, + mean_class_accuracy, + class_iou, + mean_iou, +) +from cv_lib.segmentation.dutchf3.utils import ( + current_datetime, + generate_path, + git_branch, + git_hash, + np_to_tb, +) +from default import _C as config +from default import update_config +from ignite.contrib.handlers import CosineAnnealingScheduler +from ignite.engine import Events +from ignite.utils import convert_tensor +from ignite.metrics import Loss +from toolz import compose +from torch.utils import data + + +def prepare_batch(batch, device="cuda", non_blocking=False): + x, y = batch + return ( + convert_tensor(x, device=device, non_blocking=non_blocking), + convert_tensor(y, device=device, non_blocking=non_blocking), + ) + + +def run(*options, cfg=None, debug=False): + """Run training and validation of model + + Notes: + Options can be passed in via the options argument and loaded from the cfg file + Options from default.py will be overridden by options loaded from cfg file + Options passed in via 
options argument will override option loaded from cfg file + + Args: + *options (str,int ,optional): Options used to overide what is loaded from the + config. To see what options are available consult + default.py + cfg (str, optional): Location of config file to load. Defaults to None. + """ + + update_config(config, options=options, config_file=cfg) + + # we will write the model under outputs / config_file_name / model_dir + config_file_name = "default_config" if not cfg else cfg.split("/")[-1].split(".")[0] + + # Start logging + load_log_configuration(config.LOG_CONFIG) + logger = logging.getLogger(__name__) + logger.debug(config.WORKERS) + epochs_per_cycle = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS + torch.backends.cudnn.benchmark = config.CUDNN.BENCHMARK + + torch.manual_seed(config.SEED) + if torch.cuda.is_available(): + torch.cuda.manual_seed_all(config.SEED) + np.random.seed(seed=config.SEED) + + # Setup Augmentations + basic_aug = Compose([Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1)]) + if config.TRAIN.AUGMENTATION: + train_aug = Compose([basic_aug, HorizontalFlip(p=0.5)]) + val_aug = basic_aug + else: + train_aug = val_aug = basic_aug + + TrainLoader = get_section_loader(config) + + train_set = TrainLoader(data_dir=config.DATASET.ROOT, split="train", is_transform=True, augmentations=train_aug,) + + val_set = TrainLoader(data_dir=config.DATASET.ROOT, split="val", is_transform=True, augmentations=val_aug,) + + class CustomSampler(torch.utils.data.Sampler): + def __init__(self, data_source): + self.data_source = data_source + + def __iter__(self): + char = ["i" if np.random.randint(2) == 1 else "x"] + self.indices = [idx for (idx, name) in enumerate(self.data_source) if char[0] in name] + return (self.indices[i] for i in torch.randperm(len(self.indices))) + + def __len__(self): + return len(self.data_source) + + n_classes = train_set.n_classes + + val_list = val_set.sections + train_list = val_set.sections + + train_loader = data.DataLoader( + train_set, + batch_size=config.TRAIN.BATCH_SIZE_PER_GPU, + sampler=CustomSampler(train_list), + num_workers=config.WORKERS, + shuffle=False, + ) + + if debug: + val_set = data.Subset(val_set, range(3)) + + val_loader = data.DataLoader( + val_set, + batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU, + sampler=CustomSampler(val_list), + num_workers=config.WORKERS, + ) + + model = getattr(models, config.MODEL.NAME).get_seg_model(config) + + device = "cpu" + if torch.cuda.is_available(): + device = "cuda" + model = model.to(device) # Send to GPU + + optimizer = torch.optim.SGD( + model.parameters(), + lr=config.TRAIN.MAX_LR, + momentum=config.TRAIN.MOMENTUM, + weight_decay=config.TRAIN.WEIGHT_DECAY, + ) + + try: + output_dir = generate_path( + config.OUTPUT_DIR, git_branch(), git_hash(), config_file_name, config.TRAIN.MODEL_DIR, current_datetime(), + ) + except TypeError: + output_dir = generate_path(config.OUTPUT_DIR, config_file_name, config.TRAIN.MODEL_DIR, current_datetime(),) + + summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR)) + + snapshot_duration = epochs_per_cycle * len(train_loader) if not debug else 2 * len(train_loader) + scheduler = CosineAnnealingScheduler( + optimizer, "lr", config.TRAIN.MAX_LR, config.TRAIN.MIN_LR, cycle_size=snapshot_duration + ) + + # weights are inversely proportional to the frequency of the classes in + # the training set + class_weights = torch.tensor(config.DATASET.CLASS_WEIGHTS, device=device, requires_grad=False) + + criterion = 
torch.nn.CrossEntropyLoss(weight=class_weights, ignore_index=255, reduction="mean") + + trainer = create_supervised_trainer(model, optimizer, criterion, prepare_batch, device=device) + + trainer.add_event_handler(Events.ITERATION_STARTED, scheduler) + + trainer.add_event_handler( + Events.ITERATION_COMPLETED, logging_handlers.log_training_output(log_interval=config.TRAIN.BATCH_SIZE_PER_GPU), + ) + + trainer.add_event_handler(Events.EPOCH_STARTED, logging_handlers.log_lr(optimizer)) + + trainer.add_event_handler( + Events.EPOCH_STARTED, tensorboard_handlers.log_lr(summary_writer, optimizer, "epoch"), + ) + + trainer.add_event_handler( + Events.ITERATION_COMPLETED, tensorboard_handlers.log_training_output(summary_writer), + ) + + def _select_pred_and_mask(model_out_dict): + return (model_out_dict["y_pred"].squeeze(), model_out_dict["mask"].squeeze()) + + evaluator = create_supervised_evaluator( + model, + prepare_batch, + metrics={ + "nll": Loss(criterion, output_transform=_select_pred_and_mask, device=device), + "pixacc": pixelwise_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "cacc": class_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "mca": mean_class_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "ciou": class_iou(n_classes, output_transform=_select_pred_and_mask, device=device), + "mIoU": mean_iou(n_classes, output_transform=_select_pred_and_mask, device=device), + }, + device=device, + ) + + trainer.add_event_handler(Events.EPOCH_COMPLETED, Evaluator(evaluator, val_loader)) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + logging_handlers.log_metrics( + "Validation results", + metrics_dict={ + "nll": "Avg loss :", + "pixacc": "Pixelwise Accuracy :", + "mca": "Avg Class Accuracy :", + "mIoU": "Avg Class IoU :", + }, + ), + ) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + logging_handlers.log_class_metrics( + "Per class validation results", metrics_dict={"ciou": "Class IoU :", "cacc": "Class Accuracy :"}, + ), + ) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + tensorboard_handlers.log_metrics( + summary_writer, + trainer, + "epoch", + metrics_dict={ + "mIoU": "Validation/mIoU", + "nll": "Validation/Loss", + "mca": "Validation/MCA", + "pixacc": "Validation/Pixel_Acc", + }, + ), + ) + + def _select_max(pred_tensor): + return pred_tensor.max(1)[1] + + def _tensor_to_numpy(pred_tensor): + return pred_tensor.squeeze().cpu().numpy() + + transform_func = compose(np_to_tb, decode_segmap(n_classes=n_classes), _tensor_to_numpy) + + transform_pred = compose(transform_func, _select_max) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, create_image_writer(summary_writer, "Validation/Image", "image"), + ) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Validation/Mask", "mask", transform_func=transform_func), + ) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Validation/Pred", "y_pred", transform_func=transform_pred), + ) + + def snapshot_function(): + return (trainer.state.iteration % snapshot_duration) == 0 + + checkpoint_handler = SnapshotHandler(output_dir, config.MODEL.NAME, extract_metric_from("mIoU"), snapshot_function,) + + evaluator.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler, {"model": model}) + + logger.info("Starting training") + if debug: + trainer.run( + train_loader, + max_epochs=config.TRAIN.END_EPOCH, + 
epoch_length=config.TRAIN.BATCH_SIZE_PER_GPU, + seed=config.SEED, + ) + else: + trainer.run(train_loader, max_epochs=config.TRAIN.END_EPOCH, epoch_length=len(train_loader), seed=config.SEED) + + +if __name__ == "__main__": + fire.Fire(run) diff --git a/contrib/experiments/interpretation/dutchf3_voxel/configs/texture_net.yaml b/contrib/experiments/interpretation/dutchf3_voxel/configs/texture_net.yaml index aeeffb86..3ff72dca 100644 --- a/contrib/experiments/interpretation/dutchf3_voxel/configs/texture_net.yaml +++ b/contrib/experiments/interpretation/dutchf3_voxel/configs/texture_net.yaml @@ -29,7 +29,7 @@ TRAIN: LR: 0.02 MOMENTUM: 0.9 WEIGHT_DECAY: 0.0001 - DEPTH: "voxel" # Options are No, Patch, Section and Voxel + DEPTH: "voxel" # Options are none, patch, section and voxel MODEL_DIR: "models" VALIDATION: diff --git a/contrib/experiments/interpretation/dutchf3_voxel/default.py b/contrib/experiments/interpretation/dutchf3_voxel/default.py index 100da598..bcf84731 100644 --- a/contrib/experiments/interpretation/dutchf3_voxel/default.py +++ b/contrib/experiments/interpretation/dutchf3_voxel/default.py @@ -24,6 +24,8 @@ _C.PRINT_FREQ = 20 _C.LOG_CONFIG = "logging.conf" _C.SEED = 42 +_C.OPENCV_BORDER_CONSTANT = 0 + # size of voxel cube: WINDOW_SIZE x WINDOW_SIZE x WINDOW_SIZE; used for 3D models only _C.WINDOW_SIZE = 65 @@ -50,7 +52,7 @@ _C.TRAIN.LR = 0.01 _C.TRAIN.MOMENTUM = 0.9 _C.TRAIN.WEIGHT_DECAY = 0.0001 -_C.TRAIN.DEPTH = "voxel" # Options are None, Patch and Section +_C.TRAIN.DEPTH = "voxel" # Options are none, patch and section _C.TRAIN.MODEL_DIR = "models" # This will be a subdirectory inside OUTPUT_DIR # validation diff --git a/contrib/experiments/interpretation/dutchf3_voxel/train.py b/contrib/experiments/interpretation/dutchf3_voxel/train.py index bd8cdf4b..3864e38f 100644 --- a/contrib/experiments/interpretation/dutchf3_voxel/train.py +++ b/contrib/experiments/interpretation/dutchf3_voxel/train.py @@ -208,7 +208,7 @@ def _select_pred_and_mask(model_out): summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR)) - snapshot_duration = 1 + snapshot_duration = 2 def snapshot_function(): return (trainer.state.iteration % snapshot_duration) == 0 diff --git a/contrib/experiments/interpretation/penobscot/README.md b/contrib/experiments/interpretation/penobscot/README.md new file mode 100644 index 00000000..d870ac1c --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/README.md @@ -0,0 +1,27 @@ +# Seismic Interpretation on Penobscot dataset +In this folder are training and testing scripts that work on the Penobscot dataset. +You can run two different models on this dataset: +* [HRNet](local/configs/hrnet.yaml) +* [SEResNet](local/configs/seresnet_unet.yaml) + +All these models take 2D patches of the dataset as input and provide predictions for those patches. The patches need to be stitched together to form a whole inline or crossline. + +To understand the configuration files and the dafault parameters refer to this [section in the top level README](../../../README.md#configuration-files) + +### Setup + +Please set up a conda environment following the instructions in the top-level [README.md](../../../README.md#setting-up-environment) file. +Also follow instructions for [downloading and preparing](../../../README.md#penobscot) the data. + +### Usage +- [`train.sh`](local/train.sh) - Will train the Segmentation model. The default configuration will execute for 300 epochs which will complete in around 3 days on a V100 GPU. 
During these 300 epochs successive snapshots will be taken. By default a cyclic learning rate is applied. +- [`test.sh`](local/test.sh) - Will test your model against the test portion of the dataset. You will be able to view the performance of the trained model in TensorBoard. + +### Monitoring progress with TensorBoard +- from this directory, run `tensorboard --logdir='output'` (all runtime logging information is +written to the `output` folder) +- open a web browser and go to either vmpublicip:6006 if running remotely or localhost:6006 if running locally +> **NOTE**: If running remotely, remember that the port must be open and accessible + +More information on TensorBoard can be found [here](https://www.tensorflow.org/get_started/summaries_and_tensorboard#launching_tensorboard). + diff --git a/contrib/experiments/interpretation/penobscot/local/configs/hrnet.yaml b/contrib/experiments/interpretation/penobscot/local/configs/hrnet.yaml new file mode 100644 index 00000000..7c711177 --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/configs/hrnet.yaml @@ -0,0 +1,108 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS: 4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 +OPENCV_BORDER_CONSTANT: 0 + + +DATASET: + NUM_CLASSES: 7 + ROOT: /mnt/penobscot + CLASS_WEIGHTS: [0.02630481, 0.05448931, 0.0811898 , 0.01866496, 0.15868563, 0.0875993 , 0.5730662] + INLINE_HEIGHT: 1501 + INLINE_WIDTH: 481 + +MODEL: + NAME: seg_hrnet + IN_CHANNELS: 3 + PRETRAINED: '/mnt/hrnet_pretrained/image_classification/hrnetv2_w48_imagenet_pretrained.pth' + EXTRA: + FINAL_CONV_KERNEL: 1 + STAGE2: + NUM_MODULES: 1 + NUM_BRANCHES: 2 + BLOCK: BASIC + NUM_BLOCKS: + - 4 + - 4 + NUM_CHANNELS: + - 48 + - 96 + FUSE_METHOD: SUM + STAGE3: + NUM_MODULES: 4 + NUM_BRANCHES: 3 + BLOCK: BASIC + NUM_BLOCKS: + - 4 + - 4 + - 4 + NUM_CHANNELS: + - 48 + - 96 + - 192 + FUSE_METHOD: SUM + STAGE4: + NUM_MODULES: 3 + NUM_BRANCHES: 4 + BLOCK: BASIC + NUM_BLOCKS: + - 4 + - 4 + - 4 + - 4 + NUM_CHANNELS: + - 48 + - 96 + - 192 + - 384 + FUSE_METHOD: SUM + +TRAIN: + COMPLETE_PATCHES_ONLY: True + BATCH_SIZE_PER_GPU: 32 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.0001 + MAX_LR: 0.02 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "patch" # Options are none, patch, and section + STRIDE: 64 + PATCH_SIZE: 128 + AUGMENTATIONS: + RESIZE: + HEIGHT: 256 + WIDTH: 256 + PAD: + HEIGHT: 256 + WIDTH: 256 + MEAN: [-0.0001777, 0.49, -0.0000688] # First value is for images, second for depth and then combination of both + STD: [0.14076 , 0.2717, 0.06286] + MAX: 1 + MODEL_DIR: "models" + + +VALIDATION: + BATCH_SIZE_PER_GPU: 32 + COMPLETE_PATCHES_ONLY: True + +TEST: + COMPLETE_PATCHES_ONLY: False + MODEL_PATH: "/data/home/mat/repos/DeepSeismic/experiments/segmentation/penobscot/local/output/penobscot/437970c875226e7e39c8109c0de8d21c5e5d6e3b/seg_hrnet/Sep25_144942/models/seg_hrnet_running_model_28.pth" + AUGMENTATIONS: + RESIZE: + HEIGHT: 256 + WIDTH: 256 + PAD: + HEIGHT: 256 + WIDTH: 256 diff --git a/contrib/experiments/interpretation/penobscot/local/configs/seresnet_unet.yaml b/contrib/experiments/interpretation/penobscot/local/configs/seresnet_unet.yaml new file mode 100644 index 00000000..800cf4ce --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/configs/seresnet_unet.yaml @@ -0,0 +1,64 @@ +CUDNN: + BENCHMARK: true + DETERMINISTIC: false + ENABLED: true +GPUS: (0,) +OUTPUT_DIR: 'output' +LOG_DIR: 'log' +WORKERS:
4 +PRINT_FREQ: 10 +LOG_CONFIG: logging.conf +SEED: 2019 + + +DATASET: + NUM_CLASSES: 7 + ROOT: /mnt/penobscot + CLASS_WEIGHTS: [0.02630481, 0.05448931, 0.0811898 , 0.01866496, 0.15868563, 0.0875993 , 0.5730662] + INLINE_HEIGHT: 1501 + INLINE_WIDTH: 481 +MODEL: + NAME: resnet_unet + IN_CHANNELS: 3 + +TRAIN: + COMPLETE_PATCHES_ONLY: True + BATCH_SIZE_PER_GPU: 16 + BEGIN_EPOCH: 0 + END_EPOCH: 300 + MIN_LR: 0.0001 + MAX_LR: 0.006 + MOMENTUM: 0.9 + WEIGHT_DECAY: 0.0001 + SNAPSHOTS: 5 + AUGMENTATION: True + DEPTH: "patch" # Options are none, patch, and section + STRIDE: 64 + PATCH_SIZE: 128 + AUGMENTATIONS: + RESIZE: + HEIGHT: 256 + WIDTH: 256 + PAD: + HEIGHT: 256 + WIDTH: 256 + MEAN: [-0.0001777, 0.49, -0.0000688] # First value is for images, second for depth and then combination of both + STD: [0.14076 , 0.2717, 0.06286] + MAX: 1 + MODEL_DIR: "models" + + +VALIDATION: + BATCH_SIZE_PER_GPU: 16 + COMPLETE_PATCHES_ONLY: True + +TEST: + COMPLETE_PATCHES_ONLY: False + MODEL_PATH: "/data/home/vapaunic/repos/DeepSeismic/experiments/interpretation/penobscot/local/output/vapaunic/metrics/4120aa99152b6e4f92f8134b783ac63c8131e1ed/resnet_unet/Nov05_105704/models/resnet_unet_running_model_1.pth" + AUGMENTATIONS: + RESIZE: + HEIGHT: 256 + WIDTH: 256 + PAD: + HEIGHT: 256 + WIDTH: 256 diff --git a/contrib/experiments/interpretation/penobscot/local/default.py b/contrib/experiments/interpretation/penobscot/local/default.py new file mode 100644 index 00000000..d72946ce --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/default.py @@ -0,0 +1,122 @@ +# ------------------------------------------------------------------------------ +# Copyright (c) Microsoft +# Licensed under the MIT License. +# ------------------------------------------------------------------------------ + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +from yacs.config import CfgNode as CN + +_C = CN() + +_C.OUTPUT_DIR = "output" # This will be the base directory for all output, such as logs and saved models + +_C.LOG_DIR = "" # This will be a subdirectory inside OUTPUT_DIR +_C.GPUS = (0,) +_C.WORKERS = 4 +_C.PRINT_FREQ = 20 +_C.AUTO_RESUME = False +_C.PIN_MEMORY = True +_C.LOG_CONFIG = "logging.conf" +_C.SEED = 42 +_C.OPENCV_BORDER_CONSTANT = 0 + +# size of voxel cube: WINDOW_SIZE x WINDOW_SIZE x WINDOW_SIZE; used for 3D models only +_C.WINDOW_SIZE = 65 + +# Cudnn related params +_C.CUDNN = CN() +_C.CUDNN.BENCHMARK = True +_C.CUDNN.DETERMINISTIC = False +_C.CUDNN.ENABLED = True + +# DATASET related params +_C.DATASET = CN() +_C.DATASET.ROOT = "" +_C.DATASET.NUM_CLASSES = 7 +_C.DATASET.CLASS_WEIGHTS = [ + 0.02630481, + 0.05448931, + 0.0811898, + 0.01866496, + 0.15868563, + 0.0875993, + 0.5730662, +] +_C.DATASET.INLINE_HEIGHT = 1501 +_C.DATASET.INLINE_WIDTH = 481 + +# common params for NETWORK +_C.MODEL = CN() +_C.MODEL.NAME = "resnet_unet" +_C.MODEL.IN_CHANNELS = 1 +_C.MODEL.PRETRAINED = "" +_C.MODEL.EXTRA = CN(new_allowed=True) + +# training +_C.TRAIN = CN() +_C.TRAIN.COMPLETE_PATCHES_ONLY = True +_C.TRAIN.MIN_LR = 0.001 +_C.TRAIN.MAX_LR = 0.01 +_C.TRAIN.MOMENTUM = 0.9 +_C.TRAIN.BEGIN_EPOCH = 0 +_C.TRAIN.END_EPOCH = 300 +_C.TRAIN.BATCH_SIZE_PER_GPU = 32 +_C.TRAIN.WEIGHT_DECAY = 0.0001 +_C.TRAIN.SNAPSHOTS = 5 +_C.TRAIN.MODEL_DIR = "models" # This will be a subdirectory inside OUTPUT_DIR +_C.TRAIN.AUGMENTATION = True +_C.TRAIN.STRIDE = 64 +_C.TRAIN.PATCH_SIZE = 128 +_C.TRAIN.MEAN = [-0.0001777, 0.49, -0.0000688] # 0.0009996710808862074 +_C.TRAIN.STD = [0.14076, 
0.2717, 0.06286] # 0.20976548783479299 +_C.TRAIN.MAX = 1 +_C.TRAIN.DEPTH = "patch" # Options are none, patch, and section +# None adds no depth information and the num of channels remains at 1 +# Patch adds depth per patch so is simply the height of that patch from 0 to 1, channels=3 +# Section adds depth per section so contains depth information for the whole section, channels=3 +_C.TRAIN.AUGMENTATIONS = CN() +_C.TRAIN.AUGMENTATIONS.RESIZE = CN() +_C.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT = 256 +_C.TRAIN.AUGMENTATIONS.RESIZE.WIDTH = 256 +_C.TRAIN.AUGMENTATIONS.PAD = CN() +_C.TRAIN.AUGMENTATIONS.PAD.HEIGHT = 256 +_C.TRAIN.AUGMENTATIONS.PAD.WIDTH = 256 + +# validation +_C.VALIDATION = CN() +_C.VALIDATION.BATCH_SIZE_PER_GPU = 32 +_C.VALIDATION.COMPLETE_PATCHES_ONLY = True + +# TEST +_C.TEST = CN() +_C.TEST.MODEL_PATH = "" +_C.TEST.COMPLETE_PATCHES_ONLY = True +_C.TEST.AUGMENTATIONS = CN() +_C.TEST.AUGMENTATIONS.RESIZE = CN() +_C.TEST.AUGMENTATIONS.RESIZE.HEIGHT = 256 +_C.TEST.AUGMENTATIONS.RESIZE.WIDTH = 256 +_C.TEST.AUGMENTATIONS.PAD = CN() +_C.TEST.AUGMENTATIONS.PAD.HEIGHT = 256 +_C.TEST.AUGMENTATIONS.PAD.WIDTH = 256 + + +def update_config(cfg, options=None, config_file=None): + cfg.defrost() + + if config_file: + cfg.merge_from_file(config_file) + + if options: + cfg.merge_from_list(options) + + cfg.freeze() + + +if __name__ == "__main__": + import sys + + with open(sys.argv[1], "w") as f: + print(_C, file=f) diff --git a/contrib/experiments/interpretation/penobscot/local/logging.conf b/contrib/experiments/interpretation/penobscot/local/logging.conf new file mode 100644 index 00000000..56334fc4 --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/logging.conf @@ -0,0 +1,34 @@ +[loggers] +keys=root,__main__,event_handlers + +[handlers] +keys=consoleHandler + +[formatters] +keys=simpleFormatter + +[logger_root] +level=INFO +handlers=consoleHandler + +[logger___main__] +level=INFO +handlers=consoleHandler +qualname=__main__ +propagate=0 + +[logger_event_handlers] +level=INFO +handlers=consoleHandler +qualname=event_handlers +propagate=0 + +[handler_consoleHandler] +class=StreamHandler +level=INFO +formatter=simpleFormatter +args=(sys.stdout,) + +[formatter_simpleFormatter] +format=%(asctime)s - %(name)s - %(levelname)s - %(message)s + diff --git a/contrib/experiments/interpretation/penobscot/local/test.py b/contrib/experiments/interpretation/penobscot/local/test.py new file mode 100644 index 00000000..e56b82de --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/test.py @@ -0,0 +1,288 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. 
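The test script that follows stitches patch predictions back into whole inlines, and for that it derives two quantities from the augmentation settings above: how much padding was added around each resized patch and by what integer factor each patch was scaled. A worked example with the values used in the Penobscot configs (PAD 256, RESIZE 256, PATCH_SIZE 128); the numbers are taken from those files and only restated here for illustration:

```python
# Worked example of the geometry computed by _padding_from() / _scale_from() in test.py below,
# using the values from hrnet.yaml / seresnet_unet.yaml above.
pad_height = 256      # config.TEST.AUGMENTATIONS.PAD.HEIGHT
resize_height = 256   # config.TEST.AUGMENTATIONS.RESIZE.HEIGHT
patch_size = 128      # config.TRAIN.PATCH_SIZE

padding = pad_height - resize_height   # 0: no extra border once the patch is resized
assert pad_height % patch_size == 0, "padded size must be a whole multiple of the raw patch size"
scale = pad_height // patch_size       # 2: each 128x128 patch is evaluated at 256x256

print(padding, scale)  # 0 2
```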
+# +# To Test: +# python test.py TRAIN.END_EPOCH 1 TRAIN.SNAPSHOTS 1 --cfg "configs/hrnet.yaml" --debug +# +# /* spell-checker: disable */ +"""Train models on Penobscot dataset + +Test models using PyTorch + +Time to run on single V100: 30 minutes +""" + + +import logging +import logging.config +from itertools import chain +from os import path + +import fire +import numpy as np +import torch +import torchvision +from albumentations import Compose, Normalize, PadIfNeeded, Resize +from ignite.engine import Events +from ignite.metrics import Loss +from ignite.utils import convert_tensor +from toolz import compose, tail, take +from toolz.sandbox.core import unzip +from torch.utils import data + +from cv_lib.event_handlers import logging_handlers, tensorboard_handlers +from cv_lib.event_handlers.tensorboard_handlers import create_image_writer, create_summary_writer +from cv_lib.segmentation import models +from cv_lib.segmentation.dutchf3.utils import current_datetime, generate_path, git_branch, git_hash, np_to_tb +from cv_lib.segmentation.metrics import class_accuracy, class_iou, mean_class_accuracy, mean_iou, pixelwise_accuracy +from cv_lib.segmentation.penobscot.engine import create_supervised_evaluator +from cv_lib.utils import load_log_configuration +from deepseismic_interpretation.dutchf3.data import decode_segmap +from deepseismic_interpretation.penobscot.data import get_patch_dataset +from deepseismic_interpretation.penobscot.metrics import InlineMeanIoU +from default import _C as config +from default import update_config + + +def _prepare_batch(batch, device=None, non_blocking=False): + x, y, ids, patch_locations = batch + return ( + convert_tensor(x, device=device, non_blocking=non_blocking), + convert_tensor(y, device=device, non_blocking=non_blocking), + ids, + patch_locations, + ) + + +def _padding_from(config): + padding_height = config.TEST.AUGMENTATIONS.PAD.HEIGHT - config.TEST.AUGMENTATIONS.RESIZE.HEIGHT + padding_width = config.TEST.AUGMENTATIONS.PAD.WIDTH - config.TEST.AUGMENTATIONS.RESIZE.WIDTH + assert padding_height == padding_width, "The padding for the height and width need to be the same" + return int(padding_height) + + +def _scale_from(config): + scale_height = config.TEST.AUGMENTATIONS.PAD.HEIGHT / config.TRAIN.PATCH_SIZE + scale_width = config.TEST.AUGMENTATIONS.PAD.WIDTH / config.TRAIN.PATCH_SIZE + assert ( + config.TEST.AUGMENTATIONS.PAD.HEIGHT % config.TRAIN.PATCH_SIZE == 0 + ), "The scaling between the patch height and resized height must be whole number" + assert ( + config.TEST.AUGMENTATIONS.PAD.WIDTH % config.TRAIN.PATCH_SIZE == 0 + ), "The scaling between the patch width and resized height must be whole number" + assert scale_height == scale_width, "The scaling for the height and width must be the same" + return int(scale_height) + + +def _log_tensor_to_tensorboard(images_tensor, identifier, summary_writer, evaluator): + image_grid = torchvision.utils.make_grid(images_tensor, normalize=False, scale_each=False, nrow=2) + summary_writer.add_image(identifier, image_grid, evaluator.state.epoch) + + +_TOP_K = 2 # Number of best performing inlines to log to tensorboard +_BOTTOM_K = 2 # Number of worst performing inlines to log to tensorboard +mask_value = 255 + + +def run(*options, cfg=None, debug=False): + """Run testing of model + + Notes: + Options can be passed in via the options argument and loaded from the cfg file + Options from default.py will be overridden by options loaded from cfg file + Options passed in via options argument will override option loaded 
from cfg file + + Args: + *options (str,int ,optional): Options used to overide what is loaded from the + config. To see what options are available consult + default.py + cfg (str, optional): Location of config file to load. Defaults to None. + """ + + update_config(config, options=options, config_file=cfg) + + # Start logging + load_log_configuration(config.LOG_CONFIG) + logger = logging.getLogger(__name__) + logger.debug(config.WORKERS) + torch.backends.cudnn.benchmark = config.CUDNN.BENCHMARK + + torch.manual_seed(config.SEED) + if torch.cuda.is_available(): + torch.cuda.manual_seed_all(config.SEED) + np.random.seed(seed=config.SEED) + + # Setup Augmentations + test_aug = Compose( + [ + Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=config.TRAIN.MAX,), + PadIfNeeded( + min_height=config.TRAIN.PATCH_SIZE, + min_width=config.TRAIN.PATCH_SIZE, + border_mode=config.OPENCV_BORDER_CONSTANT, + always_apply=True, + mask_value=mask_value, + value=0, + ), + Resize( + config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True, + ), + PadIfNeeded( + min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT, + min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH, + border_mode=config.OPENCV_BORDER_CONSTANT, + always_apply=True, + mask_value=mask_value, + value=0, + ), + ] + ) + + PenobscotDataset = get_patch_dataset(config) + + test_set = PenobscotDataset( + config.DATASET.ROOT, + config.TRAIN.PATCH_SIZE, + config.TRAIN.STRIDE, + split="test", + transforms=test_aug, + n_channels=config.MODEL.IN_CHANNELS, + complete_patches_only=config.TEST.COMPLETE_PATCHES_ONLY, + ) + + logger.info(str(test_set)) + n_classes = test_set.n_classes + + test_loader = data.DataLoader( + test_set, batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS, + ) + + model = getattr(models, config.MODEL.NAME).get_seg_model(config) + logger.info(f"Loading model {config.TEST.MODEL_PATH}") + model.load_state_dict(torch.load(config.TEST.MODEL_PATH), strict=False) + + device = "cpu" + if torch.cuda.is_available(): + device = "cuda" + model = model.to(device) # Send to GPU + + try: + output_dir = generate_path(config.OUTPUT_DIR, git_branch(), git_hash(), config.MODEL.NAME, current_datetime(),) + except TypeError: + output_dir = generate_path(config.OUTPUT_DIR, config.MODEL.NAME, current_datetime(),) + + summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR)) + + # weights are inversely proportional to the frequency of the classes in + # the training set + class_weights = torch.tensor(config.DATASET.CLASS_WEIGHTS, device=device, requires_grad=False) + + criterion = torch.nn.CrossEntropyLoss(weight=class_weights, ignore_index=mask_value, reduction="mean") + + def _select_pred_and_mask(model_out_dict): + return (model_out_dict["y_pred"].squeeze(), model_out_dict["mask"].squeeze()) + + def _select_all(model_out_dict): + return ( + model_out_dict["y_pred"].squeeze(), + model_out_dict["mask"].squeeze(), + model_out_dict["ids"], + model_out_dict["patch_locations"], + ) + + inline_mean_iou = InlineMeanIoU( + config.DATASET.INLINE_HEIGHT, + config.DATASET.INLINE_WIDTH, + config.TRAIN.PATCH_SIZE, + n_classes, + padding=_padding_from(config), + scale=_scale_from(config), + output_transform=_select_all, + ) + + evaluator = create_supervised_evaluator( + model, + _prepare_batch, + metrics={ + "nll": Loss(criterion, output_transform=_select_pred_and_mask, device=device), + "inIoU": inline_mean_iou, + "pixa": pixelwise_accuracy(n_classes, 
output_transform=_select_pred_and_mask, device=device), + "cacc": class_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "mca": mean_class_accuracy(n_classes, output_transform=_select_pred_and_mask, device=device), + "ciou": class_iou(n_classes, output_transform=_select_pred_and_mask, device=device), + "mIoU": mean_iou(n_classes, output_transform=_select_pred_and_mask, device=device), + }, + device=device, + ) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + logging_handlers.log_metrics( + "Test results", + metrics_dict={ + "nll": "Avg loss :", + "mIoU": "Avg IoU :", + "pixa": "Pixelwise Accuracy :", + "mca": "Mean Class Accuracy :", + "inIoU": "Mean Inline IoU :", + }, + ), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + tensorboard_handlers.log_metrics( + summary_writer, + evaluator, + "epoch", + metrics_dict={"mIoU": "Test/IoU", "nll": "Test/Loss", "mca": "Test/MCA", "inIoU": "Test/MeanInlineIoU",}, + ), + ) + + def _select_max(pred_tensor): + return pred_tensor.max(1)[1] + + def _tensor_to_numpy(pred_tensor): + return pred_tensor.squeeze().cpu().numpy() + + transform_func = compose( + np_to_tb, decode_segmap, _tensor_to_numpy, + ) + + transform_pred = compose(transform_func, _select_max) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, create_image_writer(summary_writer, "Test/Image", "image"), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, create_image_writer(summary_writer, "Test/Mask", "mask", transform_func=transform_func), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Test/Pred", "y_pred", transform_func=transform_pred), + ) + + logger.info("Starting training") + if debug: + evaluator.run(test_loader, max_epochs=1, epoch_length=1) + else: + evaluator.run(test_loader, max_epochs=1, epoch_length=len(test_loader)) + + # Log top N and bottom N inlines in terms of IoU to tensorboard + inline_ious = inline_mean_iou.iou_per_inline() + sorted_ious = sorted(inline_ious.items(), key=lambda x: x[1], reverse=True) + topk = ((inline_mean_iou.predictions[key], inline_mean_iou.masks[key]) for key, iou in take(_TOP_K, sorted_ious)) + bottomk = ( + (inline_mean_iou.predictions[key], inline_mean_iou.masks[key]) for key, iou in tail(_BOTTOM_K, sorted_ious) + ) + stack_and_decode = compose(transform_func, torch.stack) + predictions, masks = unzip(chain(topk, bottomk)) + predictions_tensor = stack_and_decode(list(predictions)) + masks_tensor = stack_and_decode(list(masks)) + _log_tensor_to_tensorboard(predictions_tensor, "Test/InlinePredictions", summary_writer, evaluator) + _log_tensor_to_tensorboard(masks_tensor, "Test/InlineMasks", summary_writer, evaluator) + + summary_writer.close() + + +if __name__ == "__main__": + fire.Fire(run) diff --git a/contrib/experiments/interpretation/penobscot/local/test.sh b/contrib/experiments/interpretation/penobscot/local/test.sh new file mode 100755 index 00000000..ad68cf2e --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/test.sh @@ -0,0 +1,2 @@ +#!/bin/bash +python test.py --cfg "configs/seresnet_unet.yaml" \ No newline at end of file diff --git a/contrib/experiments/interpretation/penobscot/local/train.py b/contrib/experiments/interpretation/penobscot/local/train.py new file mode 100644 index 00000000..eeeff141 --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/train.py @@ -0,0 +1,293 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. 
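A side note on the test script above: after evaluation it ranks whole inlines by IoU and pushes only the best and worst few to TensorBoard. A small sketch of that selection using `toolz`; the inline ids and IoU values below are made up for illustration.

```python
# Sketch of the top/bottom inline selection used for TensorBoard logging in test.py above.
# Inline ids and IoU values are assumptions, not real results.
from toolz import take, tail

inline_ious = {"inline_100": 0.81, "inline_200": 0.42, "inline_300": 0.67, "inline_400": 0.73}
sorted_ious = sorted(inline_ious.items(), key=lambda kv: kv[1], reverse=True)

best = list(take(2, sorted_ious))    # [('inline_100', 0.81), ('inline_400', 0.73)]
worst = list(tail(2, sorted_ious))   # [('inline_300', 0.67), ('inline_200', 0.42)]
print(best, worst)
```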
+# +# To Test: +# python train.py TRAIN.END_EPOCH 1 TRAIN.SNAPSHOTS 1 --cfg "configs/hrnet.yaml" --debug +# +# /* spell-checker: disable */ +"""Train models on Penobscot dataset + +Trains models using PyTorch +Uses a warmup schedule that then goes into a cyclic learning rate + +Time to run on single V100 for 300 epochs: 3.5 days +""" + +import logging +import logging.config +from os import path + +import fire +import numpy as np +import torch +from albumentations import Compose, HorizontalFlip, Normalize, PadIfNeeded, Resize +from ignite.contrib.handlers import CosineAnnealingScheduler +from ignite.engine import Events +from ignite.metrics import Loss +from ignite.utils import convert_tensor +from toolz import compose +from torch.utils import data + +from cv_lib.event_handlers import SnapshotHandler, logging_handlers, tensorboard_handlers +from cv_lib.event_handlers.logging_handlers import Evaluator +from cv_lib.event_handlers.tensorboard_handlers import create_image_writer, create_summary_writer +from cv_lib.segmentation import extract_metric_from, models +from cv_lib.segmentation.dutchf3.utils import current_datetime, generate_path, git_branch, git_hash, np_to_tb +from cv_lib.segmentation.metrics import class_accuracy, class_iou, mean_class_accuracy, mean_iou, pixelwise_accuracy +from cv_lib.segmentation.penobscot.engine import create_supervised_evaluator, create_supervised_trainer +from cv_lib.utils import load_log_configuration +from deepseismic_interpretation.dutchf3.data import decode_segmap +from deepseismic_interpretation.penobscot.data import get_patch_dataset +from default import _C as config +from default import update_config + +mask_value = 255 + +def _prepare_batch(batch, device=None, non_blocking=False): + x, y, ids, patch_locations = batch + return ( + convert_tensor(x, device=device, non_blocking=non_blocking), + convert_tensor(y, device=device, non_blocking=non_blocking), + ids, + patch_locations, + ) + + +def run(*options, cfg=None, debug=False): + """Run training and validation of model + + Notes: + Options can be passed in via the options argument and loaded from the cfg file + Options loaded from default.py will be overridden by those loaded from cfg file + Options passed in via options argument will override those loaded from cfg file + + Args: + *options (str, int, optional): Options used to overide what is loaded from the + config. To see what options are available consult + default.py + cfg (str, optional): Location of config file to load. Defaults to None. 
+ debug (bool): Places scripts in debug/test mode and only executes a few iterations + """ + + update_config(config, options=options, config_file=cfg) + + # we will write the model under outputs / config_file_name / model_dir + config_file_name = "default_config" if not cfg else cfg.split("/")[-1].split(".")[0] + + # Start logging + load_log_configuration(config.LOG_CONFIG) + logger = logging.getLogger(__name__) + logger.debug(config.WORKERS) + epochs_per_cycle = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS + torch.backends.cudnn.benchmark = config.CUDNN.BENCHMARK + + torch.manual_seed(config.SEED) + if torch.cuda.is_available(): + torch.cuda.manual_seed_all(config.SEED) + np.random.seed(seed=config.SEED) + + device = "cpu" + if torch.cuda.is_available(): + device = "cuda" + + # Setup Augmentations + basic_aug = Compose( + [ + Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=config.TRAIN.MAX,), + PadIfNeeded( + min_height=config.TRAIN.PATCH_SIZE, + min_width=config.TRAIN.PATCH_SIZE, + border_mode=config.OPENCV_BORDER_CONSTANT, + always_apply=True, + mask_value=mask_value, + value=0, + ), + Resize( + config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True, + ), + PadIfNeeded( + min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT, + min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH, + border_mode=config.OPENCV_BORDER_CONSTANT, + always_apply=True, + mask_value=mask_value, + value=0, + ), + ] + ) + if config.TRAIN.AUGMENTATION: + train_aug = Compose([basic_aug, HorizontalFlip(p=0.5)]) + val_aug = basic_aug + else: + train_aug = val_aug = basic_aug + + PenobscotDataset = get_patch_dataset(config) + + train_set = PenobscotDataset( + config.DATASET.ROOT, + config.TRAIN.PATCH_SIZE, + config.TRAIN.STRIDE, + split="train", + transforms=train_aug, + n_channels=config.MODEL.IN_CHANNELS, + complete_patches_only=config.TRAIN.COMPLETE_PATCHES_ONLY, + ) + + val_set = PenobscotDataset( + config.DATASET.ROOT, + config.TRAIN.PATCH_SIZE, + config.TRAIN.STRIDE, + split="val", + transforms=val_aug, + n_channels=config.MODEL.IN_CHANNELS, + complete_patches_only=config.VALIDATION.COMPLETE_PATCHES_ONLY, + ) + logger.info(train_set) + logger.info(val_set) + n_classes = train_set.n_classes + + train_loader = data.DataLoader( + train_set, batch_size=config.TRAIN.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS, shuffle=True, + ) + + if debug: + val_set = data.Subset(val_set, range(3)) + + val_loader = data.DataLoader(val_set, batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS) + + model = getattr(models, config.MODEL.NAME).get_seg_model(config) + + model = model.to(device) # Send to GPU + + optimizer = torch.optim.SGD( + model.parameters(), + lr=config.TRAIN.MAX_LR, + momentum=config.TRAIN.MOMENTUM, + weight_decay=config.TRAIN.WEIGHT_DECAY, + ) + + try: + output_dir = generate_path( + config.OUTPUT_DIR, git_branch(), git_hash(), config_file_name, config.TRAIN.MODEL_DIR, current_datetime(), + ) + except TypeError: + output_dir = generate_path(config.OUTPUT_DIR, config_file_name, config.TRAIN.MODEL_DIR, current_datetime(),) + + summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR)) + snapshot_duration = epochs_per_cycle * len(train_loader) if not debug else 2 * len(train_loader) + scheduler = CosineAnnealingScheduler( + optimizer, "lr", config.TRAIN.MAX_LR, config.TRAIN.MIN_LR, cycle_size=snapshot_duration + ) + + # weights are inversely proportional to the frequency of the classes in + # the 
training set + class_weights = torch.tensor(config.DATASET.CLASS_WEIGHTS, device=device, requires_grad=False) + + criterion = torch.nn.CrossEntropyLoss(weight=class_weights, ignore_index=mask_value, reduction="mean") + + trainer = create_supervised_trainer(model, optimizer, criterion, _prepare_batch, device=device) + + trainer.add_event_handler(Events.ITERATION_STARTED, scheduler) + + trainer.add_event_handler( + Events.ITERATION_COMPLETED, logging_handlers.log_training_output(log_interval=config.TRAIN.BATCH_SIZE_PER_GPU), + ) + trainer.add_event_handler(Events.EPOCH_STARTED, logging_handlers.log_lr(optimizer)) + trainer.add_event_handler( + Events.EPOCH_STARTED, tensorboard_handlers.log_lr(summary_writer, optimizer, "epoch"), + ) + trainer.add_event_handler( + Events.ITERATION_COMPLETED, tensorboard_handlers.log_training_output(summary_writer), + ) + + def _select_pred_and_mask(model_out_dict): + return (model_out_dict["y_pred"].squeeze(), model_out_dict["mask"].squeeze()) + + evaluator = create_supervised_evaluator( + model, + _prepare_batch, + metrics={ + "pixacc": pixelwise_accuracy(n_classes, output_transform=_select_pred_and_mask), + "nll": Loss(criterion, output_transform=_select_pred_and_mask), + "cacc": class_accuracy(n_classes, output_transform=_select_pred_and_mask), + "mca": mean_class_accuracy(n_classes, output_transform=_select_pred_and_mask), + "ciou": class_iou(n_classes, output_transform=_select_pred_and_mask), + "mIoU": mean_iou(n_classes, output_transform=_select_pred_and_mask), + }, + device=device, + ) + + # Set the validation run to start on the epoch completion of the training run + trainer.add_event_handler(Events.EPOCH_COMPLETED, Evaluator(evaluator, val_loader)) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + logging_handlers.log_metrics( + "Validation results", + metrics_dict={ + "nll": "Avg loss :", + "pixacc": "Pixelwise Accuracy :", + "mca": "Avg Class Accuracy :", + "mIoU": "Avg Class IoU :", + }, + ), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + tensorboard_handlers.log_metrics( + summary_writer, + trainer, + "epoch", + metrics_dict={ + "mIoU": "Validation/mIoU", + "nll": "Validation/Loss", + "mca": "Validation/MCA", + "pixacc": "Validation/Pixel_Acc", + }, + ), + ) + + def _select_max(pred_tensor): + return pred_tensor.max(1)[1] + + def _tensor_to_numpy(pred_tensor): + return pred_tensor.squeeze().cpu().numpy() + + transform_func = compose( + np_to_tb, decode_segmap, _tensor_to_numpy, + ) + + transform_pred = compose(transform_func, _select_max) + + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, create_image_writer(summary_writer, "Validation/Image", "image"), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Validation/Mask", "mask", transform_func=transform_func), + ) + evaluator.add_event_handler( + Events.EPOCH_COMPLETED, + create_image_writer(summary_writer, "Validation/Pred", "y_pred", transform_func=transform_pred), + ) + + def snapshot_function(): + return (trainer.state.iteration % snapshot_duration) == 0 + + checkpoint_handler = SnapshotHandler(output_dir, config.MODEL.NAME, extract_metric_from("mIoU"), snapshot_function,) + evaluator.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler, {"model": model}) + + logger.info("Starting training") + if debug: + trainer.run( + train_loader, + max_epochs=config.TRAIN.END_EPOCH, + epoch_length=config.TRAIN.BATCH_SIZE_PER_GPU, + seed=config.SEED, + ) + else: + trainer.run(train_loader, 
max_epochs=config.TRAIN.END_EPOCH, epoch_length=len(train_loader), seed=config.SEED) + + +if __name__ == "__main__": + fire.Fire(run) diff --git a/contrib/experiments/interpretation/penobscot/local/train.sh b/contrib/experiments/interpretation/penobscot/local/train.sh new file mode 100755 index 00000000..eb885b98 --- /dev/null +++ b/contrib/experiments/interpretation/penobscot/local/train.sh @@ -0,0 +1,2 @@ +#!/bin/bash +python train.py --cfg "configs/seresnet_unet.yaml" \ No newline at end of file diff --git a/contrib/fwi/azureml_devito/notebooks/000_Setup_GeophysicsTutorial_FWI_Azure_devito.ipynb b/contrib/fwi/azureml_devito/notebooks/000_Setup_GeophysicsTutorial_FWI_Azure_devito.ipynb index c9a17f1e..0b16843b 100755 --- a/contrib/fwi/azureml_devito/notebooks/000_Setup_GeophysicsTutorial_FWI_Azure_devito.ipynb +++ b/contrib/fwi/azureml_devito/notebooks/000_Setup_GeophysicsTutorial_FWI_Azure_devito.ipynb @@ -64,7 +64,8 @@ "from azureml.core.compute import ComputeTarget, AmlCompute\n", "from azureml.core.compute_target import ComputeTargetException\n", "import platform, dotenv\n", - "import pathlib" + "import pathlib\n", + "import subprocess" ] }, { @@ -76,13 +77,13 @@ "name": "stdout", "output_type": "stream", "text": [ - "Azure ML SDK Version: 1.0.76\n" + "Azure ML SDK Version: 1.0.81\n" ] }, { "data": { "text/plain": [ - "'Linux-4.15.0-1064-azure-x86_64-with-debian-stretch-sid'" + "'Linux-4.15.0-1064-azure-x86_64-with-debian-10.1'" ] }, "execution_count": 3, @@ -92,7 +93,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks'" + "'/workspace/contrib/fwi/azureml_devito/notebooks'" ] }, "execution_count": 3, @@ -150,7 +151,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./src/project_utils.py\n" + "Overwriting /workspace/contrib/fwi/azureml_devito/notebooks/./src/project_utils.py\n" ] } ], @@ -507,15 +508,6 @@ "execution_count": 14, "metadata": {}, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING - Warning: Falling back to use azure cli login credentials.\n", - "If you run your code in unattended mode, i.e., where you can't give a user input, then we recommend to use ServicePrincipalAuthentication or MsiAuthentication.\n", - "Please refer to aka.ms/aml-notebook-auth for different authentication mechanisms in azureml-sdk.\n" - ] - }, { "name": "stdout", "output_type": "stream", @@ -532,8 +524,8 @@ " workspace_name = crt_workspace_name,\n", " auth=project_utils.get_auth(dotenv_file_path))\n", " print(\"Workspace configuration loading succeeded. 
\")\n", - "# ws1.write_config(path=os.path.join(os.getcwd(), os.path.join(*([workspace_config_dir]))),\n", - "# file_name=workspace_config_file)\n", + " ws1.write_config(path=os.path.join(os.getcwd(), os.path.join(*([workspace_config_dir]))),\n", + " file_name=workspace_config_file)\n", " del ws1 # ws will be (re)created later using from_config() function\n", "except Exception as e :\n", " print('Exception msg: {}'.format(str(e )))\n", @@ -668,84 +660,35 @@ "name": "stdout", "output_type": "stream", "text": [ - "azure-cli 2.0.58 *\r\n", - "\r\n", - "acr 2.2.0 *\r\n", - "acs 2.3.17 *\r\n", - "advisor 2.0.0 *\r\n", - "ams 0.4.1 *\r\n", - "appservice 0.2.13 *\r\n", - "backup 1.2.1 *\r\n", - "batch 3.4.1 *\r\n", - "batchai 0.4.7 *\r\n", - "billing 0.2.0 *\r\n", - "botservice 0.1.6 *\r\n", - "cdn 0.2.0 *\r\n", - "cloud 2.1.0 *\r\n", - "cognitiveservices 0.2.4 *\r\n", - "command-modules-nspkg 2.0.2 *\r\n", - "configure 2.0.20 *\r\n", - "consumption 0.4.2 *\r\n", - "container 0.3.13 *\r\n", - "core 2.0.58 *\r\n", - "cosmosdb 0.2.7 *\r\n", - "dla 0.2.4 *\r\n", - "dls 0.1.8 *\r\n", - "dms 0.1.2 *\r\n", - "eventgrid 0.2.1 *\r\n", - "eventhubs 0.3.3 *\r\n", - "extension 0.2.3 *\r\n", - "feedback 2.1.4 *\r\n", - "find 0.2.13 *\r\n", - "hdinsight 0.3.0 *\r\n", - "interactive 0.4.1 *\r\n", - "iot 0.3.6 *\r\n", - "iotcentral 0.1.6 *\r\n", - "keyvault 2.2.11 *\r\n", - "kusto 0.1.0 *\r\n", - "lab 0.1.5 *\r\n", - "maps 0.3.3 *\r\n", - "monitor 0.2.10 *\r\n", - "network 2.3.2 *\r\n", - "nspkg 3.0.3 *\r\n", - "policyinsights 0.1.1 *\r\n", - "profile 2.1.3 *\r\n", - "rdbms 0.3.7 *\r\n", - "redis 0.4.0 *\r\n", - "relay 0.1.3 *\r\n", - "reservations 0.4.1 *\r\n", - "resource 2.1.10 *\r\n", - "role 2.4.0 *\r\n", - "search 0.1.1 *\r\n", - "security 0.1.0 *\r\n", - "servicebus 0.3.3 *\r\n", - "servicefabric 0.1.12 *\r\n", - "signalr 1.0.0 *\r\n", - "sql 2.1.9 *\r\n", - "sqlvm 0.1.0 *\r\n", - "storage 2.3.1 *\r\n", - "telemetry 1.0.1 *\r\n", - "vm 2.2.15 *\r\n", + "azure-cli 2.0.78\r\n", "\r\n", - "Extensions:\r\n", - "azure-ml-admin-cli 0.0.1\r\n", - "azure-cli-ml Unknown\r\n", + "command-modules-nspkg 2.0.3\r\n", + "core 2.0.78\r\n", + "nspkg 3.0.4\r\n", + "telemetry 1.0.4\r\n", "\r\n", "Python location '/opt/az/bin/python3'\r\n", - "Extensions directory '/opt/az/extensions'\r\n", + "Extensions directory '/root/.azure/cliextensions'\r\n", "\r\n", - "Python (Linux) 3.6.5 (default, Feb 12 2019, 02:10:43) \r\n", - "[GCC 5.4.0 20160609]\r\n", + "Python (Linux) 3.6.5 (default, Dec 12 2019, 11:11:10) \r\n", + "[GCC 6.3.0 20170516]\r\n", "\r\n", "Legal docs and information: aka.ms/AzureCliLegal\r\n", "\r\n", "\r\n", - "\u001b[33mYou have 57 updates available. Consider updating your CLI installation.\u001b[0m\r\n" + "\r\n", + "Your CLI is up-to-date.\r\n", + "\r\n", + "\u001b[33m\u001b[1mPlease let us know how we are doing: \u001b[34mhttps://aka.ms/clihats\u001b[0m\r\n" ] } ], "source": [ "!az --version\n", + "\n", + "# !az login\n", + "# ! az account set --subscription $subscription_id\n", + "\n", "if create_ACR_FLAG:\n", " !az login\n", " response01 = ! 
az account list --all --refresh -o table\n", @@ -784,9 +727,9 @@ { "data": { "text/plain": [ - "[' \"loginServer\": \"fwi01acr.azurecr.io\",',\n", - " ' \"name\": \"fwi01acr\",',\n", - " ' \"networkRuleSet\": null,',\n", + "[' \"type\": \"Notary\"',\n", + " ' }',\n", + " ' },',\n", " ' \"provisioningState\": \"Succeeded\",',\n", " ' \"resourceGroup\": \"ghiordanfwirsg01\",',\n", " ' \"sku\": {',\n", @@ -861,10 +804,7 @@ "metadata": {}, "outputs": [], "source": [ - "# create_ACR_FLAG=False\n", - "if create_ACR_FLAG:\n", - " import subprocess\n", - " cli_command = 'az acr credential show -n '+acr_name\n", + "cli_command = 'az acr credential show -n '+acr_name\n", "\n", "acr_username = subprocess.Popen(cli_command+' --query username',shell=True,stdout=subprocess.PIPE, stderr=subprocess.PIPE).\\\n", "communicate()[0].decode(\"utf-8\").split()[0].strip('\\\"')\n", @@ -901,9 +841,9 @@ ], "metadata": { "kernelspec": { - "display_name": "Python [conda env:fwi_dev_conda_environment] *", + "display_name": "Python [conda env:aml-sdk-conda-env] *", "language": "python", - "name": "conda-env-fwi_dev_conda_environment-py" + "name": "conda-env-aml-sdk-conda-env-py" }, "language_info": { "codemirror_mode": { diff --git a/contrib/fwi/azureml_devito/notebooks/010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb b/contrib/fwi/azureml_devito/notebooks/010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb index aac30038..792506bf 100755 --- a/contrib/fwi/azureml_devito/notebooks/010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb +++ b/contrib/fwi/azureml_devito/notebooks/010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb @@ -65,7 +65,7 @@ { "data": { "text/plain": [ - "'Linux-4.15.0-1063-azure-x86_64-with-debian-stretch-sid'" + "'Linux-4.15.0-1064-azure-x86_64-with-debian-10.1'" ] }, "execution_count": 3, @@ -75,7 +75,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks'" + "'/workspace/contrib/fwi/azureml_devito/notebooks'" ] }, "execution_count": 3, @@ -184,7 +184,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks\r\n" + "/workspace/contrib/fwi/azureml_devito/notebooks\r\n" ] } ], @@ -199,7 +199,7 @@ "outputs": [], "source": [ "# azureml_sdk_version set here must match azureml sdk version pinned in conda env file written to conda_common_file_path below\n", - "azureml_sdk_version = '1.0.76' " + "azureml_sdk_version = '1.0.81' " ] }, { @@ -222,7 +222,7 @@ { "data": { "text/plain": [ - "(True, 'EXPERIMENTATION_DOCKER_IMAGE_TAG', 'sdk.v1.0.76')" + "(True, 'EXPERIMENTATION_DOCKER_IMAGE_TAG', 'sdk.v1.0.81')" ] }, "execution_count": 9, @@ -255,6 +255,7 @@ "\n", "\n", "docker_container_mount_point = os.getcwd()\n", + "docker_container_mount_point = '/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks'\n", "# or something like \"/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks'\n", "dotenv.set_key(dotenv_file_path, 'DOCKER_CONTAINER_MOUNT_POINT', docker_container_mount_point)" ] @@ -267,7 +268,7 @@ { "data": { "text/plain": [ - "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76'" + "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 10, @@ -277,7 +278,7 @@ { "data": { "text/plain": [ - "'conda_env_fwi01_azureml_sdk.v1.0.76.yml'" + "'conda_env_fwi01_azureml_sdk.v1.0.81.yml'" ] }, "execution_count": 10, @@ -287,7 +288,7 @@ { "data": { 
"text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml_sdk.v1.0.76.yml'" + "'/workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml_sdk.v1.0.81.yml'" ] }, "execution_count": 10, @@ -297,7 +298,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml.yml'" + "'/workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml.yml'" ] }, "execution_count": 10, @@ -307,7 +308,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build'" + "'/workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build'" ] }, "execution_count": 10, @@ -317,7 +318,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/Dockerfile_fwi01_azureml_sdk.v1.0.76'" + "'/workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/Dockerfile_fwi01_azureml_sdk.v1.0.81'" ] }, "execution_count": 10, @@ -374,7 +375,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Writing /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml.yml\n" + "Overwriting /workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml.yml\n" ] } ], @@ -410,7 +411,7 @@ " - toolz\n", " - pip:\n", " - anytree # required by devito\n", - " - azureml-sdk[notebooks,automl]==1.0.76\n", + " - azureml-sdk[notebooks,automl]\n", " - codepy # required by devito\n", " - papermill[azure]\n", " - pyrevolve # required by devito" @@ -425,14 +426,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "Writing /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/Dockerfile_fwi01_azureml_sdk.v1.0.76\n" + "Writing /workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/Dockerfile_fwi01_azureml_sdk.v1.0.81\n" ] } ], "source": [ "%%writefile $docker_file_path \n", "\n", - "FROM continuumio/miniconda3:4.7.10 \n", + "FROM continuumio/miniconda3:4.7.12 \n", "MAINTAINER George Iordanescu \n", "\n", "RUN apt-get update --fix-missing && apt-get install -y --no-install-recommends \\\n", @@ -478,7 +479,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml_sdk.v1.0.76.yml'" + "'/workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/conda_env_fwi01_azureml_sdk.v1.0.81.yml'" ] }, "execution_count": 13, @@ -489,10 +490,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "total 12\r\n", - "-rw-rw-r-- 1 loginvm022 loginvm022 725 Dec 6 15:26 conda_env_fwi01_azureml_sdk.v1.0.76.yml\r\n", - "-rw-rw-r-- 1 loginvm022 loginvm022 725 Dec 6 15:26 conda_env_fwi01_azureml.yml\r\n", - "-rw-rw-r-- 1 loginvm022 loginvm022 1073 Dec 6 15:26 Dockerfile_fwi01_azureml_sdk.v1.0.76\r\n" + "total 20\r\n", + "-rwxrwxrwx 1 1003 1003 1073 Dec 17 19:25 Dockerfile_fwi01_azureml_sdk.v1.0.79\r\n", + "-rw-r--r-- 1 root root 1073 Jan 4 00:30 Dockerfile_fwi01_azureml_sdk.v1.0.81\r\n", + "-rwxrwxrwx 1 1003 1003 717 Jan 4 00:30 conda_env_fwi01_azureml.yml\r\n", + "-rwxrwxrwx 1 1003 1003 725 Dec 17 19:25 conda_env_fwi01_azureml_sdk.v1.0.79.yml\r\n", + "-rw-r--r-- 1 root root 717 Jan 4 00:30 conda_env_fwi01_azureml_sdk.v1.0.81.yml\r\n" ] } ], @@ -510,7 +513,7 @@ 
{ "data": { "text/plain": [ - "'docker build -t fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76 -f /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/Dockerfile_fwi01_azureml_sdk.v1.0.76 /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build '" + "'docker build -t fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81 -f /workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build/Dockerfile_fwi01_azureml_sdk.v1.0.81 /workspace/contrib/fwi/azureml_devito/notebooks/./../temp/docker_build '" ] }, "execution_count": 14, @@ -520,11 +523,11 @@ { "data": { "text/plain": [ - "['Sending build context to Docker daemon 6.144kB',\n", + "['Sending build context to Docker daemon 9.728kB',\n", " '',\n", - " 'Step 1/15 : FROM continuumio/miniconda3:4.7.10',\n", - " '4.7.10: Pulling from continuumio/miniconda3',\n", - " '1ab2bdfe9778: Pulling fs layer']" + " 'Step 1/15 : FROM continuumio/miniconda3:4.7.12',\n", + " ' ---> 406f2b43ea59',\n", + " 'Step 2/15 : MAINTAINER George Iordanescu ']" ] }, "execution_count": 14, @@ -534,11 +537,11 @@ { "data": { "text/plain": [ - "[' ---> Running in 00c2824f0cd3',\n", - " 'Removing intermediate container 00c2824f0cd3',\n", - " ' ---> 48fb03897096',\n", - " 'Successfully built 48fb03897096',\n", - " 'Successfully tagged fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76']" + "[' ---> Running in 815d23815e0f',\n", + " 'Removing intermediate container 815d23815e0f',\n", + " ' ---> b9555c46cc92',\n", + " 'Successfully built b9555c46cc92',\n", + " 'Successfully tagged fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81']" ] }, "execution_count": 14, @@ -575,7 +578,7 @@ { "data": { "text/plain": [ - "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76'" + "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 15, @@ -595,7 +598,7 @@ { "data": { "text/plain": [ - "b'/\\n1.0.76\\n'" + "b'/\\n1.0.81\\n'" ] }, "execution_count": 15, @@ -665,7 +668,7 @@ "text": [ "\n", "content of devito tests log file before testing:\n", - "Before running e13n container... \r\n" + "Before running e13n container... 
\n" ] }, { @@ -704,7 +707,7 @@ { "data": { "text/plain": [ - "'docker run -it --rm --name fwi01_azureml_container -v /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks:/workspace:rw fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76 /bin/bash -c \"conda env list ; ls -l /devito/tests; python -c \\'import azureml.core;print(azureml.core.VERSION)\\'; cd /devito; python -m pytest tests/ > ./fwi01_azureml_buildexperimentationdockerimage.log 2>&1; mv ./fwi01_azureml_buildexperimentationdockerimage.log /workspace/ \"'" + "'docker run -it --rm --name fwi01_azureml_container -v /datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks:/workspace:rw fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81 /bin/bash -c \"conda env list ; ls -l /devito/tests; python -c \\'import azureml.core;print(azureml.core.VERSION)\\'; cd /devito; python -m pytest tests/ > ./fwi01_azureml_buildexperimentationdockerimage.log 2>&1; mv ./fwi01_azureml_buildexperimentationdockerimage.log /workspace/ \"'" ] }, "execution_count": 18, @@ -721,50 +724,54 @@ "fwi01_conda_env * /opt/conda/envs/fwi01_conda_env\n", "\n", "total 560\n", - "-rw-r--r-- 1 root root 11521 Dec 6 15:26 conftest.py\n", - "-rw-r--r-- 1 root root 6006 Dec 6 15:26 test_adjoint.py\n", - "-rw-r--r-- 1 root root 14586 Dec 6 15:26 test_autotuner.py\n", - "-rw-r--r-- 1 root root 7538 Dec 6 15:26 test_builtins.py\n", - "-rw-r--r-- 1 root root 24415 Dec 6 15:26 test_caching.py\n", - "-rw-r--r-- 1 root root 9721 Dec 6 15:26 test_checkpointing.py\n", - "-rw-r--r-- 1 root root 1095 Dec 6 15:26 test_constant.py\n", - "-rw-r--r-- 1 root root 55954 Dec 6 15:26 test_data.py\n", - "-rw-r--r-- 1 root root 481 Dec 6 15:26 test_dependency_bugs.py\n", - "-rw-r--r-- 1 root root 16331 Dec 6 15:26 test_derivatives.py\n", - "-rw-r--r-- 1 root root 1473 Dec 6 15:26 test_differentiable.py\n", - "-rw-r--r-- 1 root root 30846 Dec 6 15:26 test_dimension.py\n", - "-rw-r--r-- 1 root root 24838 Dec 6 15:26 test_dle.py\n", - "-rw-r--r-- 1 root root 1169 Dec 6 15:26 test_docstrings.py\n", - "-rw-r--r-- 1 root root 32134 Dec 6 15:26 test_dse.py\n", - "-rw-r--r-- 1 root root 8205 Dec 6 15:26 test_gradient.py\n", - "-rw-r--r-- 1 root root 15227 Dec 6 15:26 test_interpolation.py\n", - "-rw-r--r-- 1 root root 31816 Dec 6 15:26 test_ir.py\n", - "-rw-r--r-- 1 root root 63169 Dec 6 15:26 test_mpi.py\n", - "-rw-r--r-- 1 root root 67053 Dec 6 15:26 test_operator.py\n", - "-rw-r--r-- 1 root root 14875 Dec 6 15:26 test_ops.py\n", - "-rw-r--r-- 1 root root 12228 Dec 6 15:26 test_pickle.py\n", - "-rw-r--r-- 1 root root 1809 Dec 6 15:26 test_resample.py\n", - "-rw-r--r-- 1 root root 1754 Dec 6 15:26 test_save.py\n", - "-rw-r--r-- 1 root root 2115 Dec 6 15:26 test_staggered_utils.py\n", - "-rw-r--r-- 1 root root 5711 Dec 6 15:26 test_subdomains.py\n", - "-rw-r--r-- 1 root root 3320 Dec 6 15:26 test_symbolic_coefficients.py\n", - "-rw-r--r-- 1 root root 7277 Dec 6 15:26 test_tensors.py\n", - "-rw-r--r-- 1 root root 3186 Dec 6 15:26 test_timestepping.py\n", - "-rw-r--r-- 1 root root 603 Dec 6 15:26 test_tools.py\n", - "-rw-r--r-- 1 root root 3296 Dec 6 15:26 test_tti.py\n", - "-rw-r--r-- 1 root root 8835 Dec 6 15:26 test_visitors.py\n", - "-rw-r--r-- 1 root root 21802 Dec 6 15:26 test_yask.py\n", - "1.0.76\n", + "-rw-r--r-- 1 root root 11521 Jan 4 00:30 conftest.py\n", + "-rw-r--r-- 1 root root 5937 Jan 4 00:30 test_adjoint.py\n", + "-rw-r--r-- 1 root root 12326 Jan 4 00:30 test_autotuner.py\n", + "-rw-r--r-- 1 root root 7538 Jan 4 00:30 test_builtins.py\n", + "-rw-r--r-- 1 root root 24415 Jan 4 
00:30 test_caching.py\n", + "-rw-r--r-- 1 root root 9721 Jan 4 00:30 test_checkpointing.py\n", + "-rw-r--r-- 1 root root 1095 Jan 4 00:30 test_constant.py\n", + "-rw-r--r-- 1 root root 55954 Jan 4 00:30 test_data.py\n", + "-rw-r--r-- 1 root root 481 Jan 4 00:30 test_dependency_bugs.py\n", + "-rw-r--r-- 1 root root 16331 Jan 4 00:30 test_derivatives.py\n", + "-rw-r--r-- 1 root root 1473 Jan 4 00:30 test_differentiable.py\n", + "-rw-r--r-- 1 root root 30846 Jan 4 00:30 test_dimension.py\n", + "-rw-r--r-- 1 root root 23484 Jan 4 00:30 test_dle.py\n", + "-rw-r--r-- 1 root root 1175 Jan 4 00:30 test_docstrings.py\n", + "-rw-r--r-- 1 root root 32930 Jan 4 00:30 test_dse.py\n", + "-rw-r--r-- 1 root root 8205 Jan 4 00:30 test_gradient.py\n", + "-rw-r--r-- 1 root root 15227 Jan 4 00:30 test_interpolation.py\n", + "-rw-r--r-- 1 root root 31797 Jan 4 00:30 test_ir.py\n", + "-rw-r--r-- 1 root root 63169 Jan 4 00:30 test_mpi.py\n", + "-rw-r--r-- 1 root root 67153 Jan 4 00:30 test_operator.py\n", + "-rw-r--r-- 1 root root 14780 Jan 4 00:30 test_ops.py\n", + "-rw-r--r-- 1 root root 12237 Jan 4 00:30 test_pickle.py\n", + "-rw-r--r-- 1 root root 1809 Jan 4 00:30 test_resample.py\n", + "-rw-r--r-- 1 root root 1754 Jan 4 00:30 test_save.py\n", + "-rw-r--r-- 1 root root 2115 Jan 4 00:30 test_staggered_utils.py\n", + "-rw-r--r-- 1 root root 5711 Jan 4 00:30 test_subdomains.py\n", + "-rw-r--r-- 1 root root 3320 Jan 4 00:30 test_symbolic_coefficients.py\n", + "-rw-r--r-- 1 root root 7277 Jan 4 00:30 test_tensors.py\n", + "-rw-r--r-- 1 root root 3186 Jan 4 00:30 test_timestepping.py\n", + "-rw-r--r-- 1 root root 603 Jan 4 00:30 test_tools.py\n", + "-rw-r--r-- 1 root root 3125 Jan 4 00:30 test_tti.py\n", + "-rw-r--r-- 1 root root 8835 Jan 4 00:30 test_visitors.py\n", + "-rw-r--r-- 1 root root 21716 Jan 4 00:30 test_yask.py\n", + "1.0.81\n", "\n", "content of devito tests log file after testing:\n", "============================= test session starts ==============================\n", - "platform linux -- Python 3.6.9, pytest-5.3.1, py-1.8.0, pluggy-0.13.1\n", + "platform linux -- Python 3.6.9, pytest-5.3.2, py-1.8.0, pluggy-0.13.1\n", "rootdir: /devito, inifile: setup.cfg\n", - "plugins: nbval-0.9.3, cov-2.8.1\n", - "collected 1056 items / 2 skipped / 1054 selected\n", + "plugins: nbval-0.9.4, cov-2.8.1\n", + "/tmp/devito-jitcache-uid0/db6a48f1a0b6e8998ac851d39f5db8ac9b0e69eb.c: In function ‘bf0’:\n", + "/tmp/devito-jitcache-uid0/db6a48f1a0b6e8998ac851d39f5db8ac9b0e69eb.c:217: warning: ignoring #pragma omp simd [-Wunknown-pragmas]\n", + " #pragma omp simd aligned(damp,delta,epsilon,phi,theta,u,v,vp:32)\n", + " \n", + "collected 1065 items / 2 skipped / 1063 selected\n", "\n", "tests/test_adjoint.py .......................... [ 2%]\n", - "tests/test_autotuner.py ..........s..... [ 3%]\n", + "tests/test_autotuner.py .........s..... [ 3%]\n", "tests/test_builtins.py ....s...............s..s [ 6%]\n", "tests/test_caching.py .................................................. [ 10%]\n", " [ 10%]\n", @@ -775,20 +782,20 @@ "tests/test_derivatives.py .............................................. [ 20%]\n", "........................................................................ [ 27%]\n", "........................................................................ [ 34%]\n", - "...... [ 35%]\n", - "tests/test_differentiable.py .. [ 35%]\n", - "tests/test_dimension.py ............................... [ 38%]\n", - "tests/test_dle.py ...................................................... 
[ 43%]\n", - "........................................... [ 47%]\n", - "tests/test_docstrings.py ................ [ 48%]\n", - "tests/test_dse.py ......x............................................... [ 53%]\n", - "................x..........s.... [ 57%]\n", + "...... [ 34%]\n", + "tests/test_differentiable.py .. [ 34%]\n", + "tests/test_dimension.py ............................... [ 37%]\n", + "tests/test_dle.py ...................................................... [ 42%]\n", + "......................................................... [ 48%]\n", + "tests/test_docstrings.py ................ [ 49%]\n", + "tests/test_dse.py ......x............................................... [ 54%]\n", + "............x..........s.... [ 57%]\n", "tests/test_gradient.py .... [ 57%]\n", - "tests/test_interpolation.py ........................ [ 59%]\n", - "tests/test_ir.py ....................................................... [ 64%]\n", + "tests/test_interpolation.py ........................ [ 60%]\n", + "tests/test_ir.py ....................................................... [ 65%]\n", "................ [ 66%]\n", "tests/test_mpi.py ssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 71%]\n", - "sss [ 71%]\n", + "sss [ 72%]\n", "tests/test_operator.py ................................................. [ 76%]\n", "..............................................s......................... [ 83%]\n", ".................................. [ 86%]\n", @@ -808,7 +815,7 @@ "=================================== FAILURES ===================================\n", "______________________ TestSC.test_function_coefficients _______________________\n", "\n", - "self = \n", + "self = \n", "\n", " def test_function_coefficients(self):\n", " \"\"\"Test that custom function coefficients return the expected result\"\"\"\n", @@ -852,10 +859,10 @@ " \n", "> assert np.all(np.isclose(f0.data[:] - f1.data[:], 0.0, atol=1e-5, rtol=0))\n", "E assert Data(False)\n", - "E + where Data(False) = (Data([[[False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True],\\n ...alse],\\n [False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True]]]))\n", - "E + where = np.all\n", - "E + and Data([[[False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True],\\n ...alse],\\n [False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True]]]) = ((Data([[[-1452., -1452., -1452., -1452.],\\n [ 3327., 3327., 3327., 3327.],\\n [-3414., -3414., -3414., -341...3., 383., 383.],\\n [ -598., -598., -598., -598.],\\n [ 341., 341., 341., 341.]]], dtype=float32) - Data([[[-1451.9998 , -1451.9998 , -1451.9998 , -1451.9998 ],\\n [ 3326.9995 , 3326.9995 , 3326.9995 , 33...4 , -597.99994 , -597.99994 ],\\n [ 341. , 341. , 341. , 341. 
]]],\\n dtype=float32)), 0.0, atol=1e-05, rtol=0)\n", - "E + where = np.isclose\n", + "E + where Data(False) = (Data([[[False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True],\\n ...alse],\\n [False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True]]]))\n", + "E + where = np.all\n", + "E + and Data([[[False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True],\\n ...alse],\\n [False, False, False, False],\\n [False, False, False, False],\\n [ True, True, True, True]]]) = ((Data([[[-1452., -1452., -1452., -1452.],\\n [ 3327., 3327., 3327., 3327.],\\n [-3414., -3414., -3414., -341...3., 383., 383.],\\n [ -598., -598., -598., -598.],\\n [ 341., 341., 341., 341.]]], dtype=float32) - Data([[[-1451.9998 , -1451.9998 , -1451.9998 , -1451.9998 ],\\n [ 3326.9995 , 3326.9995 , 3326.9995 , 33...4 , -597.99994 , -597.99994 ],\\n [ 341. , 341. , 341. , 341. ]]],\\n dtype=float32)), 0.0, atol=1e-05, rtol=0)\n", + "E + where = np.isclose\n", "\n", "tests/test_symbolic_coefficients.py:96: AssertionError\n", "----------------------------- Captured stderr call -----------------------------\n", @@ -870,9 +877,9 @@ " \n", "Operator `Kernel` run in 0.01 s\n", "------------------------------ Captured log call -------------------------------\n", - "INFO Devito:logger.py:129 Operator `Kernel` run in 0.01 s\n", - "INFO Devito:logger.py:129 Operator `Kernel` run in 0.01 s\n", - "====== 1 failed, 968 passed, 87 skipped, 2 xfailed in 1070.16s (0:17:50) =======\n" + "INFO Devito:logger.py:120 Operator `Kernel` run in 0.01 s\n", + "INFO Devito:logger.py:120 Operator `Kernel` run in 0.01 s\n", + "======= 1 failed, 977 passed, 87 skipped, 2 xfailed in 912.69s (0:15:12) =======\n" ] } ], @@ -902,7 +909,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 19, "metadata": {}, "outputs": [ { @@ -911,7 +918,7 @@ "'az acr login --name fwi01acr'" ] }, - "execution_count": 22, + "execution_count": 19, "metadata": {}, "output_type": "execute_result" }, @@ -920,7 +927,7 @@ "output_type": "stream", "text": [ "Login Succeeded\r\n", - "WARNING! Your password will be stored unencrypted in /home/loginvm022/.docker/config.json.\r\n", + "WARNING! Your password will be stored unencrypted in /root/.docker/config.json.\r\n", "Configure a credential helper to remove this warning. 
See\r\n", "https://docs.docker.com/engine/reference/commandline/login/#credentials-store\r\n", "\r\n", @@ -946,16 +953,16 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "'docker push fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76'" + "'docker push fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81'" ] }, - "execution_count": 23, + "execution_count": 20, "metadata": {}, "output_type": "execute_result" } @@ -967,7 +974,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 21, "metadata": {}, "outputs": [ { @@ -976,33 +983,26 @@ "text": [ "The push refers to repository [fwi01acr.azurecr.io/fwi01_azureml]\n", "\n", - "\u001b[1Bd6300f53: Preparing \n", - "\u001b[1B01af7f6b: Preparing \n", - "\u001b[1B41f0b573: Preparing \n", - "\u001b[1B04ca5654: Preparing \n", - "\u001b[1Bf8fc4c9a: Preparing \n", - "\u001b[1Bba47210e: Preparing \n" + "\u001b[1Bd3b23d9e: Preparing \n", + "\u001b[1Bdf104634: Preparing \n", + "\u001b[1Bbb9ec1ac: Preparing \n", + "\u001b[1B71d4d165: Preparing \n", + "\u001b[1Bcb249b79: Preparing \n", + "\u001b[1B190fd43a: Preparing \n" ] }, { "name": "stdout", "output_type": "stream", "text": [ - "\u001b[6B01af7f6b: Pushing 1.484GB/3.028GBA\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[2A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[4A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u0
[... docker push progress output elided: the remainder of this hunk replaces the ANSI terminal progress redraws recorded for the old fwi01_azureml:sdk.v1.0.76 push (layers 01af7f6b and d6300f53, ~3.028 GB) with those for the new fwi01_azureml:sdk.v1.0.81 push (layer d3b23d9e, ~2.966 GB); the control sequences carry no further information ...]
01b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u0
01b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u0
01b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2KPushing 
2.58GB/2.968GB\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u00
1b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\u001b[6B01af7f6b: Pushed 3.103GB/3.028GB\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2Ksdk.v1.0.76: digest: sha256:416dc7ce59c279822e967223790f7b8b7d99ba62bc643ca44b94551135b60b6b size: 1800\n" + "\u001b[6Bdf104634: Pushed 
3.046GB/2.966GB\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u0
01b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u0
01b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u0
01b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u0
01b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2Ksdk.v1.0.81: digest: sha256:bf183c89265716cbe65b38f03f4f8c0472dfd394a813cc51dea2513cc27eff45 size: 1800\n" ] } ], @@ -1012,7 +1012,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 22, "metadata": {}, "outputs": [ { diff --git a/contrib/fwi/azureml_devito/notebooks/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb b/contrib/fwi/azureml_devito/notebooks/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb index db76f4c5..e24df855 100755 --- a/contrib/fwi/azureml_devito/notebooks/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb +++ b/contrib/fwi/azureml_devito/notebooks/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb @@ -75,13 +75,13 @@ "name": "stdout", "output_type": "stream", "text": [ - "Azure ML SDK Version: 1.0.76\n" + "Azure ML SDK Version: 1.0.81\n" ] }, { "data": { "text/plain": [ - 
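The hunks in this notebook only bump the image tag from sdk.v1.0.76 to sdk.v1.0.81; the tag is derived from the installed Azure ML SDK version, and the pushed image is then sanity-checked with a one-off container run. A minimal sketch of that pattern follows. The image name, ACR login server, and container name are taken from the diff itself; the `subprocess` helper and everything else are illustrative assumptions, not the notebook's actual code:

```python
import subprocess
import azureml.core

# Assumed sketch: compose the image tag from the installed SDK version,
# mirroring the 'fwi01_azureml:sdk.v1.0.81' string seen in the hunks below.
docker_image_name = "fwi01_azureml:sdk.v" + azureml.core.VERSION
docker_repo_name = "fwi01acr.azurecr.io"  # ACR login server from the diff
full_image_name = docker_repo_name + "/" + docker_image_name

# Sanity-check the pushed image: run it once and ask which python it ships,
# as the 'docker run ... /bin/bash -c "which python"' cell in the diff does.
cmd = (
    'docker run -i --rm --name fwi01_azureml_container02 '
    + full_image_name
    + ' /bin/bash -c "which python"'
)
result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout)
```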
"'Linux-4.15.0-1063-azure-x86_64-with-debian-stretch-sid'" + "'Linux-4.15.0-1064-azure-x86_64-with-debian-10.1'" ] }, "execution_count": 3, @@ -91,7 +91,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks'" + "'/workspace/contrib/fwi/azureml_devito/notebooks'" ] }, "execution_count": 3, @@ -481,7 +481,7 @@ { "data": { "text/plain": [ - "'fwi01_azureml:sdk.v1.0.76'" + "'fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 9, @@ -491,7 +491,7 @@ { "data": { "text/plain": [ - "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76'" + "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 9, @@ -530,7 +530,7 @@ "output_type": "stream", "text": [ "Login Succeeded\r\n", - "WARNING! Your password will be stored unencrypted in /home/loginvm022/.docker/config.json.\r\n", + "WARNING! Your password will be stored unencrypted in /root/.docker/config.json.\r\n", "Configure a credential helper to remove this warning. See\r\n", "https://docs.docker.com/engine/reference/commandline/login/#credentials-store\r\n", "\r\n", @@ -554,7 +554,7 @@ { "data": { "text/plain": [ - "'docker run -i --rm --name fwi01_azureml_container02 fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76 /bin/bash -c \"which python\" '" + "'docker run -i --rm --name fwi01_azureml_container02 fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81 /bin/bash -c \"which python\" '" ] }, "execution_count": 11, @@ -725,15 +725,6 @@ "execution_count": 14, "metadata": {}, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING - Warning: Falling back to use azure cli login credentials.\n", - "If you run your code in unattended mode, i.e., where you can't give a user input, then we recommend to use ServicePrincipalAuthentication or MsiAuthentication.\n", - "Please refer to aka.ms/aml-notebook-auth for different authentication mechanisms in azureml-sdk.\n" - ] - }, { "name": "stdout", "output_type": "stream", @@ -859,12 +850,12 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "565b952db744469fa2137b6c94e15f7a", + "model_id": "583bcb0530354691b76ee44c897972cd", "version_major": 2, "version_minor": 0 }, "text/plain": [ - "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'NOTSET',…" + "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'INFO', '…" ] }, "metadata": {}, @@ -872,7 +863,7 @@ }, { "data": { - "application/aml.mini.widget.v1": "{\"status\": \"Completed\", \"workbench_run_details_uri\": \"https://ml.azure.com/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575674728_d40baeba?wsid=/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourcegroups/ghiordanfwirsg01/workspaces/ghiordanfwiws\", \"run_id\": \"020_AzureMLEstimator_1575674728_d40baeba\", \"run_properties\": {\"run_id\": \"020_AzureMLEstimator_1575674728_d40baeba\", \"created_utc\": \"2019-12-06T23:25:30.597858Z\", \"properties\": {\"_azureml.ComputeTargetType\": \"amlcompute\", \"ContentSnapshotId\": \"a5071b2a-37a7-40da-8340-69cc894091cb\", \"azureml.git.repository_uri\": \"git@github.com:georgeAccnt-GH/DeepSeismic.git\", \"mlflow.source.git.repoURL\": \"git@github.com:georgeAccnt-GH/DeepSeismic.git\", \"azureml.git.branch\": \"staging\", \"mlflow.source.git.branch\": \"staging\", \"azureml.git.commit\": \"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\", \"mlflow.source.git.commit\": \"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\", \"azureml.git.dirty\": \"True\", 
\"ProcessInfoFile\": \"azureml-logs/process_info.json\", \"ProcessStatusFile\": \"azureml-logs/process_status.json\"}, \"tags\": {\"_aml_system_ComputeTargetStatus\": \"{\\\"AllocationState\\\":\\\"steady\\\",\\\"PreparingNodeCount\\\":1,\\\"RunningNodeCount\\\":0,\\\"CurrentNodeCount\\\":1}\"}, \"script_name\": null, \"arguments\": null, \"end_time_utc\": \"2019-12-06T23:34:26.039772Z\", \"status\": \"Completed\", \"log_files\": {\"azureml-logs/55_azureml-execution-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/azureml-logs/55_azureml-execution-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt?sv=2019-02-02&sr=b&sig=1Fz2ltrBSXhF9tDzTuEOv35mBsOLsf%2BCVuTEuSCRWdg%3D&st=2019-12-06T23%3A24%3A44Z&se=2019-12-07T07%3A34%3A44Z&sp=r\", \"azureml-logs/65_job_prep-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/azureml-logs/65_job_prep-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt?sv=2019-02-02&sr=b&sig=PwHIdkWadtTAj29WuPOCF3g0RSrWdriOmKhqdjZNm3I%3D&st=2019-12-06T23%3A24%3A44Z&se=2019-12-07T07%3A34%3A44Z&sp=r\", \"azureml-logs/70_driver_log.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/azureml-logs/70_driver_log.txt?sv=2019-02-02&sr=b&sig=Iz8WkiOv%2BkEXeOox8p3P8XkLIdb8pjhCO%2Bo8slYUBGk%3D&st=2019-12-06T23%3A24%3A44Z&se=2019-12-07T07%3A34%3A44Z&sp=r\", \"azureml-logs/75_job_post-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/azureml-logs/75_job_post-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt?sv=2019-02-02&sr=b&sig=gz88u5ZC%2B7N8QospVRIL8zd%2FEyQKbljoZXQD01jAyXM%3D&st=2019-12-06T23%3A24%3A44Z&se=2019-12-07T07%3A34%3A44Z&sp=r\", \"azureml-logs/process_info.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/azureml-logs/process_info.json?sv=2019-02-02&sr=b&sig=4nj2pjm1rtKIjBmyudNaBEX6ITd3Gm%2BQLEUgjDYVBIc%3D&st=2019-12-06T23%3A24%3A44Z&se=2019-12-07T07%3A34%3A44Z&sp=r\", \"azureml-logs/process_status.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/azureml-logs/process_status.json?sv=2019-02-02&sr=b&sig=NQLsveMtGHBEYsmiwoPvPpOv%2B6wabnQp2IwDrVjh49Q%3D&st=2019-12-06T23%3A24%3A44Z&se=2019-12-07T07%3A34%3A44Z&sp=r\", \"logs/azureml/729_azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/logs/azureml/729_azureml.log?sv=2019-02-02&sr=b&sig=HpwLZSHX0J%2B2eWILTIDA7%2BmpVIEF0%2BIFfM2LHgYGk8w%3D&st=2019-12-06T23%3A24%3A43Z&se=2019-12-07T07%3A34%3A43Z&sp=r\", \"logs/azureml/azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575674728_d40baeba/logs/azureml/azureml.log?sv=2019-02-02&sr=b&sig=g%2Fi60CvATRGwaeQM9b6QihJxeFX0jTl%2BOKELCYYQ3rM%3D&st=2019-12-06T23%3A24%3A43Z&se=2019-12-07T07%3A34%3A43Z&sp=r\"}, \"log_groups\": [[\"azureml-logs/process_info.json\", 
\"azureml-logs/process_status.json\", \"logs/azureml/azureml.log\"], [\"azureml-logs/55_azureml-execution-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt\"], [\"azureml-logs/65_job_prep-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt\"], [\"azureml-logs/70_driver_log.txt\"], [\"azureml-logs/75_job_post-tvmps_d8d8a91061fed6f3a36a0e0da11655ae12488195551133265afca81050ad2db4_d.txt\"], [\"logs/azureml/729_azureml.log\"]], \"run_duration\": \"0:08:55\"}, \"child_runs\": [], \"children_metrics\": {}, \"run_metrics\": [{\"name\": \"training_message01: \", \"run_id\": \"020_AzureMLEstimator_1575674728_d40baeba\", \"categories\": [0], \"series\": [{\"data\": [\"finished experiment\"]}]}], \"run_logs\": \"2019-12-06 23:32:41,989|azureml|DEBUG|Inputs:: kwargs: {'OutputCollection': True, 'snapshotProject': True, 'only_in_process_features': True, 'skip_track_logs_dir': True}, track_folders: None, deny_list: None, directories_to_watch: []\\n2019-12-06 23:32:41,989|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Execution target type: batchai\\n2019-12-06 23:32:41,990|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Failed to import pyspark with error: No module named 'pyspark'\\n2019-12-06 23:32:41,990|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Pinning working directory for filesystems: ['pyfs']\\n2019-12-06 23:32:42,323|azureml._base_sdk_common.user_agent|DEBUG|Fetching client info from /root/.azureml/clientinfo.json\\n2019-12-06 23:32:42,323|azureml._base_sdk_common.user_agent|DEBUG|Error loading client info: [Errno 2] No such file or directory: '/root/.azureml/clientinfo.json'\\n2019-12-06 23:32:42,721|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2019-12-06 23:32:42,721|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2019-12-06 23:32:42,722|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2019-12-06 23:32:42,722|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2019-12-06 23:32:42,722|azureml.core.run|DEBUG|Adding new factory for run source hyperdrive\\n2019-12-06 23:32:43,300|azureml.core.run|DEBUG|Adding new factory for run source azureml.PipelineRun\\n2019-12-06 23:32:43,306|azureml.core.run|DEBUG|Adding new factory for run source azureml.ReusedStepRun\\n2019-12-06 23:32:43,311|azureml.core.run|DEBUG|Adding new factory for run source azureml.StepRun\\n2019-12-06 23:32:43,316|azureml.core.run|DEBUG|Adding new factory for run source azureml.scriptrun\\n2019-12-06 23:32:43,318|azureml.core.authentication.TokenRefresherDaemon|DEBUG|Starting daemon and triggering first instance\\n2019-12-06 23:32:43,324|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:43,325|azureml._restclient.clientbase|INFO|Created a worker pool for first use\\n2019-12-06 23:32:43,325|azureml.core.authentication|DEBUG|Time to expire 1813966.674698 seconds\\n2019-12-06 23:32:43,325|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,325|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 
23:32:43,325|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,325|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,325|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,325|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,326|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,326|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,326|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,356|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:43,361|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:43,369|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:43,374|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:43,379|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:43,385|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:43,385|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2019-12-06 23:32:43,386|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2019-12-06 23:32:43,386|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575674728_d40baeba'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG|Request method: 'GET'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG|Request headers:\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '2a72fb1c-fdba-4e6d-a244-7315dcdf5d54'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG| 'request-id': '2a72fb1c-fdba-4e6d-a244-7315dcdf5d54'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG| 
'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.0) msrest/0.6.10 azureml._restclient/core.1.0.76'\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG|Request body:\\n2019-12-06 23:32:43,387|msrest.http_logger|DEBUG|None\\n2019-12-06 23:32:43,387|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2019-12-06 23:32:43,387|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2019-12-06 23:32:43,387|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2019-12-06 23:32:43,387|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2019-12-06 23:32:43,442|msrest.http_logger|DEBUG|Response status: 200\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG|Response headers:\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'Date': 'Fri, 06 Dec 2019 23:32:43 GMT'\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2019-12-06 23:32:43,443|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '2a72fb1c-fdba-4e6d-a244-7315dcdf5d54'\\n2019-12-06 23:32:43,444|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2019-12-06 23:32:43,444|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2019-12-06 23:32:43,444|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2019-12-06 23:32:43,444|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2019-12-06 23:32:43,444|msrest.http_logger|DEBUG|Response content:\\n2019-12-06 23:32:43,444|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 1516,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1575674728_d40baeba\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2019-12-06T23:25:30.5978583+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 10,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1575674728_d40baeba\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2019-12-06T23:30:15.4122862+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1575674728_d40baeba\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": 
\\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"azureml.git.branch\\\": \\\"staging\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"staging\\\",\\n \\\"azureml.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi02\\\",\\n \\\"tags\\\": {\\n \\\"_aml_system_ComputeTargetStatus\\\": \\\"{\\\\\\\"AllocationState\\\\\\\":\\\\\\\"steady\\\\\\\",\\\\\\\"PreparingNodeCount\\\\\\\":1,\\\\\\\"RunningNodeCount\\\\\\\":0,\\\\\\\"CurrentNodeCount\\\\\\\":1}\\\"\\n },\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1575674728_d40baeba/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575674728_d40baeba/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575674728_d40baeba/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false\\n}\\n2019-12-06 23:32:43,449|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2019-12-06 23:32:43,450|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'azureml.git.branch': 'staging', 'mlflow.source.git.branch': 'staging', 'azureml.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'mlflow.source.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2019-12-06 23:32:43,450|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2019-12-06 23:32:43,451|azureml|WARNING|Could not import azureml.mlflow or azureml.contrib.mlflow mlflow APIs will not run against AzureML services. 
Add azureml-mlflow as a conda dependency for the run if this behavior is desired\\n2019-12-06 23:32:43,451|azureml.WorkerPool|DEBUG|[START]\\n2019-12-06 23:32:43,451|azureml.SendRunKillSignal|DEBUG|[START]\\n2019-12-06 23:32:43,451|azureml.RunStatusContext|DEBUG|[START]\\n2019-12-06 23:32:43,451|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunContextManager.RunStatusContext|DEBUG|[START]\\n2019-12-06 23:32:43,451|azureml.WorkingDirectoryCM|DEBUG|[START]\\n2019-12-06 23:32:43,451|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[START]\\n2019-12-06 23:32:43,451|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575674728_d40baeba/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575674728_d40baeba\\n2019-12-06 23:32:43,451|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2019-12-06 23:32:43,451|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Storing working dir for pyfs as /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575674728_d40baeba/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575674728_d40baeba\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,592|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,593|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,593|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-06 23:32:45,599|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:45,600|azureml._run_impl.run_history_facade|DEBUG|Created a static thread pool for RunHistoryFacade class\\n2019-12-06 23:32:45,605|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, 
max_backoff=90\\n2019-12-06 23:32:45,610|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:45,616|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:45,621|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:32:45,622|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2019-12-06 23:32:45,622|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2019-12-06 23:32:45,622|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575674728_d40baeba'\\n2019-12-06 23:32:45,622|msrest.http_logger|DEBUG|Request method: 'GET'\\n2019-12-06 23:32:45,622|msrest.http_logger|DEBUG|Request headers:\\n2019-12-06 23:32:45,622|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2019-12-06 23:32:45,622|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-06 23:32:45,623|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '7502a986-27e5-47c2-8a48-e5501a0dda7c'\\n2019-12-06 23:32:45,623|msrest.http_logger|DEBUG| 'request-id': '7502a986-27e5-47c2-8a48-e5501a0dda7c'\\n2019-12-06 23:32:45,623|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.0) msrest/0.6.10 azureml._restclient/core.1.0.76'\\n2019-12-06 23:32:45,623|msrest.http_logger|DEBUG|Request body:\\n2019-12-06 23:32:45,623|msrest.http_logger|DEBUG|None\\n2019-12-06 23:32:45,623|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2019-12-06 23:32:45,623|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2019-12-06 23:32:45,623|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2019-12-06 23:32:45,623|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2019-12-06 23:32:46,018|msrest.http_logger|DEBUG|Response status: 200\\n2019-12-06 23:32:46,018|msrest.http_logger|DEBUG|Response headers:\\n2019-12-06 23:32:46,018|msrest.http_logger|DEBUG| 'Date': 'Fri, 06 Dec 2019 23:32:46 GMT'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '7502a986-27e5-47c2-8a48-e5501a0dda7c'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2019-12-06 23:32:46,019|msrest.http_logger|DEBUG|Response content:\\n2019-12-06 
23:32:46,019|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 1516,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1575674728_d40baeba\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2019-12-06T23:25:30.5978583+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 10,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1575674728_d40baeba\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2019-12-06T23:30:15.4122862+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1575674728_d40baeba\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"azureml.git.branch\\\": \\\"staging\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"staging\\\",\\n \\\"azureml.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi02\\\",\\n \\\"tags\\\": {\\n \\\"_aml_system_ComputeTargetStatus\\\": \\\"{\\\\\\\"AllocationState\\\\\\\":\\\\\\\"steady\\\\\\\",\\\\\\\"PreparingNodeCount\\\\\\\":1,\\\\\\\"RunningNodeCount\\\\\\\":0,\\\\\\\"CurrentNodeCount\\\\\\\":1}\\\"\\n },\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1575674728_d40baeba/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575674728_d40baeba/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": 
\\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575674728_d40baeba/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false\\n}\\n2019-12-06 23:32:46,022|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2019-12-06 23:32:46,023|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'azureml.git.branch': 'staging', 'mlflow.source.git.branch': 'staging', 'azureml.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'mlflow.source.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2019-12-06 23:32:46,023|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2019-12-06 23:33:13,322|azureml.core.authentication|DEBUG|Time to expire 1813936.677149 seconds\\n2019-12-06 23:33:43,323|azureml.core.authentication|DEBUG|Time to expire 1813906.67683 seconds\\n2019-12-06 23:33:57,866|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2019-12-06 23:33:57,867|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2019-12-06 23:33:57,867|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2019-12-06 23:33:57,911|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2019-12-06 23:33:57,911|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /devito\\n2019-12-06 23:33:57,911|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|pyfs has path /devito\\n2019-12-06 23:33:57,911|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Reverting working dir from /devito to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575674728_d40baeba/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575674728_d40baeba\\n2019-12-06 23:33:57,911|azureml.history._tracking.PythonWorkingDirectory|INFO|Setting working dir to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575674728_d40baeba/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575674728_d40baeba\\n2019-12-06 23:33:57,912|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[STOP]\\n2019-12-06 23:33:57,912|azureml.WorkingDirectoryCM|DEBUG|[STOP]\\n2019-12-06 23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba|INFO|complete is not setting status for submitted runs.\\n2019-12-06 
23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2019-12-06 23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2019-12-06 23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2019-12-06 23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2019-12-06 23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2019-12-06 23:33:57,912|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300 is different from task queue timeout 120, using flush timeout\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300 seconds on tasks: [].\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2019-12-06 23:33:57,913|azureml.RunStatusContext|DEBUG|[STOP]\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [].\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2019-12-06 23:33:57,913|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2019-12-06 23:33:57,914|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2019-12-06 23:33:57,914|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2019-12-06 23:33:57,914|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Start]\\n2019-12-06 23:33:57,914|azureml.BatchTaskQueueAdd_1_Batches.WorkerPool|DEBUG|submitting future: _handle_batch\\n2019-12-06 
23:33:57,914|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Batch size 1.\\n2019-12-06 23:33:57,914|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch|DEBUG|Using basic handler - no exception handling\\n2019-12-06 23:33:57,914|azureml._restclient.clientbase.WorkerPool|DEBUG|submitting future: _log_batch\\n2019-12-06 23:33:57,914|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|Adding task 0__handle_batch to queue of approximate size: 0\\n2019-12-06 23:33:57,915|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch|DEBUG|Using basic handler - no exception handling\\n2019-12-06 23:33:57,915|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[START]\\n2019-12-06 23:33:57,915|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Stop] - waiting default timeout\\n2019-12-06 23:33:57,915|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Adding task 0__log_batch to queue of approximate size: 0\\n2019-12-06 23:33:57,916|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2019-12-06 23:33:57,916|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[START]\\n2019-12-06 23:33:57,917|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-06 23:33:57,917|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Overriding default flush timeout from None to 120\\n2019-12-06 23:33:57,917|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575674728_d40baeba/batch/metrics'\\n2019-12-06 23:33:57,917|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Waiting 120 seconds on tasks: [AsyncTask(0__handle_batch)].\\n2019-12-06 23:33:57,918|msrest.http_logger|DEBUG|Request method: 'POST'\\n2019-12-06 23:33:57,918|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[START]\\n2019-12-06 23:33:57,918|msrest.http_logger|DEBUG|Request headers:\\n2019-12-06 23:33:57,918|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|Awaiter is BatchTaskQueueAdd_1_Batches\\n2019-12-06 23:33:57,918|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2019-12-06 23:33:57,918|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[STOP]\\n2019-12-06 23:33:57,918|msrest.http_logger|DEBUG| 'Content-Type': 'application/json-patch+json; charset=utf-8'\\n2019-12-06 23:33:57,918|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|\\n2019-12-06 23:33:57,918|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '7318af30-3aa3-4d84-a4db-0595c67afd70'\\n2019-12-06 23:33:57,918|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[STOP]\\n2019-12-06 23:33:57,919|msrest.http_logger|DEBUG| 'request-id': '7318af30-3aa3-4d84-a4db-0595c67afd70'\\n2019-12-06 23:33:57,919|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2019-12-06 23:33:57,919|msrest.http_logger|DEBUG| 
'Content-Length': '410'\\n2019-12-06 23:33:57,919|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2019-12-06 23:33:57,919|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.0) msrest/0.6.10 azureml._restclient/core.1.0.76 sdk_run'\\n2019-12-06 23:33:57,919|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [AsyncTask(0__log_batch)].\\n2019-12-06 23:33:57,919|msrest.http_logger|DEBUG|Request body:\\n2019-12-06 23:33:57,919|msrest.http_logger|DEBUG|{\\\"values\\\": [{\\\"metricId\\\": \\\"d160ffa3-e1bc-4ff2-b60f-7742b38cdfd2\\\", \\\"metricType\\\": \\\"azureml.v1.scalar\\\", \\\"createdUtc\\\": \\\"2019-12-06T23:33:57.866688Z\\\", \\\"name\\\": \\\"training_message01: \\\", \\\"description\\\": \\\"\\\", \\\"numCells\\\": 1, \\\"cells\\\": [{\\\"training_message01: \\\": \\\"finished experiment\\\"}], \\\"schema\\\": {\\\"numProperties\\\": 1, \\\"properties\\\": [{\\\"propertyId\\\": \\\"training_message01: \\\", \\\"name\\\": \\\"training_message01: \\\", \\\"type\\\": \\\"string\\\"}]}}]}\\n2019-12-06 23:33:57,919|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2019-12-06 23:33:57,920|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2019-12-06 23:33:57,920|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2019-12-06 23:33:57,920|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG|Response status: 200\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG|Response headers:\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG| 'Date': 'Fri, 06 Dec 2019 23:33:58 GMT'\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG| 'Content-Length': '0'\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '7318af30-3aa3-4d84-a4db-0595c67afd70'\\n2019-12-06 23:33:58,044|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2019-12-06 23:33:58,045|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2019-12-06 23:33:58,045|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2019-12-06 23:33:58,045|msrest.http_logger|DEBUG|Response content:\\n2019-12-06 23:33:58,045|msrest.http_logger|DEBUG|\\n2019-12-06 23:33:58,045|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[STOP]\\n2019-12-06 23:33:58,170|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[START]\\n2019-12-06 23:33:58,170|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|Awaiter is PostMetricsBatch\\n2019-12-06 23:33:58,170|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[STOP]\\n2019-12-06 
23:33:58,170|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Waiting on task: 0__log_batch.\\n1 tasks left. Current duration of flush 0.0002143383026123047 seconds.\\n\\n2019-12-06 23:33:58,170|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2019-12-06 23:33:58,170|azureml._SubmittedRun#020_AzureMLEstimator_1575674728_d40baeba.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2019-12-06 23:33:58,170|azureml.SendRunKillSignal|DEBUG|[STOP]\\n2019-12-06 23:33:58,170|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[START]\\n2019-12-06 23:33:58,170|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[STOP]\\n2019-12-06 23:33:58,170|azureml.WorkerPool|DEBUG|[STOP]\\n\\nRun is completed.\", \"graph\": {}, \"widget_settings\": {\"childWidgetDisplay\": \"popup\", \"send_telemetry\": false, \"log_level\": \"NOTSET\", \"sdk_version\": \"1.0.76\"}, \"loading\": false}" + "application/aml.mini.widget.v1": "{\"status\": \"Completed\", \"workbench_run_details_uri\": \"https://ml.azure.com/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578104365_7f98f029?wsid=/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourcegroups/ghiordanfwirsg01/workspaces/ghiordanfwiws\", \"run_id\": \"020_AzureMLEstimator_1578104365_7f98f029\", \"run_properties\": {\"run_id\": \"020_AzureMLEstimator_1578104365_7f98f029\", \"created_utc\": \"2020-01-04T02:19:29.146046Z\", \"properties\": {\"_azureml.ComputeTargetType\": \"amlcompute\", \"ContentSnapshotId\": \"a5071b2a-37a7-40da-8340-69cc894091cb\", \"azureml.git.repository_uri\": \"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\", \"mlflow.source.git.repoURL\": \"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\", \"azureml.git.branch\": \"ghiordan/azureml_devito04\", \"mlflow.source.git.branch\": \"ghiordan/azureml_devito04\", \"azureml.git.commit\": \"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\", \"mlflow.source.git.commit\": \"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\", \"azureml.git.dirty\": \"True\", \"ProcessInfoFile\": \"azureml-logs/process_info.json\", \"ProcessStatusFile\": \"azureml-logs/process_status.json\"}, \"tags\": {\"_aml_system_ComputeTargetStatus\": \"{\\\"AllocationState\\\":\\\"steady\\\",\\\"PreparingNodeCount\\\":1,\\\"RunningNodeCount\\\":0,\\\"CurrentNodeCount\\\":1}\"}, \"script_name\": null, \"arguments\": null, \"end_time_utc\": \"2020-01-04T02:28:26.519321Z\", \"status\": \"Completed\", \"log_files\": {\"azureml-logs/55_azureml-execution-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/azureml-logs/55_azureml-execution-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt?sv=2019-02-02&sr=b&sig=gs3WJ%2BZC5QV8V%2BdMT1zXbfYW0CHBUb%2FAS7HzsNABmTI%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"azureml-logs/65_job_prep-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt\": 
\"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/azureml-logs/65_job_prep-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt?sv=2019-02-02&sr=b&sig=O1S10hFUaUhxPJXpUxGbq%2BHkCnDNQlzhIGsLA8oVAuw%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"azureml-logs/70_driver_log.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/azureml-logs/70_driver_log.txt?sv=2019-02-02&sr=b&sig=Llu%2BGoIPioRU9AnUPEbBXr2BZuIY%2BzQ7wbDZb%2BfwP8k%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"azureml-logs/75_job_post-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/azureml-logs/75_job_post-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt?sv=2019-02-02&sr=b&sig=MPmJi0vLgA869NOFWCR7okSEO4t9uTtrQ4kxelUV0yA%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"azureml-logs/process_info.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/azureml-logs/process_info.json?sv=2019-02-02&sr=b&sig=nv%2Fd5rAAqJsP8uo8HcDag25bVwIw5aeST16VPRBo8vk%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"azureml-logs/process_status.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/azureml-logs/process_status.json?sv=2019-02-02&sr=b&sig=rWbAamDOk%2F9T5bwuSJmlvZ4QehlE31Fo2fZ5Rhbcau8%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"logs/azureml/689_azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/logs/azureml/689_azureml.log?sv=2019-02-02&sr=b&sig=xQwoXQUPG1BYQWaF9sRCbk%2Bm3%2FasZOTLyCWfGIOTPhw%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\", \"logs/azureml/azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578104365_7f98f029/logs/azureml/azureml.log?sv=2019-02-02&sr=b&sig=jZcSWDRZeWwKlQ82%2FYIYVQo6yerPGKV56HLnHF4cbPE%3D&st=2020-01-04T02%3A18%3A37Z&se=2020-01-04T10%3A28%3A37Z&sp=r\"}, \"log_groups\": [[\"azureml-logs/process_info.json\", \"azureml-logs/process_status.json\", \"logs/azureml/azureml.log\"], [\"azureml-logs/55_azureml-execution-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt\"], [\"azureml-logs/65_job_prep-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt\"], [\"azureml-logs/70_driver_log.txt\"], [\"azureml-logs/75_job_post-tvmps_81163cbc97f5ab08ce9248634652cf612a0c3b341dc76c61636cd017f913afb2_d.txt\"], [\"logs/azureml/689_azureml.log\"]], \"run_duration\": \"0:08:57\"}, \"child_runs\": [], \"children_metrics\": {}, \"run_metrics\": [{\"name\": \"training_message01: \", \"run_id\": \"020_AzureMLEstimator_1578104365_7f98f029\", \"categories\": [0], \"series\": [{\"data\": [\"finished experiment\"]}]}], \"run_logs\": \"2020-01-04 02:26:41,710|azureml|DEBUG|Inputs:: kwargs: {'OutputCollection': True, 'snapshotProject': True, 'only_in_process_features': True, 'skip_track_logs_dir': True}, track_folders: None, deny_list: None, directories_to_watch: []\\n2020-01-04 
02:26:41,711|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Execution target type: batchai\\n2020-01-04 02:26:41,711|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Failed to import pyspark with error: No module named 'pyspark'\\n2020-01-04 02:26:41,711|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Pinning working directory for filesystems: ['pyfs']\\n2020-01-04 02:26:42,039|azureml._base_sdk_common.user_agent|DEBUG|Fetching client info from /root/.azureml/clientinfo.json\\n2020-01-04 02:26:42,040|azureml._base_sdk_common.user_agent|DEBUG|Error loading client info: [Errno 2] No such file or directory: '/root/.azureml/clientinfo.json'\\n2020-01-04 02:26:42,431|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2020-01-04 02:26:42,431|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2020-01-04 02:26:42,431|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2020-01-04 02:26:42,431|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2020-01-04 02:26:42,431|azureml.core.run|DEBUG|Adding new factory for run source hyperdrive\\n2020-01-04 02:26:43,032|azureml.core.run|DEBUG|Adding new factory for run source azureml.PipelineRun\\n2020-01-04 02:26:43,037|azureml.core.run|DEBUG|Adding new factory for run source azureml.ReusedStepRun\\n2020-01-04 02:26:43,043|azureml.core.run|DEBUG|Adding new factory for run source azureml.StepRun\\n2020-01-04 02:26:43,048|azureml.core.run|DEBUG|Adding new factory for run source azureml.scriptrun\\n2020-01-04 02:26:43,050|azureml.core.authentication.TokenRefresherDaemon|DEBUG|Starting daemon and triggering first instance\\n2020-01-04 02:26:43,056|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:43,057|azureml._restclient.clientbase|INFO|Created a worker pool for first use\\n2020-01-04 02:26:43,057|azureml.core.authentication|DEBUG|Time to expire 1813965.942446 seconds\\n2020-01-04 02:26:43,057|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,057|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,057|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,058|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,058|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,058|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,058|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: 
https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,058|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,058|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,094|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:43,099|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:43,107|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:43,112|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:43,118|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:43,123|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:43,124|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2020-01-04 02:26:43,124|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2020-01-04 02:26:43,124|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578104365_7f98f029'\\n2020-01-04 02:26:43,125|msrest.http_logger|DEBUG|Request method: 'GET'\\n2020-01-04 02:26:43,125|msrest.http_logger|DEBUG|Request headers:\\n2020-01-04 02:26:43,125|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2020-01-04 02:26:43,125|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 02:26:43,125|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '27689e15-3f4c-44fb-a317-1ce64e869860'\\n2020-01-04 02:26:43,126|msrest.http_logger|DEBUG| 'request-id': '27689e15-3f4c-44fb-a317-1ce64e869860'\\n2020-01-04 02:26:43,126|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.1) msrest/0.6.10 azureml._restclient/core.1.0.81'\\n2020-01-04 02:26:43,126|msrest.http_logger|DEBUG|Request body:\\n2020-01-04 02:26:43,126|msrest.http_logger|DEBUG|None\\n2020-01-04 02:26:43,126|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2020-01-04 02:26:43,126|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2020-01-04 02:26:43,126|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2020-01-04 02:26:43,126|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2020-01-04 02:26:43,182|msrest.http_logger|DEBUG|Response status: 200\\n2020-01-04 02:26:43,182|msrest.http_logger|DEBUG|Response headers:\\n2020-01-04 02:26:43,182|msrest.http_logger|DEBUG| 'Date': 'Sat, 04 Jan 2020 02:26:43 GMT'\\n2020-01-04 02:26:43,182|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 
02:26:43,182|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '27689e15-3f4c-44fb-a317-1ce64e869860'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2020-01-04 02:26:43,183|msrest.http_logger|DEBUG|Response content:\\n2020-01-04 02:26:43,184|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 6733,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1578104365_7f98f029\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2020-01-04T02:19:29.1460463+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 10,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1578104365_7f98f029\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2020-01-04T02:24:16.7785992+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1578104365_7f98f029\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"azureml.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"azureml.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi02\\\",\\n \\\"tags\\\": {\\n \\\"_aml_system_ComputeTargetStatus\\\": 
\\\"{\\\\\\\"AllocationState\\\\\\\":\\\\\\\"steady\\\\\\\",\\\\\\\"PreparingNodeCount\\\\\\\":1,\\\\\\\"RunningNodeCount\\\\\\\":0,\\\\\\\"CurrentNodeCount\\\\\\\":1}\\\"\\n },\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1578104365_7f98f029/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578104365_7f98f029/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578104365_7f98f029/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false,\\n \\\"queueingInfo\\\": null\\n}\\n2020-01-04 02:26:43,189|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2020-01-04 02:26:43,190|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'azureml.git.branch': 'ghiordan/azureml_devito04', 'mlflow.source.git.branch': 'ghiordan/azureml_devito04', 'azureml.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'mlflow.source.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2020-01-04 02:26:43,190|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2020-01-04 02:26:43,190|azureml|WARNING|Could not import azureml.mlflow or azureml.contrib.mlflow mlflow APIs will not run against AzureML services. 
Add azureml-mlflow as a conda dependency for the run if this behavior is desired\\n2020-01-04 02:26:43,191|azureml.WorkerPool|DEBUG|[START]\\n2020-01-04 02:26:43,191|azureml.SendRunKillSignal|DEBUG|[START]\\n2020-01-04 02:26:43,191|azureml.RunStatusContext|DEBUG|[START]\\n2020-01-04 02:26:43,191|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunContextManager.RunStatusContext|DEBUG|[START]\\n2020-01-04 02:26:43,191|azureml.WorkingDirectoryCM|DEBUG|[START]\\n2020-01-04 02:26:43,191|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[START]\\n2020-01-04 02:26:43,191|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578104365_7f98f029/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578104365_7f98f029\\n2020-01-04 02:26:43,191|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2020-01-04 02:26:43,191|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Storing working dir for pyfs as /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578104365_7f98f029/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578104365_7f98f029\\n2020-01-04 02:26:45,676|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,676|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,676|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,676|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,676|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,677|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,677|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,677|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,677|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 02:26:45,684|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:45,685|azureml._run_impl.run_history_facade|DEBUG|Created a static thread pool for RunHistoryFacade class\\n2020-01-04 02:26:45,690|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, 
max_backoff=90\\n2020-01-04 02:26:45,695|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:45,701|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:45,706|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:26:45,707|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2020-01-04 02:26:45,707|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2020-01-04 02:26:45,707|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578104365_7f98f029'\\n2020-01-04 02:26:45,707|msrest.http_logger|DEBUG|Request method: 'GET'\\n2020-01-04 02:26:45,707|msrest.http_logger|DEBUG|Request headers:\\n2020-01-04 02:26:45,707|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2020-01-04 02:26:45,708|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 02:26:45,708|msrest.http_logger|DEBUG| 'x-ms-client-request-id': 'de302e01-309f-47c2-99d4-7d72185c98a5'\\n2020-01-04 02:26:45,708|msrest.http_logger|DEBUG| 'request-id': 'de302e01-309f-47c2-99d4-7d72185c98a5'\\n2020-01-04 02:26:45,708|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.1) msrest/0.6.10 azureml._restclient/core.1.0.81'\\n2020-01-04 02:26:45,708|msrest.http_logger|DEBUG|Request body:\\n2020-01-04 02:26:45,708|msrest.http_logger|DEBUG|None\\n2020-01-04 02:26:45,708|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2020-01-04 02:26:45,708|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2020-01-04 02:26:45,708|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2020-01-04 02:26:45,708|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG|Response status: 200\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG|Response headers:\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'Date': 'Sat, 04 Jan 2020 02:26:45 GMT'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'x-ms-client-request-id': 'de302e01-309f-47c2-99d4-7d72185c98a5'\\n2020-01-04 02:26:45,768|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2020-01-04 02:26:45,769|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2020-01-04 02:26:45,769|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2020-01-04 02:26:45,769|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2020-01-04 02:26:45,769|msrest.http_logger|DEBUG|Response content:\\n2020-01-04 
02:26:45,769|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 6733,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1578104365_7f98f029\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2020-01-04T02:19:29.1460463+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 10,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1578104365_7f98f029\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2020-01-04T02:24:16.7785992+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1578104365_7f98f029\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"azureml.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"azureml.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi02\\\",\\n \\\"tags\\\": {\\n \\\"_aml_system_ComputeTargetStatus\\\": \\\"{\\\\\\\"AllocationState\\\\\\\":\\\\\\\"steady\\\\\\\",\\\\\\\"PreparingNodeCount\\\\\\\":1,\\\\\\\"RunningNodeCount\\\\\\\":0,\\\\\\\"CurrentNodeCount\\\\\\\":1}\\\"\\n },\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1578104365_7f98f029/020_UseAzureMLEstimatorForExperimentation_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578104365_7f98f029/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": 
\\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578104365_7f98f029/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false,\\n \\\"queueingInfo\\\": null\\n}\\n2020-01-04 02:26:45,771|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2020-01-04 02:26:45,771|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'azureml.git.branch': 'ghiordan/azureml_devito04', 'mlflow.source.git.branch': 'ghiordan/azureml_devito04', 'azureml.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'mlflow.source.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2020-01-04 02:26:45,772|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2020-01-04 02:27:13,055|azureml.core.authentication|DEBUG|Time to expire 1813935.945032 seconds\\n2020-01-04 02:27:43,055|azureml.core.authentication|DEBUG|Time to expire 1813905.944679 seconds\\n2020-01-04 02:28:03,352|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2020-01-04 02:28:03,353|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2020-01-04 02:28:03,353|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2020-01-04 02:28:03,431|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2020-01-04 02:28:03,431|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /devito\\n2020-01-04 02:28:03,431|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|pyfs has path /devito\\n2020-01-04 02:28:03,431|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Reverting working dir from /devito to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578104365_7f98f029/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578104365_7f98f029\\n2020-01-04 02:28:03,431|azureml.history._tracking.PythonWorkingDirectory|INFO|Setting working dir to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578104365_7f98f029/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578104365_7f98f029\\n2020-01-04 02:28:03,431|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[STOP]\\n2020-01-04 02:28:03,431|azureml.WorkingDirectoryCM|DEBUG|[STOP]\\n2020-01-04 02:28:03,431|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029|INFO|complete is not setting status for submitted 
runs.\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300 is different from task queue timeout 120, using flush timeout\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300 seconds on tasks: [].\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2020-01-04 02:28:03,432|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2020-01-04 02:28:03,433|azureml.RunStatusContext|DEBUG|[STOP]\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [].\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2020-01-04 02:28:03,433|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2020-01-04 02:28:03,434|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Start]\\n2020-01-04 02:28:03,434|azureml.BatchTaskQueueAdd_1_Batches.WorkerPool|DEBUG|submitting future: _handle_batch\\n2020-01-04 
02:28:03,434|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Batch size 1.\\n2020-01-04 02:28:03,434|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch|DEBUG|Using basic handler - no exception handling\\n2020-01-04 02:28:03,434|azureml._restclient.clientbase.WorkerPool|DEBUG|submitting future: _log_batch\\n2020-01-04 02:28:03,434|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|Adding task 0__handle_batch to queue of approximate size: 0\\n2020-01-04 02:28:03,435|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Stop] - waiting default timeout\\n2020-01-04 02:28:03,435|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[START]\\n2020-01-04 02:28:03,436|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch|DEBUG|Using basic handler - no exception handling\\n2020-01-04 02:28:03,436|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[START]\\n2020-01-04 02:28:03,437|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2020-01-04 02:28:03,437|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Adding task 0__log_batch to queue of approximate size: 0\\n2020-01-04 02:28:03,437|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Overriding default flush timeout from None to 120\\n2020-01-04 02:28:03,438|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 02:28:03,438|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Waiting 120 seconds on tasks: [AsyncTask(0__handle_batch)].\\n2020-01-04 02:28:03,438|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578104365_7f98f029/batch/metrics'\\n2020-01-04 02:28:03,438|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[START]\\n2020-01-04 02:28:03,438|msrest.http_logger|DEBUG|Request method: 'POST'\\n2020-01-04 02:28:03,438|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|Awaiter is BatchTaskQueueAdd_1_Batches\\n2020-01-04 02:28:03,439|msrest.http_logger|DEBUG|Request headers:\\n2020-01-04 02:28:03,439|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[STOP]\\n2020-01-04 02:28:03,439|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2020-01-04 02:28:03,439|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|\\n2020-01-04 02:28:03,439|msrest.http_logger|DEBUG| 'Content-Type': 'application/json-patch+json; charset=utf-8'\\n2020-01-04 02:28:03,439|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[STOP]\\n2020-01-04 02:28:03,439|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '09e4c3b1-5347-4544-add3-1e02c72558a0'\\n2020-01-04 02:28:03,439|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2020-01-04 02:28:03,439|msrest.http_logger|DEBUG| 'request-id': '09e4c3b1-5347-4544-add3-1e02c72558a0'\\n2020-01-04 
02:28:03,440|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2020-01-04 02:28:03,440|msrest.http_logger|DEBUG| 'Content-Length': '410'\\n2020-01-04 02:28:03,440|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [AsyncTask(0__log_batch)].\\n2020-01-04 02:28:03,440|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.1) msrest/0.6.10 azureml._restclient/core.1.0.81 sdk_run'\\n2020-01-04 02:28:03,440|msrest.http_logger|DEBUG|Request body:\\n2020-01-04 02:28:03,440|msrest.http_logger|DEBUG|{\\\"values\\\": [{\\\"metricId\\\": \\\"acfd5695-72ca-449b-a3c8-3b7b72780aec\\\", \\\"metricType\\\": \\\"azureml.v1.scalar\\\", \\\"createdUtc\\\": \\\"2020-01-04T02:28:03.352477Z\\\", \\\"name\\\": \\\"training_message01: \\\", \\\"description\\\": \\\"\\\", \\\"numCells\\\": 1, \\\"cells\\\": [{\\\"training_message01: \\\": \\\"finished experiment\\\"}], \\\"schema\\\": {\\\"numProperties\\\": 1, \\\"properties\\\": [{\\\"propertyId\\\": \\\"training_message01: \\\", \\\"name\\\": \\\"training_message01: \\\", \\\"type\\\": \\\"string\\\"}]}}]}\\n2020-01-04 02:28:03,440|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2020-01-04 02:28:03,440|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2020-01-04 02:28:03,440|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2020-01-04 02:28:03,441|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2020-01-04 02:28:03,582|msrest.http_logger|DEBUG|Response status: 200\\n2020-01-04 02:28:03,582|msrest.http_logger|DEBUG|Response headers:\\n2020-01-04 02:28:03,582|msrest.http_logger|DEBUG| 'Date': 'Sat, 04 Jan 2020 02:28:03 GMT'\\n2020-01-04 02:28:03,582|msrest.http_logger|DEBUG| 'Content-Length': '0'\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '09e4c3b1-5347-4544-add3-1e02c72558a0'\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG|Response content:\\n2020-01-04 02:28:03,583|msrest.http_logger|DEBUG|\\n2020-01-04 02:28:03,584|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[STOP]\\n2020-01-04 02:28:03,690|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[START]\\n2020-01-04 02:28:03,691|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|Awaiter is PostMetricsBatch\\n2020-01-04 02:28:03,691|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[STOP]\\n2020-01-04 
02:28:03,691|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Waiting on task: 0__log_batch.\\n1 tasks left. Current duration of flush 0.00022459030151367188 seconds.\\n\\n2020-01-04 02:28:03,691|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2020-01-04 02:28:03,691|azureml._SubmittedRun#020_AzureMLEstimator_1578104365_7f98f029.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2020-01-04 02:28:03,691|azureml.SendRunKillSignal|DEBUG|[STOP]\\n2020-01-04 02:28:03,691|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[START]\\n2020-01-04 02:28:03,691|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[STOP]\\n2020-01-04 02:28:03,691|azureml.WorkerPool|DEBUG|[STOP]\\n\\nRun is completed.\", \"graph\": {}, \"widget_settings\": {\"childWidgetDisplay\": \"popup\", \"send_telemetry\": false, \"log_level\": \"INFO\", \"sdk_version\": \"1.0.81\"}, \"loading\": false}" }, "metadata": {}, "output_type": "display_data" @@ -935,7 +926,7 @@ { "data": { "text/plain": [ - "'runId= 020_AzureMLEstimator_1575674728_d40baeba'" + "'runId= 020_AzureMLEstimator_1578104365_7f98f029'" ] }, "execution_count": 19, @@ -945,7 +936,7 @@ { "data": { "text/plain": [ - "'experimentation baseImage: fwi01_azureml:sdk.v1.0.76'" + "'experimentation baseImage: fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 19, diff --git a/contrib/fwi/azureml_devito/notebooks/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb b/contrib/fwi/azureml_devito/notebooks/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb index 1b6ecaf4..fa055491 100755 --- a/contrib/fwi/azureml_devito/notebooks/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb +++ b/contrib/fwi/azureml_devito/notebooks/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb @@ -75,13 +75,13 @@ "name": "stdout", "output_type": "stream", "text": [ - "Azure ML SDK Version: 1.0.76\n" + "Azure ML SDK Version: 1.0.81\n" ] }, { "data": { "text/plain": [ - "'Linux-4.15.0-1064-azure-x86_64-with-debian-stretch-sid'" + "'Linux-4.15.0-1064-azure-x86_64-with-debian-10.1'" ] }, "execution_count": 3, @@ -91,7 +91,7 @@ { "data": { "text/plain": [ - "'/datadrive01/prj/DeepSeismic/contrib/fwi/azureml_devito/notebooks'" + "'/workspace/contrib/fwi/azureml_devito/notebooks'" ] }, "execution_count": 3, @@ -481,7 +481,7 @@ { "data": { "text/plain": [ - "'fwi01_azureml:sdk.v1.0.76'" + "'fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 9, @@ -491,7 +491,7 @@ { "data": { "text/plain": [ - "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76'" + "'fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81'" ] }, "execution_count": 9, @@ -529,7 +529,7 @@ { "data": { "text/plain": [ - "'docker run -i --rm --name fwi01_azureml_container02 fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.76 /bin/bash -c \"which python\" '" + "'docker run -i --rm --name fwi01_azureml_container02 fwi01acr.azurecr.io/fwi01_azureml:sdk.v1.0.81 /bin/bash -c \"which python\" '" ] }, "execution_count": 10, @@ -700,15 +700,6 @@ "execution_count": 13, "metadata": {}, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING - Warning: Falling back to use azure cli login credentials.\n", - "If you run your code in unattended mode, i.e., where you can't give a user input, then we recommend to use ServicePrincipalAuthentication or MsiAuthentication.\n", - 
"Please refer to aka.ms/aml-notebook-auth for different authentication mechanisms in azureml-sdk.\n" - ] - }, { "name": "stdout", "output_type": "stream", @@ -835,12 +826,12 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "a0312dfcb82f419288e3c3c37c39b9dd", + "model_id": "40e874e0889e460086a809fdb5a0b03b", "version_major": 2, "version_minor": 0 }, "text/plain": [ - "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'NOTSET',…" + "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'INFO', '…" ] }, "metadata": {}, @@ -848,7 +839,7 @@ }, { "data": { - "application/aml.mini.widget.v1": "{\"status\": \"Running\", \"workbench_run_details_uri\": \"https://ml.azure.com/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575678435_be18a2fc?wsid=/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourcegroups/ghiordanfwirsg01/workspaces/ghiordanfwiws\", \"run_id\": \"020_AzureMLEstimator_1575678435_be18a2fc\", \"run_properties\": {\"run_id\": \"020_AzureMLEstimator_1575678435_be18a2fc\", \"created_utc\": \"2019-12-07T00:27:18.102865Z\", \"properties\": {\"_azureml.ComputeTargetType\": \"amlcompute\", \"ContentSnapshotId\": \"a5071b2a-37a7-40da-8340-69cc894091cb\", \"azureml.git.repository_uri\": \"git@github.com:georgeAccnt-GH/DeepSeismic.git\", \"mlflow.source.git.repoURL\": \"git@github.com:georgeAccnt-GH/DeepSeismic.git\", \"azureml.git.branch\": \"staging\", \"mlflow.source.git.branch\": \"staging\", \"azureml.git.commit\": \"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\", \"mlflow.source.git.commit\": \"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\", \"azureml.git.dirty\": \"True\", \"ProcessInfoFile\": \"azureml-logs/process_info.json\", \"ProcessStatusFile\": \"azureml-logs/process_status.json\"}, \"tags\": {\"_aml_system_ComputeTargetStatus\": \"{\\\"AllocationState\\\":\\\"steady\\\",\\\"PreparingNodeCount\\\":1,\\\"RunningNodeCount\\\":1,\\\"CurrentNodeCount\\\":2}\"}, \"script_name\": null, \"arguments\": null, \"end_time_utc\": null, \"status\": \"Running\", \"log_files\": {\"azureml-logs/55_azureml-execution-tvmps_e010639b61f121ff1dbd780d646c8bd4bc6a423228429632e00c37ab5e150756_p.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575678435_be18a2fc/azureml-logs/55_azureml-execution-tvmps_e010639b61f121ff1dbd780d646c8bd4bc6a423228429632e00c37ab5e150756_p.txt?sv=2019-02-02&sr=b&sig=99MfEJ4IvLwXgM3jjLm4amfljnv7gOK3%2BQPb1GN%2BZKg%3D&st=2019-12-07T00%3A22%3A27Z&se=2019-12-07T08%3A32%3A27Z&sp=r\"}, \"log_groups\": [[\"azureml-logs/55_azureml-execution-tvmps_e010639b61f121ff1dbd780d646c8bd4bc6a423228429632e00c37ab5e150756_p.txt\"]], \"run_duration\": \"0:05:10\"}, \"child_runs\": [], \"children_metrics\": {}, \"run_metrics\": [], \"run_logs\": \"2019-12-07T00:31:04Z Starting output-watcher...\\nLogin Succeeded\\nsdk.v1.0.76: Pulling from fwi01_azureml\\n1ab2bdfe9778: Pulling fs layer\\ndd7d28bd8be5: Pulling fs layer\\naf998e3a361b: Pulling fs layer\\n8f61820757bf: Pulling fs layer\\n0eb461057035: Pulling fs layer\\n23276e49c76d: Pulling fs layer\\nc55ca301ea9f: Pulling fs layer\\n0eb461057035: Waiting\\n8f61820757bf: Waiting\\nc55ca301ea9f: Waiting\\n1ab2bdfe9778: Verifying Checksum\\n1ab2bdfe9778: Download complete\\naf998e3a361b: Verifying Checksum\\naf998e3a361b: Download complete\\n0eb461057035: Verifying Checksum\\n0eb461057035: Download complete\\ndd7d28bd8be5: Verifying Checksum\\ndd7d28bd8be5: 
Download complete\\n1ab2bdfe9778: Pull complete\\n8f61820757bf: Verifying Checksum\\n8f61820757bf: Download complete\\ndd7d28bd8be5: Pull complete\\nc55ca301ea9f: Verifying Checksum\\nc55ca301ea9f: Download complete\\n23276e49c76d: Verifying Checksum\\n23276e49c76d: Download complete\\naf998e3a361b: Pull complete\\n8f61820757bf: Pull complete\\n0eb461057035: Pull complete\\n23276e49c76d: Pull complete\\n\", \"graph\": {}, \"widget_settings\": {\"childWidgetDisplay\": \"popup\", \"send_telemetry\": false, \"log_level\": \"NOTSET\", \"sdk_version\": \"1.0.76\"}, \"loading\": false}" + "application/aml.mini.widget.v1": "{\"status\": \"Running\", \"workbench_run_details_uri\": \"https://ml.azure.com/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578105537_18907742?wsid=/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourcegroups/ghiordanfwirsg01/workspaces/ghiordanfwiws\", \"run_id\": \"020_AzureMLEstimator_1578105537_18907742\", \"run_properties\": {\"run_id\": \"020_AzureMLEstimator_1578105537_18907742\", \"created_utc\": \"2020-01-04T02:39:01.163504Z\", \"properties\": {\"_azureml.ComputeTargetType\": \"amlcompute\", \"ContentSnapshotId\": \"a5071b2a-37a7-40da-8340-69cc894091cb\", \"azureml.git.repository_uri\": \"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\", \"mlflow.source.git.repoURL\": \"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\", \"azureml.git.branch\": \"ghiordan/azureml_devito04\", \"mlflow.source.git.branch\": \"ghiordan/azureml_devito04\", \"azureml.git.commit\": \"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\", \"mlflow.source.git.commit\": \"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\", \"azureml.git.dirty\": \"True\", \"ProcessInfoFile\": \"azureml-logs/process_info.json\", \"ProcessStatusFile\": \"azureml-logs/process_status.json\"}, \"tags\": {\"_aml_system_ComputeTargetStatus\": \"{\\\"AllocationState\\\":\\\"steady\\\",\\\"PreparingNodeCount\\\":1,\\\"RunningNodeCount\\\":0,\\\"CurrentNodeCount\\\":1}\"}, \"script_name\": null, \"arguments\": null, \"end_time_utc\": null, \"status\": \"Running\", \"log_files\": {\"azureml-logs/55_azureml-execution-tvmps_37354120d8a20f3b502269d3e8b5c071826d11429ecdbd884f819e97a5e689a3_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578105537_18907742/azureml-logs/55_azureml-execution-tvmps_37354120d8a20f3b502269d3e8b5c071826d11429ecdbd884f819e97a5e689a3_d.txt?sv=2019-02-02&sr=b&sig=kELMOp%2B45Wj9tH6EYGAZYb7EHmMEShXIeJCJ3LSb7fs%3D&st=2020-01-04T02%3A34%3A08Z&se=2020-01-04T10%3A44%3A08Z&sp=r\"}, \"log_groups\": [[\"azureml-logs/55_azureml-execution-tvmps_37354120d8a20f3b502269d3e8b5c071826d11429ecdbd884f819e97a5e689a3_d.txt\"]], \"run_duration\": \"0:05:07\"}, \"child_runs\": [], \"children_metrics\": {}, \"run_metrics\": [], \"run_logs\": \"2020-01-04T02:42:01Z Starting output-watcher...\\nLogin Succeeded\\nsdk.v1.0.81: Pulling from fwi01_azureml\\nb8f262c62ec6: Pulling fs layer\\n0a43c0154f16: Pulling fs layer\\n906d7b5da8fb: Pulling fs layer\\n265baab7d98a: Pulling fs layer\\n300780db9fa5: Pulling fs layer\\n6df08465f871: Pulling fs layer\\nb074f450fcba: Pulling fs layer\\n6df08465f871: Waiting\\nb074f450fcba: Waiting\\n300780db9fa5: Waiting\\n265baab7d98a: Waiting\\n0a43c0154f16: Verifying Checksum\\n0a43c0154f16: Download complete\\n906d7b5da8fb: Verifying Checksum\\n906d7b5da8fb: Download complete\\n300780db9fa5: Download complete\\nb8f262c62ec6: Download complete\\n265baab7d98a: Verifying Checksum\\n265baab7d98a: Download 
complete\\nb8f262c62ec6: Pull complete\\nb074f450fcba: Verifying Checksum\\nb074f450fcba: Download complete\\n6df08465f871: Verifying Checksum\\n6df08465f871: Download complete\\n0a43c0154f16: Pull complete\\n906d7b5da8fb: Pull complete\\n265baab7d98a: Pull complete\\n300780db9fa5: Pull complete\\n6df08465f871: Pull complete\\n\", \"graph\": {}, \"widget_settings\": {\"childWidgetDisplay\": \"popup\", \"send_telemetry\": false, \"log_level\": \"INFO\", \"sdk_version\": \"1.0.81\"}, \"loading\": false}" }, "metadata": {}, "output_type": "display_data" @@ -921,7 +912,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Final print 9, time 20.798 seconds: Counter({'Completed': 1})\r" + "Final print 10, time 21.403 seconds: Counter({'Completed': 1})\r" ] } ], @@ -954,8 +945,8 @@ "name": "stdout", "output_type": "stream", "text": [ - "run_duration in seconds 243.960763\n", - "run_duration= 4m 3.961s\n" + "run_duration in seconds 252.429646\n", + "run_duration= 4m 12.430s\n" ] } ], @@ -982,18 +973,18 @@ "name": "stdout", "output_type": "stream", "text": [ - "Showing details for run 498\n" + "Showing details for run 181\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "cd44e7b0a1c447dabe98bf114f420d76", + "model_id": "846e641584ed4d83b15afd06e51173cb", "version_major": 2, "version_minor": 0 }, "text/plain": [ - "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'NOTSET',…" + "_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'INFO', '…" ] }, "metadata": {}, @@ -1001,7 +992,7 @@ }, { "data": { - "application/aml.mini.widget.v1": "{\"status\": \"Completed\", \"workbench_run_details_uri\": \"https://ml.azure.com/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575683693_ddd16e31?wsid=/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourcegroups/ghiordanfwirsg01/workspaces/ghiordanfwiws\", \"run_id\": \"020_AzureMLEstimator_1575683693_ddd16e31\", \"run_properties\": {\"run_id\": \"020_AzureMLEstimator_1575683693_ddd16e31\", \"created_utc\": \"2019-12-07T01:54:55.33033Z\", \"properties\": {\"_azureml.ComputeTargetType\": \"amlcompute\", \"ContentSnapshotId\": \"a5071b2a-37a7-40da-8340-69cc894091cb\", \"azureml.git.repository_uri\": \"git@github.com:georgeAccnt-GH/DeepSeismic.git\", \"mlflow.source.git.repoURL\": \"git@github.com:georgeAccnt-GH/DeepSeismic.git\", \"azureml.git.branch\": \"staging\", \"mlflow.source.git.branch\": \"staging\", \"azureml.git.commit\": \"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\", \"mlflow.source.git.commit\": \"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\", \"azureml.git.dirty\": \"True\", \"ProcessInfoFile\": \"azureml-logs/process_info.json\", \"ProcessStatusFile\": \"azureml-logs/process_status.json\"}, \"tags\": {}, \"script_name\": null, \"arguments\": null, \"end_time_utc\": \"2019-12-07T01:56:48.811115Z\", \"status\": \"Completed\", \"log_files\": {\"azureml-logs/55_azureml-execution-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/azureml-logs/55_azureml-execution-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt?sv=2019-02-02&sr=b&sig=9mQARzuRlCW%2F%2Brv3FDzJvm%2Fsaudk6GFjNypMRkV3O8g%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", 
\"azureml-logs/65_job_prep-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/azureml-logs/65_job_prep-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt?sv=2019-02-02&sr=b&sig=TMxrg26ywABOyJtGYT3KVLrGP0TYIHQ9E3ePlr%2BQepg%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", \"azureml-logs/70_driver_log.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/azureml-logs/70_driver_log.txt?sv=2019-02-02&sr=b&sig=vWkErsH55%2BLhIG%2FBJbtZb8NSNHFyNAzxk5VjW4p6lcM%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", \"azureml-logs/75_job_post-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/azureml-logs/75_job_post-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt?sv=2019-02-02&sr=b&sig=cbDgvPNn4LNXDsUXZwmWCjRMj0O9PnFSqSCtuCPMTFo%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", \"azureml-logs/process_info.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/azureml-logs/process_info.json?sv=2019-02-02&sr=b&sig=wvqhR%2Bnzw0uLEsCGETAxkKrdwN5eI%2FgvTeB4juQ4aUI%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", \"azureml-logs/process_status.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/azureml-logs/process_status.json?sv=2019-02-02&sr=b&sig=kkirWrsrpjcrKndUUPxuJVeRWu0GthsVZ4cXpxbEGMg%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", \"logs/azureml/728_azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/logs/azureml/728_azureml.log?sv=2019-02-02&sr=b&sig=pK%2F6TBBvQEPexjuRPR1FyOq6CUPXfnNBobkTmpmaeiM%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\", \"logs/azureml/azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1575683693_ddd16e31/logs/azureml/azureml.log?sv=2019-02-02&sr=b&sig=o%2BPcdcJvKZyQWRA0HpaJbM%2BxhqFOkdDjgBqtxtHtoag%3D&st=2019-12-07T01%3A46%3A50Z&se=2019-12-07T09%3A56%3A50Z&sp=r\"}, \"log_groups\": [[\"azureml-logs/process_info.json\", \"azureml-logs/process_status.json\", \"logs/azureml/azureml.log\"], [\"azureml-logs/55_azureml-execution-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt\"], [\"azureml-logs/65_job_prep-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt\"], [\"azureml-logs/70_driver_log.txt\"], [\"azureml-logs/75_job_post-tvmps_01b47c06fd150418ce69a91b330cb6996c9e9e076f7368a183a2f9a708f17ccb_p.txt\"], [\"logs/azureml/728_azureml.log\"]], \"run_duration\": \"0:01:53\"}, \"child_runs\": [], \"children_metrics\": {}, \"run_metrics\": [{\"name\": \"training_message01: \", \"run_id\": \"020_AzureMLEstimator_1575683693_ddd16e31\", \"categories\": [0], \"series\": [{\"data\": [\"finished experiment\"]}]}], \"run_logs\": \"2019-12-07 01:55:16,975|azureml|DEBUG|Inputs:: kwargs: {'OutputCollection': True, 'snapshotProject': True, 'only_in_process_features': True, 'skip_track_logs_dir': 
True}, track_folders: None, deny_list: None, directories_to_watch: []\\n2019-12-07 01:55:16,976|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Execution target type: batchai\\n2019-12-07 01:55:16,976|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Failed to import pyspark with error: No module named 'pyspark'\\n2019-12-07 01:55:16,976|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Pinning working directory for filesystems: ['pyfs']\\n2019-12-07 01:55:17,242|azureml._base_sdk_common.user_agent|DEBUG|Fetching client info from /root/.azureml/clientinfo.json\\n2019-12-07 01:55:17,243|azureml._base_sdk_common.user_agent|DEBUG|Error loading client info: [Errno 2] No such file or directory: '/root/.azureml/clientinfo.json'\\n2019-12-07 01:55:17,566|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2019-12-07 01:55:17,566|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2019-12-07 01:55:17,566|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2019-12-07 01:55:17,566|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2019-12-07 01:55:17,566|azureml.core.run|DEBUG|Adding new factory for run source hyperdrive\\n2019-12-07 01:55:18,070|azureml.core.run|DEBUG|Adding new factory for run source azureml.PipelineRun\\n2019-12-07 01:55:18,075|azureml.core.run|DEBUG|Adding new factory for run source azureml.ReusedStepRun\\n2019-12-07 01:55:18,078|azureml.core.run|DEBUG|Adding new factory for run source azureml.StepRun\\n2019-12-07 01:55:18,082|azureml.core.run|DEBUG|Adding new factory for run source azureml.scriptrun\\n2019-12-07 01:55:18,083|azureml.core.authentication.TokenRefresherDaemon|DEBUG|Starting daemon and triggering first instance\\n2019-12-07 01:55:18,088|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:18,089|azureml._restclient.clientbase|INFO|Created a worker pool for first use\\n2019-12-07 01:55:18,089|azureml.core.authentication|DEBUG|Time to expire 1814376.910384 seconds\\n2019-12-07 01:55:18,089|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,089|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,089|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,089|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,090|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,090|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,090|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in 
environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,090|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,090|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,118|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:18,122|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:18,128|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:18,132|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:18,136|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:18,141|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:18,141|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2019-12-07 01:55:18,142|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575683693_ddd16e31'\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG|Request method: 'GET'\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG|Request headers:\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '066d53de-da2b-470f-936a-ed66dab2d28c'\\n2019-12-07 01:55:18,142|msrest.http_logger|DEBUG| 'request-id': '066d53de-da2b-470f-936a-ed66dab2d28c'\\n2019-12-07 01:55:18,143|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.0) msrest/0.6.10 azureml._restclient/core.1.0.76'\\n2019-12-07 01:55:18,143|msrest.http_logger|DEBUG|Request body:\\n2019-12-07 01:55:18,143|msrest.http_logger|DEBUG|None\\n2019-12-07 01:55:18,143|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2019-12-07 01:55:18,143|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2019-12-07 01:55:18,143|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2019-12-07 01:55:18,143|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2019-12-07 01:55:18,196|msrest.http_logger|DEBUG|Response status: 200\\n2019-12-07 01:55:18,196|msrest.http_logger|DEBUG|Response headers:\\n2019-12-07 01:55:18,196|msrest.http_logger|DEBUG| 'Date': 'Sat, 07 Dec 2019 01:55:18 GMT'\\n2019-12-07 01:55:18,196|msrest.http_logger|DEBUG| 'Content-Type': 
'application/json; charset=utf-8'\\n2019-12-07 01:55:18,196|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '066d53de-da2b-470f-936a-ed66dab2d28c'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG|Response content:\\n2019-12-07 01:55:18,197|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 2107,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1575683693_ddd16e31\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2019-12-07T01:54:55.3303306+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 7,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1575683693_ddd16e31\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2019-12-07T01:55:07.6378716+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1575683693_ddd16e31\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"azureml.git.branch\\\": \\\"staging\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"staging\\\",\\n \\\"azureml.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi08\\\",\\n \\\"tags\\\": {},\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": 
\\\"LocalUpload/020_AzureMLEstimator_1575683693_ddd16e31/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575683693_ddd16e31/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575683693_ddd16e31/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false\\n}\\n2019-12-07 01:55:18,202|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2019-12-07 01:55:18,202|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'azureml.git.branch': 'staging', 'mlflow.source.git.branch': 'staging', 'azureml.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'mlflow.source.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2019-12-07 01:55:18,202|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2019-12-07 01:55:18,202|azureml|WARNING|Could not import azureml.mlflow or azureml.contrib.mlflow mlflow APIs will not run against AzureML services. 
Add azureml-mlflow as a conda dependency for the run if this behavior is desired\\n2019-12-07 01:55:18,203|azureml.WorkerPool|DEBUG|[START]\\n2019-12-07 01:55:18,203|azureml.SendRunKillSignal|DEBUG|[START]\\n2019-12-07 01:55:18,203|azureml.RunStatusContext|DEBUG|[START]\\n2019-12-07 01:55:18,203|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunContextManager.RunStatusContext|DEBUG|[START]\\n2019-12-07 01:55:18,203|azureml.WorkingDirectoryCM|DEBUG|[START]\\n2019-12-07 01:55:18,203|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[START]\\n2019-12-07 01:55:18,203|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575683693_ddd16e31/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575683693_ddd16e31\\n2019-12-07 01:55:18,203|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2019-12-07 01:55:18,203|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Storing working dir for pyfs as /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575683693_ddd16e31/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575683693_ddd16e31\\n2019-12-07 01:55:20,151|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,151|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,151|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,151|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,152|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,152|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,152|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,152|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,152|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2019-12-07 01:55:20,157|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:20,158|azureml._run_impl.run_history_facade|DEBUG|Created a static thread pool for RunHistoryFacade class\\n2019-12-07 01:55:20,162|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, 
max_backoff=90\\n2019-12-07 01:55:20,166|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:20,170|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:20,175|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:55:20,175|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2019-12-07 01:55:20,175|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2019-12-07 01:55:20,175|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575683693_ddd16e31'\\n2019-12-07 01:55:20,175|msrest.http_logger|DEBUG|Request method: 'GET'\\n2019-12-07 01:55:20,175|msrest.http_logger|DEBUG|Request headers:\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG| 'x-ms-client-request-id': 'b087e081-4f44-4f48-8adf-8c816a59faae'\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG| 'request-id': 'b087e081-4f44-4f48-8adf-8c816a59faae'\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.0) msrest/0.6.10 azureml._restclient/core.1.0.76'\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG|Request body:\\n2019-12-07 01:55:20,176|msrest.http_logger|DEBUG|None\\n2019-12-07 01:55:20,176|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2019-12-07 01:55:20,176|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2019-12-07 01:55:20,176|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2019-12-07 01:55:20,176|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2019-12-07 01:55:20,259|msrest.http_logger|DEBUG|Response status: 200\\n2019-12-07 01:55:20,259|msrest.http_logger|DEBUG|Response headers:\\n2019-12-07 01:55:20,259|msrest.http_logger|DEBUG| 'Date': 'Sat, 07 Dec 2019 01:55:20 GMT'\\n2019-12-07 01:55:20,259|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'x-ms-client-request-id': 'b087e081-4f44-4f48-8adf-8c816a59faae'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2019-12-07 01:55:20,260|msrest.http_logger|DEBUG|Response content:\\n2019-12-07 
01:55:20,260|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 2107,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1575683693_ddd16e31\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2019-12-07T01:54:55.3303306+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 7,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1575683693_ddd16e31\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2019-12-07T01:55:07.6378716+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1575683693_ddd16e31\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/DeepSeismic.git\\\",\\n \\\"azureml.git.branch\\\": \\\"staging\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"staging\\\",\\n \\\"azureml.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"1d3cd3340f4063508b6f707d5fc2a35f5429a07f\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi08\\\",\\n \\\"tags\\\": {},\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1575683693_ddd16e31/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575683693_ddd16e31/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1575683693_ddd16e31/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false\\n}\\n2019-12-07 
01:55:20,262|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2019-12-07 01:55:20,262|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/DeepSeismic.git', 'azureml.git.branch': 'staging', 'mlflow.source.git.branch': 'staging', 'azureml.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'mlflow.source.git.commit': '1d3cd3340f4063508b6f707d5fc2a35f5429a07f', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2019-12-07 01:55:20,262|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2019-12-07 01:55:48,084|azureml.core.authentication|DEBUG|Time to expire 1814346.915499 seconds\\n2019-12-07 01:56:18,084|azureml.core.authentication|DEBUG|Time to expire 1814316.915133 seconds\\n2019-12-07 01:56:25,858|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2019-12-07 01:56:25,858|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2019-12-07 01:56:25,859|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2019-12-07 01:56:25,924|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2019-12-07 01:56:25,924|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /devito\\n2019-12-07 01:56:25,924|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|pyfs has path /devito\\n2019-12-07 01:56:25,925|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Reverting working dir from /devito to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575683693_ddd16e31/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575683693_ddd16e31\\n2019-12-07 01:56:25,925|azureml.history._tracking.PythonWorkingDirectory|INFO|Setting working dir to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1575683693_ddd16e31/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1575683693_ddd16e31\\n2019-12-07 01:56:25,925|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[STOP]\\n2019-12-07 01:56:25,925|azureml.WorkingDirectoryCM|DEBUG|[STOP]\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31|INFO|complete is not setting status for submitted runs.\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting 
daemon and triggering first instance\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2019-12-07 01:56:25,925|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300 is different from task queue timeout 120, using flush timeout\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300 seconds on tasks: [].\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2019-12-07 01:56:25,926|azureml.RunStatusContext|DEBUG|[STOP]\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [].\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2019-12-07 01:56:25,926|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2019-12-07 01:56:25,927|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Start]\\n2019-12-07 01:56:25,927|azureml.BatchTaskQueueAdd_1_Batches.WorkerPool|DEBUG|submitting future: _handle_batch\\n2019-12-07 01:56:25,927|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Batch size 1.\\n2019-12-07 01:56:25,927|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch|DEBUG|Using basic handler - no exception handling\\n2019-12-07 01:56:25,927|azureml._restclient.clientbase.WorkerPool|DEBUG|submitting future: _log_batch\\n2019-12-07 01:56:25,927|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|Adding task 0__handle_batch to queue of approximate size: 0\\n2019-12-07 
01:56:25,928|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[START]\\n2019-12-07 01:56:25,928|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch|DEBUG|Using basic handler - no exception handling\\n2019-12-07 01:56:25,928|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Stop] - waiting default timeout\\n2019-12-07 01:56:25,929|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2019-12-07 01:56:25,929|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Adding task 0__log_batch to queue of approximate size: 0\\n2019-12-07 01:56:25,929|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[START]\\n2019-12-07 01:56:25,929|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2019-12-07 01:56:25,930|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Overriding default flush timeout from None to 120\\n2019-12-07 01:56:25,930|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1575683693_ddd16e31/batch/metrics'\\n2019-12-07 01:56:25,930|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Waiting 120 seconds on tasks: [AsyncTask(0__handle_batch)].\\n2019-12-07 01:56:25,930|msrest.http_logger|DEBUG|Request method: 'POST'\\n2019-12-07 01:56:25,930|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[START]\\n2019-12-07 01:56:25,930|msrest.http_logger|DEBUG|Request headers:\\n2019-12-07 01:56:25,930|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|Awaiter is BatchTaskQueueAdd_1_Batches\\n2019-12-07 01:56:25,931|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2019-12-07 01:56:25,931|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[STOP]\\n2019-12-07 01:56:25,931|msrest.http_logger|DEBUG| 'Content-Type': 'application/json-patch+json; charset=utf-8'\\n2019-12-07 01:56:25,931|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|\\n2019-12-07 01:56:25,931|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '18a01463-68a6-4c03-bc10-c9e912702ee6'\\n2019-12-07 01:56:25,931|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[STOP]\\n2019-12-07 01:56:25,931|msrest.http_logger|DEBUG| 'request-id': '18a01463-68a6-4c03-bc10-c9e912702ee6'\\n2019-12-07 01:56:25,931|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2019-12-07 01:56:25,931|msrest.http_logger|DEBUG| 'Content-Length': '410'\\n2019-12-07 01:56:25,932|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2019-12-07 01:56:25,932|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.0) msrest/0.6.10 azureml._restclient/core.1.0.76 sdk_run'\\n2019-12-07 
01:56:25,932|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [AsyncTask(0__log_batch)].\\n2019-12-07 01:56:25,932|msrest.http_logger|DEBUG|Request body:\\n2019-12-07 01:56:25,932|msrest.http_logger|DEBUG|{\\\"values\\\": [{\\\"metricId\\\": \\\"1a8ad3d8-accf-42da-a07d-fd00ef5ee1e6\\\", \\\"metricType\\\": \\\"azureml.v1.scalar\\\", \\\"createdUtc\\\": \\\"2019-12-07T01:56:25.858188Z\\\", \\\"name\\\": \\\"training_message01: \\\", \\\"description\\\": \\\"\\\", \\\"numCells\\\": 1, \\\"cells\\\": [{\\\"training_message01: \\\": \\\"finished experiment\\\"}], \\\"schema\\\": {\\\"numProperties\\\": 1, \\\"properties\\\": [{\\\"propertyId\\\": \\\"training_message01: \\\", \\\"name\\\": \\\"training_message01: \\\", \\\"type\\\": \\\"string\\\"}]}}]}\\n2019-12-07 01:56:25,932|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2019-12-07 01:56:25,932|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2019-12-07 01:56:25,932|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2019-12-07 01:56:25,932|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2019-12-07 01:56:26,050|msrest.http_logger|DEBUG|Response status: 200\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG|Response headers:\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'Date': 'Sat, 07 Dec 2019 01:56:26 GMT'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'Content-Length': '0'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '18a01463-68a6-4c03-bc10-c9e912702ee6'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG|Response content:\\n2019-12-07 01:56:26,051|msrest.http_logger|DEBUG|\\n2019-12-07 01:56:26,052|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[STOP]\\n2019-12-07 01:56:26,182|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[START]\\n2019-12-07 01:56:26,182|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|Awaiter is PostMetricsBatch\\n2019-12-07 01:56:26,183|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[STOP]\\n2019-12-07 01:56:26,183|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Waiting on task: 0__log_batch.\\n1 tasks left. 
Current duration of flush 0.0002186298370361328 seconds.\\n\\n2019-12-07 01:56:26,183|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2019-12-07 01:56:26,183|azureml._SubmittedRun#020_AzureMLEstimator_1575683693_ddd16e31.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2019-12-07 01:56:26,183|azureml.SendRunKillSignal|DEBUG|[STOP]\\n2019-12-07 01:56:26,183|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[START]\\n2019-12-07 01:56:26,183|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[STOP]\\n2019-12-07 01:56:26,183|azureml.WorkerPool|DEBUG|[STOP]\\n\\nRun is completed.\", \"graph\": {}, \"widget_settings\": {\"childWidgetDisplay\": \"popup\", \"send_telemetry\": false, \"log_level\": \"NOTSET\", \"sdk_version\": \"1.0.76\"}, \"loading\": false}" + "application/aml.mini.widget.v1": "{\"status\": \"Completed\", \"workbench_run_details_uri\": \"https://ml.azure.com/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578107635_6a48bf8c?wsid=/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourcegroups/ghiordanfwirsg01/workspaces/ghiordanfwiws\", \"run_id\": \"020_AzureMLEstimator_1578107635_6a48bf8c\", \"run_properties\": {\"run_id\": \"020_AzureMLEstimator_1578107635_6a48bf8c\", \"created_utc\": \"2020-01-04T03:13:57.423637Z\", \"properties\": {\"_azureml.ComputeTargetType\": \"amlcompute\", \"ContentSnapshotId\": \"a5071b2a-37a7-40da-8340-69cc894091cb\", \"azureml.git.repository_uri\": \"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\", \"mlflow.source.git.repoURL\": \"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\", \"azureml.git.branch\": \"ghiordan/azureml_devito04\", \"mlflow.source.git.branch\": \"ghiordan/azureml_devito04\", \"azureml.git.commit\": \"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\", \"mlflow.source.git.commit\": \"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\", \"azureml.git.dirty\": \"True\", \"ProcessInfoFile\": \"azureml-logs/process_info.json\", \"ProcessStatusFile\": \"azureml-logs/process_status.json\"}, \"tags\": {}, \"script_name\": null, \"arguments\": null, \"end_time_utc\": \"2020-01-04T03:15:49.565041Z\", \"status\": \"Completed\", \"log_files\": {\"azureml-logs/55_azureml-execution-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/azureml-logs/55_azureml-execution-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt?sv=2019-02-02&sr=b&sig=OZ%2BtuSv7JS9GjYmN7pkI3Uys4EFn3L%2B7x6Z5ZhPCBvk%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"azureml-logs/65_job_prep-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/azureml-logs/65_job_prep-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt?sv=2019-02-02&sr=b&sig=Qpd%2Fo0g2YYl2z7AyTsnO0qf3hVlrReqtSwa1kiC3Xpo%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"azureml-logs/70_driver_log.txt\": 
\"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/azureml-logs/70_driver_log.txt?sv=2019-02-02&sr=b&sig=m0m%2FtKBFCFwd3snV6dwxoJxXFt5gum70hFx%2FYXhTBxg%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"azureml-logs/75_job_post-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/azureml-logs/75_job_post-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt?sv=2019-02-02&sr=b&sig=n7V311vapVnNI16qMoRTD3ofR7m2UC4qms6UfGa3BFc%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"azureml-logs/process_info.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/azureml-logs/process_info.json?sv=2019-02-02&sr=b&sig=ywZQvmv%2FRx%2Fo8i01PwG0Vk6brrp2SN%2FYRX7zM8BNMKY%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"azureml-logs/process_status.json\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/azureml-logs/process_status.json?sv=2019-02-02&sr=b&sig=WWJCJ%2BrdtrknVBPh6iNfCNGkwoKW17xEF7Tm%2B4zBmrU%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"logs/azureml/687_azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/logs/azureml/687_azureml.log?sv=2019-02-02&sr=b&sig=3hS3J3REFV71A%2Bjeere6UVwXWLhcIPYcsCRahA2hDaw%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\", \"logs/azureml/azureml.log\": \"https://ghiordanstoragee145cef0b.blob.core.windows.net/azureml/ExperimentRun/dcid.020_AzureMLEstimator_1578107635_6a48bf8c/logs/azureml/azureml.log?sv=2019-02-02&sr=b&sig=IIO36ZTtOuc9mnkmqzFCVO85QuVPDbC7HoHRb333MjU%3D&st=2020-01-04T03%3A05%3A57Z&se=2020-01-04T11%3A15%3A57Z&sp=r\"}, \"log_groups\": [[\"azureml-logs/process_info.json\", \"azureml-logs/process_status.json\", \"logs/azureml/azureml.log\"], [\"azureml-logs/55_azureml-execution-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt\"], [\"azureml-logs/65_job_prep-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt\"], [\"azureml-logs/70_driver_log.txt\"], [\"azureml-logs/75_job_post-tvmps_bb49cbe82626e1162dc32b5dd0516d96fa46f67a2765150b71dc08d139a6770d_d.txt\"], [\"logs/azureml/687_azureml.log\"]], \"run_duration\": \"0:01:52\"}, \"child_runs\": [], \"children_metrics\": {}, \"run_metrics\": [{\"name\": \"training_message01: \", \"run_id\": \"020_AzureMLEstimator_1578107635_6a48bf8c\", \"categories\": [0], \"series\": [{\"data\": [\"finished experiment\"]}]}], \"run_logs\": \"2020-01-04 03:14:20,312|azureml|DEBUG|Inputs:: kwargs: {'OutputCollection': True, 'snapshotProject': True, 'only_in_process_features': True, 'skip_track_logs_dir': True}, track_folders: None, deny_list: None, directories_to_watch: []\\n2020-01-04 03:14:20,313|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Execution target type: batchai\\n2020-01-04 03:14:20,313|azureml.history._tracking.PythonWorkingDirectory|DEBUG|Failed to import pyspark with error: No module named 'pyspark'\\n2020-01-04 03:14:20,313|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Pinning working directory for filesystems: ['pyfs']\\n2020-01-04 
03:14:20,576|azureml._base_sdk_common.user_agent|DEBUG|Fetching client info from /root/.azureml/clientinfo.json\\n2020-01-04 03:14:20,577|azureml._base_sdk_common.user_agent|DEBUG|Error loading client info: [Errno 2] No such file or directory: '/root/.azureml/clientinfo.json'\\n2020-01-04 03:14:20,895|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2020-01-04 03:14:20,895|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2020-01-04 03:14:20,896|azureml.core._experiment_method|DEBUG|Trying to register submit_function search, on method \\n2020-01-04 03:14:20,896|azureml.core._experiment_method|DEBUG|Registered submit_function search, on method \\n2020-01-04 03:14:20,896|azureml.core.run|DEBUG|Adding new factory for run source hyperdrive\\n2020-01-04 03:14:21,383|azureml.core.run|DEBUG|Adding new factory for run source azureml.PipelineRun\\n2020-01-04 03:14:21,387|azureml.core.run|DEBUG|Adding new factory for run source azureml.ReusedStepRun\\n2020-01-04 03:14:21,391|azureml.core.run|DEBUG|Adding new factory for run source azureml.StepRun\\n2020-01-04 03:14:21,395|azureml.core.run|DEBUG|Adding new factory for run source azureml.scriptrun\\n2020-01-04 03:14:21,396|azureml.core.authentication.TokenRefresherDaemon|DEBUG|Starting daemon and triggering first instance\\n2020-01-04 03:14:21,401|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:21,401|azureml._restclient.clientbase|INFO|Created a worker pool for first use\\n2020-01-04 03:14:21,402|azureml.core.authentication|DEBUG|Time to expire 1814375.597972 seconds\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,402|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history 
service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,461|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:21,466|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:21,472|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:21,476|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:21,480|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:21,484|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:21,485|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2020-01-04 03:14:21,485|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2020-01-04 03:14:21,485|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578107635_6a48bf8c'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG|Request method: 'GET'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG|Request headers:\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '27adf543-ba6f-46d0-9089-c0da73fbd53d'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG| 'request-id': '27adf543-ba6f-46d0-9089-c0da73fbd53d'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.1) msrest/0.6.10 azureml._restclient/core.1.0.81'\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG|Request body:\\n2020-01-04 03:14:21,486|msrest.http_logger|DEBUG|None\\n2020-01-04 03:14:21,486|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2020-01-04 03:14:21,486|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2020-01-04 03:14:21,486|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2020-01-04 03:14:21,487|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG|Response status: 200\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG|Response headers:\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'Date': 'Sat, 04 Jan 2020 03:14:21 GMT'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'x-ms-client-request-id': 
'27adf543-ba6f-46d0-9089-c0da73fbd53d'\\n2020-01-04 03:14:21,576|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2020-01-04 03:14:21,577|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2020-01-04 03:14:21,577|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2020-01-04 03:14:21,577|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2020-01-04 03:14:21,577|msrest.http_logger|DEBUG|Response content:\\n2020-01-04 03:14:21,577|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 6915,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1578107635_6a48bf8c\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2020-01-04T03:13:57.4236375+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 7,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1578107635_6a48bf8c\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2020-01-04T03:14:09.4469741+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1578107635_6a48bf8c\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"azureml.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"azureml.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi07\\\",\\n \\\"tags\\\": {},\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1578107635_6a48bf8c/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": 
\\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578107635_6a48bf8c/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578107635_6a48bf8c/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false,\\n \\\"queueingInfo\\\": null\\n}\\n2020-01-04 03:14:21,582|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2020-01-04 03:14:21,582|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'azureml.git.branch': 'ghiordan/azureml_devito04', 'mlflow.source.git.branch': 'ghiordan/azureml_devito04', 'azureml.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'mlflow.source.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2020-01-04 03:14:21,582|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2020-01-04 03:14:21,583|azureml|WARNING|Could not import azureml.mlflow or azureml.contrib.mlflow mlflow APIs will not run against AzureML services. 
Add azureml-mlflow as a conda dependency for the run if this behavior is desired\\n2020-01-04 03:14:21,583|azureml.WorkerPool|DEBUG|[START]\\n2020-01-04 03:14:21,583|azureml.SendRunKillSignal|DEBUG|[START]\\n2020-01-04 03:14:21,583|azureml.RunStatusContext|DEBUG|[START]\\n2020-01-04 03:14:21,583|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunContextManager.RunStatusContext|DEBUG|[START]\\n2020-01-04 03:14:21,583|azureml.WorkingDirectoryCM|DEBUG|[START]\\n2020-01-04 03:14:21,583|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[START]\\n2020-01-04 03:14:21,583|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578107635_6a48bf8c/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578107635_6a48bf8c\\n2020-01-04 03:14:21,583|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2020-01-04 03:14:21,583|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Storing working dir for pyfs as /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578107635_6a48bf8c/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578107635_6a48bf8c\\n2020-01-04 03:14:23,532|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Constructing mms service url in from history url environment variable None, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,533|azureml._base_sdk_common.service_discovery|DEBUG|Found history service url in environment variable AZUREML_SERVICE_ENDPOINT, history service url: https://eastus2.experiments.azureml.net.\\n2020-01-04 03:14:23,538|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:23,539|azureml._run_impl.run_history_facade|DEBUG|Created a static thread pool for RunHistoryFacade class\\n2020-01-04 03:14:23,543|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, 
max_backoff=90\\n2020-01-04 03:14:23,548|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:23,552|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:23,556|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:14:23,557|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.RunClient.get-async:False|DEBUG|[START]\\n2020-01-04 03:14:23,557|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2020-01-04 03:14:23,557|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578107635_6a48bf8c'\\n2020-01-04 03:14:23,557|msrest.http_logger|DEBUG|Request method: 'GET'\\n2020-01-04 03:14:23,557|msrest.http_logger|DEBUG|Request headers:\\n2020-01-04 03:14:23,557|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2020-01-04 03:14:23,557|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 03:14:23,558|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '25b11562-e86e-4813-adad-c438311009ab'\\n2020-01-04 03:14:23,558|msrest.http_logger|DEBUG| 'request-id': '25b11562-e86e-4813-adad-c438311009ab'\\n2020-01-04 03:14:23,558|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 (Linux-4.15.0-1057-azure-x86_64-with-debian-10.1) msrest/0.6.10 azureml._restclient/core.1.0.81'\\n2020-01-04 03:14:23,558|msrest.http_logger|DEBUG|Request body:\\n2020-01-04 03:14:23,558|msrest.http_logger|DEBUG|None\\n2020-01-04 03:14:23,558|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2020-01-04 03:14:23,558|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2020-01-04 03:14:23,558|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2020-01-04 03:14:23,558|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG|Response status: 200\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG|Response headers:\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'Date': 'Sat, 04 Jan 2020 03:14:23 GMT'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'Content-Type': 'application/json; charset=utf-8'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'Transfer-Encoding': 'chunked'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'Vary': 'Accept-Encoding'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '25b11562-e86e-4813-adad-c438311009ab'\\n2020-01-04 03:14:23,700|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2020-01-04 03:14:23,701|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2020-01-04 03:14:23,701|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2020-01-04 03:14:23,701|msrest.http_logger|DEBUG| 'Content-Encoding': 'gzip'\\n2020-01-04 03:14:23,701|msrest.http_logger|DEBUG|Response content:\\n2020-01-04 
03:14:23,701|msrest.http_logger|DEBUG|{\\n \\\"runNumber\\\": 6915,\\n \\\"rootRunId\\\": \\\"020_AzureMLEstimator_1578107635_6a48bf8c\\\",\\n \\\"experimentId\\\": \\\"8d96276b-f420-4a67-86be-f933dd3d38cd\\\",\\n \\\"createdUtc\\\": \\\"2020-01-04T03:13:57.4236375+00:00\\\",\\n \\\"createdBy\\\": {\\n \\\"userObjectId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"userPuId\\\": \\\"1003000090A95868\\\",\\n \\\"userIdp\\\": null,\\n \\\"userAltSecId\\\": null,\\n \\\"userIss\\\": \\\"https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/\\\",\\n \\\"userTenantId\\\": \\\"72f988bf-86f1-41af-91ab-2d7cd011db47\\\",\\n \\\"userName\\\": \\\"George Iordanescu\\\"\\n },\\n \\\"userId\\\": \\\"b77869a0-66f2-4288-89ef-13c10accc4dc\\\",\\n \\\"token\\\": null,\\n \\\"tokenExpiryTimeUtc\\\": null,\\n \\\"error\\\": null,\\n \\\"warnings\\\": null,\\n \\\"revision\\\": 7,\\n \\\"runId\\\": \\\"020_AzureMLEstimator_1578107635_6a48bf8c\\\",\\n \\\"parentRunId\\\": null,\\n \\\"status\\\": \\\"Running\\\",\\n \\\"startTimeUtc\\\": \\\"2020-01-04T03:14:09.4469741+00:00\\\",\\n \\\"endTimeUtc\\\": null,\\n \\\"heartbeatEnabled\\\": false,\\n \\\"options\\\": {\\n \\\"generateDataContainerIdIfNotSpecified\\\": true\\n },\\n \\\"name\\\": null,\\n \\\"dataContainerId\\\": \\\"dcid.020_AzureMLEstimator_1578107635_6a48bf8c\\\",\\n \\\"description\\\": null,\\n \\\"hidden\\\": false,\\n \\\"runType\\\": \\\"azureml.scriptrun\\\",\\n \\\"properties\\\": {\\n \\\"_azureml.ComputeTargetType\\\": \\\"amlcompute\\\",\\n \\\"ContentSnapshotId\\\": \\\"a5071b2a-37a7-40da-8340-69cc894091cb\\\",\\n \\\"azureml.git.repository_uri\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"mlflow.source.git.repoURL\\\": \\\"git@github.com:georgeAccnt-GH/seismic-deeplearning.git\\\",\\n \\\"azureml.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"mlflow.source.git.branch\\\": \\\"ghiordan/azureml_devito04\\\",\\n \\\"azureml.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"mlflow.source.git.commit\\\": \\\"b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb\\\",\\n \\\"azureml.git.dirty\\\": \\\"True\\\",\\n \\\"ProcessInfoFile\\\": \\\"azureml-logs/process_info.json\\\",\\n \\\"ProcessStatusFile\\\": \\\"azureml-logs/process_status.json\\\"\\n },\\n \\\"scriptName\\\": \\\"azureml_01_modelling.py\\\",\\n \\\"target\\\": \\\"gpuclstfwi07\\\",\\n \\\"tags\\\": {},\\n \\\"inputDatasets\\\": [],\\n \\\"runDefinition\\\": null,\\n \\\"createdFrom\\\": {\\n \\\"type\\\": \\\"Notebook\\\",\\n \\\"locationType\\\": \\\"ArtifactId\\\",\\n \\\"location\\\": \\\"LocalUpload/020_AzureMLEstimator_1578107635_6a48bf8c/030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito.ipynb\\\"\\n },\\n \\\"cancelUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578107635_6a48bf8c/cancel\\\",\\n \\\"completeUri\\\": null,\\n \\\"diagnosticsUri\\\": \\\"https://eastus2.experiments.azureml.net/execution/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runId/020_AzureMLEstimator_1578107635_6a48bf8c/diagnostics\\\",\\n \\\"computeRequest\\\": {\\n \\\"nodeCount\\\": 1\\n },\\n \\\"retainForLifetimeOfWorkspace\\\": false,\\n 
\\\"queueingInfo\\\": null\\n}\\n2020-01-04 03:14:23,703|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.RunClient.get-async:False|DEBUG|[STOP]\\n2020-01-04 03:14:23,703|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c|DEBUG|Constructing run from dto. type: azureml.scriptrun, source: None, props: {'_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': 'a5071b2a-37a7-40da-8340-69cc894091cb', 'azureml.git.repository_uri': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'mlflow.source.git.repoURL': 'git@github.com:georgeAccnt-GH/seismic-deeplearning.git', 'azureml.git.branch': 'ghiordan/azureml_devito04', 'mlflow.source.git.branch': 'ghiordan/azureml_devito04', 'azureml.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'mlflow.source.git.commit': 'b93dcf2325fbc8b1dff1ad74ad14ee41f4e184bb', 'azureml.git.dirty': 'True', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}\\n2020-01-04 03:14:23,704|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunContextManager|DEBUG|Valid logs dir, setting up content loader\\n2020-01-04 03:14:51,396|azureml.core.authentication|DEBUG|Time to expire 1814345.603107 seconds\\n2020-01-04 03:15:21,397|azureml.core.authentication|DEBUG|Time to expire 1814315.602798 seconds\\n2020-01-04 03:15:29,099|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2020-01-04 03:15:29,099|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2020-01-04 03:15:29,100|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2020-01-04 03:15:29,189|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Calling pyfs\\n2020-01-04 03:15:29,189|azureml.history._tracking.PythonWorkingDirectory|INFO|Current working dir: /devito\\n2020-01-04 03:15:29,189|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|pyfs has path /devito\\n2020-01-04 03:15:29,189|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|Reverting working dir from /devito to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578107635_6a48bf8c/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578107635_6a48bf8c\\n2020-01-04 03:15:29,189|azureml.history._tracking.PythonWorkingDirectory|INFO|Setting working dir to /mnt/batch/tasks/shared/LS_root/jobs/ghiordanfwiws/azureml/020_azuremlestimator_1578107635_6a48bf8c/mounts/workspaceblobstore/azureml/020_AzureMLEstimator_1578107635_6a48bf8c\\n2020-01-04 03:15:29,189|azureml.history._tracking.PythonWorkingDirectory.workingdir|DEBUG|[STOP]\\n2020-01-04 03:15:29,189|azureml.WorkingDirectoryCM|DEBUG|[STOP]\\n2020-01-04 03:15:29,189|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c|INFO|complete is not setting status for submitted runs.\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient|DEBUG|Overrides: Max batch size: 50, batch cushion: 5, Interval: 1.\\n2020-01-04 
03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.PostMetricsBatchDaemon|DEBUG|Starting daemon and triggering first instance\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient|DEBUG|Used for use_batch=True.\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300 is different from task queue timeout 120, using flush timeout\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300 seconds on tasks: [].\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2020-01-04 03:15:29,190|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2020-01-04 03:15:29,190|azureml.RunStatusContext|DEBUG|[STOP]\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [].\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[START]\\n2020-01-04 03:15:29,191|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Start]\\n2020-01-04 03:15:29,191|azureml.BatchTaskQueueAdd_1_Batches.WorkerPool|DEBUG|submitting future: _handle_batch\\n2020-01-04 03:15:29,191|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Batch size 1.\\n2020-01-04 03:15:29,192|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch|DEBUG|Using basic handler - no exception handling\\n2020-01-04 03:15:29,192|azureml._restclient.clientbase.WorkerPool|DEBUG|submitting future: 
_log_batch\\n2020-01-04 03:15:29,192|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|Adding task 0__handle_batch to queue of approximate size: 0\\n2020-01-04 03:15:29,192|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch|DEBUG|Using basic handler - no exception handling\\n2020-01-04 03:15:29,192|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[START]\\n2020-01-04 03:15:29,192|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|[Stop] - waiting default timeout\\n2020-01-04 03:15:29,192|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Adding task 0__log_batch to queue of approximate size: 0\\n2020-01-04 03:15:29,193|msrest.service_client|DEBUG|Accept header absent and forced to application/json\\n2020-01-04 03:15:29,194|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[START]\\n2020-01-04 03:15:29,194|msrest.universal_http.requests|DEBUG|Configuring retry: max_retries=3, backoff_factor=0.8, max_backoff=90\\n2020-01-04 03:15:29,194|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Overriding default flush timeout from None to 120\\n2020-01-04 03:15:29,194|msrest.http_logger|DEBUG|Request URL: 'https://eastus2.experiments.azureml.net/history/v1.0/subscriptions/789908e0-5fc2-4c4d-b5f5-9764b0d602b3/resourceGroups/ghiordanfwirsg01/providers/Microsoft.MachineLearningServices/workspaces/ghiordanfwiws/experiments/020_AzureMLEstimator/runs/020_AzureMLEstimator_1578107635_6a48bf8c/batch/metrics'\\n2020-01-04 03:15:29,195|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|Waiting 120 seconds on tasks: [AsyncTask(0__handle_batch)].\\n2020-01-04 03:15:29,195|msrest.http_logger|DEBUG|Request method: 'POST'\\n2020-01-04 03:15:29,195|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[START]\\n2020-01-04 03:15:29,195|msrest.http_logger|DEBUG|Request headers:\\n2020-01-04 03:15:29,195|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|Awaiter is BatchTaskQueueAdd_1_Batches\\n2020-01-04 03:15:29,195|msrest.http_logger|DEBUG| 'Accept': 'application/json'\\n2020-01-04 03:15:29,195|azureml.BatchTaskQueueAdd_1_Batches.0__handle_batch.WaitingTask|DEBUG|[STOP]\\n2020-01-04 03:15:29,195|msrest.http_logger|DEBUG| 'Content-Type': 'application/json-patch+json; charset=utf-8'\\n2020-01-04 03:15:29,195|azureml.BatchTaskQueueAdd_1_Batches|DEBUG|\\n2020-01-04 03:15:29,196|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '3bf60291-a896-44db-a757-77b3dc584c55'\\n2020-01-04 03:15:29,196|azureml.BatchTaskQueueAdd_1_Batches.WaitFlushSource:BatchTaskQueueAdd_1_Batches|DEBUG|[STOP]\\n2020-01-04 03:15:29,196|msrest.http_logger|DEBUG| 'request-id': '3bf60291-a896-44db-a757-77b3dc584c55'\\n2020-01-04 03:15:29,196|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[START]\\n2020-01-04 03:15:29,196|msrest.http_logger|DEBUG| 'Content-Length': '410'\\n2020-01-04 03:15:29,196|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|flush timeout 300.0 is different from task queue timeout 120, using flush timeout\\n2020-01-04 03:15:29,196|msrest.http_logger|DEBUG| 'User-Agent': 'python/3.6.9 
(Linux-4.15.0-1057-azure-x86_64-with-debian-10.1) msrest/0.6.10 azureml._restclient/core.1.0.81 sdk_run'\\n2020-01-04 03:15:29,196|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|Waiting 300.0 seconds on tasks: [AsyncTask(0__log_batch)].\\n2020-01-04 03:15:29,196|msrest.http_logger|DEBUG|Request body:\\n2020-01-04 03:15:29,197|msrest.http_logger|DEBUG|{\\\"values\\\": [{\\\"metricId\\\": \\\"688c12af-7b49-4dc4-b4de-5fabd7959e2d\\\", \\\"metricType\\\": \\\"azureml.v1.scalar\\\", \\\"createdUtc\\\": \\\"2020-01-04T03:15:29.099399Z\\\", \\\"name\\\": \\\"training_message01: \\\", \\\"description\\\": \\\"\\\", \\\"numCells\\\": 1, \\\"cells\\\": [{\\\"training_message01: \\\": \\\"finished experiment\\\"}], \\\"schema\\\": {\\\"numProperties\\\": 1, \\\"properties\\\": [{\\\"propertyId\\\": \\\"training_message01: \\\", \\\"name\\\": \\\"training_message01: \\\", \\\"type\\\": \\\"string\\\"}]}}]}\\n2020-01-04 03:15:29,197|msrest.universal_http|DEBUG|Configuring redirects: allow=True, max=30\\n2020-01-04 03:15:29,197|msrest.universal_http|DEBUG|Configuring request: timeout=100, verify=True, cert=None\\n2020-01-04 03:15:29,197|msrest.universal_http|DEBUG|Configuring proxies: ''\\n2020-01-04 03:15:29,197|msrest.universal_http|DEBUG|Evaluate proxies against ENV settings: True\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG|Response status: 200\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG|Response headers:\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'Date': 'Sat, 04 Jan 2020 03:15:29 GMT'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'Content-Length': '0'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'Connection': 'keep-alive'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'Request-Context': 'appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'x-ms-client-request-id': '3bf60291-a896-44db-a757-77b3dc584c55'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'x-ms-client-session-id': ''\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG| 'X-Content-Type-Options': 'nosniff'\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG|Response content:\\n2020-01-04 03:15:29,354|msrest.http_logger|DEBUG|\\n2020-01-04 03:15:29,355|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.post_batch-async:False|DEBUG|[STOP]\\n2020-01-04 03:15:29,447|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[START]\\n2020-01-04 03:15:29,447|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|Awaiter is PostMetricsBatch\\n2020-01-04 03:15:29,447|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.0__log_batch.WaitingTask|DEBUG|[STOP]\\n2020-01-04 03:15:29,447|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch|DEBUG|Waiting on task: 0__log_batch.\\n1 tasks left. 
Current duration of flush 0.0002353191375732422 seconds.\\n\\n2020-01-04 03:15:29,447|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.PostMetricsBatch.WaitFlushSource:MetricsClient|DEBUG|[STOP]\\n2020-01-04 03:15:29,447|azureml._SubmittedRun#020_AzureMLEstimator_1578107635_6a48bf8c.RunHistoryFacade.MetricsClient.FlushingMetricsClient|DEBUG|[STOP]\\n2020-01-04 03:15:29,447|azureml.SendRunKillSignal|DEBUG|[STOP]\\n2020-01-04 03:15:29,447|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[START]\\n2020-01-04 03:15:29,448|azureml.HistoryTrackingWorkerPool.WorkerPoolShutdown|DEBUG|[STOP]\\n2020-01-04 03:15:29,448|azureml.WorkerPool|DEBUG|[STOP]\\n\\nRun is completed.\", \"graph\": {}, \"widget_settings\": {\"childWidgetDisplay\": \"popup\", \"send_telemetry\": false, \"log_level\": \"INFO\", \"sdk_version\": \"1.0.81\"}, \"loading\": false}" }, "metadata": {}, "output_type": "display_data" @@ -1010,8 +1001,42 @@ "name": "stdout", "output_type": "stream", "text": [ - "Counter499: submission of job 499 on 400 nodes took 9.16640019416809 seconds \n", - "run list length 499\n" + "Counter182: submission of job 182 on 20 nodes took 8.759885311126709 seconds \n", + "run list length 182\n", + "Counter183: submission of job 183 on 20 nodes took 9.231754302978516 seconds \n", + "run list length 183\n", + "Counter184: submission of job 184 on 20 nodes took 13.600019454956055 seconds \n", + "run list length 184\n", + "Counter185: submission of job 185 on 20 nodes took 8.25251030921936 seconds \n", + "run list length 185\n", + "Counter186: submission of job 186 on 20 nodes took 8.89614224433899 seconds \n", + "run list length 186\n", + "Counter187: submission of job 187 on 20 nodes took 9.315387725830078 seconds \n", + "run list length 187\n", + "Counter188: submission of job 188 on 20 nodes took 8.64873480796814 seconds \n", + "run list length 188\n", + "Counter189: submission of job 189 on 20 nodes took 8.950633525848389 seconds \n", + "run list length 189\n", + "Counter190: submission of job 190 on 20 nodes took 7.8102922439575195 seconds \n", + "run list length 190\n", + "Counter191: submission of job 191 on 20 nodes took 8.68752121925354 seconds \n", + "run list length 191\n", + "Counter192: submission of job 192 on 20 nodes took 10.058020830154419 seconds \n", + "run list length 192\n", + "Counter193: submission of job 193 on 20 nodes took 10.503464221954346 seconds \n", + "run list length 193\n", + "Counter194: submission of job 194 on 20 nodes took 15.409441709518433 seconds \n", + "run list length 194\n", + "Counter195: submission of job 195 on 20 nodes took 12.09773850440979 seconds \n", + "run list length 195\n", + "Counter196: submission of job 196 on 20 nodes took 8.979861497879028 seconds \n", + "run list length 196\n", + "Counter197: submission of job 197 on 20 nodes took 9.068669319152832 seconds \n", + "run list length 197\n", + "Counter198: submission of job 198 on 20 nodes took 8.007090330123901 seconds \n", + "run list length 198\n", + "Counter199: submission of job 199 on 20 nodes took 9.039068460464478 seconds \n", + "run list length 199\n" ] } ], @@ -1019,8 +1044,9 @@ "import time\n", "from IPython.display import clear_output\n", "\n", - "no_of_jobs = 500\n", - "no_of_nodes = 400\n", + "no_of_nodes = int(20)\n", + "no_of_jobs = int(no_of_nodes*10)\n", + "\n", "\n", "job_counter = 0\n", "print_cycle = 20\n", @@ -1054,106 +1080,46 @@ { "data": { "text/plain": [ - "array([10.16889381, 10.52522182, 8.67223501, 7.76976609, 
8.98659873,\n", - " 9.54043746, 7.56379271, 7.95067477, 10.98772812, 8.58469343,\n", - " 9.19690919, 8.37747335, 8.49322033, 8.96249437, 11.00566387,\n", - " 10.18721223, 8.70340395, 9.07873917, 8.83641577, 9.93886757,\n", - " 8.43751788, 8.88584614, 8.46158338, 8.10118651, 7.95576859,\n", - " 8.02682757, 8.59585524, 11.43893504, 8.21132302, 7.56929898,\n", - " 9.16166759, 7.96446443, 8.20211887, 8.0066514 , 8.16604567,\n", - " 9.03855515, 9.27646971, 7.88356876, 8.6105082 , 8.63279152,\n", - " 9.63798594, 7.88380122, 11.83064437, 7.67609763, 8.36450744,\n", - " 10.36203027, 8.20605659, 8.27934074, 8.71854138, 7.48072934,\n", - " 7.98534775, 7.88993239, 9.49783468, 8.20365477, 8.31964707,\n", - " 8.24653029, 9.14784336, 8.39632297, 8.88221884, 10.17075896,\n", - " 7.93166018, 8.50952411, 8.35107565, 8.62145162, 9.1473949 ,\n", - " 10.16314006, 9.48931861, 9.52163553, 10.48561263, 8.70149064,\n", - " 8.83968425, 8.77899456, 8.19752908, 8.23720503, 8.44300842,\n", - " 10.4865036 , 9.38597918, 8.16601682, 10.31557417, 9.39266205,\n", - " 9.3517375 , 8.26235414, 9.90602231, 8.08361053, 9.55309701,\n", - " 8.37694287, 8.2842195 , 9.27187061, 8.05741239, 9.81221128,\n", - " 8.67282987, 7.50111246, 8.84159875, 7.5928266 , 8.2180264 ,\n", - " 11.30247498, 8.97954369, 9.08557224, 8.62394547, 27.931288 ,\n", - " 11.31702137, 9.03355598, 9.82408452, 10.98696327, 8.15972924,\n", - " 8.10580516, 8.6766634 , 9.18826079, 9.91399217, 9.63535714,\n", - " 8.84899211, 8.59690166, 9.08935356, 7.87525439, 9.04824638,\n", - " 10.58436322, 8.05351543, 8.0442934 , 8.51687765, 8.23182964,\n", - " 7.90365982, 9.41734576, 7.82690763, 7.86053801, 8.81060672,\n", - " 15.63083076, 9.12365007, 8.4692018 , 8.38626456, 9.1455934 ,\n", - " 7.9579742 , 8.32254815, 9.60984373, 7.72059083, 9.80256414,\n", - " 8.03569841, 8.56897283, 9.88993764, 9.825032 , 9.10494757,\n", - " 7.96795917, 8.83923078, 8.12920213, 9.14702606, 10.44252062,\n", - " 8.11435223, 11.10698366, 8.54753256, 11.07914209, 8.0072608 ,\n", - " 8.64252162, 7.86998582, 8.16502595, 9.72599697, 8.01553535,\n", - " 8.05236411, 9.4306016 , 8.3510747 , 8.15123487, 7.73660946,\n", - " 8.78807712, 8.42650437, 9.09502602, 67.75333071, 14.179214 ,\n", - " 13.08692336, 14.52568007, 12.39239168, 8.40634942, 8.3893857 ,\n", - " 7.80925822, 8.04524732, 10.61561441, 9.33992386, 8.05361605,\n", - " 8.71911073, 8.13864756, 8.18779135, 8.03402972, 8.20232296,\n", - " 10.52845287, 8.21701574, 9.63750052, 8.16265893, 7.95386362,\n", - " 7.85334754, 7.96290469, 8.1984942 , 8.32950211, 17.0101552 ,\n", - " 14.20266891, 13.09765553, 14.32137418, 8.90045214, 9.79849219,\n", - " 7.7378149 , 8.17814636, 8.0692122 , 8.02391315, 7.73337412,\n", - " 8.24749708, 8.21430159, 8.42469835, 7.93915629, 8.17162681,\n", - " 9.29439068, 8.39062524, 8.05844831, 12.62865376, 8.03868556,\n", - " 8.03020358, 8.72658324, 7.98921943, 10.13008642, 8.36204886,\n", - " 9.8618927 , 8.84138846, 8.26497674, 8.53586483, 11.22441888,\n", - " 8.60046291, 9.52709126, 8.1862669 , 8.47402501, 8.08845234,\n", - " 8.0216496 , 8.25297642, 9.52822161, 8.53732967, 9.20458651,\n", - " 7.84344959, 8.76693869, 9.55830622, 9.32047439, 9.61785316,\n", - " 14.20765901, 13.20616293, 12.79950929, 13.23175693, 10.48755121,\n", - " 7.89634991, 8.62207508, 10.17518067, 9.5078795 , 8.16943836,\n", - " 11.88958383, 8.53581595, 8.78866196, 9.86849713, 8.38485384,\n", - " 7.80456519, 8.7930553 , 8.67091751, 11.64525867, 10.70969439,\n", - " 9.57600379, 7.88863015, 9.16765165, 8.10214615, 8.1002388 ,\n", - " 7.79884577, 7.84607792, 
10.70999765, 8.32228923, 8.15903163,\n", - " 8.16516185, 11.13710332, 8.67460465, 8.04933095, 7.92010641,\n", - " 9.71926355, 7.96389985, 8.50223684, 7.80719972, 7.94503832,\n", - " 9.14503789, 8.74866915, 8.32825327, 9.38176489, 8.7043674 ,\n", - " 8.11469626, 8.39300489, 8.52375507, 9.48120856, 9.30481339,\n", - " 11.00180173, 8.00356221, 9.36562443, 11.26503015, 8.29429078,\n", - " 10.5787971 , 8.23888326, 8.25085521, 9.65488529, 10.22367787,\n", - " 8.86958766, 8.67924905, 9.8065629 , 9.98437238, 10.44085979,\n", - " 8.48997521, 13.41537356, 8.53429914, 9.41697288, 8.75000739,\n", - " 8.67022324, 10.65776849, 8.78767824, 29.17240787, 8.29843664,\n", - " 10.48030996, 8.60965252, 9.05648637, 11.23915553, 7.71198177,\n", - " 8.58811665, 11.27894258, 11.26059055, 8.08691239, 9.09145069,\n", - " 8.37398744, 9.33932018, 9.50723815, 14.62887979, 8.08766961,\n", - " 8.1010766 , 8.15962887, 7.86279893, 7.81253982, 8.72090292,\n", - " 28.51810336, 8.20156765, 8.10436082, 9.35736108, 10.11271501,\n", - " 8.28001332, 8.10338402, 7.82260585, 7.74735689, 9.37371802,\n", - " 7.83298874, 8.09861684, 11.44845009, 13.80942464, 13.86787438,\n", - " 12.95256805, 13.5946703 , 9.04438519, 8.42931032, 7.69650388,\n", - " 8.3203001 , 8.93009233, 8.99896145, 10.261621 , 9.76696181,\n", - " 8.42695355, 9.45543766, 8.35829163, 8.19327784, 8.54582119,\n", - " 10.28408813, 9.96855664, 9.4126513 , 8.85548735, 8.37564468,\n", - " 7.85812593, 11.26866746, 11.99777699, 8.90290856, 9.73011518,\n", - " 11.37953544, 9.56070495, 13.08286595, 7.91717887, 8.70709944,\n", - " 8.89286566, 9.43534017, 9.63375568, 9.45693254, 9.41722798,\n", - " 8.95478702, 10.59636545, 9.07217526, 8.91465688, 8.43598938,\n", - " 10.09872103, 8.53826594, 10.51633263, 8.16474724, 9.60920191,\n", - " 8.79985189, 11.08250904, 15.82575488, 13.72388315, 13.76962495,\n", - " 15.5107224 , 12.99527621, 9.55358648, 11.27318692, 10.64224267,\n", - " 9.28194666, 8.15835619, 10.34727526, 9.13943338, 8.47959018,\n", - " 12.95671797, 8.67874169, 9.48093748, 11.13487458, 11.16393185,\n", - " 9.45039058, 9.26687908, 10.83345985, 10.013412 , 12.88114643,\n", - " 8.90868664, 9.11424375, 10.62471223, 10.37447572, 8.56728458,\n", - " 11.44042325, 8.61506176, 14.37763166, 9.26899981, 9.01356244,\n", - " 12.6770153 , 7.95549965, 8.69824529, 8.16541219, 10.80149889,\n", - " 9.85532331, 9.16404986, 11.05029202, 8.95759201, 9.60003638,\n", - " 8.64066339, 11.99474025, 10.88645577, 9.82658648, 8.38357234,\n", - " 8.1931479 , 8.36809587, 8.34779596, 9.29737759, 7.71148348,\n", - " 8.34155583, 8.46944427, 9.46755242, 8.39070392, 9.67334032,\n", - " 9.42819619, 8.90718842, 8.95999622, 17.03638124, 14.13874507,\n", - " 14.17324162, 14.82433629, 10.27358413, 7.75390744, 10.63386297,\n", - " 10.74013877, 9.25264263, 8.88592076, 15.62230277, 8.68499494,\n", - " 7.90613437, 10.8253715 , 9.28829837, 9.96133757, 8.82941794,\n", - " 11.07499003, 9.08565426, 8.76584291, 11.91541052, 9.45269704,\n", - " 9.68554997, 9.76184082, 10.95884109, 9.22084093, 9.07609534,\n", - " 9.72482204, 8.66262245, 8.85580897, 12.12771249, 9.1096139 ,\n", - " 9.55135322, 9.73613167, 12.00068331, 9.63835907, 8.8003633 ,\n", - " 10.78142428, 10.36234426, 8.7075491 , 8.79299307, 10.6836946 ,\n", - " 8.24508142, 9.70224071, 8.64105797, 9.16640019])" + "array([ 9.38066268, 9.179739 , 7.78263307, 8.48762512, 8.10376453,\n", + " 8.56524658, 9.3991704 , 9.5658536 , 8.36927128, 8.75663853,\n", + " 12.13388062, 14.47540092, 11.96517801, 8.72594619, 8.40938139,\n", + " 9.32459807, 7.49898648, 7.6770916 , 
8.02397871, 7.90862179,\n", + " 8.10608029, 11.15578365, 8.79269648, 8.14802432, 8.24170065,\n", + " 8.16714478, 7.58979988, 8.1290102 , 8.62176943, 8.25858569,\n", + " 9.62505913, 47.5489819 , 8.66863251, 9.16207457, 8.39729404,\n", + " 8.35266876, 9.4563067 , 8.82322693, 8.02664924, 8.37660813,\n", + " 9.95896864, 8.33410001, 8.31452036, 8.16242218, 8.89259601,\n", + " 8.59475064, 8.67619205, 8.53494453, 8.38801956, 8.262182 ,\n", + " 8.15596676, 8.39553308, 7.6218369 , 8.0278995 , 8.25957584,\n", + " 9.44061399, 8.49700546, 8.50827813, 7.66743159, 7.89283347,\n", + " 9.41012764, 7.84124994, 9.12892008, 9.09043598, 8.20927215,\n", + " 8.52260566, 8.71522832, 8.5416894 , 8.18118405, 9.00752664,\n", + " 8.04391742, 8.14891124, 8.28526998, 9.01239848, 8.38266683,\n", + " 14.62356496, 14.96698308, 9.08445382, 9.09234452, 10.11763263,\n", + " 9.4793036 , 8.85432053, 8.04963088, 11.1493423 , 13.14935565,\n", + " 11.87230349, 9.03305769, 7.69870114, 9.0376091 , 7.95852947,\n", + " 9.09171367, 7.89099622, 7.95618558, 8.01989579, 7.80822897,\n", + " 8.87989831, 7.47948337, 8.94316697, 11.28595853, 8.06794882,\n", + " 8.43929172, 13.42541838, 12.66442442, 10.31300116, 8.40639782,\n", + " 9.44638705, 7.86042881, 7.9562912 , 7.63827801, 9.36086607,\n", + " 8.08337593, 8.08384132, 8.62920904, 7.93341088, 27.11030984,\n", + " 8.74523377, 7.85957456, 7.72122622, 8.07594705, 7.69747329,\n", + " 10.03533864, 7.98938775, 8.65719438, 7.70120573, 8.13396835,\n", + " 7.81585193, 8.7429204 , 8.03873968, 7.91843748, 8.5353601 ,\n", + " 9.08414316, 9.40751314, 11.16570473, 12.92745161, 12.54237223,\n", + " 8.64982891, 8.20941329, 8.21981692, 8.69059372, 9.09631157,\n", + " 8.40581775, 8.06178737, 7.88094616, 8.26159453, 8.75838733,\n", + " 8.35216618, 8.49808455, 8.0595901 , 8.94898224, 8.55322099,\n", + " 9.57320905, 8.15324783, 8.81365919, 8.33678389, 10.17447209,\n", + " 10.76701999, 8.4331758 , 8.84952474, 8.16706038, 8.91656828,\n", + " 11.22193599, 7.92432523, 9.3538506 , 8.28465366, 8.90996122,\n", + " 8.44929314, 7.96649742, 7.91064453, 8.33018184, 9.50152779,\n", + " 8.73313498, 7.64226604, 9.21144247, 8.67113829, 7.94187903,\n", + " 9.58002162, 8.84625363, 9.22457576, 9.34697914, 8.5770123 ,\n", + " 8.62616229, 8.75988531, 9.2317543 , 13.60001945, 8.25251031,\n", + " 8.89614224, 9.31538773, 8.64873481, 8.95063353, 7.81029224,\n", + " 8.68752122, 10.05802083, 10.50346422, 15.40944171, 12.0977385 ,\n", + " 8.9798615 , 9.06866932, 8.00709033, 9.03906846])" ] }, "execution_count": 22, @@ -1163,7 +1129,7 @@ { "data": { "text/plain": [ - "(array([ 0, 0, 0, 16, 105, 85, 75, 61, 40]),\n", + "(array([ 0, 0, 0, 12, 50, 44, 40, 20, 5]),\n", " array([ 6. , 6.44444444, 6.88888889, 7.33333333, 7.77777778,\n", " 8.22222222, 8.66666667, 9.11111111, 9.55555556, 10. 
]))" ] @@ -1188,7 +1154,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Final print 24, time 107.859 seconds: Counter({'Completed': 478, 'Failed': 21})izing': 1})Running': 1})\r" + "Final print 38, time 113.758 seconds: Counter({'Completed': 183, 'Failed': 16})izing': 1})Running': 1})\r" ] } ], @@ -1246,8 +1212,8 @@ { "data": { "text/plain": [ - "array([28, 33, 15, 45, 18, 43, 30, 31, 65, 6, 42, 16, 11, 41, 19, 8, 5,\n", - " 2, 64, 34])" + "array([68, 43, 18, 4, 10, 7, 9, 3, 15, 13, 2, 19, 16, 12, 11, 5, 8,\n", + " 20, 23, 14])" ] }, "execution_count": 25, @@ -1258,22 +1224,21 @@ "name": "stdout", "output_type": "stream", "text": [ - - "[244.173832 244.510378 245.027595 245.540781 247.395535 247.411761\n", - " 247.933416 248.256958 248.468753 249.724234 249.874347 250.013758\n", - " 250.53221 251.10704 251.400594 253.192625 253.421425 253.968411\n", - " 256.888013 260.331917]\n", - "['Completed' 'Completed' 'Completed' 'Completed' 'Completed' 'Completed'\n", - " 'Completed' 'Failed' 'Completed' 'Completed' 'Completed' 'Completed'\n", - " 'Failed' 'Completed' 'Completed' 'Completed' 'Completed' 'Completed'\n", - " 'Failed' 'Completed']\n" + "[125.670241 235.262071 242.972693 243.289016 245.793012 246.206194\n", + " 246.548031 247.237064 248.347261 248.495178 248.532195 249.195745\n", + " 251.287567 251.48567 251.764877 252.207378 253.225733 253.885991\n", + " 256.071908 258.126523]\n", + "['Completed' 'Completed' 'Completed' 'Failed' 'Completed' 'Completed'\n", + " 'Completed' 'Completed' 'Completed' 'Completed' 'Completed' 'Completed'\n", + " 'Completed' 'Completed' 'Completed' 'Completed' 'Completed' 'Completed'\n", + " 'Completed' 'Completed']\n" ] }, { "data": { "text/plain": [ - "array([232, 54, 195, 214, 250, 48, 490, 261, 329, 140, 336, 129, 311,\n", - " 223, 226, 370, 319, 254, 197, 85])" + "array([137, 140, 113, 70, 136, 135, 159, 149, 179, 101, 198, 171, 142,\n", + " 116, 181, 125, 185, 151, 115, 154])" ] }, "execution_count": 25, @@ -1284,19 +1249,19 @@ "name": "stdout", "output_type": "stream", "text": [ - "[92.52469 92.854187 93.127771 93.19945 93.319895 93.372538 93.557287\n", - " 93.579393 93.646901 93.681486 93.890417 94.05724 94.162242 94.165297\n", - " 94.182998 94.263456 94.316783 94.400242 94.406081 94.583321]\n", - "['Completed' 'Completed' 'Completed' 'Completed' 'Failed' 'Completed'\n", - " 'Failed' 'Failed' 'Completed' 'Completed' 'Completed' 'Completed'\n", - " 'Completed' 'Completed' 'Completed' 'Failed' 'Completed' 'Completed'\n", - " 'Failed' 'Completed']\n" + "[93.328156 93.536588 93.550991 93.575355 93.699247 93.909906 94.273905\n", + " 94.396221 94.500897 94.526355 94.56506 94.763535 95.017692 95.024318\n", + " 95.145761 95.297085 95.374434 95.387763 95.447059 95.533165]\n", + "['Completed' 'Completed' 'Completed' 'Failed' 'Completed' 'Failed'\n", + " 'Completed' 'Failed' 'Completed' 'Completed' 'Completed' 'Failed'\n", + " 'Completed' 'Completed' 'Failed' 'Completed' 'Completed' 'Failed'\n", + " 'Completed' 'Completed']\n" ] }, { "data": { "text/plain": [ - "(array([ 0, 0, 128, 320, 8, 1, 3, 3, 0]),\n", + "(array([ 0, 0, 73, 102, 5, 0, 0, 0, 0]),\n", " array([ 50. , 66.66666667, 83.33333333, 100. ,\n", " 116.66666667, 133.33333333, 150. , 166.66666667,\n", " 183.33333333, 200. 
]))" @@ -1347,9 +1312,9 @@ ], "metadata": { "kernelspec": { - "display_name": "fwi_dev_conda_environment Python", + "display_name": "Python [conda env:aml-sdk-conda-env] *", "language": "python", - "name": "fwi_dev_conda_environment" + "name": "conda-env-aml-sdk-conda-env-py" }, "language_info": { "codemirror_mode": { diff --git a/contrib/scripts/ablation.sh b/contrib/scripts/ablation.sh index 81fcdaa6..3e57a6e5 100755 --- a/contrib/scripts/ablation.sh +++ b/contrib/scripts/ablation.sh @@ -3,22 +3,22 @@ source activate seismic-interpretation # Patch_Size 100: Patch vs Section Depth -python scripts/prepare_dutchf3.py split_train_val patch --data-dir=/mnt/dutch --stride=50 --patch=100 +python scripts/prepare_dutchf3.py split_train_val patch --data_dir=/mnt/dutch --stride=50 --patch_size=100 --split_direction=both python train.py OUTPUT_DIR /data/output/hrnet_patch TRAIN.DEPTH patch TRAIN.PATCH_SIZE 100 --cfg 'configs/hrnet.yaml' python train.py OUTPUT_DIR /data/output/hrnet_section TRAIN.DEPTH section TRAIN.PATCH_SIZE 100 --cfg 'configs/hrnet.yaml' # Patch_Size 150: Patch vs Section Depth -python scripts/prepare_dutchf3.py split_train_val patch --data-dir=/mnt/dutch --stride=50 --patch=150 +python scripts/prepare_dutchf3.py split_train_val patch --data_dir=/mnt/dutch --stride=50 --patch_size=150 --split_direction=both python train.py OUTPUT_DIR /data/output/hrnet_patch TRAIN.DEPTH patch TRAIN.PATCH_SIZE 150 --cfg 'configs/hrnet.yaml' python train.py OUTPUT_DIR /data/output/hrnet_section TRAIN.DEPTH section TRAIN.PATCH_SIZE 150 --cfg 'configs/hrnet.yaml' # Patch_Size 200: Patch vs Section Depth -python scripts/prepare_dutchf3.py split_train_val patch --data-dir=/mnt/dutch --stride=50 --patch=200 +python scripts/prepare_dutchf3.py split_train_val patch --data_dir=/mnt/dutch --stride=50 --patch_size=200 --split_direction=both python train.py OUTPUT_DIR /data/output/hrnet_patch TRAIN.DEPTH patch TRAIN.PATCH_SIZE 200 --cfg 'configs/hrnet.yaml' python train.py OUTPUT_DIR /data/output/hrnet_section TRAIN.DEPTH section TRAIN.PATCH_SIZE 200 --cfg 'configs/hrnet.yaml' # Patch_Size 250: Patch vs Section Depth -python scripts/prepare_dutchf3.py split_train_val patch --data-dir=/mnt/dutch --stride=50 --patch=250 +python scripts/prepare_dutchf3.py split_train_val patch --data_dir=/mnt/dutch --stride=50 --patch_size=250 --split_direction=both python train.py OUTPUT_DIR /data/output/hrnet_patch TRAIN.DEPTH patch TRAIN.PATCH_SIZE 250 TRAIN.AUGMENTATIONS.RESIZE.HEIGHT 250 TRAIN.AUGMENTATIONS.RESIZE.WIDTH 250 --cfg 'configs/hrnet.yaml' python train.py OUTPUT_DIR /data/output/hrnet_section TRAIN.DEPTH section TRAIN.PATCH_SIZE 250 TRAIN.AUGMENTATIONS.RESIZE.HEIGHT 250 TRAIN.AUGMENTATIONS.RESIZE.WIDTH 250 --cfg 'configs/hrnet.yaml' diff --git a/cv_lib/cv_lib/event_handlers/__init__.py b/cv_lib/cv_lib/event_handlers/__init__.py index 589bbd86..8bd8567f 100644 --- a/cv_lib/cv_lib/event_handlers/__init__.py +++ b/cv_lib/cv_lib/event_handlers/__init__.py @@ -31,8 +31,7 @@ def _create_checkpoint_handler(self): def __call__(self, engine, to_save): self._checkpoint_handler(engine, to_save) if self._snapshot_function(): - files = glob.glob(os.path.join(self._model_save_location, self._running_model_prefix + "*")) - print(files) + files = glob.glob(os.path.join(self._model_save_location, self._running_model_prefix + "*")) name_postfix = os.path.basename(files[0]).lstrip(self._running_model_prefix) copyfile( files[0], diff --git a/cv_lib/cv_lib/event_handlers/logging_handlers.py 
b/cv_lib/cv_lib/event_handlers/logging_handlers.py index b7c41651..de354760 100644 --- a/cv_lib/cv_lib/event_handlers/logging_handlers.py +++ b/cv_lib/cv_lib/event_handlers/logging_handlers.py @@ -1,6 +1,6 @@ # Copyright (c) Microsoft Corporation. # Licensed under the MIT License. - +import json import logging import logging.config from toolz import curry @@ -25,15 +25,22 @@ def log_lr(optimizer, engine): logger.info(f"lr - {lr}") -_DEFAULT_METRICS = {"pixacc": "Avg accuracy :", "nll": "Avg loss :"} - - @curry -def log_metrics(log_msg, engine, metrics_dict=_DEFAULT_METRICS): +def log_metrics( + engine, + evaluator, + metrics_dict={ + "nll": "Avg loss :", + "pixacc": "Pixelwise Accuracy :", + "mca": "Avg Class Accuracy :", + "mIoU": "Avg Class IoU :", + }, + stage="undefined", +): logger = logging.getLogger(__name__) - metrics = engine.state.metrics - metrics_msg = " ".join([f"{metrics_dict[k]} {metrics[k]:.2f}" for k in metrics_dict]) - logger.info(f"{log_msg} - Epoch {engine.state.epoch} [{engine.state.max_epochs}] " + metrics_msg) + metrics = evaluator.state.metrics + metrics_msg = " ".join([f"{metrics_dict[k]} {metrics[k]:.4f}" for k in metrics_dict]) + logger.info(f"{stage} - Epoch {engine.state.epoch} [{engine.state.max_epochs}] " + metrics_msg) @curry @@ -44,6 +51,7 @@ def log_class_metrics(log_msg, engine, metrics_dict): logger.info(f"{log_msg} - Epoch {engine.state.epoch} [{engine.state.max_epochs}]\n" + metrics_msg) +# TODO: remove Evaluator once other train.py scripts are updated class Evaluator: def __init__(self, evaluation_engine, data_loader): self._evaluation_engine = evaluation_engine @@ -51,40 +59,3 @@ def __init__(self, evaluation_engine, data_loader): def __call__(self, engine): self._evaluation_engine.run(self._data_loader) - - -class HorovodLRScheduler: - """ - Horovod: using `lr = base_lr * hvd.size()` from the very beginning leads to worse final - accuracy. Scale the learning rate `lr = base_lr` ---> `lr = base_lr * hvd.size()` during - the first five epochs. See https://arxiv.org/abs/1706.02677 for details. - After the warmup reduce learning rate by 10 on the 30th, 60th and 80th epochs. - """ - - def __init__( - self, base_lr, warmup_epochs, cluster_size, data_loader, optimizer, batches_per_allreduce, - ): - self._warmup_epochs = warmup_epochs - self._cluster_size = cluster_size - self._data_loader = data_loader - self._optimizer = optimizer - self._base_lr = base_lr - self._batches_per_allreduce = batches_per_allreduce - self._logger = logging.getLogger(__name__) - - def __call__(self, engine): - epoch = engine.state.epoch - if epoch < self._warmup_epochs: - epoch += float(engine.state.iteration + 1) / len(self._data_loader) - lr_adj = 1.0 / self._cluster_size * (epoch * (self._cluster_size - 1) / self._warmup_epochs + 1) - elif epoch < 30: - lr_adj = 1.0 - elif epoch < 60: - lr_adj = 1e-1 - elif epoch < 80: - lr_adj = 1e-2 - else: - lr_adj = 1e-3 - for param_group in self._optimizer.param_groups: - param_group["lr"] = self._base_lr * self._cluster_size * self._batches_per_allreduce * lr_adj - self._logger.debug(f"Adjust learning rate {param_group['lr']}") diff --git a/cv_lib/cv_lib/event_handlers/tensorboard_handlers.py b/cv_lib/cv_lib/event_handlers/tensorboard_handlers.py index a9ba5f4c..d3df7f31 100644 --- a/cv_lib/cv_lib/event_handlers/tensorboard_handlers.py +++ b/cv_lib/cv_lib/event_handlers/tensorboard_handlers.py @@ -1,31 +1,43 @@ # Copyright (c) Microsoft Corporation. # Licensed under the MIT License. 
-from toolz import curry import torchvision +from tensorboardX import SummaryWriter import logging import logging.config +from toolz import curry -from tensorboardX import SummaryWriter - +from cv_lib.segmentation.dutchf3.utils import np_to_tb +from cv_lib.utils import decode_segmap def create_summary_writer(log_dir): writer = SummaryWriter(logdir=log_dir) return writer +def _transform_image(output_tensor): + output_tensor = output_tensor.cpu() + return torchvision.utils.make_grid(output_tensor, normalize=True, scale_each=True) + + +def _transform_pred(output_tensor): + output_tensor = output_tensor.squeeze().cpu().numpy() + decoded = decode_segmap(output_tensor) + return torchvision.utils.make_grid(np_to_tb(decoded), normalize=False, scale_each=False) + + def _log_model_output(log_label, summary_writer, engine): summary_writer.add_scalar(log_label, engine.state.output["loss"], engine.state.iteration) @curry def log_training_output(summary_writer, engine): - _log_model_output("training/loss", summary_writer, engine) + _log_model_output("Training/loss", summary_writer, engine) @curry def log_validation_output(summary_writer, engine): - _log_model_output("validation/loss", summary_writer, engine) + _log_model_output("Validation/loss", summary_writer, engine) @curry @@ -42,31 +54,62 @@ def log_lr(summary_writer, optimizer, log_interval, engine): summary_writer.add_scalar("lr", lr[0], getattr(engine.state, log_interval)) -_DEFAULT_METRICS = {"accuracy": "Avg accuracy :", "nll": "Avg loss :"} - - +# TODO: This is deprecated, and will be removed in the future. @curry -def log_metrics(summary_writer, train_engine, log_interval, engine, metrics_dict=_DEFAULT_METRICS): +def log_metrics( + summary_writer, train_engine, log_interval, engine, metrics_dict={"pixacc": "Avg accuracy :", "nll": "Avg loss :"} +): metrics = engine.state.metrics for m in metrics_dict: - summary_writer.add_scalar( - metrics_dict[m], metrics[m], getattr(train_engine.state, log_interval) - ) + summary_writer.add_scalar(metrics_dict[m], metrics[m], getattr(train_engine.state, log_interval)) -def create_image_writer( - summary_writer, label, output_variable, normalize=False, transform_func=lambda x: x -): +# TODO: This is deprecated, and will be removed in the future. +def create_image_writer(summary_writer, label, output_variable, normalize=False, transform_func=lambda x: x): logger = logging.getLogger(__name__) + logger.warning( + "create_image_writer() in tensorboard_handlers.py is deprecated, and will be removed in a future update." 
+ ) def write_to(engine): try: data_tensor = transform_func(engine.state.output[output_variable]) - image_grid = torchvision.utils.make_grid( - data_tensor, normalize=normalize, scale_each=True - ) + image_grid = torchvision.utils.make_grid(data_tensor, normalize=normalize, scale_each=True) summary_writer.add_image(label, image_grid, engine.state.epoch) except KeyError: logger.warning("Predictions and or ground truth labels not available to report") return write_to + + +def log_results(engine, evaluator, summary_writer, n_classes, stage): + epoch = engine.state.epoch + metrics = evaluator.state.metrics + outputs = evaluator.state.output + + # Log Metrics: + summary_writer.add_scalar(f"{stage}/mIoU", metrics["mIoU"], epoch) + summary_writer.add_scalar(f"{stage}/nll", metrics["nll"], epoch) + summary_writer.add_scalar(f"{stage}/mca", metrics["mca"], epoch) + summary_writer.add_scalar(f"{stage}/pixacc", metrics["pixacc"], epoch) + + for i in range(n_classes): + summary_writer.add_scalar(f"{stage}/IoU_class_" + str(i), metrics["ciou"][i], epoch) + + # Log Images: + image = outputs["image"] + mask = outputs["mask"] + y_pred = outputs["y_pred"].max(1, keepdim=True)[1] + VISUALIZATION_LIMIT = 8 + + if evaluator.state.batch[0].shape[0] > VISUALIZATION_LIMIT: + image = image[:VISUALIZATION_LIMIT] + mask = mask[:VISUALIZATION_LIMIT] + y_pred = y_pred[:VISUALIZATION_LIMIT] + + # Mask out the region in y_pred where padding exists in the mask: + y_pred[mask == 255] = 255 + + summary_writer.add_image(f"{stage}/Image", _transform_image(image), epoch) + summary_writer.add_image(f"{stage}/Mask", _transform_pred(mask), epoch) + summary_writer.add_image(f"{stage}/Pred", _transform_pred(y_pred), epoch) diff --git a/cv_lib/cv_lib/segmentation/dutchf3/__init__.py b/cv_lib/cv_lib/segmentation/dutchf3/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/cv_lib/cv_lib/segmentation/dutchf3/utils.py b/cv_lib/cv_lib/segmentation/dutchf3/utils.py index adad1e97..f00cae8c 100644 --- a/cv_lib/cv_lib/segmentation/dutchf3/utils.py +++ b/cv_lib/cv_lib/segmentation/dutchf3/utils.py @@ -38,9 +38,3 @@ def git_hash(): repo = Repo(search_parent_directories=True) return repo.active_branch.commit.hexsha - -def generate_path(base_path, *directories): - path = os.path.join(base_path, *directories) - if not os.path.exists(path): - os.makedirs(path) - return path diff --git a/cv_lib/cv_lib/segmentation/models/seg_hrnet.py b/cv_lib/cv_lib/segmentation/models/seg_hrnet.py index dd06118e..6671603f 100644 --- a/cv_lib/cv_lib/segmentation/models/seg_hrnet.py +++ b/cv_lib/cv_lib/segmentation/models/seg_hrnet.py @@ -427,14 +427,18 @@ def init_weights( elif isinstance(m, nn.BatchNorm2d): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) + + if pretrained and not os.path.isfile(pretrained): + raise FileNotFoundError(f"The file {pretrained} was not found. 
Please supply correct path or leave empty") + if os.path.isfile(pretrained): pretrained_dict = torch.load(pretrained) logger.info("=> loading pretrained model {}".format(pretrained)) model_dict = self.state_dict() pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict.keys()} - # for k, _ in pretrained_dict.items(): - # logger.info( - # '=> loading {} pretrained model {}'.format(k, pretrained)) + for k, _ in pretrained_dict.items(): + logger.info( + '=> loading {} pretrained model {}'.format(k, pretrained)) model_dict.update(pretrained_dict) self.load_state_dict(model_dict) diff --git a/cv_lib/cv_lib/segmentation/utils.py b/cv_lib/cv_lib/segmentation/utils.py index 07951e88..9c68d398 100644 --- a/cv_lib/cv_lib/segmentation/utils.py +++ b/cv_lib/cv_lib/segmentation/utils.py @@ -2,38 +2,8 @@ # Licensed under the MIT License. import numpy as np -from deepseismic_interpretation.dutchf3.data import decode_segmap -from os import path -from PIL import Image -from toolz import pipe - def _chw_to_hwc(image_array_numpy): return np.moveaxis(image_array_numpy, 0, -1) -def save_images(pred_dict, output_dir, num_classes, colours, extra_identifier=""): - for id in pred_dict: - save_image( - pred_dict[id].unsqueeze(0).cpu().numpy(), - output_dir, - num_classes, - colours, - extra_identifier=extra_identifier, - ) - - -def save_image(image_numpy_array, output_dir, num_classes, colours, extra_identifier=""): - """Save segmentation map as image - - Args: - image_numpy_array (numpy.Array): numpy array that represents an image - output_dir ([type]): - num_classes ([type]): [description] - colours ([type]): [description] - extra_identifier (str, optional): [description]. Defaults to "". - """ - im_array = decode_segmap(image_numpy_array, n_classes=num_classes, label_colours=colours,) - im = pipe((im_array * 255).astype(np.uint8).squeeze(), _chw_to_hwc, Image.fromarray,) - filename = path.join(output_dir, f"{id}_{extra_identifier}.png") - im.save(filename) diff --git a/cv_lib/cv_lib/utils.py b/cv_lib/cv_lib/utils.py index d3e41aeb..8af56d16 100644 --- a/cv_lib/cv_lib/utils.py +++ b/cv_lib/cv_lib/utils.py @@ -1,6 +1,52 @@ + +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. + import os import logging +from PIL import Image +import numpy as np +from matplotlib import pyplot as plt + +def normalize(array): + """ + Normalizes a segmentation mask array to be in [0,1] range + for use with PIL.Image + """ + min = array.min() + return (array - min) / (array.max() - min) +def mask_to_disk(mask, fname, cmap_name="Paired"): + """ + write segmentation mask to disk using a particular colormap + """ + cmap = plt.get_cmap(cmap_name) + Image.fromarray(cmap(normalize(mask), bytes=True)).save(fname) + +def image_to_disk(mask, fname, cmap_name="seismic"): + """ + write segmentation image to disk using a particular colormap + """ + cmap = plt.get_cmap(cmap_name) + Image.fromarray(cmap(normalize(mask), bytes=True)).save(fname) + +def decode_segmap(label_mask, colormap_name="Paired"): + """ + Decode segmentation class labels into a colour image + Args: + label_mask (np.ndarray): an (N,H,W) array of integer values denoting + the class label at each spatial location. + Returns: + (np.ndarray): the resulting decoded color image (NCHW). 
+ """ + out = np.zeros((label_mask.shape[0], 3, label_mask.shape[1], label_mask.shape[2])) + cmap = plt.get_cmap(colormap_name) + # loop over the batch + for i in range(label_mask.shape[0]): + im = Image.fromarray(cmap(normalize(label_mask[i, :, :]), bytes=True)).convert("RGB") + out[i, :, :, :] = np.array(im).swapaxes(0, 2).swapaxes(1, 2) + + return out def load_log_configuration(log_config_file): """ @@ -17,3 +63,10 @@ def load_log_configuration(log_config_file): logging.getLogger(__name__).error("Failed to load configuration from %s!", log_config_file) logging.getLogger(__name__).debug(str(e), exc_info=True) raise e + + +def generate_path(base_path, *directories): + path = os.path.join(base_path, *directories) + if not os.path.exists(path): + os.makedirs(path) + return path diff --git a/docker/Dockerfile b/docker/Dockerfile index 36e5f375..22d50ef4 100644 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -13,10 +13,6 @@ ENV LANG=C.UTF-8 LC_ALL=C.UTF-8 ENV PATH /opt/conda/bin:$PATH SHELL ["/bin/bash", "-c"] -# Set bash as the only shell -RUN rm /bin/sh && \ - ln -s /bin/bash /bin/sh - # Install Anaconda and download the seismic-deeplearning repo RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \ /bin/bash ~/miniconda.sh -b -p /opt/conda && \ @@ -24,35 +20,36 @@ RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86 ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \ echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \ apt-get install -y zip && \ - wget --quiet https://github.com/microsoft/seismic-deeplearning/archive/master.zip -O master.zip && \ - unzip master.zip && rm master.zip + wget --quiet https://github.com/microsoft/seismic-deeplearning/archive/staging.zip -O staging.zip && \ + unzip staging.zip && rm staging.zip -RUN cd seismic-deeplearning-master && \ +RUN cd seismic-deeplearning-staging && \ conda env create -n seismic-interpretation --file environment/anaconda/local/environment.yml && \ source activate seismic-interpretation && \ python -m ipykernel install --user --name seismic-interpretation && \ pip install -e interpretation && \ pip install -e cv_lib +# TODO: add back in later when Penobscot notebook is available +# Download Penobscot dataset: +# RUN cd seismic-deeplearning-staging && \ +# data_dir="/home/username/data/penobscot" && \ +# mkdir -p "$data_dir" && \ +# ./scripts/download_penobscot.sh "$data_dir" && \ +# cd scripts && \ +# source activate seismic-interpretation && \ +# python prepare_penobscot.py split_inline --data-dir=$data_dir --val-ratio=.1 --test-ratio=.2 && \ +# cd .. + # Download F3 dataset: -RUN cd seismic-deeplearning-master && \ +RUN cd seismic-deeplearning-staging && \ data_dir="/home/username/data/dutch" && \ mkdir -p "$data_dir" && \ ./scripts/download_dutch_f3.sh "$data_dir" && \ cd scripts && \ source activate seismic-interpretation && \ - python prepare_dutchf3.py split_train_val section --data-dir=${data_dir}/data && \ - python prepare_dutchf3.py split_train_val patch --data-dir=${data_dir}/data --stride=50 --patch=100 && \ - cd .. 
- -# Download Penobscot dataset: -RUN cd seismic-deeplearning-master && \ - data_dir="/home/username/data/penobscot" && \ - mkdir -p "$data_dir" && \ - ./scripts/download_penobscot.sh "$data_dir" && \ - cd scripts && \ - source activate seismic-interpretation && \ - python prepare_penobscot.py split_inline --data-dir=$data_dir --val-ratio=.1 --test-ratio=.2 && \ + python prepare_dutchf3.py split_train_val section --data-dir=${data_dir}/data --label_file=train/train_labels.npy --output_dir=splits --split_direction=both && \ + python prepare_dutchf3.py split_train_val patch --data-dir=${data_dir}/data --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both && \ cd .. # Run notebook @@ -60,6 +57,5 @@ EXPOSE 9000/tcp # TensorBoard inside notebook EXPOSE 9001/tcp -CMD cd /home/username && \ - source activate seismic-interpretation && \ +CMD source activate seismic-interpretation && \ jupyter lab --allow-root --ip 0.0.0.0 --port 9000
diff --git a/docker/README.md b/docker/README.md index 9c3d0f5e..701c2834 100644 --- a/docker/README.md +++ b/docker/README.md @@ -31,4 +31,4 @@ To run Tensorboard to visualize the logged metrics and results, open a terminal ```bash tensorboard --logdir output/ --port 9001 --bind_all ``` -Make sure your VM has the port 9001 allowed in the networking rules, and then you can open TensorBoard by navigating to `http://<VM_IP>:9001/` on your browser where `<VM_IP>` is your public VM IP address (or private VM IP address if you are using a VPN). \ No newline at end of file +Make sure your VM has the port 9001 allowed in the networking rules, and then you can open TensorBoard by navigating to `http://<VM_IP>:9001/` on your browser where `<VM_IP>` is your public VM IP address (or private VM IP address if you are using a VPN). 
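The Dockerfile and README changes above expose port 9000 for JupyterLab and port 9001 for TensorBoard. For context, a minimal sketch of how an image built from this Dockerfile could be launched with both ports published; the `seismic-deeplearning` image tag and the `--gpus all` flag below are illustrative assumptions, not something this PR defines:

```bash
# Build the image from the docker/ folder; the tag name is an example only.
docker build -t seismic-deeplearning docker/

# Publish the JupyterLab (9000) and TensorBoard (9001) ports on the host.
# --gpus all assumes the NVIDIA container toolkit is installed on the host.
docker run --rm -it --gpus all -p 9000:9000 -p 9001:9001 seismic-deeplearning
```

With the container running, the `tensorboard --logdir output/ --port 9001 --bind_all` command from the README above is then reachable through the published 9001 port on the VM's IP address.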
diff --git a/environment/anaconda/local/environment.yml b/environment/anaconda/local/environment.yml index 67077bc2..567bac81 100644 --- a/environment/anaconda/local/environment.yml +++ b/environment/anaconda/local/environment.yml @@ -4,25 +4,24 @@ channels: - pytorch dependencies: - python=3.6.7 - - pip - - pytorch==1.3.1 + - pip=19.0 + - pytorch==1.4.0 - cudatoolkit==10.1.243 - jupyter - ipykernel - - torchvision==0.4.2 + - torchvision>=0.5.0 - pandas==0.25.3 - - opencv==4.1.2 - scikit-learn==0.21.3 - tensorflow==2.0 - opt-einsum>=2.3.2 - tqdm==4.39.0 - - itkwidgets==0.23.1 + - itkwidgets==0.23.1 - pytest - papermill>=1.0.1 - jupyterlab - pip: - segyio==1.8.8 - - pytorch-ignite==0.3.0.dev20191105 # pre-release until stable available + - pytorch-ignite==0.3.0 - fire==0.2.1 - toolz==0.10.0 - tabulate==0.8.2 @@ -37,4 +36,5 @@ dependencies: - pylint - validators - scipy==1.1.0 - - jupytext==1.3.0 + - jupytext==1.3.0 + - validators
diff --git a/environment/docker/apex/dockerfile b/environment/docker/apex/dockerfile index 3becd3c4..9dcf5615 100644 --- a/environment/docker/apex/dockerfile +++ b/environment/docker/apex/dockerfile @@ -10,7 +10,7 @@ RUN git clone https://github.com/NVIDIA/apex && \ cd apex && \ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ -RUN pip install toolz pytorch-ignite torchvision pandas opencv-python fire tensorboardx scikit-learn yacs +RUN pip install toolz pytorch-ignite torchvision pandas fire tensorboardx scikit-learn yacs WORKDIR /workspace CMD /bin/bash \ No newline at end of file
diff --git a/environment/docker/horovod/dockerfile b/environment/docker/horovod/dockerfile index 0e12f455..04ed2f67 100644 --- a/environment/docker/horovod/dockerfile +++ b/environment/docker/horovod/dockerfile @@ -60,7 +60,7 @@ RUN pip install future typing RUN pip install numpy RUN pip install https://download.pytorch.org/whl/cu100/torch-${PYTORCH_VERSION}-$(python -c "import wheel.pep425tags as w; print('-'.join(w.get_supported()[0]))").whl \ https://download.pytorch.org/whl/cu100/torchvision-${TORCHVISION_VERSION}-$(python -c "import wheel.pep425tags as w; print('-'.join(w.get_supported()[0]))").whl -RUN pip install --no-cache-dir torchvision h5py toolz pytorch-ignite pandas opencv-python fire tensorboardx scikit-learn tqdm yacs albumentations gitpython +RUN pip install --no-cache-dir torchvision h5py toolz pytorch-ignite pandas fire tensorboardx scikit-learn tqdm yacs albumentations gitpython COPY ComputerVision_fork/contrib /contrib RUN pip install -e /contrib COPY DeepSeismic /DeepSeismic
diff --git a/examples/interpretation/README.md b/examples/interpretation/README.md index 7f151c60..01bc2dff 100644 --- a/examples/interpretation/README.md +++ b/examples/interpretation/README.md @@ -1 +1,8 @@ -Description of examples +This folder contains notebook examples illustrating the use of segmentation algorithms on openly available datasets. Make sure you have followed the [set up instructions](../README.md) before running these examples. We provide the following notebook examples: +* [Dutch F3 dataset](notebooks/F3_block_training_and_evaluation_local.ipynb): This notebook illustrates section- and patch-based segmentation approaches on the [Dutch F3](https://terranubis.com/datainfo/Netherlands-Offshore-F3-Block-Complete) open dataset. This notebook uses a deconvolution-based segmentation algorithm on 2D patches. The notebook will guide you through visualization of the input volume, setting up model training and evaluation. 
+ + +* [Penobscot dataset](notebooks/HRNet_Penobscot_demo_notebook.ipynb): +In this notebook, we demonstrate how to train an [HRNet](https://github.com/HRNet/HRNet-Semantic-Segmentation) model for facies prediction using [Penobscot](https://terranubis.com/datainfo/Penobscot) dataset. The Penobscot 3D seismic dataset was acquired in the Scotian shelf, offshore Nova Scotia, Canada. This notebook illustrates the use of HRNet based segmentation algorithm on the dataset. Details of HRNet based model can be found [here](https://arxiv.org/abs/1904.04514) + diff --git a/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb b/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb new file mode 100644 index 00000000..bffbe756 --- /dev/null +++ b/examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb @@ -0,0 +1,1103 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Model training and evaluation on F3 Netherlands dataset" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Seismic interpretation, also referred to as facies classification, is a task of determining types of rock in the earth’s subsurface, given seismic data. Seismic interpretation is used as a standard approach for determining precise locations of oil deposits for drilling, therefore reducing risks and potential losses. In recent years, there has been a great interest in using fully-supervised deep learning models for seismic interpretation. \n", + "\n", + "In this notebook, we demonstrate how to train a deep neural network for facies prediction using F3 Netherlands dataset. The F3 block is located in the North Sea off the shores of Netherlands. The dataset contains 6 classes (facies or lithostratigraphic units), all of which are of varying thickness (class imbalance). Processed data is available in numpy format as a `401 x 701 x 255` array. The processed F3 data is made available by [Alaudah et al. 2019](https://github.com/yalaudah/facies_classification_benchmark).\n", + "\n", + "We specifically demonstrate a patch-based model approach, where we process a patch of an inline or crossline slice, instead of the entire slice." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment setup\n", + "\n", + "To set up the conda environment and the Jupyter notebook kernel, please follow the instructions in the top-level [README.md](../../../README.md) file." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Notebook-specific parameters\n", + "\n", + "Now let's set parameters which are required only for this notebook.\n", + "\n", + "We use configuration files to specify experiment configuration, such as hyperparameters used in training and evaluation, as well as other experiment settings. \n", + "\n", + "This notebook is designed to showcase the patch-based models on Dutch F3 dataset, hence we load the configuration files from that experiment by navigating to the `experiments` folder in the root directory. 
Each configuration file specifies a different Computer Vision model which is loaded for this notebook.\n", + "\n", + "Modify the `CONFIG_FILE` variable below if you would like to run the experiment using a different configuration file from the same experiment.\n", + "\n", + "For an \"out-of-the-box\" Docker experience, we have already pre-populated each model configuration file with the correct parameters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# load an existing experiment configuration file\n", + "CONFIG_FILE = (\n", + " \"../../../experiments/interpretation/dutchf3_patch/local/configs/hrnet.yaml\"\n", + ")\n", + "# number of images to score\n", + "N_EVALUATE = 20\n", + "# demo flag - by default notebook runs in demo mode and only fine-tunes the pre-trained model. Set to False for full re-training.\n", + "DEMO = True\n", + "# options are test1 or test2 - picks which Dutch F3 test set split to use\n", + "TEST_SPLIT = \"test1\"\n", + "\n", + "import os\n", + "assert os.path.isfile(CONFIG_FILE), \"Experiment config file CONFIG_FILE not found on disk\"\n", + "assert isinstance(N_EVALUATE, int) and N_EVALUATE>0, \"Number of images to score has to be a positive integer\"\n", + "assert isinstance(DEMO, bool), \"demo mode should be a boolean\"\n", + "assert TEST_SPLIT == \"test1\" or TEST_SPLIT == \"test2\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data download and preparation\n", + "\n", + "To download and prepare the F3 data set, please follow the instructions in the top-level [README](../../../README.md) file. Once you have downloaded and prepared the data set, you will find your files in the following directory tree:\n", + "\n", + "```\n", + "data\n", + "├── splits\n", + "├── test_once\n", + "│ ├── test1_labels.npy\n", + "│ ├── test1_seismic.npy\n", + "│ ├── test2_labels.npy\n", + "│ └── test2_seismic.npy\n", + "└── train\n", + " ├── train_labels.npy\n", + " └── train_seismic.npy\n", + "```\n", + "\n", + "We recommend saving the data under `$HOME/data/dutchf3` since this notebook will use that location as the data root. Otherwise, modify the `DATASET.ROOT` field in the configuration file, described next. 
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Library imports\n", + "\n", + "Let's load required libraries - the first step fixes the seeds to obtain reproducible results and the rest of the steps import the libraries" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "import torch\n", + "import logging\n", + "import logging.config\n", + "from os import path\n", + "\n", + "import random\n", + "import matplotlib.pyplot as plt\n", + "\n", + "plt.rcParams.update({\"font.size\": 16})\n", + "\n", + "import yacs.config\n", + "\n", + "import cv2\n", + "from albumentations import Compose, HorizontalFlip, Normalize, PadIfNeeded, Resize\n", + "from ignite.contrib.handlers import CosineAnnealingScheduler\n", + "from ignite.handlers import ModelCheckpoint\n", + "from ignite.engine import Events\n", + "from ignite.metrics import Loss\n", + "from ignite.utils import convert_tensor\n", + "from toolz import compose\n", + "from torch.utils import data\n", + "\n", + "from cv_lib.utils import load_log_configuration\n", + "from cv_lib.event_handlers import SnapshotHandler, logging_handlers\n", + "from cv_lib.event_handlers.logging_handlers import Evaluator\n", + "from cv_lib.event_handlers import tensorboard_handlers\n", + "from cv_lib.event_handlers.tensorboard_handlers import create_summary_writer\n", + "from cv_lib.segmentation import models\n", + "from cv_lib.segmentation.dutchf3.engine import (\n", + " create_supervised_evaluator,\n", + " create_supervised_trainer,\n", + ")\n", + "\n", + "from cv_lib.segmentation.metrics import (\n", + " pixelwise_accuracy,\n", + " class_accuracy,\n", + " mean_class_accuracy,\n", + " class_iou,\n", + " mean_iou,\n", + ")\n", + "\n", + "from cv_lib.segmentation.dutchf3.utils import (\n", + " current_datetime, \n", + " git_branch,\n", + " git_hash,\n", + " np_to_tb,\n", + ")\n", + "\n", + "from cv_lib.utils import generate_path\n", + "\n", + "from deepseismic_interpretation.dutchf3.data import (\n", + " get_patch_loader, \n", + " get_test_loader,\n", + ")\n", + "\n", + "from itkwidgets import view\n", + "\n", + "from utilities import (\n", + " plot_aline,\n", + " patch_label_2d,\n", + " compose_processing_pipeline,\n", + " output_processing_pipeline,\n", + " write_section_file,\n", + " runningScore,\n", + " validate_config_paths,\n", + " download_pretrained_model,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Experiment configuration file\n", + "\n", + "Let's load the experiment configuration!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with open(CONFIG_FILE, \"rt\") as f_read:\n", + " config = yacs.config.load_cfg(f_read)\n", + "\n", + "print(\n", + " f\"Configuration loaded. Please check that the DATASET.ROOT:{config.DATASET.ROOT} points to your data location.\"\n", + ")\n", + "print(\n", + " f\"To modify any of the options, please edit the configuration file {CONFIG_FILE} and reload. \\n\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We run test pipelines to test the notebooks, which use [papermill](https://papermill.readthedocs.io/en/latest/). If this notebook is being executed as part of such pipeline, the variables below are overridden. 
If not, we simply update these variable from a static configuration file specified earlier.\n", + "\n", + "Override parameters in case we use papermill:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [ + "parameters" + ] + }, + "outputs": [], + "source": [ + "# The number of datapoints you want to run in training or validation per batch\n", + "# Setting to None will run whole dataset\n", + "# useful for integration tests with a setting of something like 3\n", + "# Use only if you want to check things are running and don't want to run\n", + "# through whole dataset\n", + "# The number of epochs to run in training\n", + "max_epochs = config.TRAIN.END_EPOCH\n", + "max_snapshots = config.TRAIN.SNAPSHOTS\n", + "papermill = False\n", + "dataset_root = config.DATASET.ROOT\n", + "model_pretrained = config.MODEL.PRETRAINED" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# read back the parameters from papermill to config if papermill was used to run this notebook\n", + "if papermill:\n", + " # reduce number of images scored for testing\n", + " N_EVALUATE=2\n", + "\n", + "opts = [\n", + " \"DATASET.ROOT\",\n", + " dataset_root,\n", + " \"TRAIN.END_EPOCH\",\n", + " max_epochs,\n", + " \"TRAIN.SNAPSHOTS\",\n", + " max_snapshots,\n", + "]\n", + "if \"PRETRAINED\" in config.MODEL.keys():\n", + " opts += [\"MODEL.PRETRAINED\", model_pretrained]\n", + "\n", + "config.merge_from_list(opts)\n", + "\n", + "# download pre-trained model if possible\n", + "config = download_pretrained_model(config)\n", + "\n", + "# update model pretrained (in case it was changed when the pretrained model was downloaded)\n", + "model_pretrained = config.MODEL.PRETRAINED" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These are the final configs which are going to be used for this notebook - please check them carefully:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if DEMO:\n", + " opts = [\n", + " \"TRAIN.END_EPOCH\",\n", + " 1,\n", + " \"TRAIN.SNAPSHOTS\",\n", + " 1,\n", + " \"TRAIN.MAX_LR\",\n", + " 10 ** -9,\n", + " \"TRAIN.MIN_LR\",\n", + " 10 ** -9,\n", + " ]\n", + " config.merge_from_list(opts)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Fix random seeds, and set CUDNN benchmark mode:\n", + "torch.backends.cudnn.benchmark = config.CUDNN.BENCHMARK\n", + "\n", + "# Fix random seeds:\n", + "torch.manual_seed(config.SEED)\n", + "if torch.cuda.is_available():\n", + " torch.cuda.manual_seed_all(config.SEED)\n", + "np.random.seed(seed=config.SEED)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(config)\n", + "validate_config_paths(config)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For tests we reduce the number of data used by the Jupyter notebook (pending Ignite 0.3.0 where we can just reduce the number of batches per EPOCH)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## F3 data set " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's visualize a few sections of the F3 data set. The processed F3 data set is stored as a 3D numpy array. Let's view slices of the data along inline and crossline directions. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Load training data and labels\n", + "train_seismic = np.load(path.join(config.DATASET.ROOT, \"train/train_seismic.npy\"))\n", + "train_labels = np.load(path.join(config.DATASET.ROOT, \"train/train_labels.npy\"))\n", + "\n", + "print(f\"Number of inline slices: {train_seismic.shape[0]}\")\n", + "print(f\"Number of crossline slices: {train_seismic.shape[1]}\")\n", + "print(f\"Depth dimension : {train_seismic.shape[2]}\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "view(train_labels, slicing_planes=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's plot a __crossline__ slice." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "idx = 100\n", + "x_in = train_seismic[idx, :, :].swapaxes(0, 1)\n", + "x_inl = train_labels[idx, :, :].swapaxes(0, 1)\n", + "\n", + "plot_aline(x_in, x_inl, xlabel=\"crossline (relative)\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's plot an __inline__ slice." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "x_cr = train_seismic[:, idx, :].swapaxes(0, 1)\n", + "x_crl = train_labels[:, idx, :].swapaxes(0, 1)\n", + "\n", + "plot_aline(x_cr, x_crl, xlabel=\"inline (relative)\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Model training" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Set up logging\n", + "load_log_configuration(config.LOG_CONFIG)\n", + "logger = logging.getLogger(__name__)\n", + "logger.debug(config.WORKERS)\n", + "\n", + "scheduler_step = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up data augmentation\n", + "\n", + "Let's define our data augmentation pipeline, which includes basic transformations, such as _data normalization, resizing, and padding_ if necessary. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Setup Augmentations\n", + "base_aug = Compose(\n", + " [\n", + " Normalize(\n", + " mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1\n", + " ),\n", + " PadIfNeeded(\n", + " min_height=config.TRAIN.PATCH_SIZE,\n", + " min_width=config.TRAIN.PATCH_SIZE,\n", + " border_mode=0,\n", + " always_apply=True,\n", + " mask_value=255,\n", + " value=0,\n", + " ),\n", + " Resize(\n", + " config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT,\n", + " config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH,\n", + " always_apply=True,\n", + " ),\n", + " PadIfNeeded(\n", + " min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT,\n", + " min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH,\n", + " border_mode=config.OPENCV_BORDER_CONSTANT,\n", + " always_apply=True,\n", + " mask_value=255,\n", + " ),\n", + " ]\n", + ")\n", + "\n", + "if config.TRAIN.AUGMENTATION:\n", + " train_aug = Compose([base_aug, HorizontalFlip(p=0.5)])\n", + " val_aug = base_aug\n", + "else:\n", + " raise NotImplementedError(\n", + " \"We don't support turning off data augmentation at this time\"\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load the data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For training the model, we will use a patch-based approach. Rather than using entire sections (crosslines or inlines) of the data, we extract a large number of small patches from the sections, and use the patches as our data. This allows us to generate larger set of images for training, but is also a more feasible approach for large seismic volumes. \n", + "\n", + "We are using a custom patch data loader from our __`deepseismic_interpretation`__ library for generating and loading patches from seismic section data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "scheduler_step = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS\n", + "\n", + "TrainPatchLoader = get_patch_loader(config)\n", + "\n", + "train_set = TrainPatchLoader(\n", + " config.DATASET.ROOT,\n", + " config.DATASET.NUM_CLASSES,\n", + " split=\"train\",\n", + " is_transform=True,\n", + " stride=config.TRAIN.STRIDE,\n", + " patch_size=config.TRAIN.PATCH_SIZE,\n", + " augmentations=train_aug,\n", + ")\n", + "n_classes = train_set.n_classes\n", + "logger.info(train_set)\n", + "val_set = TrainPatchLoader(\n", + " config.DATASET.ROOT,\n", + " config.DATASET.NUM_CLASSES,\n", + " split=\"val\",\n", + " is_transform=True,\n", + " stride=config.TRAIN.STRIDE,\n", + " patch_size=config.TRAIN.PATCH_SIZE,\n", + " augmentations=val_aug,\n", + ")\n", + "\n", + "if papermill:\n", + " train_set = data.Subset(train_set, range(3))\n", + " val_set = data.Subset(val_set, range(3))\n", + "elif DEMO:\n", + " val_set = data.Subset(val_set, range(config.VALIDATION.BATCH_SIZE_PER_GPU))\n", + "\n", + "logger.info(val_set)\n", + "\n", + "train_loader = data.DataLoader(\n", + " train_set,\n", + " batch_size=config.TRAIN.BATCH_SIZE_PER_GPU,\n", + " num_workers=config.WORKERS,\n", + " shuffle=True,\n", + ")\n", + "val_loader = data.DataLoader(\n", + " val_set,\n", + " batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU,\n", + " num_workers=config.WORKERS,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The following code defines the snapshot duration in batches over which we snapshot training models to disk. 
Variable `scheduler_step` defines how many epochs we have in a snapshot and multiplying that by the number of data points per epoch gives us the number of datapoints which we have per snapshot." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# if we're running in test mode, just run 2 batches\n", + "if papermill:\n", + " train_len = 2\n", + "# if we're running in demo mode, just run 20 batches to fine-tune the model\n", + "elif DEMO:\n", + " train_len = 20\n", + "# if we're not in test or demo modes, run the entire loop\n", + "else:\n", + " train_len = len(train_loader)\n", + "\n", + "snapshot_duration = scheduler_step * train_len if not papermill else train_len" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We also must specify a batch transformation function which allows us to selectively manipulate the data for each batch into the format which model training expects in the next step." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def prepare_batch(batch, device=None, non_blocking=False):\n", + " x, y = batch\n", + " return (\n", + " convert_tensor(x, device=device, non_blocking=non_blocking),\n", + " convert_tensor(y, device=device, non_blocking=non_blocking),\n", + " )\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up model training" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, let's define a model to train, an optimization algorithm, and a loss function. \n", + "\n", + "Note that the model is loaded from our __`cv_lib`__ library, using the name of the model as specified in the configuration file. To load a different model, either change the `MODEL.NAME` field in the configuration file, or create a new one corresponding to the model you wish to train." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# load a model\n", + "model = getattr(models, config.MODEL.NAME).get_seg_model(config)\n", + "\n", + "# Send to GPU if available\n", + "device = \"cpu\"\n", + "if torch.cuda.is_available():\n", + " device = \"cuda\"\n", + "model = model.to(device)\n", + "\n", + "# SGD optimizer\n", + "optimizer = torch.optim.SGD(\n", + " model.parameters(),\n", + " lr=config.TRAIN.MAX_LR,\n", + " momentum=config.TRAIN.MOMENTUM,\n", + " weight_decay=config.TRAIN.WEIGHT_DECAY,\n", + ")\n", + "\n", + "# learning rate scheduler\n", + "scheduler = CosineAnnealingScheduler(\n", + " optimizer, \"lr\", config.TRAIN.MAX_LR, config.TRAIN.MIN_LR, cycle_size=snapshot_duration\n", + ")\n", + "\n", + "# weights are inversely proportional to the frequency of the classes in the training set\n", + "class_weights = torch.tensor(\n", + " config.DATASET.CLASS_WEIGHTS, device=device, requires_grad=False\n", + ")\n", + "\n", + "# loss function\n", + "criterion = torch.nn.CrossEntropyLoss(\n", + " weight=class_weights, ignore_index=255, reduction=\"mean\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Training the model\n", + "\n", + "We use [ignite](https://pytorch.org/ignite/index.html) framework to create training and validation loops in our codebase. Ignite provides an easy way to create compact training/validation loops without too much boilerplate code.\n", + "\n", + "In this notebook, we demonstrate the use of ignite on the training loop only. 
We create a training engine `trainer` that loops multiple times over the training dataset and updates model parameters. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# create training engine\n", + "trainer = create_supervised_trainer(\n", + " model, optimizer, criterion, prepare_batch, device=device\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Logging\n", + "\n", + "We add various events to the trainer, using an event system, that allows us to interact with the engine on each step of the run, such as, when the trainer is started/completed, when the epoch is started/completed and so on. \n", + "\n", + "Over the next few cells, we use event handlers to add the following events to the training loop:\n", + "- log training output\n", + "- log and schedule learning rate and\n", + "- periodically save model to disk." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# define and create main output directory \n", + "# output_dir = path.join(config.OUTPUT_DIR+\"_nb\", config.TRAIN.MODEL_DIR)\n", + "output_dir = config.OUTPUT_DIR+\"_nb\"\n", + "generate_path(output_dir)\n", + "\n", + "# define main summary writer which logs all model summaries\n", + "summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR))\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next we need to score the model on validation set as it's training. To do this we need to add helper functions to manipulate data into the required shape just as we've done to prepare each batch for training at the beginning of this notebook.\n", + "\n", + "We also set up evaluation metrics which we want to record on the training set." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "transform_fn = lambda output_dict: (output_dict[\"y_pred\"].squeeze(), output_dict[\"mask\"].squeeze())\n", + "evaluator = create_supervised_evaluator(\n", + " model,\n", + " prepare_batch,\n", + " metrics={\n", + " \"nll\": Loss(criterion, output_transform=transform_fn),\n", + " \"pixacc\": pixelwise_accuracy(n_classes, output_transform=transform_fn, device=device),\n", + " \"cacc\": class_accuracy(n_classes, output_transform=transform_fn),\n", + " \"mca\": mean_class_accuracy(n_classes, output_transform=transform_fn),\n", + " \"ciou\": class_iou(n_classes, output_transform=transform_fn),\n", + " \"mIoU\": mean_iou(n_classes, output_transform=transform_fn),\n", + " },\n", + " device=device,\n", + ")\n", + "trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)\n", + "\n", + "# Logging:\n", + "trainer.add_event_handler(\n", + " Events.ITERATION_COMPLETED, logging_handlers.log_training_output(log_interval=config.PRINT_FREQ),\n", + ")\n", + "trainer.add_event_handler(Events.EPOCH_COMPLETED, logging_handlers.log_lr(optimizer))\n", + "\n", + "# Tensorboard and Logging:\n", + "trainer.add_event_handler(Events.ITERATION_COMPLETED, tensorboard_handlers.log_training_output(summary_writer))\n", + "trainer.add_event_handler(Events.ITERATION_COMPLETED, tensorboard_handlers.log_validation_output(summary_writer))\n", + "\n", + "# add specific logger which also triggers printed metrics on test set\n", + "@trainer.on(Events.EPOCH_COMPLETED)\n", + "def log_training_results(engine):\n", + " evaluator.run(train_loader)\n", + " tensorboard_handlers.log_results(engine, evaluator, summary_writer, n_classes, stage=\"Training\")\n", + " logging_handlers.log_metrics(engine, evaluator, stage=\"Training\")\n", + "\n", + "# add specific logger which also triggers printed metrics on validation set\n", + "@trainer.on(Events.EPOCH_COMPLETED)\n", + "def log_validation_results(engine):\n", + " evaluator.run(val_loader)\n", + " tensorboard_handlers.log_results(engine, evaluator, summary_writer, n_classes, stage=\"Validation\")\n", + " logging_handlers.log_metrics(engine, evaluator, stage=\"Validation\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We also checkpoint models and snapshot them to disk with every training epoch." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# add model checkpointing\n", + "checkpoint_handler = ModelCheckpoint(\n", + " output_dir,\n", + " \"model_f3_nb\",\n", + " save_interval=1,\n", + " n_saved=1,\n", + " create_dir=True,\n", + " require_empty=False,\n", + ")\n", + "\n", + "trainer.add_event_handler(\n", + " Events.EPOCH_COMPLETED, checkpoint_handler, {config.MODEL.NAME: model}\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Start the training engine run." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "trainer.run(train_loader, max_epochs=config.TRAIN.END_EPOCH, epoch_length=train_len, seed = config.SEED)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Tensorboard\n", + "Using tensorboard for monitoring runs can be quite enlightening. Just ensure that the appropriate port is open on the VM so you can access it. Below we have the command for running tensorboard in your notebook. 
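As a side note, TensorBoard simply picks up any event files under the directory passed to `--logdir`, so you can also log your own scalars alongside what the handlers above write. The snippet below is an illustrative sketch using PyTorch's built-in `SummaryWriter` rather than the `create_summary_writer` helper from `cv_lib`; the `"logs/extra"` subdirectory name is a placeholder.

```python
import os
from torch.utils.tensorboard import SummaryWriter

# Write a few example scalars into a (hypothetical) subdirectory of the notebook's
# output directory; TensorBoard will display them next to the training curves.
extra_writer = SummaryWriter(log_dir=os.path.join(output_dir, "logs", "extra"))
for step, value in enumerate([0.9, 0.7, 0.55, 0.5]):
    extra_writer.add_scalar("Extra/example_loss", value, global_step=step)
extra_writer.close()
```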
You can just as easily view TensorBoard in a separate browser window by pointing the browser to the appropriate location and port." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if not papermill:\n", + " %load_ext tensorboard" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if not papermill:\n", + " %tensorboard --logdir $output_dir --port 9001 --host 0.0.0.0" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We will next evaluate the performance of the model by looking at how well it predicts facies labels on samples from the test set.\n", + "\n", + "We will use the following evaluation metrics:\n", + "\n", + "- Pixel Accuracy (PA)\n", + "- Class Accuracy (CA)\n", + "- Mean Class Accuracy (MCA)\n", + "- Frequency Weighted Intersection-over-Union (FW IoU)\n", + "- Mean IoU (MIoU)\n", + "\n", + "You have the option here to use either the pre-trained model which we provided for you, or the model which we just fine-tuned in this notebook. By default, we use the fine-tuned model, but you can change that in the cell below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# use the model which we just fine-tuned\n", + "opts = [\"TEST.MODEL_PATH\", path.join(output_dir, f\"model_f3_nb_seg_hrnet_{train_len}.pth\")]\n", + "# uncomment the line below to use the pre-trained model instead\n", + "# opts = [\"TEST.MODEL_PATH\", config.MODEL.PRETRAINED]\n", + "config.merge_from_list(opts)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model.load_state_dict(torch.load(config.TEST.MODEL_PATH))\n", + "model = model.to(device)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next we load the test data and define the augmentations on it. 
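Before running the next cell, it may help to see what calling an albumentations pipeline actually does: the composed transform is called with `image=` (and optionally `mask=`) keyword arguments and returns a dictionary of transformed arrays, which is how the loaders apply `section_aug` and `patch_aug` internally. The sketch below uses placeholder shapes and normalization constants, not the values from the configuration file.

```python
import numpy as np
from albumentations import Compose, Normalize

# Placeholder normalization constants; the real ones come from config.TRAIN.MEAN/STD.
demo_aug = Compose([Normalize(mean=(0.001,), std=(0.21,), max_pixel_value=1)])

demo_section = np.random.randn(255, 701).astype(np.float32)  # hypothetical H x W section
demo_mask = np.zeros((255, 701), dtype=np.uint8)             # hypothetical label mask

augmented = demo_aug(image=demo_section, mask=demo_mask)
image_aug, mask_aug = augmented["image"], augmented["mask"]
print(image_aug.shape, mask_aug.shape)
```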
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Augmentation\n", + "# augment entire sections with the same normalization\n", + "section_aug = Compose(\n", + " [Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1,)]\n", + ")\n", + "\n", + "# augment each patch and not the entire sectiom which the patches are taken from\n", + "patch_aug = Compose(\n", + " [\n", + " Resize(\n", + " config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT,\n", + " config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH,\n", + " always_apply=True,\n", + " ),\n", + " PadIfNeeded(\n", + " min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT,\n", + " min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH,\n", + " border_mode=config.OPENCV_BORDER_CONSTANT,\n", + " always_apply=True,\n", + " mask_value=255,\n", + " ),\n", + " ]\n", + ")\n", + "\n", + "# Process test data\n", + "pre_processing = compose_processing_pipeline(config.TRAIN.DEPTH, aug=patch_aug)\n", + "output_processing = output_processing_pipeline(config)\n", + "\n", + "# Select the test split\n", + "split = TEST_SPLIT\n", + "\n", + "labels = np.load(path.join(config.DATASET.ROOT, \"test_once\", split + \"_labels.npy\"))\n", + "section_file = path.join(config.DATASET.ROOT, \"splits\", \"section_\" + split + \".txt\")\n", + "write_section_file(labels, section_file, config)\n", + "\n", + "# Load test data\n", + "TestSectionLoader = get_test_loader(config)\n", + "test_set = TestSectionLoader(\n", + " config.DATASET.ROOT, config.DATASET.NUM_CLASSES, split=split, is_transform=True, augmentations=section_aug\n", + ")\n", + "# needed to fix this bug in pytorch https://github.com/pytorch/pytorch/issues/973\n", + "# one of the workers will quit prematurely\n", + "torch.multiprocessing.set_sharing_strategy(\"file_system\")\n", + "test_loader = data.DataLoader(\n", + " test_set, batch_size=1, num_workers=config.WORKERS, shuffle=False\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Predict segmentation mask on the test data\n", + "\n", + "For demonstration purposes and efficiency, we will only use a subset of the test data to predict segmentation mask on. More precisely, we will score `N_EVALUATE` images. If you would like to evaluate more images, set this variable to the desired number of images." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "CLASS_NAMES = [\n", + " \"upper_ns\",\n", + " \"middle_ns\",\n", + " \"lower_ns\",\n", + " \"rijnland_chalk\",\n", + " \"scruff\",\n", + " \"zechstein\",\n", + "]\n", + "\n", + "n_classes = len(CLASS_NAMES)\n", + "\n", + "# keep only N_EVALUATE sections to score\n", + "test_subset = random.sample(list(test_loader), N_EVALUATE)\n", + "\n", + "results = list()\n", + "running_metrics_split = runningScore(n_classes)\n", + "\n", + "# testing mode\n", + "with torch.no_grad():\n", + " model.eval()\n", + " # loop over testing data\n", + " for i, (images, labels) in enumerate(test_subset):\n", + " logger.info(f\"split: {split}, section: {i}\")\n", + " outputs = patch_label_2d(\n", + " model,\n", + " images,\n", + " pre_processing,\n", + " output_processing,\n", + " config.TRAIN.PATCH_SIZE,\n", + " config.TEST.TEST_STRIDE,\n", + " config.VALIDATION.BATCH_SIZE_PER_GPU,\n", + " device,\n", + " n_classes,\n", + " )\n", + "\n", + " pred = outputs.detach().max(1)[1].numpy()\n", + " gt = labels.numpy()\n", + " \n", + " # update evaluation metrics\n", + " running_metrics_split.update(gt, pred)\n", + " \n", + " # keep ground truth and result for plotting\n", + " results.append((np.squeeze(gt), np.squeeze(pred)))\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's view the obtained metrics on this subset of test images. Note that we trained our model for only a small number of epochs for demonstration purposes, so the performance results here are not meant to be representative. \n", + "\n", + "Performance exceeds the results shown here when the models are trained properly. For the full report on benchmarking performance results, please refer to the [README.md](../../../README.md) file." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# get scores\n", + "score, _ = running_metrics_split.get_scores()\n", + "\n", + "# Log split results\n", + "print(f'Pixel Acc: {score[\"Pixel Acc: \"]:.3f}')\n", + "for cdx, class_name in enumerate(CLASS_NAMES):\n", + " print(f' {class_name}_accuracy {score[\"Class Accuracy: \"][cdx]:.3f}')\n", + "\n", + "print(f'Mean Class Acc: {score[\"Mean Class Acc: \"]:.3f}')\n", + "print(f'Freq Weighted IoU: {score[\"Freq Weighted IoU: \"]:.3f}')\n", + "print(f'Mean IoU: {score[\"Mean IoU: \"]:0.3f}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Visualize predictions" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's visualize the predictions on entire test sections. Note that the crosslines and inlines have different dimensions; however, we were able to use them jointly for our network training and evaluation, since we were using smaller patches from the sections, whose size we can control via a hyperparameter in the experiment configuration file. 
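The patch size mentioned above is governed by `TRAIN.PATCH_SIZE` (together with `TRAIN.STRIDE`) in the experiment configuration. As with `TEST.MODEL_PATH` earlier in this notebook, such values can be overridden on the in-memory yacs `config` object; the values below are purely illustrative, not a recommendation.

```python
# Illustrative only: override patch-related hyperparameters on the yacs config
# object that is already loaded in this notebook.
print("before:", config.TRAIN.PATCH_SIZE, config.TRAIN.STRIDE)
config.merge_from_list(["TRAIN.PATCH_SIZE", 100, "TRAIN.STRIDE", 50])
print("after: ", config.TRAIN.PATCH_SIZE, config.TRAIN.STRIDE)
```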
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fig = plt.figure(figsize=(15, 50))\n", + "# only plot a few images\n", + "nplot = min(N_EVALUATE, 10)\n", + "for idx in range(nplot):\n", + " # plot actual\n", + " plt.subplot(nplot, 2, 2 * (idx + 1) - 1)\n", + " plt.imshow(results[idx][0])\n", + " # plot predicted\n", + " plt.subplot(nplot, 2, 2 * (idx + 1))\n", + " plt.imshow(results[idx][1])\n", + " \n", + "f_axes = fig.axes\n", + "_ = f_axes[0].set_title(\"Actual\")\n", + "_ = f_axes[1].set_title(\"Predicted\")\n", + "fig.savefig(\"plot_predictions.png\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "celltoolbar": "Tags", + "kernelspec": { + "display_name": "seismic-interpretation", + "language": "python", + "name": "seismic-interpretation" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} \ No newline at end of file diff --git a/examples/interpretation/notebooks/utilities.py b/examples/interpretation/notebooks/utilities.py index 0ef72f02..f0d3b9e3 100644 --- a/examples/interpretation/notebooks/utilities.py +++ b/examples/interpretation/notebooks/utilities.py @@ -2,11 +2,15 @@ # Licensed under the MIT License. import itertools - +import os +import urllib +import pathlib +import validators import matplotlib.pyplot as plt import numpy as np import torch import torch.nn.functional as F +import yacs from ignite.utils import convert_tensor from scipy.ndimage import zoom from toolz import compose, curry, itertoolz, pipe @@ -19,9 +23,9 @@ def __init__(self, n_classes): def _fast_hist(self, label_true, label_pred, n_class): mask = (label_true >= 0) & (label_true < n_class) - hist = np.bincount(n_class * label_true[mask].astype(int) + label_pred[mask], minlength=n_class ** 2,).reshape( - n_class, n_class - ) + hist = np.bincount( + n_class * label_true[mask].astype(int) + label_pred[mask], minlength=n_class ** 2, + ).reshape(n_class, n_class) return hist def update(self, label_trues, label_preds): @@ -148,7 +152,9 @@ def compose_processing_pipeline(depth, aug=None): def _generate_batches(h, w, ps, patch_size, stride, batch_size=64): - hdc_wdx_generator = itertools.product(range(0, h - patch_size + ps, stride), range(0, w - patch_size + ps, stride)) + hdc_wdx_generator = itertools.product( + range(0, h - patch_size + ps, stride), range(0, w - patch_size + ps, stride) + ) for batch_indexes in itertoolz.partition_all(batch_size, hdc_wdx_generator): yield batch_indexes @@ -160,7 +166,9 @@ def output_processing_pipeline(config, output): _, _, h, w = output.shape if config.TEST.POST_PROCESSING.SIZE != h or config.TEST.POST_PROCESSING.SIZE != w: output = F.interpolate( - output, size=(config.TEST.POST_PROCESSING.SIZE, config.TEST.POST_PROCESSING.SIZE), mode="bilinear", + output, + size=(config.TEST.POST_PROCESSING.SIZE, config.TEST.POST_PROCESSING.SIZE), + mode="bilinear", ) if config.TEST.POST_PROCESSING.CROP_PIXELS > 0: @@ -175,7 +183,15 @@ def output_processing_pipeline(config, output): def patch_label_2d( - model, img, pre_processing, output_processing, patch_size, stride, batch_size, device, num_classes, + model, + img, + pre_processing, + output_processing, + patch_size, + stride, + 
batch_size, + device, + num_classes, + ): """Processes a whole section""" img = torch.squeeze(img) @@ -189,14 +205,19 @@ def patch_label_2d( # generate output: for batch_indexes in _generate_batches(h, w, ps, patch_size, stride, batch_size=batch_size): batch = torch.stack( - [pipe(img_p, _extract_patch(hdx, wdx, ps, patch_size), pre_processing) for hdx, wdx in batch_indexes], + [ + pipe(img_p, _extract_patch(hdx, wdx, ps, patch_size), pre_processing) + for hdx, wdx in batch_indexes + ], dim=0, ) model_output = model(batch.to(device)) for (hdx, wdx), output in zip(batch_indexes, model_output.detach().cpu()): output = output_processing(output) - output_p[:, :, hdx + ps : hdx + ps + patch_size, wdx + ps : wdx + ps + patch_size] += output + output_p[ + :, :, hdx + ps : hdx + ps + patch_size, wdx + ps : wdx + ps + patch_size + ] += output # crop the output_p in the middle output = output_p[:, :, ps:-ps, ps:-ps] @@ -240,3 +261,177 @@ def plot_aline(aline, labels, xlabel, ylabel="depth"): plt.imshow(labels) plt.xlabel(xlabel) plt.title("Label") + + +def validate_config_paths(config): + """Checks that all paths in the config file are valid""" + # TODO: this is currently hardcoded; in the future, it would be better to have a more generic solution. + # issue https://github.com/microsoft/seismic-deeplearning/issues/265 + + # Make sure the DATASET.ROOT directory exists: + assert os.path.isdir(config.DATASET.ROOT), ( + "The DATASET.ROOT specified in the config file is not a valid directory." + f" Please make sure this path is correct: {config.DATASET.ROOT}" + ) + + # if a pretrained model path is specified in the config, it should exist: + if "PRETRAINED" in config.MODEL.keys(): + assert os.path.isfile(config.MODEL.PRETRAINED), ( + "A pretrained model is specified in the config file but does not exist." + f" Please make sure this path is correct: {config.MODEL.PRETRAINED}" + ) + + # if a test model path is specified in the config, it should exist: + if "TEST" in config.keys(): + if "MODEL_PATH" in config.TEST.keys(): + assert os.path.isfile(config.TEST.MODEL_PATH), ( + "The TEST.MODEL_PATH specified in the config file does not exist." + f" Please make sure this path is correct: {config.TEST.MODEL_PATH}" + ) + # Furthermore, if this is an HRNet model, the pretrained model path should exist if the test model is specified: + if "hrnet" in config.MODEL.NAME: + assert os.path.isfile(config.MODEL.PRETRAINED), ( + "For an HRNet model, you should specify the MODEL.PRETRAINED path" + " in the config file if the TEST.MODEL_PATH is also specified." + ) + + +def download_pretrained_model(config): + """ + This function reads the config file and downloads a model pretrained on the penobscot or dutch + f3 datasets from the deepseismicsharedstore Azure storage. + + The pre-trained model is specified with the MODEL.PRETRAINED parameter: + - if it's a URL, the model is downloaded from that URL + - if it's a valid file path, the model is loaded from that file + - otherwise the model is downloaded from a pre-made URL which this code creates + + Running this code will overwrite the config.MODEL.PRETRAINED parameter value with the path to the downloaded + pretrained model. This is the model from which training is initialized. + If this parameter is blank, we start from a randomly-initialized model. + + The DATASET.ROOT parameter specifies the dataset which the model was pre-trained on. + + The optional MODEL.DEPTH parameter specifies whether or not depth information was used in the model + and what kind of depth augmentation was applied. 
+ + We determine the pre-trained model name from these two parameters. + + """ + + # this assumes the name of the dataset is preserved in the path -- this is the default behaviour of the code. + if "dutch" in config.DATASET.ROOT: + dataset = "dutch" + elif "penobscot" in config.DATASET.ROOT: + dataset = "penobscot" + else: + raise NameError( + "Unknown dataset name. Only dutch f3 and penobscot are currently supported." + ) + + if "hrnet" in config.MODEL.NAME: + model = "hrnet" + elif "deconvnet" in config.MODEL.NAME: + model = "deconvnet" + elif "unet" in config.MODEL.NAME: + model = "unet" + else: + raise NameError( + "Unknown model name. Only hrnet, deconvnet, and unet are currently supported." + ) + + # check if the user already supplied a URL, otherwise figure out the URL + if validators.url(config.MODEL.PRETRAINED): + url = config.MODEL.PRETRAINED + print(f"Will use user-supplied URL of '{url}'") + elif os.path.isfile(config.MODEL.PRETRAINED): + url = None + print(f"Will use user-supplied file on local disk of '{config.MODEL.PRETRAINED}'") + else: + # As more pretrained models are added, add their URLs below: + if dataset == "penobscot": + if model == "hrnet": + # TODO: the code should check if the model uses patches or sections. + # issue: https://github.com/microsoft/seismic-deeplearning/issues/266 + url = "https://deepseismicsharedstore.blob.core.windows.net/master-public-models/penobscot_hrnet_patch_section_depth.pth" + else: + raise NotImplementedError( + "We don't store a pretrained model for Dutch F3 for this model combination yet." + ) + # add other models here .. + elif dataset == "dutch": + # add other models here .. + if model == "hrnet" and config.TRAIN.DEPTH == "section": + url = "https://deepseismicsharedstore.blob.core.windows.net/master-public-models/dutchf3_hrnet_patch_section_depth.pth" + elif model == "hrnet" and config.TRAIN.DEPTH == "patch": + url = "https://deepseismicsharedstore.blob.core.windows.net/master-public-models/dutchf3_hrnet_patch_patch_depth.pth" + elif ( + model == "deconvnet" + and "skip" in config.MODEL.NAME + and config.TRAIN.DEPTH == "none" + ): + url = "http://deepseismicsharedstore.blob.core.windows.net/master-public-models/dutchf3_deconvnetskip_patch_no_depth.pth" + + elif ( + model == "deconvnet" + and "skip" not in config.MODEL.NAME + and config.TRAIN.DEPTH == "none" + ): + url = "http://deepseismicsharedstore.blob.core.windows.net/master-public-models/dutchf3_deconvnet_patch_no_depth.pth" + elif model == "unet" and config.TRAIN.DEPTH == "section": + url = "http://deepseismicsharedstore.blob.core.windows.net/master-public-models/dutchf3_seresnetunet_patch_section_depth.pth" + else: + raise NotImplementedError( + "We don't store a pretrained model for Dutch F3 for this model combination yet." + ) + else: + raise NotImplementedError( + "We don't store a pretrained model for this dataset/model combination yet." 
+ ) + + print(f"Could not find a user-supplied URL, downloading from '{url}'") + + # make sure the model_dir directory is writeable + model_dir = config.TRAIN.MODEL_DIR + + if not os.path.isdir(os.path.dirname(model_dir)) or not os.access( + os.path.dirname(model_dir), os.W_OK + ): + print(f"Cannot write to TRAIN.MODEL_DIR={config.TRAIN.MODEL_DIR}") + home = str(pathlib.Path.home()) + model_dir = os.path.join(home, "models") + print(f"Will write to TRAIN.MODEL_DIR={model_dir}") + + if not os.path.isdir(model_dir): + os.makedirs(model_dir) + + if url: + # Download the pretrained model: + pretrained_model_path = os.path.join( + model_dir, "pretrained_" + dataset + "_" + model + ".pth" + ) + + # always redownload the model + print( + f"Downloading the pretrained model to '{pretrained_model_path}'. This will take a few mintues.. \n" + ) + urllib.request.urlretrieve(url, pretrained_model_path) + print("Model successfully downloaded.. \n") + else: + # use same model which was on disk anyway - no download needed + pretrained_model_path = config.MODEL.PRETRAINED + + # Update config MODEL.PRETRAINED + # TODO: Only HRNet uses a pretrained model currently. + # issue https://github.com/microsoft/seismic-deeplearning/issues/267 + opts = [ + "MODEL.PRETRAINED", + pretrained_model_path, + "TRAIN.MODEL_DIR", + model_dir, + "TEST.MODEL_PATH", + pretrained_model_path, + ] + config.merge_from_list(opts) + + return config diff --git a/experiments/interpretation/dutchf3_patch/local/configs/hrnet.yaml b/experiments/interpretation/dutchf3_patch/local/configs/hrnet.yaml index 80600162..52263bbf 100644 --- a/experiments/interpretation/dutchf3_patch/local/configs/hrnet.yaml +++ b/experiments/interpretation/dutchf3_patch/local/configs/hrnet.yaml @@ -9,18 +9,19 @@ WORKERS: 4 PRINT_FREQ: 10 LOG_CONFIG: logging.conf SEED: 2019 +OPENCV_BORDER_CONSTANT: 0 DATASET: NUM_CLASSES: 6 - ROOT: /home/username/data/dutch/data + ROOT: "/home/username/data/dutch/data" CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852] MODEL: NAME: seg_hrnet IN_CHANNELS: 3 - PRETRAINED: '/mnt/hrnet_pretrained/image_classification/hrnetv2_w48_imagenet_pretrained.pth' + PRETRAINED: "" EXTRA: FINAL_CONV_KERNEL: 1 STAGE2: @@ -73,7 +74,7 @@ TRAIN: WEIGHT_DECAY: 0.0001 SNAPSHOTS: 5 AUGMENTATION: True - DEPTH: "section" #"patch" # Options are No, Patch and Section + DEPTH: "section" # Options are: none, patch, and section STRIDE: 50 PATCH_SIZE: 100 AUGMENTATIONS: @@ -82,7 +83,7 @@ TRAIN: WIDTH: 200 PAD: HEIGHT: 256 - WIDTH: 256 + WIDTH: 256 MEAN: 0.0009997 # 0.0009996710808862074 STD: 0.20977 # 0.20976548783479299 MODEL_DIR: "models" @@ -91,12 +92,12 @@ TRAIN: VALIDATION: BATCH_SIZE_PER_GPU: 128 -TEST: - MODEL_PATH: "/data/home/mat/repos/DeepSeismic/experiments/segmentation/dutchf3/local/output/mat/exp/237c16780794800631c3f1895cacc475e15aca99/seg_hrnet/Sep17_115731/models/seg_hrnet_running_model_33.pth" +TEST: + MODEL_PATH: "/data/home/mat/repos/DeepSeismic/experiments/interpretation/dutchf3_patch/local/output/staging/0d1d2bbf9685995a0515ca1d9de90f9bcec0db90/seg_hrnet/Dec20_233535/models/seg_hrnet_running_model_33.pth" TEST_STRIDE: 10 SPLIT: 'Both' # Can be Both, Test1, Test2 INLINE: True CROSSLINE: True - POST_PROCESSING: - SIZE: 128 # + POST_PROCESSING: + SIZE: 128 # CROP_PIXELS: 14 # Number of pixels to crop top, bottom, left and right diff --git a/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet.yaml b/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet.yaml index 02d1d66b..18297a2d 100644 --- 
a/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet.yaml +++ b/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet.yaml @@ -10,7 +10,6 @@ PRINT_FREQ: 10 LOG_CONFIG: logging.conf SEED: 2019 - DATASET: NUM_CLASSES: 6 ROOT: /home/username/data/dutch/data @@ -20,7 +19,6 @@ MODEL: NAME: patch_deconvnet IN_CHANNELS: 1 - TRAIN: BATCH_SIZE_PER_GPU: 64 BEGIN_EPOCH: 0 @@ -31,7 +29,7 @@ TRAIN: WEIGHT_DECAY: 0.0001 SNAPSHOTS: 5 AUGMENTATION: True - DEPTH: "none" # Options are None, Patch and Section + DEPTH: "none" # Options are none, patch, and section STRIDE: 50 PATCH_SIZE: 99 AUGMENTATIONS: @@ -46,7 +44,7 @@ TRAIN: MODEL_DIR: "models" VALIDATION: - BATCH_SIZE_PER_GPU: 512 + BATCH_SIZE_PER_GPU: 64 TEST: MODEL_PATH: "/data/home/mat/repos/DeepSeismic/interpretation/experiments/segmentation/dutchf3/local/output/mat/exp/5cc37bbe5302e1989ef1388d629400a16f82d1a9/patch_deconvnet/Aug27_200339/models/patch_deconvnet_snapshot1model_50.pth" diff --git a/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet_skip.yaml b/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet_skip.yaml index 0e8ebe1d..4f06a089 100644 --- a/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet_skip.yaml +++ b/experiments/interpretation/dutchf3_patch/local/configs/patch_deconvnet_skip.yaml @@ -19,7 +19,6 @@ MODEL: NAME: patch_deconvnet_skip IN_CHANNELS: 1 - TRAIN: BATCH_SIZE_PER_GPU: 64 BEGIN_EPOCH: 0 @@ -30,7 +29,7 @@ TRAIN: WEIGHT_DECAY: 0.0001 SNAPSHOTS: 5 AUGMENTATION: True - DEPTH: "none" #"patch" # Options are None, Patch and Section + DEPTH: "none" #"patch" # Options are none, patch, and section STRIDE: 50 PATCH_SIZE: 99 AUGMENTATIONS: @@ -45,7 +44,7 @@ TRAIN: MODEL_DIR: "models" VALIDATION: - BATCH_SIZE_PER_GPU: 512 + BATCH_SIZE_PER_GPU: 64 TEST: MODEL_PATH: "" diff --git a/experiments/interpretation/dutchf3_patch/local/configs/seresnet_unet.yaml b/experiments/interpretation/dutchf3_patch/local/configs/seresnet_unet.yaml index 56480d01..81b4e54a 100644 --- a/experiments/interpretation/dutchf3_patch/local/configs/seresnet_unet.yaml +++ b/experiments/interpretation/dutchf3_patch/local/configs/seresnet_unet.yaml @@ -30,7 +30,7 @@ TRAIN: WEIGHT_DECAY: 0.0001 SNAPSHOTS: 5 AUGMENTATION: True - DEPTH: "section" # Options are No, Patch and Section + DEPTH: "section" # Options are none, patch, and section STRIDE: 50 PATCH_SIZE: 100 AUGMENTATIONS: diff --git a/experiments/interpretation/dutchf3_patch/local/configs/unet.yaml b/experiments/interpretation/dutchf3_patch/local/configs/unet.yaml index f76478e5..ab4b9674 100644 --- a/experiments/interpretation/dutchf3_patch/local/configs/unet.yaml +++ b/experiments/interpretation/dutchf3_patch/local/configs/unet.yaml @@ -33,7 +33,7 @@ TRAIN: WEIGHT_DECAY: 0.0001 SNAPSHOTS: 5 AUGMENTATION: True - DEPTH: "section" # Options are No, Patch and Section + DEPTH: "section" # Options are none, patch, and section STRIDE: 50 PATCH_SIZE: 100 AUGMENTATIONS: diff --git a/experiments/interpretation/dutchf3_patch/local/default.py b/experiments/interpretation/dutchf3_patch/local/default.py index e34627a8..0322d5b1 100644 --- a/experiments/interpretation/dutchf3_patch/local/default.py +++ b/experiments/interpretation/dutchf3_patch/local/default.py @@ -11,8 +11,10 @@ _C = CN() -_C.OUTPUT_DIR = "output" # This will be the base directory for all output, such as logs and saved models -_C.LOG_DIR = "" # This will be a subdirectory inside OUTPUT_DIR +# This will be the base directory for all output, such as logs and saved 
models +_C.OUTPUT_DIR = "output" +# This will be a subdirectory inside OUTPUT_DIR +_C.LOG_DIR = "" _C.GPUS = (0,) _C.WORKERS = 4 _C.PRINT_FREQ = 20 @@ -20,7 +22,9 @@ _C.PIN_MEMORY = True _C.LOG_CONFIG = "logging.conf" _C.SEED = 42 - +_C.OPENCV_BORDER_CONSTANT = 0 +# number of batches to use in test/debug mode +_C.NUM_DEBUG_BATCHES = 1 # Cudnn related params _C.CUDNN = CN() @@ -58,7 +62,7 @@ _C.TRAIN.PATCH_SIZE = 99 _C.TRAIN.MEAN = 0.0009997 # 0.0009996710808862074 _C.TRAIN.STD = 0.20977 # 0.20976548783479299 # TODO: Should we apply std scaling? -_C.TRAIN.DEPTH = "no" # Options are None, Patch and Section +_C.TRAIN.DEPTH = "none" # Options are: none, patch, and section # None adds no depth information and the num of channels remains at 1 # Patch adds depth per patch so is simply the height of that patch from 0 to 1, channels=3 # Section adds depth per section so contains depth information for the whole section, channels=3 diff --git a/experiments/interpretation/dutchf3_patch/local/test.py b/experiments/interpretation/dutchf3_patch/local/test.py index a7e50b74..6aa68062 100644 --- a/experiments/interpretation/dutchf3_patch/local/test.py +++ b/experiments/interpretation/dutchf3_patch/local/test.py @@ -1,7 +1,7 @@ # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. # commitHash: c76bf579a0d5090ebd32426907d051d499f3e847 -# url: https://github.com/olivesgatech/facies_classification_benchmark +# url: https://github.com/yalaudah/facies_classification_benchmark # # To Test: # python test.py TRAIN.END_EPOCH 1 TRAIN.SNAPSHOTS 1 --cfg "configs/hrnet.yaml" --debug @@ -13,39 +13,27 @@ """ import itertools +import json import logging import logging.config import os from os import path -import cv2 import fire import numpy as np import torch import torch.nn.functional as F -from PIL import Image from albumentations import Compose, Normalize, PadIfNeeded, Resize -from cv_lib.utils import load_log_configuration -from cv_lib.segmentation import models -from cv_lib.segmentation.dutchf3.utils import ( - current_datetime, - generate_path, - git_branch, - git_hash, -) -from deepseismic_interpretation.dutchf3.data import ( - add_patch_depth_channels, - get_seismic_labels, - get_test_loader, -) -from default import _C as config -from default import update_config -from toolz import compose, curry, itertoolz, pipe +from toolz import compose, curry, itertoolz, pipe, take from torch.utils import data -from toolz import take -from matplotlib import cm +from cv_lib.segmentation import models +from cv_lib.segmentation.dutchf3.utils import current_datetime, git_branch, git_hash +from cv_lib.utils import load_log_configuration, mask_to_disk, generate_path +from deepseismic_interpretation.dutchf3.data import add_patch_depth_channels, get_test_loader +from default import _C as config +from default import update_config _CLASS_NAMES = [ "upper_ns", @@ -64,9 +52,9 @@ def __init__(self, n_classes): def _fast_hist(self, label_true, label_pred, n_class): mask = (label_true >= 0) & (label_true < n_class) - hist = np.bincount( - n_class * label_true[mask].astype(int) + label_pred[mask], minlength=n_class ** 2, - ).reshape(n_class, n_class) + hist = np.bincount(n_class * label_true[mask].astype(int) + label_pred[mask], minlength=n_class ** 2,).reshape( + n_class, n_class + ) return hist def update(self, label_trues, label_preds): @@ -106,21 +94,6 @@ def reset(self): self.confusion_matrix = np.zeros((self.n_classes, self.n_classes)) -def normalize(array): - """ - Normalizes a segmentation mask 
array to be in [0,1] range - """ - min = array.min() - return (array - min) / (array.max() - min) - - -def mask_to_disk(mask, fname): - """ - write segmentation mask to disk using a particular colormap - """ - Image.fromarray(cm.gist_earth(normalize(mask), bytes=True)).save(fname) - - def _transform_CHW_to_HWC(numpy_array): return np.moveaxis(numpy_array, 0, -1) @@ -202,9 +175,7 @@ def _compose_processing_pipeline(depth, aug=None): def _generate_batches(h, w, ps, patch_size, stride, batch_size=64): - hdc_wdx_generator = itertools.product( - range(0, h - patch_size + ps, stride), range(0, w - patch_size + ps, stride), - ) + hdc_wdx_generator = itertools.product(range(0, h - patch_size + ps, stride), range(0, w - patch_size + ps, stride),) for batch_indexes in itertoolz.partition_all(batch_size, hdc_wdx_generator): yield batch_indexes @@ -215,9 +186,7 @@ def _output_processing_pipeline(config, output): _, _, h, w = output.shape if config.TEST.POST_PROCESSING.SIZE != h or config.TEST.POST_PROCESSING.SIZE != w: output = F.interpolate( - output, - size=(config.TEST.POST_PROCESSING.SIZE, config.TEST.POST_PROCESSING.SIZE,), - mode="bilinear", + output, size=(config.TEST.POST_PROCESSING.SIZE, config.TEST.POST_PROCESSING.SIZE,), mode="bilinear", ) if config.TEST.POST_PROCESSING.CROP_PIXELS > 0: @@ -232,15 +201,7 @@ def _output_processing_pipeline(config, output): def _patch_label_2d( - model, - img, - pre_processing, - output_processing, - patch_size, - stride, - batch_size, - device, - num_classes, + model, img, pre_processing, output_processing, patch_size, stride, batch_size, device, num_classes, ): """Processes a whole section """ @@ -255,58 +216,31 @@ def _patch_label_2d( # generate output: for batch_indexes in _generate_batches(h, w, ps, patch_size, stride, batch_size=batch_size): batch = torch.stack( - [ - pipe(img_p, _extract_patch(hdx, wdx, ps, patch_size), pre_processing,) - for hdx, wdx in batch_indexes - ], + [pipe(img_p, _extract_patch(hdx, wdx, ps, patch_size), pre_processing,) for hdx, wdx in batch_indexes], dim=0, ) model_output = model(batch.to(device)) for (hdx, wdx), output in zip(batch_indexes, model_output.detach().cpu()): output = output_processing(output) - output_p[ - :, :, hdx + ps : hdx + ps + patch_size, wdx + ps : wdx + ps + patch_size, - ] += output + output_p[:, :, hdx + ps : hdx + ps + patch_size, wdx + ps : wdx + ps + patch_size,] += output # crop the output_p in the middle output = output_p[:, :, ps:-ps, ps:-ps] return output - -@curry -def to_image(label_mask, n_classes=6): - label_colours = get_seismic_labels() - r = label_mask.copy() - g = label_mask.copy() - b = label_mask.copy() - for ll in range(0, n_classes): - r[label_mask == ll] = label_colours[ll, 0] - g[label_mask == ll] = label_colours[ll, 1] - b[label_mask == ll] = label_colours[ll, 2] - rgb = np.zeros((label_mask.shape[0], label_mask.shape[1], label_mask.shape[2], 3)) - rgb[:, :, :, 0] = r - rgb[:, :, :, 1] = g - rgb[:, :, :, 2] = b - return rgb - - def _evaluate_split( - split, - section_aug, - model, - pre_processing, - output_processing, - device, - running_metrics_overall, - config, - debug=False, + split, section_aug, model, pre_processing, output_processing, device, running_metrics_overall, config, debug=False, ): logger = logging.getLogger(__name__) TestSectionLoader = get_test_loader(config) test_set = TestSectionLoader( - config.DATASET.ROOT, split=split, is_transform=True, augmentations=section_aug, + config.DATASET.ROOT, + config.DATASET.NUM_CLASSES, + split=split, + is_transform=True, + 
augmentations=section_aug ) n_classes = test_set.n_classes @@ -315,22 +249,14 @@ def _evaluate_split( if debug: logger.info("Running in Debug/Test mode") - test_loader = take(1, test_loader) + test_loader = take(2, test_loader) try: output_dir = generate_path( - config.OUTPUT_DIR + "_test", - git_branch(), - git_hash(), - config.MODEL.NAME, - current_datetime(), + config.OUTPUT_DIR + "_test", git_branch(), git_hash(), config.MODEL.NAME, current_datetime(), ) except TypeError: - output_dir = generate_path( - config.OUTPUT_DIR + "_test", - config.MODEL.NAME, - current_datetime(), - ) + output_dir = generate_path(config.OUTPUT_DIR + "_test", config.MODEL.NAME, current_datetime(),) running_metrics_split = runningScore(n_classes) @@ -368,8 +294,12 @@ def _evaluate_split( # Log split results logger.info(f'Pixel Acc: {score["Pixel Acc: "]:.3f}') - for cdx, class_name in enumerate(_CLASS_NAMES): - logger.info(f' {class_name}_accuracy {score["Class Accuracy: "][cdx]:.3f}') + if debug: + for cdx in range(n_classes): + logger.info(f' Class_{cdx}_accuracy {score["Class Accuracy: "][cdx]:.3f}') + else: + for cdx, class_name in enumerate(_CLASS_NAMES): + logger.info(f' {class_name}_accuracy {score["Class Accuracy: "][cdx]:.3f}') logger.info(f'Mean Class Acc: {score["Mean Class Acc: "]:.3f}') logger.info(f'Freq Weighted IoU: {score["Freq Weighted IoU: "]:.3f}') @@ -418,21 +348,19 @@ def test(*options, cfg=None, debug=False): running_metrics_overall = runningScore(n_classes) # Augmentation - section_aug = Compose( - [Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1,)] - ) + section_aug = Compose([Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1,)]) + # TODO: make sure that this is consistent with how normalization and agumentation for train.py + # issue: https://github.com/microsoft/seismic-deeplearning/issues/270 patch_aug = Compose( [ Resize( - config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, - config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, - always_apply=True, + config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True, ), PadIfNeeded( min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT, min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH, - border_mode=cv2.BORDER_CONSTANT, + border_mode=config.OPENCV_BORDER_CONSTANT, always_apply=True, mask_value=255, ), @@ -463,17 +391,35 @@ def test(*options, cfg=None, debug=False): score, class_iou = running_metrics_overall.get_scores() logger.info("--------------- FINAL RESULTS -----------------") - logger.info(f'Pixel Acc: {score["Pixel Acc: "]:.3f}') - for cdx, class_name in enumerate(_CLASS_NAMES): - logger.info(f' {class_name}_accuracy {score["Class Accuracy: "][cdx]:.3f}') - logger.info(f'Mean Class Acc: {score["Mean Class Acc: "]:.3f}') - logger.info(f'Freq Weighted IoU: {score["Freq Weighted IoU: "]:.3f}') - logger.info(f'Mean IoU: {score["Mean IoU: "]:0.3f}') + logger.info(f'Pixel Acc: {score["Pixel Acc: "]:.4f}') + + if debug: + for cdx in range(n_classes): + logger.info(f' Class_{cdx}_accuracy {score["Class Accuracy: "][cdx]:.3f}') + else: + for cdx, class_name in enumerate(_CLASS_NAMES): + logger.info(f' {class_name}_accuracy {score["Class Accuracy: "][cdx]:.4f}') + + logger.info(f'Mean Class Acc: {score["Mean Class Acc: "]:.4f}') + logger.info(f'Freq Weighted IoU: {score["Freq Weighted IoU: "]:.4f}') + logger.info(f'Mean IoU: {score["Mean IoU: "]:0.4f}') # Save confusion matrix: confusion = score["confusion_matrix"] np.savetxt(path.join(log_dir, "confusion.csv"), 
confusion, delimiter=" ") + if debug: + config_file_name = "default_config" if not cfg else cfg.split("/")[-1].split(".")[0] + fname = f"metrics_test_{config_file_name}_{config.TRAIN.MODEL_DIR}.json" + with open(fname, "w") as fid: + json.dump( + { + metric: score[metric] + for metric in ["Pixel Acc: ", "Mean Class Acc: ", "Freq Weighted IoU: ", "Mean IoU: "] + }, + fid, + ) + if __name__ == "__main__": fire.Fire(test) diff --git a/experiments/interpretation/dutchf3_patch/local/test.sh b/experiments/interpretation/dutchf3_patch/local/test.sh index ad68cf2e..a497e127 100644 --- a/experiments/interpretation/dutchf3_patch/local/test.sh +++ b/experiments/interpretation/dutchf3_patch/local/test.sh @@ -1,2 +1,2 @@ #!/bin/bash -python test.py --cfg "configs/seresnet_unet.yaml" \ No newline at end of file +python test.py --cfg "configs/hrnet.yaml" \ No newline at end of file diff --git a/experiments/interpretation/dutchf3_patch/local/train.py b/experiments/interpretation/dutchf3_patch/local/train.py index b8f49ca4..c180f2c5 100644 --- a/experiments/interpretation/dutchf3_patch/local/train.py +++ b/experiments/interpretation/dutchf3_patch/local/train.py @@ -12,60 +12,31 @@ Time to run on single V100 for 300 epochs: 4.5 days """ - +import json import logging import logging.config from os import path -import cv2 import fire import numpy as np import torch +from torch.utils import data from albumentations import Compose, HorizontalFlip, Normalize, PadIfNeeded, Resize from ignite.contrib.handlers import CosineAnnealingScheduler from ignite.engine import Events from ignite.metrics import Loss from ignite.utils import convert_tensor -from toolz import compose -from torch.utils import data - -from deepseismic_interpretation.dutchf3.data import get_patch_loader, decode_segmap -from cv_lib.utils import load_log_configuration -from cv_lib.event_handlers import ( - SnapshotHandler, - logging_handlers, - tensorboard_handlers, -) -from cv_lib.event_handlers.logging_handlers import Evaluator -from cv_lib.event_handlers.tensorboard_handlers import ( - create_image_writer, - create_summary_writer, -) -from cv_lib.segmentation import models, extract_metric_from -from cv_lib.segmentation.dutchf3.engine import ( - create_supervised_evaluator, - create_supervised_trainer, -) - -from cv_lib.segmentation.metrics import ( - pixelwise_accuracy, - class_accuracy, - mean_class_accuracy, - class_iou, - mean_iou, -) - -from cv_lib.segmentation.dutchf3.utils import ( - current_datetime, - generate_path, - git_branch, - git_hash, - np_to_tb, -) +from cv_lib.event_handlers import SnapshotHandler, logging_handlers, tensorboard_handlers +from cv_lib.event_handlers.tensorboard_handlers import create_summary_writer +from cv_lib.segmentation import extract_metric_from, models +from cv_lib.segmentation.dutchf3.engine import create_supervised_evaluator, create_supervised_trainer +from cv_lib.segmentation.dutchf3.utils import current_datetime, git_branch, git_hash +from cv_lib.segmentation.metrics import class_accuracy, class_iou, mean_class_accuracy, mean_iou, pixelwise_accuracy +from cv_lib.utils import load_log_configuration, generate_path +from deepseismic_interpretation.dutchf3.data import get_patch_loader from default import _C as config from default import update_config -from toolz import take def prepare_batch(batch, device=None, non_blocking=False): @@ -82,43 +53,64 @@ def run(*options, cfg=None, debug=False): Notes: Options can be passed in via the options argument and loaded from the cfg file Options from default.py will be 
overridden by options loaded from cfg file + Options from default.py will be overridden by options loaded from cfg file Options passed in via options argument will override option loaded from cfg file Args: *options (str,int ,optional): Options used to overide what is loaded from the config. To see what options are available consult default.py - cfg (str, optional): Location of config file to load. Defaults to None. + cfg (str, optional): Location of config file to load. Defaults to None. debug (bool): Places scripts in debug/test mode and only executes a few iterations """ - + # Configuration: update_config(config, options=options, config_file=cfg) - # Start logging + # The model will be saved under: outputs// + config_file_name = "default_config" if not cfg else cfg.split("/")[-1].split(".")[0] + try: + output_dir = generate_path( + config.OUTPUT_DIR, git_branch(), git_hash(), config_file_name, config.TRAIN.MODEL_DIR, current_datetime(), + ) + except TypeError: + output_dir = generate_path(config.OUTPUT_DIR, config_file_name, config.TRAIN.MODEL_DIR, current_datetime(),) + + # Logging: load_log_configuration(config.LOG_CONFIG) logger = logging.getLogger(__name__) logger.debug(config.WORKERS) - scheduler_step = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS + + # Set CUDNN benchmark mode: torch.backends.cudnn.benchmark = config.CUDNN.BENCHMARK + # we will write the model under outputs / config_file_name / model_dir + config_file_name = "default_config" if not cfg else cfg.split("/")[-1].split(".")[0] + + # Fix random seeds: torch.manual_seed(config.SEED) if torch.cuda.is_available(): torch.cuda.manual_seed_all(config.SEED) np.random.seed(seed=config.SEED) - # Setup Augmentations + # Augmentation: basic_aug = Compose( [ Normalize(mean=(config.TRAIN.MEAN,), std=(config.TRAIN.STD,), max_pixel_value=1), - Resize( - config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, - config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, + PadIfNeeded( + min_height=config.TRAIN.PATCH_SIZE, + min_width=config.TRAIN.PATCH_SIZE, + border_mode=config.OPENCV_BORDER_CONSTANT, always_apply=True, + mask_value=255, + value=0, + ), + Resize( + config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True, ), PadIfNeeded( min_height=config.TRAIN.AUGMENTATIONS.PAD.HEIGHT, min_width=config.TRAIN.AUGMENTATIONS.PAD.WIDTH, - border_mode=cv2.BORDER_CONSTANT, + border_mode=config.OPENCV_BORDER_CONSTANT, always_apply=True, mask_value=255, ), @@ -130,45 +122,53 @@ def run(*options, cfg=None, debug=False): else: train_aug = val_aug = basic_aug + # Training and Validation Loaders: TrainPatchLoader = get_patch_loader(config) - + logging.info(f"Using {TrainPatchLoader}") train_set = TrainPatchLoader( config.DATASET.ROOT, + config.DATASET.NUM_CLASSES, split="train", is_transform=True, stride=config.TRAIN.STRIDE, patch_size=config.TRAIN.PATCH_SIZE, augmentations=train_aug, + #augmentations=Resize(config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True), + debug=True ) - + logger.info(train_set) + n_classes = train_set.n_classes val_set = TrainPatchLoader( config.DATASET.ROOT, + config.DATASET.NUM_CLASSES, split="val", is_transform=True, stride=config.TRAIN.STRIDE, patch_size=config.TRAIN.PATCH_SIZE, augmentations=val_aug, + #augmentations=Resize(config.TRAIN.AUGMENTATIONS.RESIZE.HEIGHT, config.TRAIN.AUGMENTATIONS.RESIZE.WIDTH, always_apply=True), + debug=True ) + logger.info(val_set) - n_classes = train_set.n_classes + if debug: + logger.info("Running in debug mode..") + 
train_set = data.Subset(train_set, range(config.TRAIN.BATCH_SIZE_PER_GPU*config.NUM_DEBUG_BATCHES)) + val_set = data.Subset(val_set, range(config.VALIDATION.BATCH_SIZE_PER_GPU)) train_loader = data.DataLoader( - train_set, - batch_size=config.TRAIN.BATCH_SIZE_PER_GPU, - num_workers=config.WORKERS, - shuffle=True, + train_set, batch_size=config.TRAIN.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS, shuffle=True ) val_loader = data.DataLoader( - val_set, batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU, num_workers=config.WORKERS, - ) + val_set, batch_size=config.VALIDATION.BATCH_SIZE_PER_GPU, num_workers=1 + ) # config.WORKERS) + # Model: model = getattr(models, config.MODEL.NAME).get_seg_model(config) + device = "cuda" if torch.cuda.is_available() else "cpu" + model = model.to(device) - device = "cpu" - if torch.cuda.is_available(): - device = "cuda" - model = model.to(device) # Send to GPU - + # Optimizer and LR Scheduler: optimizer = torch.optim.SGD( model.parameters(), lr=config.TRAIN.MAX_LR, @@ -176,139 +176,85 @@ def run(*options, cfg=None, debug=False): weight_decay=config.TRAIN.WEIGHT_DECAY, ) - try: - output_dir = generate_path( - config.OUTPUT_DIR, git_branch(), git_hash(), config.MODEL.NAME, current_datetime(), - ) - except TypeError: - output_dir = generate_path(config.OUTPUT_DIR, config.MODEL.NAME, current_datetime(),) - - summary_writer = create_summary_writer(log_dir=path.join(output_dir, config.LOG_DIR)) - - snapshot_duration = scheduler_step * len(train_loader) + epochs_per_cycle = config.TRAIN.END_EPOCH // config.TRAIN.SNAPSHOTS + snapshot_duration = epochs_per_cycle * len(train_loader) if not debug else 2 * len(train_loader) scheduler = CosineAnnealingScheduler( - optimizer, "lr", config.TRAIN.MAX_LR, config.TRAIN.MIN_LR, snapshot_duration + optimizer, "lr", config.TRAIN.MAX_LR, config.TRAIN.MIN_LR, cycle_size=snapshot_duration ) - # weights are inversely proportional to the frequency of the classes in the - # training set + # Tensorboard writer: + summary_writer = create_summary_writer(log_dir=path.join(output_dir, "logs")) + + # class weights are inversely proportional to the frequency of the classes in the training set class_weights = torch.tensor(config.DATASET.CLASS_WEIGHTS, device=device, requires_grad=False) + # Loss: criterion = torch.nn.CrossEntropyLoss(weight=class_weights, ignore_index=255, reduction="mean") + # Ignite trainer and evaluator: trainer = create_supervised_trainer(model, optimizer, criterion, prepare_batch, device=device) - - trainer.add_event_handler(Events.ITERATION_STARTED, scheduler) - - trainer.add_event_handler( - Events.ITERATION_COMPLETED, - logging_handlers.log_training_output(log_interval=config.PRINT_FREQ), - ) - trainer.add_event_handler(Events.EPOCH_STARTED, logging_handlers.log_lr(optimizer)) - trainer.add_event_handler( - Events.EPOCH_STARTED, tensorboard_handlers.log_lr(summary_writer, optimizer, "epoch"), - ) - trainer.add_event_handler( - Events.ITERATION_COMPLETED, tensorboard_handlers.log_training_output(summary_writer), - ) - - def _select_pred_and_mask(model_out_dict): - return (model_out_dict["y_pred"].squeeze(), model_out_dict["mask"].squeeze()) - + transform_fn = lambda output_dict: (output_dict["y_pred"].squeeze(), output_dict["mask"].squeeze()) evaluator = create_supervised_evaluator( model, prepare_batch, metrics={ - "nll": Loss(criterion, output_transform=_select_pred_and_mask), - "pixacc": pixelwise_accuracy( - n_classes, output_transform=_select_pred_and_mask, device=device - ), - "cacc": class_accuracy(n_classes, 
output_transform=_select_pred_and_mask), - "mca": mean_class_accuracy(n_classes, output_transform=_select_pred_and_mask), - "ciou": class_iou(n_classes, output_transform=_select_pred_and_mask), - "mIoU": mean_iou(n_classes, output_transform=_select_pred_and_mask), + "nll": Loss(criterion, output_transform=transform_fn), + "pixacc": pixelwise_accuracy(n_classes, output_transform=transform_fn, device=device), + "cacc": class_accuracy(n_classes, output_transform=transform_fn), + "mca": mean_class_accuracy(n_classes, output_transform=transform_fn), + "ciou": class_iou(n_classes, output_transform=transform_fn), + "mIoU": mean_iou(n_classes, output_transform=transform_fn), }, device=device, ) + trainer.add_event_handler(Events.ITERATION_STARTED, scheduler) - # Set the validation run to start on the epoch completion of the training run - if debug: - logger.info("Running Validation in Debug/Test mode") - val_loader = take(3, val_loader) - - trainer.add_event_handler(Events.EPOCH_COMPLETED, Evaluator(evaluator, val_loader)) - - evaluator.add_event_handler( - Events.EPOCH_COMPLETED, - logging_handlers.log_metrics( - "Validation results", - metrics_dict={ - "nll": "Avg loss :", - "pixacc": "Pixelwise Accuracy :", - "mca": "Avg Class Accuracy :", - "mIoU": "Avg Class IoU :", - }, - ), - ) - - evaluator.add_event_handler( - Events.EPOCH_COMPLETED, - tensorboard_handlers.log_metrics( - summary_writer, - trainer, - "epoch", - metrics_dict={ - "mIoU": "Validation/mIoU", - "nll": "Validation/Loss", - "mca": "Validation/MCA", - "pixacc": "Validation/Pixel_Acc", - }, - ), - ) - - def _select_max(pred_tensor): - return pred_tensor.max(1)[1] - - def _tensor_to_numpy(pred_tensor): - return pred_tensor.squeeze().cpu().numpy() - - transform_func = compose(np_to_tb, decode_segmap(n_classes=n_classes), _tensor_to_numpy) - - transform_pred = compose(transform_func, _select_max) - - evaluator.add_event_handler( - Events.EPOCH_COMPLETED, create_image_writer(summary_writer, "Validation/Image", "image"), - ) - evaluator.add_event_handler( - Events.EPOCH_COMPLETED, - create_image_writer( - summary_writer, "Validation/Mask", "mask", transform_func=transform_func - ), - ) - evaluator.add_event_handler( - Events.EPOCH_COMPLETED, - create_image_writer( - summary_writer, "Validation/Pred", "y_pred", transform_func=transform_pred - ), + # Logging: + trainer.add_event_handler( + Events.ITERATION_COMPLETED, logging_handlers.log_training_output(log_interval=config.PRINT_FREQ), ) - - def snapshot_function(): - return (trainer.state.iteration % snapshot_duration) == 0 - + trainer.add_event_handler(Events.EPOCH_COMPLETED, logging_handlers.log_lr(optimizer)) + + # Tensorboard and Logging: + trainer.add_event_handler(Events.ITERATION_COMPLETED, tensorboard_handlers.log_training_output(summary_writer)) + trainer.add_event_handler(Events.ITERATION_COMPLETED, tensorboard_handlers.log_validation_output(summary_writer)) + + # add specific logger which also triggers printed metrics on training set + @trainer.on(Events.EPOCH_COMPLETED) + def log_training_results(engine): + evaluator.run(train_loader) + tensorboard_handlers.log_results(engine, evaluator, summary_writer, n_classes, stage="Training") + logging_handlers.log_metrics(engine, evaluator, stage="Training") + + # add specific logger which also triggers printed metrics on validation set + @trainer.on(Events.EPOCH_COMPLETED) + def log_validation_results(engine): + evaluator.run(val_loader) + tensorboard_handlers.log_results(engine, evaluator, summary_writer, n_classes, 
stage="Validation") + logging_handlers.log_metrics(engine, evaluator, stage="Validation") + # dump validation set metrics at the very end for debugging purposes + if engine.state.epoch == config.TRAIN.END_EPOCH and debug: + fname = f"metrics_{config_file_name}_{config.TRAIN.MODEL_DIR}.json" + metrics = evaluator.state.metrics + out_dict = {x: metrics[x] for x in ["nll", "pixacc", "mca", "mIoU"]} + with open(fname, "w") as fid: + json.dump(out_dict, fid) + log_msg = " ".join(f"{k}: {out_dict[k]}" for k in out_dict.keys()) + logging.info(log_msg) + + # Checkpointing: snapshotting trained models to disk checkpoint_handler = SnapshotHandler( - path.join(output_dir, config.TRAIN.MODEL_DIR), + output_dir, config.MODEL.NAME, extract_metric_from("mIoU"), - snapshot_function, + lambda: (trainer.state.iteration % snapshot_duration) == 0, ) evaluator.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler, {"model": model}) logger.info("Starting training") - if debug: - logger.info("Running Training in Debug/Test mode") - train_loader = take(3, train_loader) + trainer.run(train_loader, max_epochs=config.TRAIN.END_EPOCH, epoch_length=len(train_loader), seed=config.SEED) - trainer.run(train_loader, max_epochs=config.TRAIN.END_EPOCH) + summary_writer.close() if __name__ == "__main__": diff --git a/interpretation/deepseismic_interpretation/dutchf3/data.py b/interpretation/deepseismic_interpretation/dutchf3/data.py index 36d69f21..e580b4bb 100644 --- a/interpretation/deepseismic_interpretation/dutchf3/data.py +++ b/interpretation/deepseismic_interpretation/dutchf3/data.py @@ -6,6 +6,10 @@ import segyio from os import path import scipy +from cv_lib.utils import generate_path, mask_to_disk, image_to_disk + +from matplotlib import pyplot as plt +from PIL import Image # bugfix for scipy imports import scipy.misc @@ -13,18 +17,11 @@ import torch from toolz import curry from torch.utils import data - +import logging from deepseismic_interpretation.dutchf3.utils.batch import ( interpolate_to_fit_data, parse_labels_in_image, get_coordinates_for_slice, - get_grid, - augment_flip, - augment_rot_xy, - augment_rot_z, - augment_stretch, - rand_int, - trilinear_interpolation, ) @@ -52,46 +49,6 @@ def _test2_labels_for(data_dir): return path.join(data_dir, "test_once", "test2_labels.npy") -def readSEGY(filename): - """[summary] - Read the segy file and return the data as a numpy array and a dictionary describing what has been read in. - - Arguments: - filename {str} -- .segy file location. 
- - Returns: - [type] -- 3D segy data as numy array and a dictionary with metadata information - """ - - # TODO: we really need to add logging to this repo - print("Loading data cube from", filename, "with:") - - # Read full data cube - data = segyio.tools.cube(filename) - - # Put temporal axis first - data = np.moveaxis(data, -1, 0) - - # Make data cube fast to acess - data = np.ascontiguousarray(data, "float32") - - # Read meta data - segyfile = segyio.open(filename, "r") - print(" Crosslines: ", segyfile.xlines[0], ":", segyfile.xlines[-1]) - print(" Inlines: ", segyfile.ilines[0], ":", segyfile.ilines[-1]) - print(" Timeslices: ", "1", ":", data.shape[0]) - - # Make dict with cube-info - data_info = {} - data_info["crossline_start"] = segyfile.xlines[0] - data_info["inline_start"] = segyfile.ilines[0] - data_info["timeslice_start"] = 1 # Todo: read this from segy - data_info["shape"] = data.shape - # Read dt and other params needed to do create a new - - return data, data_info - - def read_labels(fname, data_info): """ Read labels from an image. @@ -157,101 +114,23 @@ def read_labels(fname, data_info): return label_imgs, label_coordinates -def get_random_batch( - data_cube, - label_coordinates, - im_size, - batch_size, - index, - random_flip=False, - random_stretch=None, - random_rot_xy=None, - random_rot_z=None, -): +class SectionLoader(data.Dataset): """ - Returns a batch of augmented samples with center pixels randomly drawn from label_coordinates - - Args: - data_cube: 3D numpy array with floating point velocity values - label_coordinates: 3D coordinates of the labeled training slice - im_size: size of the 3D voxel which we're cutting out around each label_coordinate - batch_size: size of the batch - index: element index of this element in a batch - random_flip: bool to perform random voxel flip - random_stretch: bool to enable random stretch - random_rot_xy: bool to enable random rotation of the voxel around dim-0 and dim-1 - random_rot_z: bool to enable random rotation around dim-2 - - Returns: - a tuple of batch numpy array array of data with dimension - (batch, 1, data_cube.shape[0], data_cube.shape[1], data_cube.shape[2]) and the associated labels as an array - of size (batch). + Base class for section data loader + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param bool debug: enable debugging output """ - # always generate only one datapoint - batch_size controls class balance - num_batch_size = 1 - - # Make 3 im_size elements - if isinstance(im_size, int): - im_size = [im_size, im_size, im_size] - - # Output arrays - batch = np.zeros([num_batch_size, 1, im_size[0], im_size[1], im_size[2]]) - ret_labels = np.zeros([num_batch_size]) - - class_keys = list(label_coordinates) - n_classes = len(class_keys) - - # We seek to have a balanced batch with equally many samples from each class. 
- # get total number of samples per class - samples_per_class = batch_size // n_classes - # figure out index relative to zero (not sequentially counting points) - index = index - batch_size * (index // batch_size) - # figure out which class to sample for this datapoint - class_ind = index // samples_per_class - - # Start by getting a grid centered around (0,0,0) - grid = get_grid(im_size) - - # Apply random flip - if random_flip: - grid = augment_flip(grid) - - # Apply random rotations - if random_rot_xy: - grid = augment_rot_xy(grid, random_rot_xy) - if random_rot_z: - grid = augment_rot_z(grid, random_rot_z) - - # Apply random stretch - if random_stretch: - grid = augment_stretch(grid, random_stretch) - - # Pick random location from the label_coordinates for this class: - coords_for_class = label_coordinates[class_keys[class_ind]] - random_index = rand_int(0, coords_for_class.shape[1]) - coord = coords_for_class[:, random_index : random_index + 1] - - # Move grid to be centered around this location - grid += coord - - # Interpolate samples at grid from the data: - sample = trilinear_interpolation(data_cube, grid) - - # Insert in output arrays - ret_labels[0] = class_ind - batch[0, 0, :, :, :] = np.reshape(sample, (im_size[0], im_size[1], im_size[2])) - - return batch, ret_labels - - -class SectionLoader(data.Dataset): - def __init__(self, data_dir, split="train", is_transform=True, augmentations=None): + def __init__(self, data_dir, n_classes, split="train", is_transform=True, augmentations=None, debug=False): self.split = split self.data_dir = data_dir self.is_transform = is_transform self.augmentations = augmentations - self.n_classes = 6 + self.n_classes = n_classes self.sections = list() def __len__(self): @@ -288,80 +167,53 @@ def transform(self, img, lbl): return torch.from_numpy(img).float(), torch.from_numpy(lbl).long() -class VoxelLoader(data.Dataset): - def __init__( - self, root_path, filename, window_size=65, split="train", n_classes=2, gen_coord_list=False, len=None, - ): - - assert split == "train" or split == "val" - - # location of the file - self.root_path = root_path - self.split = split - self.n_classes = n_classes - self.window_size = window_size - self.coord_list = None - self.filename = filename - self.full_filename = path.join(root_path, filename) - - # Read 3D cube - # NOTE: we cannot pass this data manually as serialization of data into each python process is costly, - # so each worker has to load the data on its own. - self.data, self.data_info = readSEGY(self.full_filename) - if len: - self.len = len - else: - self.len = self.data.size - self.labels = None - - if gen_coord_list: - # generate a list of coordinates to index the entire voxel - # memory footprint of this isn't large yet, so not need to wrap as a generator - nx, ny, nz = self.data.shape - x_list = range(self.window_size, nx - self.window_size) - y_list = range(self.window_size, ny - self.window_size) - z_list = range(self.window_size, nz - self.window_size) - - print("-- generating coord list --") - # TODO: is there any way to use a generator with pyTorch data loader? - self.coord_list = list(itertools.product(x_list, y_list, z_list)) - - def __len__(self): - return self.len - - def __getitem__(self, index): - - # TODO: can we specify a pixel mathematically by index? 
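The `TODO: can we specify a pixel mathematically by index?` above is worth a note for anyone revisiting this removed loader: because `coord_list` is the Cartesian product of three ranges, `np.unravel_index` can recover the same coordinate without materialising the list. This is only a sketch of that idea, with made-up cube dimensions, and not code from the repo:

```
import itertools
import numpy as np

# Made-up cube dimensions and window size.
nx, ny, nz, window = 10, 8, 6, 2
x_list = range(window, nx - window)
y_list = range(window, ny - window)
z_list = range(window, nz - window)

# What the removed VoxelLoader builds up front ...
coord_list = list(itertools.product(x_list, y_list, z_list))

# ... and the index arithmetic that yields the same pixel on demand.
index = 17
i, j, k = np.unravel_index(index, (len(x_list), len(y_list), len(z_list)))
assert coord_list[index] == (x_list[i], y_list[j], z_list[k])
```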
- pixel = self.coord_list[index] - x, y, z = pixel - # TODO: current bottleneck - can we slice out voxels any faster - small_cube = self.data[ - x - self.window : x + self.window + 1, - y - self.window : y + self.window + 1, - z - self.window : z + self.window + 1, - ] - - return small_cube[np.newaxis, :, :, :], pixel - - # TODO: do we need a transformer for voxels? +class TrainSectionLoader(SectionLoader): """ - def transform(self, img, lbl): - # to be in the BxCxHxW that PyTorch uses: - lbl = np.expand_dims(lbl, 0) - if len(img.shape) == 2: - img = np.expand_dims(img, 0) - return torch.from_numpy(img).float(), torch.from_numpy(lbl).long() + Training data loader for sections + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param str seismic_path: Override file path for seismic data + :param str label_path: Override file path for label data + :param bool debug: enable debugging output """ - -class TrainSectionLoader(SectionLoader): - def __init__(self, data_dir, split="train", is_transform=True, augmentations=None): + def __init__( + self, + data_dir, + n_classes, + split="train", + is_transform=True, + augmentations=None, + seismic_path=None, + label_path=None, + debug=False, + ): super(TrainSectionLoader, self).__init__( - data_dir, split=split, is_transform=is_transform, augmentations=augmentations, + data_dir, + n_classes, + split=split, + is_transform=is_transform, + augmentations=augmentations, + seismic_path=seismic_path, + label_path=label_path, + debug=debug, ) - self.seismic = np.load(_train_data_for(self.data_dir)) - self.labels = np.load(_train_labels_for(self.data_dir)) + if seismic_path is not None and label_path is not None: + # Load npy files (seismc and corresponding labels) from provided + # location (path) + if not path.isfile(seismic_path): + raise Exception(f"{seismic_path} does not exist") + if not path.isfile(label_path): + raise Exception(f"{label_path} does not exist") + self.seismic = np.load(seismic_path) + self.labels = np.load(label_path) + else: + self.seismic = np.load(_train_data_for(self.data_dir)) + self.labels = np.load(_train_labels_for(self.data_dir)) # reading the file names for split txt_path = path.join(self.data_dir, "splits", "section_" + split + ".txt") @@ -371,9 +223,38 @@ def __init__(self, data_dir, split="train", is_transform=True, augmentations=Non class TrainSectionLoaderWithDepth(TrainSectionLoader): - def __init__(self, data_dir, split="train", is_transform=True, augmentations=None): + """ + Section data loader that includes additional channel for depth + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param str seismic_path: Override file path for seismic data + :param str label_path: Override file path for label data + :param bool debug: enable debugging output + """ + + def __init__( + self, + data_dir, + n_classes, + split="train", + is_transform=True, + augmentations=None, + seismic_path=None, + label_path=None, + debug=False, + ): super(TrainSectionLoaderWithDepth, self).__init__( - 
data_dir, split=split, is_transform=is_transform, augmentations=augmentations, + data_dir, + n_classes, + split=split, + is_transform=is_transform, + augmentations=augmentations, + seismic_path=seismic_path, + label_path=label_path, + debug=debug, ) self.seismic = add_section_depth_channels(self.seismic) # NCWH @@ -405,51 +286,32 @@ def __getitem__(self, index): return im, lbl -class TrainVoxelWaldelandLoader(VoxelLoader): +class TestSectionLoader(SectionLoader): + """ + Test data loader for sections + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param str seismic_path: Override file path for seismic data + :param str label_path: Override file path for label data + :param bool debug: enable debugging output + """ + def __init__( - self, root_path, filename, split="train", window_size=65, batch_size=None, len=None, + self, + data_dir, + n_classes, + split="test1", + is_transform=True, + augmentations=None, + seismic_path=None, + label_path=None, + debug=False, ): - super(TrainVoxelWaldelandLoader, self).__init__( - root_path, filename, split=split, window_size=window_size, len=len - ) - - label_fname = None - if split == "train": - label_fname = path.join(self.root_path, "inline_339.png") - elif split == "val": - label_fname = path.join(self.root_path, "inline_405.png") - else: - raise Exception("undefined split") - - self.class_imgs, self.coordinates = read_labels(label_fname, self.data_info) - - self.batch_size = batch_size if batch_size else 1 - - def __getitem__(self, index): - # print(index) - batch, labels = get_random_batch( - self.data, - self.coordinates, - self.window_size, - self.batch_size, - index, - random_flip=True, - random_stretch=0.2, - random_rot_xy=180, - random_rot_z=15, - ) - - return batch, labels - - -# TODO: write TrainVoxelLoaderWithDepth -TrainVoxelLoaderWithDepth = TrainVoxelWaldelandLoader - - -class TestSectionLoader(SectionLoader): - def __init__(self, data_dir, split="test1", is_transform=True, augmentations=None): super(TestSectionLoader, self).__init__( - data_dir, split=split, is_transform=is_transform, augmentations=augmentations, + data_dir, n_classes, split=split, is_transform=is_transform, augmentations=augmentations, debug=debug, ) if "test1" in self.split: @@ -458,6 +320,15 @@ def __init__(self, data_dir, split="test1", is_transform=True, augmentations=Non elif "test2" in self.split: self.seismic = np.load(_test2_data_for(self.data_dir)) self.labels = np.load(_test2_labels_for(self.data_dir)) + elif seismic_path is not None and label_path is not None: + # Load npy files (seismc and corresponding labels) from provided + # location (path) + if not path.isfile(seismic_path): + raise Exception(f"{seismic_path} does not exist") + if not path.isfile(label_path): + raise Exception(f"{label_path} does not exist") + self.seismic = np.load(seismic_path) + self.labels = np.load(label_path) # We are in test mode. Only read the given split. The other one might not # be available. 
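With the constructor changes above, a section test loader can now be pointed at arbitrary `.npy` volumes instead of the bundled `test1`/`test2` files. A minimal usage sketch follows; the paths, split name and class count are hypothetical, and the unit tests added later in this patch exercise the same pattern (they also create the required `splits/section_<split>.txt` file):

```
from deepseismic_interpretation.dutchf3.data import TestSectionLoader

# Hypothetical paths; omit seismic_path/label_path to fall back to the
# standard test1/test2 volumes under data_dir/test_once.
test_set = TestSectionLoader(
    data_dir="/data/dutchf3",
    n_classes=6,
    split="volume_name",
    is_transform=True,
    augmentations=None,
    seismic_path="/data/dutchf3/volume_name/seismic.npy",
    label_path="/data/dutchf3/volume_name/labels.npy",
)
```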
@@ -468,9 +339,38 @@ def __init__(self, data_dir, split="test1", is_transform=True, augmentations=Non class TestSectionLoaderWithDepth(TestSectionLoader): - def __init__(self, data_dir, split="test1", is_transform=True, augmentations=None): + """ + Test data loader for sections that includes additional channel for depth + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param str seismic_path: Override file path for seismic data + :param str label_path: Override file path for label data + :param bool debug: enable debugging output + """ + + def __init__( + self, + data_dir, + n_classes, + split="test1", + is_transform=True, + augmentations=None, + seismic_path=None, + label_path=None, + debug=False, + ): super(TestSectionLoaderWithDepth, self).__init__( - data_dir, split=split, is_transform=is_transform, augmentations=augmentations, + data_dir, + n_classes, + split=split, + is_transform=is_transform, + augmentations=augmentations, + seismic_path=seismic_path, + label_path=label_path, + debug=debug, ) self.seismic = add_section_depth_channels(self.seismic) # NCWH @@ -502,15 +402,6 @@ def __getitem__(self, index): return im, lbl -class TestVoxelWaldelandLoader(VoxelLoader): - def __init__(self, data_dir, split="test"): - super(TestVoxelWaldelandLoader, self).__init__(data_dir, split=split) - - -# TODO: write TestVoxelLoaderWithDepth -TestVoxelLoaderWithDepth = TestVoxelWaldelandLoader - - def _transform_WH_to_HW(numpy_array): assert len(numpy_array.shape) >= 2, "This method needs at least 2D arrays" return np.swapaxes(numpy_array, -2, -1) @@ -518,17 +409,28 @@ def _transform_WH_to_HW(numpy_array): class PatchLoader(data.Dataset): """ - Data loader for the patch-based deconvnet + Base Data loader for the patch-based deconvnet + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param int stride: training data stride + :param int patch_size: Size of patch for training + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param bool debug: enable debugging output """ - def __init__(self, data_dir, stride=30, patch_size=99, is_transform=True, augmentations=None): + def __init__( + self, data_dir, n_classes, stride=30, patch_size=99, is_transform=True, augmentations=None, debug=False, + ): self.data_dir = data_dir self.is_transform = is_transform self.augmentations = augmentations - self.n_classes = 6 + self.n_classes = n_classes self.patches = list() self.patch_size = patch_size self.stride = stride + self.debug=debug def pad_volume(self, volume): """ @@ -546,7 +448,7 @@ def __getitem__(self, index): # Shift offsets the padding that is added in training # shift = self.patch_size if "test" not in self.split else 0 - # TODO: Remember we are cancelling the shift since we no longer pad + # Remember we are cancelling the shift since we no longer pad shift = 0 idx, xdx, ddx = int(idx) + shift, int(xdx) + shift, int(ddx) + shift @@ -563,6 +465,13 @@ def __getitem__(self, index): augmented_dict = self.augmentations(image=im, mask=lbl) im, lbl = augmented_dict["image"], augmented_dict["mask"] + # dump images and 
labels to disk + if self.debug: + outdir = f"patchLoader_{self.split}_{'aug' if self.augmentations is not None else 'noaug'}" + generate_path(outdir) + image_to_disk(im, f"{outdir}/{index}_img.png") + mask_to_disk(lbl, f"{outdir}/{index}_lbl.png") + if self.is_transform: im, lbl = self.transform(im, lbl) return im, lbl @@ -576,36 +485,87 @@ def transform(self, img, lbl): class TestPatchLoader(PatchLoader): - def __init__(self, data_dir, stride=30, patch_size=99, is_transform=True, augmentations=None): + """ + Test Data loader for the patch-based deconvnet + :param str data_dir: Root directory for training/test data + :param str n_classes: number of segmentation mask classes + :param int stride: training data stride + :param int patch_size: Size of patch for training + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param bool debug: enable debugging output + """ + + def __init__( + self, data_dir, n_classes, stride=30, patch_size=99, is_transform=True, augmentations=None, debug=False + ): super(TestPatchLoader, self).__init__( - data_dir, stride=stride, patch_size=patch_size, is_transform=is_transform, augmentations=augmentations, + data_dir, + n_classes, + stride=stride, + patch_size=patch_size, + is_transform=is_transform, + augmentations=augmentations, + debug=debug, ) ## Warning: this is not used or tested raise NotImplementedError("This class is not correctly implemented.") self.seismic = np.load(_train_data_for(self.data_dir)) self.labels = np.load(_train_labels_for(self.data_dir)) - # We are in test mode. Only read the given split. The other one might not - # be available. - self.split = "test1" # TODO: Fix this can also be test2 - txt_path = path.join(self.data_dir, "splits", "patch_" + self.split + ".txt") patch_list = tuple(open(txt_path, "r")) patch_list = [id_.rstrip() for id_ in patch_list] self.patches = patch_list class TrainPatchLoader(PatchLoader): + """ + Train data loader for the patch-based deconvnet + :param str data_dir: Root directory for training/test data + :param int stride: training data stride + :param int patch_size: Size of patch for training + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param bool debug: enable debugging output + """ + def __init__( - self, data_dir, split="train", stride=30, patch_size=99, is_transform=True, augmentations=None, + self, + data_dir, + n_classes, + split="train", + stride=30, + patch_size=99, + is_transform=True, + augmentations=None, + seismic_path=None, + label_path=None, + debug=False, ): super(TrainPatchLoader, self).__init__( - data_dir, stride=stride, patch_size=patch_size, is_transform=is_transform, augmentations=augmentations, + data_dir, + n_classes, + stride=stride, + patch_size=patch_size, + is_transform=is_transform, + augmentations=augmentations, + debug=debug, ) - # self.seismic = self.pad_volume(np.load(seismic_path)) - # self.labels = self.pad_volume(np.load(labels_path)) + warnings.warn("This no longer pads the volume") - self.seismic = np.load(_train_data_for(self.data_dir)) - self.labels = np.load(_train_labels_for(self.data_dir)) + if seismic_path is not None and label_path is not None: + # Load npy files (seismc and corresponding labels) from provided + # location (path) + if not path.isfile(seismic_path): + raise Exception(f"{seismic_path} does not 
exist") + if not path.isfile(label_path): + raise Exception(f"{label_path} does not exist") + self.seismic = np.load(seismic_path) + self.labels = np.load(label_path) + else: + self.seismic = np.load(_train_data_for(self.data_dir)) + self.labels = np.load(_train_labels_for(self.data_dir)) # We are in train/val mode. Most likely the test splits are not saved yet, # so don't attempt to load them. self.split = split @@ -617,11 +577,39 @@ def __init__( class TrainPatchLoaderWithDepth(TrainPatchLoader): + """ + Train data loader for the patch-based deconvnet with patch depth channel + :param str data_dir: Root directory for training/test data + :param int stride: training data stride + :param int patch_size: Size of patch for training + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param bool debug: enable debugging output + """ + def __init__( - self, data_dir, split="train", stride=30, patch_size=99, is_transform=True, augmentations=None, + self, + data_dir, + split="train", + stride=30, + patch_size=99, + is_transform=True, + augmentations=None, + seismic_path=None, + label_path=None, + debug=False, ): super(TrainPatchLoaderWithDepth, self).__init__( - data_dir, stride=stride, patch_size=patch_size, is_transform=is_transform, augmentations=augmentations, + data_dir, + split=split, + stride=stride, + patch_size=patch_size, + is_transform=is_transform, + augmentations=augmentations, + seismic_path=seismic_path, + label_path=label_path, + debug=debug, ) def __getitem__(self, index): @@ -631,7 +619,7 @@ def __getitem__(self, index): # Shift offsets the padding that is added in training # shift = self.patch_size if "test" not in self.split else 0 - # TODO: Remember we are cancelling the shift since we no longer pad + # Remember we are cancelling the shift since we no longer pad shift = 0 idx, xdx, ddx = int(idx) + shift, int(xdx) + shift, int(ddx) + shift @@ -641,10 +629,8 @@ def __getitem__(self, index): elif direction == "x": im = self.seismic[idx : idx + self.patch_size, xdx, ddx : ddx + self.patch_size] lbl = self.labels[idx : idx + self.patch_size, xdx, ddx : ddx + self.patch_size] - im, lbl = _transform_WH_to_HW(im), _transform_WH_to_HW(lbl) - # TODO: Add check for rotation augmentations and raise warning if found if self.augmentations is not None: augmented_dict = self.augmentations(image=im, mask=lbl) im, lbl = augmented_dict["image"], augmented_dict["mask"] @@ -665,16 +651,43 @@ def _transform_HWC_to_CHW(numpy_array): class TrainPatchLoaderWithSectionDepth(TrainPatchLoader): + """ + Train data loader for the patch-based deconvnet section depth channel + :param str data_dir: Root directory for training/test data + :param int stride: training data stride + :param int patch_size: Size of patch for training + :param str split: split file to use for loading patches + :param bool is_transform: Transform patch to dimensions expected by PyTorch + :param list augmentations: Data augmentations to apply to patches + :param str seismic_path: Override file path for seismic data + :param str label_path: Override file path for label data + :param bool debug: enable debugging output + """ + def __init__( - self, data_dir, split="train", stride=30, patch_size=99, is_transform=True, augmentations=None, + self, + data_dir, + n_classes, + split="train", + stride=30, + patch_size=99, + is_transform=True, + augmentations=None, + seismic_path=None, + 
label_path=None, + debug=False, ): super(TrainPatchLoaderWithSectionDepth, self).__init__( data_dir, + n_classes, split=split, stride=stride, patch_size=patch_size, is_transform=is_transform, augmentations=augmentations, + seismic_path=seismic_path, + label_path=label_path, + debug=debug, ) self.seismic = add_section_depth_channels(self.seismic) @@ -685,9 +698,10 @@ def __getitem__(self, index): # Shift offsets the padding that is added in training # shift = self.patch_size if "test" not in self.split else 0 - # TODO: Remember we are cancelling the shift since we no longer pad + # Remember we are cancelling the shift since we no longer pad shift = 0 idx, xdx, ddx = int(idx) + shift, int(xdx) + shift, int(ddx) + shift + if direction == "i": im = self.seismic[idx, :, xdx : xdx + self.patch_size, ddx : ddx + self.patch_size] lbl = self.labels[idx, xdx : xdx + self.patch_size, ddx : ddx + self.patch_size] @@ -704,10 +718,22 @@ def __getitem__(self, index): im, lbl = augmented_dict["image"], augmented_dict["mask"] im = _transform_HWC_to_CHW(im) + # dump images and labels to disk + if self.debug: + outdir = f"patchLoaderWithSectionDepth_{self.split}_{'aug' if self.augmentations is not None else 'noaug'}" + generate_path(outdir) + image_to_disk(im[0,:,:], f"{outdir}/{index}_img.png") + mask_to_disk(lbl, f"{outdir}/{index}_lbl.png") + if self.is_transform: im, lbl = self.transform(im, lbl) return im, lbl + def __repr__(self): + unique, counts = np.unique(self.labels, return_counts=True) + ratio = counts / np.sum(counts) + return "\n".join(f"{lbl}: {cnt} [{rat}]" for lbl, cnt, rat in zip(unique, counts, ratio)) + _TRAIN_PATCH_LOADERS = { "section": TrainPatchLoaderWithSectionDepth, @@ -716,11 +742,9 @@ def __getitem__(self, index): _TRAIN_SECTION_LOADERS = {"section": TrainSectionLoaderWithDepth} -_TRAIN_VOXEL_LOADERS = {"voxel": TrainVoxelLoaderWithDepth} - def get_patch_loader(cfg): - assert cfg.TRAIN.DEPTH in [ + assert str(cfg.TRAIN.DEPTH).lower() in [ "section", "patch", "none", @@ -730,7 +754,7 @@ def get_patch_loader(cfg): def get_section_loader(cfg): - assert cfg.TRAIN.DEPTH in [ + assert str(cfg.TRAIN.DEPTH).lower() in [ "section", "none", ], f"Depth {cfg.TRAIN.DEPTH} not supported for section data. \ @@ -738,19 +762,12 @@ def get_section_loader(cfg): return _TRAIN_SECTION_LOADERS.get(cfg.TRAIN.DEPTH, TrainSectionLoader) -def get_voxel_loader(cfg): - assert cfg.TRAIN.DEPTH in [ - "voxel", - "none", - ], f"Depth {cfg.TRAIN.DEPTH} not supported for section data. \ - Valid values: voxel, none." - return _TRAIN_SECTION_LOADERS.get(cfg.TRAIN.DEPTH, TrainVoxelWaldelandLoader) - - _TEST_LOADERS = {"section": TestSectionLoaderWithDepth} def get_test_loader(cfg): + logger = logging.getLogger(__name__) + logger.info(f"Test loader {cfg.TRAIN.DEPTH}") return _TEST_LOADERS.get(cfg.TRAIN.DEPTH, TestSectionLoader) @@ -792,32 +809,3 @@ def add_section_depth_channels(sections_numpy): image[1, :, :, row] = const image[2] = image[0] * image[1] return np.swapaxes(image, 0, 1) - - -def get_seismic_labels(): - return np.asarray( - [[69, 117, 180], [145, 191, 219], [224, 243, 248], [254, 224, 144], [252, 141, 89], [215, 48, 39]] - ) - - -@curry -def decode_segmap(label_mask, n_classes=6, label_colours=get_seismic_labels()): - """Decode segmentation class labels into a colour image - Args: - label_mask (np.ndarray): an (N,H,W) array of integer values denoting - the class label at each spatial location. - Returns: - (np.ndarray): the resulting decoded color image (NCHW). 
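Stepping back to the new `__repr__` on `TrainPatchLoaderWithSectionDepth` above: it prints one line per label value with its count and relative frequency. A tiny standalone illustration of the same `np.unique` pattern, using made-up labels:

```
import numpy as np

# Made-up flattened label volume; same pattern as the new __repr__.
labels = np.array([0, 0, 1, 2, 2, 2])
unique, counts = np.unique(labels, return_counts=True)
ratio = counts / np.sum(counts)
print("\n".join(f"{lbl}: {cnt} [{rat}]" for lbl, cnt, rat in zip(unique, counts, ratio)))
# 0: 2 [0.3333333333333333]
# 1: 1 [0.16666666666666666]
# 2: 3 [0.5]
```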
- """ - r = label_mask.copy() - g = label_mask.copy() - b = label_mask.copy() - for ll in range(0, n_classes): - r[label_mask == ll] = label_colours[ll, 0] - g[label_mask == ll] = label_colours[ll, 1] - b[label_mask == ll] = label_colours[ll, 2] - rgb = np.zeros((label_mask.shape[0], label_mask.shape[1], label_mask.shape[2], 3)) - rgb[:, :, :, 0] = r / 255.0 - rgb[:, :, :, 1] = g / 255.0 - rgb[:, :, :, 2] = b / 255.0 - return np.transpose(rgb, (0, 3, 1, 2)) diff --git a/interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py b/interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py new file mode 100644 index 00000000..181fc825 --- /dev/null +++ b/interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py @@ -0,0 +1,325 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. +""" +Tests for TrainLoader and TestLoader classes when overriding the file names of the seismic and label data. +""" + +import tempfile +import numpy as np +from interpretation.deepseismic_interpretation.dutchf3.data import get_test_loader, TrainPatchLoaderWithDepth, TrainSectionLoaderWithDepth +import pytest +import yacs.config +import os + +# npy files dimensions +IL = 5 +XL = 10 +D = 8 + +CONFIG_FILE = "./examples/interpretation/notebooks/configs/unet.yaml" +with open(CONFIG_FILE, "rt") as f_read: + config = yacs.config.load_cfg(f_read) + + +def generate_npy_files(path, data): + np.save(path, data) + + +def assert_dimensions(test_section_loader): + assert test_section_loader.labels.shape[0] == IL + assert test_section_loader.labels.shape[1] == XL + assert test_section_loader.labels.shape[2] == D + + # Because add_section_depth_channels method add + # 2 extra channels to a 1 channel section + assert test_section_loader.seismic.shape[0] == IL + assert test_section_loader.seismic.shape[2] == XL + assert test_section_loader.seismic.shape[3] == D + + +def test_TestSectionLoader_should_load_data_from_test1_set(): + with open(CONFIG_FILE, "rt") as f_read: + config = yacs.config.load_cfg(f_read) + + with tempfile.TemporaryDirectory() as data_dir: + os.makedirs(os.path.join(data_dir, "test_once")) + os.makedirs(os.path.join(data_dir, "splits")) + + seimic = np.zeros([IL, XL, D]) + generate_npy_files(os.path.join(data_dir, "test_once", "test1_seismic.npy"), seimic) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(data_dir, "test_once", "test1_labels.npy"), labels) + + txt_path = os.path.join(data_dir, "splits", "section_test1.txt") + open(txt_path, 'a').close() + + TestSectionLoader = get_test_loader(config) + test_set = TestSectionLoader(data_dir = data_dir, split = 'test1') + + assert_dimensions(test_set) + + +def test_TestSectionLoader_should_load_data_from_test2_set(): + with tempfile.TemporaryDirectory() as data_dir: + os.makedirs(os.path.join(data_dir, "test_once")) + os.makedirs(os.path.join(data_dir, "splits")) + + seimic = np.zeros([IL, XL, D]) + generate_npy_files(os.path.join(data_dir, "test_once", "test2_seismic.npy"), seimic) + + A = np.load(os.path.join(data_dir, "test_once", "test2_seismic.npy")) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(data_dir, "test_once", "test2_labels.npy"), labels) + + txt_path = os.path.join(data_dir, "splits", "section_test2.txt") + open(txt_path, 'a').close() + + TestSectionLoader = get_test_loader(config) + test_set = TestSectionLoader(data_dir = data_dir, split = 'test2') + + assert_dimensions(test_set) + + +def 
test_TestSectionLoader_should_load_data_from_path_override_data(): + with tempfile.TemporaryDirectory() as data_dir: + os.makedirs(os.path.join(data_dir, "volume_name")) + os.makedirs(os.path.join(data_dir, "splits")) + + seimic = np.zeros([IL, XL, D]) + generate_npy_files(os.path.join(data_dir, "volume_name", "seismic.npy"), seimic) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(data_dir, "volume_name", "labels.npy"), labels) + + txt_path = os.path.join(data_dir, "splits", "section_volume_name.txt") + open(txt_path, 'a').close() + + TestSectionLoader = get_test_loader(config) + test_set = TestSectionLoader(data_dir = data_dir, + split = "volume_name", + is_transform = True, + augmentations = None, + seismic_path = os.path.join(data_dir, "volume_name", "seismic.npy"), + label_path = os.path.join(data_dir, "volume_name", "labels.npy")) + + assert_dimensions(test_set) + +def test_TrainSectionLoaderWithDepth_should_fail_on_empty_file_names(tmpdir): + """ + Check for exception when files do not exist + """ + + # Test + with pytest.raises(Exception) as excinfo: + + _ = TrainSectionLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + augmentations=None, + seismic_path = "", + label_path = "" + ) + assert "does not exist" in str(excinfo.value) + + +def test_TrainSectionLoaderWithDepth_should_fail_on_missing_seismic_file(tmpdir): + """ + Check for exception when training param is empty + """ + # Setup + os.makedirs(os.path.join(tmpdir, "volume_name")) + os.makedirs(os.path.join(tmpdir, "splits")) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels) + + txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt") + open(txt_path, 'a').close() + + # Test + with pytest.raises(Exception) as excinfo: + + _ = TrainSectionLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + augmentations=None, + seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"), + label_path=os.path.join(tmpdir, "volume_name", "labels.npy") + ) + assert "does not exist" in str(excinfo.value) + + +def test_TrainSectionLoaderWithDepth_should_fail_on_missing_label_file(tmpdir): + """ + Check for exception when training param is empty + """ + # Setup + os.makedirs(os.path.join(tmpdir, "volume_name")) + os.makedirs(os.path.join(tmpdir, "splits")) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels) + + txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt") + open(txt_path, 'a').close() + + # Test + with pytest.raises(Exception) as excinfo: + + _ = TrainSectionLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + augmentations=None, + seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"), + label_path=os.path.join(tmpdir, "volume_name", "labels.npy") + ) + assert "does not exist" in str(excinfo.value) + + +def test_TrainSectionLoaderWithDepth_should_load_with_one_train_and_label_file(tmpdir): + """ + Check for successful class instantiation w/ single npy file for train & label + """ + # Setup + os.makedirs(os.path.join(tmpdir, "volume_name")) + os.makedirs(os.path.join(tmpdir, "splits")) + + seimic = np.zeros([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seimic) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels) + + txt_path = os.path.join(tmpdir, "splits", 
"section_volume_name.txt") + open(txt_path, 'a').close() + + # Test + train_set = TrainSectionLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + augmentations=None, + seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"), + label_path=os.path.join(tmpdir, "volume_name", "labels.npy") + ) + + assert train_set.labels.shape == (IL, XL, D) + assert train_set.seismic.shape == (IL, 3, XL, D) + + +def test_TrainPatchLoaderWithDepth_should_fail_on_empty_file_names(tmpdir): + """ + Check for exception when files do not exist + """ + # Test + with pytest.raises(Exception) as excinfo: + + _ = TrainPatchLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + stride=25, + patch_size=100, + augmentations=None, + seismic_path = "", + label_path = "" + ) + assert "does not exist" in str(excinfo.value) + + +def test_TrainPatchLoaderWithDepth_should_fail_on_missing_seismic_file(tmpdir): + """ + Check for exception when training param is empty + """ + # Setup + os.makedirs(os.path.join(tmpdir, "volume_name")) + os.makedirs(os.path.join(tmpdir, "splits")) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels) + + txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt") + open(txt_path, 'a').close() + + # Test + with pytest.raises(Exception) as excinfo: + + _ = TrainPatchLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + stride=25, + patch_size=100, + augmentations=None, + seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"), + label_path=os.path.join(tmpdir, "volume_name", "labels.npy") + ) + assert "does not exist" in str(excinfo.value) + + +def test_TrainPatchLoaderWithDepth_should_fail_on_missing_label_file(tmpdir): + """ + Check for exception when training param is empty + """ + # Setup + os.makedirs(os.path.join(tmpdir, "volume_name")) + os.makedirs(os.path.join(tmpdir, "splits")) + + seimic = np.zeros([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seimic) + + txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt") + open(txt_path, 'a').close() + + # Test + with pytest.raises(Exception) as excinfo: + + _ = TrainPatchLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + stride=25, + patch_size=100, + augmentations=None, + seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"), + label_path=os.path.join(tmpdir, "volume_name", "labels.npy") + ) + assert "does not exist" in str(excinfo.value) + + +def test_TrainPatchLoaderWithDepth_should_load_with_one_train_and_label_file(tmpdir): + """ + Check for successful class instantiation w/ single npy file for train & label + """ + # Setup + os.makedirs(os.path.join(tmpdir, "volume_name")) + os.makedirs(os.path.join(tmpdir, "splits")) + + seimic = np.zeros([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seimic) + + labels = np.ones([IL, XL, D]) + generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels) + + txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt") + open(txt_path, 'a').close() + + # Test + train_set = TrainPatchLoaderWithDepth( + data_dir = tmpdir, + split = "volume_name", + is_transform=True, + stride=25, + patch_size=100, + augmentations=None, + seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"), + label_path=os.path.join(tmpdir, "volume_name", "labels.npy") + ) + + assert 
train_set.labels.shape == (IL, XL, D) + assert train_set.seismic.shape == (IL, XL, D) diff --git a/interpretation/deepseismic_interpretation/dutchf3/utils/batch.py b/interpretation/deepseismic_interpretation/dutchf3/utils/batch.py index 8ebc6790..110abaf5 100644 --- a/interpretation/deepseismic_interpretation/dutchf3/utils/batch.py +++ b/interpretation/deepseismic_interpretation/dutchf3/utils/batch.py @@ -182,63 +182,6 @@ def augment_flip(grid): return grid -def augment_stretch(grid, stretch_factor): - """ - Random stretch/scale - - Args: - grid: 3D coordinate grid of the voxel - stretch_factor: this is actually a boolean which triggers stretching - TODO: change this to just call the function and not do -1,1 in rand_float - - Returns: - stretched grid coordinates - """ - stretch = rand_float(-stretch_factor, stretch_factor) - grid *= 1 + stretch - return grid - - -def augment_rot_xy(grid, random_rot_xy): - """ - Random rotation - - Args: - grid: coordinate grid list of 3D points - random_rot_xy: this is actually a boolean which triggers rotation - TODO: change this to just call the function and not do -1,1 in rand_float - - Returns: - randomly rotated grid - """ - theta = np.deg2rad(rand_float(-random_rot_xy, random_rot_xy)) - x = grid[2, :] * np.cos(theta) - grid[1, :] * np.sin(theta) - y = grid[2, :] * np.sin(theta) + grid[1, :] * np.cos(theta) - grid[1, :] = x - grid[2, :] = y - return grid - - -def augment_rot_z(grid, random_rot_z): - """ - Random tilt around z-axis (dim-2) - - Args: - grid: coordinate grid list of 3D points - random_rot_z: this is actually a boolean which triggers rotation - TODO: change this to just call the function and not do -1,1 in rand_float - - Returns: - randomly tilted coordinate grid - """ - theta = np.deg2rad(rand_float(-random_rot_z, random_rot_z)) - z = grid[0, :] * np.cos(theta) - grid[1, :] * np.sin(theta) - x = grid[0, :] * np.sin(theta) + grid[1, :] * np.cos(theta) - grid[0, :] = z - grid[1, :] = x - return grid - - def trilinear_interpolation(input_array, indices): """ Linear interpolation @@ -343,63 +286,6 @@ def rand_bool(): return bool(np.random.randint(0, 2)) -def augment_stretch(grid, stretch_factor): - """ - Random stretch/scale - - Args: - grid: 3D coordinate grid of the voxel - stretch_factor: this is actually a boolean which triggers stretching - TODO: change this to just call the function and not do -1,1 in rand_float - - Returns: - stretched grid coordinates - """ - stretch = rand_float(-stretch_factor, stretch_factor) - grid *= 1 + stretch - return grid - - -def augment_rot_xy(grid, random_rot_xy): - """ - Random rotation - - Args: - grid: coordinate grid list of 3D points - random_rot_xy: this is actually a boolean which triggers rotation - TODO: change this to just call the function and not do -1,1 in rand_float - - Returns: - randomly rotated grid - """ - theta = np.deg2rad(rand_float(-random_rot_xy, random_rot_xy)) - x = grid[2, :] * np.cos(theta) - grid[1, :] * np.sin(theta) - y = grid[2, :] * np.sin(theta) + grid[1, :] * np.cos(theta) - grid[1, :] = x - grid[2, :] = y - return grid - - -def augment_rot_z(grid, random_rot_z): - """ - Random tilt around z-axis (dim-2) - - Args: - grid: coordinate grid list of 3D points - random_rot_z: this is actually a boolean which triggers rotation - TODO: change this to just call the function and not do -1,1 in rand_float - - Returns: - randomly tilted coordinate grid - """ - theta = np.deg2rad(rand_float(-random_rot_z, random_rot_z)) - z = grid[0, :] * np.cos(theta) - grid[1, :] * np.sin(theta) - 
x = grid[0, :] * np.sin(theta) + grid[1, :] * np.cos(theta) - grid[0, :] = z - grid[1, :] = x - return grid - - def trilinear_interpolation(input_array, indices): """ Linear interpolation diff --git a/interpretation/deepseismic_interpretation/models/texture_net.py b/interpretation/deepseismic_interpretation/models/texture_net.py index da5371d5..3bc3b1da 100644 --- a/interpretation/deepseismic_interpretation/models/texture_net.py +++ b/interpretation/deepseismic_interpretation/models/texture_net.py @@ -7,6 +7,7 @@ from torch import nn # TODO; set chanels from yaml config file +# issue: https://github.com/microsoft/seismic-deeplearning/issues/277 class TextureNet(nn.Module): def __init__(self, n_classes=2): super(TextureNet, self).__init__() diff --git a/interpretation/deepseismic_interpretation/penobscot/metrics.py b/interpretation/deepseismic_interpretation/penobscot/metrics.py index 846faacc..3411e145 100644 --- a/interpretation/deepseismic_interpretation/penobscot/metrics.py +++ b/interpretation/deepseismic_interpretation/penobscot/metrics.py @@ -18,7 +18,7 @@ def _torch_hist(label_true, label_pred, n_class): Returns: [type]: [description] """ - # TODO Add exceptions + assert len(label_true.shape) == 1, "Labels need to be 1D" assert len(label_pred.shape) == 1, "Predictions need to be 1D" mask = (label_true >= 0) & (label_true < n_class) @@ -34,6 +34,7 @@ def _default_tensor(image_height, image_width, pad_value=255): # TODO: make output transform unpad and scale down mask # scale up y_pred and remove padding +# issue: https://github.com/microsoft/seismic-deeplearning/issues/276 class InlineMeanIoU(Metric): """Compute Mean IoU for Inline @@ -95,6 +96,7 @@ def reset(self): def update(self, output): y_pred, y, ids, patch_locations = output # TODO: Make assertion exception + # issue: https://github.com/microsoft/seismic-deeplearning/issues/276 max_prediction = y_pred.max(1)[1].squeeze() assert y.shape == max_prediction.shape, "Shape not the same" diff --git a/scripts/gen_checkerboard.py b/scripts/gen_checkerboard.py new file mode 100644 index 00000000..3e0d0349 --- /dev/null +++ b/scripts/gen_checkerboard.py @@ -0,0 +1,197 @@ +#!/usr/bin/env python3 +""" Please see the def main() function for code description.""" + +""" libraries """ + +import numpy as np +import sys +import os + +np.set_printoptions(linewidth=200) +import logging + +# toggle to WARNING when running in production, or use CLI +logging.getLogger().setLevel(logging.DEBUG) +# logging.getLogger().setLevel(logging.WARNING) +import argparse + +parser = argparse.ArgumentParser() + +""" useful information when running from a GIT folder.""" +myname = os.path.realpath(__file__) +mypath = os.path.dirname(myname) +myname = os.path.basename(myname) + + +def make_box(n_inlines, n_crosslines, n_depth, box_size): + """ + Makes a 3D box in checkerboard pattern. 
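Before the full 3D version in `make_box` below, here is a tiny 2D sketch of the same checkerboard construction, using the script's WHITE = -1 / BLACK = 1 convention and a shrunken, hypothetical `box_size` so the output stays readable:

```
import numpy as np

WHITE, BLACK, box_size = -1, 1, 2            # box_size shrunk for illustration

zero_patch = np.ones((box_size, box_size)) * WHITE
one_patch = np.ones((box_size, box_size)) * BLACK
stride = np.hstack((zero_patch, one_patch))  # one white/black pair
band = np.hstack((stride,) * 2)              # tile along the crossline direction
checker = np.vstack((band, -1 * band))       # multiplying by -1 flips the colours on the next row
print(checker)
# [[-1. -1.  1.  1. -1. -1.  1.  1.]
#  [-1. -1.  1.  1. -1. -1.  1.  1.]
#  [ 1.  1. -1. -1.  1.  1. -1. -1.]
#  [ 1.  1. -1. -1.  1.  1. -1. -1.]]
```

`make_box` repeats exactly this in depth: the 2D image fills one slab of `box_size` slices, the sign-flipped image fills the next, and the stack is trimmed to the requested dimensions.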
+ + :param n_inlines: dim x + :param n_crosslines: dim y + :param n_depth: dim z + :return: numpy array + """ + # inline by crossline by depth + zero_patch = np.ones((box_size, box_size)) * WHITE + one_patch = np.ones((box_size, box_size)) * BLACK + + stride = np.hstack((zero_patch, one_patch)) + + # come up with a 2D inline image + nx, ny = stride.shape + + step_col = int(np.ceil(n_crosslines / float(ny))) + step_row = int(np.ceil(n_inlines / float(nx) / 2)) + + # move in the crossline direction + crossline_band = np.hstack((stride,) * step_col) + # multiplying by negative one flips the sign + neg_crossline_band = -1 * crossline_band + + checker_stripe = np.vstack((crossline_band, neg_crossline_band)) + + # move down a section + checker_image = np.vstack((checker_stripe,) * step_row) + + # trim excess + checker_image = checker_image[0:n_inlines, 0:n_crosslines] + + # now make a box with alternating checkers + checker_box = np.ones((n_inlines, n_crosslines, box_size * 2)) + checker_box[:, :, 0:box_size] = checker_image[:, :, np.newaxis] + # now invert the colors + checker_box[:, :, box_size:] = -1 * checker_image[:, :, np.newaxis] + + # stack boxes depth wise + step_depth = int(np.ceil(n_depth / float(box_size) / 2)) + final_box = np.concatenate((checker_box,) * step_depth, axis=2) + + # trim excess again + return final_box[0:n_inlines, 0:n_crosslines, 0:n_depth] + + +def mkdir(path): + """ + Create a directory helper function + """ + if not os.path.isdir(path): + os.mkdir(path) + + +def main(args): + """ + + Generates checkerboard dataset based on Dutch F3 in Alaudah format. + + Pre-requisite: valid Dutch F3 dataset in Alaudah format. + + """ + + logging.info("loading data") + + train_seismic = np.load(os.path.join(args.dataroot, "train", "train_seismic.npy")) + train_labels = np.load(os.path.join(args.dataroot, "train", "train_labels.npy")) + test1_seismic = np.load(os.path.join(args.dataroot, "test_once", "test1_seismic.npy")) + test1_labels = np.load(os.path.join(args.dataroot, "test_once", "test1_labels.npy")) + test2_seismic = np.load(os.path.join(args.dataroot, "test_once", "test2_seismic.npy")) + test2_labels = np.load(os.path.join(args.dataroot, "test_once", "test2_labels.npy")) + + assert train_seismic.shape == train_labels.shape + assert train_seismic.min() == WHITE + assert train_seismic.max() == BLACK + assert train_labels.min() == 0 + # this is the number of classes in Alaudah's Dutch F3 dataset + assert train_labels.max() == 5 + + assert test1_seismic.shape == test1_labels.shape + assert test1_seismic.min() == WHITE + assert test1_seismic.max() == BLACK + assert test1_labels.min() == 0 + # this is the number of classes in Alaudah's Dutch F3 dataset + assert test1_labels.max() == 5 + + assert test2_seismic.shape == test2_labels.shape + assert test2_seismic.min() == WHITE + assert test2_seismic.max() == BLACK + assert test2_labels.min() == 0 + # this is the number of classes in Alaudah's Dutch F3 dataset + assert test2_labels.max() == 5 + + logging.info("train checkerbox") + n_inlines, n_crosslines, n_depth = train_seismic.shape + checkerboard_train_seismic = make_box(n_inlines, n_crosslines, n_depth, args.box_size) + checkerboard_train_seismic = checkerboard_train_seismic.astype(train_seismic.dtype) + checkerboard_train_labels = checkerboard_train_seismic.astype(train_labels.dtype) + # labels are integers and start from zero + checkerboard_train_labels[checkerboard_train_seismic < WHITE_LABEL] = WHITE_LABEL + + # create checkerbox + logging.info("test1 checkerbox") + 
n_inlines, n_crosslines, n_depth = test1_seismic.shape + checkerboard_test1_seismic = make_box(n_inlines, n_crosslines, n_depth, args.box_size) + checkerboard_test1_seismic = checkerboard_test1_seismic.astype(test1_seismic.dtype) + checkerboard_test1_labels = checkerboard_test1_seismic.astype(test1_labels.dtype) + # labels are integers and start from zero + checkerboard_test1_labels[checkerboard_test1_seismic < WHITE_LABEL] = WHITE_LABEL + + logging.info("test2 checkerbox") + n_inlines, n_crosslines, n_depth = test2_seismic.shape + checkerboard_test2_seismic = make_box(n_inlines, n_crosslines, n_depth, args.box_size) + checkerboard_test2_seismic = checkerboard_test2_seismic.astype(test2_seismic.dtype) + checkerboard_test2_labels = checkerboard_test2_seismic.astype(test2_labels.dtype) + # labels are integers and start from zero + checkerboard_test2_labels[checkerboard_test2_seismic < WHITE_LABEL] = WHITE_LABEL + + logging.info("writing data to disk") + mkdir(args.dataout) + mkdir(os.path.join(args.dataout, "data")) + mkdir(os.path.join(args.dataout, "data", "splits")) + mkdir(os.path.join(args.dataout, "data", "train")) + mkdir(os.path.join(args.dataout, "data", "test_once")) + + np.save(os.path.join(args.dataout, "data", "train", "train_seismic.npy"), checkerboard_train_seismic) + np.save(os.path.join(args.dataout, "data", "train", "train_labels.npy"), checkerboard_train_labels) + + np.save(os.path.join(args.dataout, "data", "test_once", "test1_seismic.npy"), checkerboard_test1_seismic) + np.save(os.path.join(args.dataout, "data", "test_once", "test1_labels.npy"), checkerboard_test1_labels) + + np.save(os.path.join(args.dataout, "data", "test_once", "test2_seismic.npy"), checkerboard_test2_seismic) + np.save(os.path.join(args.dataout, "data", "test_once", "test2_labels.npy"), checkerboard_test2_labels) + + logging.info("all done") + + +""" GLOBAL VARIABLES """ +WHITE = -1 +BLACK = 1 +WHITE_LABEL = 0 + +parser.add_argument("--dataroot", help="Root location of the input data", type=str, required=True) +parser.add_argument("--dataout", help="Root location of the output data", type=str, required=True) +parser.add_argument("--box_size", help="Size of the bounding box", type=int, required=False, default=100) +parser.add_argument("--debug", help="Turn on debug mode", type=bool, required=False, default=False) + +""" main wrapper with profiler """ +if __name__ == "__main__": + main(parser.parse_args()) + +# pretty printing of the stack +""" + try: + logging.info('before main') + main(parser.parse_args()) + logging.info('after main') + except: + for frame in traceback.extract_tb(sys.exc_info()[2]): + fname,lineno,fn,text = frame + print ("Error in %s on line %d" % (fname, lineno)) +""" +# optionally enable profiling information +# import cProfile +# name = +# cProfile.run('main.run()', name + '.prof') +# import pstats +# p = pstats.Stats(name + '.prof') +# p.sort_stats('cumulative').print_stats(10) +# p.sort_stats('time').print_stats() diff --git a/scripts/logging.conf b/scripts/logging.conf new file mode 100644 index 00000000..c4037cc2 --- /dev/null +++ b/scripts/logging.conf @@ -0,0 +1,34 @@ +[loggers] +keys=root,__main__,event_handlers + +[handlers] +keys=consoleHandler + +[formatters] +keys=simpleFormatter + +[logger_root] +level=INFO +handlers=consoleHandler + +[logger___main__] +level=DEBUG +handlers=consoleHandler +qualname=__main__ +propagate=0 + +[logger_event_handlers] +level=INFO +handlers=consoleHandler +qualname=event_handlers +propagate=0 + +[handler_consoleHandler] 
+class=StreamHandler +level=DEBUG +formatter=simpleFormatter +args=(sys.stdout,) + +[formatter_simpleFormatter] +format=%(asctime)s - %(name)s - %(levelname)s - %(message)s + diff --git a/scripts/prepare_dutchf3.py b/scripts/prepare_dutchf3.py index 40d9f4e6..7157214d 100755 --- a/scripts/prepare_dutchf3.py +++ b/scripts/prepare_dutchf3.py @@ -1,7 +1,7 @@ # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. # commitHash: c76bf579a0d5090ebd32426907d051d499f3e847 -# url: https://github.com/olivesgatech/facies_classification_benchmark +# url: https://github.com/yalaudah/facies_classification_benchmark """Script to generate train and validation sets for Netherlands F3 dataset """ import itertools @@ -9,7 +9,7 @@ import logging.config import math import warnings -from os import path +from os import path, mkdir import fire import numpy as np @@ -24,158 +24,181 @@ def _get_labels_path(data_dir): return path.join(data_dir, "train", "train_labels.npy") -def _write_split_files(splits_path, train_list, test_list, loader_type): +def get_split_function(loader_type): + return _LOADER_TYPES.get(loader_type, split_patch_train_val) + + +def run_split_func(loader_type, *args, **kwargs): + split_func = get_split_function(loader_type) + split_func(*args, **kwargs) + + +def _write_split_files(splits_path, train_list, val_list, loader_type): + if not path.isdir(splits_path): + mkdir(splits_path) file_object = open(path.join(splits_path, loader_type + "_train_val.txt"), "w") - file_object.write("\n".join(train_list + test_list)) + file_object.write("\n".join(train_list + val_list)) file_object.close() file_object = open(path.join(splits_path, loader_type + "_train.txt"), "w") file_object.write("\n".join(train_list)) file_object.close() file_object = open(path.join(splits_path, loader_type + "_val.txt"), "w") - file_object.write("\n".join(test_list)) + file_object.write("\n".join(val_list)) file_object.close() -def _get_aline_range(aline, per_val): - # Inline sections - test_aline = math.floor(aline * per_val / 2) - test_aline_range = itertools.chain(range(0, test_aline), range(aline - test_aline, aline)) - train_aline_range = range(test_aline, aline - test_aline) +def _get_aline_range(aline, per_val, section_stride=1): + """ + Args: + aline (int): number of seismic sections in the inline or + crossline directions + per_val (float): the fraction of the volume to use for + validation. Defaults to 0.2. + section_stride (int): the stride of the sections in the training data. + If greater than 1, this function will skip (section_stride-1) between each section + Defaults to 1, do not skip any section. + """ + try: + if section_stride < 1: + raise ValueError("section_stride cannot be zero or a negative number") + + if per_val < 0 or per_val >= 1: + raise ValueError("Validation percentage (per_val) should be a number in the range [0,1).") - return train_aline_range, test_aline_range + val_aline = math.floor(aline * per_val / 2) + val_range = itertools.chain(range(0, val_aline), range(aline - val_aline, aline)) + train_range = range(val_aline, aline - val_aline, section_stride) + return train_range, val_range + except (Exception, ValueError): + raise -def split_section_train_val(data_dir, per_val=0.2, log_config=None): + +def split_section_train_val(label_file, split_direction, per_val=0.2, log_config=None, section_stride=1): """Generate train and validation files for Netherlands F3 dataset. 
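As a worked example of the `_get_aline_range` helper above, with hypothetical numbers: for 100 sections, `per_val=0.2` and `section_stride=2`, ten sections from each end of the volume go to validation and training keeps every second section in between.

```
import itertools
import math

aline, per_val, section_stride = 100, 0.2, 2        # hypothetical inputs
val_aline = math.floor(aline * per_val / 2)         # -> 10
val_range = itertools.chain(range(0, val_aline), range(aline - val_aline, aline))
train_range = range(val_aline, aline - val_aline, section_stride)

print(list(val_range))    # [0, 1, ..., 9, 90, 91, ..., 99]
print(list(train_range))  # [10, 12, 14, ..., 88]
```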
Args: - data_dir (str): data directory path - per_val (float, optional): the fraction of the volume to use for validation. - Defaults to 0.2. + label_file (str): npy files with labels. Stored in data_dir + split_direction (str): Direction in which to split the data into + train & val. Use "inline" or "crossline". + per_val (float, optional): the fraction of the volume to use for + validation. Defaults to 0.2. + log_config (str): path to log configurations + section_stride (int): the stride of the sections in the training data. + If greater than 1, this function will skip (section_stride-1) between each section + Defaults to 1, do not skip any section. """ if log_config is not None: logging.config.fileConfig(log_config) logger = logging.getLogger(__name__) - logger.info("Splitting data into sections .... ") - logger.info(f"Reading data from {data_dir}") + logger.info(f"Loading {label_file}") - labels_path = _get_labels_path(data_dir) - logger.info(f"Loading {labels_path}") - labels = np.load(labels_path) + labels = np.load(label_file) logger.debug(f"Data shape [iline|xline|depth] {labels.shape}") + iline, xline, _ = labels.shape # TODO: Must make sure in the future, all new datasets conform to this order. - iline, xline, _ = labels.shape - # Inline sections - train_iline_range, test_iline_range = _get_aline_range(iline, per_val) - train_i_list = ["i_" + str(i) for i in train_iline_range] - test_i_list = ["i_" + str(i) for i in test_iline_range] + logger.info(f"Splitting in {split_direction} direction.. ") + if split_direction.lower() == "inline": + num_sections = iline + index = "i" + elif split_direction.lower() == "crossline": + num_sections = xline + index = "x" + else: + raise ValueError(f"Unknown split_direction {split_direction}") - # Xline sections - train_xline_range, test_xline_range = _get_aline_range(xline, per_val) - train_x_list = ["x_" + str(x) for x in train_xline_range] - test_x_list = ["x_" + str(x) for x in test_xline_range] + train_range, val_range = _get_aline_range(num_sections, per_val, section_stride) + train_list = [f"{index}_" + str(section) for section in train_range] + val_list = [f"{index}_" + str(section) for section in val_range] - train_list = train_x_list + train_i_list - test_list = test_x_list + test_i_list - - # write to files to disk - splits_path = _get_splits_path(data_dir) - _write_split_files(splits_path, train_list, test_list, "section") + return train_list, val_list -def split_patch_train_val(data_dir, stride, patch, per_val=0.2, log_config=None): +def split_patch_train_val( + label_file, patch_stride, patch_size, split_direction, section_stride=1, per_val=0.2, log_config=None, +): """Generate train and validation files for Netherlands F3 dataset. Args: - data_dir (str): data directory path - stride (int): stride to use when sectioning of the volume - patch (int): size of patch to extract - per_val (float, optional): the fraction of the volume to use for validation. - Defaults to 0.2. + label_file (str): npy files with labels. Stored in data_dir + patch_stride (int): stride to use when sampling patches + patch_size (int): size of patch to extract + split_direction (str): Direction in which to split the data into + train & val. Use "inline" or "crossline". + section_stride (int): increment to the slices count. + If section_stride > 1 the function will skip: + section_stride - 1 sections in the training data. + Defaults to 1, do not skip any slice. + per_val (float, optional): the fraction of the volume to use for + validation. Defaults to 0.2. 
+ log_config (str): path to log configurations """ if log_config is not None: logging.config.fileConfig(log_config) logger = logging.getLogger(__name__) - - logger.info("Splitting data into patches .... ") - logger.info(f"Reading data from {data_dir}") - - labels_path = _get_labels_path(data_dir) - logger.info(f"Loading {labels_path}") - labels = np.load(labels_path) + logger.info(f"Splitting data into patches along {split_direction} direction .. ") + logger.info(f"Loading {label_file}") + labels = np.load(label_file) logger.debug(f"Data shape [iline|xline|depth] {labels.shape}") iline, xline, depth = labels.shape - # Inline sections - train_iline_range, test_iline_range = _get_aline_range(iline, per_val) - # Xline sections - train_xline_range, test_xline_range = _get_aline_range(xline, per_val) - - # Generate patches from sections - # Process inlines - horz_locations = range(0, xline - patch, stride) - vert_locations = range(0, depth - patch, stride) - logger.debug("Generating Inline patches") - logger.debug(horz_locations) + split_direction = split_direction.lower() + if split_direction == "inline": + num_sections, section_length = iline, xline + elif split_direction == "crossline": + num_sections, section_length = xline, iline + else: + raise ValueError(f"Unknown split_direction: {split_direction}") + + train_range, val_range = _get_aline_range(num_sections, per_val, section_stride) + vert_locations = range(0, depth, patch_stride) + horz_locations = range(0, section_length, patch_stride) logger.debug(vert_locations) + logger.debug(horz_locations) - def _i_extract_patches(iline_range, horz_locations, vert_locations): - for i in iline_range: - locations = ([j, k] for j in horz_locations for k in vert_locations) - for j, k in locations: - yield "i_" + str(i) + "_" + str(j) + "_" + str(k) - - test_i_list = list(_i_extract_patches(test_iline_range, horz_locations, vert_locations)) - train_i_list = list(_i_extract_patches(train_iline_range, horz_locations, vert_locations)) - - # Process crosslines - horz_locations = range(0, iline - patch, stride) - vert_locations = range(0, depth - patch, stride) - - def _x_extract_patches(xline_range, horz_locations, vert_locations): - for j in xline_range: - locations = ([i, k] for i in horz_locations for k in vert_locations) - for i, k in locations: - yield "x_" + str(i) + "_" + str(j) + "_" + str(k) - - test_x_list = list(_x_extract_patches(test_xline_range, horz_locations, vert_locations)) - train_x_list = list(_x_extract_patches(train_xline_range, horz_locations, vert_locations)) - - train_list = train_x_list + train_i_list - test_list = test_x_list + test_i_list - - # write to files to disk: - # NOTE: This isn't quite right we should calculate the patches again for the whole volume - splits_path = _get_splits_path(data_dir) - _write_split_files(splits_path, train_list, test_list, "patch") - - -_LOADER_TYPES = {"section": split_section_train_val, "patch": split_patch_train_val} + # Process sections: + def _extract_patches(sections_range, direction, horz_locations, vert_locations): + locations = itertools.product(sections_range, horz_locations, vert_locations) + if direction == "inline": + idx, xdx, ddx = 0, 1, 2 + dir = "i" + elif direction == "crossline": + idx, xdx, ddx = 1, 0, 2 + dir = "x" + for loc in locations: # iline xline depth + yield f"{dir}_" + str(loc[idx]) + "_" + str(loc[xdx]) + "_" + str(loc[ddx]) -def get_split_function(loader_type): - return _LOADER_TYPES.get(loader_type, split_patch_train_val) + # Process sections - train + 
logger.debug("Generating patches..") + train_list = list(_extract_patches(train_range, split_direction, horz_locations, vert_locations)) + val_list = list(_extract_patches(val_range, split_direction, horz_locations, vert_locations)) + logger.debug(train_range) + logger.debug(val_range) + logger.debug(train_list) + logger.debug(val_list) -def run_split_func(loader_type, *args, **kwargs): - split_func = get_split_function(loader_type) - split_func(*args, **kwargs) + return train_list, val_list -def split_alaudah_et_al_19(data_dir, stride, fraction_validation=0.2, loader_type="patch", log_config=None): +def split_alaudah_et_al_19( + data_dir, patch_stride, patch_size, fraction_validation=0.2, loader_type="patch", log_config=None +): """Generate train and validation files (with overlap) for Netherlands F3 dataset. - The original split method from https://github.com/olivesgatech/facies_classification_benchmark + The original split method from https://github.com/yalaudah/facies_classification_benchmark DON'T USE, SEE NOTES BELOW Args: data_dir (str): data directory path - stride (int): stride to use when sectioning of the volume + patch_stride (int): stride to use when sampling patches + patch_size (int): size of patch to extract fraction_validation (float, optional): the fraction of the volume to use for validation. Defaults to 0.2. loader_type (str, optional): type of data loader, can be "patch" or "section". @@ -214,8 +237,8 @@ def split_alaudah_et_al_19(data_dir, stride, fraction_validation=0.2, loader_typ x_list = ["x_" + str(x) for x in range(xline)] elif loader_type == "patch": i_list = [] - horz_locations = range(0, xline - stride, stride) - vert_locations = range(0, depth - stride, stride) + horz_locations = range(0, xline - patch_size + 1, patch_stride) + vert_locations = range(0, depth - patch_size + 1, patch_stride) logger.debug("Generating Inline patches") logger.debug(horz_locations) logger.debug(vert_locations) @@ -230,8 +253,8 @@ def split_alaudah_et_al_19(data_dir, stride, fraction_validation=0.2, loader_typ i_list = list(itertools.chain(*i_list)) x_list = [] - horz_locations = range(0, iline - stride, stride) - vert_locations = range(0, depth - stride, stride) + horz_locations = range(0, iline - patch_size + 1, patch_stride) + vert_locations = range(0, depth - patch_size + 1, patch_stride) for j in range(xline): # for every xline: # images are references by top-left corner: @@ -244,48 +267,134 @@ def split_alaudah_et_al_19(data_dir, stride, fraction_validation=0.2, loader_typ list_train_val = i_list + x_list - # create train and test splits: - train_list, test_list = train_test_split(list_train_val, test_size=fraction_validation, shuffle=True) + # create train and validation splits: + train_list, val_list = train_test_split(list_train_val, val_size=fraction_validation, shuffle=True) # write to files to disk: splits_path = _get_splits_path(data_dir) - _write_split_files(splits_path, train_list, test_list, loader_type) + _write_split_files(splits_path, train_list, val_list, loader_type) -# TODO: Try https://github.com/Chilipp/docrep for doscstring reuse class SplitTrainValCLI(object): - def section(self, data_dir, per_val=0.2, log_config=None): - """Generate section based train and validation files for Netherlands F3 dataset. + def section( + self, + data_dir, + label_file, + split_direction, + per_val=0.2, + log_config="logging.conf", + output_dir=None, + section_stride=1, + ): + """Generate section based train and validation files for Netherlands F3 + dataset. 
Args: data_dir (str): data directory path - per_val (float, optional): the fraction of the volume to use for validation. - Defaults to 0.2. + output_dir (str): directory under data_dir to store the split files + label_file (str): path to the .npy file with labels (stored in data_dir) + split_direction (str): Direction in which to split the data into + train & val. Use "inline", "crossline", or "both". + per_val (float, optional): the fraction of the volume to use for + validation. Defaults to 0.2. log_config (str): path to log configurations + section_stride (int): the stride of the sections in the training data. + If greater than 1, (section_stride - 1) sections are skipped between each training section. + Defaults to 1 (no sections are skipped). """ - return split_section_train_val(data_dir, per_val=per_val, log_config=log_config) - - def patch(self, data_dir, stride, patch, per_val=0.2, log_config=None): - """Generate patch based train and validation files for Netherlands F3 dataset. + if data_dir is not None: + label_file = path.join(data_dir, label_file) + output_dir = path.join(data_dir, output_dir) + + if split_direction.lower() == "both": + train_list_i, val_list_i = split_section_train_val( + label_file, "inline", per_val, log_config, section_stride + ) + train_list_x, val_list_x = split_section_train_val( + label_file, "crossline", per_val, log_config, section_stride + ) + # concatenate the two lists: + train_list = train_list_i + train_list_x + val_list = val_list_i + val_list_x + elif split_direction.lower() in ["inline", "crossline"]: + train_list, val_list = split_section_train_val( + label_file, split_direction, per_val, log_config, section_stride + ) + else: + raise ValueError(f"Unknown split_direction: {split_direction}") + # write to files to disk + _write_split_files(output_dir, train_list, val_list, "section") + + def patch( + self, + label_file, + stride, + patch_size, + split_direction, + per_val=0.2, + log_config="logging.conf", + data_dir=None, + output_dir=None, + section_stride=1, + ): + """Generate patch-based train and validation files for Netherlands F3 dataset. Args: data_dir (str): data directory path - stride (int): stride to use when sectioning of the volume - patch (int): size of patch to extract - per_val (float, optional): the fraction of the volume to use for validation. - Defaults to 0.2. + output_dir (str): directory under data_dir to store the split files + label_file (str): path to the .npy file with labels (stored in data_dir) + stride (int): stride to use when sampling patches + patch_size (int): size of patch to extract + per_val (float, optional): the fraction of the volume to use for + validation. Defaults to 0.2. log_config (str): path to log configurations + split_direction (str): Direction in which to split the data into + train & val. Use "inline", "crossline", or "both". + section_stride (int): the stride of the sections in the training data. + If greater than 1, (section_stride - 1) sections are skipped between each training section. + Defaults to 1 (no sections are skipped). 
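+
+        Example (mirrors the CLI usage shown under __main__ below; paths and
+        values are illustrative):
+            python prepare_dutchf3.py split_train_val patch --data_dir=data \
+                --label_file=label_file.npy --output_dir=splits --stride=50 \
+                --patch_size=100 --split_direction=both --section_stride=2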
""" - return split_patch_train_val(data_dir, stride, patch, per_val=per_val, log_config=log_config) + if data_dir is not None: + label_file = path.join(data_dir, label_file) + output_dir = path.join(data_dir, output_dir) + + if split_direction.lower() == "both": + train_list_i, val_list_i = split_patch_train_val( + label_file, stride, patch_size, "inline", section_stride, per_val, log_config + ) + + train_list_x, val_list_x = split_patch_train_val( + label_file, stride, patch_size, "crossline", section_stride, per_val, log_config + ) + # concatenate the two lists: + train_list = train_list_i + train_list_x + val_list = val_list_i + val_list_x + elif split_direction.lower() in ["inline", "crossline"]: + train_list, val_list = split_patch_train_val( + label_file, stride, patch_size, split_direction, section_stride, per_val, log_config + ) + else: + raise ValueError(f"Unknown split_direction: {split_direction}") + + # write to files to disk: + _write_split_files(output_dir, train_list, val_list, "patch") + print(f"Successfully created the splits files in {output_dir}") +_LOADER_TYPES = {"section": split_section_train_val, "patch": split_patch_train_val} + if __name__ == "__main__": """Example: - python prepare_data.py split_train_val section --data-dir=/mnt/dutch + python prepare_data.py split_train_val section --data_dir=data \ + --label_file=label_file.npy --output_dir=splits --split_direction=both --section_stride=2 or - python prepare_data.py split_train_val patch --data-dir=/mnt/dutch --stride=50 --patch=100 - + python prepare_dutchf3.py split_train_val patch --data_dir=data \ + --label_file=label_file.npy --output_dir=splits --stride=50 \ + --patch_size=100 --split_direction=both --section_stride=2 """ fire.Fire( - {"split_train_val": SplitTrainValCLI, "split_alaudah_et_al_19": split_alaudah_et_al_19,} + {"split_train_val": SplitTrainValCLI} + # commenting the following line as this was not updated with + # the new parameters names + # "split_alaudah_et_al_19": split_alaudah_et_al_19} ) diff --git a/scripts/prepare_penobscot.py b/scripts/prepare_penobscot.py index 754993be..8f164d82 100755 --- a/scripts/prepare_penobscot.py +++ b/scripts/prepare_penobscot.py @@ -1,7 +1,7 @@ # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. 
# commitHash: c76bf579a0d5090ebd32426907d051d499f3e847 -# url: https://github.com/olivesgatech/facies_classification_benchmark +# url: https://github.com/yalaudah/facies_classification_benchmark """Script to generate train and validation sets for Netherlands F3 dataset """ import itertools diff --git a/scripts/run_all.sh b/scripts/run_all.sh new file mode 100755 index 00000000..82a26cf7 --- /dev/null +++ b/scripts/run_all.sh @@ -0,0 +1,112 @@ +#!/bin/bash + +# specify absolute locations to your models and data +MODEL_ROOT="/your/model/root" +DATA_ROOT="/your/data/root" + +# specify pretrained HRNet backbone +PRETRAINED_HRNET="${MODEL_ROOT}/hrnetv2_w48_imagenet_pretrained.pth" +DATA_F3="${DATA_ROOT}/dutchf3/data" +DATA_PENOBSCOT="${DATA_ROOT}/penobscot" + +# subdirectory where results are written +OUTPUT_DIR='output' + +# bug to fix conda not launching from a bash shell +source /data/anaconda/etc/profile.d/conda.sh +conda activate seismic-interpretation + +cd experiments/interpretation/penobscot/local + +# Penobscot seresnet unet with section depth +export CUDA_VISIBLE_DEVICES=0 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_PENOBSCOT}" \ + 'TRAIN.DEPTH' 'section' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg "configs/seresnet_unet.yaml" > seresnet_unet.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +# Penobscot hrnet with section depth +export CUDA_VISIBLE_DEVICES=1 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_PENOBSCOT}" \ + 'TRAIN.DEPTH' 'section' \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/hrnet.yaml > hrnet.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +cd ../../dutchf3_patch/local + +# patch based without skip connections +export CUDA_VISIBLE_DEVICES=2 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'none' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/patch_deconvnet.yaml > patch_deconvnet.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +# patch based with skip connections +export CUDA_VISIBLE_DEVICES=3 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'none' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/patch_deconvnet_skip.yaml > patch_deconvnet_skip.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +# squeeze excitation resnet unet + section depth +export CUDA_VISIBLE_DEVICES=4 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'section' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/seresnet_unet.yaml > seresnet_unet.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +# HRNet + patch depth +export CUDA_VISIBLE_DEVICES=5 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'patch' \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'patch_depth' \ + --cfg=configs/hrnet.yaml > hrnet_patch.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +# HRNet + section depth +export CUDA_VISIBLE_DEVICES=6 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'section' \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 
'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/hrnet.yaml > hrnet_section.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +cd ../../dutchf3_section/local + +# and finally do a section-based model for comparison +# (deconv with skip connections and no depth) +export CUDA_VISIBLE_DEVICES=7 +nohup time python train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'none' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/section_deconvnet_skip.yaml > section_deconvnet_skip.log 2>&1 & +# wait for python to pick up the runtime env before switching it +sleep 1 + +unset CUDA_VISIBLE_DEVICES + +echo "LAUNCHED ALL LOCAL JOBS" + diff --git a/scripts/run_distributed.sh b/scripts/run_distributed.sh new file mode 100755 index 00000000..80f24bae --- /dev/null +++ b/scripts/run_distributed.sh @@ -0,0 +1,59 @@ +#!/bin/bash + +# number of GPUs to train on +NGPU=8 +# specify pretrained HRNet backbone +PRETRAINED_HRNET='/home/maxkaz/models/hrnetv2_w48_imagenet_pretrained.pth' +# DATA_F3='/home/alfred/data/dutch_f3/data' +# DATA_PENOBSCOT='/home/maxkaz/data/penobscot' +DATA_F3='/storage/data/dutchf3/data' +DATA_PENOBSCOT='/storage/data/penobscot' +# subdirectory where results are written +OUTPUT_DIR='output' + +unset CUDA_VISIBLE_DEVICES +# bug to fix conda not launching from a bash shell +source /data/anaconda/etc/profile.d/conda.sh +conda activate seismic-interpretation +export PYTHONPATH=/storage/repos/forks/seismic-deeplearning-1/interpretation:$PYTHONPATH + +cd experiments/interpretation/dutchf3_patch/distributed/ + +# patch based without skip connections +nohup time python -m torch.distributed.launch --nproc_per_node=${NGPU} train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'none' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/patch_deconvnet.yaml > patch_deconvnet.log 2>&1 + +# patch based with skip connections +nohup time python -m torch.distributed.launch --nproc_per_node=${NGPU} train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'none' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/patch_deconvnet_skip.yaml > patch_deconvnet_skip.log 2>&1 + +# squeeze excitation resnet unet + section depth +nohup time python -m torch.distributed.launch --nproc_per_node=${NGPU} train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'section' \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/seresnet_unet.yaml > seresnet_unet.log 2>&1 + +# HRNet + patch depth +nohup time python -m torch.distributed.launch --nproc_per_node=${NGPU} train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'patch' \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'patch_depth' \ + --cfg=configs/hrnet.yaml > hrnet_patch.log 2>&1 + +# HRNet + section depth +nohup time python -m torch.distributed.launch --nproc_per_node=${NGPU} train.py \ + 'DATASET.ROOT' "${DATA_F3}" \ + 'TRAIN.DEPTH' 'section' \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + 'OUTPUT_DIR' "${OUTPUT_DIR}" 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/hrnet.yaml > hrnet_section.log 2>&1 + +echo "TADA" diff --git a/scripts/test_all.sh b/scripts/test_all.sh new file mode 100755 index 00000000..23427136 --- /dev/null +++ b/scripts/test_all.sh @@ -0,0 +1,201 @@ +#!/bin/bash + +# specify absolute locations to your models, data and storage +MODEL_ROOT="/your/model/root" +DATA_ROOT="/your/data/root" +STORAGE_ROOT="/your/storage/root" 
+ +# specify pretrained HRNet backbone +PRETRAINED_HRNET="${MODEL_ROOT}/hrnetv2_w48_imagenet_pretrained.pth" +DATA_F3="${DATA_ROOT}/dutchf3/data" +DATA_PENOBSCOT="${DATA_ROOT}/penobscot" +# name of your git branch which you ran the training code from +BRANCH="your/git/branch/with/slashes/if/they/exist/in/branch/name" + +# name of directory where results are kept +OUTPUT_DIR="output" + +# directory where to copy pre-trained models to +OUTPUT_PRETRAINED="${STORAGE_ROOT}/pretrained_models/" + +if [ -d ${OUTPUT_PRETRAINED} ]; then + echo "erasing pre-trained models in ${OUTPUT_PRETRAINED}" + rm -rf "${OUTPUT_PRETRAINED}" +fi + +mkdir -p "${OUTPUT_PRETRAINED}" +echo "Pre-trained models will be copied to ${OUTPUT_PRETRAINED}" + +# bug to fix conda not launching from a bash shell +source /data/anaconda/etc/profile.d/conda.sh +conda activate seismic-interpretation + +cd experiments/interpretation/penobscot/local + +# Penobscot seresnet unet with section depth +export CUDA_VISIBLE_DEVICES=0 +CONFIG_NAME='seresnet_unet' +# master +# model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/resnet_unet/*/section_depth/*.pth | head -1) +# new staging structure +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/section_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/penobscot_seresnetunet_patch_section_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_PENOBSCOT}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_test.log 2>&1 & +sleep 1 + +# Penobscot hrnet with section depth +export CUDA_VISIBLE_DEVICES=1 +CONFIG_NAME='hrnet' +# master +# model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/seg_hrnet/*/section_depth/*.pth | head -1) +# new staging structure +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/section_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/penobscot_hrnet_patch_section_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_PENOBSCOT}" 'TEST.MODEL_PATH' "${model}" \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_test.log 2>&1 & +sleep 1 + +cd ../../dutchf3_patch/local + +# patch based without skip connections +export CUDA_VISIBLE_DEVICES=2 +CONFIG_NAME='patch_deconvnet' +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/no_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_deconvnet_patch_no_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_test.log 2>&1 & +sleep 1 + +# patch based with skip connections +export CUDA_VISIBLE_DEVICES=3 +CONFIG_NAME='patch_deconvnet_skip' +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/no_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_deconvnetskip_patch_no_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_test.log 2>&1 & +sleep 1 + +# squeeze excitation resnet unet + section depth +export CUDA_VISIBLE_DEVICES=4 +CONFIG_NAME='seresnet_unet' +# master +# model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/resnet_unet/*/section_depth/*.pth | head -1) +# staging +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/section_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_seresnetunet_patch_section_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_test.log 2>&1 & +sleep 1 + +# 
HRNet + patch depth +export CUDA_VISIBLE_DEVICES=5 +CONFIG_NAME='hrnet' +# master +# model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/seg_hrnet/*/patch_depth/*.pth | head -1) +# staging +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/$CONFIG_NAME/patch_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_hrnet_patch_patch_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_patch_test.log 2>&1 & +sleep 1 + +# HRNet + section depth +export CUDA_VISIBLE_DEVICES=6 +CONFIG_NAME='hrnet' +# master +# model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/seg_hrnet/*/section_depth/*.pth | head -1) +# staging +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/section_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_hrnet_patch_section_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_section_test.log 2>&1 & +sleep 1 + +cd ../../dutchf3_section/local + +# and finally do a section-based model for comparison +# (deconv with skip connections and no depth) +export CUDA_VISIBLE_DEVICES=7 +CONFIG_NAME='section_deconvnet_skip' +model=$(ls -td ${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/no_depth/*/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_deconvnetskip_section_no_depth.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_test.log 2>&1 & +sleep 1 + +echo "Waiting for all local runs to finish" +wait + +# scoring scripts are in the local folder +# models are in the distributed folder +cd ../../dutchf3_patch/local + +# patch based without skip connections +export CUDA_VISIBLE_DEVICES=2 +CONFIG_NAME='patch_deconvnet' +model=$(ls -td ../distributed/${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/*/no_depth/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_deconvnet_patch_no_depth_distributed.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_distributed_test.log 2>&1 & +sleep 1 + +# patch based with skip connections +export CUDA_VISIBLE_DEVICES=3 +CONFIG_NAME='patch_deconvnet_skip' +model=$(ls -td ../distributed/${OUTPUT_DIR}/${BRANCH}/*/${CONFIG_NAME}/*/no_depth/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_deconvnetskip_patch_no_depth_distributed.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_distributed_test.log 2>&1 & +sleep 1 + +# squeeze excitation resnet unet + section depth +export CUDA_VISIBLE_DEVICES=4 +CONFIG_NAME='seresnet_unet' +model=$(ls -td ../distributed/${OUTPUT_DIR}/${BRANCH}/*/resnet_unet/*/section_depth/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_seresnetunet_patch_section_depth_distributed.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_distributed_test.log 2>&1 & +sleep 1 + +# HRNet + patch depth +export CUDA_VISIBLE_DEVICES=5 +CONFIG_NAME='hrnet' +model=$(ls -td ../distributed/${OUTPUT_DIR}/${BRANCH}/*/seg_hrnet/*/patch_depth/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_hrnet_patch_patch_depth_distributed.pth +nohup time python test.py \ + 'DATASET.ROOT' 
"${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_distributed_test.log 2>&1 & +sleep 1 + +# HRNet + section depth +export CUDA_VISIBLE_DEVICES=6 +CONFIG_NAME='hrnet' +model=$(ls -td ../distributed/${OUTPUT_DIR}/${BRANCH}/*/seg_hrnet/*/section_depth/*.pth | head -1) +cp $model ${OUTPUT_PRETRAINED}/dutchf3_hrnet_patch_section_depth_distributed.pth +nohup time python test.py \ + 'DATASET.ROOT' "${DATA_F3}" 'TEST.MODEL_PATH' "${model}" \ + 'MODEL.PRETRAINED' "${PRETRAINED_HRNET}" \ + --cfg "configs/${CONFIG_NAME}.yaml" > ${CONFIG_NAME}_distributed_test.log 2>&1 & +sleep 1 + +echo "Waiting for all distributed runs to finish" + +wait + +echo "TADA" diff --git a/tests/cicd/aml_build.yml b/tests/cicd/aml_build.yml new file mode 100644 index 00000000..a443e124 --- /dev/null +++ b/tests/cicd/aml_build.yml @@ -0,0 +1,54 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +# Pull request against these branches will trigger this build +pr: +- master +- staging +- contrib + +# Any commit to this branch will trigger the build. +trigger: +- master +- staging +- contrib + +jobs: + +# partially disable setup for now - done manually on build VM +- job: setup + timeoutInMinutes: 10 + displayName: Setup + pool: + name: deepseismicagentpool + + steps: + - bash: | + # terminate as soon as any internal script fails + set -e + + echo "Running setup..." + pwd + ls + git branch + uname -ra + + # ENABLE ALL FOLLOWING CODE WHEN YOU'RE READY TO ADD AML BUILD - disabled right now + # ./scripts/env_reinstall.sh + # use hardcoded root for now because not sure how env changes under ADO policy + # DATA_ROOT="/home/alfred/data_dynamic" + # ./tests/cicd/src/scripts/get_data_for_builds.sh ${DATA_ROOT} + # copy your model files like so - using dummy file to illustrate + # azcopy --quiet --source:https://$(storagename).blob.core.windows.net/models/model --source-key $(storagekey) --destination /home/alfred/models/your_model_name + +- job: AML_job_placeholder + dependsOn: setup + timeoutInMinutes: 5 + displayName: AML job placeholder + pool: + name: deepseismicagentpool + steps: + - bash: | + # UNCOMMENT THIS WHEN YOU HAVE UNCOMMENTED THE SETUP JOB + # source activate seismic-interpretation + echo "TADA!!" diff --git a/tests/cicd/component_governance.yml b/tests/cicd/component_governance.yml index cae6b7a9..959bd056 100644 --- a/tests/cicd/component_governance.yml +++ b/tests/cicd/component_governance.yml @@ -10,13 +10,17 @@ pr: - master - staging +- contrib +- correctness trigger: - master - staging +- contrib +- correctness pool: - vmImage: 'ubuntu-latest' + name: deepseismicagentpool steps: - task: ComponentGovernanceComponentDetection@0 diff --git a/tests/cicd/main_build.yml b/tests/cicd/main_build.yml index 017137b4..4a0b8f2b 100644 --- a/tests/cicd/main_build.yml +++ b/tests/cicd/main_build.yml @@ -5,14 +5,30 @@ pr: - master - staging +- contrib +- correctness # Any commit to this branch will trigger the build. 
trigger: - master - staging +- contrib +- correctness + +################################################################################################### +# The pre-requisite for these jobs is to have 4 GPUs on your virtual machine (K80 or better) +# Jobs are daisy-chained by stages - more relevant stages come first (the ones we're currently +# working on): +# - if they fail no point in running anything else +# - each stage _can_ have parallel jobs but that's not always needed for fast execution +################################################################################################### jobs: -# partially disable setup for now - done manually on build VM + +################################################################################################### +# Stage 1: Setup +################################################################################################### + - job: setup timeoutInMinutes: 10 displayName: Setup @@ -20,251 +36,358 @@ jobs: name: deepseismicagentpool steps: - bash: | + # terminate as soon as any internal script fails + set -e + echo "Running setup..." pwd ls git branch uname -ra - ./scripts/env_reinstall.sh + ./scripts/env_reinstall.sh + # use hardcoded root for now because not sure how env changes under ADO policy + DATA_ROOT="/home/alfred/data_dynamic" + + ./tests/cicd/src/scripts/get_data_for_builds.sh ${DATA_ROOT} + # copy your model files like so - using dummy file to illustrate azcopy --quiet --source:https://$(storagename).blob.core.windows.net/models/model --source-key $(storagekey) --destination /home/alfred/models/your_model_name -- job: unit_tests_job - dependsOn: setup - timeoutInMinutes: 5 - displayName: Unit Tests Job - pool: - name: deepseismicagentpool - steps: - - bash: | - echo "Starting unit tests" - source activate seismic-interpretation - pytest --durations=0 cv_lib/tests/ - echo "Unit test job passed" - ################################################################################################### -# LOCAL PATCH JOBS +# Stage 2: fast unit tests ################################################################################################### -- job: hrnet_penobscot +- job: scripts_unit_tests_job dependsOn: setup timeoutInMinutes: 5 - displayName: hrnet penobscot + displayName: Unit Tests pool: name: deepseismicagentpool steps: - bash: | - conda env list + set -e + echo "Starting scripts unit tests" source activate seismic-interpretation - # run the tests - cd experiments/interpretation/penobscot/local - python train.py 'DATASET.ROOT' '/home/alfred/data/penobscot' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/hrnet.yaml --debug - # find the latest model which we just trained - model=$(ls -td */seg_hrnet/*/* | head -1) - echo ${model} - # # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/penobscot' 'TEST.MODEL_PATH' ${model}/seg_hrnet_running_model_1.pth --cfg=configs/hrnet.yaml --debug - + pytest --durations=0 tests/ + echo "Script unit test job passed" -- job: seresnet_unet_penobscot - dependsOn: setup +- job: cv_lib_unit_tests_job + dependsOn: scripts_unit_tests_job timeoutInMinutes: 5 - displayName: seresnet_unet penobscot + displayName: cv_lib Unit Tests pool: name: deepseismicagentpool steps: - bash: | - conda env list + set -e + echo "Starting cv_lib unit tests" source activate seismic-interpretation - # run the tests - cd experiments/interpretation/penobscot/local - python train.py 'DATASET.ROOT' '/home/alfred/data/penobscot' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 
--cfg=configs/seresnet_unet.yaml --debug - # find the latest model which we just trained - model=$(ls -td */resnet_unet/*/* | head -1) - echo ${model} - # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/penobscot' 'TEST.MODEL_PATH' ${model}/resnet_unet_running_model_1.pth --cfg=configs/seresnet_unet.yaml --debug + pytest --durations=0 cv_lib/tests/ + echo "cv_lib unit test job passed" -- job: hrnet_dutchf3 - dependsOn: setup - timeoutInMinutes: 5 - displayName: hrnet dutchf3 +################################################################################################### +# Stage 3: Dutch F3 patch models on checkerboard test set: +# deconvnet, unet, HRNet patch depth, HRNet section depth +# CAUTION: reverted these builds to single-GPU leaving new multi-GPU code in to be reverted later +################################################################################################### + +- job: checkerboard_dutchf3_patch + dependsOn: cv_lib_unit_tests_job + timeoutInMinutes: 20 + displayName: Checkerboard Dutch F3 patch local pool: name: deepseismicagentpool steps: - bash: | + source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/local - python train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/hrnet.yaml --debug - # find the latest model which we just trained - model=$(ls -td */seg_hrnet/*/* | head -1) - echo ${model} - # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TEST.MODEL_PATH' ${model}/seg_hrnet_running_model_1.pth --cfg=configs/hrnet.yaml --debug + # disable auto error handling as we flag it manually + set +e -- job: unet_dutchf3 - dependsOn: setup - timeoutInMinutes: 5 - displayName: unet dutchf3 - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests cd experiments/interpretation/dutchf3_patch/local - python train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/unet.yaml --debug + + # Create a temporary directory to store the statuses + dir=$(mktemp -d) + + # we are running a single batch in debug mode through, so increase the + # number of epochs to obtain a representative set of results + + pids= + # export CUDA_VISIBLE_DEVICES=0 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'NUM_DEBUG_BATCHES' 5 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TRAIN.DEPTH' 'none' \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/patch_deconvnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=1 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'NUM_DEBUG_BATCHES' 5 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TRAIN.DEPTH' 'section' \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" 
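+
+      # NOTE: the quoted 'KEY' 'value' pairs passed to train.py appear to be
+      # configuration overrides merged on top of the file given by --cfg, and
+      # --debug limits each run to a handful of batches so these jobs fit
+      # within the build time limit.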
+ # export CUDA_VISIBLE_DEVICES=2 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'NUM_DEBUG_BATCHES' 5 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TRAIN.DEPTH' 'section' \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/seresnet_unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=3 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'NUM_DEBUG_BATCHES' 5 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TRAIN.DEPTH' 'section' \ + 'MODEL.PRETRAINED' '/home/alfred/models/hrnetv2_w48_imagenet_pretrained.pth' \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/hrnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + + wait $pids || exit 1 + + # check if any of the models had an error during execution + # Get return information for each pid + for file in "$dir"/*; do + printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")" + [[ "$(<"$file")" -ne "0" ]] && exit 1 || echo "pass" + done + + # Remove the temporary directory + rm -r "$dir" + + # check validation set performance + set -e + python ../../../../tests/cicd/src/check_performance.py --infile metrics_patch_deconvnet_no_depth.json + python ../../../../tests/cicd/src/check_performance.py --infile metrics_unet_section_depth.json + python ../../../../tests/cicd/src/check_performance.py --infile metrics_seresnet_unet_section_depth.json + python ../../../../tests/cicd/src/check_performance.py --infile metrics_hrnet_section_depth.json + set +e + echo "All models finished training - start scoring" + + # Create a temporary directory to store the statuses + dir=$(mktemp -d) + + pids= + # export CUDA_VISIBLE_DEVICES=0 # find the latest model which we just trained - model=$(ls -td */resnet_unet/*/* | head -1) - echo ${model} + model=$(ls -td output/patch_deconvnet/no_depth/* | head -1) # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TEST.MODEL_PATH' ${model}/resnet_unet_running_model_1.pth --cfg=configs/unet.yaml --debug - -- job: seresnet_unet_dutchf3 - dependsOn: setup - timeoutInMinutes: 5 - displayName: seresnet unet dutchf3 - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/local - python train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/seresnet_unet.yaml --debug + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'no_depth' \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TEST.MODEL_PATH' ${model}/patch_deconvnet_running_model_0.*.pth \ + --cfg=configs/patch_deconvnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" 
+ # export CUDA_VISIBLE_DEVICES=1 # find the latest model which we just trained - model=$(ls -td */resnet_unet/*/* | head -1) + model=$(ls -td output/unet/section_depth/* | head -1) # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TEST.MODEL_PATH' ${model}/resnet_unet_running_model_1.pth --cfg=configs/seresnet_unet.yaml --debug - -- job: patch_deconvnet_dutchf3 - dependsOn: setup - timeoutInMinutes: 5 - displayName: patch deconvnet dutchf3 - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/local - python train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.BATCH_SIZE_PER_GPU' 1 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/patch_deconvnet.yaml --debug + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TEST.MODEL_PATH' ${model}/resnet_unet_running_model_0.*.pth \ + --cfg=configs/unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=2 # find the latest model which we just trained - model=$(ls -td */patch_deconvnet/*/* | head -1) + model=$(ls -td output/seresnet_unet/section_depth/* | head -1) # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'VALIDATION.BATCH_SIZE_PER_GPU' 1 'TEST.MODEL_PATH' ${model}/patch_deconvnet_running_model_1.pth --cfg=configs/patch_deconvnet.yaml --debug - -- job: patch_deconvnet_skip_dutchf3 - dependsOn: setup - timeoutInMinutes: 5 - displayName: patch deconvnet skip dutchf3 - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/local - python train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.BATCH_SIZE_PER_GPU' 1 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/patch_deconvnet_skip.yaml --debug + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \ + 'TEST.MODEL_PATH' ${model}/resnet_unet_running_model_0.*.pth \ + --cfg=configs/seresnet_unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=3 # find the latest model which we just trained - model=$(ls -td */patch_deconvnet_skip/*/* | head -1) + model=$(ls -td output/hrnet/section_depth/* | head -1) # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'VALIDATION.BATCH_SIZE_PER_GPU' 1 'TEST.MODEL_PATH' ${model}/patch_deconvnet_skip_running_model_1.pth --cfg=configs/patch_deconvnet_skip.yaml --debug + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/checkerboard/data' \ + 'DATASET.NUM_CLASSES' 2 'DATASET.CLASS_WEIGHTS' '[1.0, 1.0]' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \ + 'MODEL.PRETRAINED' '/home/alfred/models/hrnetv2_w48_imagenet_pretrained.pth' \ + 'TEST.MODEL_PATH' ${model}/seg_hrnet_running_model_0.*.pth \ + --cfg=configs/hrnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" 
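+
+      # Each braced job above writes its exit code to a file named after the
+      # job's PID inside the temp dir; after waiting on all PIDs, the loop
+      # below fails this build step if any recorded exit code is non-zero.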
+ + # wait for completion + wait $pids || exit 1 + + # check if any of the models had an error during execution + # Get return information for each pid + for file in "$dir"/*; do + printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")" + [[ "$(<"$file")" -ne "0" ]] && exit 1 || echo "pass" + done + + # Remove the temporary directory + rm -r "$dir" + + # check test set performance + set -e + python ../../../../tests/cicd/src/check_performance.py --infile metrics_test_patch_deconvnet_no_depth.json --test + python ../../../../tests/cicd/src/check_performance.py --infile metrics_test_unet_section_depth.json --test + python ../../../../tests/cicd/src/check_performance.py --infile metrics_test_seresnet_unet_section_depth.json --test + python ../../../../tests/cicd/src/check_performance.py --infile metrics_test_hrnet_section_depth.json --test + + echo "PASSED" ################################################################################################### -# DISTRIBUTED PATCH JOBS +# Stage 3: Dutch F3 patch models: deconvnet, unet, HRNet patch depth, HRNet section depth +# CAUTION: reverted these builds to single-GPU leaving new multi-GPU code in to be reverted later ################################################################################################### -- job: hrnet_dutchf3_dist - dependsOn: setup - timeoutInMinutes: 5 - displayName: hrnet dutchf3 distributed +- job: dutchf3_patch + dependsOn: checkerboard_dutchf3_patch + timeoutInMinutes: 20 + displayName: Dutch F3 patch local pool: name: deepseismicagentpool steps: - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/distributed - python -m torch.distributed.launch --nproc_per_node=$(nproc) train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/hrnet.yaml --debug -- job: patch_deconvnet_skip_dist - dependsOn: setup - timeoutInMinutes: 5 - displayName: patch deconvnet skip distributed - pool: - name: deepseismicagentpool - steps: - - bash: | source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/distributed - python -m torch.distributed.launch --nproc_per_node=$(nproc) train.py 'TRAIN.BATCH_SIZE_PER_GPU' 1 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/patch_deconvnet_skip.yaml --debug -- job: patch_deconvnet_dist - dependsOn: setup - timeoutInMinutes: 5 - displayName: patch deconvnet distributed - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/distributed - python -m torch.distributed.launch --nproc_per_node=$(nproc) train.py 'TRAIN.BATCH_SIZE_PER_GPU' 1 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/patch_deconvnet.yaml --debug + # disable auto error handling as we flag it manually + set +e -- job: seresnet_unet_dutchf3_dist - dependsOn: setup - timeoutInMinutes: 5 - displayName: seresnet unet dutchf3 distributed - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/distributed - python -m torch.distributed.launch --nproc_per_node=$(nproc) train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/seresnet_unet.yaml --debug + cd experiments/interpretation/dutchf3_patch/local + + # 
Create a temporary directory to store the statuses + dir=$(mktemp -d) + + pids= + # export CUDA_VISIBLE_DEVICES=0 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'TRAIN.DEPTH' 'none' \ + 'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'no_depth' \ + --cfg=configs/patch_deconvnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=1 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'TRAIN.DEPTH' 'section' \ + 'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=2 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'TRAIN.DEPTH' 'section' \ + 'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/seresnet_unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=3 + { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \ + 'TRAIN.DEPTH' 'section' \ + 'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \ + 'MODEL.PRETRAINED' '/home/alfred/models/hrnetv2_w48_imagenet_pretrained.pth' \ + 'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \ + --cfg=configs/hrnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + + wait $pids || exit 1 + + # check if any of the models had an error during execution + # Get return information for each pid + for file in "$dir"/*; do + printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")" + [[ "$(<"$file")" -ne "0" ]] && exit 1 || echo "pass" + done + + # Remove the temporary directory + rm -r "$dir" + + echo "All models finished training - start scoring" + + # Create a temporary directory to store the statuses + dir=$(mktemp -d) + + pids= + # export CUDA_VISIBLE_DEVICES=0 + # find the latest model which we just trained + model_dir=$(ls -td output/patch_deconvnet/no_depth/* | head -1) + model=$(ls -t ${model_dir}/*.pth | head -1) + # try running the test script + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'no_depth' \ + 'TEST.MODEL_PATH' ${model} \ + --cfg=configs/patch_deconvnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=1 + # find the latest model which we just trained + model_dir=$(ls -td output/unet/section_depth/* | head -1) + model=$(ls -t ${model_dir}/*.pth | head -1) + + # try running the test script + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \ + 'TEST.MODEL_PATH' ${model} \ + --cfg=configs/unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" 
+ # export CUDA_VISIBLE_DEVICES=2 + # find the latest model which we just trained + model_dir=$(ls -td output/seresnet_unet/section_depth/* | head -1) + model=$(ls -t ${model_dir}/*.pth | head -1) + + # try running the test script + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \ + 'TEST.MODEL_PATH' ${model} \ + --cfg=configs/seresnet_unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + # export CUDA_VISIBLE_DEVICES=3 + # find the latest model which we just trained + model_dir=$(ls -td output/hrnet/section_depth/* | head -1) + model=$(ls -t ${model_dir}/*.pth | head -1) + + # try running the test script + { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \ + 'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \ + 'MODEL.PRETRAINED' '/home/alfred/models/hrnetv2_w48_imagenet_pretrained.pth' \ + 'TEST.MODEL_PATH' ${model} \ + --cfg=configs/hrnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; } + pids+=" $!" + + # wait for completion + wait $pids || exit 1 + + # check if any of the models had an error during execution + # Get return information for each pid + for file in "$dir"/*; do + printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")" + [[ "$(<"$file")" -ne "0" ]] && exit 1 || echo "pass" + done + + # Remove the temporary directory + rm -r "$dir" + + echo "PASSED" -- job: unet_dutchf3_dist - dependsOn: setup - timeoutInMinutes: 5 - displayName: unet dutchf3 distributed - pool: - name: deepseismicagentpool - steps: - - bash: | - source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_patch/distributed - python -m torch.distributed.launch --nproc_per_node=$(nproc) train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/unet.yaml --debug - ################################################################################################### -# LOCAL SECTION JOBS +# Stage 5: Notebook tests ################################################################################################### -- job: section_deconvnet_skip - dependsOn: setup +- job: F3_block_training_and_evaluation_local_notebook + dependsOn: dutchf3_patch timeoutInMinutes: 5 - displayName: section deconvnet skip + displayName: F3 block training and evaluation local notebook pool: name: deepseismicagentpool steps: - bash: | source activate seismic-interpretation - # run the tests - cd experiments/interpretation/dutchf3_section/local - python train.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 --cfg=configs/section_deconvnet_skip.yaml --debug - # find the latest model which we just trained - model=$(ls -td */section_deconvnet_skip/*/* | head -1) - echo ${model} - # try running the test script - python test.py 'DATASET.ROOT' '/home/alfred/data/dutch_f3/data' 'TEST.MODEL_PATH' ${model}/section_deconvnet_skip_running_model_1.pth --cfg=configs/section_deconvnet_skip.yaml --debug - + pytest -s tests/cicd/src/notebook_integration_tests.py \ + --nbname examples/interpretation/notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb \ + --dataset_root /home/alfred/data_dynamic/dutch_f3/data \ + --model_pretrained download diff --git a/tests/cicd/src/check_performance.py b/tests/cicd/src/check_performance.py new file mode 100644 index 00000000..925e6ad6 --- /dev/null +++ b/tests/cicd/src/check_performance.py @@ -0,0 +1,97 @@ +#!/usr/bin/env python3 +""" Please see the def 
main() function for code description.""" +import json + +""" libraries """ + +import numpy as np +import sys +import os + +np.set_printoptions(linewidth=200) +import logging + +# toggle to WARNING when running in production, or use CLI +logging.getLogger().setLevel(logging.DEBUG) +# logging.getLogger().setLevel(logging.WARNING) +import argparse + +parser = argparse.ArgumentParser() + +""" useful information when running from a GIT folder.""" +myname = os.path.realpath(__file__) +mypath = os.path.dirname(myname) +myname = os.path.basename(myname) + + +def main(args): + """ + + Check to see whether performance metrics are within range on both validation + and test sets. + + """ + + logging.info("loading data") + + with open(args.infile, 'r') as fp: + data = json.load(fp) + + if args.test: + # process training set results + assert data["Pixel Acc: "] > 0.0 + assert data["Pixel Acc: "] <= 1.0 + # TODO make these into proper tests + # assert data["Pixel Acc: "] == 1.0 + # TODO: add more tests as we fix performance + # assert data["Mean Class Acc: "] == 1.0 + # assert data["Freq Weighted IoU: "] == 1.0 + # assert data["Mean IoU: "] == 1.0 + + else: + # process validation results + assert data['pixacc'] > 0.0 + assert data['pixacc'] <= 1.0 + # TODO make these into proper tests + # assert data['pixacc'] == 1.0 + # TODO: add more tests as we fix performance + # assert data['mIoU'] < 1e-3 + + + logging.info("all done") + + +""" GLOBAL VARIABLES """ + + +""" cmd-line arguments """ +parser.add_argument("--infile", help="Location of the file which has the metrics", type=str, required=True) +parser.add_argument( + "--test", + help="Flag to indicate that these are test set results - validation by default", + action="store_true" +) + +""" main wrapper with profiler """ +if __name__ == "__main__": + main(parser.parse_args()) + +# pretty printing of the stack +""" + try: + logging.info('before main') + main(parser.parse_args()) + logging.info('after main') + except: + for frame in traceback.extract_tb(sys.exc_info()[2]): + fname,lineno,fn,text = frame + print ("Error in %s on line %d" % (fname, lineno)) +""" +# optionally enable profiling information +# import cProfile +# name = +# cProfile.run('main.run()', name + '.prof') +# import pstats +# p = pstats.Stats(name + '.prof') +# p.sort_stats('cumulative').print_stats(10) +# p.sort_stats('time').print_stats() diff --git a/tests/cicd/src/conftest.py b/tests/cicd/src/conftest.py index c222d3b9..c7c627fc 100644 --- a/tests/cicd/src/conftest.py +++ b/tests/cicd/src/conftest.py @@ -7,6 +7,8 @@ def pytest_addoption(parser): parser.addoption("--nbname", action="store", type=str, default=None) parser.addoption("--dataset_root", action="store", type=str, default=None) + parser.addoption("--model_pretrained", action="store", type=str, default=None) + parser.addoption("--cwd", action="store", type=str, default="examples/interpretation/notebooks") @pytest.fixture @@ -18,6 +20,13 @@ def nbname(request): def dataset_root(request): return request.config.getoption("--dataset_root") +@pytest.fixture +def model_pretrained(request): + return request.config.getoption("--model_pretrained") + +@pytest.fixture +def cwd(request): + return request.config.getoption("--cwd") """ def pytest_generate_tests(metafunc): diff --git a/tests/cicd/src/notebook_integration_tests.py b/tests/cicd/src/notebook_integration_tests.py index 1c0ccbf8..98ac84d8 100644 --- a/tests/cicd/src/notebook_integration_tests.py +++ b/tests/cicd/src/notebook_integration_tests.py @@ -9,11 +9,17 @@ # don't add any 
markup as this just runs any notebook which name is supplied # @pytest.mark.integration # @pytest.mark.notebooks -def test_notebook_run(nbname, dataset_root): +def test_notebook_run(nbname, dataset_root, model_pretrained, cwd): pm.execute_notebook( nbname, OUTPUT_NOTEBOOK, kernel_name=KERNEL_NAME, - parameters={"max_iterations": 3, "max_epochs": 1, "max_snapshots": 1, "dataset_root": dataset_root}, - cwd="examples/interpretation/notebooks", + parameters={ + "max_epochs": 1, + "max_snapshots": 1, + "papermill": True, + "dataset_root": dataset_root, + "model_pretrained": model_pretrained, + }, + cwd=cwd, ) diff --git a/tests/cicd/src/scripts/get_data_for_builds.sh b/tests/cicd/src/scripts/get_data_for_builds.sh new file mode 100755 index 00000000..46dc1e95 --- /dev/null +++ b/tests/cicd/src/scripts/get_data_for_builds.sh @@ -0,0 +1,53 @@ +#!/bin/bash +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +# Downloads and prepares the data for the rest of the builds to use + +# Download the Dutch F3 dataset and extract +if [ -z $1 ]; then + echo "You need to specify a download location for the data" + exit 1; +fi; + +DATA_ROOT=$1 + +source activate seismic-interpretation + +# these have to match the rest of the build jobs unless we want to +# define this in ADO pipelines +DATA_CHECKERBOARD="${DATA_ROOT}/checkerboard" +DATA_F3="${DATA_ROOT}/dutch_f3" +DATA_PENOBSCOT="${DATA_ROOT}/penobscot" + +# remove data +if [ -d ${DATA_ROOT} ]; then + echo "Erasing data root dir ${DATA_ROOT}" + rm -rf "${DATA_ROOT}" +fi +mkdir -p "${DATA_F3}" +mkdir -p "${DATA_PENOBSCOT}" + +# test download scripts in parallel +./scripts/download_penobscot.sh "${DATA_PENOBSCOT}" & +./scripts/download_dutch_f3.sh "${DATA_F3}" & +wait + +# change imposed by download script +DATA_F3="${DATA_F3}/data" + +cd scripts + +python gen_checkerboard.py --dataroot ${DATA_F3} --dataout ${DATA_CHECKERBOARD} + +# finished data download and generation + +# test preprocessing scripts +python prepare_penobscot.py split_inline --data-dir=${DATA_PENOBSCOT} --val-ratio=.1 --test-ratio=.2 +python prepare_dutchf3.py split_train_val section --data_dir=${DATA_F3} --label_file=train/train_labels.npy --output_dir=splits --split_direction=both +python prepare_dutchf3.py split_train_val patch --data_dir=${DATA_F3} --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both + +DATA_CHECKERBOARD="${DATA_CHECKERBOARD}/data" +# repeat for checkerboard dataset +python prepare_dutchf3.py split_train_val section --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --split_direction=both +python prepare_dutchf3.py split_train_val patch --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both diff --git a/tests/test_prepare_dutchf3.py b/tests/test_prepare_dutchf3.py new file mode 100644 index 00000000..34349e29 --- /dev/null +++ b/tests/test_prepare_dutchf3.py @@ -0,0 +1,427 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. 
+"""Test the extract functions against a variety of SEGY files and trace_header scenarioes +""" +import math +import os.path as path +import tempfile + +import numpy as np +import pandas as pd +import pytest + +import scripts.prepare_dutchf3 as prep_dutchf3 + +# Setup +OUTPUT = None +ILINE = 551 +XLINE = 1008 +DEPTH = 351 +ALINE = np.zeros((ILINE, XLINE, DEPTH)) +STRIDE = 50 +PATCH = 100 +PER_VAL = 0.2 +LOG_CONFIG = None + + +def test_get_aline_range_step_one(): + + """check if it includes the step in the range if step = 1 + """ + SECTION_STRIDE = 1 + + # Test + output_iline = prep_dutchf3._get_aline_range(ILINE, PER_VAL, SECTION_STRIDE) + output_xline = prep_dutchf3._get_aline_range(XLINE, PER_VAL, SECTION_STRIDE) + + assert str(output_iline[0].step) == str(SECTION_STRIDE) + assert str(output_xline[0].step) == str(SECTION_STRIDE) + + +def test_get_aline_range_step_zero(): + + """check if a ValueError exception is raised when section_stride = 0 + """ + with pytest.raises(ValueError, match="section_stride cannot be zero or a negative number"): + SECTION_STRIDE = 0 + + # Test + output_iline = prep_dutchf3._get_aline_range(ILINE, PER_VAL, SECTION_STRIDE) + output_xline = prep_dutchf3._get_aline_range(XLINE, PER_VAL, SECTION_STRIDE) + + assert output_iline + assert output_xline + + +def test_get_aline_range_negative_step(): + + """check if a ValueError exception is raised when section_stride = -1 + """ + with pytest.raises(ValueError, match="section_stride cannot be zero or a negative number"): + SECTION_STRIDE = -1 + + # Test + output_iline = prep_dutchf3._get_aline_range(ILINE, PER_VAL, SECTION_STRIDE) + output_xline = prep_dutchf3._get_aline_range(XLINE, PER_VAL, SECTION_STRIDE) + + assert output_iline + assert output_xline + + +def test_get_aline_range_float_step(): + + """check if a ValueError exception is raised when section_stride = 1.1 + """ + with pytest.raises(TypeError, match="'float' object cannot be interpreted as an integer"): + SECTION_STRIDE = 1.0 + + # Test + output_iline = prep_dutchf3._get_aline_range(ILINE, PER_VAL, SECTION_STRIDE) + output_xline = prep_dutchf3._get_aline_range(XLINE, PER_VAL, SECTION_STRIDE) + + assert output_iline + assert output_xline + + +def test_get_aline_range_single_digit_step(): + + """check if it includes the step in the range if 1 < step < 10 + """ + SECTION_STRIDE = 1 + # Test + output_iline = prep_dutchf3._get_aline_range(ILINE, PER_VAL, SECTION_STRIDE) + output_xline = prep_dutchf3._get_aline_range(XLINE, PER_VAL, SECTION_STRIDE) + + assert str(output_iline[0].step) == str(SECTION_STRIDE) + assert str(output_xline[0].step) == str(SECTION_STRIDE) + + +def test_get_aline_range_double_digit_step(): + + """check if it includes the step in the range if step > 10 + """ + SECTION_STRIDE = 17 + # Test + output_iline = prep_dutchf3._get_aline_range(ILINE, PER_VAL, SECTION_STRIDE) + output_xline = prep_dutchf3._get_aline_range(XLINE, PER_VAL, SECTION_STRIDE) + + assert str(output_iline[0].step) == str(SECTION_STRIDE) + assert str(output_xline[0].step) == str(SECTION_STRIDE) + + +def test_prepare_dutchf3_patch_step_1(): + + """check a complete run for the script in case further changes are needed + """ + # setting a value to SECTION_STRIDE as needed to test the values + SECTION_STRIDE = 1 + DIRECTION = "inline" + + # use a temp dir that will be discarded at the end of the execution + with tempfile.TemporaryDirectory() as tmpdirname: + + # saving the file to be used by the script + label_file = tmpdirname + "/label_file.npy" + np.save(label_file, ALINE) + 
+        # set the output directory to be used by the script
+        output = tmpdirname + "/split"
+
+        # call the patch-based split function of the script with the chosen parameters
+        train_list, val_list = prep_dutchf3.split_patch_train_val(
+            label_file=label_file,
+            section_stride=SECTION_STRIDE,
+            patch_stride=STRIDE,
+            split_direction=DIRECTION,
+            patch_size=PATCH,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "patch")
+
+        # read the train split file and break each row into its direction/inline/crossline/depth parts
+        patch_train = pd.read_csv(output + "/patch_train.txt", header=None, names=["row"])
+        patch_train = pd.DataFrame(patch_train.row.str.split("_").tolist(), columns=["dir", "i", "x", "d"])
+
+        # check the patch spacing in the train split, which honours SECTION_STRIDE
+        x = list(sorted(set(patch_train.x.astype(int))))
+        i = list(sorted(set(patch_train.i.astype(int))))
+
+        if DIRECTION == "crossline":
+            assert x[1] - x[0] == SECTION_STRIDE
+            assert i[1] - i[0] == STRIDE
+        elif DIRECTION == "inline":
+            assert x[1] - x[0] == STRIDE
+            assert i[1] - i[0] == SECTION_STRIDE
+
+        # read the val split file and break each row into its direction/inline/crossline/depth parts
+        patch_val = pd.read_csv(output + "/patch_val.txt", header=None, names=["row"])
+        patch_val = pd.DataFrame(patch_val.row.str.split("_").tolist(), columns=["dir", "i", "x", "d"])
+
+        # check the patch spacing in the val split
+        x = list(sorted(set(patch_val.x.astype(int))))
+        i = list(sorted(set(patch_val.i.astype(int))))
+
+        if DIRECTION == "crossline":
+            assert x[1] - x[0] == 1  # SECTION_STRIDE is only used in training.
+            assert i[1] - i[0] == STRIDE
+        elif DIRECTION == "inline":
+            assert x[1] - x[0] == STRIDE
+            assert i[1] - i[0] == 1
+
+        # test that the validation set is, at least, PER_VAL of the distinct coordinates
+        PER_VAL_CHK = len(set(patch_val.x)) / (len(set(patch_train.x)) + len(set(patch_val.x))) * 100
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+        PER_VAL_CHK = len(set(patch_val.i)) / (len(set(patch_train.i)) + len(set(patch_val.i))) * 100
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+
+
+def test_prepare_dutchf3_patch_step_2():
+    """run the patch-based split end to end with section_stride = 2 and check the generated files"""
+    # set the stride and direction under test
+    SECTION_STRIDE = 2
+    DIRECTION = "crossline"
+
+    # use a temp dir that will be discarded at the end of the execution
+    with tempfile.TemporaryDirectory() as tmpdirname:
+
+        # save the label file to be used by the script
+        label_file = tmpdirname + "/label_file.npy"
+        np.save(label_file, ALINE)
+
+        # set the output directory to be used by the script
+        output = tmpdirname + "/split"
+
+        # call the patch-based split function of the script with the chosen parameters
+        train_list, val_list = prep_dutchf3.split_patch_train_val(
+            label_file=label_file,
+            section_stride=SECTION_STRIDE,
+            patch_stride=STRIDE,
+            split_direction=DIRECTION,
+            patch_size=PATCH,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "patch")
+
+        # read the train split file and break each row into its direction/inline/crossline/depth parts
+        patch_train = pd.read_csv(output + "/patch_train.txt", header=None, names=["row"])
+        patch_train = pd.DataFrame(patch_train.row.str.split("_").tolist(), columns=["dir", "i", "x", "d"])
+
+        # check the patch spacing in the train split, which honours SECTION_STRIDE
+        x = list(sorted(set(patch_train.x.astype(int))))
+        i = list(sorted(set(patch_train.i.astype(int))))
+
+        if DIRECTION == "crossline":
+            assert x[1] - x[0] == SECTION_STRIDE
+            assert i[1] - i[0] == STRIDE
+        elif DIRECTION == "inline":
+            assert x[1] - x[0] == STRIDE
+            assert i[1] - i[0] == SECTION_STRIDE
+
+        # read the val split file and break each row into its direction/inline/crossline/depth parts
+        patch_val = pd.read_csv(output + "/patch_val.txt", header=None, names=["row"])
+        patch_val = pd.DataFrame(patch_val.row.str.split("_").tolist(), columns=["dir", "i", "x", "d"])
+
+        # check the patch spacing in the val split
+        x = list(sorted(set(patch_val.x.astype(int))))
+        i = list(sorted(set(patch_val.i.astype(int))))
+
+        if DIRECTION == "crossline":
+            assert x[1] - x[0] == 1  # SECTION_STRIDE is only used in training.
+            assert i[1] - i[0] == STRIDE
+        elif DIRECTION == "inline":
+            assert x[1] - x[0] == STRIDE
+            assert i[1] - i[0] == 1
+
+        # test that the validation set is, at least, PER_VAL of the distinct coordinates
+        PER_VAL_CHK = len(set(patch_val.x)) / (len(set(patch_train.x)) + len(set(patch_val.x))) * 100
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+        PER_VAL_CHK = len(set(patch_val.i)) / (len(set(patch_train.i)) + len(set(patch_val.i))) * 100
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+
+
+def test_prepare_dutchf3_patch_train_and_test_sets_inline():
+    """check that the inline patch split yields disjoint train/val sets with a sufficiently large validation set"""
+    # set the stride and direction under test
+    SECTION_STRIDE = 1
+    DIRECTION = "inline"
+
+    # use a temp dir that will be discarded at the end of the execution
+    with tempfile.TemporaryDirectory() as tmpdirname:
+
+        # save the label file to be used by the script
+        label_file = tmpdirname + "/label_file.npy"
+        np.save(label_file, ALINE)
+
+        # set the output directory to be used by the script
+        output = tmpdirname + "/split"
+
+        # call the patch-based split function of the script with the chosen parameters
+        train_list, val_list = prep_dutchf3.split_patch_train_val(
+            label_file=label_file,
+            section_stride=SECTION_STRIDE,
+            patch_stride=STRIDE,
+            split_direction=DIRECTION,
+            patch_size=PATCH,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "patch")
+
+        # read the generated train and val split files
+        patch_train = pd.read_csv(output + "/patch_train.txt", header=None, names=["row"])
+        patch_train = patch_train.row.tolist()
+
+        patch_val = pd.read_csv(output + "/patch_val.txt", header=None, names=["row"])
+        patch_val = patch_val.row.tolist()
+
+        # assert no patch appears in both the train and val splits
+        assert set(patch_train) & set(patch_val) == set()
+
+        # test that the validation set is, at least, PER_VAL of all patches
+        PER_VAL_CHK = 100 * len(patch_val) / (len(patch_train) + len(patch_val))
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+
+
+def test_prepare_dutchf3_patch_train_and_test_sets_crossline():
+    """check that the crossline patch split yields disjoint train/val sets with a sufficiently large validation set"""
+    # set the stride and direction under test
+    SECTION_STRIDE = 1
+    DIRECTION = "crossline"
+
+    # use a temp dir that will be discarded at the end of the execution
+    with tempfile.TemporaryDirectory() as tmpdirname:
+
+        # save the label file to be used by the script
+        label_file = tmpdirname + "/label_file.npy"
+        np.save(label_file, ALINE)
+
+        # set the output directory to be used by the script
+        output = tmpdirname + "/split"
+
+        # call the patch-based split function of the script with the chosen parameters
+        train_list, val_list = prep_dutchf3.split_patch_train_val(
+            label_file=label_file,
+            section_stride=SECTION_STRIDE,
+            patch_stride=STRIDE,
+            split_direction=DIRECTION,
+            patch_size=PATCH,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "patch")
+
+        # read the generated train and val split files
+        patch_train = pd.read_csv(output + "/patch_train.txt", header=None, names=["row"])
+        patch_train = patch_train.row.tolist()
+
+        patch_val = pd.read_csv(output + "/patch_val.txt", header=None, names=["row"])
+        patch_val = patch_val.row.tolist()
+
+        # assert no patch appears in both the train and val splits
+        assert set(patch_train) & set(patch_val) == set()
+
+        # test that the validation set is, at least, PER_VAL of all patches
+        PER_VAL_CHK = 100 * len(patch_val) / (len(patch_train) + len(patch_val))
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+
+
+def test_prepare_dutchf3_section_step_2_crossline():
+    """run the section-based split end to end with section_stride = 2 along the crossline direction"""
+    # set the stride and direction under test
+    SECTION_STRIDE = 2
+    DIRECTION = "crossline"
+
+    # use a temp dir that will be discarded at the end of the execution
+    with tempfile.TemporaryDirectory() as tmpdirname:
+
+        # save the label file to be used by the script
+        label_file = tmpdirname + "/label_file.npy"
+        np.save(label_file, ALINE)
+
+        # set the output directory to be used by the script
+        output = tmpdirname + "/split"
+
+        # call the section-based split function of the script with the chosen parameters
+        train_list, val_list = prep_dutchf3.split_section_train_val(
+            label_file=label_file,
+            section_stride=SECTION_STRIDE,
+            split_direction=DIRECTION,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "section")
+
+        # read the generated split files and break each row into direction and section number
+        section_train = pd.read_csv(output + "/section_train.txt", header=None, names=["row"])
+        section_train = pd.DataFrame(section_train.row.str.split("_").tolist(), columns=["dir", "section"])
+
+        section_val = pd.read_csv(output + "/section_val.txt", header=None, names=["row"])
+        section_val = pd.DataFrame(section_val.row.str.split("_").tolist(), columns=["dir", "section"])
+
+        # train sections honour SECTION_STRIDE; val sections use stride 1, so their spacing is not a multiple of it
+        assert (float(section_train.section[1]) - float(section_train.section[0])) % float(SECTION_STRIDE) == 0.0
+        assert (float(section_val.section[1]) - float(section_val.section[0])) % float(SECTION_STRIDE) > 0.0
+
+        # test that the validation set is, at least, PER_VAL of all sections
+        PER_VAL_CHK = len(section_val) / (len(section_val) + len(section_train)) * 100
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
+
+
+def test_prepare_dutchf3_section_step_1_inline():
+    """run the section-based split end to end with section_stride = 1 along the inline direction"""
+    # set the stride and direction under test
+    SECTION_STRIDE = 1
+    DIRECTION = "inline"
+
+    # use a temp dir that will be discarded at the end of the execution
+    with tempfile.TemporaryDirectory() as tmpdirname:
+
+        # save the label file to be used by the script
+        label_file = tmpdirname + "/label_file.npy"
+        np.save(label_file, ALINE)
+
+        # set the output directory to be used by the script
+        output = tmpdirname + "/split"
+
+        # call the section-based split function of the script with the chosen parameters
+        train_list, val_list = prep_dutchf3.split_section_train_val(
+            label_file=label_file,
+            section_stride=SECTION_STRIDE,
+            split_direction=DIRECTION,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "section")
+
+        # read the generated split files and break each row into direction and section number
+        section_train = pd.read_csv(output + "/section_train.txt", header=None, names=["row"])
+        section_train = pd.DataFrame(section_train.row.str.split("_").tolist(), columns=["dir", "section"])
+
+        section_val = pd.read_csv(output + "/section_val.txt", header=None, names=["row"])
+        section_val = pd.DataFrame(section_val.row.str.split("_").tolist(), columns=["dir", "section"])
+
+        # with section_stride = 1 the spacing in both splits is trivially a multiple of SECTION_STRIDE
+        assert (float(section_train.section[1]) - float(section_train.section[0])) % float(SECTION_STRIDE) == 0.0
+        assert (float(section_val.section[1]) - float(section_val.section[0])) % float(SECTION_STRIDE) == 0.0
+
+        # test that the validation set is, at least, PER_VAL of all sections
+        PER_VAL_CHK = len(section_val) / (len(section_val) + len(section_train)) * 100
+        assert round(PER_VAL_CHK, 0) >= int(PER_VAL * 100)
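+
+
+# --- Illustrative sketch (editor's example, not part of the original change) ---
+# The patch-split tests above repeat the same setup for each stride/direction combination.
+# As a minimal sketch, assuming only the functions and constants already used in this file
+# (split_patch_train_val, _write_split_files, ALINE, STRIDE, PATCH, PER_VAL, LOG_CONFIG),
+# the disjointness check could be collapsed with pytest.mark.parametrize; the test name
+# below is hypothetical.
+@pytest.mark.parametrize("section_stride, direction", [(1, "inline"), (2, "crossline")])
+def test_patch_train_val_splits_are_disjoint(section_stride, direction):
+    """sketch: the train and val patch splits should never share a patch, for any stride/direction"""
+    with tempfile.TemporaryDirectory() as tmpdirname:
+        # save an all-zero label volume and pick an output directory inside the temp dir
+        label_file = tmpdirname + "/label_file.npy"
+        np.save(label_file, ALINE)
+        output = tmpdirname + "/split"
+
+        # run the patch-based split and write the split files
+        train_list, val_list = prep_dutchf3.split_patch_train_val(
+            label_file=label_file,
+            section_stride=section_stride,
+            patch_stride=STRIDE,
+            split_direction=direction,
+            patch_size=PATCH,
+            per_val=PER_VAL,
+            log_config=LOG_CONFIG,
+        )
+        prep_dutchf3._write_split_files(output, train_list, val_list, "patch")
+
+        # read both split files back and check that no patch identifier appears in both
+        patch_train = pd.read_csv(output + "/patch_train.txt", header=None, names=["row"]).row.tolist()
+        patch_val = pd.read_csv(output + "/patch_val.txt", header=None, names=["row"]).row.tolist()
+        assert set(patch_train).isdisjoint(patch_val)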