[PM-1942] Support transfers for a job on the HOST OS instead of from within the container #2055
Labels
affects-5.0.7
affects-master (Current Trunk Version)
fix-5.1.0
fix-master (Current Trunk Version)
major (Major loss of function.)
Planner: Containers
Planner: Transfer Module (Refers to Transfer Refiners, Second and First Level Staging, Transfer implementations in Pegasus Code)
sync-from-jira (Synced from Jira)
As part of #1435, for containerized jobs in Pegasus, the data for a job gets pulled in from within the container when the container starts. This requires the container to have the Pegasus worker package deployed inside it. While the worker package does get deployed inside the container at runtime, certain Python dependencies still need to be satisfied in the container build file. This approach was chosen to allow users to use transfer tools whose dependencies are not fulfilled on the host OS.
However, there are also legitimate cases for supporting data transfers on the host OS for containerized jobs, for example ML applications using TensorFlow etc. that have ntasks set to > 1.
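For illustration, a minimal sketch of the kind of workflow this affects, assuming the Pegasus 5.x Python API. The container image, executable path, file names, and the commented-out profile key for host-OS transfers are all illustrative assumptions; in particular, no knob for host-OS transfers exists yet, which is what this issue is asking for:

```python
from Pegasus.api import *

# Container in which the job runs. With the current behavior (#1435),
# input/output transfers happen from inside this container, so the
# Pegasus worker package and its Python dependencies must resolve there.
tf_container = Container(
    "tf-container",
    Container.SINGULARITY,
    image="docker://tensorflow/tensorflow:latest-gpu",  # illustrative image
)

train = Transformation(
    "train",
    site="condorpool",
    pfn="/opt/ml/bin/train.py",  # hypothetical executable path
    is_stageable=False,
    container=tf_container,
)

tc = TransformationCatalog()
tc.add_containers(tf_container)
tc.add_transformations(train)

# Multi-task ML job (ntasks > 1) -- the case where transfers from within
# the container are problematic and host-OS transfers are desirable.
job = (
    Job(train)
    .add_args("--epochs", "10")
    .add_inputs(File("training_data.tar.gz"))
    .add_outputs(File("model.h5"))
    # Hypothetical knob: the feature requested here would add a way to
    # say "stage data on the host OS, not inside the container", e.g.
    # .add_profiles(Namespace.PEGASUS, key="transfer.container.onhost", value="true")
)

wf = Workflow("ml-training")
wf.add_transformation_catalog(tc)
wf.add_jobs(job)
wf.write("workflow.yml")
```

With the current behavior described above, the transfers for this job would be launched from inside tf_container when it starts; the request is for an option to run those transfers on the host OS instead.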
Reporter: @vahi
Watchers:
@vahi