I'm trying to copy files from S3 to a local system. The files were downloaded from a variety of sources and kept their original filenames, some of which seem to have included `**` or other wildcard patterns. Is there a way to disable globbing or escape the filenames? I am using the filenames exactly as supplied by an s3fs `ls` call.
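A minimal sketch of the problem, with a hypothetical bucket and key name, might look like this:

```python
import s3fs

fs = s3fs.S3FileSystem()

# Suppose ls returns a literal key such as 'my-bucket/downloads/report**.csv'
paths = fs.ls("my-bucket/downloads")

# get() glob-expands its remote path, so the "**" in the key is treated
# as a recursive wildcard rather than literal characters:
fs.get(paths[0], "report.csv")  # may match nothing, or the wrong files
```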
The top-level user methods like `cat` expect to expand paths. However, there are other methods like `cat_file` that work one file at a time and don't expand.
Currently, only `recursive=` is passed on by the bulk functions to `.expand_path`. There has been talk of disabling this process or making it more configurable, but there is no such option right now.
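A minimal sketch of that workaround, assuming a hypothetical bucket and local directory: loop over the literal key names from `ls` and copy each one with the single-file method `get_file`, which performs no glob expansion.

```python
import os
import s3fs

fs = s3fs.S3FileSystem()
os.makedirs("local_dir", exist_ok=True)

for remote_path in fs.ls("my-bucket/downloads"):  # ls returns literal key names
    local_path = os.path.join("local_dir", os.path.basename(remote_path))
    # get_file copies exactly one object and does not glob-expand,
    # so keys containing "**", "?", or "[" are transferred verbatim.
    fs.get_file(remote_path, local_path)
```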
Maybe it would be good to mention this in the documentation? Something like adding "does not expand paths" to the `get_file` documentation, and "You can use `get_file` to get a single file" to the `get` documentation. Now that you've explained it, it's a clear convention, but as someone new to the library I never would have realized it.
Updating the docstrings to make things clearer is always a good idea. Would you like to contribute an update? The change would be in `fsspec.spec.AbstractFileSystem`, which is `S3FileSystem`'s superclass.