I just tried an upload test case: 3.6 GB, 770 folders, ~10k files.
I found the upload process ridiculously slow. The reason seems to be that, even with a queue size of 3, the transaction overhead of almost half a second per file is paid for each and every KB-sized file, and that overhead adds up quickly enough (rough numbers below) to make the current uploader unsuitable even for a folder containing a few hundred files.
(I know such a comparison isn’t entirely fair to Pydio, but I feel I should mention that with direct-to-storage tools such as s3cmd/rclone/rsync/ftp/… we would not even have had time to read the file names as they went by.)
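To put numbers on it, here is a minimal sketch of the arithmetic, assuming the ~0.5 s overhead really is roughly constant per file and that the queue of 3 overlaps transfers perfectly; both are assumptions based on my observation, not measured internals:

```ts
// Back-of-the-envelope only: the per-file overhead and perfect queue
// overlap are assumptions from my test, not measured uploader internals.
const files = 10_000;           // files in the test tree
const perFileOverheadSec = 0.5; // observed transaction overhead per file
const queueSize = 3;            // uploader queue size

const overheadSec = (files * perFileOverheadSec) / queueSize;
console.log(`~${(overheadSec / 60).toFixed(0)} min of pure overhead`); // ~28 min
```

That is close to half an hour spent on per-file bookkeeping alone, before counting a single byte of actual payload.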
Uploader configuration
I don’t know how much of this is related to the S3 backend.
In the context of this test case I found that the upload would simply stop for good, with no chance to resume, with an `Uncaught (in promise) invalid token` console message after one of the uploads received an unexpected 403.
As in "PutObjectPart has failed" (the S3 upload failure monster revival) - #4 by charles, no trace of the 403 or of any other error was logged in the backend (with all the frustration this implies).
I think this case is related to the session expiring, which in my opinion should never happen mid-transfer under any circumstances: ongoing uploads are sacred and must be preserved whatever the login-timeout policy is. Anyway, I’ll see whether tweaking accessTokenLifespan works (as advised in Unable to upload big files - #2 by sam0204); if it does, that should definitely be added to the documentation, given the number of threads created around this very issue.
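For what it’s worth, here is a minimal sketch of the behaviour I would expect from the uploader when this happens: on a 403, refresh the token once and retry the part instead of dying with an unhandled rejection. This is generic fetch code, not Pydio’s actual client; `refreshToken`, the header usage and the single-retry policy are all placeholders/assumptions on my part.

```ts
// Generic illustration only: not Pydio's client code.
// `getToken`/`refreshToken` are hypothetical hooks into whatever auth layer exists.
async function uploadPartWithRetry(
  url: string,
  body: Blob,
  getToken: () => string,
  refreshToken: () => Promise<string>,
): Promise<Response> {
  let token = getToken();
  for (let attempt = 0; attempt < 2; attempt++) {
    const res = await fetch(url, {
      method: "PUT",
      headers: { Authorization: `Bearer ${token}` },
      body,
    });
    // Only a 403 on the first attempt triggers a token refresh + one retry;
    // anything else (or a second failure) is surfaced to the caller so the
    // queue can pause/resume instead of aborting the whole transfer.
    if (res.status !== 403 || attempt > 0) return res;
    token = await refreshToken();
  }
  throw new Error("unreachable");
}
```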