Files cannot be deleted

Some of my files cannot be deleted.

2020-03-05T14:25:58.238Z	ERROR	Compute Etag Copy	{"error": "We encountered an internal error, please try again."}
2020-03-05T14:25:58.241Z	INFO*Client).s3forceComputeEtag
2020-03-05T14:25:58.242Z	INFO		/opt/teamcity/agent/work/fb9e7e7133d45375/go/src/
2020-03-05T14:25:58.242Z	INFO*Client).ComputeChecksum
2020-03-05T14:25:58.242Z	INFO		/opt/teamcity/agent/work/fb9e7e7133d45375/go/src/
2020-03-05T14:25:58.250Z	INFO
2020-03-05T14:25:58.250Z	INFO		/opt/teamcity/agent/work/fb9e7e7133d45375/go/src/
2020-03-05T14:25:58.250Z	INFO	2020-03-05T14:25:58.238Z	ERROR	Cannot compute checksum for TEST FOO/bar.MP4	{"error": "We encountered an internal error, please try again."}
2020-03-05T14:25:58.251Z	INFO
2020-03-05T14:25:58.251Z	INFO		/opt/teamcity/agent/work/fb9e7e7133d45375/go/src/
2020-03-05T14:25:58.532Z	DEBUG	pydio.grpc.tasks	Force close session now:0c4506ee-91da-4e63-b557-9bbad83eb9f5	{"OperationUuid": "copy-move-599351af-fe5d-4599-99af-89d7458e0d3b-8a8f9da3"}
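For context on the "Compute Etag" / "Cannot compute checksum" errors above: S3-style multipart ETags are conventionally the MD5 of the concatenated per-part MD5 digests, suffixed with `-<part count>`. This is a sketch of that convention for illustration, not Pydio's actual implementation:

```python
import hashlib

def multipart_etag(parts: list[bytes]) -> str:
    """Compute an S3-style multipart ETag from the raw bytes of each part.

    The ETag is md5(md5(part1) + md5(part2) + ...) followed by "-<N>",
    where N is the number of parts.
    """
    concatenated_digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return hashlib.md5(concatenated_digests).hexdigest() + f"-{len(parts)}"
```

If the backend cannot return or recompute such a checksum for a multi-segment object (for example during a server-side copy), an etag-computation step like the one in the log would fail.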

State of the buckets:

$ openstack object list pydio
| .pydio
| TEST FOO/.pydio
| TEST FOO/bar.MP4
| recycle_bin/.pydio
$ openstack object list pydio+segments
| ...

Possibly related?

Hi @drzraf,

Could you take a look at the other thread (the one that you suggested)? It is indeed related.

By the way, I can see a compute-etag error. Could you tell me whether the files you are using are empty or contain data (and if so, whether it is the same data)?

The files are not empty (and span multiple segments); some are a couple of gigabytes.

Do you know more about how encryption relates to files not being deletable?
(I’d like to understand this in order to avoid the situation happening again.)
Could it be a race condition of some kind?

I started fresh and reset the storage: I deleted the backend S3 bucket and recreated it, encrypted from the start.

We started uploading again, but the same problem manifests: we cannot remove uploaded files/directories (something of a showstopper!).

After the first attempt, the removed directory and its files are expected to be inside “Recycle Bin”, but they are not: only the (empty) Foo Test directory is created there.

On any subsequent attempt I can see (from tasks.log):

{"level":"error","ts":"2020-03-16T12:20:27Z","logger":"pydio.grpc.tasks","msg":"Error while running action actions.tree.copymove","LogType":"tasks","SpanRootUuid":"5f9602e5-6780-11ea-9ff2-fa163ee72a12","SpanParentUuid":"5f9602e5-6780-11ea-9ff2-fa163ee72a12","SpanUuid":"5fa84913-6780-11ea-bac6-fa163ee72a12","OperationUuid":"copy-move-e47b9517-674f-4e5d-96a9-f9484a1aaada-0e67749b","error":"We encountered an internal error, please try again."}

A couple of other messages appear frequently in this situation (from pydio.log):

{"level":"info","ts":"2020-03-16T12:21:56Z","logger":"","msg":"{\"level\":\"error\",\"ts\":\"2020-03-16T12:21:56Z\",\"msg\":\"Cannot compute checksum for Foo Test/BAR.MP4\",\"error\":\"We encountered an internal error, please try again.\"}"}

{"level":"error","ts":"2020-03-16T12:22:23Z","logger":"","msg":"Error while deleting file","path":"recycle_bin/Foo Test/.pydio","target":"index://ovh","error":"{\"id\":\"\",\"code\":404,\"detail\":\"Could not compute path /recycle_bin/Foo Test/.pydio (Cache:GetNodeByPath [recycle_bin,Foo Test,.pydio] : potentialNodes not reduced to 1 value (potential:2, newPotential:0)\",\"status\":\"Not Found\"}"}

The storage backend contains:

Foo Test/BAR.MP4              # <---- still here
recycle_bin/Foo Test/.pydio

and the storage+segments backend’s bucket contains the regular segments:

Foo Test/BAR.MP4/xxxx/{0..125}

What’s happening?

Hi, did you upgrade to the latest version?
There were issues with S3 APIs not supporting CopyObject for files > 5 GB.
On the other hand, you seem to be using S3-compatible storage; can you check that it supports the CopyObjectMultipart API?

I tried 2.0.4, but the same problem occurs.

One additional log line I just noticed is this one:

{"level":"info","ts":"2020-03-16T19:26:16Z","logger":"","msg":"{\"level\":\"info\",\"ts\":\"2020-03-16T19:26:16Z\",\"msg\":\"Got errors on datasource, should resync now branch: /ovh/baz.dmg\"}"}

(I don’t know whether it’s related or important.)

Needless to say, this is a blocking usability issue. How can I help debug it further?
Are any S3 traffic or advanced logging capabilities offered?

May I hope for some kind of help or hints to tackle this fairly blocking issue? :slight_smile: