New folder does not appear after creating

Expected behavior: after creating a new folder, it shows up in web UI.
Actual behavior: the newly created folder does not show up. The REST request returns 200, and the folder is created in the underlying storage. The folder only shows up after a storage re-sync.

Cells version: 4.0.2
Storage type: Local file system
Deployment type: single node

Relevant log entry:

ERROR   pydio.rest.tree Rest Error 500  {"error": "Put \"http://127.0.0.1:38187/cellsdata/username/cell1/new-folder/.pydio\": context canceled", "SpanUuid": "a07c34c7-67a5-4571-914e-52b2f83e7065", "RemoteAddress": "127.0.0.1", "UserAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36", "ContentType": "application/json", "HttpProtocol": "HTTP/1.1", "UserName": "username", "UserUuid": "cd746254-5493-4f19-95aa-12406cdc597e", "GroupPath": "/", "Profile": "admin", "Roles": "ROOT_GROUP,ADMINS,cd746254-5493-4f19-95aa-12406cdc597e"}

Question: how did you create the new folder? Was it from the Web interface, via WebDAV, cec CLI command, S3 API…?

Because if you just went into the structured storage folder and ran a `mkdir NewDir`, then no, Pydio Cells will have no knowledge of that folder until you do a storage re-sync.

In other words, that would be the expected behaviour.

Hi. It was from the web interface. It’s shown in the attached log entry.

Ah! I saw the log error entry, but I’m not really familiar enough with the logs to understand if they came from the web interface or not.

All I can infer from the log is that Cells seemed to have created the new folder but then failed to create the required .pydio file inside it.

Do you have a reverse proxy (such as nginx) in front of your Cells installation? I would assume so from the 127.0.0.1 in the URL. Might there be a rule forbidding access to files starting with a dot? Most ‘standard’ nginx configurations have one (mostly to restrict access to any leftover .htaccess files inherited from a previous Apache installation). You might at least take a look at that…
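For reference, the kind of rule I mean looks something like this in many nginx hardening templates (a generic example, not taken from this install — Traefik, which the OP turns out to use, is configured differently):

```nginx
# Common nginx hardening rule: deny access to any dotfile/dotfolder.
# If requests to hidden files were proxied this way, it would also
# block paths ending in .pydio, Cells' hidden folder marker file.
location ~ /\. {
    deny all;
    return 404;
}
```

If such a rule exists in front of Cells, the PUT of the hidden `.pydio` file could be rejected at the proxy before ever reaching the server.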

Both the folder and the .pydio file were successfully created.

Regarding the reverse proxy, I do have an instance of Traefik, but it isn’t that relevant in this case, since there shouldn’t be any “webserver style” file accesses happening.

Regarding the log, my proxy sets up forwarding headers, so Cells sees the real IPs. I just edited them out of the log.

I should probably add that both proxy and Cells are running in docker containers, so no leftover files anywhere…

Hi @giesmininkas,

What do you mean by storage resync? Do you go to the console, Storage, then resync, or do you just use the reload button directly on the page?

Does it happen every time you create a folder? Does it happen for file uploads?

Seeing the folder is created properly, I’m leaning towards a websocket issue, but I could be wrong. The “context canceled” could just be a request that was started and the page was left too early for the request to finish. Not necessarily a big issue in that context. Can you see anything in the browser’s dev console wrt websockets / js issues?

Hi. The last time I tried, it happened every time I tried to manually create a folder (left click → new folder). Interestingly, uploading a folder with files in it works.

The REST request for folder creation returns “200 OK”. I don’t see any other indications of something going wrong either, except the attached log, and the folder not appearing on the screen. Refreshing WebUI does not help either.

By “storage resync” I mean going to the WebUI “Cells Console” → Storage → Resynchronize.

Thanks, that rules out a websocket problem then.

Did you have the “context canceled” error message for every folder created? Did it always happen on the creation of the .pydio file?

Thanks,
Greg

Also, is it possible for you to bypass Traefik and create a folder using Cells directly, without a proxy?

We’d like to see if it is the way Traefik closes a connection that causes the issue.
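Since both Cells and Traefik run in docker here, one way to test this (a hypothetical compose fragment — the image tag and port mapping are assumptions, adjust them to your actual install) is to publish the Cells port directly on the host and create a folder against that address, skipping the proxy entirely:

```yaml
# Hypothetical docker-compose fragment: expose Cells directly on the
# host so the web UI can be reached without going through Traefik.
services:
  cells:
    image: pydio/cells:4.0.4   # assumed tag, match your version
    ports:
      - "8080:8080"            # Cells' default port; adjust if changed
```

If folder creation works via the direct port but fails through Traefik, that would point at the proxy closing the upstream connection early.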

@giesmininkas I tried some changes in the latest build; maybe that could fix that event propagation issue. Try the latest build as mentioned in the other thread :wink:

Hi. This issue seems to be fixed after upgrading to v4.0.3. I’m not sure if this was intended. If it wasn’t, let’s investigate further. I’ll update this thread if it repeats.

Good news, it was indeed intended! But do update if it repeats, as it was a “blind fix”…

Hi again. I’ve encountered a different problem, but it may be related. Creating a folder still works, but renaming a newly created empty folder does not. Neither the Web GUI nor the underlying structured storage updates. There’s no error log entry. Exactly the same story with deleting the newly created folder.

I’ve updated Cells to v4.0.4.

Let me know if I should create a separate thread here. Thanks for your work!

2022-11-13T17:01:49.372Z        INFO    pydio.rest.jobs Creating copy/move job  {"paths": ["cellsdata/user1/cell1/folder1"], "target": "cellsdata/user1/cell1/folder2", "SpanUuid": "c9c39541-b5f4-4fa5-a9f6-0142594539c2", "RemoteAddress": "123.123.123.123", "UserAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36", "ContentType": "application/json", "HttpProtocol": "HTTP/1.1", "UserName": "user1", "UserUuid": "cd746254-5493-4f19-95aa-12406cdc597e", "GroupPath": "/", "Profile": "admin", "Roles": "ROOT_GROUP,ADMINS,cd746254-5493-4f19-95aa-12406cdc597e"}

2022-11-13T17:09:02.895Z        INFO    pydio.rest.tree Definitively deleting [cellsdata/user1/cell1/folder1]     {"SpanUuid": "9260d999-efbe-4219-b22b-43c62f1dac62", "RemoteAddress": "123.123.123.123", "UserAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36", "ContentType": "application/json", "HttpProtocol": "HTTP/1.1", "UserName": "user1", "UserUuid": "cd746254-5493-4f19-95aa-12406cdc597e", "GroupPath": "/", "Profile": "admin", "Roles": "ROOT_GROUP,ADMINS,cd746254-5493-4f19-95aa-12406cdc597e"}

Another few issues I’ve noticed for quite some time now.

When I drop ~5 folders with ~500 files in total, the Web GUI uploader spends some time just showing “analyzing XXX files”, then the folders to upload appear. Underneath every folder there is a “Creating folder” label. Then it breaks with “Timeout of 60000ms exceeded” and no further uploading happens.

Also, neither file nor folder deletion and renaming (either empty or otherwise) works on v4.0.4.

Also, the “Cells Console” → “Storage” → “Re-synchronize” buttons do not trigger a resync in v4.0.4.

Hello @giesmininkas, it seems to me that you must have other issues that produce the effects you describe (as side-effects).
Can you give more info about your install, especially the hardware used? (docker or not, cpu, ram, DB version, mounted filesystem, etc…)
Thx

Hi. Yeah, I’ve been trying to use Cells since v2.0.x, and had those kinds of problems all the time.

My current setup is:
Ryzen-based server (x64) with plenty of RAM, and ~30 other docker containers all running smoothly.
Docker v20.10
Cells v4.0.4 in docker
MySQL v8.0.31 running in docker on the same server
Traefik reverse proxy, also used by a multitude of other services.
btrfs filesystem bind-mounted to /var/cells/data. Was previously using an ext4 partition. The directory is not used by any other container and/or service.

Let me know what other stuff you’d need.

The Cells container is running with mostly default settings, except the following env variables:

Also, if it helps, I have ~300,000 files and ~4,000 folders in the Cells /cellsdata, /personal, and /pydiods1 combined.

Another update:

Also, neither file nor folder deletion and renaming (either empty or otherwise) works on v4.0.4.
Also, in “Cells Console” → “Storage” → “Re-synchronize” buttons do not trigger resync in v4.0.4.

Said functions started working again after a recent Cells container restart. Not sure what caused them to stop working in the first place. I’ll try to track it down if it repeats.

In the logs I see many entries just like this:

GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.gateway.websocket$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$
GRPC broker stream.Send stuck after 30s subscriber was s:^pydio.grpc.tasks$

It seems the issues have not repeated since. This can be closed.