Fun with the development version! (NOT!)

Mmmh. I’m fine, thanks, and I hope you are as well :grinning_face_with_smiling_eyes:

Aye, I do love to live dangerously! You know, I’ve never given up hope that, one day, I will finally be able to access Pydio via its S3 API and do beautiful syncing with all the fantastic tools I’ve got… and if that means enduring all the configuration nightmares of a rough alpha version, so be it! I’m ready :wink:

Anyway, you’re absolutely right: I did not install 4.9.2. I was afraid of overwriting everything and, well, losing all my data!

Well, I guess I should have taken the latter for granted, gone ahead, and done everything from scratch…

So… I now have v4.9.2-alpha14, compiled it, and started a ‘new’ configuration. I got stuck right at the beginning: I have MariaDB running locally on a Unix socket, but Cells didn’t like it:

address /var/run/mysqld/mysqld.sock: missing port in address

Weird, right? Aye, I tried giving it a ‘fake’ port (e.g. /var/run/mysqld/mysqld.sock:3306 or so), which it sort of accepts, but it breaks the first time it actually accesses the database.

Fortunately, by sheer neglect, my database server is still listening on port 3306 (on localhost only), and Cells had no problem accessing it that way. I might manually revert that in the new configuration and see if it’s just an input-validation issue (i.e. an extraneous check for a : which should only apply to the TCP option). But, for now, I’m happy to go ahead with port 3306.

The next steps went smoothly. I skipped creating a new admin user (maybe that wasn’t such a good idea) and restarted Cells (via systemd).

Now I get a different error when trying to log in:

Nov 01 10:37:03 my-server cells[2096100]: {"level":"error","ts":"2025-11-01T10:37:03Z","logger":"pydio.grpc.oauth","msg":"[GRPC]/auth.PasswordCredentialsToken/PasswordCredentialsToken invalid_grant","errorId":"567c4ed9-6220","ClientCaller":"/[redacted]/Developer/cells/common/auth/hydra/hydra.go:195:hydra.PasswordCredentialsToken()","error":"invalid_grant"}
Nov 01 10:37:03 my-server cells[2096100]: {"level":"error","ts":"2025-11-01T10:37:03Z","logger":"pydio.rest.frontend","msg":"[REST]/a/frontend/session invalid_grant","ErrorCauseId":"567c4ed9-6220","error":"invalid_grant","tag":"frontend","RemoteAddress":"127.0.0.1","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36","ContentType":"application/json","HttpProtocol":"HTTP/1.1","RequestHost":"my-server.domain.name"}

And on the frontend, I now see:

Could not recognize your name or password, did you check your Caps Lock key?

I’m sure there must be a way, somewhere, to flush the ‘grants’ database, wherever it might be stored, and force the system to generate new grants, or whatever is required for that to happen.

There is, however, a lot of good news: from the CLI, I can see that most things seem to be fully operational. A quick copy & paste from the old configuration gets my datasources back, and I can confirm that everything seems to be there. For instance:

./cells admin file ls --path pydiods1 --uuid
Listing nodes at pydiods1
+------+--------------------------------+--------------------------------------+--------+-----------------+
| TYPE |              PATH              |                 UUID                 |  SIZE  |    MODIFIED     |
+------+--------------------------------+--------------------------------------+--------+-----------------+
| File | Frank a dormir na nossa        | 827e7625-... | 3.3 MB | 15 May 20 08:10 |
|      | cama.jpg                       |                                      |        |                 |
| File | Mies pequenino a comer na      | 33de865c-... | 40 MB  | 15 May 20 08:11 |
|      | varanda todo esfomeado.mov     |                                      |        |                 |
+------+--------------------------------+--------------------------------------+--------+-----------------+
Showing 2 results

Even more esoteric things like reading/setting metadata work:

./cells admin file meta-read --uuid 827e7625-...
Listing meta for node 827e7625-...
+--------------------+------------------------------------------------------------------------------------------------------------------+
|        NAME        |                                                      VALUE                                                       |
+--------------------+------------------------------------------------------------------------------------------------------------------+
| ImageThumbnails    | {"Processing":false,"thumbnails":[{"format":"jpg","id":"sm","size":300},{"format":"jpg","id":"md","size":1024}]} |
| image_height       |                                                                                                             3024 |
| image_width        |                                                                                                             4032 |
| is_image           | true                                                                                                             |
| name               | "Frank a dormir na nossa                                                                                         |
|                    | cama.jpg"                                                                                                        |
| readable_dimension | "4032px X 3024px"                                                                                                |
| GeoLocation        | {"GPS_altitude":".....","GPS_latitude":"... deg ...' ... N",                                                     |
|                    | "GPS_longitude":"... deg ...' ... N","lat":.....,"lon":.....}                                                    |
| ImageDimensions    | {"Height":3024,"Width":4032}                                                                                     |
+--------------------+------------------------------------------------------------------------------------------------------------------+

Granted, that only works for the pydiods1 workspace. All the others (all structured data on disk) get correctly listed with ./cells admin file ls, but they cannot be further inspected; everything fails with:

Nov 01 13:21:21 my-server cells[2206232]: {"level":"error","ts":"2025-11-01T13:21:21Z","logger":"pydio.grpc.tasks","msg":"[goque] Received many errors while consuming messages (prefix:resync-ds-personal), data may be corrupted, you may have to restart and clear the fifo corresponding folder: ","error":"goque: ID used is outside range of stack or queue","tag":"scheduler"}
[... a restart at some point...]
Nov 01 17:00:24 my-server cells[2531037]: {"level":"error","ts":"2025-11-01T17:00:24Z","logger":"pydio.grpc.tree","msg":"Cannot compute DataSource size, skipping","dsName":"name-of-my-datasource","error":"rpc error: code = Canceled desc = context canceled","tag":"data"}
[... back to many more [goque] Received many errors ...]

Eventually, after some three attempts (or so) at reading a specific filesystem, the system gives up (timeout) and fails: it’s stuck at the root and cannot proceed further.

Now, I’m personally not surprised: after all, I’m reusing the v4.X configuration and trying to patch it onto the v4.9.2/v5 ‘new’ configuration. I’m sure the main issue here is that I have all the wrong keys for accessing the datasources and workspaces.

That said, I’m intrigued by the suggestion given in the actual error: ‘clear the fifo corresponding folder:’. Interesting, because the code in common/broker/goque/goque.go shows that, at that line (124, in the version I have right now), it should have appended a g.dataDir, which is apparently empty, since it doesn’t show up in the logs.

Now, don’t expect me to read your million lines of code and become an ‘instant expert’ :smiley: The code is intricate and complex (and I love Go :heart: !), but fully understanding it would take much more time than I have at my disposal right now.

The little I could understand is that this dataDir is generated when calling OpenURL(), which sets things up with a path and the queue name, e.g. queueName := "fifo-" + streamName.

I also don’t know what ‘path’ means in this context: I’m assuming it’s a fully virtualised ‘path’ (because the storage backend may be ‘anything’), which only makes sense in the context of the Cells ‘deep’ internals. But perhaps there is a (hidden) feature somewhere which allows one to force the FIFO queue to be ‘cleaned up’.

I also re-installed everything from scratch using the CLI, instead of assembling bits & pieces from (potentially broken) configurations; this time, I just created one registered user (admin, of course!), set a password, etc. Unfortunately, no matter what password I choose, it never works.

The errors are the same as before; that is, there is no real difference before or after my heavy tweaking and copying & pasting of old configurations on top of new ones.

The only other thing that comes to mind is that the MariaDB database is corrupted (from the perspective of 4.9.2-alpha14, that is) and there are errors not being propagated up to the logging level. AFAICS, there is nothing ‘surprising’ in the database; a lot of what I had expected to be there is, well, there. But I can imagine that things such as signatures or access keys were changed while the database schema wasn’t.

And of course it’s not just the relational database(s); there is also the key-value storage for BoltDB, Bleve, LevelDB, etc. These are not as easily inspected, and I’m not quite sure what is stored in each of them.

…maybe I should just give up on the ‘migration’ attempts and truly start from scratch? While I have several GBytes of data, it’s all in structured directories, i.e. no flat storage. I would obviously have to recreate all users, all groups, and all ACLs, and of course lose all history and metadata, but since I don’t have that many users, this might be an alternative.

But I’ll avoid the ‘nuclear option’ for as long as I can! :rofl: