Fun with the development version! (NOT!)

My issue

When attempting to log in with a local user (no other possibility has been configured), I get the following error:

Unauthorized (cannot find sessions.DAO)

and the logs show:

ERROR        pydio.rest.frontend        Cannot resolve DAO: could not find compatible storage for DAO parameter: dao resolution failed

And, sure enough, when restarting Cells, I get these entries:

JSON logs
{"level":"info","ts":"2025-09-01T12:55:18+01:00","logger":"pydio.caddy.tls","msg":"finished cleaning storage units"}
{"level":"warn","ts":"2025-09-01T17:25:07+01:00","logger":"pydio.caddy.http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":":8032","duration":0.879121858,"request":{"client_ip":"127.0.0.1","headers":{"User-Agent":[""],"X-Forwarded-For":["127.0.0.1"],"X-Forwarded-Host":["127.0.0.1:8443"],"X-Forwarded-Proto":["https"],"X-Pydio-Site-Hash":["---[REDACTED]---"],"X-Real-Ip":["127.0.0.1:40700"]},"host":"127.0.0.1:8443","method":"GET","proto":"HTTP/1.1","remote_ip":"127.0.0.1","remote_port":"40700","tls":{"cipher_suite":4865,"proto":"","resumed":false,"server_name":"","version":772},"uri":"/"},"error":"writing: write tcp 127.0.0.1:8443->127.0.0.1:40700: write: broken pipe"}
{"level":"error","ts":"2025-09-01T17:25:28+01:00","logger":"pydio.rest.frontend","msg":"Cannot resolve DAO: could not find compatible storage for DAO parameter: dao resolution failed"}
{"level":"info","ts":"2025-09-01T17:25:36+01:00","logger":"pydio.caddy.http","msg":"servers shutting down with eternal grace period"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.web.proxy","msg":"Starting caddy as reverse-proxy"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","msg":"Registered Queue queue-debouncer with pool from uri mem://?debounce={{ .debounce }}&idle={{ .idle }}&max={{ .max }}&openerID={{ .openerID }}"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","msg":"Registered Queue queue-persistent with pool from uri fifo://{{ autoMkdir ( serviceDataDir .Service ) }}?name={{ .name }}&prefix={{ .prefix }}"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","msg":"Registered Cache cache-shared with pool from uri bigcache://?evictionTime={{ .evictionTime }}&cleanWindow={{ .cleanWindow }}&prefix={{ .prefix }}"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","msg":"Registered Cache cache-local with pool from uri pm://?evictionTime={{ .evictionTime }}&cleanWindow={{ .cleanWindow }}&prefix={{ .prefix }}"}
{"level":"warn","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.caddy.admin","msg":"admin endpoint disabled"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.caddy.tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00243af00"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.rest.policy","msg":"starting","service":"pydio.rest.policy","hook router to":"/policy","tag":"idm"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.rest.acl","msg":"starting","service":"pydio.rest.acl","hook router to":"/acl","tag":"idm"}
{"level":"warn","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.caddy.tls","msg":"stapling OCSP","error":"no OCSP stapling for [127.0.0.1]: no OCSP server specified in certificate"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.caddy.http.auto_https","msg":"skipping automatic certificate management becauseone or more matching certificates are already loaded","domain":"127.0.0.1","server_name":"srv0"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.caddy.http.auto_https","msg":"automatic HTTP->HTTPS redirects are disabled","server_name":"srv0"}
{"level":"info","ts":"2025-09-01T17:25:40+01:00","logger":"pydio.caddy.http","msg":"enabling HTTP/3 listener","addr":":8443"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
...
{"level":"warn","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.tls","msg":"stapling OCSP","error":"no OCSP stapling for [127.0.0.1]: no OCSP server specified in certificate"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.gateway.wopi","msg":"ready"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.http.auto_https","msg":"skipping automatic certificate management becauseone or more matching certificates are already loaded","server_name":"srv0","domain":"127.0.0.1"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.http.auto_https","msg":"automatic HTTP->HTTPS redirects are disabled","server_name":"srv0"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.grpc.install","msg":"ready"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.http","msg":"enabling HTTP/3 listener","addr":":8443"}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":"2025-09-01T17:25:41+01:00","logger":"pydio.caddy.http","msg":"servers shutting down with eternal grace period"}
...
{"level":"warn","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.admin","msg":"admin endpoint disabled"}
{"level":"warn","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.tls","msg":"stapling OCSP","error":"no OCSP stapling for [127.0.0.1]: no OCSP server specified in certificate"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.http.auto_https","msg":"skipping automatic certificate management becauseone or more matching certificates are already loaded","domain":"127.0.0.1","server_name":"srv0"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.http.auto_https","msg":"automatic HTTP->HTTPS redirects are disabled","server_name":"srv0"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.grpc.jobs","msg":"ready"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.http","msg":"enabling HTTP/3 listener","addr":":8443"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.http.log","msg":"server running","protocols":["h1","h2","h3"],"name":"srv0"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.caddy.http","msg":"servers shutting down with eternal grace period"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.grpc.data.index","msg":"ready"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.grpc.oauth","msg":"ready"}
...

And now the juicy bits:

{"level":"error","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.grpc.install","msg":"could not initialise service at version 4.9.92-alpha08","e
rror":"could not find compatible storage for DAO parameter: dao resolution failed","errorVerbose":"could not find compatible storage for DAO para
meter: dao resolution failed\ngithub.com/pydio/cells/v5/common/runtime/manager.Resolve[...]\n\tgithub.com/pydio/cells/v5/common/runtime/manager/r
esolve.go:249\ngithub.com/pydio/cells/v5/idm/role/grpc.InitRoles\n\tgithub.com/pydio/cells/v5/idm/role/grpc/first-run.go:51\ngithub.com/pydio/cel
ls/v5/common/service.applyMigrations\n\tgithub.com/pydio/cells/v5/common/service/versions.go:240\ngithub.com/pydio/cells/v5/common/service.UpdateServiceVersion\n\tgithub.com/pydio/cells/v5/common/service/versions.go:119\ngithub.com/pydio/cells/v5/discovery/install/grpc.(*Handler).Migrate\n\tgithub.com/pydio/cells/v5/discovery/install/grpc/handler.go:144\nreflect.Value.call\n\treflect/value.go:584\nreflect.Value.Call\n\treflect/value.go:368\ngithub.com/pydio/cells/v5/common/server/grpc.(*Server).prepareInternalOptions.func1\n\tgithub.com/pydio/cells/v5/common/server/grpc/grpc.go:300\ngithub.com/pydio/cells/v5/common/server/grpc.getChainUnaryHandler.func1\n\tgithub.com/pydio/cells/v5/common/server/grpc/interceptor.go:23\ngithub.com/pydio/cells/v5/common/server/grpc.(*Server).prepareInternalOptions.func2\n\tgithub.com/pydio/cells/v5/common/server/grpc/grpc.go:318\ngithub.com/pydio/cells/v5/common/server/grpc.(*Server).prepareInternalOptions.HandlerUnaryInterceptor.func6\n\tgithub.com/pydio/cells/v5/common/server/grpc/interceptor.go:14\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/server/grpc.(*Server).prepareInternalOptions.unaryEndpointInterceptor.func5\n\tgithub.com/pydio/cells/v5/common/server/grpc/grpc-endpoint.go:79\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.ErrorFormatUnaryInterceptor\n\tgithub.com/pydio/cells/v5/common/middleware/errors-grpc.go:79\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.ContextUnaryServerInterceptor.func10\n\tgithub.com/pydio/cells/v5/common/utils/propagator/grpc.go:43\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.ContextUnaryServerInterceptor.func9\n\tgithub.com/pydio/cells/v5/common/utils/propagator/grpc.go:43\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.ContextUnaryServerInterceptor.func7\n\tgithub.com/pydio/cells/v5/common/utils/propagator/grpc.go:43\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.ContextUnaryServerInterceptor.func5\n\tgithub.com/pydio/cells/v5/common/utils/propagator/grpc.go:43\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/grpc-ecosystem/go-grpc-middleware/recovery.UnaryServerInterceptor.func1\n\tgithub.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/recovery/interceptors.go:33\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.ContextUnaryServerInterceptor.func4\n\tgithub.com/pydio/cells/v5/common/utils/propagator/grpc.go:43\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.ContextUnaryServerInterceptor.func2\n\tgithub.com/pydio/cells/v5/commo
n/utils/propagator/grpc.go:43\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1212\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryServerInterceptors.MetricsUnaryServerInterceptor.func1\n\tgithub.com/pydio/cells/v5/common/middleware/metrics.go:59\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.69.2/server.go:1203\n","tag":"idm"}
{"level":"info","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.grpc.install","msg":"Error while updating service version for pydio.grpc.role","error":"cannot update service version for pydio.grpc.role (could not find compatible storage for DAO parameter: dao resolution failed)","tag":"idm"}
{"level":"error","ts":"2025-09-01T17:25:42+01:00","logger":"pydio.grpc.install","msg":"[GRPC]/service.MigrateService/Migrate cannot update service version for pydio.grpc.role (could not find compatible storage for DAO parameter: dao resolution failed)","errorId":"2757b86f-f105","ClientCaller":"github.com/pydio/cells/v5/cmd/start.go:386:cmd.init.func86()","error":"cannot update service version for pydio.grpc.role (could not find compatible storage for DAO parameter: dao resolution failed)"}
{"level":"warn","ts":"2025-09-01T17:25:42+01:00","msg":"Ignoring migration failure","error":"rpc error: code = Unknown desc = cannot update service version for pydio.grpc.role (could not find compatible storage for DAO parameter: dao resolution failed)\nhandled\nhandled","errorVerbose":"rpc error: code = Unknown desc = cannot update service version for pydio.grpc.role (could not find compatible storage for DAO parameter: dao resolution failed)\nhandled\nhandled\ngithub.com/pydio/cells/v5/common/errors.Tag\n\tgithub.com/pydio/cells/v5/common/errors/lib.go:102\ngithub.com/pydio/cells/v5/common/middleware.FromGRPC\n\tgithub.com/pydio/cells/v5/common/middleware/errors-grpc.go:235\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryClientInterceptors.ErrorFormatUnaryClientInterceptor.func2\n\tgithub.com/pydio/cells/v5/common/middleware/errors-grpc.go:93\ngoogle.golang.org/grpc.getChainUnaryInvoker.func1\n\tgoogle.golang.org/grpc@v1.69.2/clientconn.go:470\ngithub.com/pydio/cells/v5/common/middleware.GrpcUnaryClientInterceptors.ErrorNoMatchedRouteRetryUnaryClientInterceptor.func1\n\tgithub.com/pydio/cells/v5/common/middleware/errors-grpc.go:51\ngoogle.golang.org/grpc.NewClient.chainUnaryClientInterceptors.func1\n\tgoogle.golang.org/grpc@v1.69.2/clientconn.go:458\ngoogle.golang.org/grpc.(*ClientConn).Invoke\n\tgoogle.golang.org/grpc@v1.69.2/call.go:35\ngithub.com/pydio/cells/v5/common/client/grpc.(*clientConn).Invoke\n\tgithub.com/pydio/cells/v5/common/client/grpc/grpc.go:269\ngithub.com/pydio/cells/v5/common/proto/service.(*migrateServiceClient).Migrate\n\tgithub.com/pydio/cells/v5/common/proto/service/cells-service_grpc.pb.go:242\ngithub.com/pydio/cells/v5/cmd.init.func86\n\tgithub.com/pydio/cells/v5/cmd/start.go:386\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.9.1/command.go:1015\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.9.1/command.go:1148\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.9.1/command.go:1071\ngithub.com/spf13/cobra.(*Command).ExecuteContext\n\tgithub.com/spf13/cobra@v1.9.1/command.go:1064\ngithub.com/pydio/cells/v5/cmd.Execute\n\tgithub.com/pydio/cells/v5/cmd/root.go:125\nmain.main\n\tgithub.com/pydio/cells/v5/main.go:203\nruntime.main\n\truntime/proc.go:283\nruntime.goexit\n\truntime/asm_amd64.s:1700\n"}

Fun, isn’t it?

My question is actually a simple one: what exactly is “DAO” and how do I get it to behave?

I’m sure it’s none of the meanings my acronym database found:

$ wtf DAO
DAO	Data Access Objects (DB)
DAO	Destination Address Omitted [flag] (CATNIP)
DAO	Disk At Once (CD-R, SAO)

What version of Cells are you using?

4.9.92-alpha08 (aye, I know: it’s an alpha version!), Home Edition

What is the server OS? Database name/version? Browser name or mobile device description (if issue appears client-side)?

OS: Ubuntu Linux 24.04.3 LTS (kernel 6.8.0-79)
Database server: MariaDB Ver 15.1 Distrib 10.11.13

To make things more complicated (or perhaps not), I run this version of Pydio Cells in three layers.

  1. Layer one: no Docker or VM; Cells runs directly on metal (it needs all the CPU it can get!). It’s bound to 127.0.0.1.
  2. Layer two: Nginx reverse proxy (for both HTTP/S traffic as well as gRPC). Stable configuration for years. Local hostname has a Let’s Encrypt ECC certificate. Local firewall (ufw/iptables) configured and also operational for years.
  3. Layer three: Firewall at the data centre layer. No issues there for years as well (but sometimes I turn it off to be really sure).

(The production version sometimes runs under the Cloudflare umbrella as well but, to keep things easier to debug and configure, this alpha instance doesn’t.)

While the logs are a bit obscure here, I think there is some kind of communication problem between Caddy and one of the many gRPC serverlets. The error "no OCSP stapling for [127.0.0.1]: no OCSP server specified in certificate" is highly suspicious, since Let’s Encrypt discontinued OCSP support quite a while ago[1].

I wonder, therefore, if it’s Caddy that is requesting OCSP stapling for this certificate, fails to retrieve it (because Let’s Encrypt certificates already don’t mention the OCSP endpoint), crashes or something similar, and some services, namely the mysterious “DAO”, stop working.

If that’s the case, my guess is that the simple fix would be to disable OCSP stapling requests in Caddy, and all else would work. Right?

There is one file for my domain under .config/pydio/cells/caddy/ocsp — it’s dated from Oct. 12, 2022 (!). Should I remove it?

What steps have you taken to resolve this issue already?

Uh, none, to be honest… I read some Web pages and did some searches on Google, in vain!

I did set "disable_ocsp_stapling": true inside .config/pydio/cells/caddy/autosave.json, but it had no effect; I presume that Caddy is really using a different configuration file, injected directly by Cells.

One thing I did try was to replace Caddy’s self-signed certificate with the real one. This didn’t work out as intended: Caddy seems to be even more confused, still tries to use its own self-signed certificates, and now fails with a weird error:

ERROR        pydio.caddy.pki.ca.local        failed to install root certificate        {"error": "failed to execute sudo: exit status 1", "certificate_file": "storage:pki/authorities/local/root.crt"} 

Erm… no, I most definitely do not want Cells or Caddy to get sudo access on that machine!

Whatever it was trying to do, in any case, it failed, and the remaining errors are as listed before — it’s still complaining about could not find compatible storage for DAO parameter: dao resolution failed. No luck there! :grin:


  1. Also see a relevant discussion on the Let’s Encrypt Community Forums: What will happen to Must-Staple - #26 by jvanasco - Issuance Policy - Let's Encrypt Community Support ↩︎

That’s it.
It seems to be a problem with the database URI.


Hmm. At a glance, indeed, the “new” pydio.json, which was (allegedly?) generated from scratch, only tried to retrieve some data from the old configuration file, blanking out whenever it didn’t find anything it liked.

In particular, the database configuration seems to be… missing!? But copying & pasting that configuration from the previously stable version did not result in any success whatsoever; Cells seems to skip/ignore those entries. Maybe the database configuration parameters are now under a completely new section, or have a totally different format; whatever the case, it’s clear that the system is unable to figure out how the workspaces are set up and what gives access to them.

Thanks for the insight; I haven’t been able to fix anything yet, but at least I know where to look.

Hey @GwynethLlewelyn hope you are well!
So you like to live on the edge with alpha versions :wink:
Bottom line: no compatible DAO => a misconfigured DB connection somewhere.
The big question: did you install this alpha from scratch, or did you simply upgrade a v4?

The migration v4 => v5 was not even ready in alpha8, so your JSON is probably not “v5-compatible”.
Try reinstalling the latest dev build (e.g. a nightly) from scratch.

Mmmh. I’m fine, thanks, and I hope you are as well :grinning_face_with_smiling_eyes:

Aye, I do love to live dangerously! You know, I’ve never given up the hope that, one day, I will finally be able to access Pydio via its S3 API, and do beautiful syncing with all the fantastic tools I’ve got… and if that means going through all the nightmares of enduring the configuration of a rough alpha version… so be it! I’m ready :wink:

Anyway, you’re absolutely right: I did not install 4.9.92 from scratch. I was afraid of overwriting everything and, well, losing all my data!…

Well, I guess I should have taken the latter for granted, gone ahead, and done everything from scratch…

So… I got v4.9.2-alpha14 now, compiled it, and started a ‘new’ configuration. I got stuck right at the beginning: I have MariaDB running locally on a Unix socket, but Cells didn’t like it:

address /var/run/mysqld/mysqld.sock: missing port in address

Weird, right? Aye, I tried giving it a ‘fake’ port (e.g., /var/run/mysqld/mysqld.sock:3306 or so), which it sort of accepts, but it eventually breaks the first time it actually accesses the database.

Fortunately, by sheer neglect, my database server is still listening on port 3306 (on localhost only), and Cells had no problems accessing it that way. I might manually revert to the socket in the new configuration later, to see if it’s just an input-validation issue (i.e., an extraneous check for : which should only apply to the TCP option). But, for now, I’m happy to go ahead with port 3306.
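For the record, that error text is the classic complaint from Go’s net.SplitHostPort, which suggests the installer parses the address as host:port before it ever builds a DSN. A tiny sketch of the difference, assuming Cells ultimately uses the standard go-sql-driver/mysql DSN syntax (credentials and database name below are placeholders):

package main

import (
	"database/sql"
	"fmt"
	"net"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

func main() {
	// What the installer seems to do: treat the address as host:port.
	// A socket path has no port, hence the exact error from the logs above:
	_, _, err := net.SplitHostPort("/var/run/mysqld/mysqld.sock")
	fmt.Println(err) // address /var/run/mysqld/mysqld.sock: missing port in address

	// The driver itself is happy with a Unix socket, via unix(...) in the DSN
	// instead of the default tcp(host:port). Placeholder credentials/db name:
	db, err := sql.Open("mysql", "cells:secret@unix(/var/run/mysqld/mysqld.sock)/cells")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	fmt.Println(db.Ping()) // Ping actually opens a connection
}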

The next steps went smoothly. I skipped creating a new admin user (maybe that wasn’t such a good idea) and restarted Cells (from systemd).

Now I get a different error when trying to log in:

Nov 01 10:37:03 my-server cells[2096100]: {"level":"error","ts":"2025-11-01T10:37:03Z","logger":"pydio.grpc.oauth","msg":"[GRPC]/auth.PasswordCredentialsToken/PasswordCredentialsToken invalid_grant","errorId":"567c4ed9-6220","ClientCaller":"/[redacted]/Developer/cells/common/auth/hydra/hydra.go:195:hydra.PasswordCredentialsToken()","error":"invalid_grant"}
Nov 01 10:37:03 my-server cells[2096100]: {"level":"error","ts":"2025-11-01T10:37:03Z","logger":"pydio.rest.frontend","msg":"[REST]/a/frontend/session invalid_grant","ErrorCauseId":"567c4ed9-6220","error":"invalid_grant","tag":"frontend","RemoteAddress":"127.0.0.1","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36","ContentType":"application/json","HttpProtocol":"HTTP/1.1","RequestHost":"my-server.domain.name"}

And on the frontend, I now see:

Could not recognize your name or password, did you check your Caps Lock key?

I’m sure there must be a way somewhere to flush the ‘grants’ database, wherever it might be stored, and force the system to generate new grants, or whatever is required for that to happen.

There is, however, a lot of good news: from the CLI, I can see that most things seem to be fully operational. A quick copy & paste from the old configuration gets my datastores back, and I can confirm that everything seems to be there. For instance:

./cells admin file ls --path pydiods1 --uuid
Listing nodes at pydiods1
+------+--------------------------------+--------------------------------------+--------+-----------------+
| TYPE |              PATH              |                 UUID                 |  SIZE  |    MODIFIED     |
+------+--------------------------------+--------------------------------------+--------+-----------------+
| File | Frank a dormir na nossa        | 827e7625-... | 3.3 MB | 15 May 20 08:10 |
|      | cama.jpg                       |                                      |        |                 |
| File | Mies pequenino a comer na      | 33de865c-... | 40 MB  | 15 May 20 08:11 |
|      | varanda todo esfomeado.mov     |                                      |        |                 |
+------+--------------------------------+--------------------------------------+--------+-----------------+
Showing 2 results

Even more esoteric things like reading/setting metadata work:

./cells admin file meta-read --uuid 827e7625-...
Listing meta for node 827e7625-...
+--------------------+------------------------------------------------------------------------------------------------------------------+
|        NAME        |                                                      VALUE                                                       |
+--------------------+------------------------------------------------------------------------------------------------------------------+
| ImageThumbnails    | {"Processing":false,"thumbnails":[{"format":"jpg","id":"sm","size":300},{"format":"jpg","id":"md","size":1024}]} |
| image_height       |                                                                                                             3024 |
| image_width        |                                                                                                             4032 |
| is_image           | true                                                                                                             |
| name               | "Frank a dormir na nossa                                                                                         |
|                    | cama.jpg"                                                                                                        |
| readable_dimension | "4032px X 3024px"                                                                                                |
| GeoLocation        | {"GPS_altitude":".....","GPS_latitude":"...                                                                      |
|                    | deg ...' ... N","GPS_longitude":"... deg ...' ...                                                                    |
|                    | N","lat":.....,"lon":.....}                                                             |
| ImageDimensions    | {"Height":3024,"Width":4032}                                                                                     |
+--------------------+------------------------------------------------------------------------------------------------------------------+

Granted, that only works for the pydiods1 workspace. All the others (all structured data on disk) get correctly listed with ./cells admin file ls, but cannot be inspected any further; everything fails with:

Nov 01 13:21:21 my-server cells[2206232]: {"level":"error","ts":"2025-11-01T13:21:21Z","logger":"pydio.grpc.tasks","msg":"[goque] Received many errors while consuming messages (prefix:resync-ds-personal), data may be corrupted, you may have to restart and clear the fifo corresponding folder: ","error":"goque: ID used is outside range of stack or queue","tag":"scheduler"}
[... a restart at some point...]
Nov 01 17:00:24 my-server cells[2531037]: {"level":"error","ts":"2025-11-01T17:00:24Z","logger":"pydio.grpc.tree","msg":"Cannot compute DataSource size, skipping","dsName":"name-of-my-datasource","error":"rpc error: code = Canceled desc = context canceled","tag":"data"}
[... back to many more [goque] Received many errors ...]

Eventually, after some three attempts (or so) at reading a specific filesystem, the system gives up (timeout) and fails; it’s stuck at the root and cannot proceed any further.

Now, I’m personally not surprised: after all, I’m reusing the v4.x configuration and trying to patch it onto the v4.9.2/v5 ‘new’ configuration. I’m sure the main issue here is that I’m carrying over all the bad/wrong keys for accessing the datasources and workspaces.

That said, I’m intrigued by the suggestion given in the actual error: ‘clear the fifo corresponding folder:’. Interesting, because the code in common/broker/goque/goque.go shows, for that line (124, in the version I’ve got right now), that it should have appended a g.dataDir, which is apparently empty, since it doesn’t show up in the logs.

Now, don’t expect me to read your million lines of code and become an ‘instant expert’ :smiley: The code is intricate and complex (and I love Go :heart: !), but fully understanding it would take much more time than I have at my disposal right now.

The little bit I could understand was that this dataDir is generated when calling OpenURL(), which sets things up with a path and the queue name, e.g. queueName := "fifo-" + streamName.

I also don’t know what ‘path’ means in this context: I’m assuming it’s a fully virtualised ‘path’ (because the storage backend may be ‘anything’), which only makes sense in the context of the Cells ‘deep’ internals. But perhaps there is a (hidden) feature somewhere which allows one to force the FIFO queue to be ‘cleaned up’.
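Putting the startup registration line (fifo://{{ autoMkdir ( serviceDataDir .Service ) }}?name=…) together with that OpenURL() behaviour, my working theory of how the on-disk folder is derived looks roughly like this (a sketch with a hypothetical helper, not the actual Cells code):

package main

import (
	"fmt"
	"net/url"
	"path/filepath"
)

// fifoDir is my guess at how the queue folder is derived; the real logic
// lives in common/broker/goque/goque.go, this is just an illustration.
func fifoDir(u *url.URL, streamName string) string {
	queueName := "fifo-" + streamName       // as seen in OpenURL()
	return filepath.Join(u.Path, queueName) // empty u.Path => bare relative dir
}

func main() {
	// With the template expanded, the folder sits under the service data dir:
	good, _ := url.Parse("fifo:///var/cells/services/pydio.grpc.tasks")
	fmt.Println(fifoDir(good, "resync-ds-personal"))
	// => /var/cells/services/pydio.grpc.tasks/fifo-resync-ds-personal

	// With an empty path, nothing sensible comes out, which would explain
	// the empty dataDir in the error message:
	bad, _ := url.Parse("fifo://")
	fmt.Println(fifoDir(bad, "resync-ds-personal")) // => fifo-resync-ds-personal
}

If that guess is anywhere near the truth, stopping Cells and clearing the corresponding fifo-* folder under the service’s data dir would be the manual equivalent of the cleanup the error message suggests.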

I did also re-install everything from scratch using the CLI, instead of assembling bits & pieces from (potentially broken) configurations; this time, I just created one registered user (admin, of course!), added a password, etc. Unfortunately, no matter what password I choose, it never works.

The errors are the same as before, that is, there is no real difference before or after my heavy tweaking, copying, and pasting old configurations on top of new ones.

The only other thing that comes to mind is that the MariaDB database is corrupted (from the perspective of 4.9.2-alpha14, that is) and that some errors are not being propagated up to the logging level. AFAICS, there is nothing ‘surprising’ in the database; a lot of what I expected to be there is, well, there. But I can imagine that things such as signatures or access keys might have changed while the database schema didn’t.

And of course it’s not just the relational database(s); there is also the key-value storage for boltdb, bleve, leveldb, etc. Those are not as easily inspected, and I’m not quite sure what is stored in each of them.
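For the bolt files at least, peeking is easy enough with go.etcd.io/bbolt itself; a minimal sketch (the path below is a placeholder, and I’d only ever run this against a copy of the file):

package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Open a *copy* of one of the .db files, read-only; the path is an example.
	db, err := bolt.Open("/tmp/copy-of-some-cells.db", 0o600, &bolt.Options{ReadOnly: true})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// List the top-level buckets, to get an idea of what is stored inside.
	err = db.View(func(tx *bolt.Tx) error {
		return tx.ForEach(func(name []byte, _ *bolt.Bucket) error {
			fmt.Printf("bucket: %s\n", name)
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}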

…maybe I should just give up on ‘migration’ attempts and truly start from scratch? While I have several GBytes of data, they’re all in structured directories, i.e. not flat storage. I would obviously have to recreate all users, all groups, and all ACLs, and of course lose all history and metadata; but since I don’t have that many users, this might be an alternative.

But I’ll avoid the ‘nuclear option’ for as long as I can! :rofl:

:waving_hand: hey
You kind of lose me.
The right path currently is:

  • backup your DB
  • backup the configuration files; the simplest is to back up all of CELLS_WORKING_DIR except for the data/ folder (if your GBytes of data are in there)
  • [maybe] fix the DB definition inside pydio.json, to ensure there will be no issue with Unix socket access (see the example below)
  • do not re-run the configure command
  • replace the v4 binary with the v5 one and restart

=> you should see some migrations happening. Tell us if something goes wrong!
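For reference, a v4 pydio.json typically carries the connection under the databases key; the socket variant would look something like this (the connection id and credentials below are placeholders to replace with your own):

"databases": {
  "default": "REPLACE-WITH-YOUR-CONNECTION-ID",
  "REPLACE-WITH-YOUR-CONNECTION-ID": {
    "driver": "mysql",
    "dsn": "cells:secret@unix(/var/run/mysqld/mysqld.sock)/cells?parseTime=true"
  }
}

The unix(...) part follows the Go MySQL driver’s DSN syntax.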

Or the other way round for even more security:

Replicate your instance to another CELLS_WORKING_DIR (ignoring the data folder for now) plus clone the DB, start this new instance on a different port with v4, then replace the binary to test the migration…
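Roughly like this; all paths, database names and binary names below are placeholders to adapt:

# copy the working dir, minus the (heavy) data/ folder; paths are examples
rsync -a --exclude data/ ~/.config/pydio/cells/ /tmp/cells-migration-test/
# clone the DB (create the empty cells_clone database first)
mysqldump cells | mysql cells_clone
# edit /tmp/cells-migration-test/pydio.json: point the dsn at cells_clone,
# and change the bound site/port so it does not clash with the live instance
CELLS_WORKING_DIR=/tmp/cells-migration-test ./cells-v4 start
# once the clone behaves, stop it, swap in the v5 binary and watch the migrations
CELLS_WORKING_DIR=/tmp/cells-migration-test ./cells-v5 start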
