[webdav] Gnome gvfs/goa DAV mount and final slash

When it comes to mounting on a recent Gnome Linux desktop, two components are involved.

One is GNOME Online Accounts, which currently does not support Pydio (an upstream GNOME limitation, not something Pydio can fix):

The other is gvfs/gio; here the issue is more subtle.
When running:

GVFS_HTTP_DEBUG=all /usr/lib/gvfs/gvfsd-dav ssl=true user=<user> host=my-domain prefix=/dav/

the first request is:


Pydio replies:

 HTTP/1.1 200 OK

<!DOCTYPE html>
<html xmlns:ajxp>
<title>Pydio Cells</title>

This is obviously not expected.
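For what it's worth, a quick sanity check while debugging this is to look at the first bytes of the body: a DAV client expects XML, so an HTML doctype is a clear sign the request was routed to the web UI instead of the DAV handler. A minimal sketch (the sample body is hypothetical, mimicking the reply above):

```shell
# Hypothetical sample body mimicking the HTML reply shown above
body='<!DOCTYPE html>
<html><title>Pydio Cells</title></html>'

# Classify the response by its first bytes
case "$body" in
  '<?xml'*)           echo "XML: the DAV handler answered" ;;
  '<!DOCTYPE html>'*) echo "HTML: the request was routed to the web UI" ;;
  *)                  echo "unexpected response type" ;;
esac
```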

Three possibilities:

Related routing code:

Related code of the handler:

Related implementation: https://github.com/pydio/cells/blob/master/gateway/dav/filesystem.go

Ping, bumping this up. Any help or support? :slight_smile:


I had a quick look at this using the latest dev version (to be published as 2.0.5 very soon), which you can retrieve here for Ubuntu: https://download.pydio.com/pub/cells/dev/linux-amd64/cells

I could mount a given workspace on nautilus with no issue, using for instance:

After login, I can browse the server tree. Furthermore, all the workspaces that can be accessed by the current logged-in user are also listed in the mounted network location.

Yet I could not manage to mount the drive directly using either davs://localhost:8080/dav/ or davs://localhost:8080/dav

So could you please:

  • Try with the latest binary to see if it already addresses your use case (we have fixed quite a few glitches in these layers during the last few weeks), and tell us whether you want us to dig further into it
  • Be more explicit about the debugging process you use, so that we can reproduce your issue and dig further to address it.


Very interesting feedback. Thank you!
I’ll try 2.0.5 as soon as it’s released.
In the meantime I made an attempt with “common-files”.

Here’s the log (Tried with the latest release of gvfsd):

$ GVFS_DEBUG=1 GVFS_HTTP_DEBUG=all /usr/lib/gvfs/gvfsd-dav ssl=true user=username host=my-domain prefix=/dav/common-files

dav: setting 'ssl' to 'true'
dav: setting 'user' to 'username'
dav: setting 'host' to 'my-domain'
dav: setting 'prefix' to '/dav/common-files'
dav: Added new job source 0x563b5bff51a0 (GVfsBackendDav)
dav: Queued new job 0x563b5bff13e0 (GVfsJobMount)
dav: + mount
> OPTIONS /dav/common-files HTTP/1.1
> Soup-Debug-Timestamp: 1585341980
> Soup-Debug: SoupSession 1 (0x563b5bff5100), SoupMessage 1 (0x7f73780070d0), SoupSocket 1 (0x7f73783498c0)
> Host: my-domain
> Accept-Encoding: gzip, deflate
> User-Agent: gvfs/1.44.0
> Accept-Language: en-us, en;q=0.9
> Connection: Keep-Alive
dav: + soup_authenticate_interactive (first auth) 
dav: - soup_authenticate 
< HTTP/1.1 1 Cancelled
< Soup-Debug-Timestamp: 1585341982
< Soup-Debug: SoupMessage 1 (0x7f73780070d0)
< Date: Fri, 27 Mar 2020 20:46:22 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 14
< Connection: keep-alive
< Set-Cookie: __cfduid=d43461785d4f052083007e89e638ffee31585341980; expires=Sun, 26-Apr-20 20:46:20 GMT; path=/; domain=.animalequality.org; HttpOnly; SameSite=Lax
< Www-Authenticate: Basic realm=""
< CF-Cache-Status: DYNAMIC
< Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
< Server: cloudflare
< CF-RAY: 57abf5d11d33d2dc-EZE
< Unauthorized.
dav: send_reply(0x563b5bff13e0), failed=1 (HTTP Error: Cancelled)
dav: Mount failed: HTTP Error: Cancelled

I wonder whether it could be a bug in gvfsd itself.

I also tried Nautilus (against 2.0.4), but without much more success: it reports “Can’t parse the response” (and I could not find a way to obtain debug messages from it).

Your call, but if you could try with the snapshot before we release it, it would help ensure your problem is solved. If you confirm the problem is still there and give more hints to reproduce it, we can look at it now: we are testing and fixing glitches this week.
Otherwise, you might have to wait longer (that is, until the next release): as it is working for me on Ubuntu 18 with Nautilus, we won’t dig further on our own.

I tried again with the just-released 2.0.5:


Testcase 0:
cadaver https://$U:$P@$HOST/dav/
=> works

Testcase 1: I created test-dir
curl -sX PROPFIND https://$U:$P@$HOST/dav/common-files/test-dir
=> works

Testcase 2:
curl -sX PROPFIND https://$U:$P@$HOST/dav/common-files
=> Does not work
=> Response:

<?xml version="1.0" encoding="UTF-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/dav/common-files/</D:href>
    <D:propstat>
      <D:prop>
        <D:supportedlock>
          <D:lockentry xmlns:D="DAV:">
            <D:lockscope><D:exclusive/></D:lockscope>
            <D:locktype><D:write/></D:locktype>
          </D:lockentry>
        </D:supportedlock>
        <D:getlastmodified>Tue, 25 Feb 2020 22:23:36 GMT</D:getlastmodified>
        <D:resourcetype><D:collection xmlns:D="DAV:"/></D:resourcetype>
        <D:displayname>common-files</D:displayname>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>Internal Server Error
=> Server logs: {"level":"error","ts":"2020-03-30T17:44:45Z","logger":"pydio.gateway.dav","msg":"|- DAV END","method":"PROPFIND","path":"/dav/common-files","error":"{\"id\":\"views.handler.encryption.GetObject\",\"code\":404,\"detail\":\"node Uuid and size are both required\",\"status\":\"Not Found\"}"}
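As a side note, when eyeballing raw multistatus bodies like the one in testcase 2, a small shell one-liner can pull out just the resource names. A sketch; the sample body is a hypothetical, trimmed-down imitation of the server's response:

```shell
# Hypothetical, trimmed-down multistatus body mimicking the server's response
body='<D:multistatus xmlns:D="DAV:"><D:response><D:href>/dav/common-files/</D:href><D:prop><D:displayname>common-files</D:displayname></D:prop></D:response></D:multistatus>'

# Extract the displayname values: grab each tag, then strip the markup
printf '%s\n' "$body" \
  | grep -o '<D:displayname>[^<]*</D:displayname>' \
  | sed 's/<[^>]*>//g'
```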

Testcase 3:
Using Nautilus
=> Does not work.
=> I see the common-files directory, but can’t list its contents because of a parse error
=> Server logs: identical to testcase 2 above.

My guess is:

  • The Internal Server Error string appended after the XML body breaks parsing
  • Since the logs mention views.handler.encryption.GetObject, let me point out that my instance uses an encrypted S3 datasource, in case it helps.
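To illustrate the first guess: an XML parser rejects any document with trailing text after the root element, so the stray Internal Server Error string after </D:multistatus> is enough to make clients such as gvfs fail with a parse error. A minimal sketch with a trimmed-down body:

```shell
# Trimmed-down version of the testcase 2 body: valid multistatus XML
# followed by the stray error string
body='<?xml version="1.0"?><D:multistatus xmlns:D="DAV:"></D:multistatus>Internal Server Error'

# Well-formed XML must end at the root element's closing tag;
# anything after it is trailing garbage that breaks parsing
case "$body" in
  *'</D:multistatus>') echo "well-formed: body ends at the root element" ;;
  *)                   echo "broken: trailing text after the root element" ;;
esac
```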

We quickly spoke about this during our stand-up meeting this morning, and we might have a lead that could explain the issue you have spotted.
We have to do some further testing and will let you know. (But please be patient: we are under heavy load and a little less efficient than usual these days.)
