I’ve been trying for a while to set up a Pydio server for a trial. I’m coming over from Nextcloud (it has crashed on me for the last time), but even the Docker method isn’t working.
The documentation is probably great for directly hosted configurations, but it’s no help when sitting behind a Docker nginx reverse proxy. So far I haven’t been able to get things working, whether running the binary in a VM or using the Docker image.
I have a workstation running CentOS 8. Docker runs in the base OS, alongside several VMs. From my Nextcloud days I have nginx-proxy and nginx-proxy-letsencrypt running, which do a great job of automatically setting up web services (both VM- and Docker-based). I’ve since stopped using Nextcloud; it’s deactivated and not occupying any ports. Short background on spinning up a new web service:
- Create a Docker container or docker-compose.yml and spin it up, passing VIRTUAL_HOST, LETSENCRYPT_HOST, and VIRTUAL_PORT as environment variables (the first two being the external address, the last the port the service listens on inside the container)
- With these set properly, when the container spins up (for testing I’ve created a proper website via plain ol’ httpd), the letsencrypt side acquires a certificate and registers it for the proxy side
- Access the address specified in VIRTUAL_HOST/LETSENCRYPT_HOST and you’ll see the server (anything from the default “It works!” page to a proper website) pretty much immediately, provided your DNS is already set up.
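As a concrete example, a throwaway httpd test container in this setup would be declared something like this (the hostname is a placeholder and this is a sketch of the pattern, not my exact file):

```yaml
version: '3.7'
services:
  testsite:
    image: httpd:latest
    restart: unless-stopped
    ports: ["8081:80"]                    # host port 8081 -> container port 80
    environment:
      - VIRTUAL_HOST=test.mydomain.ca     # external name nginx-proxy routes to this container
      - LETSENCRYPT_HOST=test.mydomain.ca # name the companion requests a certificate for
      - VIRTUAL_PORT=80                   # container-side port nginx-proxy should proxy to
    network_mode: "bridge"
```

With DNS pointing test.mydomain.ca at my router, this serves the default “It works!” page over HTTPS almost immediately after the certificate is issued.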
I’ve set up a pydio subdomain (for security, not my real address: pydio.mydomain.ca), and my router allows access to the appropriate ports (I’ve been able to access test sites both internally and externally, as above). I had no issues with Nextcloud on this same setup (different subdomain, obviously) until an update broke it, hence why I’m looking elsewhere now (I think five failures is enough abuse for this relationship). For the uninitiated, the nginx-proxy and nginx-proxy-letsencrypt containers work as a pair. One does the routing and web serving (if I set it up to do that), automatically generating configurations for new containers that appear with a VIRTUAL_HOST environment variable. The other automatically requests SSL certificates for newly started containers that have a LETSENCRYPT_HOST environment variable and no currently valid certificate. The two share a certificate directory and handle the external SSL verification between them.
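For anyone wanting to picture the proxy side, my pair is set up roughly like this (a sketch from memory based on the nginx-proxy/companion docs; image tags and volume names are illustrative, not copied from my actual file):

```yaml
# Sketch of the nginx-proxy + letsencrypt companion pair (illustrative)
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - certs:/etc/nginx/certs                    # shared certificate store
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro  # watches Docker for new containers
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - certs:/etc/nginx/certs                    # writes certs where the proxy reads them
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html                # serves ACME http-01 challenges
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
volumes:
  certs:
  vhost:
  html:
```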
I’ve set up a docker-compose.yml as below, which I think is more or less the recommended default, modified to point at my external directories. Important note: I have a multi-terabyte RAID array for mass storage, so I want the data stored on that external directory. The main OS drive doesn’t have much space (plenty for day-to-day use, not enough for the videos, images, binaries, family stuff, etc. to be stored and shared).
version: '3.7'
services:
  cells:
    image: pydio/cells:latest
    restart: unless-stopped
    ports: ["8080:8080"]
    environment:
      - CELLS_LOG_LEVEL=production
      - CELLS_BIND=0.0.0.0:8080
      - CELLS_EXTERNAL=pydio.mydomain.ca
      - CELLS_NO_SSL=1
      - VIRTUAL_HOST=pydio.mydomain.ca
      - LETSENCRYPT_HOST=pydio.mydomain.ca
      - VIRTUAL_PORT=8080
    volumes:
      - /srv/storage/docker/pydio/data:/var/cells/data
      - /srv/storage/docker/pydio:/var/cells
    network_mode: "bridge"
  mysql:
    image: mysql:5.7
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: P@ssw0rd
      MYSQL_DATABASE: cells
      MYSQL_USER: pydio
      MYSQL_PASSWORD: P@ssw0rd
    command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci]
    volumes:
      - /srv/storage/docker/mysql:/var/lib/mysql
    network_mode: "bridge"
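For reference, my understanding of what the Cells-specific variables above are supposed to do, based on my reading of the image docs (so I may well have one of these wrong, which could be the whole problem):

```yaml
# Annotated copy of the Cells env vars (my understanding, possibly mistaken)
environment:
  - CELLS_BIND=0.0.0.0:8080          # address:port Cells listens on inside the container
  - CELLS_EXTERNAL=pydio.mydomain.ca # address clients use from outside, i.e. the proxy's hostname
  - CELLS_NO_SSL=1                   # serve plain HTTP; TLS is terminated by nginx-proxy
```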
If I try to access the server, I’ll get one of four errors via my browser and the nginx-proxy logs, depending on how I’ve attempted to configure the CELLS_BIND and CELLS_EXTERNAL settings:
- 503 error
- Client sent an HTTP request to an HTTPS server.
- A 400 error, shown internally in the logs as:
nginx.1 | pydio.mydomain.ca 22.214.171.124 - - [20/Jul/2021:01:59:08 -0400] "GET / HTTP/2.0" 400 48 "-" "Mozilla/5.0" "172.17.0.6:8080"
- Or a 502 error shown via the logs as:
nginx.1 | pydio.mydomain.ca 126.96.36.199 - - [20/Jul/2021:01:02:54 -0400] "GET / HTTP/2.0" 502 157 "-" "Mozilla/5.0" "172.17.0.6:8080"
nginx.1 | 2021/07/20 01:02:54 [error] 243#243: *301 connect() failed (111: Connection refused) while connecting to upstream, client: 188.8.131.52, server: pydio.mydomain.ca, request: "GET / HTTP/2.0", upstream: "http://172.17.0.6:8080/", host: "pydio.mydomain.ca"
I HAVE had the setup page load via localhost:8080 at times, but that doesn’t solve my external access issue. The generated default.conf for nginx-proxy (created inside the container) is as follows:
Now, to be clear, I’m not 100% certain my connection attempts are even reaching the Cells container, because I don’t see any responses in that container’s logs despite the multitude of errors I can produce in nginx-proxy. Since I get different error codes, I suspect the Cells web server is responding; it’s just not logging the errors in the Cells container logs. At this point I’m lost. I’ve been at this for a few weeks now without success and will accept any advice possible.