Pydio cells docker-compose example does not work

See example on docker hub.

Tried the one given on the page. When logging in via browser, I receive “404 Site <EXTERNAL_IP>:8080 is not served on this interface”

Tried changing the bind address to be an external IP. That does not work.

Tried changing the CELLS_EXTERNAL port to a different port, because the documentation explicitly states that it must be a different port than the CELLS_BIND port (although why the example uses the same port for both is a question). That does not work either: there is no service running on the external port specified in CELLS_EXTERNAL.

Also, is there an upgrade path from pydio community 8.0 to cells?

Also, why, after logging in and then selecting Support, do I have to log in again?

Could you show me your configuration and are you using a reverse proxy?

Could you show the line, please? If so, I will take a look at it.

This is on the way, but you should contact our sales service on this subject.

The forum and our website are different platforms.

Please review the pydio/cells container on Docker Hub. As described in my original post, this is documentation by Pydio. Did you review it?

Why? Single-sign-on technology exists and is used by many websites. It’s annoying to sign into a website more than once.

If this is the sentence that got you confused, I will see how to rephrase it:
CELLS_BIND and CELLS_EXTERNAL can only use different ports at the time of writing. They must share the same host.
The phrase does not explicitly tell you what to do; it tells you that if you have a configuration with, for instance, a reverse proxy on a different port, you will have to change your CELLS_EXTERNAL accordingly. (This is specific to docker, because you have to expose ports; if you had cells running directly on a server, you could easily change the ports inside the config file.)
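For instance, the reverse-proxy case could look roughly like this (the hostname, ports, and proxy setup here are only placeholders I am making up, not values from our docs):

```yaml
# Hypothetical sketch: cells listens internally on 8080, while end users
# reach it through a reverse proxy on port 443 of cells.example.com, so
# CELLS_BIND and CELLS_EXTERNAL end up with different ports.
services:
  cells:
    image: pydio/cells:latest
    ports:
      - "8080:8080"
    environment:
      - CELLS_BIND=0.0.0.0:8080              # where the embedded server listens
      - CELLS_EXTERNAL=cells.example.com:443 # what users type in the browser
```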

Yes indeed; there is actually even a way to log in with your Google or GitHub account on the forum.

Try this configuration, for instance:
CELLS_BIND = host:port

Your response is not making sense. Maybe I need more coffee this morning but…

The statement “CELLS_BIND and CELLS_EXTERNAL can only use different ports” contradicts the docker-compose example, which shows these using the SAME port. If there is a reverse proxy involved, that is a completely separate service (run in a different container or on a different machine), so any port exposed for that service would by definition require a different external port if it is run on the same docker host as pydio/cells, because one cannot bind 2 services to the same docker host port.

Also, the variables would appear to bind a service to a port within a container, but setting CELLS_EXTERNAL does not change the port in the container. This is evidenced by running ‘ss -taln’ (or ‘netstat -taln’) within the container and seeing that there is only a service running on port 8080 (set most likely by CELLS_BIND).

Finally, localhost from the container is NOT the same as the localhost of the docker host. The ‘host’ in your last reply seems to refer to the docker host, whereas it is unclear whether the ‘localhost’ in the docker-compose example refers to the localhost of the container or the localhost of the docker host. I assume the latter, but that is just a guess based on the context. If you want to specify the docker host, the docker-compose configuration should be changed to ‘CELLS_EXTERNAL:<DOCKER_HOST>:’ for clarity. But that doesn’t necessarily make sense, because if you want to bind a service to an external host/port, you would do it in the ports definition (of the compose file) and not via the service running within the container. E.g., 'ports: [“”]'.

Please provide clarity on the above, because at this point I’m playing “the guessing game”, trying to figure out what Pydio meant by the documentation and comments.

Regarding SSO, the technology also exists for any website implementation and does not require Google, Facebook, etc.

I will ask the devs what this exactly means and will change it accordingly.

The example is showing a simple case.
Let’s say that you docker run -d -p 8080:8080 pydio/cells on your machine; then you will only be able to access it through https://localhost:8080 on the same machine that runs the container (because of the external setting). If you wanted access from another machine (for my example I will talk about a local network), the best practice would be to set CELLS_EXTERNAL to the IP of the machine running the container, plus the port. Then you can access it from outside your own machine (right now I’m only talking on a local scale), but you can apply the same process with a server.
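As a concrete sketch of that (the LAN IP here is just an example value I picked, not something from the docs):

```shell
# Example only: 192.168.1.50 stands in for the LAN IP of the machine
# running the container.
docker run -d -p 8080:8080 \
  -e CELLS_BIND=0.0.0.0:8080 \
  -e CELLS_EXTERNAL=192.168.1.50:8080 \
  pydio/cells:latest
# Other machines on the LAN would then browse to http://192.168.1.50:8080
```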

I never said it was the same; it will only work in the case that you are launching and accessing the container on the same machine (but what would be the point of that; it was just an example). Sorry if I was not clear, but the host is the IP of the server/machine running the container.

I was just mentioning another method we have. I also would have liked my forum and site accounts to be the same, but I can understand your pain.

I myself did not really pay attention to this sentence about the ports. I mean, if I have a container running with exposed port 8080, then I would put the same port on both, or at least on CELLS_BIND.

What is the case that requires you to have two different ports? (That’s why I asked if you are going to use a reverse proxy, but it could also be that you exposed a different port on your container.)


Sorry if my answer was kind of long.
I actually wanted to add a detail that might help you understand CELLS_BIND and CELLS_EXTERNAL:

When you start cells, it actually has an integrated webserver that also works like a proxy, exposing all kinds of services. Therefore, if you want external access to it, you need to set your CELLS_EXTERNAL to the door (figuratively speaking) of the server hosting cells.

Regarding your statement “let’s say that you docker run -d -p 8080:8080 pydio/cells on your machine, then you will only be able to access to it through https://localhost:8080 on the same machine that has the container running ( because of the external )”

The above is not true. The command shown does not bind the container to the loopback device but rather to all interfaces, and thus it is accessible from any machine that is able to reach the docker host/IP.

I have followed the modified instructions on docker hub, which correspond somewhat to the instructions above, but I’m receiving the error that I initially started this post with. The instructions on docker hub state:

"* CELLS_BIND: address where the application http server is bound to. It MUST contain a server name and a port.

  • CELLS_EXTERNAL: url the end user will use to connect to the application.

If you want your application to run on the localhost at port 8080 and use the url, then set CELLS_BIND to localhost:8080 and CELLS_EXTERNAL to"

I have modified the relevant environment variables that are of concern in this post to be:
- CELLS_BIND=localhost:8080

where “URL” is an internal url.

I have also tried adding the port to the end of CELLS_EXTERNAL, e.g. CELLS_EXTERNAL=URL:8080
but that does not work either.

Can you please provide a real working example that has been tested, so that I can replicate it on my machine without issue? It’s fine if the url is different, but the instructions on docker hub and the instructions above do not work. Has the example on docker hub been tested? Are there any dependency settings excluded from the example beyond the defaults, e.g. the hosts file?

The easiest and best way would be:
url would be your domain name, registered in DNS.

Here is also a docker-compose configuration.
I had to test it, because my setup is different.
You can replace the with your address:port; the port part is mandatory, and the port has to be the same.

    # Cells image with two named volumes for the static and for the data
    cells:
        image: pydio/cells:latest
        container_name: cells
        ports: ["8080:8080"]
        environment:
            - CELLS_BIND=
            - CELLS_EXTERNAL=
            - CELLS_NO_SSL=1

    # MySQL image with a default database cells and a dedicated user pydio
    mysql:
        image: mariadb:latest
        container_name: database
        environment:
            MYSQL_ROOT_PASSWORD: root
            MYSQL_DATABASE: cells
        ports: ["3306:3306"]
        logging:
            driver: "none"

Your example works. I suspect it works because the CELLS_BIND variable in the example immediately above is bound to ALL interfaces (and not just the loopback interface, localhost). Note: your comment from just prior to the last, stating “CELLS_BIND=IPofHostMachine:PORT”, is incorrect, as “” is not the IP of the host.

This example should be included on Docker Hub, and any clarification in the description changed to correspond to the example. The description you provided above does not correspond with the current description under “Environmental variable[s]”, for example, because it explicitly omits the port from the description of CELLS_EXTERNAL, stating only the URL (and even providing an example), whereas your comment requires the port.

I would also suggest having the Docker Hub description reviewed by someone who is technically knowledgeable on docker/compose and who can effectively communicate the same in writing. The erroneous description has caused a significant waste of time, and as of today, even after this post, which has lasted two weeks, the description on Docker Hub is still wrong.

I appreciate the example and your quick replies.


Actually, the docker hub documentation is updated, because it’s true that if you don’t have the background knowledge, CELLS_BIND and CELLS_EXTERNAL can be confusing.
For my part, I updated the admin guide myself to explain how and in what cases you can use them;
for instance, if you put CELLS_BIND= then CELLS_EXTERNAL must look like <domain name, address ...>:333; the port is mandatory in that case.

No, it is not updated on Docker Hub (DH). The DH example shows:

  • CELLS_BIND=localhost:8080
  • CELLS_EXTERNAL=localhost:8080

This does not work, as previously explained. The DH example explains: “If you want your application to run on the localhost at port 8080 and use the url, then set CELLS_BIND to localhost:8080 and CELLS_EXTERNAL to

This is also wrong, because the port is missing from CELLS_EXTERNAL yet it is required.
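To make the requirement concrete, here is a trivial, docker-free shell check I would apply to any candidate CELLS_EXTERNAL value (the function name is mine, not Pydio’s):

```shell
# Prints "ok" only when the value ends with a :port suffix, which this
# thread has established is mandatory for CELLS_EXTERNAL.
check_external() {
  case "$1" in
    *:[0-9]*) echo "ok" ;;
    *)        echo "missing port" ;;
  esac
}

check_external "cells.example.com:8080"  # prints: ok
check_external "cells.example.com"       # prints: missing port
```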

The compose file is not confusing when the explanation and the example are correct. Please have someone who understands docker networking review the DH description so that it makes sense and the example provided works (the example above does work, and the reasons why are speculated on above).

It is frustrating enough to have wasted time trying to guess at what the settings should be, but even after the fact, there is no recognition that the DH example/explanation is technically incorrect and doesn’t work.

I’m not in charge of the Hub documentation, but I will ask the person responsible whether the case is correct.

Just wasted 2 hours on this. Giving up on it. I’ve docker-composed a dozen containers over the past few days and this is the only one that doesn’t work with the example docker-compose. I run docker on a headless machine so I can only check things with curl.

@alpha23 was ignored for years?

@J_V The compose file below is what I currently have working. Note the local-persist volume, which can be changed to a local volume type. If you test without privileged on the cells service and it works, let me know. Also, make sure your volumes are not on NFS drives. This may have been an issue with my original setup; I moved the volumes to reside on an ext4 drive.

    version: '3'

    networks:
      back:
        driver: bridge

    volumes:
      rsync-backup:
        driver: local-persist
        driver_opts:
          mountpoint: ${PWD}/rsync-backup

    services:
      cells:
        container_name: pydio
        image: pydio/cells:2.1.5
        restart: on-failure:5
        privileged: true
        volumes:
          - ./data:/var/cells/data
          - ./cellsdir:/var/cells
          - /etc/timezone:/etc/timezone:ro
          - ./ssl/server.pem:/root/ssl/ssl.cert
          - ./ssl/server.key:/root/ssl/ssl.key
        networks:
          - back
        environment:
          - CELLS_NO_SSL=0
          - CELLS_SSL_CERT_FILE=/root/ssl/ssl.cert
          - CELLS_SSL_KEY_FILE=/root/ssl/ssl.key
        ports:
          - "<PORT>:<PORT>"

      db:
        container_name: pydio_db
        image: mariadb:10.3
        restart: on-failure:5
        security_opt:
          - no-new-privileges
        cap_drop:
          - NET_RAW
          - MKNOD
        command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci]
        volumes:
          - ./db-vol:/var/lib/mysql
          - /etc/timezone:/etc/timezone:ro
        networks:
          - back

This seems really complex. I’ve never seen software read an environment variable and then write that “variable” to a file (pydio.json) so that you can no longer change it. Also, I could only get it to spin up on port 8080, and I don’t want to use that port.

This whole CELLS_BIND and CELLS_EXTERNAL is too complex. Couldn’t that complexity be removed somehow so it works like 90% of all other dockerized software?

  cells:
    container_name: cells
    image: pydio/cells:latest
    restart: always
    ports:
      - 8990:8080
    environment:
      - CELLS_NO_TLS=1
    volumes:
      - "${USERDIR}/docker/cells:/var/cells"
      - "${USERDIR}/data:/var/cells/data"

Here is what I eventually got working. I see two major problems.

First, when I setup dockerized software, I like to get it running first within the intranet. Make sure it works, test it out, configure it etc. THEN expose externally through a reverse proxy. Cells expects you to do this in one step.

For example, I can’t easily spin up cells on a port and then access it within the intranet like raspberrypi:8080

Second, the service just flat out refuses to work if you screw up CELLS_BIND and/or CELLS_EXTERNAL, which is likely, because they are so confusing. And if you screw it up, the only way to recover is:

  • Delete volume and start over
  • Manually modify pydio.json.
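Spelled out, those two recovery paths look roughly like this (the paths come from my compose file above; adjust them to your own setup):

```shell
# Option 1: wipe the state and redo first-run setup with corrected env vars.
docker compose down
rm -rf "${USERDIR}/docker/cells"   # the bind-mounted /var/cells working dir
docker compose up -d

# Option 2: keep the data and hand-edit the generated config instead.
# (Assumes the generated config sits at the root of the working dir.)
docker compose stop cells
${EDITOR:-vi} "${USERDIR}/docker/cells/pydio.json"
docker compose start cells
```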

I hope this feedback can be used to make the docker setup experience better.

@J_V Does setting up a volume for pydio.json work? It may resolve “Manually modify pydio.json”. I believe this would need to be copied from the container after install and then the cells service re-instantiated.