"Upload failed: You are not allowed to upload files here" directly after installation

I just installed Pydio on Ubuntu 22.04.1 LTS

Whenever I try to upload a file (e.g., via drag-and-drop), the upload wizard shows

Upload failed: You are not allowed to upload files here

I tried the Docker setups from the main documentation (https://pydio.com/en/docs/cells/v4/docker) and the one with Let’s Encrypt
(cells/tools/docker/compose/lets-encrypt at main · pydio/cells · GitHub). Both show the same issue.

Hello @Crown0815 and welcome to our community!

I’m quite sure you have missed something that seems evident to us, as we use Cells regularly.

Could you please clarify a few points so that we can help you further:

  • version of Cells?
  • where do you try to upload? on the home page, in a workspace, in a folder?
  • with which user?
  • do you see any related message in the log when you try to upload?

Hello @bsinou,

Thank you for your reply and the warm welcome. I will try my best to answer your questions.

I used the latest Docker image, which according to Docker Hub is Pydio Cells version 4.0.2.

I tried uploading through

  • a public link of a folder (with upload permission enabled). I am not aware of any user being used for this upload.
  • drag&drop into a new folder I created. I assume the user is the one I was logged in with at the time, which was an account with Administrator privileges.

(I assume "user" means a user of Pydio Cells, not a user on the Linux machine.)

To my surprise, I did not see any related messages in the logs, neither in the WebUI nor in the logs printed to the console by Docker (I ran without the -d argument to see the logs).

I hope these details help nail down the issue.
If you need anything else please let me know.

Strange indeed.

Could you please provide the full docker command you use (or the corresponding docker file) so that I can have a look if something seems wrong?

And by the way, are you 100% sure that:

  • you did not change anything in the permissions or the security policies after the installation?
  • you saved your public link again after adding the “upload” permission?

Hello @bsinou,

Here are the full contents of the docker-compose file


services:
  cells:
    image: pydio/cells:latest
    restart: unless-stopped
    ports: ["8080:8080"]
    environment:
      # Directly pass server configuration as yaml file
      - CELLS_INSTALL_YAML=/pydio/config/install.yml
      # Pass env vars to the yaml install conf
      - CELLS_ADMIN_PWD=${CELLS_ADMIN_PWD}
      - MYSQL_PYDIO_PWD=${MYSQL_PYDIO_PWD}
    volumes:
      - cellsdir:/var/cells
      - data:/var/cells/data
      - ./install-conf.yml:/pydio/config/install.yml:ro

  mysql:
    image: mysql:8
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: cells
      MYSQL_USER: pydio
      MYSQL_PASSWORD: ${MYSQL_PYDIO_PWD}
    command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci]
    volumes:
      - mysqldir:/var/lib/mysql

volumes:
  data: {}
  cellsdir: {}
  mysqldir: {}

and the install-conf.yml file

# WebUI Admin definition
frontendlogin: admin
frontendpassword: ${CELLS_ADMIN_PWD}

# DB connection
dbconnectiontype: tcp
dbtcphostname: mysql
dbtcpport: 3306
dbtcpname: cells
dbtcpuser: pydio
dbtcppassword: ${MYSQL_PYDIO_PWD}

plus the .env file


This setup is heavily inspired by the Let’s Encrypt setup example from the Pydio GitHub.

I am 100% certain. I even re-ran docker compose up after cleaning all containers and volumes to get a clean install, and then tried to upload a small text file via drag-and-drop into the “Personal Files” of the admin account.

There is one error in the log.

HttpProtocol : HTTP/2.0
JsonZaps : {"ContentType":"application/json"}
Level : error
Logger : pydio.rest.frontend
Msg : Rest Error 401 - No refresh token
RemoteAddress :
SpanUuid : f322f5ae-d627-4ccd-b7c5-8ec025dc268c
Ts : 1667485277
UserAgent : Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36

Could this lead to an authentication problem? It is not the error I see during the upload, and it does not happen every time I try to upload something.

The latest error is unrelated. (Did you have another tab that was still open when you re-installed your instance?)

That said, I have tried to reproduce your scenario on a similar setup:

  • same docker compose
  • on an Ubuntu 22.04 laptop

And it worked.

Here are some things we should then consider:

  • how powerful is your machine?

  • did you try to restart cells only after the install / first start:
    docker-compose restart cells; docker-compose logs -f cells

  • could you take screenshots of the “workspace accesses” page of both the Root_Group (in the Cells console >> Identity Management >> Roles >> Edit Root_Group) and your admin user? Some policies might not have been inserted correctly upon installation (and that would be a bug…)

  • What URL do you use to access your instance? Is it behind a reverse proxy?

Potentially, but if it has nothing to do with the upload issue, I guess it is fine.

  • I am not certain how powerful the machine is. Our system administrator set up a virtual machine for me, but in the past those have been able to handle any service we put on them. Would the detailed specs help investigate the issue? If so, I will ask our system administrator tomorrow.

  • I just tried to restart cells only using the given commands. It had no effect. Still the same behavior.

  • I attached the screenshots below

  • I access it through https://dbs-filestorage.dbs.local:8080/. But the connection is not secured, since I could not get any connection when using the Let’s Encrypt setup (that is something I still want to discuss with our systems administrator).

Does that help?
Are there any other logs, maybe where I could find information about the error?

Just for completeness, a screenshot from a failed upload

Admin screenshot (I may only put one image per post)

Failed upload example

Thanks for all the screenshots, everything seems to be OK on this side.

This is definitely worth investigating:

  • typically CPU / RAM
  • type of file system: from what you said, it seems that the machine is a “quick and dirty” VM spun up by your sysadmin, and I suspect it might rely on a network filesystem that does not behave as expected when used by Cells.

And also, do you think it might be possible to give us temporary access to this machine so that we can have a look?

Hi @bsinou,
Here is the CPU information ($ lscpu)

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         45 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  1
  On-line CPU(s) list:   0
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Xeon(R) Silver 4309Y CPU @ 2.80GHz
    CPU family:          6
    Model:               106
    Thread(s) per core:  1
    Core(s) per socket:  1
    Socket(s):           1
    Stepping:            6
    BogoMIPS:            5586.87
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single
                          ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization features:
  Hypervisor vendor:     VMware
  Virtualization type:   full
Caches (sum of all):
  L1d:                   48 KiB (1 instance)
  L1i:                   32 KiB (1 instance)
  L2:                    1.3 MiB (1 instance)
  L3:                    12 MiB (1 instance)
  NUMA node(s):          1
  NUMA node0 CPU(s):     0
  Itlb multihit:         KVM: Mitigation: VMX unsupported
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                 Not affected
  Tsx async abort:       Not affected

and the RAM info ($ lsmem)

RANGE                                 SIZE  STATE REMOVABLE BLOCK
0x0000000000000000-0x000000007fffffff   2G online       yes  0-15

Memory block size:       128M
Total online memory:       2G
Total offline memory:      0B

Filesystem info ($ df -Th)

tmpfs                             tmpfs  198M  1.4M  197M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4    48G  6.8G   39G  15% /
tmpfs                             tmpfs  988M     0  988M   0% /dev/shm
tmpfs                             tmpfs  5.0M     0  5.0M   0% /run/lock
/dev/sda2                         ext4   2.0G  245M  1.6G  14% /boot
tmpfs                             tmpfs  198M  4.0K  198M   1% /run/user/1000

I hope this information already helps.

I will contact our sysadmin to see if we could provide any kind of temporary access.

OK, we had a quick look at the machine to ensure it is not a bug on our side, and we nailed it:
the time on the machine was not set correctly.

For others: when you get an Error 403 upon upload, and all internal ACLs seem to be OK (no error at install & start), it is most probably a time issue.

→ This happened on this setup because the Cells Docker container cannot access the internet.
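A quick way to check for this kind of drift is to compare the host and container clocks. Below is a minimal sketch; the `docker exec` line is commented out, and the container name `cells` is an assumption you would replace with your actual container name:

```shell
#!/bin/sh
# clock_skew: absolute difference in seconds between two epoch timestamps
clock_skew() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( 0 - d ))
  echo "$d"
}

host_time=$(date -u +%s)
# On a real setup, read the container's clock as well, e.g.:
#   container_time=$(docker exec cells date -u +%s)
container_time=$host_time  # placeholder so this sketch runs standalone

echo "skew: $(clock_skew "$host_time" "$container_time")s"
```

On the host itself, `timedatectl status` also shows whether NTP synchronization is active.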

And for the record, the specs of this VM are too small if you want to use it in production; see our requirements page.


@bsinou, I would never have expected the system time to be the problem here.
We corrected the setup and with the correct system time, everything works as expected.

Just for my understanding (and maybe anybody else who comes across this thread), why is the system time critical for uploads to work?

We will update the VM to provide the spec required for Pydio.

Thank you so much for your assistance!
We would never have solved this on our own.

This was not an easy one indeed :slight_smile:
This issue is linked to the fact that uploads/downloads use the S3 protocol (presigned URLs), and these URLs are “signed” with a specific mechanism: it uses the time as a reference to compute a unique signature of the request, so if the time is wrong, the signature is not recognized.
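As a generic illustration of why (a hypothetical HMAC sketch in the spirit of S3-style request signing, not Cells' actual implementation; the path, key, and timestamps are made up): the timestamp is part of the string that gets signed, so a clock that is off produces a signature the server cannot reproduce, and the request is rejected.

```shell
#!/bin/sh
# Hypothetical sketch: an S3-style presigned request embeds a timestamp
# in the string that gets HMAC-SHA256 signed. All values are illustrative.
sign_request() {
  # $1 = X-Amz-Date-style timestamp
  printf 'GET\n/personal-files/test.txt\nX-Amz-Date=%s' "$1" \
    | openssl dgst -sha256 -hmac "demo-secret-key" \
    | awk '{print $NF}'
}

sig_correct_clock=$(sign_request "20221103T140000Z")
sig_skewed_clock=$(sign_request "20221103T153000Z")

# Different timestamps yield different signatures, hence the 403
if [ "$sig_correct_clock" != "$sig_skewed_clock" ]; then
  echo "signature mismatch: the server rejects the request"
fi
```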



This topic was automatically closed 11 days after the last reply. New replies are no longer allowed.