SMB/CIFS & NFS Mounts to Pydio Cells Community Edition

I’m not understanding the process of mounting a remote NAS share to Pydio Cells CE for testing.

I have a Synology NAS share I would like to mount as a Workspace for 1 user to test with R/W permissions.

I’ve enabled NFS, SMB and CIFS on the NAS, and added the IP of the host running Pydio Cells CE in Docker to the Synology’s NFS permissions.

When I try to add the workspace, it appears Pydio is listing only the container’s own filesystem.

How do I get the Pydio container to see the remote filesystem to add the path of the shares?

Do I need to mount the volume in the Docker container first?


Do I need to mount the volume in the Docker container first?

Yes.

As explained in post #2843 (I answered there before getting to the current post):

In a few bullet points:

  • Create and configure your NFS share
  • Mount it on your docker host, typically at /mnt/nfs1 (see the mount sketch below)
  • Add this volume to the Cells container’s volume list, e.g.:
        volumes:
            - cells_working_dir:/var/cells
            - cells_logs:/var/cells/logs
            - /mnt/nfs1:/data
  • Launch your docker setup and configure Cells
  • Create a new datasource that points to /data/dss/datasource1
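
A minimal sketch of the host-side mount for the second bullet; the NAS address and export path below are placeholders, adjust them (and the mount options) to your setup:

    # Create the mount point on the docker host
    sudo mkdir -p /mnt/nfs1

    # One-off mount of the NAS export
    sudo mount -t nfs x.x.x.x:/volume1/nfs-share /mnt/nfs1

    # Optional: persist the mount across reboots via /etc/fstab
    echo 'x.x.x.x:/volume1/nfs-share /mnt/nfs1 nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab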

For the record, the dss folder level (its presence, not its name) is compulsory for the datasource to be correctly configured.

If you want to define more than one datasource at this location, it is better to also have the second level, so that you can easily configure it at /data/dss/datasource2 (see the layout sketch below).
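
For illustration, a possible layout on the mounted share — only the dss level is required, the other names are examples:

    # On the docker host, inside the NFS mount
    mkdir -p /mnt/nfs1/dss/datasource1
    mkdir -p /mnt/nfs1/dss/datasource2

    # Seen from inside the container (/mnt/nfs1 mapped to /data),
    # the datasources are then configured at:
    #   /data/dss/datasource1
    #   /data/dss/datasource2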

Please refer to our admin doc if you want / need a deeper understanding of the what and why.


Noted on the archive post. But reading through the documentation: if you mount an NFS share, will the data there remain as plain files, untouched by Pydio yet accessible and editable from within the Pydio environment, or does Pydio require a separate NFS/SMB mount just for itself with a minio-style filesystem?

We’re looking to use it as a Google Docs replacement where users can access and edit documents off the NAS using Collabora, all from within Pydio, and then also have those documents available in the mounted drive storage when they are back in the office working in Office.

Ok, so we have been trying on and off over the past few weeks in our spare time to get this to work…

We’ve been able to successfully create an NFS share on our NAS and mount it to everything except Pydio.

NFS location is at x.x.x.x/volume1/nfs-share
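
As a sanity check, the export can be verified from the docker host before Pydio is involved at all (this requires the NFS client tools, e.g. nfs-common on Debian/Ubuntu):

    # List the exports the NAS offers to this host; if the share
    # does not appear here, the problem is on the NAS/permissions
    # side, not in Pydio or Docker
    showmount -e x.x.x.x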

We are not sure whether we are doing either of these two approaches correctly.

  1. Creating a Portainer volume that mounts the NFS share, then adding the corresponding volume mount point to the Pydio stack, or
  2. Mounting the NFS share directly on the host OS, then adding that mount point to the stack as a bind mount.

We prefer to use method 1 as it offers the most flexibility: the same NFS volumes can easily be reused by other containers within the Portainer environment (a docker CLI equivalent is sketched below).
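
For reference, method 1 can also be done without Portainer; a sketch of the equivalent docker CLI call, reusing the placeholder address and export path from above (these are the local driver’s standard NFS options):

    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=x.x.x.x,rw \
      --opt device=:/volume1/nfs-share \
      nfs-share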

We then go to Pydio, add a storage space, and see /mnt/nfs-share as an option, but it errors out.

I’m back ;-)… We were able to mount the NAS share to Pydio using CIFS and method 1 outlined above.

Did the following:

  1. created a CIFS volume in Portainer with proper credentials (a CLI equivalent is sketched after the steps)
  2. modified the Pydio stack as such…
version: '3.8'
services:

  cells:
    image: pydio/cells:latest
    restart: unless-stopped
    ports: ["8081:8080"]
    volumes:
      # Cells working directory and default data, bind-mounted from the host
      - /home/docker/pydio-cells/cellsdir:/var/cells
      - /home/docker/pydio-cells/data:/var/cells/data
      # CIFS volume created in Portainer, mapped into the container
      - nas:/mnt/nas/data

    environment:
      - CELLS_BIND_ADDRESS=0.0.0.0
      - CELLS_EXTERNAL=https://our.domain.com

  mysql:
    image: mysql:8
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci]
    volumes:
      - /home/docker/pydio-cells/mysqldir:/var/lib/mysql

volumes:
  data: {}
  cellsdir: {}
  mysqldir: {}
  # the CIFS volume is managed outside this stack and must already exist
  nas:
    external: true
      
  3. In Pydio, created a local filesystem storage space and chose /mnt/nas/subfolder
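
Since the nas volume is declared external: true, it must exist before the stack is launched; Portainer handles that here. For reference, a sketch of the equivalent docker CLI call — the share path, credentials and mount options below are placeholders:

    docker volume create \
      --driver local \
      --opt type=cifs \
      --opt device=//x.x.x.x/share \
      --opt o=addr=x.x.x.x,username=pydio,password=secret,file_mode=0777,dir_mode=0777 \
      nas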

We can now upload, download & delete files in Pydio’s Workspace and they appear/disappear from the local NAS folder.
Success!!!

Two new questions now…

  1. Is it possible for Pydio to retrieve a list of existing files from the NAS? If we copy files to the directory there, they do not show up in Pydio’s Workspace. Likewise, any modifications made to a file directly on the NAS do not show up when it is opened in Pydio. But as long as we only use Pydio to modify files, it works fine.

  2. Is it possible to go only 1 layer deep, so we have access to all the files at the CIFS root mount point and not just a subfolder?

Good to hear, congratulations!

There are 2 main types of datasources in Cells:

  • flat storage: the real files are all stored in a single folder with technical names; the tree structure is only kept in a DB index
  • structured storage: the underlying file system has the same tree structure as the one seen in Cells.
    Let’s say the root of datasource ds1 is at /nfs/dss/datasource1/:
    the file /<workspace on ds1>/my-folder/my-file.txt
    can then be found at /nfs/dss/datasource1/my-folder/my-file.txt (see the sketch below)
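
To make that concrete, a rough sketch of what each layout could look like on disk — the flat-side file names are illustrative internal IDs, not real values:

    # structured: the on-disk tree mirrors the tree seen in Cells
    /nfs/dss/datasource1/my-folder/my-file.txt

    # flat: one technical folder per datasource; the visible tree
    # lives only in the index database
    /nfs/dss/datasource2/3f2a9c1e-…
    /nfs/dss/datasource2/87b4d0aa-…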

If you directly modify or add a file via the file system, you can update the tree seen in Cells by triggering a resync job via the scheduler.

That said:

  • flat storage is much more efficient and reliable, especially if you have a large amount of data
  • the resync jobs consume resources (and take time). It is considered bad practice to build a solution that relies on externally modified files and regular resyncs. Yet we know it kind of works and is used by a few for very specific corner cases.

Please refer to the admin doc for further details.

Thanks! Well, yes, specific cases indeed. The model we want to experiment with: when in the office, some users have mapped drives on their desktops and drag & drop a lot of files there; when on the go, we’d like them to use Pydio as a collaboration tool with Collabora to make changes, etc.

Controlling where they modify their files might prove difficult, so the ability to work both directly through SMB and inside of Pydio is ‘ideal’.

I suppose the sync would need to be triggered manually for now? Am I correct to assume custom sync schedules (Flows) are not available in Pydio CE?

I suppose I got my answer 😉
