Advice on deploying Cells to multiple machines


#1

I am experimenting with Pydio Cells at home, currently running on a single node in my k8s cluster.

I noticed here:

https://pydio.com/en/docs/cells/v1/whats-new-pydio-cells

it says:

DevOps: micro-services architecture

The technology shift was also the opportunity to actually “break the monolith”. Pydio Cells is a set of micro-services that can be distributed on different physical or virtual machines on a network, allowing greater flexibility, interfacing and scaling.

Is there any advice/docs on how I would go about splitting up this monolith so that I can run each microservice on its own k8s node?

I see here

https://pydio.com/en/docs/cells/v1/cells-binary-client-commands

that I can do things like:

Start the server or a given service:

# For instance, to enable debug mode
./cells start --log debug

# Start just a couple of services
./cells start nats pydio.grpc.config pydio.api.proxy

# Start all services except one, here the dav server
./cells start -x pydio.rest.gateway.dav

to start specific microservices.

Do you see a way forward where the same Docker image is deployed but with different entry points specifying which service should be running?

Assuming they share the same config files, “everything should just work”™; I’m picturing something like the sketch below.
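A rough sketch only, assuming the pydio/cells image forwards extra arguments through to the cells binary, and with a shared config volume mounted at the default config path (the cells-config volume name is just a placeholder):

# Same image everywhere, different command per container.
# "pydio/cells" image behaviour and "cells-config" name are assumptions.
docker run -d -v cells-config:/root/.config/pydio/cells pydio/cells cells start nats
docker run -d -v cells-config:/root/.config/pydio/cells pydio/cells cells start pydio.grpc.config
docker run -d -v cells-config:/root/.config/pydio/cells pydio/cells cells start pydio.api.proxy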

I also assume I need to know which machine NATS runs on, and in fact that its hostname should be static; would this be the only service that needs a fixed address?
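In k8s terms I guess the static hostname could come from a Service in front of the NATS pod; a sketch, where the cells-nats deployment name is my placeholder and 4222 is the NATS default client port:

# Expose NATS behind a Service so it keeps a stable DNS name even when
# the pod is rescheduled ("cells-nats" is a placeholder name):
kubectl expose deployment cells-nats --name=nats --port=4222
# Other services could then always reach it at (default namespace assumed):
#   nats.default.svc.cluster.local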

If I get this working I’m happy to write up my findings…

Thanks for any help!

Ross


#2

Hi,
we are currently preparing a piece of documentation that will explain what you need and how to deploy your services across multiple machines, using for instance Kubernetes and similar tools.

When the article is on the site, I will try to keep you updated.

And if you have notes that you want to share, don’t hesitate.

Regards


#3

Good to know. I have various Christmas-related things over the next week, so progress will be slow for me.

If you want to provide early access, I’m happy to give feedback.

Likewise, I’ll try to capture my thoughts somewhere so you can see how I fail as I go :slight_smile:


#4

Actually hit my first hurdle right away.

My local disk datasources don’t recover when the pods are rebuilt; I think it’s because they get a different IP, and the associated sync services then don’t start up.

Will need to do some digging through the docs.
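In the meantime, a quick sanity check that seems to confirm the theory, assuming the default Cells config location:

# See whether the PeerAddress recorded in the config still matches
# the pod's current IP after a rebuild:
grep PeerAddress /root/.config/pydio/cells/pydio.json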


#5

I’m just doing this weapons-grade hackery for now, using this as my entrypoint:

#!/bin/sh
# Grab this container's current IP from eth0...
IP=$(ifconfig eth0 | grep inet | awk '{print $2}' | sed 's/addr://')
# ...patch it into the PeerAddress fields of the Cells config...
sed -i "s/PeerAddress\": \".*\",/PeerAddress\": \"$IP\",/g" /root/.config/pydio/cells/pydio.json
# ...then hand off to the normal entrypoint.
/bin/docker-entrypoint.sh cells start
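A slightly less fragile variant (untested) would be to let Kubernetes hand the pod its own IP via the downward API instead of parsing ifconfig output:

#!/bin/sh
# Sketch: expects the pod spec to inject POD_IP from status.podIP via the
# downward API (env.valueFrom.fieldRef.fieldPath: status.podIP).
: "${POD_IP:?POD_IP must be set from status.podIP via the downward API}"
sed -i "s/PeerAddress\": \".*\",/PeerAddress\": \"$POD_IP\",/g" /root/.config/pydio/cells/pydio.json
exec /bin/docker-entrypoint.sh cells start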


#6

I’m starting to capture some thoughts here.

It’s mostly incoherent at the moment.

It may be of some use to you.