Advice on deploying Cells to multiple machines

I am experimenting with Pydio Cells at home, currently running it on a single node in my k8s cluster.

I noticed here:

https://pydio.com/en/docs/cells/v1/whats-new-pydio-cells

it says:

DevOps: micro-services architecture

The technology shift was also the opportunity to actually “break the monolith”. Pydio Cells is a set of micro-services that can be distributed on different physical or virtual machines on a network, allowing greater flexibility, interfacing and scaling.

Is there any advice or documentation on how I would go about splitting up this monolith so that I can run each microservice on its own k8s node?

I see here

https://pydio.com/en/docs/cells/v1/cells-binary-client-commands

that I can do things like:

Start the server or a given service:

# For instance, to enable debug mode
./cells start --log debug

# Start just a couple of services
./cells start nats pydio.grpc.config pydio.api.proxy

# Start all services except one, here the dav server
./cells start -x pydio.rest.gateway.dav

to start specific microservices.

Do you see a way forward where the same docker image is deployed but with different entry points specifying which service should be running?

Assuming that they all share the same config files, then “everything should just work”™.
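Concretely, something like this is what I have in mind (a rough sketch only; the pydio/cells image name and the way the entrypoint forwards its arguments to the binary are assumptions on my part):

# Hypothetical Deployment that runs only the config service,
# using the same image but overriding the container arguments.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cells-grpc-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cells-grpc-config
  template:
    metadata:
      labels:
        app: cells-grpc-config
    spec:
      containers:
      - name: cells
        image: pydio/cells                               # assumed image name
        args: ["cells", "start", "pydio.grpc.config"]    # forwarded to docker-entrypoint.sh
EOF

Repeating that per service, with only the args (and maybe a nodeSelector) changing, would give one Deployment per microservice.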

I also assume I need to know which machine NATS is running on, and in fact that its hostname should be static. Would this be the only service with that requirement?
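If so, I imagine a plain Kubernetes Service in front of the NATS pod would at least pin a stable DNS name to it, whatever node it lands on (another sketch; the names and labels are mine, 4222 is just the default NATS client port):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nats                # resolvable as nats.<namespace>.svc.cluster.local
spec:
  selector:
    app: cells-nats         # assumed label on the pod running ./cells start nats
  ports:
  - port: 4222              # default NATS client port
EOF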

If I get this working I’m happy to write up my findings…

Thanks for any help!

Ross

Hi,
we are currently preparing a piece of documentation that will explain how to deploy your services across multiple machines, using for instance Kubernetes.

When the article is on the site I will keep you updated.

And if you have notes that you want to share, don’t hesitate.

Regards

Good to know. I have various Christmas-related things over the next week, so progress will be slow for me.

If you want to provide early access, I’m happy to give feedback.

Likewise, I’ll try to capture my thoughts somewhere so you can see how I fail as I go :slight_smile:

Actually, I hit my first hurdle right away.

My local disk datasources don’t recover when the pods are rebuilt. I think it’s because they get a different IP, and so the associated sync services don’t start up.

I will need to do some digging through the docs.

I’m just doing this weapons-grade hackery for now, using this as my entrypoint:

#!/bin/sh
# Patch the pod's current IP into the Cells config before starting,
# since the PeerAddress recorded in pydio.json goes stale on pod rebuilds.
IP=$(ifconfig eth0 | grep inet | awk '{print $2}' | sed 's/addr://')
sed -i "s/\"PeerAddress\": \".*\",/\"PeerAddress\": \"$IP\",/g" /root/.config/pydio/cells/pydio.json
/bin/docker-entrypoint.sh cells start
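A slightly less hacky alternative I’m considering is the Kubernetes Downward API, which can inject the pod IP as an environment variable so the entrypoint doesn’t have to scrape ifconfig output (untested sketch; the POD_IP variable name is mine):

# In the container spec of the deployment:
env:
- name: POD_IP                 # hypothetical name, populated by k8s
  valueFrom:
    fieldRef:
      fieldPath: status.podIP

# Then the entrypoint could simply use it:
sed -i "s/\"PeerAddress\": \".*\",/\"PeerAddress\": \"$POD_IP\",/g" /root/.config/pydio/cells/pydio.json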

I’m starting to capture some thoughts here. It’s mostly incoherent at the moment, but it may be of some use to you.

@zayn I’ve got things mostly working now. I had some problems and could only fix them through editing the config files. Want to take a read and see if you had the same issues?

Hi @rossbeazley, thanks for sharing your thoughts on that! We still have some work to do to produce comprehensive documentation for the use case you describe. The team is mostly on holiday right now, but we’ll work on that more in the coming weeks.
Charles

Hey, no problem. I’ve been sidetracked by getting Collabora working, so I have not split up the services yet.

I’m thinking of picking one off to try first, probably something datasource-related, since that’s been the problematic area requiring a script to fix IPs.

Can I replace IPs in the config file with a DNS name?

Not at this time, but I guess this would be a better option than what we have now. This IP-based stuff can be a bit complicated when IPs are changing :slight_smile:

If I move the services to their own pods, should I share one config file across all instances?
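The approach I’m inclined to try is mounting one shared volume at the config path in every pod (a sketch, assuming a ReadWriteMany-capable storage class; since Cells writes to pydio.json at runtime, a read-only ConfigMap probably wouldn’t be enough):

# In each service's pod spec:
volumeMounts:
- name: cells-config
  mountPath: /root/.config/pydio/cells
volumes:
- name: cells-config
  persistentVolumeClaim:
    claimName: cells-config   # assumed PVC with accessModes: [ReadWriteMany]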