this post was submitted on 01 Jan 2024
981 points (97.2% liked)

[–] u_tamtam@programming.dev 0 points 10 months ago (3 children)

Take that as you want, but the vast majority of the complaints I hear about Nextcloud are from people running it through Docker.

[–] xantoxis@lemmy.world 2 points 10 months ago (2 children)

Does that make it not a substantive complaint about Nextcloud, if it can't run well in Docker?

I have a dozen apps all running perfectly happily in Docker; I don't see why Nextcloud should get a pass on this.

[–] recapitated@lemmy.world 2 points 10 months ago

I have only ever run Nextcloud in Docker. No idea what people are complaining about. I guess I'll have to lurk more and find out.

[–] u_tamtam@programming.dev 0 points 10 months ago

See my reply to a sibling post. Nextcloud can do a great many things; are your dozen other containers really comparable? Would throwing in another "heavy" container like GitLab not also result in the same outcome?

[–] recapitated@lemmy.world 2 points 10 months ago (1 children)

Things should not care or mostly even know if they're being run in docker.

[–] u_tamtam@programming.dev 1 points 10 months ago (1 children)

Well, that is boldly assuming:

  • that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a web server, … but a Docker image doesn't know that, and indeed doesn't care about the redundancy, wasting storage and memory

  • that the sum of those individual components works as well and as efficiently as a single (highly optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process

  • that those images are configured according to your actual end users' needs, and not to some packager's conception of a "typical user": do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want them or not

  • that those images are properly tuned for your hardware, which amounts to betting that the packager somehow knew in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load, and service prioritization

And this is even before assuming that Docker's abstractions are free (which they are not).

[–] bdonvr@thelemmy.club 1 points 10 months ago* (last edited 10 months ago) (1 children)

Most containers don't package DB servers, precisely so you don't have to run 10 different database servers. You can have one Postgres container or whatever. And if it's a shitty container that DOES package the DB, you can always build your own container.
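As a sketch of that shared-database pattern (the app images and names below are purely illustrative, not real projects), a compose file might look like:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data

  app1:
    image: example/app1   # hypothetical app image
    environment:
      DB_HOST: db         # both apps point at the same Postgres
      DB_NAME: app1

  app2:
    image: example/app2   # hypothetical app image
    environment:
      DB_HOST: db
      DB_NAME: app2

volumes:
  db-data:
```

One database server, one set of caches and connections, however many apps sit in front of it.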

> that those images are configured according to your actual end-users needs, and not to some packager's conception of a "typical user": do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not

> that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

You can typically configure the software in a Docker container just as much as you could if you installed it on your host OS… what are you on about? They're not locked-up little boxes. You can edit the config files, environment variables, whatever you want.
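For example, a hypothetical compose fragment (image, variable, and file names are made up) showing both approaches, environment variables and a bind-mounted config file:

```yaml
services:
  app:
    image: example/app          # hypothetical image
    environment:
      APP_LOG_LEVEL: debug      # tune behaviour via environment variables…
    volumes:
      # …or bind-mount your own config file over the packaged default
      - ./my-app.conf:/etc/app/app.conf:ro
```

Nothing about the container prevents the same tuning you would do on the host.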

[–] u_tamtam@programming.dev 2 points 10 months ago

> Most containers don't package DB programs. Precisely so you don't have to run 10 different database programs. You can have one Postgres container or whatever.

Well, that's not the case for the official Nextcloud image: https://hub.docker.com/_/nextcloud (it defaults to SQLite, which may well be the reason for so many complaints), and the point about service duplication still holds: https://github.com/docker-library/repo-info/tree/master/repos/nextcloud
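For what it's worth, the official image's Docker Hub page does document environment variables for using Postgres instead of the SQLite default; a minimal, untested compose sketch (version tags and the password are placeholders) might look like:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data

  app:
    image: nextcloud:28
    ports:
      - "8080:80"
    environment:
      POSTGRES_HOST: db        # documented vars of the official image
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me
    volumes:
      - nextcloud-data:/var/www/html
    depends_on:
      - db

volumes:
  db-data:
  nextcloud-data:
```

Whether a default-following user ever gets this far is, of course, exactly the question being argued.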

> You can typically configure the software in a docker container just as much as you could if you installed it on your host OS…

True, but how large do you estimate the intersection of "users who run Docker by default because it's convenient" and "users who run Docker and have the knowledge, and put in the effort, to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed"?

I'm not saying it's not feasible; I'm saying that Nextcloud's packaging can be quite tricky due to the breadth of its scope, and by the time you've given yourself a fair chance of success, you've already thrown away most of the convenience Docker brings.

[–] bdonvr@thelemmy.club 1 points 10 months ago (1 children)

Docker containers should be MORE stable, if anything.

[–] u_tamtam@programming.dev 1 points 10 months ago (1 children)

And why would that be? More abstraction thrown in for the sake of sysadmin convenience doesn't magically make things more efficient…

[–] bdonvr@thelemmy.club 2 points 10 months ago

Nothing to do with efficiency; it's more that the containers come with all dependencies at exactly the right versions, tested together, in an environment configured by the container creator. It provides reproducibility. As long as the Docker daemon is running fine on the host OS, you shouldn't have any issues running the container. (You'll still have to configure some things, of course.)
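A sketch of what that reproducibility looks like in practice (the tag below is only an example, not a recommendation):

```yaml
services:
  app:
    # A floating tag can change underneath you on the next pull:
    # image: nextcloud:latest
    # A pinned tag (or, stricter still, an @sha256 image digest)
    # keeps the exact dependency set that was tested together:
    image: nextcloud:28.0.1
```

Host-installed packages drift with every distro upgrade; a pinned image only changes when you choose to change it.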