And it failed spectacularly.
We only needed a simple form, but we wanted to be fancy, so we used “Nextcloud Forms”.
The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower. No warning whatsoever. It’s an official app; couldn’t they have waited until it was ready for NC 30 before shipping the new version? The newsletter boasts that “NC Hub 9 is the best thing since sliced bread”, yet I don’t see any difference in either visuals or performance compared to NC Hub 2.
Conclusion: we made our business rely on Nextcloud Forms as a signup form, but the one thing we were using it for was disabled who knows how many weeks ago.
Wait, you update production systems without running a staging environment? Or even checking the update notes and your installed apps? Also no backups? What kind of business are you running over there?
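Even without a staging box, checking which apps survived an image bump takes seconds from occ. A rough sketch, assuming the compose service is named app and the usual www-data user:

```
# list installed apps and see which ones the upgrade disabled
docker compose exec -u www-data app php occ app:list
# once the app is compatible again, re-enable it (app id for Forms assumed to be "forms")
docker compose exec -u www-data app php occ app:enable forms
```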
Oh, Nextcloud’s docker image is a joke. They follow no standards or best practices when it comes to Docker. They keep the entire app directory mounted as a volume, which means it upgrades you without you “needing” to upgrade the docker image. They have volumes within volumes they need to mount. Their configs can (and do) override environment variables. Most actions that need to be taken require running an `occ` command, which can only be done by exec’ing into the container.

Nextcloud’s docker image is honestly just such a joke. They should have rethought their application from a Docker perspective and they didn’t. God, just number one: Docker images should never update themselves. It’s a pinned version for a reason. If I want to update, it should be as simple as bumping the version tag, and it should do any upgrades in place when I do that.
I honestly steer people away from Nextcloud now because of how mismanaged their images are.
Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible” because the primary target is selfhosters that don’t really even want to learn how to set up Docker containers properly.
The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.
In Docker, the auto updater should be completely neutered; it’s the wrong way to update the app.
The packages in the Arch repo are legit saner than the Docker version.
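On the auto-updater point: with the image pinned, an update becomes an explicit, deliberate step instead of something that happens to you. A rough sketch, assuming the compose service is named app and the image line is pinned to something like nextcloud:29-apache:

```
# bump the pinned tag in docker-compose.yml (e.g. nextcloud:29-apache -> nextcloud:30-apache),
# then pull and recreate the container
docker compose pull
docker compose up -d
# run the migration explicitly and check the result
docker compose exec -u www-data app php occ upgrade
docker compose exec -u www-data app php occ status
```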
I had to learn how to mount subpaths for their terrible container, and god, just the updater is mind-boggling. And I have to store their code in a volume, because of course I have to; why would code and configuration ever need to be… configurable? I actually tried to put their `config.php` into a ConfigMap just to see, and of course PHP doesn’t allow that - not that I blame PHP for it - but ffs, it’s been years; it’s time to allow config to also come from a YAML file or something.

The OwnCloud rewrite in Go is way better: https://owncloud.dev/ocis/
Yeah I’ve thought about migrating, but I have a few users on it who use nextcloud regularly now, so I’m forced to support it - unless there’s an easy migration path
I’m attracted to it because of the POSIX backend. Has anyone tried it? Is it stable?
For reference, https://owncloud.dev/architecture/posixfs-storage-driver/
I’m testing it now. Seems way faster and more stable.
I’m just trying to get the oauth login to work but the actual file sync works great.
Is this compatible with existing (Android) clients? I need offline file support for KeePass.
Yes, it works with the Android app.
Having the web server be able to overwrite its own app code is such a good feature for security. Very safe. Only need a path traversal exploit to backdoor `config.php`!

What’s the better way of hosting it?
I do it in docker at home, for myself, in an environment I am okay with accidentally destroying - and even then I have nightly backups of the volumes.
In a professional system, as mentioned in my other comment, I would simply do it in a VM with the disk also scheduled for nightly backups. Nextcloud just hardcoded too many things on the assumption that the underlying system is mutable. Unfortunately, that’s just the easiest way to handle it.
However, also as mentioned, if I were in a professional environment, I’d have to really look at the cost of all that infrastructure and my time to run it, and decide whether I really thought I could run it myself with all of that overhead and that it would still make sense compared to just doing Google Docs or something. Remember, it’d be my ass on the line, as OP is learning.
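For reference, the nightly backups of the volumes I mentioned are nothing fancy; a minimal sketch, assuming a named volume called nextcloud_data and a backup directory on the host (for a truly consistent backup you’d also want maintenance mode and a database dump):

```
# cron job: archive the named volume into a dated tarball via a throwaway container
docker run --rm \
  -v nextcloud_data:/data:ro \
  -v /srv/backups:/backup \
  alpine tar czf "/backup/nextcloud-$(date +%F).tar.gz" -C /data .
```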
I wiped a whole drive (luckily it was filled with a redundant backup) with the docker image, as the behavior was (or still is, I don’t know if it was fixed) to `rm -rf .` and replace it with fresh stuff if `occ` isn’t found. In the docker compose I accidentally typed the wrong volume, /mnt/disk2 instead of /mnt/disk3, and it erased it.

Oh yeah, if you’re in a professional environment, I’m sorry, but that’s just not great. The only way I’d consider running Nextcloud professionally would be on a VM of its own with nightly disk backups, with blob storage as the backing - and even then, with the cloud costs, how close are you really to just paying for an enterprise license from Google or Microsoft? Plus you’d skip the headache of worrying about it yourself.
The images work fine for me. The problem is that Nextcloud is a complex app that doesn’t really work with the design of one container to do one job. It is pretty much a regular application that uses docker for packaging.
That doesn’t make up for bad container decisions. I run much more complex containers both that split out responsibilities and that contain everything as one container. The size and complexity is irrelevant to the bad design decisions. You can have an image that eats up gigabytes of space that runs off of proper environment/config variables with properly mounted volumes.
Again, their docker image is just a packaging format and a health check. I very much wish it were better, but for now it works.
Just because it works doesn’t mean it follows best practices.
https://docs.docker.com/build/building/best-practices/#create-ephemeral-containers
One that lacks a good IT department, apparently.
To be fair, a certain security company was in global news for exactly that same “send it” behavior. Why waste precious resources on multiple instances? Investors hate waste. 😅
The world is your ~~oyster~~ test env.

It worked on my box!
If I understand correctly, nextcloud automatically updated … which I didn’t think it would, normally. Maybe it’s a “feature” of the AIO docker image?
Yes, no staging, because it’s something used by at most 2 concurrent users; we were OK with 95% reliability (we discovered it was disabled after at least two weeks lol).
Otherwise we would just have signed up for one of the many cloud form services at $100/year.
Backups are daily, but it’s unthinkable to revert something like Nextcloud to a months-old one.
We’re subscribed to both the newsletter and the RSS feed to know about issues (the command to update the docker images isn’t automated; it’s issued manually). The maintainer of the Forms app is Nextcloud itself, so any incompatibility should have been written in red bold characters in the blog posts and newsletter.
Why are your backups so out of date? Just set up daily snapshots and call it a day if it isn’t critical. You never want to update major versions first thing; wait 3 months and then update.
This smells like shadow IT
I have daily backups and hourly ZFS snapshots. The problem is that, because nobody used the useless survey plugin, I have no idea when it broke. It could have been yesterday or it could have been 4 months ago.
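The hourly snapshots are just a cron entry along these lines (dataset name is an assumption):

```
# hourly cron: snapshot the dataset backing the Nextcloud data
zfs snapshot tank/nextcloud@"$(date +%Y-%m-%d_%H%M)"
# list snapshots to find a known-good point once you figure out when it broke
zfs list -t snapshot -r tank/nextcloud
```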