Please stop spreading FUD like "Docker services use more resources" or "Docker comes with a huge overhead".
It's not a virtual machine, it's really just a (fairly convenient) frontend to a bunch of kernel features, like namespaces & cgroups.
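To make that concrete, here's a minimal sketch using plain util-linux tools with no Docker installed at all (the `unshare` invocation may need root, or a kernel that allows unprivileged user namespaces):

```shell
# Every Linux process already lives in a set of namespaces;
# a "container" just gets fresh ones:
ls /proc/self/ns

# Enter new user + PID namespaces directly. Inside, the shell
# sees itself as PID 1, just like in a container:
unshare --user --map-root-user --pid --fork --mount-proc \
    sh -c 'echo "PID inside: $$"' \
    || echo "unprivileged user namespaces not available here"
```

Docker adds image management, a daemon, and a CLI on top, but the isolation itself is exactly these kernel features.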
If you don't want a container-like abstraction on your system, that's totally fine, but please, don't make up silly arguments against it.
@fribbledom Seriously. All you need to do is point out that docker is often abused as a substitute for package management and user maintainability. Software as an appliance. That's...a pretty big argument against.
It can (and will) certainly be abused like that, but it's not like people didn't do that before Docker. (static builds, LD_LIBRARY_PATH hacks, etc.)
I would argue the entire point of Docker (or the kernel features it's using) is declarative configuration and separation of various namespaces:
Its own execution environment:
If I'm not mistaken LXC is still an option, but don't quote me on this 😄
@fribbledom @Agris @BalooUriza I prefer systemd-nspawn. In a way, it's even more vendor agnostic than LXC: the runtime is something you already get for free as part of a Linux distro, machines are systemd services, network is managed by networkd, container logs can be tailed by journalctl -M, and so on. Just use what you already know instead of wrestling with a runtime that has its own opinions about everything from file systems to the routing table.
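For a flavour of what "machines are systemd services" looks like in practice, here's a hypothetical `.nspawn` file (the machine name `demo` and the network choice are made up): drop it next to a root filesystem under /var/lib/machines/demo and systemd picks it up, with no extra runtime configuration format involved.

```ini
# /etc/systemd/nspawn/demo.nspawn — hypothetical example
# Started via: machinectl start demo (i.e. systemd-nspawn@demo.service)
[Exec]
Boot=yes

[Network]
# Give the container its own veth link, managed by systemd-networkd
VirtualEthernet=yes
```

Logs for the machine then show up under `journalctl -M demo`.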
yeah, that's fine
I'm not really about going all pro-skub, anti-skub on init systems. It's just this too has a lot of editor-war energy.
I will note the path wasn't for everyone a direct sysv vs systemd situation. There've been a lot of contenders, & systemd has sort of steamrolled over a lot of functions beyond init.
@angdraug I'm unfamiliar with LXC, but on a quick look it seems to rely only on the Linux kernel.
What does it require beyond that which would make it less vendor-agnostic than requiring systemd?
@Asimech LXC relies on the same kernel features as systemd-nspawn (and Docker, crun, etc.) and requires its own set of userspace tools—a container runtime.
On a system w/o systemd, LXC and systemd-nspawn are two alternative choices of a container runtime. LXC even makes more sense because you've already made a choice not to have systemd.
On a system that already has systemd, the LXC tools are redundant, and so is any other container runtime except the one you already have in systemd.
@angdraug So systemd-nspawn isn't more vendor agnostic than LXC, unless we assume every vendor of Linux is using/ships with systemd?
@Asimech No, you've got a quantification fallacy here. Vendor-agnostic means not tied to a single vendor. For any open technology there might be vendors refusing to use it; as long as there are still multiple vendors able and willing to support it, it remains vendor-agnostic.
You originally stated "in a way [systemd-nspawn] is even more vendor agnostic [than LXC]".
I fail to see how me following the premise of "it's possible to rank how vendor-agnostic something is" and disagreeing with your conclusion means I'm the one falling for a fallacy.
If your original statement _wasn't_ made with the assumption that it's possible to rank how vendor-agnostic something is then I don't know what you were even trying to say originally.
@angdraug There was something bugging me about this post and I think it's this:
"-refusing to use it-"
I don't care how common the "it" is, describing the choice to use something else as "refusal to use it" feels a bit presumptuous if there's no other context for the decision.
Especially when the context here is something that is exclusive and not like e.g. a login shell where you could have multiple kinds installed _and_ running at the same time with little to no risk of conflict.
@angdraug I should also point out I never said that I thought systemd-nspawn is _less_ vendor agnostic than LXC, just that I don't see how it would be _more_ so than LXC.
Though outside of "vendor agnosticism" I do feel like it's more reasonable to ask people to install userspace tools than to swap their init system.
Even if for some people those tools would be redundant, redundancy is in my opinion a far lesser problem than discouraging choice.
@Asimech Yes, I do assume that you can measure vendor-agnostic qualities of software. You can weigh the use cases in which your solution ends up dependent on a single vendor for at least one component. Or you can count how many such single-vendor dependencies you end up with.
@Asimech The quantification fallacy I saw in your comment is assuming a negation relation between "tied to a single vendor" and "assume every vendor." The negation of "single vendor" isn't "all vendors"; it's "more than one." As in: as long as there are multiple vendors supporting systemd, making it a dependency doesn't make it less vendor-agnostic, even if not every vendor supports it.
@Asimech The unique property of nspawn that makes it more vendor-agnostic for some use cases is that systemd is both a container runtime and a required component of a systemd-based system.
Mind that I am not talking about telling people what to install on their laptops, that's not what LXC and nspawn are for. When you build a container based stack, you don't have to start from what you use on a desktop system, you pick the simplest possible combination of components that gets the job done.
For me the main point against Docker is that it favours the mindset of “we have no idea how our thing works, or won't document how to install it, so here you have a possibly outdated, bloated full system image of what the developers use when working on the code; run it in production, YOLO”
It may enable people to do that more easily, but how does it favor that mindset?
Pretty much all the docker images I use are super clean and light-weight. Crappy docker images exist, of course, just like crappy distro packages. I see no difference here: just don't use them.
My current view is that docker is obviously the building block to package multiple standardized workloads for multiple clients over a set of physical machines, and the fact that my services hide behind a layer of virtual ethernets & TCP proxies is just an unavoidable side effect of using it “standalone”. Am I mistaken there?
@fribbledom @Agris @BalooUriza when half of the published images have security vulnerabilities in them, or at least half at the time https://www.securityweek.com/analysis-4-million-docker-images-shows-half-have-critical-vulnerabilities was written (to cite one example of sorts), I would say the norm is crappiness.
Sure, one can build good images, but if I have to do it myself (likely easier than analyzing existing images I'd want to use), then I might as well shove the tarball onto an HTTP server and use “machinectl pull-raw” 🧙‍♂️
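To sketch that tarball-over-HTTP path (the file names, directory, and port are made up; note that `pull-tar` is the tarball counterpart of `pull-raw`, which takes raw disk images):

```shell
# Serve an exported root filesystem from any static HTTP server
python3 -m http.server 8000 --directory /srv/images &

# Import it with machinectl — no registry involved — and boot it
machinectl pull-tar --verify=no http://localhost:8000/rootfs.tar.xz demo
machinectl start demo
```

`--verify=no` is only sensible for a trusted local source; for anything else, publish checksums or signatures alongside the archive.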
@fribbledom @Agris @BalooUriza
if the entire point of docker is to give developers a declarative configuration and separation of namespaces, and not distribution of software bundled with all its dependencies, then why does dockerhub - a platform for distributing software in the form of docker containers - exist, and why is `docker pull` a recommended thing to do?
For the same reason you get your .debs from an apt repository or PPA. For the sake of discussion I'd treat Docker and its Hub as separate entities. Just like you can get your Debian packages from somewhere else, you can use Docker without ever touching a registry.
Feel free to directly build your images from a Dockerfile if that feels right(er).
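As a sketch of that registry-free path (the `hello` binary stands in for any statically linked executable you've built locally): a `FROM scratch` build never contacts a registry at all.

```dockerfile
# Build from an empty base image — nothing is pulled from anywhere
FROM scratch
COPY hello /hello
ENTRYPOINT ["/hello"]
```

`docker build -t hello . && docker run --rm hello` then works entirely from the local image store, and `docker save` / `docker load` move the image between machines as a plain tarball.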
@Agris @cjd @fribbledom @wolf480pl @BalooUriza Package management is an unmitigated nightmare. If your project is cross-platform then it’s not just one package, it’s several, and even if you only target Linux there are countless distributions with their own formats. Distro maintainers are a PITA and often don’t make it easy to submit/maintain packages and many distros don’t stay up-to-date anyway (cough Debian). Used to try to do packages for Ygg but it’s a major uphill battle and now I refuse.
@Agris @cjd @fribbledom @wolf480pl @BalooUriza At some point, doing packaging actually became harder than working on the project itself and it’s basically a sunk cost fallacy from that point forward. I don’t know if I can call myself a super-fan of Docker necessarily but I can absolutely understand why people would rather just “ship their machines” than try to figure out the clusterfuck that is trying to be compatible with someone else’s. The whole thing is a broken laughable mess.
@neilalexander @Agris @cjd @fribbledom @BalooUriza the idea is that you shouldn't do this yourself. Instead you should have a friend among each distro's packagers, who will take care of any distro-specific stuff for you.
I have no idea how well that works in practice, but I guess it's far from ideal...
@wolf480pl @Agris @cjd @fribbledom @BalooUriza For that to be true, you’d have to a) have a lot of friends, b) have distribution packagers with sufficient bandwidth to do the legwork and c) have distribution packagers that know and understand your requirements well enough to get it right. In anything except for the biggest and most popular projects, that’s really not very likely.