
Please stop spreading FUD, like the claim that Docker services use more resources and come with a huge overhead.

It's not a virtual machine, it's really just a (fairly convenient) frontend to a bunch of kernel features, like namespaces & cgroups.

If you don't want a container-like abstraction on your system, that's totally fine, but please, don't make up silly arguments against it.

@fribbledom Seriously. All you need to do is point out that docker is often abused as a substitute for package management and user maintainability. Software as an appliance. That's...a pretty big argument against.

@BalooUriza @fribbledom Isn't that the entire point of docker? To normalize proprietary shitware on Linux and to act as a crutch: when a developer says "it works on my machine", docker goes "it's fine, then we'll ship your machine". Legit asking, I thought this was the main purpose of docker.

@Agris @BalooUriza

It can (and will) certainly be abused like that, but it's not like people didn't do that before Docker. (static builds, LD_LIBRARY_PATH hacks, etc.)

I would argue the entire point of Docker (or the kernel features it's using) is declarative configuration and separation of various namespaces (see the sketch after this list):

- mounts
- network
- processes
- user/uid
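
Not Docker-specific, but to make that list concrete: a minimal sketch using nothing but the util-linux unshare tool, assuming a root filesystem has already been unpacked into ./rootfs (a made-up path):

```sh
# Ask the kernel directly for the namespaces listed above; no Docker involved.
#   --mount                 private mount table
#   --net                   fresh, empty network stack (just a downed loopback)
#   --pid --fork            new PID namespace; the shell below sees itself as PID 1
#   --user --map-root-user  new user namespace; uid 0 inside maps to the caller outside
sudo unshare --mount --net --pid --fork --user --map-root-user \
    chroot ./rootfs /bin/sh
```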

@fribbledom @BalooUriza That's all just provided by LXC in the backend. Docker is just a set of poorly written scripts on top of the LXC utilities. I do use LXC, but I use it directly. LXC is vendor-agnostic Linux containers (kernel namespaces), similar to FreeBSD Jails.

@Agris @BalooUriza

... and how many times have I pointed that out in this thread?

It's not a set of scripts, but alas, I'm not here to argue about its code or code-quality.

@fribbledom @BalooUriza what I'm saying is that LXC can be used directly for the purposes of containers, kernel namespaces, mounts, etc. It's quite practical and that's what I do.

Docker seems explicitly geared to servicing the proprietary shovelware or software as an appliance industry.

@Agris @BalooUriza

Agreed, you can, and containers are nothing Docker invented.

Just to point this out tho: Docker isn't using LXC (anymore).

@Agris @BalooUriza

Nowadays it has its own execution environment:

github.com/opencontainers/runc

If I'm not mistaken LXC is still an option, but don't quote me on this 😄
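
For the curious, a rough sketch of driving runc by hand with an OCI bundle, roughly what Docker (via containerd) hands it behind the scenes; the paths and the container name here are made up:

```sh
# Assumes a root filesystem has already been unpacked into ./bundle/rootfs,
# e.g. exported from an existing container or built with debootstrap.
cd bundle
runc spec            # writes a default config.json describing the container
sudo runc run demo   # "demo" is an arbitrary container ID for this sketch
```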

@fribbledom @Agris @BalooUriza I prefer systemd-nspawn. In a way, it's even more vendor agnostic than LXC: the runtime is something you already get for free as part of a Linux distro, machines are systemd services, network is managed by networkd, container logs can be tailed by journalctl -M, and so on. Just use what you already know instead of wrestling with a runtime that has its own opinions about everything from file systems to the routing table.
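
To make that workflow concrete, a hedged sketch (the image URL and the machine name "myos" are placeholders, and the image is assumed to be bootable, i.e. to contain systemd):

```sh
machinectl pull-tar https://example.com/images/myos.tar.xz myos   # downloads into /var/lib/machines
                                                                   # (add --verify=no for unsigned images)
machinectl start myos        # boots it as a regular systemd service (systemd-nspawn@myos)
machinectl list              # show running machines
journalctl -M myos -f        # tail the container's journal from the host
machinectl shell myos        # open a shell inside
```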

@Agris @fribbledom @BalooUriza Why? Because Lennart works for Red Hat? IBM also sponsors a lot of Linux kernel development, does that make Linux not vendor agnostic?

Debian uses systemd, and last I checked it wasn't beholden to IBM in any way. Is Debian vendor agnostic?

@BalooUriza

yeah, that's fine

I'm not really about going all pro-skub, anti-skub on init systems. It's just that this, too, has a lot of editor-war energy.

I will note that the path wasn't, for everyone, a direct SysV vs systemd situation. There've been a lot of contenders, and systemd has sort of steamrolled over a lot of functions beyond init.

@angdraug @Agris @fribbledom

@angdraug I'm unfamiliar with LXC, but on a quick look it seems to rely only on the Linux kernel.
What does it require beyond that which makes it less vendor-agnostic than requiring systemd?

@Asimech LXC relies on the same kernel features as systemd-nspawn (and Docker, crun, etc.) and requires its own set of userspace tools—a container runtime.

On a system w/o systemd, LXC and systemd-nspawn are two alternative choices of a container runtime. LXC even makes more sense because you've already made a choice not to have systemd.

On a system that already has systemd, the LXC tools are redundant, and so is any other container runtime except the one you already have in systemd.

@angdraug So systemd-nspawn isn't more vendor agnostic than LXC, unless we assume every vendor of Linux is using/ships with systemd?

@Asimech No, you've got a quantification fallacy here. Vendor-agnostic means not tied to a single vendor. For any open technology there might be vendors refusing to use it; as long as there are still multiple vendors able and willing to support it, it remains vendor-agnostic.

@angdraug What?
You originally stated "in a way [systemd-nspawn] is even more vendor agnostic [than LXC]".
I fail to see how me following the premise of "it's possible to rank how vendor-agnostic something is" and disagreeing with your conclusion means I'm the one falling for a fallacy.

If your original statement _wasn't_ made with the assumption that it's possible to rank how vendor-agnostic something is then I don't know what you were even trying to say originally.

@angdraug There was something bugging me about this post and I think it's this:
"-refusing to use it-"

I don't care how common the "it" is, describing the choice to use something else as "refusal to use it" feels a bit presumptuous if there's no other context for the decision.
Especially when the context here is something that is exclusive and not like e.g. a login shell where you could have multiple kinds installed _and_ running at the same time with little to no risk of conflict.

@angdraug I should also point out I never said that I thought systemd-nspawn is _less_ vendor agnostic than LXC, just that I don't see how it would be _more_ so than LXC.

Though outside of "vendor agnosticism" I do feel like it's more reasonable to ask people to install userspace tools than to swap their init system.
Even if for some people they would be redundant, because redundancy is, in my opinion, a far lesser problem than discouraging choice.

@Asimech Yes, I do assume that you can measure vendor-agnostic qualities of software. You can weigh the use cases in which your solution ends up dependent on a single vendor for at least one component. Or you can count how many such single-vendor dependencies you end up with.

@Asimech The quantification fallacy I saw in your comment is assuming a negation relation between "tied to a single vendor" and "every vendor." The negation of "single vendor" isn't "all vendors"; it's "more than one." As in, as long as there are multiple vendors supporting systemd, making it a dependency doesn't make it less vendor-agnostic, even if not every vendor supports it.

@Asimech The unique property of nspawn that makes it more vendor-agnostic for some use cases is that systemd is both a container runtime and a required component of a systemd-based system.

Mind that I am not talking about telling people what to install on their laptops, that's not what LXC and nspawn are for. When you build a container based stack, you don't have to start from what you use on a desktop system, you pick the simplest possible combination of components that gets the job done.

@fribbledom @BalooUriza

@Azure do you have anything to add to this conversation? It seems like something you'd have something to say about.

@Agris @fribbledom @BalooUriza while Docker is not scripts, one can certainly replace it with 100 lines of shell and existing tools: github.com/p8952/bocker

For me the main point against Docker is that it favours the mindset of “we have no idea how our thing works or won't document how to install it, so here you have a possibly outdated, bloated full system image of what developers use when working on the code; run it in production YOLO”

@aperezdc @Agris @BalooUriza

It may enable people to do that (more easily), but how does it favor that mindset?

Pretty much all the docker images I use are super clean and light-weight. Crappy docker images exist, of course, just like crappy distro packages. I see no difference here: just don't use them.

@fribbledom I would strongly argue that crappy uses for #Docker are the overwhelming majority. Are there good uses for Docker? Yes. Are the majority of Docker users doing it right? No. It's a niche product that's seen overly broad use.

@aperezdc @Agris

@fribbledom @aperezdc @Agris @BalooUriza
I mostly use docker as a development environment, in which I can install and configure software to mirror the production environment as closely as possible. I could use a VM as well, but I have no need for a separate kernel.
Though people running docker on macOS will run a full Linux VM, as far as I understand. Is there a way to share development environments between Linux and macOS that doesn't have this overhead?

@fribbledom @aperezdc @Agris @BalooUriza is there a best-practice document on the cleanest way to package various services? Or on the general “best practice” concepts?

My current view is that docker is obviously the building block for packaging multiple standardized workloads for multiple clients over a set of physical machines, and that the fact my services hide behind a layer of virtual Ethernets & TCP proxies is just the unavoidable side effect of using it “standalone”. Am I mistaken there?
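
For what it's worth, that virtual-ethernet and proxy layer is just Docker's default bridge network, and a single-host "standalone" service can opt out of it. A small sketch (nginx is only an example image):

```sh
# Default: the container gets a veth pair on the docker0 bridge, and published
# ports go through NAT / the userland docker-proxy.
docker run -d -p 8080:80 nginx

# Opt out: share the host's network stack, so no veth, no NAT, no proxy.
docker run -d --network host nginx
```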

@vogelchr @fribbledom @Agris @BalooUriza FWIW there are certain services and situations that I would rather run on bare metal, even nowadays.

@fribbledom @Agris @BalooUriza when half of the published images have security vulnerabilities in them (or at least they did at the time securityweek.com/analysis-4-mi was written, to cite one example of sorts), I would say the norm is crappiness.

Sure, one can build good images, but if I have to do that myself (likely easier than auditing the existing images I would want to use), then I might as well shove the tarball onto an HTTP server and use “machinectl pull-tar” 🧙‍♂️

@fribbledom @Agris @BalooUriza
if the entire point of docker is to give developers a declarative configuration and separation of namespaces, and not distribution of software bundled with all its dependencies, then why does Docker Hub, a platform for distributing software in the form of docker images, exist, and why is `docker pull` a recommended thing to do?

@wolf480pl @Agris @BalooUriza

For the same reason you get your .debs from an apt repository or a PPA. For the sake of discussion I'd treat Docker and its Hub as separate entities. Just like you can get your Debian packages from somewhere else, you can use Docker without ever touching a registry.

Feel free to directly build your images from a Dockerfile if that feels right(er).
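
A sketch of that registry-free route, assuming you already have a root filesystem tarball lying around (rootfs.tar.gz and the image name are placeholders):

```sh
mkdir myimage && cd myimage
cp /path/to/rootfs.tar.gz .        # e.g. built with debootstrap or buildroot
cat > Dockerfile <<'EOF'
FROM scratch
ADD rootfs.tar.gz /
CMD ["/bin/sh"]
EOF
docker build -t myapp:local .      # builds entirely from local files, no registry pulls
docker run --rm -it myapp:local    # assumes the tarball ships a /bin/sh
```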

@fribbledom @wolf480pl @Agris @BalooUriza
Hot take: The whole reason docker exists in the first place is because Linux package management is full of politics and gatekeepers, like everyone who says that docker shouldn't exist.

@cjd @fribbledom @wolf480pl @BalooUriza there's nothing stopping you from creating a package and distributing it.

@Agris @cjd @fribbledom @wolf480pl @BalooUriza Package management is an unmitigated nightmare. If your project is cross-platform then it’s not just one package, it’s several, and even if you only target Linux there are countless distributions with their own formats. Distro maintainers are a PITA, often don’t make it easy to submit/maintain packages, and many distros don’t stay up-to-date anyway (cough Debian). I used to try to do packages for Ygg, but it’s a major uphill battle and now I refuse.

@Agris @cjd @fribbledom @wolf480pl @BalooUriza At some point, doing packaging actually became harder than working on the project itself and it’s basically a sunk cost fallacy from that point forward. I don’t know if I can call myself a super-fan of Docker necessarily but I can absolutely understand why people would rather just “ship their machines” than try to figure out the clusterfuck that is trying to be compatible with someone else’s. The whole thing is a broken laughable mess.

@neilalexander @Agris @cjd @fribbledom @BalooUriza the idea is that you shouldn't do this yourself. Instead you should have a friend among each distro's packagers, who will take care of any distro-specific stuff for you.

I have no idea how well that works in practice, but I guess it's far from ideal...

@wolf480pl @Agris @cjd @fribbledom @BalooUriza For that to be true, you’d have to a) have a lot of friends, b) have distribution packagers with sufficient bandwidth to do the legwork and c) have distribution packagers that know and understand your requirements well enough to get it right. In anything except for the biggest and most popular projects, that’s really not very likely.

@neilalexander Just target Debian. It's pretty much the definitive Linux distro. If downstream distros can't deal with Debian, that's their problem. Red Hat/CentOS is for pinhead suits who want the illusion of someone to sue.

@wolf480pl @Agris @cjd @fribbledom

@neilalexander @Agris @fribbledom @wolf480pl @BalooUriza

This seems like a good use for the Nix package manager, if it could be self-contained such that you could just use it like a curl-bash installer...

@cjd @fribbledom @Agris @BalooUriza that'd imply the point of docker is to have uncurated package management, which would confirm my point. But AFAIK muesli is trying to argue docker is just a tool for managing namespaces, so let's give him a fair chance to argue his point.
