@BadAtNames @fribbledom i don't think I had even considered that there might be options

@BadAtNames

@fribbledom

-vte
shows non-printing characters, useful in unix<->dos comparisons/conversions
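A quick sketch of what that looks like (assuming GNU cat, where -e and -t already imply -v):

```shell
# -v: show non-printing characters (a carriage return appears as ^M)
# -t: show tabs as ^I
# -e: mark each end of line with $
printf 'unix line\n'  | cat -vte   # unix line$
printf 'dos line\r\n' | cat -vte   # dos line^M$
printf 'tab\there\n'  | cat -vte   # tab^Ihere$
```

The stray ^M at the end of every line is the giveaway that a file has DOS line endings.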

@fribbledom i have legit never used TAR in my whole experience...

@fribbledom That's OK. Soon enough all commands and options will be delegated to "sub parameters" to systemd-bash and no longer count :P

@fribbledom To be fair, you don't need to touch most of those options.

@fribbledom well that settles it: clear is the best cli command

@fribbledom
I just wanted to express my allyship to _clear_ and my deep disappointment if the state ever changed.

I feel betrayed.

@fribbledom I always suspected tar had too many options. all I ever do is extract and pack archives, why do I need so many options for that?

@woodcat

I guess this is part of the issue: tar wasn't intended to be a file compression utility, but a t(ape) ar(chiver). If all it did was file compression, it probably would be a whole lot slimmer.

@fribbledom
Ah, the unix philosophy, do 139 things and do them well.

@fribbledom I kind of want to memorise the 139 tar options now, as a challenge, or penance

@fribbledom that's really interesting from the perspective of rewriting coreutils

@fribbledom
> "The growth of command line options, 1979-Present": danluu.com/cli-complexity/

Thanks for linking this. I hadn't seen it, and it gives me a *lot* to think about

A few stand-out passages:

> Ironically, one of the reasons for the rise in the number of command line options is another McIlroy dictum, "Write programs to handle text streams, because that is a universal interface"… If structured data or objects were passed around, formatting could be left to a final formatting pass.

1/4

@fribbledom

> [Adding options to CLI programs has a cost]—more options means more maintenance burden—but that's a cost that maintainers pay to benefit users, which isn't obviously unreasonable considering the ratio of maintainers to users. This is analogous to Gary Bernhardt's comment that it's reasonable to practice a talk fifty times since, if there's a three hundred person audience, the ratio of time spent watching the talk to time spent practicing will still only be 1:6.

2/4

@fribbledom

> If you think of the set of command line tools along with a shell as forming a language, a language where anyone can write a new method and it effectively gets added to the standard library if it becomes popular, where standards are defined by dicta like "write programs to handle text streams, because that is a universal interface", the language was always going to turn into a write-only incoherent mess when taken as a whole.

3/4

@fribbledom

> People make fun of… javascript for having all sorts of warts and weird inconsistencies, but as a language and a standard library, any commonly used shell plus the collection of widely used *nix tools taken together is much worse and contains much more accidental complexity due to inconsistency even within a single Linux distro and there's no other way it could have turned out.

Not entirely sure what to make of all this. I don't *want* to agree, but don't have great answers

4/4

@fribbledom I am super unconvinced by the author’s assertion that this is inevitable. As I pointed out when I saw this elsewhere:

Plan 9, today:
ls:13
rm:2
mkdir:2
mv:0
cp:3
cat:0
pwd:0
chmod:0
echo:1
man:7
tar:17 (function+modifiers)
ps:3
ping:10
kill:0
chown:(N/A - chgrp: 2)
grep:11
tail:9
top:1 (3rd party)

Others N/A.

@fribbledom haven't read the article yet, but:

why do we

chown -R user:group dir

instead of

find dir -exec chown user:group {} \;

@fribbledom
or better yet

find dir -print0 |xargs -0 chown user:group
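A side note on the -print0/-0 pair: it delimits filenames with NUL bytes instead of whitespace, so names containing spaces or newlines survive the pipe intact. A quick illustration using printf instead of chown (the directory and filename are just for the demo):

```shell
mkdir -p /tmp/demo-print0 && touch '/tmp/demo-print0/with space'
# Without -print0, xargs would split on the space and see two arguments;
# with NUL delimiters the filename arrives as a single argument:
find /tmp/demo-print0 -type f -print0 | xargs -0 printf '%s\n'
```

This prints exactly one line per file, regardless of what the names contain.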

@wolf480pl

For one: convenience. But also: not spawning a thousand chown processes.

@fribbledom
as for process spawning:
- there will be at most one chown at a time
- you could do:

find dir -exec chown user:group {} +

which would put as many files in each chown invocation as can fit in argv

- the whole Unix paradigm works best when creating processes is cheap. If creating processes is expensive, what can we do to make it cheaper?

@fribbledom
Convenience still stands though.
And that's the major reason why BSD and then GNU added most of these options

@wolf480pl

I know and didn't (necessarily) mean in parallel. It will still launch chown a thousand times, though, and that's also not exactly the most efficient way of doing things 😉

@wolf480pl

To the second part of your reply, I guess some overhead we simply won't be able to get rid of:

- signal handlers
- env vars
- keeping state like current working dir, uid/gid/pid, open files
- arch specific stuff: registers
- address space / memory

Some things are optional, but probably are here for a good reason:

- cgroups
- namespaces

I'm probably also forgetting a whole bunch of other things here.

@wolf480pl @fribbledom I didn't know that detail! Thank you for sharing it. I guess I have some mass-spawning problems on my server when cron jobs run.

@vanitasvitae @fribbledom That one amused me because, as a UI/UX enthusiast, I agree with the sentiment but, perhaps ironically, I can also do that because I habitually unpack archives in the terminal.
