If an attacker can modify the shell script, they can also modify the checksums in it.

But still, that sounds like `curl && sh` is *safer* than downloading the payload directly, because the script can add an integrity check that users may not perform themselves.
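For illustration, here's a minimal sketch (in Python rather than sh, with a made-up payload) of the kind of pinned-checksum integrity check such a script could embed:

```python
import hashlib

# Hypothetical payload; a real install script would hard-code the
# SHA-256 digest of the binary it is about to fetch.
payload = b"pretend this is the downloaded binary"
EXPECTED_SHA256 = hashlib.sha256(payload).hexdigest()  # normally a pinned constant

def verify(data: bytes, expected_hex: str) -> bool:
    """Reject any payload whose SHA-256 doesn't match the pinned digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

print(verify(payload, EXPECTED_SHA256))      # True
print(verify(b"tampered", EXPECTED_SHA256))  # False
```

Of course, this only helps if the script itself hasn't been tampered with: whoever controls the script also controls the pinned digest.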

HTTPS would notice an attack on your DNS (and without transport security you're totally screwed no matter what you download, or from where).

@kornel There is a pretty cool timing check that you can use to see if the file is being piped to shell or just downloaded as a file or with a browser — so you can serve different content in each case.

(Even if you couldn't do that, you could still serve the malicious script 1% of the time and, with enough downloads, have a good chance of taking over a good percentage of machines while remaining undetected.)
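Rough numbers for that strategy (illustrative only, assuming a 1% malicious-serving rate and independent downloads):

```python
# With the malicious script served only 1% of the time, the expected
# number of victims scales with downloads, while any single auditor
# who fetches the script once almost certainly sees the clean version.
p_malicious = 0.01

def p_looks_clean(audits: int) -> float:
    """Chance that `audits` independent fetches all return the clean script."""
    return (1 - p_malicious) ** audits

print(round(p_looks_clean(1), 3))    # 0.99  -- one check nearly always passes
print(round(p_looks_clean(50), 3))   # 0.605 -- even 50 checks miss it ~60% of the time
print(round(100_000 * p_malicious))  # 1000  -- expected victims out of 100k downloads
```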

ALWAYS download and check the script before running it!

@deshipu @kornel

> There is a pretty cool timing check that [an attacker] can use to see if the file is being piped to shell or just downloaded as a file or with a browser — so you can serve different content in each case.

That's really interesting! Do you happen to have any details about how that works? I wouldn't have guessed it was possible.
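One variant of this trick that has been written up exploits pipe backpressure: a shell executes a piped script as it streams in, so if the script starts with a `sleep`, the shell stops reading while the sleep runs, the OS pipe buffer fills, and the sender sees its own writes stall; a plain download drains continuously. A local Python simulation of that timing difference (a sketch, assuming a POSIX `sh` and `sleep` are available):

```python
import subprocess
import time

# A shell executes a piped script as it arrives. While `sleep 1` runs,
# the shell stops reading, the OS pipe buffer (typically 64 KiB) fills,
# and the sender's writes block -- a timing signal a malicious server
# could use to tell `curl | sh` apart from a plain download.
script = b"sleep 1\n" + b"# padding\n" * 50_000  # ~500 KB, far above pipe capacity

def time_to_send(cmd):
    """Time how long it takes to push the whole script into `cmd`'s stdin."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    start = time.monotonic()
    proc.stdin.write(script)
    proc.stdin.flush()
    elapsed = time.monotonic() - start
    proc.stdin.close()
    proc.wait()
    return elapsed

piped_to_shell = time_to_send(["sh"])   # blocks while `sleep` runs
plain_download = time_to_send(["cat"])  # drains the pipe immediately

print(piped_to_shell > plain_download)  # True: the shell consumer is measurably slower
```

Downloading to a file first removes this signal, since the file is written at full speed regardless of what you do with it afterwards.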

@deshipu I know this. That's why I've been referring to curl && sh, not curl | sh :)
But that is what Schneier calls a "movie-plot threat". It sounds very hacky and scary; in practice nobody does this, and nobody needs to.
You serve a clean sh script that downloads a binary blob. The victim will verify the script and get a false sense of security. Then you hit the victim from the blob, while also cleaning up the evidence.

@deshipu But it all goes back to the trust issue.

If you expect the server to launch a devastating, undetectable attack specifically against you, why do you download and run 5 million lines of unverified, potentially malicious code from it?

But if you trust the server not to hack you to the point you would blindly run 5 million lines of code from it, then fearing the first 200 lines is irrational.

@kornel That is not a trust issue. That is an issue with the all-or-nothing approach to permissions in Unix and its derivatives. You either have to "trust" completely (not actually trust, but tolerate the risk), or be unable to run any software at all.

That is mainly because every single process you run has access to all your data: all your ssh keys, your e-mail and messenger messages, your passwords, your browser history, and so on.

Which is, if you think about it for a moment, pretty insane.

@deshipu In security terminology "trust" doesn't mean actually trusting in a human-relationship sense, but more like "acceptable level of confidence it's not controlled by the attacker". With HTTPS you "trust" certificate authorities, even though you don't know any of them and wouldn't lend them $5.

@kornel You know, the fact that you feel compelled to explain that "trust" is not really "trust", but something in every way like trust except that no actual trust is involved, is a good example of how the whole design is intentionally broken, made into a huge trap for any new people who happily wade into it.

@deshipu It's a simplified term, useful for talking about security properties. The same way "authoritative" DNS is not an elected institution with a mandate. It just (re)uses a word to distinguish one configuration from another.

We can use "florp" instead.

If you don't florp the 200-line install script, why would you florp a 5 million-line binary blob from the same source?

Conversely, if you florp 5 million lines of unverified code, you may as well florp 5,000,200.

@kornel I think the term you are looking for is "depend".

@deshipu No, definitely not it. The infosec "trust" is an assumption that a thing won't behave maliciously. Things that you don't depend on can be trusted. You could depend on things that are untrusted (e.g. a network connection).

@kornel I don't quite see the difference. When I depend on a network connection, I assume that it is not (maliciously or otherwise) disconnected. Using your language, I trust that it is connected. Are you trying to imply that infosec makes a distinction between intentionally targeted attacks and random technical glitches? How do I tell whether the network has been disconnected maliciously or in good faith (and as such can continue to be trusted)?

@deshipu Yes, infosec's "trust" is different from the human meaning of trust. It doesn't concern itself with reliability: an unreliable network won't inject a virus into an exe you download, an untrusted network will.
A disconnected network is harmless, so it's not an infosec problem.

@kornel Let me get this right, so if my personal data gets leaked all over the internet due to a technical glitch making it possible, that's not infosec's concern, you only care if someone specifically made that happen because they wanted that data? Is that correct?

@deshipu It doesn't work like that. That is such a deep misunderstanding that I can't even answer your question, because I don't know how to connect it to the topic I was speaking about.

I'm trying to explain to you what a term means, and you seem to just not like the meaning, or to be looking for someone to be angry at.

@deshipu So trusted/untrusted may be closer to the military terms "friendly territory"/"enemy territory". You assume you can stay safe on friendly territory, and are in danger in enemy territory.

And your questions are like "but how can this be friendly territory if not everyone is my true friend?" and "I could break a leg in friendly territory, is that not a danger!?"

@kornel And you really don't see how artificial and useless in practice that distinction is?

@deshipu It is incredibly useful. Without defining a threat model, it's not possible to say whether something is secure or not (people still say it, but they have their own threat model in mind).

For example, if you "trust" your computer, "trust" PKI, then HTTPS is secure.

If you don't "trust" your computer, or don't "trust" PKI, then HTTPS is not secure.

If you "trust" the network, then HTTP is secure.

So both HTTP and HTTPS are or aren't secure, depending on what components are "trusted".

@kornel But this redefinition of "trust" and "security" only serves one purpose: to claim something is secure when in practice it really isn't, and then to say your work is done. That strikes me as not only extremely self-serving and dishonest, but also harmful to all those people who came to trust you (in the common sense).

When you only focus on known enemies, sooner or later you are going to get flanked by something either random or simply unanticipated. Why not build reliable software instead?

@deshipu You're looking for malice in *words*.

Just like when a door lock installation manual says "this side inside, this side outside", the "inside"/"outside" is not dishonest and self-serving. It's not trying to secretly lock you out of your apartment. It isn't ignorant about the security of your windows. It just describes a security property of the device: which side is "trusted" and which isn't.

@deshipu No, it's not a claim. I've tried to explain to you a dozen times that it's not a description of the actual real-world state or a judgement of behavior.

It's an assumption. It's a description of requirements.

When a door lock has a knob on one side, that's the "trusted" side. Not because it's safe or not safe, or because any single person on earth trusts it. Only because it'd be dumb to have the knob on the outside of the door.

@deshipu And infosec needs to have terms for "outside of the door where bad guys may be" and "inside of the door where you live", so that there are words to explain which way you're expected to install the locks.

So "is X trusted?" is infosec equivalent of "is this side of the door facing your flat?"
