Give it a try!
> When a model’s predictions and prohibitions line up with observable reality, the model is true. When those predictions are easy to make and check, it is useful.
> A model is wrong not because it is not precisely quantified (like satiety), or because it was not published in a scientific journal (like MBTI), or because it has been superseded by a more reductionist model (like Bohr’s atom).
> It is wrong when it predicts things that don’t happen or prohibits things that do.
Q. What do you call a hamburger that does math?
A. A Presburger.
Fixed a joke from #gpt3
Nice illustration for
Ceci n'est pas un twitter.