Hot take on economics
I have heard people talking about "economics" and "economic systems" as if they're this unnatural thing imposed on innocent humans. But the fact that we use the same word for both the thing being studied and the science of studying it is just an unfortunate accident of terminology.
"Economics" is really just the study of a particular class of human activity that pre-dates agriculture. An "economic system" is what you have any time you have a complex society.
So "not having economics" just means ceasing to care about a particular class of human activity. It doesn't make that activity stop. And having an "economic system" isn't optional; it's just a question of whether you pay any attention to what that system actually is and how it behaves.
@freakazoid i agree, but i don't quite understand. who is for "not having economics"?
@o I don't remember who it was specifically and don't want to point fingers at people anyway. It's even possible I misunderstood them. My point is a general one.
@o I suspect it's a similar issue to the widely varied definitions of "markets" and "capitalism" that people use. They might have been talking about the combination of macroeconomics and finance, in which case there's a reasonable chance I at least partially agree with them.
@freakazoid So ...
Back on Google+, there were a set of self-described Libertarians (and @woozle will remember some of these conversations -- they're not one, but were a fellow participant) who I'd occasionally engage with mostly to try to understand what the hell they were on about.
This included a few rounds trying to suss out just what they meant by terms such as "markets" and "capitalism", in particular.
It's one thing to disagree with someone.
@freakazoid ... tend to go quite poorly.
This includes reading and referencing their generally preferred sources (if you can even wrest these from them at all). Stuff like Hazlitt, Rothbard, and von Mises, if you're lucky.
There was one YT vid in particular @woozle had turned up at one point -- I _think_ it was "Objectivist Girl" -- discussing von Mises and "praxeology". Which we realised was basically word salad. Let me see if I can find that...
@dredmorbius @o @woozle I've read Economics in One Lesson. I've also read Economics for Real People (Gene Callahan's intro to Austrian economics), Free to Choose, and Machinery of Freedom. I think a big issue with libertarians is that they don't realize just how much the framework in which markets exist matters. There's no such thing as a "free market".
@freakazoid For a long time my trite dismissal of Libertarianism was: "It's a fundamental inability to understand or acknowledge that wealth is in fact power."
When I finally started reading Adam Smith and found his "Wealth, as Mr Hobbes says, is power", I was gobsmacked. That had been a principal gripe of mine about economics (Libertarian or otherwise), and Smith directly confronted and acknowledged it.
There's another error I see now that is deeper: Weber & NAP.
@freakazoid Akerlof's "Market for Lemons" addresses a *part* of this, but only in part, and only in some cases.
The dynamic ends up being a validation for establishing minimum standards which *must* be met for market entry in many instances. Ergo: fully unregulated markets fail, and spectacularly.
(So do *badly* regulated ones, but that's another issue.)
And that's not the only problem, but I'll stop there for now.
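The unraveling in Akerlof's model can be sketched with toy numbers (mine, purely illustrative, not from the thread): buyers will only pay for the *average* quality on offer, which prices the best sellers out, which lowers average quality further, and so on down to no trade at all.

```python
# Minimal adverse-selection sketch. Assume quality q is uniform on [0, 1],
# a seller of quality q sells at any price p >= q, and buyers value quality
# q at 1.5 * q but can only observe the average quality on offer.

def market_price(iterations=20, start_price=1.0):
    p = start_price
    for _ in range(iterations):
        # Only sellers with q <= p participate, so average offered quality is p/2.
        avg_quality = p / 2
        # Buyers will pay at most 1.5x the average quality they expect,
        # i.e. p becomes 0.75 * p: the price shrinks every round.
        p = 1.5 * avg_quality
    return p

print(market_price())  # collapses toward 0: no trade, despite gains from trade
```

A quality floor (minimum standard for market entry) breaks the spiral by propping up the average quality buyers can expect.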
@dredmorbius @o @woozle That's specifically about situations where there's heterogeneous quality and no reputation for the seller, though. People certainly sell high-quality used cars, and you can get used cars with warranties from dealers. I think we do see this phenomenon with some consumer electronics. Often it's not asymmetric information so much as lazy consumers, though.
@woozle @o @dredmorbius It's a little different from Gresham's Law because in that case the "price" is dictated by law to be the same for all coins of a given denomination, whereas prices of goods in the coinage assume the coins with the lowest actual precious metal content.
(Now that I say this I realize Gresham's Law has nothing to do with non-use of cryptocurrencies, since relative prices can adjust.)
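For what it's worth, the Gresham dynamic above can be sketched in a few lines (coin metal contents invented for illustration): with face value fixed by law, a rational payer spends the most-debased coins first and hoards the rest.

```python
# Toy Gresham's Law sketch: all coins share one legal face value, so payers
# choose which physical coins to part with by metal content, lowest first.

def spend(purse, coins_owed):
    """Pay a bill of `coins_owed` coins, surrendering lowest-content coins first."""
    purse = sorted(purse)            # metal content per coin, ascending
    paid = purse[:coins_owed]        # the debased coins leave the purse
    hoarded = purse[coins_owed:]     # the full-weight coins stay home
    return paid, hoarded

paid, hoarded = spend([0.95, 0.50, 0.60, 0.90], coins_owed=2)
print(paid)     # [0.5, 0.6] -- the debased coins circulate
print(hoarded)  # [0.9, 0.95] -- "bad money drives out good"
```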
There's fiat or imposed value, as with coin. Also with transjurisdictional standards, such as divorce law and shipping registries ("flags of convenience"). Whatever the *minimum* acceptable *somewhere* is, is acceptable *everywhere*.
There's effective perceived value -- Mencken's "Brayard", or consumer technologies, or bicycles.
@freakazoid Underlying quality is difficult to communicate, so some *quality indicator* is substituted. Accent. Vocabulary. Cultural myths. Clothing. Food. Table manners. Branding. Musical tastes. Books read. Schools attended. Management fads.
These signal *both* quality *and* group alignment -- and the wrong set can easily get you killed in many cases.
*Changing* signifiers is highly traumatic: culture wars and value shifts.
This also leads to cargo culting.
@dredmorbius @woozle @o These fall into a few different possibly overlapping categories: implicit bias, laziness or ignorance (because the information is available but people don't bother to look or don't know it's there), and places where it's genuinely hard to know, like interviewing and managing (though there's a lot we do know about management and interviewing so laziness and ignorance applies there).
Mature markets tend to end up with two market leaders and a bunch of also-rans. In that kind of market, the #1 is often complacent and of poor quality, but the #2 tends to be better because it wants to knock the leader off the top spot.
e.g. VHS vs Betamax, Windows vs macOS, VW vs Toyota for cars, etc.
(Obviously there are counterexamples, and I think the trend is becoming less clear as markets fragment.)
@mathew @o @woozle @dredmorbius Two of the three examples you cite have strong network effects, where that's certainly true. But car manufacturers don't have this problem. Globally, in 2014 (the year I can easily find data for), the number 8 automaker by number of cars (Honda) sold almost 43% as many cars as the number one (Toyota). In the US, the number 7 manufacturer, Kia, sold 43% as many passenger cars as the top manufacturer, GM. And number 3, Toyota, has almost 83% of GM's sales.
@dredmorbius @woozle @o @mathew Actually, now that I think about it, VHS vs Betamax happened in a market that wasn't remotely mature, and it was a competition among standards, not companies. There were plenty of manufacturers of both tapes and players.
I'm having difficulty coming up with an example in any situation where there aren't strong network effects, at least in the US.
@mathew @o @woozle @dredmorbius During my orientation at Google, when they were talking about the datacenters, someone asked if they ever planned to open source the designs like Facebook had. The speaker replied, "Open source is what the company in second place does." That wasn't the only thing in orientation that made me think about just walking out.
@dredmorbius @woozle @o @mathew Maybe commercial airline manufacture is a good example? Boeing and Airbus are definitely the top two, and Bombardier is the only other manufacturer I can even think of, but they only do regional jets AFAIK. But I'm not sure either Boeing or Airbus ever really acts like they're either especially comfortable or hungry; the competition seems to keep both companies on their toes pretty well.
@freakazoid China and Russia both have indigenous aircraft industries, and there's Embraer of Brazil, though theirs are also largely regional / corporate jets.
There are more small- and mid-sized aircraft manufacturers.
The industry as a whole is *extremely* conservative, almost wholly governed by engineering and aeronautical constraints (there are only so many arrangements of sausages, engines, and lifting surfaces).
Plus insurance risks and regulation.
@freakazoid An interesting parallel is actually cargo ship design and use in the 13th / 14th centuries, about the time the lateen rig was adopted by Europeans, a millennium or more after its appearance on the Indian Ocean and Arabia.
The problem was insurers.
Shipping is high-risk, and voyages were insured individually, as separate ventures. Insuring syndicates wouldn't take risks on new-fangled tech like lateen rigs.
As a consequence, European ships could ...
Bombardier tried to break into the low end of the narrow-body market, scared Boeing, and got slapped with nasty tariffs, at which point Airbus swooped in, bought a majority stake in the project, and moved production from Canada to the US; the resulting plane is now called the Airbus A220.
Bombardier’s also selling their turboprop commercial airliners (the Q400) to Viking Air (who already owned the rights to their predecessors), and their regional jets to Mitsubishi (who has their own homegrown design that will effectively replace the CRJ line; they're just in it for the CRJ sales and service infrastructure, as I understand it).
And there’s also Embraer, who does regional jets and some small short-range narrow-bodies. (And Boeing’s buying into a joint venture with them.)
And there’s ATR doing turboprop airliners, although they’re a joint venture between Airbus and Leonardo.
@freakazoid There are some interesting exceptions, yes, but most of them show strong evidence of forces encouraging regionalisation.
The film industry is a key case in point. Reels of film, or now, digital streams or recordings, can be transmitted virtually effortlessly worldwide. The *fixed* infrastructure of film production is largely the support industry: carpenters, casting agencies, caterers, coaches, costume & set designers, electricians.
@freakazoid So centres of specialisation appear.
But you also have *globalised* centres, especially in India, China, Japan, and multiple European countries.
Most of that is language, though culture also plays a major role, as do government programmes specifically encouraging and supporting an indigenous film industry -- a powerful propaganda and cultural tool, kept under local control.
The auto industry is similar, in respects. Not because it's projectable...
@freakazoid ... though cars ship easily, factories don't, so it centralises, at least within countries.
(JIT and improved transport networks are changing that somewhat. Factories are more distributed in the US than they were in Detroit's heyday, but still cluster somewhat.)
But: there are both regional taste differences and economics, as well as national interests involved.
Building cars and military vehicles shares much in common, and military manufacture is ...
@dredmorbius @mathew @o @woozle Is physical colocation a problem? It's generally centralization of control or coordination that somehow discourages defection (cartels have tended to disintegrate rapidly historically) that is the problem, right? And often when a company does manage to dominate that's because new entrants have to face regulatory barriers to entry it didn't, like Amazon with sales taxes.
@freakazoid Colocation used to be highly important because there was a lot of interplay between the auto manufacturers themselves and supplier pipelines. Sometimes meeting F2F and getting your mitts on metal is the best way to resolve stuff.
That's either not so much the case, or other factors matter more, but you still have forms of clustering which matter, ranging from support industries to education and infrastructure.
Early colo was driven by bulk materials.
@freakazoid Detroit was where raw iron ore and steel could be directly offloaded via ship and cars shipped by rail to mostly Eastern markets.
Interestingly, Los Angeles once featured pretty much the largest of every factory plant *outside* the primary core group, within the US. Which is to say: at LA's distance from the Rust Belt, 2ndary localisation made sense.
@freakazoid ... of strategic interest. So countries otherwise not particularly vested in car manufacturing sustain it.
Local tastes and regulations vary, so cars get built for specific regulatory and cultural markets, as well as price points -- both inputs and consumers. Hence: much more variance *between* national markets, but typically little *within* them.
Aircraft are somewhat similar, though more constrained.
@freakazoid @dredmorbius @woozle @o Cars may not be a two-player market, but I still maintain that VW has gotten lazy (and indeed downright criminal), let its quality slip, and failed to invest in new tech, while Toyota has focused on making better cars, even if they did make a disastrously bad move betting on hydrogen rather than battery storage. (There's probably an interesting case study there on why they went the way they did.)
VW's failure to invest in new tech is the case with car makers across the board. Their cheating was to try to avoid losing a bunch of car sales as diesel was essentially getting regulated out of business. Which IMO was a stupid move on the government's part since diesel has lower CO2 emissions than gasoline.
@dredmorbius @woozle @o @mathew Actually I should qualify that - it has lower emission not because its specific CO2 is lower but because diesel engines have higher compression ratios so tend to be more efficient. You can also get more of it from oil without having to resort to cracking. But hybrids are better, so probably not stupid to regulate its emissions, really.
@freakazoid Diesel fuel itself has a slightly higher energy content than petrol/gasoline, the engines run at higher compression ratios, and at higher temperatures (Carnot efficiency), all of which net more mileage and lower CO2 emissions.
Emissions of *particulates* (especially PM2.5, v. bad for lungs and health), and of NOx (nitrogen oxidising at high temps and pressures) are *worse* for diesel than petrol engines.
Also possibly sulfur and other sour crude contaminants.
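A back-of-envelope sketch of that mileage/CO2 argument. The energy-density and CO2-per-litre figures below are rough published approximations, and the engine efficiencies are assumptions of mine, not numbers from the thread:

```python
# Rough approximations: diesel packs more energy per litre but also emits
# slightly more CO2 per litre; the per-km advantage comes from engine
# efficiency (higher compression ratio), not from cleaner fuel.

DIESEL_MJ_PER_L, DIESEL_CO2_KG_PER_L = 38.6, 2.68   # approximate
PETROL_MJ_PER_L, PETROL_CO2_KG_PER_L = 34.2, 2.31   # approximate
DIESEL_EFFICIENCY, PETROL_EFFICIENCY = 0.40, 0.30   # assumed engine efficiencies

def co2_per_useful_mj(co2_per_l, mj_per_l, efficiency):
    """kg of CO2 emitted per MJ of work actually delivered to the wheels."""
    return (co2_per_l / mj_per_l) / efficiency

diesel = co2_per_useful_mj(DIESEL_CO2_KG_PER_L, DIESEL_MJ_PER_L, DIESEL_EFFICIENCY)
petrol = co2_per_useful_mj(PETROL_CO2_KG_PER_L, PETROL_MJ_PER_L, PETROL_EFFICIENCY)
print(diesel, petrol)  # diesel comes out lower per unit of useful work
```

Note that per megajoule of fuel *burned*, diesel's CO2 is actually slightly higher; the win is entirely in the engine.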
@freakazoid Incidentally, two cases of the dynamics I've been describing:
Toyota's foray into hydrogen fuel cells is based on government policies and incentives, creating a localised specialisation.
Volkswagen's diesel emissions fraud is a #GreshamsLaw dynamic: trying to substitute a lower-value quality for a higher-value one, through fraud.
@dredmorbius @mathew @o @woozle It seems like this is also the case with software. People pick software on the basis of features or price, because they have no idea how to measure quality. So there's no market for high-quality software.
A "Consumer Reports for software" might help. It could track historical bugs, usability/accessibility problems, vulnerabilities, attacks, and the maker's response to them, etc.
@freakazoid "The Tyranny of the Minimum Viable User"
Since users' _capabilities_ also vary strongly, the problem goes beyond this.
You see similar types of dynamics in, e.g., "audiophile" gear, much of which seems principally engineered to separate rich idiots from their lucre.
A better comparison might be precision or highly-skilled equipment, also somewhat affected.
@freakazoid P.T. Barnum's dictum isn't an absolute universal, but it's close.
You can swim upstream, but you're going to find yourself in niche space. That *may* be a *profitable* niche, but it's still a niche.
The useful thing to do is look for cases of exceptions to the rule -- where is complex, respectful, high-information-density content (or products or services) found?
Quality literature, news, education, music, information gear, etc.
@woozle @o @mathew @dredmorbius To expand on the "You already know how to use it" example, it could be that pedagogical Apple failed because they didn't have much business sense, and dumbed-down Jobs apple succeeded not because of Jobs's dumbing-down of their products but because he understood business and marketing.
@freakazoid The winner-take-all dynamic of many tech-based products (hardware, OS, software, services, social media) makes attribution highly risk prone: success succeeds, failure fails. Survivor bias is manifest.
But having witnessed enough cases directly, and studied numerous others, the general rule of "don't outsmart your market" seems to hold.
Apple's big success is smartphones. Mac is a fairly small share of their market. Though they seem to be catering to it again.
@freakazoid I was looking at the specs for the upcoming Mac Pro release. It's mind-boggling.
Base model: $4000. Top of the line is 28 cores and 1.5 TB RAM, 4 TB SSD, 4xGPU. Speculation is that this will run north of $35,000, and I suspect that's low. This is a supercomputer in a mesh cage.
(I'm wondering what Linux or Window equivalents there might be.)
Definitely drool-worthy: https://apple.com/mac-pro
@freakazoid Mostly it's cases where one of several conditions is met:
1. The good is a signalling mechanism. I can advertise my own capabilities in a space by using (or producing) the good. Uni education especially.
2. Direct beneficial use. If the good provides a _direct_ and _quantifiable_ or _perceptible_ benefit, it may find a niche. That will by definition be limited, and faces challenges by imitative competitors and measurement difficulty/costs. Examples ...
@freakazoid ... include business/financial news, policy news, etc.
3. Quality professional gear: audio, photo, video gear. Linux/BSD vs. Windows/Mac, Mac vs. Windows. Small, focused, niche audiences.
4. Regulatory quality floor on goods *or* users. Commercial and civil aviation vs. automobiles. Any idiot can (and does) drive a car. Pilots are licensed. Commercial pilots are certified to specific aircraft. There is a very strong quality floor.
@freakazoid 5. With some limits: self-use. Especially where tools are mutually developed by specialists within a craft. Linux *used* to occupy this space; it's drifting from it. Whether there's a replacement isn't yet clear. The death of the desktop may, paradoxically, save Linux, if the idiots all use smartphones instead.
There are some parameters that may influence this. The scope of network effects especially. If intelligence counters network, then a ...
@woozle @mathew @o @dredmorbius Linux has dramatically dumbed down over time. I wouldn't really call it "self-teaching" at any point, but it used to be that, *if you used Linux*, you made heavy use of man pages and documentation that was included with Linux. So simply having sufficient interest in using Linux to get you over the hurdles would have left you significantly more competent in using Linux than it does today.
@dredmorbius @o @mathew @woozle Today people's response to some random thing breaking in GNOME 3 or KDE (let's ignore Android and Chrome OS) seems to be about the same as it is if something breaks in Windows: format and reinstall.
Some of that is just an increase in accessibility. But it's specifically an increase in accessibility gained by dumbing down the system instead of by improving the system's self-documentation/self-teaching.
@woozle Not so much that, as "developed principally by its own users", much as early Unix had been (1970 - 1990 or so).
That is, "users" weren't a separate class, they were "us", from the developers standpoint.
Today, you've got a much larger nontechnical userbase. The total installed base hasn't changed much by _percentage_ but it's vastly greater by _number_ than in the late 1990s.
Self-documentation through code, manpages, info docs, & HOWTOs has varied.
@freakazoid ... quality product stands a far better chance. If production can be readily distributed and decentralised, similarly. Open source software seems vastly more tractable than open source hardware. Fabrication, logistics, and distribution are far harder for physical commodities.
In particular, if there's no way to impose some kind of effective floor (as with pilots/aircraft, certified industrial equipment, etc.), the market will seek the minimum viable user.
@freakazoid To counter that, you've got to raise the bound on that minimum.
You can gatekeep the users (certification). Or you can make sufficient degrees of incompetence nonviable -- harms or at least does not help the incompetent user is one route. This will still limit the scope of the market, but at least won't dilute the product. Call it a talent bar.
This also means a noneconomic motivation. You're not profit-maximising, but maximising for individual benefit.
@dredmorbius @mathew @o @woozle I don't agree with that, because it assumes that the reason for incompetence is lack of ability or desire to become competent. If it's lack of desire, let's exclude them not just from products, but from the planet, since they're ruining humanity. And I suspect lack of ability represents only a tiny fraction of the population.
But I think the real answer is that our system selects for people who are shitty at teaching.
@freakazoid So ... well, current use of "idiots" notwithstanding, I try to avoid prejudiced language, and the whole long first part of the Reddit essay goes into detail about why simple tools are often a net win.
The problem is where the dynamic directly impedes development of useful tools, systems, goods, services, etc.
And I really _don't_ think it's something you can chalk up only to pedagogy. Put another way: we're at the end of a phenomenal 300 yr ramp up in literacy.
@freakazoid Which includes a hell of a lot of pedagogical reforms (the history's interesting, esp. for trivia, or trivium, fans).
Literacy ~1700 was ~10%. By 1900 in US/Europe, 90%+
HS graduation in the US: 1900: 6%; 1950: 90%. Bachelor's is now 30%+ and PhD > 8%. There are more PhDs in the US now than HS grads 120 years ago.
But: the quality of that HS education is also, in some measures, much lower: lower language/logic skills, better scientific knowledge.
@freakazoid And the general informational tools we have *are* better than 300 years ago. (In part: they deliver us those 300 year old works instantly.) But they're far short of their potential.
And I'm trying to suss out *why*.
Teaching/training is _part_ of it, and yes, the education system likely doesn't meet its potential either, but it does a tremendous amount, for a tremendous number. And hasn't _worsened_ appreciably since the 1950s.
Most variation is ...
@freakazoid ... actually, if you look at it, changes in either who's included in classes or testing. Increased access => falling test scores. Rising test scores => falling access. That points to some population-level intractability.
(With exceptions. "Stand and Deliver".)
But trying to make all the children above average is a Sisyphean task, and a doomed premise for progress. You've got to work with the talent you've got.
My point is to not get in its way.
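The "increased access => falling test scores" arithmetic is a pure composition effect, which a minimal sketch makes obvious (scores invented): every group is taught exactly as well as before, yet the pooled average falls when the pool widens.

```python
# Composition-effect sketch: opening a test from a selective track to everyone
# lowers the average score even if no individual student learns any less.

def average(scores):
    return sum(scores) / len(scores)

top_track = [85, 90, 88, 92]        # the students who used to take the test
newly_included = [60, 65, 58, 70]   # added test-takers, taught no worse than before

before = average(top_track)
after = average(top_track + newly_included)
print(before, after)  # the mean drops purely because the pool widened
```

Which is why comparing raw averages across eras with different participation rates says little about teaching quality either way.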
@dredmorbius @mathew @o @woozle Replying mid-thread because I think a lot of your reasoning farther down hinges on what I believe to be a mistake in this post. The fall in test scores from "increased access" is not necessarily because the larger group is not learning as well, but because the test wasn't actually testing how effectively the students were being taught. Most of our standardized tests are really indirect tests of socioeconomic class, not of how much students are learning.
@woozle @o @mathew @dredmorbius I think that the real problem is that "education systems" are super bad at educating. They can take a subset of students who have the right background and right set of parents and get them to do well on standardized tests, but they cannot take a random person out of a population of, say, English speakers, and on net provide them significant benefit.
@dredmorbius @mathew @o @woozle The reason students at "elite schools" tend to do better is that the school only allows in students who are going to be successful no matter what. They're *filtering*, not teaching. But they're not really filtering for innate skill. They're filtering for what the student has already absorbed from the world, largely due to the circumstances of their birth.
@dredmorbius @mathew @o @woozle The fundamental problem IMO is that almost all societies treat teaching and learning as just one function among many, and something that's confined to particular institutions and particular phases of a person's life.
IOW it's not just Americans who are anti-intellectual but most of human society. And the reason is that we have entrenched groups who have a vested interest in a stupid population.
@freakazoid I agree.
Teaching works best where:
1. The students are roughly equivalent in skill.
2. The teacher can work 1:1 with students (often described as "the gold standard" -- direct tutoring -- very expensive BTW), to see _how the lesson is being assimilated_.
#2 is a failure of technical teaching "solutions". Teachers aren't merely information delivery systems, they are *guides*, who see where students stumble, and can give learning cues, not just repeat rote.
@freakazoid That's a really good point.
It's also directly testable: _if_ we can agree on a set of non-socially-discriminatory test criteria, we can apply those to a wide sample of subjects and see how well they do.
At the same time, this problem highlights a fundamental dynamic of ToTMVU: that *assessing quality is itself difficult, expensive, and not generally agreed upon*. How do you find what is best *if you cannot even agree on what is "best"?*
@freakazoid The other issue is that as more things get more complicated, you've got more to teach, to more people, but only so much time, bandwidth, and effectiveness with which to do it.
Civilisations progress based on intergenerational knowledge transfer. *That* is based on both explicit (linguistic) and tacit (experiential) knowledge. We can bump up explicit knowledge transfer fairly effectively. Tacit not so much. Students need to DO, under direct guidance.