In this topic, you share a text relevant to EA, such as an article, essay, blog post, book, or academic paper, and I tell you three errors in it.
I’m offering to help EA by finding errors that people didn’t know about. Please only submit texts for which knowing errors would be valuable to you. I hope this will be useful and appreciated.
Details
If I post errors for your text, you must choose to debate one of the errors with me or choose not to debate. A one sentence reply explicitly opting out of debating is fine, but silence violates the game rules. Other feedback, such as which errors you agree or disagree with, is also welcome.
I only guarantee to do this for up to 5 submissions made within 5 days. First come, first served. Limit 1 per person.
You must have already read the text in full yourself and like it a lot. (If you skipped reading notes or appendices, that’s fine, but state it.)
I must be able to find a free, electronic copy of the text. I can frequently find this for paywalled texts. If you already have a link or the file itself, please send it to me (DMs are fine).
If I can’t find three errors, I’ll say that. I don’t expect this to come up much. If it does, my expectation was wrong. I have two beliefs here. First, I’ll be able to find errors in texts that I disagree with. Second, people here are likely to share stuff I have disagreements with. To give a number, I predict finding errors for at least 80% of texts.
I expect prose texts over 1000 words long that say something reasonably substantial and complex. Otherwise I may aim to find fewer errors.
I’m not agreeing to read the whole text. My plan is to read enough to find three errors then stop. If something is addressed in a part I didn’t read, you can tell me and I’ll respond. I have experience replying based on partial reading and it’s rarely a problem.
I will only post errors that I consider important. If you consider an error unimportant, let me know and I’ll explain my perspective. You’re welcome to do that before either choosing an error to debate or choosing not to debate. You may want to state why you think it’s unimportant so I can address your reasoning, but I can explain importance regardless.
Your EA forum account must have been created in Sept 2022 or earlier.
Bonuses
Maybe this will inspire someone else to host the same game or a similar game.
This can serve as some examples of replying to the first error (well, first three).
Sounds interesting. Can we submit our own writing? If so, I'm curious what might be important errors in this post.
Error One
It's ambiguous/confusing whether by "quality" you mean differences in quantity or size, as in your example (substitution between small pains and a big pain), or you actually mean qualitatively different things (e.g. substitution between pain and the thrill of skydiving).
Is the claim that three 1lb steaks can always substitute for one 3lb steak, or that three 1lb pork chops can always substitute for one ~3lb steak? (Maybe more or fewer chops if pork is valued less or more than steak.)
The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don't use the word "quality" to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.
I thought it was probably about whether a group of small things could substitute for a bigger thing but then later I read:
This seems to be about qualitative differences: some types/kinds/categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can't be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.
I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.
Actually, I looked again while editing, and I've now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word "quality". What is going on!? PhilPapers is broken? Also this paper, despite using the word "quality" in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn't clarify the ambiguity I was bringing up, at least not directly.
Error Two
I suspect you're using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don't know you're doing this or see the connection between the issues.
I take a "strong point in favor" to refer to the following basic model:
We have a bunch of ideas to evaluate, compare, choose between, etc.
Each idea has points in favor and points against.
We weight and sum the points for each idea.
We look at which idea has the highest overall score and favor that.
This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.
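As a minimal sketch (idea names and point values are made up for illustration), that model amounts to something like:

```python
# Minimal sketch of the weight-and-sum ("offsetting") model of idea
# evaluation. Idea names and point values are hypothetical.
ideas = {
    "idea_a": {"for": [3.0, 2.0], "against": [4.0]},  # net +1.0
    "idea_b": {"for": [2.5, 1.0], "against": [0.5]},  # net +3.0
}

def score(idea):
    # Points in favor offset points against the same idea.
    return sum(idea["for"]) - sum(idea["against"])

# The idea with the highest overall score is favored.
best = max(ideas, key=lambda name: score(ideas[name]))
print(best, score(ideas[best]))  # idea_b 3.0
```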
I think offsetting views are wrong, in both epistemology and axiology, and there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.
Error Three
The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1 and you wrote:
Since the article is supposed to be readable independently, it should have explained why this matters in order to work well independently.
A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn't particularly matter – the field's premises should be challenged instead.
And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article's topics are related to political philosophy issues (like classical liberalism's social harmony vs. Marxism's class warfare) that have already been debated a bunch, and it'd make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or any design or organization or guiding principles of society, is not very productive or meaningful, so it's no wonder the field has gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.
This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. So I noticed some errors in the summarized ideas, but that's different than errors in the article itself. To point out errors in the article itself, when it's summarizing other ideas, I'd have to point out that it has inaccurately summarized them. That requires reading the cites and comparing them to the summaries, which I don't think would be especially useful/valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn't provide many potential targets for criticism. So that's why I went ahead and made some more indirect criticism (and included more than one point) for the third error.
But I'd suggest that @Teo Ajantaival watch my screen recording (below), which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds, whereas sometimes it was hard for me to tell what he believes because he was just summarizing other people's ideas. (When looking for criticism, consider: if I'm right, does it mean you're wrong? If so, then it's a claim by me about an error, even if I'm actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don't know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details, so it's a good thing to bring up. I think my disagreement with the niche that the article is working within is actually more important than some of the within-niche issues.
Offsetting and Repugnance
This section is about something @Teo Ajantaival also disagrees with, so it's not an error by him. It could possibly be an error of omission if he sees this as a good point that he would have wanted to think of but didn't. To me it looks pretty important and relevant, and problematic to just ignore like there's no issue here.
If offsetting actually works – if you're a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.
I'll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn't have one even if you pay it off in full every month, but I am not one of those people. I don't think debt is very repugnant when it's offset by assets like cash.
And similarly, spreading out the assets doesn't particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.
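Here's a toy sketch of that fungibility point, with made-up numbers:

```python
# Toy sketch: for money, only the sum matters, not how it's distributed
# across accounts. All numbers are made up for illustration.
one_account = [1_000]        # one account holding $1,000
spread_out = [1] * 1_000     # a thousand accounts holding $1 each

debt = 600  # credit card debt, to be paid off before interest charges

# The debt is equally offset either way, because money aggregates by
# simple addition regardless of how the assets are spread out.
assert sum(one_account) - debt == sum(spread_out) - debt == 400
```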
If you really think offsetting works, then you shouldn't find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.
I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.
Bonus Error by Upvoters
At a glance (I haven't read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that's really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.
Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren't familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.
Bonus Literature on Maximizing or Minimizing Moral Values
https://www.curi.us/1169-morality
This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it were minimizing squirrels or maximizing bison, most of the conclusions would be the same.
I commented on this some in my screen recording, after the upvoters criticism, maybe 20min in.
Bonus Comments on Offsetting
(This section was written before the three errors, one of which ended up being related to this.)
Offsetting views are problematic in epistemology too, not just morality/axiology. I've been complaining about them for years. There's a huge, widespread issue where people basically ignore criticism – don't engage with it and don't give counter-arguments or solutions to the problems it raises – because it's easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they think their idea already has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.
Screen Recording
I recorded my screen and talked while creating this. The recording has a lot of commentary that isn't written down in this post.
https://www.youtube.com/watch?v=d2T2OPSCBi4
Thanks for the screencast. I listened to it — with a ‘skip silence’ feature to skip the typing parts — instead of watching, so I may have missed some points. But I’ll comment on some points that felt salient to me. (I opt out of debating due to lack of time, as it seems that we may not have that many relevantly diverging perspectives to try to bridge.)
Good catch; the rough definition that I used for Archimedean views — that “quantity can always substitute for quality” — was actually from this open access version.
Here, the main point (for my examination of Archimedean and lexical views) is just that Archimedean views always imply the “can add together” part (i.e. aggregation & outweighing), and that Archimedean views essentially deny the existence of any “strict” morally relevant qualitative differences over and above the quantitative differences (of e.g. two intensities of suffering). By comparison, lexical views can entail that two different intensities of suffering differ not only in terms of their quantitative intensity but also in terms of a strict moral priority (e.g. that any torture is worse than any amount of barely noticeable pains, all else equal).
I agree that money and debt are good examples of ‘positive’ and ‘negative’ values that can sometimes be aggregated in the way that offsetting requires; after all, it seems reasonable for some purposes to model debt as negative money. We also seem to agree that ‘happiness’ or ‘positive welfare’ is not ‘negative suffering’ in this sense (cf. Vinding, 2022).
Re: “I figure most people also disagree with suffering offsetting” — I wish but am not sure this is true. But perhaps most people also haven’t deeply considered what kind of impartial axiology they would reflectively endorse.
Re: “offsetting in epistemology” — interesting points, though I’m not immediately sold on the analogy. :) (Of course, you don’t claim that the analogy is perfect; “there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field”.)
My impression is that population axiology is widely seen as a ‘pick your poison’ type of choice in which each option has purportedly absurd implications and then people pick the view whose implications seem to them intuitively the least ‘absurd’ (i.e. ‘repugnant’). And, similarly, if/when people introduce e.g. deontological side-constraints on top of a purely consequentialist axiology, it seems that one can model the epistemological process (of deciding whether to subscribe to pure consequentialism or to e.g. consequentialism+deontology) as a process of intuitively weighing up the felt ‘absurdity’ (‘repugnance’) of the implications that follow from these views. (Moreover, one could think of the choice criterion as just “pick the view whose most repugnant implication seems the least repugnant”, with no offsetting of repugnance.)
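(A toy sketch of that last criterion, with made-up repugnance scores, contrasted with a summing rule:)

```python
# Toy sketch (made-up scores): choosing a view by its *worst* implication,
# with no offsetting of repugnance, versus by its summed implications
# (an offsetting rule). The two criteria can diverge.
views = {
    "view_a": [1, 9],  # repugnance scores of its implications
    "view_b": [5, 6],
}

minimax_pick = min(views, key=lambda v: max(views[v]))     # least-bad worst case
offsetting_pick = min(views, key=lambda v: sum(views[v]))  # lowest total

print(minimax_pick)     # view_b (worst implication 6 beats worst of 9)
print(offsetting_pick)  # view_a (total 10 beats total 11)
```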
I would think that my post does not necessarily imply an offsetting view in epistemology. After all, when I called my conclusion — i.e. that “the XVRCs generated by minimalist views are consistently less repugnant than are those generated by the corresponding offsetting views” — “a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation”, this doesn’t need to imply that these XVRC comparisons would “offset” any intuitive downsides of minimalist views. All it says, or is meant to say, is that the offsetting XVRCs are comparatively worse. Of course, one may question (and I imagine you would :) whether these XVRC comparisons are the most relevant — or even a relevant — consideration when deciding whether to endorse an offsetting or a minimalist axiology.
Re: framing about why this matters — the article begins with the hyperlinked claim that “Population axiology matters greatly for our priorities.” It’s also framed as a response to the XVRC article by Budolfson and Spears, so I trust that my article would be read mostly by people who know what population axiology is and why it matters (or quickly find out before reading fully). I guess an article can only be read independently of other sources after people are first sufficiently familiar with some inevitable implicit assumptions an article makes. (On the forum, I also contextualize my articles with the tags, which one can hover over for their descriptions.)
To say that population axiology doesn’t particularly matter seems like a strong claim given that the field seems to influence people’s views on the (arguably quite fundamentally relevant) question of what things do or don’t have intrinsic (dis)value. But I might agree that the field “is confused” given that so much of population axiology entails assumptions, such as Archimedean aggregative frameworks, that often seem to get a free pass without being separately argued for at all.
Regarding the implicit assumptions of population axiology — and re: my not mentioning political philosophy (etc.) — I would note that the field of population axiology seems to be about ‘isolating’ the morally relevant features of the world in an ‘all else equal’ kind of comparison, i.e. about figuring out what makes one outcome intrinsically better than another. So it seems to me that the field of population axiology is by design focused on hard tradeoffs (thus excluding “win/win approaches”) and on “out of context” situations, with the latter meant to isolate the intrinsically relevant aspects of an outcome and exclude all instrumental aspects — even though the instrumental aspects may in practice be more weighty, which I also explore in the series.
One could think of axiology as the theoretical core question of what matters in the first place, and political philosophy (etc.) as the practical questions of how to best organize society around a given axiology or a variety of axiologies interacting / competing / cooperating in the actual complex world. (When people neglect to isolate the core question, I would argue that people often unwittingly conflate intrinsic with instrumental value, which also seems to me a huge flaw in a lot of supposedly isolated thought experiments because these don’t take the isolation far enough for our practical intuitions to register what the imagined situations are actually supposed to be like. I also explored these things earlier in the series.)
My attempt to answer this was actually buried in footnote 8 :)
“Lexicographic preferences” seem to be named after the logic of alphabetical ordering. Thus, value entities with top priority are prioritized first regardless of how many other value entities there are in the “queue”.
I think ‘minimalist’ does also work in the other evoked sense that you mentioned, because it seems to me that offsetting axiologies add further assumptions on top of those that are entailed by the offsetting and the minimalist axiologies. For example, my series tends to explore welfarist minimalist axiologies that assume only some single disvalue (such as suffering, or craving, or disturbance), with no second value entity that would correspond to a positive counterpart to this first one (cf. Vinding, 2022). By comparison, offsetting axiologies such as classical utilitarianism are arguably dualistic in that they assume two different value entities with opposite signs. And monism is arguably a theoretically desirable feature given the problem of value incommensurability between multiple intrinsic (dis)values.
(Thanks also for the comments on upvote norms. I agree with those. Certainly one shouldn’t be unthinkingly misled into assuming that the community wants to see more of whatever gets upvoted-without-comment, because the lack of comments may indeed reflect some problems that one would ideally fix so as to make things easier to more deeply engage with.)
yes that's fine
You appear to be in violation of the game rules because you haven't opted into a debate or opted out of debating.
Hey, I had PM'd you that I've been busy and will reply once I've checked out the longish recording. It's on my list for next week. :) Edit: Unfortunately I fell ill with a lot of urgent stuff piling up, so I'll just reply to this once I get to it.
Underappreciated consequentialist reasons to avoid consuming animal products
Magnus Vinding
1
The issue isn’t your consumption at the margin. The issue is all of your consumption (actually purchasing) of these foods.
2
A ticking bomb (approximately) hasn’t caused any harm yet. Racism has already caused immense harm. So that analogy is wrong. And it’s presented as something the author claims is widely acknowledged, so that’s wrong too.
3
Common sense says that it’s difficult to think clearly when you have some large incentive or bias. But it doesn’t make an impossibility claim (“cannot”).
4
Title:
Later:
The title suggests he’ll give arguments from a consequentialist perspective, but then he started arguing against consequentialism (at least the “naive” types, though he didn’t explain what types exist and how the naive and non-naive types differ).
Thanks for writing this.
2-4 I agree with you. I particularly appreciate the point about 'naive vs. non-naive'.
cheers
You appear to be in violation of the game rules because you haven't opted into a debate or opted out of debating.
Against the singularity hypothesis
Introduction
FYI, I disagree with the singularity hypothesis, but primarily due to epistemology, which isn't even discussed in this article.
Error One
There are many other reasons for drug research progress to slow down. The healthcare industry and science in general (see e.g. the replication crisis) are really broken, and some of the problems are newer. Also, maybe they're putting a bunch of work into updates to existing drugs instead of new drugs.
Similarly, decreasing crop yield growth (in other words, yields are still increasing, but by lower percentages) could have many other causes. And decreasing crop yield growth is a different thing from a decrease in the number of new agricultural ideas that researchers come up with – it's not even the right quantity to measure to make his point. It's a proxy for the actual thing his argument relies on, and he makes no attempt to consider how good or bad a proxy it is, and I can easily think of some reasons it wouldn't be a very good one.
The comment about researchers not becoming lazy, poorly educated or overpaid is an unargued assertion.
So these are bad arguments which shouldn't convince us of the author's conclusion.
Error Two
Asserting something is unlikely isn't an argument. His followup is to bring up Moore's law potentially ending, not to give an actual argument.
As with the drug and agricultural research, his points are bad because singularity claims are not based on extrapolating patterns from current data, but rather on conceptual reasoning. He didn't even claim his opponents were doing that in the section formulating their position, and my pre-existing understanding of their views is they use conceptual arguments not extrapolating from existing data/patterns (there is no existing data about AGI to extrapolate from, so they use speculative arguments, which is OK).
Error Three
You can't just assume that AGIs will be anything like current software including "AI" software like AlphaGo. You have to consider what an AGI would be like before you can even know if it'd be especially good at this or not. If the goal with AGI is in some sense to make a machine with human-like thinking, then maybe it will end up with some of the weaknesses of humans too. You can't just assume it won't. You have to envision what an AGI would be like, or what many different things it might be like that would work (narrow it down to various categories and rule some things out) before you consider the traits it'd have.
Put another way, in MIRI's conception, wouldn't mind design space include both AGIs that are good or bad at this particular category of task?
Error Four
This is wrong due to "at once" at the end. It'd be fine without that. You could speed up 9 out of 10 parts, then speed up the 10th part a minute later. You don't have to speed everything up at once. I know it's just two extra words, but it doesn't make sense when you stop and think about it, so I think it's important. How did it seem to make sense to the author? What was he thinking? What process created this error? This is the kind of error that's good to post mortem. (It doesn't look like any sort of typo; I think it's actually based on some sort of thought process about the topic.)
Error Five
Section 3.2 doesn't even try to consider any specific type of research an AGI would be doing and claim that good ideas would get harder to find for that and thereby slow down singularity-relevant progress.
Similarly, section 3.3 doesn't try to propose a specific bottleneck and explain how it'd get in the way of the singularity. He does bring up one specific type of algorithm – search – but doesn't say why search speed would be a constraint on reaching the singularity. Whether exponential search speed progress is needed depends on specific models of how the hardware and/or software are improving and what they're doing.
There's also a general lack of acknowledgement of, or engagement with, counter-arguments that I can easily imagine pro-singularity people making (e.g. responding to the good ideas getting harder to find point by saying some stuff about mind design space containing plenty of minds that are powerful enough for a singularity with a discontinuity, even if progress slows down later as it approaches some fundamental limits). Similarly, maybe there is something super powerful in mind design space that doesn't rely on super fast search. Whether there is, or not, seems hard to analyze, but this paper doesn't even try. (The way I'd approach it myself is indirectly via epistemology first.)
Error Six
Section 2 mixes Formulating the singularity hypothesis (the section title) with other activities. This is confusing and biasing, because we don't get to read about what the singularity hypothesis is without the author's objections and dislikes mixed in. The section is also vague on some key points (mentioned in my screen recording) such as what an order of magnitude of intelligence is.
Examples:
Here he's mixing explaining the other side's view with setting it up to attack it (as requiring a super high evidential burden due to such strong claims). He's not talking from the other side's perspective, trying to present it how they would present it (positively); he's instead focusing on highlighting traits he dislikes.
This isn't formulating the singularity hypothesis. It's about ways of opposing it.
Again this doesn't fit the section it's in.
Padding
Section 3 opens with some restatements of material from section 2, some of which was also in the introduction. And look at this repetitiveness (my bolds):
Near the bottom of page 7 begins section 3.2:
Below that we read:
Page 8 near the top:
Later in that paragraph:
Also, page 11:
Page 17:
Amount Read
I read to the end of section 3.3 then briefly skimmed the rest.
Screen Recording
I recorded my screen and made verbal comments while writing this:
https://www.youtube.com/watch?v=T1Wu-086frA
Thanks!
I'm choosing not to debate.
If I'm reading your rules correctly, I'm still allowed to state if I consider some errors unimportant, with or without giving reasons.
I think error 4 is unimportant because the point is about bottlenecks and it stands without the last two words as you said.
If you've written anything against the singularity hypothesis, I would be curious to read it.
To be clear, you're welcome to say whatever extra stuff you want.
Here is something https://curi.us/2478-super-fast-super-ais
One way the error 4 matters, besides what I said preemptively, is that it means none of the cites in the paper can be trusted without checking them.
FWIW I generally take this to be the case; unless I have strong prior evidence that someone's citations are consistently to a high standard, I don't assume their citations can be easily trusted, at least not for important things.
I don't think the preemptive stuff you said is too important because I think people make mistakes all the time and I was more interested in the fundamental arguments outlined and evaluating them for myself.
Awesome. I think most people do not do that.
Thank you for following the game rules. You're the only person out of four who did that.
BTW, I think that 25% rule-following rate is important evidence about the world, and rates much lower than 100% would be repeatable for many types of simple, clear rules that people voluntarily opt into. It's a major concern for my debate policy proposals: you can put conditions on debates such as that people follow certain methodology, including regarding how to stop debating, and people can agree to those conditions ... and then just break their word later (which has happened to me before).
I love this idea! Would you consider reading "The Case Against Education" by Bryan Caplan? In particular, I'd be interested in errors you find in chapters 1-6 (which cover statistics about education and learning, and which I find broadly convincing). I really disagree with many of the arguments in chapters 7-10 (which primarily consist of Caplan's policy recommendations), so I would not be likely to defend many of the points made there.
Introduction
I'm no fan of university nor academia, so I do partly agree with The Case Against Education by Bryan Caplan. I do think social climbing is a major aspect of university. (It's not just status signalling. There's also e.g. social networking.)
I'm assuming you can electronically search the book to read additional context for quotes if you want to.
Error One
You only need to find one job. Spending even a year on a difficult job search, convincing one employer to give you a chance, can easily beat spending four years at university and paying tuition. If you do well at that job and get a few years of work experience, getting another job in the same industry is usually much easier.
So I disagree that education pays, under the signalling model, for a single individual. I think a difficult job search is typically more efficient than university.
This works in some industries, like software, better than others. Caplan made a universal claim so there's no need to debate how many industries this is viable in.
Another option is starting a company. That's a lot of work, but it can still easily be a better option than going to university just so you can get hired.
Suppose, as a simple model, that 99% of jobs hire based on signalling and 1% don't. If lots of people stop going to university, there's a big problem. But if you individually don't go, you can get one of the 1% of non-signalling jobs. Whereas if 3% of the population skipped university and competed for 1% of the jobs, a lot of those people would have a rough time. (McDonalds doesn't hire cashiers based on signalling – or at least not the same kind of signalling – so imagine we're only considering good jobs in certain industries so the 1% non-signalling jobs model becomes more realistic.)
I've been reading chapter 5 trying to figure out if Caplan ever considers alternatives to university besides just entering the job market in the standard way. This is a hint that he doesn't.
Foregone earnings are not a cost of going to university. They are a benefit that should be added on to some, but not all, alternatives to university. Then university should be compared to alternatives for how much benefit it gives. When doing that comparison, you should not subtract income available in some alternatives from the benefit of university. Doing that subtraction only makes sense and works out OK if you're only considering two options: university or getting a job earlier. When there are only two options, taking a benefit from one and instead subtracting it from the other as an opportunity cost doesn't change the mathematical result.
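Here's a toy illustration, with hypothetical numbers, of why the subtraction only works out in the two-option case:

```python
# Toy illustration (hypothetical numbers): with only two options, treating
# foregone earnings as an "opportunity cost" of university doesn't change
# the ranking, but it isn't a general method for comparing alternatives.
university = 100       # total benefit of going to university
early_job = 60         # total benefit of entering the job market early

# Subtracting the foregone option's benefit preserves the two-way ranking:
assert (university - early_job > 0) == (university > early_job)

# With a third alternative, just compare every option's benefit directly
# instead of folding one option into another as a "cost":
hard_job_search = 120  # e.g. a year-long job search that lands a good job
options = {"university": university, "early job": early_job,
           "hard job search": hard_job_search}
print(max(options, key=options.get))  # hard job search
```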
See also Capitalism: A Treatise on Economics by George Reisman (one of the students of Ludwig von Mises) which criticizes opportunity costs:
That's from the section "Critique of the Concept of Imputed Income" which is followed by the section "Critique of the Opportunity-Cost Doctrine". The book explains its point in more detail than this quote. I highly recommend Reisman's whole book to anyone who cares about economics.
Risk: I looked for discussion of alternatives besides university or entering the job market early, such as a higher effort job search or starting a business. I didn't find it, but I haven't read most of the book so I could have missed it. I primarily looked in chapter 5.
Error Two
(Bold added to quote.)
The full cost of the cruise is not just the fare. It's also the time cost of going on the cruise. It's very easy to value the cruise experience at more than the ticket price, but still not go, because you'd rather vacation somewhere else or stay home and write your book.
BTW, Caplan is certainly familiar with time costs in general (see e.g. the last sentence quoted).
Error Three
(Bold added.)
First, minor point, some economists have that kind of perspective about rate of return. Not all of them.
And I sympathize with the laymen. You should consider whether you want to go to university. Will you enjoy your time there? Future income isn't all that matters. Money is nice but it doesn't really buy happiness. People should think about what they want to do with their lives, in realistic ways that take money into account, but which don't focus exclusively on money. In the final quoted sentence he mentions that students (on average) probably "prize worldly success even more than they admit". I agree, but I think some of those students are making a mistake and will end up unhappy as a result. Lots of people focus their goals too much on money and never figure out how to be happy (also they end up unhappy if they don't get a bunch of money, which is a risk).
But here's the more concrete error: The survey does not actually show that students view education in terms of economic returns only. It doesn't show that students agree with Caplan.
The issue, highlighted in the first sentence, is "economists use a single metric—rate of return". Do students agree with that? In other words, do students use a single metric? A survey where e.g. 90% of them care about that metric does not mean they use it exclusively. They care about many metrics, not a single one. Caplan immediately admits that, so I don't even have to look the study up. He says 'Less than half [of students surveyed] say the same [very important or essential reason to go to university] about “developing a meaningful philosophy of life.”' Let's assume less than half means a third. Caplan tries to present this like the study is backing him up and showing how students agree with him. But a third disagreeing with him on a single metric is a ton of disagreement. If they surveyed 50 things, and 40 aren't about money, and just 10% of students thought each of those 40 mattered, then maybe around zero students would agree with Caplan about only the single metric being important (the answers aren't independent so you can't just use math to estimate this scenario btw).
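For a rough ballpark of that scenario (using the independence assumption that, as just noted, doesn't actually hold):

```python
# Rough ballpark only: under an independence assumption (which doesn't
# actually hold), if each of 40 non-money reasons mattered to 10% of
# students, the share who'd side with Caplan's single money metric on
# every one of them would be about (0.9)**40.
share_agreeing_with_caplan = 0.9 ** 40
print(f"{share_agreeing_with_caplan:.1%}")  # roughly 1.5%
```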
Bonus Error
This neglects to consider the classical liberal view (which I believe, and which an economist ought to be familiar with) of the harmony of (rational) interests of society and the individual. There is no necessary conflict or tradeoff here. (I searched the whole book for "conflict", "harmony", "interests" and "classical" but didn't find this covered elsewhere.)
I do think errors of omission are important but I still didn't want to count this as one of my three errors. I was trying to find somewhat more concrete errors than just not talking about something important and relevant.
Bonus Error Two
This doesn't work because lots of things people care about are incommensurable. They're in different dimensions that you can't convert between. I wrote about the general issue of taking into account multiple dimensions at once at https://forum.effectivealtruism.org/posts/K8Jvw7xjRxQz8jKgE/multi-factor-decision-making-math
A different way to look at it is that the value of X in money is wildly variable by context, not a stable number. Also how much people would pay to obtain something is wildly variable by how much money they have, not a stable number.
Potential Error
If university education correlates with higher income, that doesn't mean it causes higher income. Maybe people who are likely to get high incomes are more likely to go to university. There are also some other correlation-isn't-causation counter-arguments that could be made. Is this addressed in the book? I didn't find it, but I didn't look nearly enough to know whether it's covered. Actually, I barely read anything about his claims that university results in higher income, which I assume are at least partly based on correlation data, but I didn't really check. So I don't know if there's an error here, but I wanted to mention it. If I were to read the book more, this is something I'd look into.
Screen Recording
Want to see me look through the book and write this post? I recorded my process with sporadic verbal commentary:
https://www.youtube.com/watch?v=BQ70qzRG61Y
My response to Error 1:
As I understand it, your key points are these:
Here's my response:
1.
I get what you're saying here - in fact I was offered a software engineering job out of high school and turned it down. I have a friend who made the same decision. I don't think this argument works overall, though, for three reasons. First, getting your foot in the door is decently challenging. Second, it limits your employment options in a way that's not practical. Third, college is an extremely good value proposition for the sort of person who could get a high paying job out of high school.
So how could you get your foot in the door? In my case, a former teacher got me an internship that was meant for a college student - then I had to interview well. In the case of my friend, he had some really impressive projects on GitHub which got him noticed (for a summer job). So there's an element of luck (having connections) or perhaps innate talent (not many people, regardless of whether they have a degree, build a really interesting solo project). Luck is luck, but perhaps a motivated person of average talent could build a solo project good enough to land them a job with no degree. Doing so, however, is a risky proposition. You'd be investing a lot of time and effort into a chance at a job. At the same time, you wouldn't have a good understanding of the odds because it's such an uncommon path.
Even if you landed one of those jobs, there's a good chance it would be far away from your family because companies that are willing to hire someone straight out of high school are so few and far between. Even for someone who's willing to move away from family, they'd then need to have the money saved up to make that leap. And if you do get the job? Better hope you don't get fired. If you do, you'll have extremely limited employment options compared to someone with a degree because 99% of employers are simply going to throw away your resume. You may have to uproot your life and move again.
Lastly, for the few people who are in a really good position to get a high paying job out of high school, college is a really good value proposition. If you're impressive enough to land that job, you can probably also get a merit-based scholarship. You can also go to a top-tier school where you'll be able to marry rich (as Caplan discusses), network, and take advantage of opportunities for research and entrepreneurship. Alternatively, you might be able to skate by without putting a lot of hours in and use your free time for something else, further reducing the opportunity cost of college.
2.
Only 40% of small businesses turn a profit (https://www.chamberofcommerce.org/small-business-statistics/). A 60% chance of making no money or losing money is an unacceptable risk for 18-year-olds. Where are the savings accounts that are going to pay for their food and housing if they aren't making an income?
Plus, they'd need funding. A business loan for a new entrepreneur out of high school is not a thing. They look at your personal credit score. They may require collateral. SBA loans look at invested equity.
VC-backed ventures are even riskier. Founders typically work for nothing for years (which recent high school grads just can't do because they don't have money saved up to live on) for a slim shot at getting rich.
Overall, entrepreneurship is a high risk, high reward option which is not a similar value proposition to college.
3.
Caplan mentions this in chapter 8; he essentially argues that vocational education also pays. Comparing vocational and collegiate education is challenging due to limited data.
4.
If you don't count opportunity costs, doesn't that make college look even better?
5.
I agree with you - Caplan is way too dismissive of this.
Are you looking to have a debate with me or just sharing your thoughts? Either way is fine; I just want to clarify.
Should have specified. That was meant as my debate response under the rules.
OK. Would you write a thesis statement that you think is true, and expect me to disagree with, that you'd like to debate? (Or a thesis for me that you want to refute would also work.) So we can clarify what we're debating.
I didn't understand that by "debate" you meant an extended back and forth. I considered my response to be the debate. Sorry for the misunderstanding, but I am not interested in what I think you are looking for.
You might be interested in some pieces I wrote on this recently, which don't explicitly show factual errors but do offer a criticism of the book. See here and here.
Thanks, I enjoyed reading these. I appreciate that you're cautious not to be too strong in your criticism, but I do think that Caplan's dismissal of quasi-experiments is more or less a factual error.
Sure, just chapters 1-6 of that book is fine.