This is a special post for quick takes by JWS. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

The HLI discussion on the Forum recently felt off to me - bad vibes all around. It seems very heated, with not a lot of scout mindset, and reading the various back-and-forth chains I felt like I was 'getting Eulered', as Scott once described.

I'm not an expert on evaluating charities, but I followed a lot of links to previous discussions and found this discussion involving one of the people running an RCT on StrongMinds (the RCT whose final results a lot of people are waiting for), who was highly sceptical of StrongMinds' efficacy. But the person offering counterarguments in the thread seemed just as credible to me. My current position, for what it's worth,[1] is:

  • the initial StrongMinds results of ~10x cash transfers should raise a sceptical response. Most things aren't that effective
  • it's worth exploring what the SWB approach would recommend as the top charities (think of this as pulling other arms in a multi-armed bandit charity-evaluation problem; see the sketch after this list)
  • it's very difficult to do good social science, and the RCT won't give us dispositive evidence about the effectiveness of StrongMinds (especially at scale), but it may help us update. In general we should be mindful of how far we can make rigorous empirical claims in the social sciences
  • HLI has used language too loosely in the past and overclaimed/been overconfident, which Michael has apologised for, though perhaps some critics would like a stronger signal of neutrality (this links to the 'epistemic probation' comments)
  • GiveWell's own 'best guess' analysis seems to be that StrongMinds is 2.3x as cost-effective as GiveDirectly.[2] I'm generally a big fan of the GiveDirectly approach for reasons of autonomy - even if StrongMinds' efficacy were revised down to around ~1x GiveDirectly, it'd still be a good intervention? I'm much more concerned with what this number is than with the tone of HLI's or Michael's claims tbh (though not at the expense of epistemic rigour).
  • The world is rife with actively wasted or even negative action, spending, and charity. The integrity of EA research, and holding charity evaluators to account is important to both the EA mission and EA's identity. But HLI seems to have been singled out for very harsh criticism,[3] when so much of the world is doing worse.
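To gesture at what I mean by the bandit framing, here's a toy sketch - the 'arms', the reward distributions, and all the numbers below are invented by me purely for illustration, not anything HLI or GiveWell actually computes:

```python
import random

# Hypothetical "arms": different charity-evaluation lenses a donor could keep trying.
# The reward distributions are made up purely to illustrate explore vs. exploit.
ARMS = {
    "health/income lens (GiveWell-style)": lambda: random.gauss(1.0, 0.2),
    "SWB lens (HLI-style)": lambda: random.gauss(0.9, 0.5),
}

def epsilon_greedy(rounds: int = 1000, epsilon: float = 0.1) -> dict:
    """Pull the best-looking arm most of the time, but keep exploring with probability epsilon."""
    totals = {arm: 0.0 for arm in ARMS}
    counts = {arm: 0 for arm in ARMS}
    for _ in range(rounds):
        if random.random() < epsilon or not all(counts.values()):
            arm = random.choice(list(ARMS))  # explore: try an arm regardless of its record
        else:
            arm = max(ARMS, key=lambda a: totals[a] / counts[a])  # exploit the current best estimate
        totals[arm] += ARMS[arm]()
        counts[arm] += 1
    return {arm: round(totals[arm] / max(counts[arm], 1), 3) for arm in ARMS}

print(epsilon_greedy())
```

The point isn't the algorithm itself; it's just that never allocating any 'pulls' to an alternative lens means never learning whether your current best estimate is wrong.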

I'm also quite unsettled by a lot of what I call 'drive-by downvoting'. While writing a comment is a lot more effort than clicking to vote on a comment/post, I think the signal is a lot higher, and it would better help those involved in debates reach consensus. Some people with high-karma accounts seem to be making some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who are in either direction).

So I'm very unsure how to feel. It's an important issue, but I'm not sure the Forum has shown itself in a good light in this instance.

  1. ^

    And I stress this isn't much in this area, I generally defer to evaluators

  2. ^

    On the table at the top of the link, go to the column 'GiveWell best guess' and the row 'Cost-effectiveness, relative to cash'

  3. ^

    Again, I don't think I have the ability to adjudicate here, which is part of why I'm so confused.

Some people with high-karma accounts seem to be making some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who are in either direction).

I think this is a significant datum in favor of being able to see the strong up/up/down/strong down spread for each post/comment. If it appeared that much of the karma activity was the result of a handful of people strongvoting each comment in a directional pattern, that would influence how I read the karma count as evidence in trying to discern the community's viewpoint. More importantly, it would probably inform HLI's takeaways - in its shoes, I would treat evidence of a broad consensus of support for certain negative statements much, much more seriously than evidence of carpet-bomb voting by a small group on those statements.

Indeed, our new reacts system separates them. But our new reacts system also doesn't have strong votes. A problem with displaying the number of each type of vote when strong votes are involved is that it much more easily allows for deanonymization if there are only a few people in the thread.

That makes sense. On the karma side, I think some of my discomfort comes from the underlying operationalization of post/comment karma as a simple sum of individual vote weights.

True opinion of the value of the bulk of posts/comments probably lies on a bell curve, so I would expect most posts/comments to have significantly more upvotes than strong upvotes if voters are "honestly" conveying preferences and those preferences are fairly representative of the user base. Where the karma is coming predominantly from strongvotes, the odds are much higher that the displayed total reflects the opinion of a smallish minority that feels passionately. That can be problematic if it gives the impression of community consensus where no such consensus exists.

If it were up to me, I would probably favor a rule along the lines of: a post/comment can't get more than X% of its net positive karma from strongvotes, to ensure that a high karma count reflects some degree of breadth of community support rather than voting by a small handful of people with powerful strongvotes. Downvotes are a bit trickier, because the strong downvote hammer is an effective way of quickly pushing down norm-breaking and otherwise problematic content, and I think putting posts into deep negative territory is generally used for that purpose.
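To make that rule concrete, here's a minimal sketch of how the cap could be computed - my own illustration, not anything the Forum actually implements; the function name, the 50% default, and the example numbers are all hypothetical:

```python
def capped_karma(normal_net: int, strong_net: int, max_strong_share: float = 0.5) -> float:
    """Display karma under a hypothetical rule where strongvotes contribute at most
    `max_strong_share` of a post's net positive karma.

    Only applies when both components are net positive; negative totals are left
    alone, since strong downvotes serve a moderation purpose. Requires
    0 < max_strong_share < 1.
    """
    if normal_net <= 0 or strong_net <= 0:
        return normal_net + strong_net
    # Largest strongvote contribution satisfying
    #   strong_capped <= max_strong_share * (normal_net + strong_capped)
    cap = max_strong_share / (1 - max_strong_share) * normal_net
    return normal_net + min(strong_net, cap)

# e.g. 10 net karma from normal votes + 40 from strongvotes, 50% cap -> displayed 20.0
print(capped_karma(10, 40))
```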

Looks like this feature is being rolled out on new posts. Or at least one post: https://forum.effectivealtruism.org/posts/gEmkxFuMck8SHC55w/introducing-the-effective-altruism-addiction-recovery-group

EA is just a few months out from a massive scandal caused in part by socially enforced artificial consensus (FTX), but judging by this post nothing has been learned and the "shut up and just be nice to everyone else on the team" culture is back again, even when truth gets sacrificed in the process. No one thinks HLI is stealing billions of dollars, of course, but the charge that they keep quasi-deliberately stacking the deck in StrongMinds' favour is far from outrageous and should be discussed honestly and straightforwardly.

JWS' quick take has often been in negative agreevote territory and is +3 at this writing. Meanwhile, the comments of the lead HLI critic suggesting potential bad faith have seen consistent patterns of high upvote / agreevote. I don't see much evidence of "shut up and just be nice to everyone else on the team" culture here.

Hey Sol, some thoughts on this comment:

  • I don't think the Forum's reaction to the HLI post has been "shut up and just be nice to everyone else on the team", as Jason's response suggested.
  • I don't think mine suggests that either! In fact, my first bullet point has a similar sceptical prior to the one you express in this comment.[1] I also literally say "holding charity evaluators to account is important to both the EA mission and EA's identity", and point out that I don't want to sacrifice epistemic rigour. In fact, one of my main points is that people - even those disagreeing with HLI - are shutting up too much! I think disagreement without explanation is bad, and I salute the thorough critics on that post who have made their reasoning for putting HLI in 'epistemic probation' clear.
  • I don't suggest 'sacrificing the truth'. My position is that the truth about StrongMinds' efficacy is hard to get a strong signal on, and therefore HLI should have been more modest earlier in its history, instead of framing it as the most effective way to donate.
  • As for the question of whether HLI were "quasi-deliberately stacking the deck", well, I was quite open that I am confused about where the truth is, and find it difficult to adjudicate what the correct takeaway should be.

I don't think we really disagree that much, and I definitely agree that the HLI discussion should proceed transparently and EA has a lot to learn from the last year, including FTX. I think if you maybe re-read my Quick Take, I'm not taking the position you think I am.

  1. ^

    That's my interpretation of course, please correct me if I've misunderstood

Some personal reflections on EAG London:[1]

  • Congrats to the CEA Events Team for their hard work and for organising such a good event! 👏
  • The vibe was really positive! Anecdotally I had heard that the last EAG SF was gloom central, but this event felt much more cheery. I'm not entirely sure why, but it might have had something to do with the open venue, the good weather, or there being more places to touch grass in London compared to the Bay. 
  • I left the conference intellectually energised (though physically exhausted). I'm ready to start drafting some more Forum Post ideas that I will vastly overestimate my ability to finish and publish 😌
  • AI was (unsurprisingly) the talk of the town. But I found that quite a few people,[2] myself included, were actually more optimistic on AI because of the speed of the social response to AI progress and how pro-safety it seems to be, along with low polarisation along partisan lines.
  • Related to the above, I came away with the impression that AI Governance may be as important as, if not more important than, Technical Alignment in the next 6-12 months. The window of significant political opportunity is open now but may not stay open forever, so the AI Governance space is probably where the most impactful opportunities are at the moment.
  • My main negative takeaway was that there seemed to be so little reflection on the difficult last ~6 months for the EA movement. There was one session on FTX, but none at all on the other problems we've faced as a community, such as Sexual Abuse and Harassment, Trust in EA Leadership, Community Epistemic Health, and whether EA Institutions ought to be reformed. In the opening talk, the only reference was that the ship of EA feels like it's been 'going through a storm', and the ideas presented weren't really accompanied by a route to embed them in the movement. To me it felt like another missed opportunity after Toby Ord's speech, and I don't feel like we as a community have fully grasped or reckoned with the consequences of the last 6 months. I think this was also a common sentiment among people I talked to,[3] regardless of whether they agreed with me on proposed solutions.
  • Shrimp Welfare is now post-meme and part of the orthodoxy, and the Shrimp Welfare Project is unironically an S(hrimp)-Tier Charity. 🦐🥰
  • EA Conference Food continues to be good to me. I don't know why it seems to get consistently low ratings in the feedback surveys 🤷 I'd happily tile the universe with those red velvet desserts.
  1. ^

    Feel free to add your own thoughts and responses, I'd love to hear them :)

  2. ^

    {warning! sampling bias alert!} 

  3. ^

    {warning! sampling bias alert!} 

I had heard that the last EAG SF was gloom central, but this event felt much more cheery. I'm not entirely sure why

I assume any event in SF gets a higher proportion of AI doomers than one in London.

Suing people nearly always makes you look like the asshole, I think.

As for Torres, it is fine for people to push back against specific false things they say. But fundamentally, even once you get past the misrepresentations, there is a bunch of stuff that they highlight that various prominent EAs really do believe and say that genuinely does seem outrageous or scary to most people, and no amount of pushback is likely to persuade most of those people otherwise. 

In some cases, I think that outrage fairly clearly isn't really justified once you think things through carefully: for example, the quote from Nick Beckstead about saving lives being, all things equal, higher value in rich countries because of flow-through effects - which Torres always says makes Beckstead a white supremacist. But in other cases, well, it's hardly news that utilitarianism has a bunch of implications that strongly contradict moral commonsense, or that EAs are sympathetic to utilitarianism. And 'oh, but I don't endorse [outrageous-sounding view], I merely think there is like a 60% chance it is true, and you should be careful about moral uncertainty' does not sound very reassuring to a normal outside person.

For example, take Will on double-or-nothing gambles (https://conversationswithtyler.com/episodes/william-macaskill/), where you do something that has a 51% chance of destroying everyone and a 49% chance of doubling the number of humans in existence (now and in the future). It's a little hard to make out exactly what Will's overall position on this is, but he does say it is hard to justify not taking those gambles:

'Then, in this case, it’s not an example of very low probabilities, very large amounts of value. Then your view would have to argue that, “Well, the future, as it is, is like close to the upper bound of value,” in order to make sense of the idea that you shouldn’t flip 50/50. I think, actually, that position would be pretty hard to defend, is my guess. My thought is that, probably, within a situation where any view you say ends up having pretty bad, implausible consequences'

And he does seem to say there are some gambles of this kind he might take:

'Also, just briefly on the 51/49: Because of the pluralism that I talked about — although, again, it’s meta pluralism — of putting weight on many different model views, I would at least need the probabilities to be quite a bit wider in order to take the gamble...'

Or to give another example, the Bostrom and Shulman paper on digital minds talks about how, if digital minds really have better lives than us, then classical (total) utilitarianism says they should take all our resources and let us starve. Bostrom and Shulman are against that in the paper. But I think it is fair to say they take utilitarianism seriously as a moral theory. And lots of people are going to think taking seriously the idea that this could be right is already corrupt, and vaguely Hitler-ish/reminiscent of white settler expansionism against Native Americans.

In my view, EAs should be more clearly committed to rejecting (total*) utilitarianism in these sorts of cases than they actually are. Though I understand that moral philosophers correctly think the arguments for utilitarianism, or views which have similar implications to utilitarianism in these contexts, are disturbingly strong. 

*In both of the cases described, person-affecting versions of classical utilitarianism which deny creating happy people is good don't have the scary consequences. 


 

First, I want to thank you for engaging, David. I get the sense we've disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand it. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing it, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad faith - that's not my intention, but I admit I've somewhat lost my cool on this topic of late. In my defence, though, sometimes that's the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.

As for your comment/reply, though, I'm not entirely sure what to make of it. To clarify, I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAccT community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter towards the former. Instead, I feel like you've steered the conversation towards a discussion about the implications of naïve utilitarianism. I also feel we may disagree on how many legitimate criticisms Torres has and how much of their work is simply wilful 'misrepresentation' (I wonder if you've changed your mind on Torres since last year?). There are definitely connections there, but I don't think it's quite the same conversation, and I think it's somewhat telling that you responded to suggestions 3 & 4, and not 1 & 2, which I think are far less controversial (fwiw I agree that legal action should only be used once all other courses of action have failed).

To clarify what I'm trying to get at here with some more examples, which I hope will be reasonably unobjectionable even if incorrect:

  • Yesterday Timnit again insinuated that William MacAskill is a eugenicist. You can read that tweet, and I don't think she means this in a 'belongs to a historical tradition' way; I think she means it in a 'this is what he believes' way. I haven't seen anyone from the FAccT community call this out. In fact, Margaret Mitchell's response to Jess Whittlestone's attempt to offer an olive branch expressed confusion that there's any extreme behaviour in the AI Ethics field at all.
  • People working in AI Safety and/or associated with EA should therefore expect to be called eugenicists, and the more Timnit's perspective gains prominence, the more they will have to deal with the consequences of this.
  • Noah Giansiracusa's thread that I linked in the last tweet is highly conspiratorial, spreads reckless misinformation, and is often just wrong. But not only has he doubled down despite pushback,[2] he also today tried to bridge the Safety/Ethics divide, seemingly unaware that trashing the other side in a 26-tweet screed is massively damaging to this goal.
  • This suggests that while AI Safety efforts to build bridges may have some success, there may be a strong and well-connected group of scholars who will either not countenance it at all, or be happy to stick the knife in once the opportunity appears. If I were an AI Safety academic, I wouldn't trust Noah.
  • In general, my hope is that work is going on behind the scenes and off Twitter to build bridges between the two camps. But a lot of the names on the FAccT side that seem to be more toxic are quite prominent, and given the culture of silence/bullying involved there (again, see the Rumman Chowdhury tweet in the original comment, with further evidence here), I'm not sure I feel as hopeful it will happen as I did in recent weeks.
  • The more I look into it, the more I see the hostility as asymmetric. I'd be very open to counter-evidence on this point, but I don't see AI Safety people treating the other camp with such naked hostility, and definitely not from the more influential members of the movement, as far as I can tell. (And almost certainly not any more than usual over the past week or so? As I said, a lot of this seems to have kicked off post CAIS Letter).
  • My call not to 'be passive' was made in the expectation that hostility to the field of AI Safety will continue, perhaps grow, and be amplified by influential figures in the AI space. I maintain that the general EA media strategy of ignoring critics, and only engaging them with the utmost politeness, has been a net negative strategy, and will continue to be so if continued - with perhaps very bad consequences.

Anyway, I'd like to thank you for sharing your perspective, and I do hope my perceptions have been skewed to be too pessimistic. To others reading: I'd really appreciate hearing your thoughts on these topics, and any points of view or explanations that might change my mind.

  1. ^

    I think this is better than the Safety/Ethics labelling, but I'm referring to the same divide here

  2. ^

    Long may EA Twitter dunk on him until a retraction appears

I mean, in a sense a venue that hosts Torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty - except insofar as they haven't seen or don't believe this Fuentes person.

I guess I thought my points about total utilitarianism were relevant, because 'we can make people like us more by pushing back more against misrepresentation' is only true insofar as the real views we have will not offend people. I'm also just generically anxious about people in EA believing things that feel scary to me.  (As I say, I'm not actually against people correcting misrepresentations obviously.) 

I don't really have much sense of how reasonable critics are or aren't being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it's a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply that we should be prepared to take big chances of AI catastrophe rather than pass up small chances of lots of v. happy digital people.


On Torres specifically: I don't really follow them in detail (these topics make me anxious), but I didn't intend to claim that they are a fair or measured critic, just that they have a decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that's just false. I think motivated reasoning is a more likely explanation for why they say false things than conscious lying, but that's just because that's my prior about most people. (Edit: Actually, I'm a little less sure of that, after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)

Regarding Gebru calling Will a eugenicist: well, I really doubt you could "sue" over that, or demonstrate to the people most concerned about this that he doesn't count as one by any reasonable definition. Some people use "eugenicist" for any preference that a non-disabled person comes into existence rather than a different disabled person. And Will does have that preference. In What We Owe the Future, he takes it as obvious that if you have a medical condition which means that if you conceive right now, your child will have awful painful migraines, then you should wait a few weeks to conceive so that you have a different child who doesn't have migraines. I think plenty of ordinary people would be fine with that and puzzled by Gebru-like reactions, but it probably does meet some literal definitions that have been given for "eugenics". Just suggesting he is a "eugenicist" without further clarification is nonetheless misleading and unfair in my view, but that's not quite what libel is. Certainly I have met philosophers with strong disability rights views who regard Will's kind of reaction to the migraine case as bigoted. (Not endorsing that view myself.)

None of this is some kind of endorsement of how the 'AI ethics' crowd on Twitter talk overall, or about EAs specifically. I haven't been much exposed to it, and when I have been, I generally haven't liked it.
 

I've generally been quite optimistic that the increased awareness AI xRisk has received recently can lead to some actual progress in reducing the risks and harms from AI. However, I've become increasingly sad at the ongoing rivalry between the AI 'Safety' and 'Ethics' camps[1] 😔 Since the CAIS Letter was released, there seems to have been an increasing level of hostility on Twitter between the two camps, though my impression is that the hostility is mainly one-directional.[2]

I dearly hope that a coalition of some form can be built here, even if it is an uneasy one, but I fear that it might not be possible. It unfortunately seems like a textbook case of mistake vs conflict theory approaches at work. I'd love someone to change my mind and say that Twitter amplifies the loudest voices,[3] and that in the background people are making attempts to build bridges. But I fear instead that the centre cannot hold, and that there will be not just simmering resentment but open hostility between the two camps.

If that happens, then I don't think those involved in AI Safety work can afford to remain passive in response to sustained attack. I think that this has already damaged the prospects of the movement,[4] and future consequences could be even worse. If the other player in your game is constantly defecting, it's probably time to start defecting back.

Can someone please persuade me that my pessimism is unfounded?

  1. ^

    FWIW I don't like these terms, but people seem to intuitively grok what is meant by them

  2. ^

    I'm open to be corrected here, but I feel like those sceptical of the AI xRisk/AI Safety communities have upped the ante in terms of the amount of criticism and its vitriol - though I am open to the explanation that I've been looking out for it more

  3. ^

    It also seems very bad that the two camps do most of their talking to each other (if they do at all) via Twitter, that seems clearly suboptimal!!

  4. ^

    The EA community's silence regarding Torres has led to the acronym 'TESCREAL' gaining increasing prominence in academic circles - and it is not a neutral term, and it gives Torres more prominence and a larger platform.

What does not "remaining passive" involve? 

I can't say I have a strategy, David. I've just been quite upset and riled up by the discourse over the last week, just as I had gained some optimism :( I'm afraid that by trying to turn the other cheek to hostility, those working to mitigate AI xRisk will end up ceding the court of public opinion to those hostile to them.

I think some suggestions would be:

  • Standing up to, and calling out, bullying in these discussions can cause a preference cascade of pushback to it - see here - but someone needs to stand up for people to realise that dominant voices are not representative of a field, and that silence may obscure areas for collaboration and mutual coalitions to form.
  • Being aware of what critiques of EA/AI xRisk get traction in adjacent communities. Some of it might be malicious, but a lot of it seems to be a default attitude of scepticism merged with misunderstandings. While not everyone would change their mind, I think people reaching 'across the aisle' might correct the record in many people's minds. Even if not for the person making the claims, perhaps for those watching and reading online. 
  • Publicly pushing back on Torres. I don't know what went down when they were more involved in the EA movement that caused their opinion to flip 180 degrees, but the main 'strategy' so far has been to ignore their work and not respond to their criticism. The result: their ideas have gained prominence in the AI Ethics field and been published in notable outlets, despite their acting consistently in bad faith. To their credit, they are voraciously productive in their output, and I don't expect it to slow down. Continuing with a failed strategy doesn't sound like the right call here.
  • In cases of the most severe hostility, potentially considering legal or institutional action? In this example, can you really just get away with calling someone a eugenicist when it's so obviously false? There have been cases where people have successfully sued for defamation over statements made on Twitter. That's an extreme option, though, but not one to dismiss entirely.

I would recommend trying to figure out how much loud people matter. Like it's unclear if anyone is both susceptible to sneer/dunk culture and potentially useful someday. Kindness and rigor come with pretty massive selection effects, i.e., people who want the world to be better and are willing to apply scrutiny to their goals will pretty naturally discredit hostile pundits and just as naturally get funneled toward more sophisticated framings or literatures. 

I don't claim this attitude would work for all the scicomm and public opinion strategy sectors of the movement or classes of levers, but it works well to help me stay busy and focused and epistemically virtuous. 

I wrote some notes about a way forward last february, I just CC'd them to shortform so I could share with you https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=nskr6XbPghTfTQoag 

related comment I made: https://forum.effectivealtruism.org/posts/nsLTKCd3Bvdwzj9x8/ingroup-deference?commentId=zZNNTk5YNYZRykbTu 

Oh hi. Just rubber-ducking a failure mode some of my Forum takes[1] seem to fall into, but please add your takes if you think that would help :)

----------------------------------------------------------------------------

Some of my posts/comments can be quite long - I like responding with as much context as possible on the Forum, but as some of the original content itself is quite long, that means my responses can be quite long! I don't think that's necessarily a problem in itself, but the problem comes with receiving disagree votes without comments explaining them.

<I want to say, this isn't complaining about disagreement. I like disagreement[2], it means I get to test my ideas and arguments>

However, it does pose an issue with updating my thoughts. A long post that has positive upvotes, negative disagree votes, and no (or few) comments means it's hard for me to know where my opinion differs from that of other EA Forum users, and how far and in what direction I ought to update. The best examples from my own history:

----------------------------------------------------------------------------

Potential Solutions?:

  • Post shorter takes: This means the agree/disagree signal will be more clearly linked to my context, but it means I won't be able to add all the content that I do now, which I think adds value.
  • Post fewer takes: Ceteris paribus this might not be expected to work, but the argument would be that with more time between fewer contributions, their quality would go up, so they'd usually make one clear point for readers to agree/disagree with
  • Post more takes: Splitting a large comment into lots of sub-comments would do the same thing as 'post shorter takes', and keep context. The cost is a karma-farming effect, and splitting a thread into multiple parts, making it harder to follow.
  • Explicitly ask for comments: I feel like this wouldn't work? It feels a bit whiny and has no enforcement mechanism?
  • Pre-commit to up-voting responses: Relies on others to trust me, might not be credible for those who strongly disagree with me.

Suggestions/thoughts on any of the above welcome

  1. ^

    I'm using 'takes' to refer to both posts and comments, it's not meant to imply low-quality contributions to the forum

  2. ^

    Modulo good faith norms etc

  3. ^

    Also, the agree-vote karma on this piece went all over the place, and comments might have been a place to hash this disagreement out

Has anyone else listened to the latest episode of Clearer Thinking? Spencer interviews Richard Lang about Douglas Harding's "Headless Way", and if you squint enough it's related to the classic philosophical problems of consciousness, but it did remind me a bit of Scott A's classic story "Universal Love, Said The Cactus Person", which made me laugh. (N.B. Spencer is a lot more gracious and inquisitive than the protagonist!)

But yeah if you find the conversation interesting and/or like practising mindfulness meditation, Richard has a series of guided meditations on the Waking Up App, so go and check those out.

In this comment I was going to quote the following from R. M. Hare:

"Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things - there is no difference in the 'subjective' concern which people have for things, only in their 'objective' value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except 'None whatever'?"

I remember this being quoted in Mackie's Ethics during my undergraduate degree, and it's always stuck with me as a powerful argument against moral non-naturalism and a close approximation of my thoughts on moral philosophy and meta-ethics.

But after some Google-Fu I couldn't actually track down the original quote. Most people think it comes from the essay 'Nothing Matters' in Hare's Applications of Moral Philosophy. While this definitely seems to be in the same spirit as the quote, the online scanned PDF version of 'Nothing Matters' that I found doesn't contain the quote at all. I don't have access to any academic institutions to check other versions of the paper or book.

Maybe I just missed the quote by skim-reading too quickly? Are there multiple versions of the article? Is it possible that this is a case of citogenesis? Perhaps Mackie misquoted what R. M. Hare said, or perhaps misattributed it and it actually came from somewhere else? Maybe it was Mackie's quote all along?

Help me EA Forum, you're my only hope! I'm placing a £50 bounty to a charity of your choice for anyone who can find the original source of this quote, in R. M. Hare's work or otherwise, as long as I can verify it (e.g. a screenshot of the quote if it's from a journal/book I don't have access to).
