Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

Steelmanning is typically described as responding to the "strongest" version of an argument you can think of. Recently, I heard someone describe it a slightly different way: responding to the argument that you "agree with the most."

I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument for a claim is. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.

This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I've been in a few conversations in which someone says they are "steelmanning" some claim X, but says it in a tone of voice that communicates two things:

* The speaker thinks that X is crazy.
* The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.

It's probably fine to have this tone of voice if you're talking about flat earthers or young earth creationists and are only "steelmanning" X as a silly intellectual exercise. But if you're in a serious discussion, framing "steelmanning" as being about the argument you "agree with the most" rather than the "strongest" argument might help signal that you take the other side seriously.

Anyone have thoughts on this? Has this been discussed before?

I think we separate causes and interventions into "neartermist" and "longtermist" causes too much. Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective.

This is the opposite failure mode of surprising and suspicious convergence: instead of assuming (or rationalizing) that the spaces of interventions that are promising from neartermist and longtermist perspectives overlap a lot, we tend to assume they don't overlap at all, because it's more surprising if the top longtermist causes are all different from the top neartermist ones. But if the cost-effectiveness of causes according to neartermism and according to longtermism are independent of one another (or at least somewhat positively correlated), I'd expect at least some causes to be valuable according to both ethical frameworks. I've noticed this in my own thinking, and I suspect it is a common pattern among EA decision makers; for example, Open Phil's "Longtermism" and "Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.

Consider global health and poverty. These are usually considered "neartermist" causes, but we can also tell a just-so story about how global development interventions such as cash transfers might be valuable from the perspective of longtermism:

* People in extreme poverty who receive cash transfers often spend the money on investments as well as consumption. For example, a study by GiveDirectly found that people who received cash transfers owned 40% more durable goods (assets) than the control group. Also, anecdotes show that cash transfer recipients often spend their funds on education for their kids (a type of human capital investment), starting new businesses, building infrastructure for their communities, and h…
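
A toy simulation might make the overlap point concrete. This is my own illustrative sketch, not from the post: the lognormal distributions, the 1,000 hypothetical causes, and the correlation values are all assumptions. Even when cost-effectiveness under the two lenses is fully independent, some causes land in the top decile of both, and the overlap grows with any positive correlation.

```python
import numpy as np

# Toy model (assumptions throughout): each of 1,000 hypothetical causes gets a
# lognormal cost-effectiveness score under a "neartermist" and a "longtermist"
# lens, with correlation rho between the underlying log-scores.
rng = np.random.default_rng(0)
n_causes = 1_000

for rho in (0.0, 0.3, 0.6):
    cov = [[1.0, rho], [rho, 1.0]]
    near, long_ = np.exp(rng.multivariate_normal([0.0, 0.0], cov, size=n_causes).T)
    top_near = set(np.argsort(near)[-100:])   # top 10% under the neartermist lens
    top_long = set(np.argsort(long_)[-100:])  # top 10% under the longtermist lens
    print(f"rho={rho}: {len(top_near & top_long)} causes in both top deciles")

# Under independence (rho=0) the expected overlap is about 10 of 100 causes,
# and it rises with rho; zero overlap would itself be a surprising outcome.
```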

It seems like decibels (dB) are a natural unit for perceived pleasure and pain, since they account for the fact that humans and other beings mostly perceive sensations in proportion to the logarithm of their actual strength. (This is discussed at length in "Logarithmic Scales of Pleasure and Pain".)

Decibels are a relative quantity: they express the intensity of a signal relative to another. A 10x difference is 10 dB, a 100x difference is 20 dB, and so on. The "just noticeable difference" in loudness is roughly 1 dB, i.e. about a 26% increase in sound intensity. But decibels can also be used in an "absolute" sense by quantifying the ratio of the signal to a reference value. In the case of sound, the reference value is the smallest value that most humans can hear (a sound pressure of 20 micropascals).[1]

Since pleasure and pain are perceived according to a log scale, the utility of a sensation could be approximated by

$$U(S) = U_0 \max(0, \log(S/S_0))$$

where $S$ is the intensity of the sensation, $S_0$ is the smallest perceptible sensation, and $U_0$ is a constant that is positive for pleasurable sensations and negative for painful ones. (This is only an approximation because Fechner's law, the principle that governs logarithmic perception of signals, breaks down for very strong and very weak signals.)

It seems very natural, therefore, to use decibels as the main unit for pleasure and pain, alongside utils for the utility of perceived sensations, since the relationship between decibels and utils is linear. For example, if a utility function is given by $U(D) = 0.5\max(0, D)$, where $D$ is the decibel amount, then we have 10 dB of pleasure = 5 utils, 20 dB = 10 utils, 30 dB = 15 utils, and so on.

1. ^ https://en.wikipedia.org/wiki/Decibel#Acoustics_2
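
A minimal sketch of the proposed mapping (Python). The reference value, the function names, and the example sensation values are my own assumptions; the 0.5 utils-per-dB constant is taken from the worked example above.

```python
import math

S0 = 1.0  # smallest perceptible sensation, in arbitrary reference units (assumption)

def to_decibels(S, S_ref=S0):
    """Intensity of sensation S in dB relative to the smallest perceptible sensation."""
    return 10 * math.log10(S / S_ref)

def utils(S, U0=0.5, S_ref=S0):
    """U = U0 * max(0, D), where D is the sensation's intensity in decibels.
    U0 is positive for pleasure and negative for pain; 0.5 matches the example above."""
    return U0 * max(0.0, to_decibels(S, S_ref))

for S in (10, 100, 1000):  # 10 dB, 20 dB, 30 dB above the perception threshold
    print(f"{to_decibels(S):.0f} dB -> {utils(S):.1f} utils")
# 10 dB -> 5.0 utils, 20 dB -> 10.0 utils, 30 dB -> 15.0 utils
```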

Discussions of the long-term future often leave me worrying that there is a tension between democratic decision-making and protecting the interests of all moral patients (e.g. animals). I imagine two possible outcomes:

1. Mainstream political coalitions make the decisions in their usual haphazard manner.
   1. RISK: vast numbers of moral patients are ignored.
2. A small political cadre gains power and ensures that all moral patients are represented in decision-making.
   1. RISK: the cadre lacks restraint and leaves its fingerprints on the future.

Neither of these is what we should want.

CLAIM: The most straightforward way to dissolve this tradeoff is to get the mainstream coalitions to care about all sentient beings before they make irreversible decisions. How?

* A major push to change public opinion on animal welfare. Conventional wisdom in EA is to prioritize corporate campaigns over veg outreach for cost-effectiveness reasons. The tradeoff I've described here is a point in favor of large-scale outreach. I don't just mean 10x of your grandpa's vegan leafletting. A megaproject-scale campaign would be an entirely different phenomenon.
* A Long Reflection. Give society time to come to its senses on nonhuman sentience.

Of course, the importance of changing public opinion depends a lot on how hingey you think the future is, and tractability depends on how close you think we are to the hinge. But in general, I think this is an underrated point for moral circle expansion.

An unpolished attempt at moral philosophy

Summary: I propose a view combining classic utilitarianism with a rule that says not to end streams of consciousness.

Under classic utilitarianism, the only thing that matters is hedonic experiences. People with a person-affecting view object to this, but that view comes with issues of its own. To resolve the tension between these two philosophies, I propose a view that adds a rule to classic utilitarianism disallowing directly ending streams of consciousness (SOC). This is a way to bridge the gap between the person-affecting view and the "personal identity doesn't exist" view, and it tries to solve some population ethics issues.

I like the simplicity of classic utilitarianism. But I have a strong intuition that a stream of consciousness is intrinsically valuable, meaning that it shouldn't be stopped/destroyed. Creating a new stream of consciousness isn't intrinsically valuable (except for the utility it creates).

A SOC isn't infinitely valuable. Here are some exceptions:

1. When not ending a SOC would result in more SOCs ending (see the trolley problem): basically, you want to break the rule as little as possible.
2. The SOC experiences negative utility and there are no signs it will become positive utility (see euthanasia).
3. Ending the SOC will create at least 10x its utility (or a different critical level).

I believe this is compatible with the non-identity problem (it's still unclear who is "you" if you're duplicated or if you're 20 years older). But I've never felt comfortable with the teleportation argument, and this intuition explains why (as a SOC is being ended).

So generally this means: making the current population happier (or making sure few people die) > increasing the number of people. Future people don't have SOCs, as they don't exist yet, but it's still important to make their lives go well.

Say we live in a simulation. If our simulation gets turned off and gets replaced by a different one of equal value (pain/pleasure…

Second-best theories & Nash equilibria

A general frame I often find comes in handy while analysing systems is to look for equilibria, figure out the key variables sustaining them (e.g., strategic complements, balancing selection, latency or asymmetrical information in commons-tragedies), and, well, that's it. Those are the leverage points of the system. If you understand them, you're in a much better position to evaluate whether a suggested change might work, is guaranteed to fail, or suffers from a lack of imagination. Suggestions that fail to consider the relevant system variables are often what I call "second-best theories". Though they might be locally correct, they're also blind to the broader implications or underappreciative of the full space of possibilities.

Examples

* The allele that causes sickle-cell anaemia is good because it confers resistance against malaria. (A)
  * Just cure malaria, and sickle-cell disease ceases to be a problem as well.
* Sexual liberalism is bad because people need predictable rules to avoid getting hurt. (B)
  * Imo, allow people to figure out how to deal with the complexities of human relationships and you eventually remove the need for excessive rules as well.
* We should encourage profit-maximising behaviour because the market efficiently balances prices according to demand. (A/B)
  * Everyone being motivated by altruism is better, because market prices only correlate with actual human need insofar as wealth is equally distributed. The more inequality there is, the less you can rely on willingness-to-pay to signal urgency of need. Modern capitalism is far from the globally optimal equilibrium in market design.
* If I have a limp in one leg, I should start limping with my other leg to balance it out. (A)
  * Maybe the immediate effect is that you'll walk more efficiently on the margin, but don't forget to focus on healing whatever's causing you to limp in the first place.
* Effective altruists seem to have…

About the Sleeping Beauty problem

Epistemic status: this is a quick reaction to the latest 80,000 Hours podcast episode with Joe Carlsmith. This has been my first encounter with the anthropic principle. I haven't read up on this afterwards, so my argument might be easily debunked, or the statement in question might be a misrepresentation of the thought experiment.

In episode 152 of the 80,000 Hours podcast featuring Joe Carlsmith, Rob Wiblin states that if one thinks that Sleeping Beauty should put 2/3 credence on heads (or whatever option leads to the outcome of being woken up twice, with the memory of the first awakening erased), this creates a problematic conclusion: an event which creates more observers, such as Sleeping Beauty observing the awakening twice in the Heads scenario, would thus be more likely.

However, it seems to me that this is a misguided interpretation of the view. In fact, putting 2/3 credence on Heads doesn't make it more likely; it is simply the better strategy for an observer who has to guess which group of observers they belong to.
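
A small simulation sketch (Python) of the "better guessing strategy" reading. It assumes the standard setup, in which one coin outcome produces a single awakening and the other produces two with the memory wiped in between: counted per awakening, always guessing the twice-woken outcome is right about 2/3 of the time, even though the coin itself is fair.

```python
import random

def simulate(n_experiments=100_000, seed=0):
    """Fraction of awakenings at which 'always guess the twice-woken outcome' is correct."""
    rng = random.Random(seed)
    awakenings_total = 0
    correct_guesses = 0
    for _ in range(n_experiments):
        twice_branch = rng.random() < 0.5      # fair coin; one side -> two awakenings
        n_awakenings = 2 if twice_branch else 1
        awakenings_total += n_awakenings
        if twice_branch:
            # Beauty guesses "twice-woken outcome" at every awakening,
            # so she is right at both awakenings of this branch.
            correct_guesses += n_awakenings
    return correct_guesses / awakenings_total

print(simulate())  # ~0.667: right at 2/3 of awakenings, matching the 2/3 credence
```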

Hi everyone,

I dare to insert here a proposal which is at the same time vague and ambitious, just to discuss it. It is nothing firm, just an idea. I apologize for my imperfect English.

After a long life and many books read, I have come to realize that if we want to improve human life in the direction of prosociality, the real target must be human behaviour. If we improve our moral outlook (our ethos) in the direction of benevolence, altruism and non-aggression (in a rational way, of course), then the charities, the economic acts, and the deeds must follow as a necessary consequence of this prior human change.

We know that moral evolution exists; humanitarian movements like EA show that. Why not try to go further? What is the limit to moral change?

I don't see anything in this forum dealing with the possibilities of improving moral behaviour in individuals (and, consequently, in groups and societies) in order to achieve the highest effective altruism. I mean doing the job that the moralistic religions of the Axial Age did in the past, but now independently of irrational religious traditions, and so finally doing it the right way: non-political social change.

We have today enough experience, knowledge in the social sciences, and clarity of thought to ponder the means for improving human behaviour in the direction of extreme prosociality. Yet I notice that no one is even discussing the question. You write about getting as much as possible, for charitable goals, from people as they are. Don't you realize that you could get much more by changing people morally first?

A person is made of motivations, feelings, rewards and desires, and moral change can act on them. This is historical evidence. And if the outcome of that process of change turns out to be unconventional, isn't that also the usual result of social change throughout history?

At least, it is worth discussing.