All posts

New & upvoted

Friday, 12 July 2024

Frontpage Posts

Quick takes

David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he’s been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn’t put much thought into what to do with his fortune. Are there concerted efforts in the EA community to get these people on board? Like, is there a Google Doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn’t be hard to get in touch. It seems like increasing the probability that he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn’t worth pursuing or likely to fail? Have people tried? I’m a bit of an outsider here, so I’d love to hear people’s thoughts on what I’m sure seems like a pretty naive take! https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
What's the lower bound on vaccine development time? Toby Ord writes in a recent post:

> The expert consensus was that it would take at least a couple of years for Covid, but instead we had several completely different vaccines ready within just a single year

My intuition is that there's a lot more we can shave off from this. I think this because vaccine development seems to be mostly bottlenecked by the human-trial phase, which can take many months, whereas developing the vaccine itself can be done in far less time (perhaps a month, but someone correct me if I'm wrong). What are the current methods for accelerating the human-trial phase so it's down to a handful of weeks, rather than months?

Topic Page Edits and Discussion

Thursday, 11 July 2024

Frontpage Posts


Quick takes

Something I persistently struggle with is that it's near-impossible to know everything that has been said about a topic, and that makes it really hard to know when an additional contribution is adding something or just repeating what's already been said, or worse, repeating things that have already been refuted. To an extent this seems inevitable, and I just have to do my best and sometimes live with having contributed more noise than signal in a particular case. But I feel like I have an internal tuning knob for "say more" vs. "listen more", and I find it really hard to know which direction is overall best.

Topic Page Edits and Discussion

Wednesday, 10 July 2024

Frontpage Posts

Quick takes

Microsoft have backed out of their OpenAI board observer seat, and Apple will refuse a rumoured seat, both in response to antitrust threats from US regulators, per Reuters. I don’t know quite how to parse this: I think it’s likely that the US regulators don’t care much about safety in this decision, nor do I think it meaningfully changes Microsoft’s power over the firm. Apple’s rumoured seat was interesting, but unlikely to have any bearing either.
A core part of the longtermist project is making it very clear to people today that 21st century humanity is far from the peak of complex civilization. Imagine an inhabitant of a 16th-century medieval city looking at their civilization and thinking “This is it; this is civilization close to its epitome. Sure, we may build a few more castles over there, expand our army and conquer those nearby kingdoms, and develop a new way to breed ultra-fast horses, but I think the future will be like this, just bigger”. As citizens of the 21st century we’re in the position to see how wrong this would be, yet I think we’re prone to making a very similar type of error.

To get past this error, a fun exercise is to try to explain the scale of 21st century civilization in terms of concepts that would be familiar to our 16th century friends. Then we can extrapolate this into the future to better intuit the scale of future civilisations. Here are two ways to do so:

* Military power: The United States military is the strongest armed force in the world today. How do we convey the power of such a force to citizens of the distant past? One way would be to ask them to consider their own military - foot soldiers, bowmen, cavalry, and all - and then ask how many such armies would be needed to rival the power of the modern-day US military. I’d guess that the combined armies of 100 medieval kingdoms would struggle to pose a challenge to the US military. Ditto for the 21st century. I expect the combined strength of 100 US militaries[1] to struggle to make a scratch in the military power of future civilizations.

* Infrastructure and engineering capability: Men and women of the distant past would view modern-day human civilization as god-like engineers. Today, we build continent-spanning electric grids to power our homes and construct entire cities in a handful of years. How do we communicate this engineering prowess to our 16th century medieval city counterparts? I’m no civil engineer, but I estimate that the largest state governments of today could rebuild the entire infrastructure of a medieval city in a handful of months if they tried. Ditto for the 21st century. I expect that the civilisations of the future will be able to rebuild the entirety of Earth’s infrastructure - cities, power grids, factories, etc. - within a few months. To put that into context, imagine a civilisation that, starting in January, could rebuild London, Shanghai, New York, every highway, airport, bridge, port, and dam, by the time summer rolled around. That would certainly qualify them for the title of a supercivilisation!

1. ^ Again, 100 is a rough guess - it could be more or less, potentially by orders of magnitude.
I'm pretty confident that a majority of the population will soon have very negative attitudes towards big AI labs. I'm extremely unsure about what impact this will have on the AI Safety and EA communities (because we work with those labs in all sorts of ways). I think this could increase the likelihood of "Ethics" advocates becoming much more popular, but I don't know if this necessarily increases catastrophic or existential risks.
This is a sloppy rough draft that I have had sitting in a Google Doc for months, and I figured that if I don't share it now, it will sit there forever. So please read this as a rough grouping of some brainstormy ideas, rather than as some sort of highly confident and well-polished thesis.

- - - - - -

What feedback do rejected applicants want? From speaking with rejected job applicants within the EA ecosystem during the past year, I roughly conclude that they want feedback in two different ways:

* Emotional care, which is really just a different way of saying “be kind rather than being mean or being neutral.”[1] They don’t want to feel bad, because rejection isn’t fun. Anybody who has been excluded from a group of friends, or kicked out of a company, or in any way excluded from something they wanted to be included in knows that it can feel bad.[2] It feels even worse if you appear to meet the requirements of the job, put in time and effort to try really hard, care a lot about the community and the mission, perceive this as one of only a few paths available to you for more/higher impact, and then get summarily excluded with a formulaic email template. There isn’t any feasible way to make a rejection feel great, but you can minimize how crappy it feels. Thank the candidates for their time/effort, and emphasize that you are rejecting this application for this role rather than rejecting this person in general. Don't reject people immediately after their submission; wait a couple of days. If Alice submits a work trial task and less than 24 hours later you reject her, it feels to her like you barely glanced at her work, even if you spent several hours diligently going over it.

* Actionable feedback for improving. If candidates lack a particular skill, they would like to know how to get better, so that they can go learn that skill and then be a stronger candidate for this type of role in the future. If the main differentiator between me and John Doe was that John Doe scored 50 points better on an IQ test or that he attended Impressive School while I attended No Name School, maybe don’t tell me that.[3] But if the main differentiator is that John Doe has spent a year as a volunteer for the EA Virtual Program, or that he is really good with spreadsheets, or that this candidate didn’t format documents well, let the candidate know. Now the candidate knows something they can do to become a more competitive candidate. They can practice their Excel skills and look up spreadsheet tutorials, they can get some volunteering experience with a relevant organization, and they can learn more about how to use headers and adjust line spacing. Think of this just like a big company investing in the local community college and sponsoring a professorship there: they are building a pipeline of potential future employees.

Here is a rough hierarchy of what, in an ideal world, I’d like to receive when I am rejected from a job application:

* “Thanks for applying. We won’t be moving forward with your application. Although it is never fun to receive an email like this, we want to express appreciation for the time you spent on this selection process. Regarding why we chose not to move forward with your application, it looks like you don’t have as much experience directly related to X as the candidates we are moving forward with, and we also want someone who is able to Y. Getting experience with Y is challenging, but some ideas are here: [LINK].”

* “Thanks for applying. We won’t be moving forward with your application. It looks like you don’t have as much experience directly related to X as the most competitive candidates, and we also want someone who is able to Y.”

* “Thanks for applying. We won’t be moving forward with your application.”

That last bullet point is what 90% of EA organizations send, in my experience. I have seen two or three that sometimes send rejections similar to the first or the second.[4] If the first bullet point looks too challenging and you think it would take too much staff time, then see if you can do the second: simply telling people why (although this will depend on the context) can make rejections a lot less hurtful, and also points them in the right direction for how to get better.

1. ^ I haven't seen any EA orgs being mean in their rejections, but I have seen and heard of most of them being neutral.

2. ^ I still remember how bad it felt being told that I couldn't join a feminist reading group because they didn't want any men there. I think that was totally understandable, but it still felt bad to be excluded. I remember not being able to join a professional networking group because I was older than the cutoff age (they required new members to be under 30, and I was 31 when I learned about it). These things happened years ago, and were not particularly influential in my life. But people remember being excluded.

3. ^ Things that people cannot change with a reasonable amount of time and effort (or things that would require a time machine, such as what university someone attended) are generally not good pieces of feedback to give people.

4. ^ Shout out to the Centre for Effective Altruism and Animal Advocacy Careers for doing a better than average job. It has been a while since I've interacted with the internals of either of their hiring systems, but last I checked they both send useful and actionable feedback for at least some of their rejections.
I've been mulling over the idea of proportional reciprocity for a while. I've had some musings sitting in a Google Doc for several months, and I think that either I share a rough/sloppy version of this, or it will never get shared. So here is my idea. Note that this is in relation to job applications within EA, and I felt nudged to share this after seeing Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism.

- - - -

Proportional reciprocity

I made this concept up.[1] The general idea is that relationships tend to be somewhat reciprocal, but in proportion to the maturity/growth of the relationship: the level of care and effort that I express toward you should be roughly proportional to the level of effort and care that you express toward me. When that is violated (either upward or downward), people feel that something is wrong.[2] As it relates to job applications and hiring rounds, the more of a relationship the two parties have, the more care and consideration the rejection should involve.

How does this relate to hiring in the context of EA? If Alice puts in 3 hours of work, and then Alice perceives that Bob puts in 3 minutes of work, Alice feels bad. That's the simplistic model. As a person running a hiring round, you might not view yourself as having a relationship with these people, but there is a sort of psychological contract that exists, especially after an interview; the candidate expects you to behave in certain ways.

One particularly frustrating experience I had was with an EA organization that had a role with a title, skills, and responsibilities that matched my experience fairly well. That organization reached out to me and requested that I answer multiple short essay-type questions as a part of the job application.[3] I did so, and I ended up receiving a template email from a noreply email address that stated “we have made the decision to move forward with other candidates whose experience and skills are a closer match to the position.” In my mind, this is a situation in which a reasonable candidate (say, someone not in the bottom 10%) who spent a decent chunk of time thoughtfully responding to multiple questions, and who actually does meet the stated requirements for the role, is blandly rejected. This kind of scenario appears to be fairly common. And I wouldn't have felt so bitter about it if they hadn't specifically reached out to me and asked me to apply. Of course, I don’t know how competitive I was or wasn’t; maybe my writing was so poor that I was literally the worst-ranked candidate.

What would I have liked to see instead? I certainly don’t think that I am owed an interview, nor a job offer, and in reality I don’t know how competitive the other candidates were.[4] But I would have liked to have been given a bit more information beyond the implication of merely “other candidates are a better match.” I would love to be told in what way I fell short, and what I should do instead. Since they specifically contacted me to invite me to apply, something along the lines of “Hey Joseph, sorry for wasting your time. We genuinely thought that you would have been among the stronger candidates, and we are sorry that we invited you to apply only to reject you at the very first stage” would have felt more human and personal, and I wouldn’t hold it against them. But instead I got a very boilerplate email template.

Of course, I'm describing my own experience, but lots of other people in EA and adjacent to EA go through this. It isn't unusual for candidates to be asked to do 3-hour work trials without compensation, to be invited to interview and then rejected without information, or to meet 100% of the requirements of a job posting and then get rejected 24 hours after submitting an application.[5]

If the applicant putting in effort and not getting reciprocity is one failure mode, the other failure mode that I’ve seen is the applicant being asked for more and more effort. A hiring round from one EA-adjacent organization involved a short application form, and then a three-hour unpaid trial task. I understand the need to deal with a large volume of applicants; interviewing 5-10 people is feasible, interviewing 80 is less so. What would I have liked to see instead? Perhaps a 30-minute trial task instead of a three-hour trial task. Perhaps a 10-minute screening interview. Perhaps an additional form with some knockout questions and non-negotiables. Perhaps a three-hour task that is paid.

1. ^ Although some social psychologist has probably thought of it before me, and in much more depth.

2. ^ There are plenty of exceptions, of course. I can’t obligate you to form a friendship with me by doing favors or by giving you gifts. The genuineness matters also: a sycophant who only engages in a relationship in order to extract value isn’t covered by proportional reciprocity. And there are plenty of misperceptions regarding what level a relationship has reached; I’ve seen many interpersonal conflicts arise from two people having different perceptions of the current level of reciprocity. I think that this is particularly common in romantic relationships among young people.

3. ^ I don’t remember exactly how much time I spent on the short essays. I know that it wasn’t a five-hour effort, but I also know that I didn’t just type a sentence or two and click ‘submit.’ I put a bit of thought into them, and I provided context and justification. Maybe it was between 30 and 90 minutes? One question was about DEI and the relevance it has to the work that organization did. I have actually read multiple books on DEI and I've been exploring that area quite a bit, so I was able to elaborate and give nuance on that.

4. ^ Maybe they had twice as much relevant work experience as me, and membership in prestigious professional institutions, and experience volunteering with the organization. Or maybe I had something noticeably bad about my application, such as a blatant typo that I didn't notice.

5. ^ None of these are made-up scenarios. Each of these has happened either to me or to people I know.

Topic Page Edits and Discussion

Tuesday, 9 July 2024

Frontpage Posts

Quick takes

The recently released 2024 Republican platform says they'll repeal the recent White House Executive Order on AI, which many in this community thought was a necessary first step toward making future AI progress safer/more secure. This seems bad.

> Artificial Intelligence (AI) We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.

From https://s3.documentcloud.org/documents/24795758/read-the-2024-republican-party-platform.pdf, see bottom of pg 9.
EA Global: Bay Area 2025 will take place 21-23 February 2025 at the Oakland Marriott (the same venue as the past two years). Information on how to apply and other details to follow, just an FYI for now since we have the date.
This is a nudge to leave questions for Darren Margolias, Executive Director of @Beast Philanthropy, in the AMA post here! I'll be recording a video AMA with Darren based on the questions left in the post, and we'll try to get through as many of them as possible. For extra context, Beast Philanthropy is a charity founded by YouTuber MrBeast. They recently collaborated with GiveDirectly on this video; you can read more about it on GiveDirectly's blog and the Beast Philanthropy LinkedIn.

Monday, 8 July 2024

Frontpage Posts

Quick takes

Is talk about vegan diets being healthier mostly just confirmation bias and tribal thinking? A vegan diet can be very healthy or very unhealthy, and a non-vegan diet can also be very healthy or very unhealthy. The simplistic comparisons I tend to see contrast vegans who put a lot of care and attention toward their food choices and the health consequences with people who aren't really paying attention to what they eat (something like the standard American diet or some similar diet without much intentionality). I suppose in a statistics class we would talk about non-representativeness. Does the actual causal factor for health tend to be something more like caring about diet, paying attention to what one eats, or socio-economic status? If we controlled for factors like these, would a vegan diet still be healthier than a non-vegan diet?
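To make the "controlling for factors like these" question concrete, here is a minimal sketch of the kind of regression adjustment it implies. Every variable name and the dataset are hypothetical, and observational adjustment like this still wouldn't establish causation; it only illustrates what "holding diet attentiveness and socio-economic status constant" would mean in practice.

```python
# Hypothetical sketch: does a vegan indicator still predict a health outcome
# once plausible confounders (diet attentiveness, socio-economic status) are adjusted for?
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns in a hypothetical survey dataset:
#   health_score   - some composite health outcome
#   is_vegan       - 1 if vegan, 0 otherwise
#   diet_attention - self-reported attentiveness to food choices
#   ses            - socio-economic status index
df = pd.read_csv("diet_survey.csv")  # hypothetical file

# Naive comparison: vegan vs non-vegan, no adjustment.
naive = smf.ols("health_score ~ is_vegan", data=df).fit()

# Adjusted comparison: the is_vegan coefficient now reflects the association
# between veganism and health holding the measured confounders fixed.
adjusted = smf.ols("health_score ~ is_vegan + diet_attention + ses", data=df).fit()

print(naive.params["is_vegan"], adjusted.params["is_vegan"])
```

If the adjusted coefficient shrinks toward zero relative to the naive one, that would suggest much of the apparent health difference is carried by attentiveness and socio-economic status rather than by veganism itself (subject to the usual caveats about unmeasured confounding).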
BOAS is closing its public investment campaign soon. Why you might consider this a very effective use of your money is best captured in my TEDx talk about Profit for Good. Other reasons you might want to invest:

* We've already donated €16,432 to charity, which helped distribute 8,648 bed nets, protecting 17,297 people from malaria and preventing 2,783 malaria cases. This not only saves at least 3 lives, but also adds €197,184 to the local economy through reduced illness. We have a feasible path to multiply this by tens to tens of thousands of times.
* Since we are not profitable, I should add that a lot of that money was generated from voluntary donations and my world record attempt for AMF.
* With the first 100 investors we opened a store, signed partnerships with MUD Jeans (you can also invest in them on the same platform!) and United Repair Centre, and hired the core team. We've grown revenue by an average of 25% per month this year.
* We have a 300K investment from a large bank and a foundation, which I cannot disclose yet, so we're not only backed by 'friends, family and fools'. This does not mean we are overfunded; we are actually heavily underfunded, because our plans are ambitious (and have to be if you want to move millions to effective charities).
* Invest from €100, so becoming a co-owner of BOAS is feasible for most.
* Up to 100% of your investment as a gift card (yes, really!), and lifetime discounts for all investors, so you can always get vintage denim at the best prices in return for helping us continue to grow fast.
* A unique opportunity to invest in Profit for Good systems change. BOAS was featured by Elle, Parool, Peter Singer, Bangkok Post, Rutger Bregman, BNR, Algemeen Dagblad, BNN Vara/Dolf Janssen, Radio 538, Metro News and many others for our ability to save jeans and lives, if you think PR is an important indicator of whether this works or not.
* More than 100 people, including Peter Singer and tennis pro Marcus Daniel, have already invested in our mission to save jeans and lives.
* We've already saved 7,000 jeans and 3 lives, along with 23 million liters of water (9 Olympic swimming pools), 202,621 kg of CO2 (2,006 flights from Amsterdam to Barcelona), 14 football fields of land from cotton farming, and 488 trash bags of waste from being dumped or burned.

Why you should not invest:

* Our investment is not made to make you rich. 90% of returns are dedicated to effective charities like the Against Malaria Foundation. That's the whole idea of Profit for Good.
* This is an investment in a high-risk, high-reward startup. We have a small shot at moving a lot of money to effective charities, but also significant odds of not generating much impact if we go bankrupt.
* EAIF thinks we're not a good fit for their fund and has rejected investment in BOAS multiple times; I have commented about that in the critical piece about EAIF.

You can invest here. Thanks, Vin
About a month ago, @Akash stated on LessWrong that he'd be interested to see more analysis of possible international institutions to build and govern AGI (which I will refer to in this quick take as "i-AGI").

I suspect many EAs would prefer an international/i-AGI scenario. However, it's not clear that countries with current AI leadership would be willing to give that leadership away. International AI competition is often framed as US vs China or similar, but it occurred to me that an "AI-leaders vs AI-laggards" frame could be even more important. AI-laggard countries are just as exposed to AGI existential risk, but presumably stand to gain less, on expectation, in a world where the US or China "wins" an AI race.

So here's a proposal for getting from where we are right now to i-AGI:

* EAs who live in AI-laggard countries, and are interested in policy, lobby their country's citizens/diplomats to push for i-AGI.
* Since many countries harbor some distrust of both the US and China, and all countries are exposed to AGI x-risk, diplomats in AI-laggard countries become persuaded that i-AGI is in their nation's self-interest.
* Diplomats in AI-laggard countries start talking to each other, and form an i-AGI bloc analogous to the Non-Aligned Movement during the Cold War. Countries in the i-AGI bloc push for an AI Pause and/or subordination of AGI development to well-designed international institutions. Detailed proposals are drawn up by seasoned diplomats, with specifics regarding e.g. when it should be appropriate to airstrike a datacenter.
* As AI continues to advance, more and more AI-laggard countries become alarmed and join the i-AGI bloc. AI pause movements in other countries don't face the "but China" argument to the same degree it is seen in the US, so they find traction rapidly with political leaders.
* The i-AGI bloc puts pressure on both the US and China to switch to i-AGI. Initially this could take the form of diplomats arguing about x-risk. As the bloc grows, it could take the form of sanctions etc. -- perhaps targeting pain points such as semiconductors or power generation.
* Eventually, the US and China cave to international pressure, plus growing alarm from their own citizens, and agree to an i-AGI proposal. The new i-AGI regime has international monitoring in place so nations can't cheat, and solid governance which dis-incentivizes racing.

Note that the above story is just one specific scenario among a broader class of such scenarios. My overall point is that "AI-laggard" nations may have an important role to play in pushing for an end to racing. E.g. maybe forming a bloc is unnecessary, and a country like Singapore would be able to negotiate a US/China AI treaty all on its own.

I wonder what Singaporeans like @Dion @dion-1 and @xuan think? Trying to think of who else might be interested. @Holly_Elmore perhaps? I encourage people to share as appropriate if you think this is worth considering.
"a Utilitarian may reasonably desire, on Utilitarian principles, that some of his conclusions should be rejected by mankind generally; or even that the vulgar should keep aloof from his system as a whole, in so far as the inevitable indefiniteness and complexity of its calculations render it likely to lead to bad results in their hands." (Sidgwick 1874)

Topic Page Edits and Discussion

Saturday, 6 July 2024

Frontpage Posts

Quick takes

For future debate weeks, it might be nice if we could select comments that changed our view! I often find comments more informative than posts. 
I’m pretty bullish on having these kinds of debates. While EA is doing well at having an impact in the world, the forum has started to feel intellectually stagnant in some ways. And I guess I feel that these debates provide a way to move the community forward intellectually. That's something I've been feeling has been missing for a while.

Topic Page Edits and Discussion

Friday, 5 July 2024

Frontpage Posts

Personal Blogposts

Quick takes

Lucius Caviola's post mentioned the "happy servants" problem:

> If AIs have moral patienthood but don’t desire autonomy, certain interpretations of utilitarian theories would consider it morally justified to keep them captive. After all, they would be happy to be our servants. However, according to various non-utilitarian moral views, it would be immoral to create “happy servant” AIs that lack a desire for autonomy and self-respect (Bales, 2024; Schwitzgebel & Garza, 2015). As an intuition pump, imagine we genetically engineered a group of humans with the desire to be our servants. Even if they were happy, it would feel wrong.

This issue is also mentioned as a key research question in Digital Minds: Importance and Key Research Questions by Mogensen, Saad, and Butlin. This is just a note to flag that there's also some discussion of this issue in Carl Shulman's recent 80,000 Hours podcast episode. (cf. also my post about that episode.)

Rob Wiblin: Yeah. The idea of training a thinking machine to just want to take care of you and to serve your every whim, on the one hand, that sounds a lot better than the alternative. On the other hand, it does feel a little bit uncomfortable. There’s that famous example, the famous story of the pig that wants to be eaten, where they’ve bred a pig that really wants to be farmed and consumed by human beings. This is not quite the same, but I think raises some of the same discomfort that I imagine people might have at the prospect of creating beings that enjoy subservience to them, basically. To what extent do you think that discomfort is justified?

Carl Shulman: So the philosopher Eric Schwitzgebel has a few papers on this subject with various coauthors, and covers that kind of case. He has a vignette, “Passion of the Sun Probe,” where there’s an AI placed in a probe designed to descend into the sun and send back telemetry data, and then there has to be an AI present in order to do some of the local scientific optimisation. And it’s made such that, as it comes into existence, it absolutely loves achieving this mission and thinks this is an incredibly valuable thing that is well worth sacrificing its existence. And Schwitzgebel finds that his intuitions are sort of torn in that case, because we might well think it sort of heroic if you had some human astronaut who was willing to sacrifice their life for science, and think this is achieving a goal that is objectively worthy and good. And then if it was instead the same sort of thing, say, in a robot soldier or a personal robot that sacrifices its life with certainty to divert some danger that maybe had a 1-in-1,000 chance of killing some human that it was protecting. Now, that actually might not be so bad if the AI was backed up, and valued its backup equally, and didn’t have qualms about personal identity: to what extent does your backup carry on the things you care about in survival, and those sorts of things. There’s this aspect of, do the AIs pursue certain kinds of selfish interests that humans have as much as we would? And then there’s a separate issue about relationships of domination, where you could be concerned that, maybe if it was legitimate to have Sun Probe, and maybe legitimate to, say, create minds that then try and earn money and do good with it, and then some of the jobs that they take are risky and whatnot.
But you could think that having some of these sapient beings being the property of other beings, which is the current legal setup for AI — which is a scary default to have — that’s a relationship of domination. And even if it is consensual, if it is consensual by way of manufactured consent, then it may not be wrong to have some sorts of consensual interaction, but can be wrong to set up the mind in the first place so that it has those desires. And Schwitzgebel has this intuition that if you’re making a sapient creature, it’s important that it wants to survive individually and not sacrifice its life easily, that it has maybe a certain kind of dignity.

So humans, because of our evolutionary history, we value status to differing degrees: some people are really status hungry, others not as much. And we value our lives very much: if we die, there’s no replacing that reproductive capacity very easily. There are other animal species that are pretty different from that. So there are solitary species that would not be interested in social status in the same kind of way. There are social insects where you have sterile drones that eagerly enough sacrifice themselves to advance the interests of their extended family. Because of our evolutionary history, we have these concerns ourselves, and then we generalise them into moral principles. So we would therefore want any other creatures to share our same interest in status and dignity, and then to have that status and dignity. And being one among thousands of AI minions of an individual human sort offends that too much, or it’s too inegalitarian. And then maybe it could be OK to be a more autonomous, independent agent that does some of those same functions. But yeah, this is the kind of issue that would have to be addressed.

Rob Wiblin: What does Schwitzgebel think of pet dogs, and our breeding of loyal, friendly dogs?

Carl Shulman: Actually, in his engagement with another philosopher, Steve Petersen — who takes the contrary position that it can be OK to create AIs that wish to serve the interests or objectives that their creators intended — does raise the example of a sheepdog really loves herding. It’s quite happy herding. It’s wrong to prevent the sheepdog from getting a chance to herd. I think that’s animal abuse, to always keep them inside or not give them anything that they can run circles around and collect into clumps. And so if you’re objecting with the sheepdog, it’s got to be not that it’s wrong for the sheepdog to herd, but it’s wrong to make the sheepdog so that it needs and wants to herd. And I think this kind of case does make me suspect that Schwitzgebel’s position is maybe too parochial. A lot of our deep desires exist for particular biological reasons. So we have our desires about food and external temperature that are pretty intrinsic. Our nervous systems are adjusted until our behaviours are such that it keeps our predicted skin temperature within a certain range; it keeps predicted food in the stomach within a certain range. And we could probably get along OK without those innate desires, and then do them instrumentally in service to some other things, if we had enough knowledge and sophistication. The attachment to those in particular seems not so clear. Status, again: some people are sort of power hungry and love status; others are very humble. It’s not obvious that’s such a terrible state.
And then on the front of survival that’s addressed in the Sun Probe case and some of Schwitzgebel’s other cases: if minds that are backed up, the position that having all of my memories and emotions and whatnot preserved less a few moments of recent experience, that’s pretty good to carry on, that seems like a fairly substantial point. And the point that the loss of a life that is quickly physically replaced, that it’s pretty essential to the badness there, that the person in question wanted to live, right?

Rob Wiblin: Right. Yeah.

Carl Shulman: These are fraught issues, and I think that there are reasons for us to want to be paternalistic in the sense of pushing that AIs have certain desires, and that some desires we can instil that might be convenient could be wrong. An example of that, I think, would be you could imagine creating an AI such that it willingly seeks out painful experiences. This is actually similar to a Derek Parfit case. So where parts of the mind, maybe short-term processes, are strongly opposed to the experience that it’s undergoing, while other processes that are overall steering the show keep it committed to that. And this is the reason why just consent, or even just political and legal rights, are not enough. Because you could give an AI self-ownership, you could give it the vote, you could give it government entitlements — but if it’s programmed such that any dollar that it receives, it sends back to the company that created it; and if it’s given the vote, it just votes however the company that created it would prefer, then these rights are just empty shells. And they also have the pernicious effect of empowering the creators to reshape society in whatever way that they wish. So you have to have additional requirements beyond just, is there consent?, when consent can be so easily manufactured for whatever.
