
Welcome!

If you're new to the EA Forum:

  • Consider using this thread to introduce yourself!
  • You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
  • (You can also put this info into your Forum bio.)

Everyone: 

  • If you have something to share that doesn't feel like a full post, add it here! (You can also create a quick take.)
  • You might also share good news, big or small. (See this post for ideas.)
  • You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).

For inspiration, you can see the last open thread here.


Other Forum resources

  1. 🖋️  Write on the EA Forum
  2. 🦋  Guide to norms on the Forum
  3. 🛠️  Forum User Manual

 

I like adding images to my Forum posts. (Credit to Midjourney.)


Hello everyone,

I am Pacifique Niyorurema from Rwanda. I was introduced to the EA movement last year (2022). I did the introductory program and felt overwhelmed by the content: the 80,000 Hours podcast, Slack communities, local groups, and literature. Having a background in economics, and with the movement's mission aligning with my values and beliefs, I felt I had found my place. I am pretty excited to be in this community. With time, I plan to engage more in the communities and contribute as an active member. I tend to lean more toward meta EA, effective giving and governance, and poverty reduction.

Best.

Evan_Gaensbauer
Welcome to the EA Forum! Thanks for sharing!

Would other people find it helpful if I write an article detailing "reasons to be skeptical of working as an independent researcher?" The reasons will come from a mixture of interviews, first-principle thinking, reading past work on this topic by other people (including RP's work), some of my own experiences in research, and my observations as a grantmaker on LTFF.

"Agree"-vote if helpful, "Disagree" if not helpful; assume my nearest counterfactual is writing a different post drawn from the same distribution as my past posts or comments, particularly LTFF-related ones.

jjanosabel
Linch, I am interested in your use of "first-principle thinking", so I vote "Agree": yes, it would be helpful to see your article. Personally, I am sceptical of all fields that go under the heading of social science (a long story... I, too, would need some encouragement to write :-).

Hi! I'm Alyssa. I've been abstractly in favor of EA since around 2014ish when I first encountered it in the LessWrong-sphere, and actively doing the 10%-donations thing since starting my first job post-college in 2021. I've been adjacent to the EA community throughout all of time, but haven't been contributing much to it myself; thus, prior to now, my interactions with the EA forum have been purely reading-ish, rather than posting-ish. But now I've finally found something worth posting about! And thus I'm now here, introducing myself for real.

The 'thing worth posting about' in question is: slightly under two weeks ago, I found an EA channel in my company's Slack, including a nice straightforward pointer to "here are the various EA charities which the company is willing to do donation-matching towards". Previously, I'd looked up various particularly-effective charities in the company's donation-matching tool and found that it wasn't matching with any of them; but, despite this, there apparently are some it matches with! (In particular, CEA's various funds.) My company does up to $10,000 in donation-matching per person per year; I've been entirely missing that opportunity, due to not... (read more)

Felix Wolf 🔸
Hi Alyssa, welcome to the EA Forum. :) This post could also have been a Quick Take or a post in this open thread as a starting point. It serves well as your introduction, and I am glad you wrote it. Your post might already make someone curious about whether their own company participates in matching donations, and it gives an incentive to search for opportunities at a small scale. To get the proposed public list going, it could help to write a separate post going into more detail on which actions to take, with a more detailed sketch of the first plan (it does not need to be perfect!). Perhaps someone else has thought about this idea too, and we can brainstorm here a bit.
Alyssa Riceman
My broad image of what the plan would look like is something to the effect of: build a website whose main page consists of a large table of mappings from company names to EA subgroups within those companies, sorted alphabetically for ease-of-browsing.

There are two branches currently coming to mind for how to gather information into the table: either a second page consisting of a submission-form for "here's some new information to add to the table", or wiki-style editing of the main page.

Then the first difficult hurdle: there needs to be some sort of moderation / fact-checking in place on the user-submitted data, in order to avoid problems with trolls submitting fake information, plus ideally also to avoid problems with its listings going stale. The wiki route is somewhat more accessible-to-editors, which on the one hand means staleness will be less of a danger but on the other hand means trolls will be more of one; but either way there are likely to be nonzero problems on both fronts, if the list gets at all big.

And then the second difficult hurdle: making the people who might benefit from the list aware of its existence. In one prong, this means search-engine-optimization; in another prong, this means posting it on the EA Forum; in a third prong, this means finding people who curate lists of EA-resource-links and letting them know about the list as a thing-that-exists-and-might-be-worth-linking; and plausibly there exist additional relevant angles that I'm currently failing to think of.

If I were working on this purely by myself, I'm reasonably confident that I could do the basic "create website and stick it at a URL somewhere" step, and plausibly I'd be able to figure out the search-engine-optimization and the spreading-word-of-its-existence. (Post on EA Forum, post on the EA Discord channel I'm in, mention its existence to friends who might know of other good places to post about it, et cetera.) However, I feel pretty much entirely inadequately-equipp
Lorenzo Buonanno🔸
Hi Alyssa! You might be interested in this table from @High Impact Professionals (https://www.highimpactprofessionals.org/groups): https://airtable.com/app6rLqQByByYXVP2/shrMATSSQRnHazk4a/tblgJmXDO1vROQzjd?backgroundColor=red&viewControls=on I think your company is not in the table, and it might be worth adding!
Alyssa Riceman
Ooh. This does, indeed, look like pretty precisely the sort of thing I was envisioning, yes! Albeit less search-optimized than ideal, I think, in light of how I failed to find it even when actively running searches for e.g. "effective altruism at tech companies" in my first pass at trying-to-locate-prior-work-before-resigning-myself-to-doing-the-thing-myself. Still, this sure does seem like pretty solid prior work in the field! Enough that I will likely pivot over from "try to build the thing myself" to "try to figure out how to improve the preexisting thing", at least for the time being. Thanks for the pointer!
Gemma 🔸
Hi Alyssa - I run the EA group for EY. Happy to chat :)   
Felix Wolf 🔸
This topic was discussed yesterday in the EA Germany Slack, starting with this article and asking if there is a website which lists all companies in Germany who match donations: Why Workplace Giving Matters for Nonprofits + Companies

Some companies use external service providers for their matching, so we can look at those providers' clients to learn more about who matches donations: Matching Gift Software Vendors: The Comprehensive List

Some examples:
  • https://benevity.com/client-stories
  • https://www.joindeed.com/
  • https://360matchpro.com/partners/

Could be a first step.

The Long-Term Future Fund is somewhat funding-constrained. In addition, we (I) have written a number of docs and announcements that we hope to release publicly in the next 1-3 weeks. In the meantime, I recommend that anti-x-risk donors who think they might want to donate to LTFF hold off on donating until after our posts are out next month, to help them make informed decisions about where best to donate. The main exception, of course, is funding time-sensitive projects from other charities.

I will likely not answer questions now but will be happy to do so after the docs are released.

(I work for the Long-Term Future Fund as a fund manager, aka grantmaker. Historically this has been entirely in a volunteer capacity, but I've recently started being paid as I've ramped up my involvement.)

Linch
Some docs out now:

  • https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations
  • https://forum.effectivealtruism.org/posts/zt6MsCCDStm74HFwo/ea-funds-organisational-update-open-philanthropy-matching (by Asya)
  • https://forum.effectivealtruism.org/posts/9vazTE4nTCEivYSC6/reflections-on-my-time-on-the-long-term-future-fund

These are the main announcements I wanted to be out before people donated. We'll also hopefully release a few other docs on a) what our marginal grants (may) look like, b) how LTFF differs from other AI alignment grantmakers, and c) adverse selection in longtermist grantmaking. We might also write more on other topics that we think will benefit the community, or for which there is or has been demand for information from donors, applicants, or broader members of the community.

Hi everyone, I'm Claudia!

I found out about Effective Altruism through the Waking Up app. Sam Harris brought William MacAskill on as a guest to speak about Being Good and Doing Good, and it really resonated with me. The Waking Up app had already done a great job of making me contemplate what it means to live a considered life, and then hearing about EA really got me thinking about my career and how I can contribute to the world.

I'm currently a Product Manager at a B2B startup, working remotely and based in Singapore. After reading the career guide from 80,000 hours, I found that operations management is probably the best career path for me at the moment, so I'm considering what options I can take to eventually get to Operations at a high impact organization. If anyone has a similar experience of pivoting from product management / working in startups to operations management, I'd love to get your advice!

I'm excited to be here, and looking forward to contributing to the EA community.

Felix Wolf 🔸
Hi Claudia, welcome to the EA Forum. :) There are EA ops groups: the EA Operations Slack for those who are already working at EA orgs, and the Facebook group Effective Altruism Operations. At EAGx Berlin 2023 we had an operations meetup; I can ask if you want to connect with some of the organizers. There is also the Effective Altruism Singapore local chapter. Good luck with your career path.
Richard_Leyba_Tejada
Hi Claudia. I am reading the career guide from 80,000 Hours (starting chapter 8 today). I have a systems/business analysis background. Best wishes on your career path. It seems that operations is the best career path for me too. Felix, thank you for the links.

Hi, I'm an undergraduate psychology student from Moldova. I found effective altruism when I was searching for a list of serious global problems when I was trying to write some fiction.

Now I'm trying to learn more about affective neuroscience and brain simulation in the hope that this information could help with AI alignment and safety.

Anyways, good luck with whatever you're working on.

JP Addison🔸
I love that origin story, sounds fun. Welcome!
Rainbow Affect
Thank you a lot for these kind words, Mr. JP Addison! I would like some EAs reading this comment to give some feedback on an idea I had regarding AI alignment that I'm really unsure about, if they're willing to. The idea goes like this:

  1. We want to make AIs that have human values.
  2. Human values ultimately come from our basic emotions and affects. These affects come from brain regions older than the neocortex. (Read Jaak Panksepp's work and affective neuroscience for more on that.)
  3. So, if we want AIs with human values, we want AIs that at least have emotions and affects, or something resembling them.
  4. We could, in principle, make such AIs by developing neural networks that work similarly to our brains, especially regarding those regions that are older than the neocortex.

If you think this idea is ridiculous and doesn't make any sense, please say so, even in a short comment.
mhendric🔸
Welcome to EA! I hope you will find it a welcoming and inspiring community. I don't think the idea is ridiculous at all! However, I am not certain 2 and 3 are true. It is unclear whether all our human values come from our basic emotions and affects (this would seem to exclude the possibility of value learning at the fundamental level; I take this to be still an open debate, and know people doing research on this). It is also unclear whether the only way of guaranteeing human values in artificial agents is via emotions and affects or something resembling them, even if that may be one way to do so.
Rainbow Affect
Oh my goodness, thanks for your comment! Panksepp did talk about the importance of learning and cognition for human affects. For example, pure RAGE is a negative emotion from which we seek to aggressively defend ourselves from no one in particular. Anger is learned RAGE, and we are angry about something or someone in particular. And then there are various resentments and hatreds that are more subdued and subtle, and which we harbor with our thoughts. Something similar goes for the other 6 basic emotions. Unfortunately, it seems like we don't know that much about how affects work. If I understand you correctly, you said that some of our values have little to no connection to our basic affects (be they emotional or otherwise). I thought that all our values are affective because values tell us what is good or bad and affects also tell us what is good or bad (i.e. values and affects have valence), and that affects seem to "come" from older brain regions compared to the values we think and talk about. So I thought that we first have affects (i.e. pain is bad for me and for the people I care about) and then we think about those affects so much that we start to have values (i.e. suffering is bad for anyone who has it). But I could be wrong. Maybe affects and values aren't always good or bad, and their difference may lie in more than how cognitively elaborated they are. I'd like to know more about what you meant by "value learning at the fundamental level".
mhendric🔸
That is interesting. I am not very familiar with Panksepp's work. That being said, I'd be surprised if his model (these specific basic emotions; these specific interactions of affect and emotion) were the only plausible option in current cogsci/psych/neuroscience.

Re "all values are affective": I am not sure I understand you correctly. There is a sense in which we use value in ethics (e.g. not helping while persons are starving faraway goes against my values), and a sense in which we use it in psychology (e.g. in a reinforcement learning paradigm). The connection between value and affect may be clearer for the latter than the former. As an illustration, I do get a ton of good feelings out of giving a homeless person some money, so I clearly value it. I get much less of a good feeling out of donating to AMF, so in a sense, I value it less. But in the ethical sense, I value it more - and this is why I give more money to AMF than to homeless persons. You claim that all such ethical-sense values ultimately stem from affect, but I think that is implausible - look at e.g. Kantian ethics or virtue ethics, both of which use principles that are not rooted in affect as their basis.

Re value learning at the fundamental level: it strikes me as a non-obvious question whether we are "born" with all the basic valenced states, and everything else is just a learning history of how states in the world affected basic valenced states before; or whether there are valenced states that only get unlocked/learned/experienced later. Having a child is sometimes used as an example - maybe that is just tapping into existing kinds of valenced states, but maybe all those hormones flooding your brain do actually change something in a way that could not be experienced before.

Either way, I do think it may make sense to play around with the idea more!
Rainbow Affect
Thanks for commenting! In other words, there seem to be values that are more related to executive functions (i.e. self-control) than to affective states that feel good or bad? That seems like a plausible possibility. There was a personality scale called the ANPS (Affective Neuroscience Personality Scales) that was correlated with the Big Five personality traits. They found that conscientiousness wasn't correlated with any of the six affects of the ANPS, while the other traits in the Big Five were correlated with at least one of the traits in the ANPS. So conscientiousness seems related to what you talk about (values that don't come from affects). But at the same time, there was research about how prone conscientious people are to experiencing guilt. They found that conscientiousness was positively correlated with how prone to guilt one is. So, it seems that guilt is an experience of responsibility that differs in some way from the affective states that Panksepp talked about. And it's related to conscientiousness, which could be related to the ethical philosophical values you talked about and the executive functions. Hm, I wonder if AIs should have something akin to guilt. That may lead to AI sentience, or it may not.

Bibliography
  • Uncovering the affective core of conscientiousness: the role of self-conscious emotions. Jennifer V. Fayard et al., J Pers., 2012 Feb. https://pubmed.ncbi.nlm.nih.gov/21241309/
  • A brief form of the Affective Neuroscience Personality Scales. Frederick S. Barrett et al., Psychol Assess., 2013 Sep. https://pubmed.ncbi.nlm.nih.gov/23647046/

Edit: I must say, I'm embarrassed by how much these comments of mine go by the "This makes intuitive sense!" logic, instead of doing rigorous reviews of scientific studies. I'm embarrassed by how my comments have such a low epistemic status. But I'm glad that at least some EAs found this idea interesting.
mhendric🔸
Re edit, you should definitely not feel embarrassed. A forum comment will often be a mix of a few sources and intuition rather than a rigorous review of all available studies. I don't think this must hold low epistemic status, especially for the purpose of the idea being exploration, rather than, say, a call for funding or such (which would require a higher standard of evidence). Not all EA discussions are literature reviews, otherwise chatting would be so cumbersome!  I'd recommend using your studies to explore these and other ideas! Undergraduate studies are a wonderful time to soak up a ton of knowledge, and I look fondly upon mine - I hope you'll have a similarly inspiring experience. Feel free to shoot me a pm if you ever want to discuss stuff. 

Hello everyone,

I am Janos, a newbie member of two days. I am originally from Hungary but have been living in London since the late 1950s.

I am interested in systemic reform of the current version of "capitalism", which is not capitalism as Adam Smith taught it.

I consider Effective Altruism as an immediate response to current suffering, like a First Aid service. But it has to run in parallel with effective efforts to transform a structurally faulty global system. 

As I see it, the problem is rooted in a bad economic model that is structured to extract socially created wealth and transfer it to the top percentile of the population.

PS

I have a self-taught economist's detailed view of the "nuts'n'bolts" analysis of the problem...

What is your nuts'n'bolts analysis of the problem?

Hello everyone, 

I am Joel Mwaura Kuiyaki from Kenya. I was introduced to the EA movement by a friend; I thought it might be one of those ordinary lessons, but I was actually intrigued and really enjoyed the first intro sessions we had. It was what I had been looking for, for a long while.

I intend to specialize in Effective giving, governance, and longtermism. 

However, I am still interested in learning more about other cause areas and implementing them. 

(Cross-posted on the EA Anywhere Slack and a few other places)

I have, and am willing to offer to EA members and organizations upon request, the following generalist skills: 

  • Facilitation. Organize and run a meeting, take notes, email follow-ups and reminders, whatever you need. I don't need to be an expert in the topic, I don't need to personally know the participants. I do need a clear picture of the meeting's purpose and what contributions you're hoping to elicit from the participants. 
  • Technical writing. More specifically, editing and proofreading, which don't require I fully understand the subject matter. I am a human Hemingway Editor. I have been known to cut a third of the text out of a corporate document while retaining all relevant information to the owner's satisfaction. I viciously stamp out typos. 
  • Presentation review and speech coaching. I used to be terrified of public speaking. I still am, but now I'm pretty good at it anyway. I have given prepared and impromptu talks to audiences of dozens-to-hundreds and I have coached speakers giving company TED talks to thousands. A friend who reached out to me for input said my feedback was "exceedingly helpful".
... (read more)

We should use quick posts a lot more. And anyone writing the more typical long posts should ALWAYS include the TL;DRs I see many doing. It will help not scare people off. I'm new to these forums, having joined about a month ago after first hearing Will M on Sam Harris a few times, reading Doing Good Better, listening to lots of 80k hours pods, doing the trial Giving What We Can pledge, joining the EA Anywhere Slack, etc. But I find the vast majority of these forum posts extremely unapproachable. I consider myself a pretty smart guy and I'm pretty into reading boo... (read more)

NickLaing
Completely agree, nice one - and I even forgot to do a TL;DR on my last post! (Although it was a 2-minute read and a pretty easy one, I think, haha.) Great to have you around :)
richardiporter
Thanks Nick!

Hi everyone, I'm Connor. I'm an economics PhD student at UChicago. I've been tangentially interested in the EA movement for years, but I've started to invest more after reading What We Owe The Future. In about a month, I'm attending a summer course hosted by the Forethought Foundation, so I look forward to learning even more.

I intend to specialize in development and environmental economics, so I'm most interested in the global health and development focus area of EA. However, I look forward to learning more about other causes.

I'm also hoping to learn more about how to orient my research and work towards EA topics and engage with the community during my studies.

What exactly does the "Request for Feedback" button do when writing a post? I began a post, clicked the aforementioned button, and my post got saved as a draft, with no other visible feedback as to what was happening, or whether the request was successful, or what would happen next.

Also, I kind of expected that I'd be able to mention what kind of feedback I'm actually interested in; a generic request for feedback is unlikely to give me the kind of feedback I'm interested in, after all. Is the idea here to add a draft section with details re: the request for feedback, or what?

Hello! I'm just here to introduce myself as I think I'm a bit of an unusual effective altruist. I am an astrobiologist and my research focuses on the search for life on Mars. Before discovering effective altruism I was always very interested in the long term future of life in the context of looming existential risks. I thought the best thing to do was to send life to other planets so that it would survive in a worst case scenario. But a masters degree later, I got into effective altruism and decided that this cause was a 10/10 on importance, 10/10 on negle... (read more)

Richard_Leyba_Tejada
Very interesting. I enjoyed reading your thoughts on life here and there (a loose reference to the "Now and Then, Here and There" anime). Our planet and life are very precious. Good meeting you :)
Rafael Harth
Are you aware of Robin Hanson's Grabby Aliens model? It doesn't have an immediate consequence for the research conclusions you listed, but I think it (and anthropic considerations in general) is in the wheelhouse of interesting stuff if you care about space travel.
JordanStone
Yeah, though I only became aware of it after getting involved with EA. It seems to fit in the group of very interesting philosophical space things that are scary and that I have no idea what to do about! Joining the club with the Great Filter, the Doomsday Argument, and the Fermi Paradox.
Felix Wolf 🔸
Hi Jordan, welcome to the EA Forum. :) @Ekaterina_Ilin studies stars and exoplanets at the Leibniz Institute for Astrophysics Potsdam. Maybe this connection can help you with your research.
JordanStone
Hi Felix. Thanks for the welcome and the introduction to @Ekaterina_Ilin :)

Hi everyone, I'm Paul, a software engineer based in Münster, Germany.

I came across Effective Altruism through William MacAskill's book a few years ago. Later, I came into contact with 80,000 Hours, Giving What We Can, and many of their representatives in various podcasts/books/articles. I consider Effective Altruism one of the greatest movements I have come across, and I feel very grateful to be able to contribute.

 

Up until now, I have "only" earned to give. But having recently quit my job and having had a few start-up experiences, I now want ... (read more)

Hi. I am posting under my real name. I have been effectively banned from LessWrong for non-actionable reasons. The moderators publicly made a bunch of false accusations without giving me a chance to review or rebut any of them. I believe it was because I don't think AI is necessarily, inevitably going to cause genocide, and because I think obvious existing techniques in software should allow us to control AI. This view seems to be banned from discussion, with downvotes used to silence anyone discussing it. I was curious whether this moderation approach applies here.

The... (read more)

Rainbow Affect
Mr. Monroe, that sounds like a terrible experience to go through. I'm sorry you went through that. (So, there are software techniques that can allow us to control AI? Sounds like good news! I'm not a software engineer, just someone who's vaguely interested in AIs and neuroscience, but I'm curious as to what those techniques are.)

Hello! My name is Dawson. I am near to completing my PhD in aerospace engineering. I recently switched to working remotely and I have been missing collaborative projects and working with people. 

If you know of projects, volunteer opportunities, or part-time jobs that would benefit from someone with an engineering research background, please let me know! My primary goal is having a regular reason to talk with people, and I would love to use that time to contribute to EA rather than working at a coffee shop or something. (I'm reading through the EA list of opportunities as well, but I thought I would post something here just in case someone reading this knows of an opportunity)

Linch
Welcome Dawson! Where are you physically based?
archdawson
I live in Hartford, Connecticut 

A thought, with low epistemic confidence:

Some wealthy effective altruists argue that by accumulating more wealth, they can ultimately donate more in the long run. While this may initially seem like a value-neutral approach, it reinforces an unequal rather than altruistic distribution of power.

Widening wealth disparities and consolidating power in the hands of a few further marginalises those who are already disadvantaged. As we know, more money is not inherently valuable. Instead, it is how much someone has relative to others that determines its exchange value, and therefore influence over scarce resources, in a zero-sum manner with other market participants, including recipients of charity and their benefactors.

Elias Malisi (Prince)
While I agree that more money is not inherently valuable, I believe there is a valid case for patient philanthropy, which you have not engaged with in your criticism of the concept. Moreover, I disagree with the statement that unequal distributions of power are conceptually opposed to distributions that maximise welfare impartially, in light of the argument that it is likely good to increase the power of agents who are sufficiently benevolent and intelligent. I assume that by "altruistic distributions" you mean distributions of power which maximise welfare impartially; if this interpretation is incorrect, my disagreement might no longer apply.

Epistemic status from here: I do not have a degree in economics and my knowledge of market dynamics is fairly limited, so I might have missed some implicit fact which validates the argument I am commenting on.

I believe that it may be inappropriate to see accumulating money as determining influence over scarce resources in a zero-sum manner, since gaining money does not necessarily reduce the influence of any other involved parties over existing resources. To understand this, we can look at the following scenario:

  1. Alice and Bob both own $100.
  2. There are 5 insecticide-treated bed nets, which are also owned by Alice.
  3. Bob earns $100. He now has twice as much money as Alice.
  4. However, Alice's 5 bed nets have retained their full effectiveness, and she isn't forced to sell any of them to Bob.
  5. Therefore, Alice has retained her influence over the scarce resource called bed nets even though the relative value of her financial resources decreased.

In the real world, Bob could presumably leverage his financial advantage by hiring mercenaries to steal the bed nets from Alice or using other forms of coercion, but he does not necessarily do this. Thus, Alice has lost potential influence but not influence, which is an important distinction because altruists are highly unlikely to use their money for t

I was doing the 80,000 hours career guide, but I suppose it's too ambitious for me. I just want to work for an org with a good altruistic mission, not completely maximize my impact. What's the career advice for that? I've been doing full-stack web dev for 11 years now, so I've learned a thing or two about running big complicated projects, at least in the software world, but I think this transfers, since complexity is complexity.

I looked at the orgs listed in the software dev career path, but they didn't seem very inspiring. I'm open to going back to colleg... (read more)

Imma🔸
80k is not the only one who provides altruistic career advice. You can check out:
  • Probably Good
  • Animal Advocacy Careers
  • Successif
There are probably a few more.
Michelle_Hutchinson
Sorry to hear you didn't find what you were looking for in the 80,000 Hours career guide. You could consider checking out this website that maintains a list of social purpose job boards. I'd guess that going through some of those would yield some good options for full stack web-dev roles at organisations with a broad range of missions, hopefully including some inspiring ones!

I'm extremely upset about recent divergence from ForumMagnum / LessWrong. 

  • 1 click to go from any page to my profile became 2 clicks. (is the argument that you looked at the clickstream dashboard and found that Quinn was the only person navigating to his profile noticeably more than he was navigating to say DMs or new post? I go to my profile a lot to look up prior comments so I can not repeat myself across discord servers or threads as much!) 
  • permalink in the top right corner of comment, instead of clicking the timestamp (David Mears suggests tha
... (read more)

Thanks for sharing your feedback! Responding to each point:

  1. I removed the profile button link because I found it slightly annoying that opening that menu on mobile also navigates you to your profile, plus I think it's unusual UX for the user menu button to also be a link.
  2. We recently changed this back, so now both the timestamp and the link icon navigate you to the comment permalink. We’ll probably change the link icon to copy the link to your clipboard instead.
  3. We didn't change the criteria for whether the vote arrows appear at the bottom of a post - we still use the same threshold (post is >300 words) as LW does. We did move the vote arrows from the right side of the title to the left side of the title on the post page. This makes the post page header more visually consistent with the post list item, which also has the karma info on the left of the title.

I think it's important that there isn't a knee jerk reaction (e.g. it may not be the fault of any person, there may be a complex cause, or there may not be any real issue at all).

Why have the virtual program scores sort of fallen off? That's a huge stat change.


3
Defacto
Example reasons:

* someone with a different vision has started and there are some rough spots to smooth out
* someone with a different vision has started and this is exactly the correct score produced by their correct programs
* there was gaming or gardening of the scores and this gardening stopped
* staff left or anticipate leaving after funding was pulled, and have a "school's out" mentality, e.g. participants showing up to an empty Zoom meeting
* reception to the surprise SBF appearance was less positive than expected

Hi Defacto! I work on Virtual Programs at CEA. 

I believe Angelina left a note in the post where she shared this graph, but right before that cycle we changed the quiz (only 4/10 of the questions are the same). We knew these new questions were quite a bit harder, although candidly we were not expecting score drops this dramatic.  We're doing some investigation of the questions, and my guess is we'll iterate a bit more since I think some of the new questions were difficult in ways that are not actually about understanding of the content. 

With all that said, I wouldn't focus too much on changes in this score right now, since they are almost certainly tracking changes in evaluation criteria and not changes in student understanding. 

-5
Defacto

Are the Career Conversations and AI Pause Debate weeks on the Forum part of a trend to have 'week-specific' topics? I sort-of like the idea to focus Forum discussion on rotating/specific topics (alongside the usual dynamics) to see what the Community thinks of issue X at a given time as well as share up-to-date opinions and resources.

If it is, is there a prospective calendar anywhere?

Hi all, 

I came across the 80,000 hours initiative a few years ago and it 100% aligned with my purpose in life. 

I'm on track to give 50% of my salary to the top EA charity [We are downsizing our home] later this year. 

Just having this purpose has undoubtedly made me happier and I'd like to promote this benefit to others, as my second passion is increasing happiness; the synergy between the two is fantastic. 

In the meantime, I've been adjusting some famous quotes to include an EA mindset. What think you? 

 

'If you want to lift y... (read more)

Perhaps not entirely EA, but I came across this potentially interesting and disturbing post by a statistician casting scepticism on the evidence that convicted Lucy Letby, and I wondered if any rationalists could assess its claims. It potentially touches on weaknesses in the justice system regarding medical evidence and how it's weighed up. https://gill1109.com/2023/05/24/the-lucy-letby-case/?amp=1

4
Lorenzo Buonanno🔸
You might want to post this on LessWrong: "an online forum and community dedicated to improving human reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. Each day, we aim to be less wrong about the world than the day before."

The blog post "This can't go on" is quite prominent in introductory reading lists for AI safety. I really struggle to see why. Most of the content in the post is about why the growth we currently have is very unusual and why we can't have economic growth forever. I think the mainstream audience is already OK with that, and that's a reason they are sceptical of AI boom scenarios. When I first read that post I was very confused. After organising a few reading groups, other people seem to have similar confusions too. It's weird to argue from "look, we can't grow... (read more)

4
Habryka
It's an argument for the most-important-century hypothesis. It seems like something big has to change this century; things can't go on as they are in terms of growth. This is IMO substantial evidence in favor of either extinction or something like a century of continued growth, and very strong evidence against "the world in 100 years will look kind of similar to what it looks like today".
8
emre kaplan🔸
I disagree with the following: "very strong evidence against 'the world in 100 years will look kind of similar to what it looks like today'". Growth is an important kind of change, and arguing against the possibility of some kind of extreme growth makes it more difficult to argue that the future will be very different. Let me frame it this way (scenario -> technological "progress" under that scenario):

1. AI singularity -> extreme progress within this century
2. AI doom -> extreme "progress" within this century
3. Constant growth -> moderate progress within this century, very extreme progress in 8,200 years
4. Collapse through climate change, political instability, war -> technological decline
5. Stagnation/slowdown -> weak progress within this century

Most of the mainstream audience gives most of its credence to scenarios 3, 4 and 5. Scenario 3 is the scenario with the highest technological progress. The blog post is mostly spent refuting scenario 3 by explaining the difficulty and rareness of growth and technological change. This argument makes people give more credence to scenarios 4 and especially 5 rather than 1 and 2, since scenarios 1 and 2 also involve a lot of technological progress. For these reasons, I'm more inclined to believe that an introductory blog post should focus more on assessing the possibility of 4 and 5 rather than 3. Arguing against 3 is still important, as it is decision-relevant to the question of whether philanthropic resources should be spent now or later. But it doesn't make a good intro blog post for AI risk.
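For readers wondering where a figure like "very extreme progress in 8,200 years" comes from, the arithmetic of sustained compounding can be sketched in a few lines. The 2% annual growth rate and the rough 10^70 atoms-in-our-galaxy comparison below are illustrative assumptions (of the kind the "This can't go on" post itself works with), not numbers taken from this comment:

```python
# Back-of-the-envelope: how many orders of magnitude does an economy grow
# if it compounds at 2%/year for 8,200 years? (Both numbers are
# illustrative assumptions for this sketch.)
import math

years = 8200
growth_rate = 0.02

# log10 of the total growth factor (1.02 ** 8200)
orders_of_magnitude = years * math.log10(1 + growth_rate)
print(f"Total growth factor: ~10^{orders_of_magnitude:.1f}")
```

The result is a growth factor on the order of 10^70, comparable to estimates of the number of atoms in our galaxy, which is why constant growth on that timescale is treated as physically implausible.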

Hello everyone,

I was introduced to the effective altruism movement about a decade ago, but it's taken me a while to engage with it.  At present, I'm interested in exploring ways that modern information and computing technology can help people more effectively align short term decisions (e.g. purchases, housing choices, transportation, employment, etc.) with their ultimate/long-term objectives (e.g. in medicine, economics, social justice, sustainability, etc.), as well as understanding the ethical implications of and possibilities allowed by emerging technologies, and what factors might ultimately determine or limit how those technologies are adopted and expressed. 

I have a proposal for making an AGI killswitch. 

Assuming god-like computational budgets, algorithmic improvements, and hardware improvements, can you use fully homomorphic encryption (FHE) to train and run an AGI? FHE allows you to run computation on encrypted data without decrypting it. Wouldn't such an AGI find the world illegible without its input being specifically encrypted for it with a key? 

Can the key then be split into shards so that m of n shards are needed to encrypt queries to the FHE AGI? Can you also create a m of n function so tha... (read more)
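The "m of n shards" idea in the comment above is essentially Shamir's secret sharing: a key becomes the constant term of a random polynomial, shares are points on that polynomial, and any m of them recover the key by interpolation. A minimal, illustrative sketch (toy field size and Python's non-cryptographic `random` module — a real deployment would use a vetted library):

```python
# Toy m-of-n secret sharing (Shamir's scheme). Illustrative only:
# uses a small prime field and non-cryptographic randomness.
import random

PRIME = 2**127 - 1  # field modulus; large enough for a toy key

def split(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # Random polynomial of degree m-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, -1, PRIME) is the modular inverse (Python 3.8+)
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = split(key, m=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == key
```

Fewer than m shares reveal nothing about the key, which is the property the comment wants for gating queries to (or disabling) the hypothetical FHE-wrapped system.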

1
Weaver
I feel like this is an excessively software-driven way to do this. I have a suggestion: make it a hardware thing. AI relies on circuit boards, memory, connections, etc., so instead of making it something that can be found out using an algorithm, make it a physical key that does a physical thing. Think of a lock on an obscure door versus a really, really good password. You can brute-force any password, given time. Physical access is access. If you can't even find the widget? Yeah. The opposite is also important: if the killswitch needs to be integrated and then removed, it should be a "this needs to be done once every two years, but the rest of the time it's hidden via obscurity" thing. Also, just making AGI at all would be difficult, so hastening it for no reason? Hmmm.

Hi, I am Richard. I live in the US; I moved here from the Dominican Republic in 1994. I am reading the career guide from 80,000 Hours (starting chapter 8 today). I have a systems/business analysis background. I want to use my career to do good in EA, extreme poverty, and mental health. It looks like operations is the best path for me. Looking forward to meeting others in this space, learning more, and contributing.

Hello everybody, I'm Azad.

I've been into EA for a while, but I never explored the community aspect. Since I've decided to go to EAG this year and help volunteer, I thought it would be worth being more active on the forum and engaging with the community more.

 

I'm currently a student studying Computer Science. On the side I enjoy reading philosophy papers and books. My introduction into EA was through an Ethics course I took, and I pretty quickly bought it as a moral obligation. From there on I took an interest in AI and animal suffering. Nice to meet you all!

I would like the XPT posts to be assembled with the sequence functionality, please: https://forum.effectivealtruism.org/users/forecasting-research-institute. I like the sequence functionality for keeping track of my progress and of the order in which I read things.

Is there any online aggregator for short term volunteering opportunities with EA-affiliated orgs?

Anecdotally I've seen a lot of interest in volunteering among the EAs in my area, but a quick search for volunteering opportunities online only shows the more traditional "staff a soup kitchen" or "do phone banking" style opportunities. While certainly these are good things, it seems against the ethos of doing the most good with limited resources. Many of the EAs near me are professionals with expertise in high-demand / low-supply skill sets, and it seems like ... (read more)

I have a question regarding artificial sentience. 
While I do agree that it might be theoretically possible and could cause suffering on an astronomical scale, I do not understand why we would intentionally or unintentionally create it. Intentionally, I don't see any reason why a sentient AI would perform any better than a non-sentient AI. And unintentionally, I could imagine that with some unknown future technology it might be possible. But no matter how complex we make AI with our current technology, it will just become a more "intelligent" binary sys... (read more)

Introducing Myself

Who I am

My name is Elias Malisi, though my friends call me Prince.

I am an undergraduate student of philosophy & physics at the University of St Andrews and I volunteer as an IT-officer for the physics education non-profit Orpheus e.V.

My mission is to improve the long-term future through research and advocacy that seek to mitigate risks from misaligned incentive structures while advancing the development of safe AI.

I grew up in Germany where I recently graduated high school with a perfect GPA as the best student of ethics in the countr... (read more)

Hello all, I'm new here and trying to find my way around the site. The main reasons I joined are: 

  • to look for practical information about donating and tax returns in the Netherlands. Does anyone know where I could find information about this? I'm trying to figure out if it's worth it to get something back from taxes and what hoops I have to jump through. 
  • suggestions for charities. My current idea is that I want to donate an amount to GiveWell for them to use as they see fit, and donate an amount to a charity specialized in providing contraception
... (read more)
4
Imma🔸
Welcome to the EA Forum. Great to hear that you would like to donate :). You can find information about charity selection and tax on the Doneer Effectief website. You can donate to GiveWell-recommended charities via Doneer Effectief, but also to a few other charities. They also have a page with info about tax, but you may want to read the website of the Belastingdienst to double-check. (I can try to find the info in English for you upon request.) If you are looking for a community where you can talk about giving and charity selection, see De Tien Procent Club, which is specific to the Netherlands, and Giving What We Can, which is international.
1
Shalott
Thanks, that's very useful information! En Nederlands is prima, hoor :)
1
James Herbert
Welcome to the Forum, Shalott! The Tien Procent Club is a great suggestion from Imma, but you can get a broader overview of the Dutch EA community via Effectief Altruisme Nederland at effectiefaltruisme.nl. For example, upcoming events include a career planning session, a workshop on lobbying, and various socials around the country. We also have a co-working space in Amsterdam, an introductory course where you can learn more and meet new people, information on volunteering, and much more besides! Feel free to get in touch if you have any further questions. 
3
Lorenzo Buonanno🔸
Hi Shalott, welcome to the Forum! * On taxes, you might be interested in this page https://doneereffectief.nl/advies/belastingvoordeel/ (en: https://doneereffectief.nl/en/advice/tax-and-anbi/ ) * On suggestions for charities, there is no easy answer, as it depends a lot on your empirical and moral beliefs. You might be interested in these posts on family planning, which seem related to what you're looking for. I have personally donated to Family Empowerment Media
1
Shalott
Thanks, that looks useful!

Has anything changed on the forum recently? I am no longer able to open posts in new tabs with middle-click? Is it just me?

5
JP Addison🔸
Sorry about that, we recently broke this fixing another bug. Fix should be live momentarily.

TLDR: Bio data scientist here, concerned about AI risks, working to get his institution (DCRI) at Duke working on AI and alignment.

--

Long Version: I wrote below blurb and pasted it into https://bard.google.com/ to get TLDR to us...
 

Can you create a TLDR for the following post: Hi, Sage Arbor here. I just joined effectivealtruism.org and have been listening to the 80K podcast for about a year. I work in data science (PhD in biochem) and currently work with clinical trials at Duke. My main concern is AI in the next 10 years. I'd like my institution Du... (read more)

1
Martin (Huge) Vlach
Is this (made by) you?

Hi, 

My name is Dhruv, and this is my first post on the forum. I have done a few EA-adjacent events, like fellowships and an EAGx event. I'm currently most interested in AI governance policy and effective giving.

[comment deleted]