
Welcome!

If you're new to the EA Forum:

  • Consider using this thread to introduce yourself!
  • You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
  • (You can also put this info into your Forum bio.)

Everyone: 

  • If you have something to share that doesn't feel like a full post, add it here! (You can also create a quick take.)
  • You might also share good news, big or small. (See this post for ideas.)
  • You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).

For inspiration, you can see the last open thread here.


Other Forum resources

  1. 🖋️  Write on the EA Forum
  2. 🦋  Guide to norms on the Forum
  3. 🛠️  Forum User Manual

 

I like adding images to my Forum posts. (Credit to Midjourney.)

Comments

The Long-Term Future Fund is somewhat funding-constrained. In addition, we (I) have written a number of docs and announcements that we hope to release publicly in the next 1-3 weeks. In the meantime, I recommend that anti-x-risk donors who think they might want to donate to the LTFF hold off until after our posts are out next month, to help them make informed decisions about where best to donate. The main exception, of course, is funding time-sensitive projects from other charities.

I will likely not answer questions now but will be happy to do so after the docs are released.

(I work for the Long-Term Future Fund as a fund manager, aka grantmaker. Historically this has been entirely in a volunteer capacity, but I've recently started being paid as I've ramped up my involvement.)

(Cross-posted on the EA Anywhere Slack and a few other places)

I have, and am willing to offer to EA members and organizations upon request, the following generalist skills: 

  • Facilitation. Organize and run a meeting, take notes, email follow-ups and reminders, whatever you need. I don't need to be an expert in the topic, I don't need to personally know the participants. I do need a clear picture of the meeting's purpose and what contributions you're hoping to elicit from the participants. 
  • Technical writing. More specifically, editing and proofreading, which don't require I fully understand the subject matter. I am a human Hemingway Editor. I have been known to cut a third of the text out of a corporate document while retaining all relevant information to the owner's satisfaction. I viciously stamp out typos. 
  • Presentation review and speech coaching. I used to be terrified of public speaking. I still am, but now I'm pretty good at it anyway. I have given prepared and impromptu talks to audiences of dozens-to-hundreds and I have coached speakers giving company TED talks to thousands. A friend who reached out to me for input said my feedback was "exceedingly helpful". If you plan to give a talk and want feedback on your content, slides, or technique, I would be delighted to advise. 

I am willing to take one-off or recurring requests. I reserve the right to start charging if this starts taking up more than a couple hours a week, but for now I'm volunteering my time and the first consult will always be free (so you can gauge my awesomeness for yourself). Message me or email me at optimiser.joe@gmail.com if you're interested. 

Hello everyone,

I am Pacifique Niyorurema from Rwanda. I was introduced to the EA movement last year (2022). I did the introductory program and felt overwhelmed by the content: the 80,000 Hours podcast, Slack communities, local groups, and literature. Having a background in economics, and with the mission aligning with my values and beliefs, I felt I had found my place. I am pretty excited to be in this community. With time, I plan to engage more in the community and contribute as an active member. I tend to lean more toward meta EA, effective giving and governance, and poverty reduction.

Best.

We should use quick posts a lot more. And anyone writing the more typical long posts should ALWAYS include the TL;DRs I see many doing; it will help not scare people off. I'm new to these forums: I joined about a month ago after first hearing Will M on Sam Harris a few times, reading Doing Good Better, listening to lots of 80,000 Hours podcasts, taking the trial Giving What We Can pledge, joining the EA Anywhere Slack, etc. But I find the vast majority of these forum posts extremely unapproachable. I consider myself a pretty smart guy and I'm pretty into reading books and listening to podcasts, but I'm still quite put off by the constant wall of words delivered by the Forum Digest (a feature I love!). I have enjoyed a few posts I've found and skimmed; it's just that the main content is usually way too much.

Completely agree, nice one - and I even forgot to do a TL;DR on my last post! (Although it was a 2-minute read and a pretty easy one, I think, haha.)

Great to have you around :)

Hi everyone, I'm Connor. I'm an economics PhD student at UChicago. I've been tangentially interested in the EA movement for years, but I've started to invest more after reading What We Owe The Future. In about a month, I'm attending a summer course hosted by the Forethought Foundation, so I look forward to learning even more.

I intend to specialize in development and environmental economics, so I'm most interested in the global health and development focus area of EA. However, I look forward to learning more about other causes.

I'm also hoping to learn more about how to orient my research and work towards EA topics and engage with the community during my studies.

Hello everyone, 

I am Joel Mwaura Kuiyaki from Kenya. I was introduced to the EA movement by a friend, thinking it might be one of those ordinary lessons, but I was actually intrigued and really enjoyed the first intro sessions we had. It was what I had been looking for for a long while.

I intend to specialize in effective giving, governance, and longtermism.

However, I am still interested in learning more about other cause areas and implementing them. 

I'm extremely upset about the recent divergence from ForumMagnum / LessWrong.

  • 1 click to go from any page to my profile became 2 clicks. (Is the argument that you looked at the clickstream dashboard and found that Quinn was the only person navigating to his profile noticeably more than he was navigating to, say, DMs or a new post? I go to my profile a lot to look up prior comments so I don't repeat myself across Discord servers or threads as much!)
  • The permalink is now in the top-right corner of the comment, instead of clicking the timestamp. (David Mears suggests that we're in violation of industry standard now.)
  • Moving the upvote/downvote to the left, and removing it from the bottom! This seems backwards to me: we want more people upvoting/downvoting at the bottom of posts (presumably to decrease voting on things without actually reading them) and fewer people voting at the top!

I'm neutral on the Quick Takes rebrand: I'm a huge fan of shortform overall (if I were Dictator of Big EA I would ban Twitter and Facebook and move everybody to shortform/quick takes!), and I trust y'all to do whatever you can to increase adoption.

Thanks for sharing your feedback! Responding to each point:

  1. I removed the profile button link because I found it slightly annoying that opening that menu on mobile also navigates you to your profile, plus I think it's unusual UX for the user menu button to also be a link.
  2. We recently changed this back, so now both the timestamp and the link icon navigate you to the comment permalink. We’ll probably change the link icon to copy the link to your clipboard instead.
  3. We didn't change the criteria for whether the vote arrows appear at the bottom of a post - we still use the same threshold (post is >300 words) as LW does. We did move the vote arrows from the right side of the title to the left side of the title on the post page. This makes the post page header more visually consistent with the post list item, which also has the karma info on the left of the title.

A thought, with low epistemic confidence:

Some wealthy effective altruists argue that by accumulating more wealth, they can ultimately donate more in the long run. While this may initially seem like a value-neutral approach, in doing so they reinforce an unequal, rather than altruistic, distribution of power.

Widening wealth disparities and consolidating power in the hands of a few further marginalises those who are already disadvantaged. As we know, more money is not inherently valuable; instead, it is how much someone has relative to others that determines its exchange value, and therefore their influence over scarce resources, which is zero-sum with other market participants, including recipients of charity and their benefactors.


test

Hi!

I'm Calum Richards from the Washington D.C. area. I've been an EA lurker since 2015, and I was originally introduced to the community through Scott Alexander's blog, Slate Star Codex (now Astral Codex Ten on Substack).

I taught high school math for a few years after college, and I'm now recruiting at GiveWell!

I'm happy to connect with anyone — best way to reach me is through DMs here, and you can also find me on LinkedIn.

Has anything changed on the Forum recently? I am no longer able to open posts in new tabs with middle-click. Is it just me?

Sorry about that; we recently broke this while fixing another bug. The fix should be live momentarily.

I have a proposal for making an AGI killswitch. 

Assuming godlike computational budgets, algorithmic improvements, and hardware improvements, can you use fully homomorphic encryption (FHE) to train and run an AGI? FHE allows you to run computation on encrypted data without decrypting it. Wouldn't such an AGI find the world illegible without its input being specifically encrypted for it with a key?

Can the key then be split into shards so that m of n shards are needed to encrypt queries to the FHE AGI? Can you also create an m-of-n scheme so that if m of the n shards are destroyed, the key is useless?

Can you distribute the keys in such a way that, if the AGI destabilizes or hurts society, m of the n shards are destroyed as a byproduct and the AGI is functionally turned off?
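For concreteness, the m-of-n split described above already exists as Shamir's secret sharing. Here is a minimal Python sketch of that threshold scheme, with toy parameters I've made up for illustration; it is not production cryptography, and it says nothing about the FHE layer or how shard destruction would actually be enforced:

```python
import random

# Toy Shamir secret sharing: split a key into n shards so that any m of them
# reconstruct it, while fewer than m reveal (almost) nothing.
# Illustration only -- small demo parameters, not production crypto.
PRIME = 2**127 - 1  # a prime large enough for a demo-sized secret

def split_secret(secret, m, n):
    """Return n shards (x, y); any m of them can reconstruct the secret."""
    # Random polynomial of degree m-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    return [
        (x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
        for x in range(1, n + 1)
    ]

def reconstruct(shards):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shards):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shards):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789
shards = split_secret(key, m=3, n=5)
assert reconstruct(shards[:3]) == key   # any 3 of the 5 shards recover the key
assert reconstruct(shards[2:5]) == key
assert reconstruct(shards[:2]) != key   # 2 shards are (almost surely) useless
```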

I think this would create a kill switch that is hard to subvert by the AGI and easily triggered by society. 

At the limit, you could create a shard for every person on the planet, with only one shard deletion required to delete the key. Then if anyone doesn't like the way things are going with the AGI, they could delete their shard, killing it.

You might argue that this is a brute-force method for alignment: everyone is happy with the AGI's behavior, or the AGI is rendered nonfunctional. This method avoids defining what alignment is. That matters because it is intractable to define what an aligned state is. There are too many divergent and contradictory opinions about what alignment means to ever reach universal consensus. What alignment means will necessarily change from person to person and moment to moment. It is too subjective and value-laden a goal to be objectively defined and put into an algorithm to control an AI.

I would love criticism of this idea to know why it is impractical, not-even-wrong, or flawed.

One problem is that the FHE AGI could secretly steal enough key shards to generate its own functional copy of the key. This seems hard to do, but not impossible.

Another problem is that the AGI could get smart enough to break the FHE.

FHE of the AGI might make it harder to interpret its internal workings.

I feel like this is an excessively software-driven way to do this. I have a suggestion.

Make it a hardware thing.

AI relies on circuit boards, memory, connections, etc., so instead of making it something that can be found out using an algorithm, make it a physical key that does a physical thing.

Think of a lock on an obscure door versus a really really good password.

You can brute force any password, given time. Physical access is access.

If you can't even find the widget? Yeah.

The opposite is also important. If the killswitch needs to be integrated and then removed, it should be a case of "this needs to be done once every two years, but the rest of the time it's hidden via obscurity".

Also, just making AGI at all would be difficult, so why hasten it for no reason? Hmmm.

Hello all, I'm new here and trying to find my way around the site. The main reasons I joined are: 

  • to look for practical information about donating and tax returns in the Netherlands. Does anyone know where I could find information about this? I'm trying to figure out if it's worth it to get something back from taxes and what hoops I have to jump through. 
  • suggestions for charities. My current idea is that I want to donate an amount to GiveWell for them to use as they see fit, and donate an amount to a charity specialized in providing contraception, abortion, and gynaecological health care for women. If you have suggestions, I would love to hear them. Preferably charities with ANBI status, which is a necessity in the Netherlands for the tax returns. Right now the only things I've found are SheDecides, which is more about political action than practical work, and Pathfinder International, which doesn't have the ANBI stamp.

Any replies are welcome!

Welcome to the EA forum. Great to hear that you would like to donate :).

You can find information about charity selection and tax on the Doneer Effectief website. You can donate to GiveWell-recommended charities via Doneer Effectief, but also to a few other charities. They also have a page with info about tax, but you may want to read the website of the Belastingdienst to double-check. (I can try to find the info in English for you upon request.)

If you are looking for a community where you can talk about giving and charity selection, see De Tien Procent Club, which is specific to the Netherlands, and Giving What We Can, which is international.

Thanks, that's very useful information! And Dutch is fine, by the way :)

Hi Shalott, welcome to the Forum!

Thanks, that looks useful!

TL;DR: Bio data scientist here, concerned about AI risks, working to get his institution at Duke (DCRI) working on AI and alignment.

--

Long version: I wrote the blurb below and pasted it into https://bard.google.com/ to get a TL;DR to use.
 

Can you create a TLDR for the following post: Hi, Sage Arbor here. I just joined effectivealtruism.org and have been listening to the 80K podcast for about a year. I work in data science (PhD biochem) and currently work with clinical trials at Duke. My main concern is AI in the next 10 years. I'd like my institution, Duke Clinical Research Institute (DCRI), to create a center of excellence for AI and dedicate a percentage of work towards alignment. I'm starting to give talks to that end now. I thought this might be a good place to post alignment articles or work I have to get suggestions on improvement (as always, that work seems 6 months off when I find time).

Sure, here is a TLDR for Sage Arbor's post:

  • Sage Arbor is a data scientist with a PhD in biochemistry who works with clinical trials at Duke.
  • They are concerned about the potential risks of AI in the next 10 years and want to help their institution, Duke Clinical Research Institute (DCRI), create a center of excellence for AI and dedicate a percentage of work towards alignment.
  • They are starting to give talks about this and thought this forum might be a good place to post alignment articles or work they have to get suggestions on improvement.

Here is a more concise TLDR:

  • Data scientist concerned about AI risks wants to create AI alignment center at DCRI.

I hope this is helpful!
