Welcome!
If you're new to the EA Forum:
- Consider using this thread to introduce yourself!
- You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
- (You can also put this info into your Forum bio.)
Everyone:
- If you have something to share that doesn't feel like a full post, add it here! (You can also create a quick take.)
- You might also share good news, big or small (See this post for ideas.)
- You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.
Other Forum resources
The Long-Term Future Fund is somewhat funding-constrained. In addition, we (I) have written a number of docs and announcements that we hope to release publicly in the next 1-3 weeks. In the meantime, I recommend that anti-x-risk donors who think they might want to donate to the LTFF hold off on donating until after our posts are out next month, so they can make informed decisions about where best to donate. The main exception, of course, is funding time-sensitive projects from other charities.
I will likely not answer questions now but will be happy to do so after the docs are released.
(I work for the Long-Term Future Fund as a fund manager, aka grantmaker. Historically this has been entirely in a volunteer capacity, but I've recently started being paid as I've ramped up my involvement.)
Testing
comment 6
comment 5
comment 4
comment 3
comment
comment 8
comment 7
(Cross-posted on the EA Anywhere Slack and a few other places)
I have, and am willing to offer to EA members and organizations upon request, the following generalist skills:
I am willing to take one-off or recurring requests. I reserve the right to start charging if this starts taking up more than a couple hours a week, but for now I'm volunteering my time and the first consult will always be free (so you can gauge my awesomeness for yourself). Message me or email me at optimiser.joe@gmail.com if you're interested.
Hello everyone,
I am Pacifique Niyorurema from Rwanda. I was introduced to the EA movement last year (2022). I did the introductory program and felt overwhelmed by the content: the 80,000 Hours podcast, Slack communities, local groups, and literature. Having a background in economics, and with the mission aligning with my values and beliefs, I felt I had found my place. I am pretty excited to be in this community. With time, I plan to engage more in the communities and contribute as an active member. I tend to lean more towards meta EA, effective giving and governance, and poverty reduction.
Best.
We should use quick posts a lot more. And anyone doing the more typical long posts should ALWAYS include the TLDRs I see many doing; it will help not scare people off. I'm new to these forums, having joined about a month ago after first hearing Will M on Sam Harris a few times, reading Doing Good Better, listening to lots of 80k hours pods, doing the trial Giving What We Can pledge, joining the EA Anywhere Slack, etc. But I find the vast majority of these forum posts extremely unapproachable. I consider myself a pretty smart guy and I'm pretty into reading books and listening to pods, but I'm still quite put off by the constant wall of words delivered by the forum digest (a feature I love!). I have enjoyed a few posts I've found and skimmed. It's just that the main content is usually way too much.
Completely agree, nice one - and I even forgot to do a TLDR on my last post! (Although it was a 2-minute read and a pretty easy one, I think, haha.)
Great to have you around :)
test reply
Hi everyone, I'm Connor. I'm an economics PhD student at UChicago. I've been tangentially interested in the EA movement for years, but I've started to invest more after reading What We Owe The Future. In about a month, I'm attending a summer course hosted by the Forethought Foundation, so I look forward to learning even more.
I intend to specialize in development and environmental economics, so I'm most interested in the global health and development focus area of EA. However, I look forward to learning more about other causes.
I'm also hoping to learn more about how to orient my research and work towards EA topics and engage with the community during my studies.
Hello everyone,
I am Joel Mwaura Kuiyaki from Kenya. I was introduced to the EA movement by a friend; I thought it might be one of those ordinary lessons, but I was actually intrigued and really enjoyed the first intro sessions we had. It was what I had been looking for for a long while.
I intend to specialize in effective giving, governance, and longtermism.
However, I am still interested in learning more about other cause areas and implementing them.
I'm extremely upset about the recent divergence from ForumMagnum / LessWrong.
I'm neutral on the QuickTakes rebrand: I'm a huge fan of shortform overall (if I were Dictator of Big EA I would ban Twitter and Facebook and move everybody to shortform/quick takes!), and I trust y'all to do whatever you can to increase adoption.
Thanks for sharing your feedback! Responding to each point:
A thought, with low epistemic confidence:
Some wealthy effective altruists argue that by accumulating more wealth, they can ultimately donate more in the long run. While this may initially seem like a value-neutral approach, it reinforces an unequal rather than altruistic distribution of power.
Widening wealth disparities and consolidating power in the hands of a few further marginalises those who are already disadvantaged. As we know, money is not inherently valuable; rather, it is how much someone has relative to others that determines its exchange value, and therefore their influence over scarce resources, which is zero-sum with other market participants, including recipients of charity and their benefactors.
test
Hi!
I'm Calum Richards from the Washington D.C. area. I've been an EA lurker since 2015, and I was originally introduced to the community through Scott Alexander's blog, Slate Star Codex (now Astral Codex Ten on Substack).
I taught high school math for a few years after college, and I'm now recruiting at GiveWell!
I'm happy to connect with anyone — best way to reach me is through DMs here, and you can also find me on LinkedIn.
Has anything changed on the forum recently? I am no longer able to open posts in new tabs with middle-click? Is it just me?
Sorry about that; we recently broke this while fixing another bug. A fix should be live momentarily.
I have a proposal for making an AGI killswitch.
Assuming god-like computational budgets, algorithmic improvements, and hardware improvements, could you use fully homomorphic encryption (FHE) to train and run an AGI? FHE allows you to run computations on encrypted data without decrypting it. Wouldn't such an AGI find the world illegible unless its input were specifically encrypted for it with a key?
Can the key then be split into shards so that m of n shards are needed to encrypt queries to the FHE AGI? Can you also create an m-of-n scheme so that if m of the n shards are destroyed, the key is useless?
Can you distribute the shards in such a way that, if the AGI destabilizes or hurts society, m of the n shards are destroyed as a byproduct and the AGI is functionally turned off?
I think this would create a killswitch that is hard for the AGI to subvert and easy for society to trigger.
At the limit you could create a shard for every person on the planet, with only one shard deletion required to destroy the key. Then if anyone doesn't like the way things are going with the AGI, they could delete their shard, killing it.
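To make the m-of-n shard idea concrete, here is a minimal sketch of threshold secret sharing (Shamir's scheme) in Python. This only illustrates splitting and reconstructing a key under toy parameters; the prime, function names, and thresholds are my own assumptions for illustration, not part of any existing FHE system.

```python
# Minimal sketch of m-of-n secret sharing (Shamir's scheme) over a prime field.
# Toy parameters only: shows how a key splits into shards and why destroying
# enough shards makes the key unrecoverable.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret


def _eval_poly(coeffs, x, prime=PRIME):
    """Evaluate a polynomial (coeffs[0] is the secret) at x, mod prime."""
    result = 0
    for coeff in reversed(coeffs):
        result = (result * x + coeff) % prime
    return result


def split_secret(secret, m, n, prime=PRIME):
    """Split `secret` into n shards; any m of them can reconstruct it."""
    if not 0 < m <= n:
        raise ValueError("need 0 < m <= n")
    coeffs = [secret] + [secrets.randbelow(prime) for _ in range(m - 1)]
    return [(x, _eval_poly(coeffs, x, prime)) for x in range(1, n + 1)]


def reconstruct(shards, prime=PRIME):
    """Recover the secret from at least m shards via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shards):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shards):
            if i == j:
                continue
            num = num * -xj % prime
            den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret


if __name__ == "__main__":
    key = secrets.randbelow(PRIME)         # stand-in for the FHE key
    shards = split_secret(key, m=3, n=5)   # 3-of-5 threshold
    assert reconstruct(shards[:3]) == key  # any 3 shards recover the key
    # With only 2 shards, interpolation yields garbage, so if 3 of the
    # 5 shards are destroyed the key is gone -- the "killswitch" property.
```

The harder parts of the proposal, such as forcing every query through this threshold and binding shard destruction to societal outcomes, are not captured by this sketch.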
You might argue that this is a brute-force method for alignment: everyone is happy with the AGI's behavior, or the AGI is rendered nonfunctional. This method avoids defining what alignment is. That matters because defining an aligned state is intractable: there are too many divergent and contradictory opinions about what alignment means to ever reach universal consensus. What alignment means will necessarily change from person to person and moment to moment. It is too subjective and value-laden a goal to be objectively defined and put into an algorithm to control an AI.
I would love criticism of this idea to know why it is impractical, not-even-wrong, or flawed.
One problem is that the FHE AGI could secretly steal enough key shards to generate its own functional copy of the key. This seems hard to do, but not impossible.
Another problem is that the AGI could get smart enough to break the FHE.
Running the AGI under FHE might also make it harder to interpret its internal workings.
I feel like this is an excessively software-driven way to do this. I have a suggestion.
Make it a hardware thing.
AI relies on circuit boards, memory, connections, etc., so instead of making the killswitch something that can be found out using an algorithm, make it a physical key that does a physical thing.
Think of a lock on an obscure door versus a really really good password.
You can brute force any password, given time. Physical access is access.
If you can't even find the widget? Yeah.
The opposite is also important: if the killswitch needs to be integrated and then removed, it should be something like "this needs to be done once every two years, but the rest of the time it's hidden via obscurity".
Also, just developing AGI at all would be difficult, so why hasten it for no reason? Hmmm.
Hello all, I'm new here and trying to find my way around the site. The main reasons I joined are:
Any replies are welcome!
Welcome to the EA forum. Great to hear that you would like to donate :).
You can find information about charity selection and tax on the Doneer Effectief website. You can donate to GiveWell-recommended charities via Doneer Effectief, but also to a few other charities. They also have a page with info about tax, but you may want to read the website of the Belastingdienst to double-check. (I can try to find the info in English for you upon request.)
If you are looking for a community where you can talk about giving and charity selection, see De Tien Procent Club which is specific for the Netherlands, and Giving What We Can which is international.
Thanks, that's very useful information! And Dutch is fine, by the way :)
Hi Shalott, welcome to the Forum!
Thanks, that looks useful!
TLDR: Bio data scientist here, concerned about AI risks, working to get his institution (DCRI) at Duke working on AI and alignment.
--
Long Version: I wrote the blurb below and pasted it into https://bard.google.com/ to get a TLDR to us...
Can you create a TLDR for the following post: Hi, Sage Arbor here. I just joined effectivealtruism.org and have been listening to the 80K podcast for about a year. I work in data science (PhD in biochem) and currently work with clinical trials at Duke. My main concern is AI in the next 10 years. I'd like my institution, the Duke Clinical Research Institute (DCRI), to create a center of excellence for AI and dedicate a percentage of work towards alignment. I'm starting to give talks to that end now. I thought this might be a good place to post alignment articles or work I have to get suggestions on improvement (as always, that work seems 6 months off when I find time).
Sure, here is a TLDR for Sage Arbor's post:
Here is a more concise TLDR:
I hope this is helpful!
comment 2