New & upvoted


Quick takes

I think it is good to have some ratio of upvoted/agreed to downvoted/disagreed posts in your portfolio. If all of your posts are upvoted and widely agreed with, then you're either playing it too safe or you've eaten the culture without chewing first.
A couple of takes from Twitter on the value of merch and signaling that I think are worth sharing here: 1) [embedded tweet] 2) [embedded tweet]
Do you like SB 1047, the California AI bill? Do you live outside the state of California? If you answered "yes" to both these questions, you can e-mail your state legislators and urge them to adopt a similar bill for your state. I've done this and am currently awaiting a response; it really wasn't that difficult. All it takes is a few links to good news articles or opinions about the bill and a paragraph or two summarizing what it does and why you care about it. You don't have to be an expert on every provision of the bill, nor do you have to have a group of people backing you. It's not nothing, but at least for me it was a lot easier than it sounded like it would be. I'll keep y'all updated on whether I get a response.
Something bouncing around my head recently ... I think I agree with the notion that "you can't solve a problem at the level it was created". A key point here is the difference between "solving" a problem and "minimising its harm".

* Solving a problem = engaging with a problem by going up a level from the one at which it was created
* Minimising its harm = trying to solve it at the level it was created

Why is this important? Because I think EA and AI Safety have historically focussed on (and have their respective strengths in) harm-minimisation.

This obviously applies at the micro level. Here are some bad examples:

* Problem: I'm experiencing intrusive and negative thoughts
  * Minimising its harm: engage with the thought using CBT
  * Attempting to solve it by going meta: apply metacognitive therapy, see thoughts as empty of intrinsic value, as farts in the wind
* Problem: I'm having fights with my partner about doing the dishes
  * Minimising its harm: create a spreadsheet, write down everything each of us does around the house, and calculate time spent
  * Attempting to solve it by going meta: discuss our communication styles and emotional states when frustration arises

But I also think this applies at the macro level:

* Problem: People love eating meat
  * Minimising harm by acting at the level the problem was created: asking them not to eat meat
  * Attempting to solve by going meta: replacing the meat with lab-grown meat
* Problem: Unaligned AI might kill us
  * Minimising harm by acting at the level the problem was created: understand the AI through mechanistic interpretability
  * Attempting to solve by going meta: probably just governance


Recent discussion

AI has enormous beneficial potential if it is governed well. However, in line with a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic...


might be worth defining RFP = request for proposal

SummaryBot
Executive summary: Open Philanthropy is soliciting funding proposals for work aimed at mitigating catastrophic risks from advanced AI systems, focusing on six key subject areas related to AI governance and policy.

Key points:
1. Eligible subject areas include technical AI governance, policy development, frontier company policy, international AI governance, law, and strategic analysis.
2. Proposal types can be research projects, training/mentorship programs, general support for existing organizations, or other projects.
3. Evaluation criteria include theory of change, track record, strategic judgment, project risks, cost-effectiveness, and scale.
4. Application process begins with a short Expression of Interest (EOI) form, followed by a full proposal if invited.
5. Funding is open to individuals and organizations globally, with typical initial grants ranging from $200k-$2M/year over 1-2 years.
6. Open Philanthropy aims to respond to EOIs within 3 weeks and may share promising proposals with other potential funders.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Peter commented on Silent cosmic rulers

In this post, I wish to outline an alternative picture to the grabby aliens model proposed by Hanson et al. (2021). The grabby aliens model assumes that “grabby aliens” expand far and wide in the universe, make clearly visible changes to their colonized volumes, and immediately...


This is a pretty interesting idea. I wonder if what we perceive as clumps of 'dark matter' might be or contain silent civilizations shrouded from interference. 

Maybe there is some kind of defense dominant technology or strategy that we don't yet comprehend. 

Magnus Vinding
I see, thanks for clarifying.

In terms of potential tradeoffs between expansion speeds vs. spending resources on other things, it seems to me that one could argue in both directions regarding what the tradeoffs would ultimately favor. For example, spending resources on the creation of Dyson swarms/other clearly visible activity could presumably also divert resources away from maximally fast expansion. (There is also the complication of transmitting the resulting energy/resources to frontier scouts, who might be difficult to catch up with if they are at ~max speeds.)

By rough analogy, if a human army were to colonize a vast (initially) uninhabited territory at max speed, it seems plausible that the best way to do so is by having frontier scouts rush out there in a nimble fashion, not by devoting a lot of resources toward the creation of massive structures right away. (And if we consider factors beyond speed, perhaps not being clearly visible also has strategic advantages if we add uncertainty about whether the territory really is uninhabited — an uncertainty that would presumably be present to some extent in all realistic scenarios.)

Of course, one could likewise make analogies that point in the opposite direction, but my point is simply that it seems unclear, at least to me, whether these kinds of tradeoff considerations would overall favor "loud civ speed > quiet civ speed".

Besides, FWIW, it seems quite plausible to me that advanced civs would be able to expand at the maximum possible speed regardless of whether they opted to be loud or quiet (e.g. they might not be driven by star power, or their technology might otherwise be so advanced that these contrasting choices do not constrain them either way).

I'm often asked about how the existential risk landscape has changed in the years since I wrote The Precipice. Earlier this year, I gave a talk on exactly that, and I want to share it here.

Here's a video of the talk and a full transcript.

 

In the years since I wrote...


Language models have been growing more capable even faster. But with them there is something very special about the human range of abilities, because that is the level of all the text they are trained on.

This sounds like a hypothesis that makes predictions we can go check. Did you have any particular evidence in mind? This and this come to mind, but there is plenty of other relevant stuff, and many experiments that could be quickly done for specific domains/settings. 

Note that you say "something very special" whereas my comment is actually about ...

Denkenberger
Have you seen data on spending for future pandemics before COVID and after?

Eliezer Yudkowsky periodically complains about people coming up with questionable plans with questionable assumptions to deal with AI, and then either:

  • Saying "well, if this assumption doesn't hold, we're doomed, so we might as well assume it's true."
  • Worse: coming up with cope-y reasons to assume that the assumption isn't even questionable at all. It's just a pretty reasonable worldview.

Sometimes the questionable plan is "an alignment scheme, which Eliezer thinks avoids the hard part of the problem." Sometimes it's a sketchy reckless plan that's probably going to blow up and make things worse.

Some people complain about Eliezer being a doomy Negative Nancy who's overly pessimistic.

I had an interesting experience a few months ago when I ran some beta-tests of my Planmaking and Surprise Anticipation workshop, which I think is illustrative.


i. Slipping into a more Convenient World

I have an exercise...


It's quite possible someone has already argued this, but I thought I should share just in case not.

Goal-Optimisers and Planner-Simulators

When people in the past discussed worries about AI development, this was often about AI agents - AIs that had goals they were attempting...

Adebayo Mubarak
Larks
Suppose we have some LLM interpretability technology that helps us take LLMs from a bit worse than humans at planning to a bit better (say because it reduces the risk of hallucinations), and these LLMs will ultimately be used by both humans and future agentic AIs.

The improvement from human-level planning to better-than-human level benefits both humans and optimiser AIs. But the improvement up to human level is a much bigger boost to the agentic AI, which would otherwise not have access to such planning capabilities, than to humans, who already had human-level abilities. So this interpretability technology actually ends up making crunch time worse.

It's different if this interpretability (or other form of safety/alignment work) also applied to future agentic AIs, because we could use it to directly reduce the risk from them.

It seems I'm getting the knack of it now...

So your argument here is that if we are going to go this route, then interpretability technology should also be used in the future as a measure to ensure the safety of these agentic AIs, as much as it is currently being used to improve their "planning capabilities".

Matt_Sharp posted a Quick Take

Lab-grown meat approved for pet food in the UK 

"The UK has become the first European country to approve putting lab-grown meat in pet food.

Regulators cleared the use of chicken cultivated from animal cells, which lab meat company Meatly is planning to sell to manufacturers.

The company says the first samples of its product will go on sale as early as this year, but it would only scale its production to reach industrial volumes in the next three years."

https://www.bbc.co.uk/news/articles/c19k0ky9v4yo


Rethink Priorities (RP) is excited to announce that Marcus A. Davis is now RP’s sole CEO. Former Co-CEO Peter Wildeford will remain at RP, focusing on projects in artificial intelligence. He will also continue his work as Chief Advisory Executive at the RP-sponsored think tank, the Institute for AI Policy and Strategy (IAPS). 

Since 2018, co-founders Marcus Davis and Peter Wildeford have served as Co-CEOs of RP. Their joint leadership has grown RP from a two-person research team into an international research organization with 60+ staff working around the world. Their guidance has helped expand RP’s research areas to include animal welfare, global health and development, and artificial intelligence policy.

The decision to transition to this new leadership structure comes after discussions around the opportunities for RP's future growth, Peter’s expertise and interests, and developments...


Effective Accelerationism, or e/acc, is a movement dedicated to promoting the acceleration of AI capabilities research. It is also a very annoying coalition between people I tend to find very naïve and people I find basically evil, a coalition that is trying to do things I think will make me and everyone I care about more likely to die, along with everything I care about which would otherwise outlast us, and whose very name was chosen to mock people like me. Unfortunately, the issues I disagree with them about are far too important for me to just ignore or look down on them. Although I haven't seen anyone with this label put their arguments in a way I think is very convincing, I do think there are a couple of arguments implicitly buried in (or that I can massage out of) their rhetoric that give me pause, and I want to review them, and my differences with them, now.

I will not be considering the arguments...


All of these thoughts and suggestions are based on my entirely subjective experiences (as a young early-career African woman based in Africa) engaging with the community since 2021. I decided to share these thoughts because I hope they might help someone in a similar situation...


I enjoyed reading your reflections, thanks for writing them up!

My advice: transferable skills are great because they are relevant to multiple actors and contexts. EA organizations are great, but do not hold a monopoly over impactful work. Plus, you are more likely to be impactful if you have a broader view of the world!

+1, I'm grateful in retrospect for not working at an EA organization right out of school :)

SummaryBot
Executive summary: The author reflects on their experiences in the Effective Altruism (EA) community as a young African woman, offering insights on work tests, maintaining compassion, power dynamics, career development, and a positive fellowship experience.

Key points:
1. Work tests in hiring processes are beneficial, providing learning opportunities and building confidence, especially for those with imposter syndrome.
2. Balancing rational thinking with emotional compassion is crucial; the author warns against losing touch with one's initial motivations for joining EA.
3. Power dynamics within EA can create potentially unsafe spaces for vulnerable individuals, particularly young or less experienced members.
4. Building non-EA work experience is important, as focusing solely on EA causes can limit career opportunities, especially in regions with fewer EA organizations.
5. The Impact Academy fellowship is highlighted as a positive experience, offering valuable learning, networking, and personal growth opportunities.
6. Organizations are advised to provide more detailed feedback to unsuccessful job candidates, and individuals are encouraged to trust their instincts in uncomfortable situations.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.