
RAB

158 karma · Joined

Comments (10)

Any thoughts on where, e.g., $50K could be well spent?

On point 2, re: defense-dominant vs. offense-dominant future technologies: even if technologies turn out to be offense-dominant, the original colonists of a solar system are likely to retain substantial control over it, because even if they tend to lose battles for the system, antimatter or other highly destructive weapons let them render it useless to would-be conquerors.

In general I expect interstellar conflict to look vaguely Cold War-esque in the worst cases, because the weapons are likely to be catastrophically powerful, hard to defend against (e.g. large bodies accelerated to significant fractions of lightspeed), and, if slower than light, visible after launch with time for retaliation.

RAB

Just FYI: there is an extremely strong taboo (especially in the US) against saying "the n-word," and most people are not sympathetic to use-mention distinction arguments in this particular case, even if they would be in theory. I strongly suspect this is why your comment was downvoted.

Didn’t mean to imply secret info, edited the comment above.

That said, seeing most of their legal and compliance teams quit gives me much more serious concerns about illegal or unethical behavior.

Edit: I think I retract this second part - I don’t know if everyone’s quitting now that they can’t pay salaries, or just the legal/compliance teams.

RAB

I would highly, highly recommend that people just wait up to 72 hours for more information, rather than digging through Twitter or Reddit threads.

Edit: This is not to imply that I have secret information - just that this is unfolding very quickly and I expect to learn a lot more in the coming days.

RAB

Strikes me as… premature? We'll have a lot more clarity in the coming days, and resigning and publicly questioning FTX's ethics while we still fundamentally don't know what happened doesn't seem particularly productive.

If FTX just took risks and lost, this will look very dumb in hindsight. And if there turn out to be lots of unethical calls, we’ll have more than enough time to criticize them all to our hearts’ content. But at least we’ll have the facts.

Thanks for this! A couple of things:

  1. It's strange to me that this is aimed at people who aren't aware that MIRI staffers are quite pessimistic about AGI risk. After something like Eliezer's April Fools post, that seems pretty clear to anyone who's been paying attention - I would have been more interested in something that digs into the meat of the view rather than explaining its basic premises. Though it's possible I'm overestimating how familiar longtermist circles are with different views, including MIRI's.
  2. There are factors excluded from this model that the core claim - that alignment fails by default - depends on. Warning shots followed by a huge effort to avert disaster are one way things could go well, but we could also just be further from AGI than people think (something like 5-15 years is my understanding of the MIRI view), or takeoff speeds could be very slow.

 

I'm a bit frustrated because these two things seem indicative of a failure to engage with counterarguments. They read more as an attempt to instruct people who aren't familiar with the view than as a persuasive argument for it against different (informed) views.

Thanks Ben! That's very helpful info. I'll edit the initial comment to reflect my lowered credence in exaggeration or malfeasance.

RAB

Thanks - I meant "lone" as in one or two researchers raising these concerns in isolation, not to say they were unaffiliated with an institution. 

I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above, and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research / has been intellectually honest" would be a big update for me.

And since I've stated my suspicions, I apologize to Zoe if their claims turn out to be substantiated. This is an extremely important post if true, although I remain skeptical.

In particular, a post of the form: 

I have written a paper (link). 

(12 paragraphs of bravery claims)

(1 paragraph on why EA is failing)

(1 paragraph call to action)

strikes me as motivated not by a desire to increase community understanding of an important issue, but by a desire to generate sympathy for the authors and to rally support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience and the author was simply venting their frustrations.

But I'd hope that authors announcing an important paper wouldn't use the announcement solely as an opportunity to vent rather than to discuss the paper and its claims. That choice makes sense, though, if the goal is to create sympathy and marshal support without having to defend your object-level argument.

RAB

EDIT: See Ben's comment in the thread below on his experience as Zoe's advisor and confidence in her good intentions.

(Opening disclaimer: this was written to express my honest thoughts, not to be maximally diplomatic. My response is to the post, not the paper itself.)

I'd like to raise a point I haven't seen mentioned (though I'm sure it's somewhere in the comments). EA is a very high-trust environment, and has recently become a high-funding environment. That makes it a tempting target for less intellectually honest or pro-social actors.

If you read through the post, every paragraph except the last two (and the first sentence) is mostly bravery claims (in the sense of SSC's "Against Bravery Debates"). That is a major red flag when I'm reading something on the internet about a community I know well. It's much easier to start an online discussion about how you're being silenced than to defend your key claims on the merits. Smaller red flags: explicit warnings of impending harms if the critique is not heeded, and anonymous accounts posting mostly low-quality comments in support of the critique (shoutout to "AnonymousEA").

A lot of EAs have a natural tendency to defend someone who claims they're being silenced, and give their claims some deference to avoid being uncharitable. And it's pretty easy to exploit that tendency.

I don't know Zoe, and I don't want to throw accusations of exaggeration or malfeasance into the ring without cause. If these incidents occurred as described, the community should be extremely concerned. But on priors, I expect many claims along these lines ("please fund my research if you don't want to silence criticism") to come from a mix of unaligned academics hoping to do their own thing with longtermist funding, and less scrupulous Phil Torres-style actors.

Yes, I'm leaving myself more vulnerable to a world where longtermist orgs do in fact silence criticism and nobody hears about it except from brave lone researchers. But I'd like to see more evidence for that case before everyone gets too worried.
