I think a lot of people have become complacent within their EA/rationalist bubbles, and have forgotten that communication requires meeting people halfway and adjusting your language to the audience. 

All the useful information or important people in the world are not confined to our tiny subcultures. If you want to make an impact, you have to talk to the outside world, and not dismiss people out of hand. 

Nicholas - yes; strongly upvoted.

The debate over AI risk went mainstream in the last few weeks. It's suddenly within the Overton window (the set of things considered reputable to discuss and believe in public). 

This means we all need to raise our game in terms of public communication -- especially around the topic of AI alignment and risks.

As you imply, this isn't just a matter of building better communication skills (simplicity, clarity, logic, links, vividness, reading Pinker's book, etc) -- crucial though those are. It's also a matter of embracing broader public communication values -- e.g. respect for the audience's time, good will, and ability to contribute to the conversation, and respect for their varying levels of knowledge, backgrounds, levels of fatigue, distraction, and mood, neurodiversity, etc.

This is a crucial time when the EA, Rationalist, and X risk communities either make a positive & decisive impact on the public discourse -- or when we confuse, alienate, and aggravate people. 

One good heuristic when communicating on social media is to cultivate more self-awareness about which technical terms, arguments, and writing styles are really intended to do status-signaling and virtue-signaling for one's ingroup (e.g. other EAs), versus which are actually intended to communicate effectively with ordinary folks outside the ingroup. Almost always, there's a tradeoff. What's impressive to our EA peers typically won't be persuasive outside EA. And what's persuasive to normies on Twitter won't be impressive to our EA peers.

We have to be willing to bite the bullet on this latter point. A moment when AI risk suddenly comes into public consciousness, and we have a time-limited opportunity to shape the public discourse in helpful directions, is not the time to build one's status and prestige within the movement by showing off how many obscure AI alignment terms one happens to know, or how clever a philosophical critique one can offer of some LessWrong post.

When I began writing on the EA Forum I was often told that I was unclear, but this is because there is a "rationalist style" that, in my view, is conducive not to understanding but to signaling community belonging.

This post would benefit from heeding its own advice to "Repeat your main points. Summarize your main points. Use section headers and bullet points and numbered lists. Color-highlight and number-subscript the same word when it's used in different contexts ("apple1 is not apple2")."

In particular, this post has no clear summary or conclusion, and its most concrete contribution (a list of suggestions) is somewhat buried in the middle and not referred to in the rest of the post. That list could use bullet points or numbers to make it clearer.

Good points, thanks! (Mainly the list part)

Strongly agreed! The quote that springs to mind is, "The critic hates most what he would have done himself if he had had the guts" - Steven Pressfield.

I'm close to releasing a new EA research communications video, and I hope it embodies some of these great ideas!

This helps people "parse" your

Just a note, this sentence is unfinished

[This comment is no longer endorsed by its author]

Thank you! Another person pointed this out on LW.
