This is a special post for quick takes by Ben_West. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Possible Vote Brigading

We have received an influx of people creating accounts to cast votes and comments over the past week, and we are aware that people who feel strongly about human biodiversity sometimes vote brigade on sites where the topic is being discussed. Please be aware that voting and discussion about some topics may not be representative of the normal EA Forum user base.
Huh, seems like you should just revert those votes, or turn off voting for new accounts. Seems better than just having people be confused about vote totals.
And maybe add a visible "new account" flag -- I understand not wanting to cut off existing users creating throwaways, but some people are using screenshots of forum comments as evidence of what EAs in general think.

Arguably also beneficial if you thought that we should typically make an extra effort to be tolerant of 'obvious' questions from new users.
Thanks! Yeah, this is something we've considered, usually in the context of trying to make the Forum more welcoming to newcomers, but this is another reason to prioritize that feature.
Yeah, I think we should probably go through and remove people who are obviously brigading (eg tons of votes in one hour and no other activity), but I'm hesitant to do too much more retroactively. I think it's possible that next time we have a discussion that has a passionate audience outside of EA we should restrict signups more, but that obviously has costs.

When you purge user accounts you automatically revoke their votes. I wouldn't be very hesitant to do that.
How do you differentiate someone who is sincerely engaging and happens to have just created an account now from someone who just wants their viewpoint to seem more popular and isn't interested in truth seeking?
Or are you saying we should just purge accounts that are clearly in the latter category, and accept that there will be some which are actually in the latter category but we can't distinguish from the former?
I think being like "sorry, we've reverted votes from recently signed-up accounts because we can't distinguish them" seems fine. Also, in my experience abusive voting patterns are usually very obvious, where people show up and only vote on one specific comment or post, or on content of one specific user, or vote so fast that it seems impossible for them to have read the content they are voting on.
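To make that concrete, here is a minimal sketch of how the "obvious" patterns described above might be flagged automatically. This is purely illustrative: the data shapes and thresholds are hypothetical assumptions, not the Forum's actual code or policy.

```typescript
// Illustrative only: hypothetical data shapes and thresholds, not the Forum's actual code.
interface Vote {
  targetId: string;       // the post or comment voted on
  targetAuthorId: string; // the author of that post or comment
  castAt: Date;
}

interface Account {
  createdAt: Date;
  votes: Vote[];
}

function medianGapSeconds(times: number[]): number {
  const sorted = [...times].sort((a, b) => a - b);
  const gaps = sorted.slice(1).map((t, i) => (t - sorted[i]) / 1000);
  if (gaps.length === 0) return Infinity;
  return gaps.sort((a, b) => a - b)[Math.floor(gaps.length / 2)];
}

// Flags the patterns described above: a very new account whose votes all land on one
// document or one author, or arrive too quickly for the content to have been read.
function looksLikeBrigading(account: Account, now: Date = new Date()): boolean {
  const { createdAt, votes } = account;
  if (votes.length < 5) return false; // too little activity to judge either way

  const accountAgeHours = (now.getTime() - createdAt.getTime()) / 3_600_000;
  const uniqueTargets = new Set(votes.map(v => v.targetId)).size;
  const uniqueAuthors = new Set(votes.map(v => v.targetAuthorId)).size;
  const gap = medianGapSeconds(votes.map(v => v.castAt.getTime()));

  return accountAgeHours < 48 && (uniqueTargets === 1 || uniqueAuthors === 1 || gap < 20);
}
```

In practice a moderator would presumably still review anything a heuristic like this flags by hand rather than acting on it automatically.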
How about: getting a lot of downvotes from new accounts doesn't decrease your voting-power and doesn't mean your comments won't show up on the frontpage?

Half a dozen of my latest comments have responded to HBDers. Since they get a notification it doesn't surprise me that those comments get immediate downvotes which hide them from the frontpage and subsequently means that they can easily decrease my voting-power on this forum (it went from 5 karma for a strong upvote to now 4 karma for a strong upvote).

Giving brigaders the power to hide things from the frontpage and decide which people have more voting-power on this forum seems undesirable.
Note: I went through Bob's comments and think it likely they were brigaded to some extent. I didn't think they were in general excellent, but they certainly were not negative-karma comments. I strong-upvoted the ones that were below zero, which was about three or four.
I think it is valid to use the strong upvote as a means of countering brigades, at least where a moderator has confirmed there is reason to believe brigading is active on a topic. My position is limited to comments below zero, because the harmful effects of brigades suppressing good-faith comments from visibility and affirmatively penalizing good-faith users are particularly acute. Although there are mod-level solutions, Ben's comments suggest they may have some downsides and require time, so I feel a community corrective that doesn't require moderators to pull away from more important tasks has value.
I also think it is important for me to be transparent about what I did and accept the community's judgment. If the community feels that is an improper reason to strong upvote, I will revert my votes.
Could you set a minimum karma threshold (or account age or something) for your votes to count? I would expect even a low threshold like 10 would solve much of the problem.
Yeah, interesting. I think we have a lot of lurkers who never get any karma and I don't want to entirely exclude them, but maybe some combo like "10 karma or your account has to be at least one week old" would be good.

Yeah I think that would be a really smart way to implement it.

Do the moderators think the effect of vote brigading reflects support from people who are pro-HBD or anti-HBD?
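For concreteness, a sketch of the combined rule floated a few comments up ("10 karma or your account has to be at least one week old"). The field names and constants are illustrative assumptions, not the Forum's actual schema or values.

```typescript
// Hypothetical user shape and constants; not the Forum's actual schema or values.
interface ForumUser {
  karma: number;
  createdAt: Date;
}

const MIN_KARMA = 10;
const MIN_ACCOUNT_AGE_DAYS = 7;

// Either signal alone is enough for votes to count, so long-time lurkers with little
// karma aren't excluded, while brand-new zero-karma accounts are.
function voteCounts(user: ForumUser, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - user.createdAt.getTime()) / 86_400_000;
  return user.karma >= MIN_KARMA || ageDays >= MIN_ACCOUNT_AGE_DAYS;
}
```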
The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:
Where he crossed the line was his decision to dox people who worked at Leverage or affiliated organizations by researching the people who worked there and posting their names to the EA forum
The user in question said this information came from searching LinkedIn for people who had listed themselves as having worked at Leverage and related organizations.
This is not "doxing" and it’s unclear to us why Kerry would use this term: for example, there was no attempt to connect anonymous and real names, which seems to be a key part of the definition of “doxing”. In any case, we do not consider this to be a violation of our norms.
At one point Forum moderators got a report that some of the information about these people was inaccurate. We tried to get in touch with the then-anonymous user, and when we were unable to, we redacted the names from the comment. Later, the user noticed the change and replaced the names. One of CEA’s staff asked the user to encode the names to allow those people more privacy, and the user did so.
Kerry says that a former Leverage staff member “requests that people not include her last name or the names of other people at Leverage” and indicates the user broke this request. However, the post in question requests that the author’s last name not be used in reference to that post, rather than in general. The comment in question doesn’t refer to the former staff member’s post at all, and was originally written more than a year before the post. So we do not view this comment as disregarding someone’s request for privacy.
Kerry makes several other accusations, and we similarly do not believe them to be violations of this Forum's norms. We have shared our analysis of these accusations with Leverage; they are, of course, entitled to disagree with us (and publicly state their disagreement), but the moderation team wants to make clear that we take enforcement of our norms seriously.
We would also like to take this opportunity to remind everyone that CEA’s Community Health team serves as a point of contact for the EA community, and if you believe harassment or other issues are occurring we encourage you to reach out to them.
How I wish the EA Forum had responded

I’ve found that communicating feedback/corrections often works best when I write something that approximates what I would’ve wished the other person had originally written.
Because of the need to sync more explicitly on a number of background facts and assumptions (and due to not having time for edits/revisions), my draft is longer than I think a moderator’s comment would need to be, were the moderation team to be roughly on the same page about the situation. While I am the Cathleen being referenced, I have had minimal contact with Leverage 2.0 and the EA Forum moderation team, so I expect this draft to be imperfect in various ways, while still pointing at useful and important parts of reality.
Here I’ve made an attempt to rewrite what I wish Ben West had posted in response to Kerry’s tweet thread:
The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:
Where he crossed the line was his decision to dox people who worked at Leverage or affiliated organizations by researching the people who worked there and posting their names to the EA forum.
We care a lot about ensuring that the EA Forum is a welcoming place where people are free to discuss important issues related to world improvement. While disagreement and criticism are an important part of that, we want to be careful not to allow for abuse to take place on our platform, and so we take such reports seriously.
After reviewing the situation, we have compiled the following response (our full review is still in process but we wanted to share what we have so far while the issue is live):
While Leverage was not a topic that we had flagged as “sensitive” back in Sept 2019 when the then-anonymous user originally made his post, the subsequent discussion around the individuals and organizations who were part of the Leverage/Paradigm ecosystem prior to its dissolution in June 2019 has led it to be classified as a sensitive topic to which we apply more scrutiny and around which we are more diligent about enforcing our norms.
In reviewing the particular post referenced above, we found a number of odd elements:
In posing and then answering his own "Question" on the EA Forum, the user accuses Paradigm (the org) of being a front or a secret replacement for Leverage (the org), despite having previously acknowledged the recent dissolution of the Leverage/Paradigm ecosystem (and in a context where the two organizations were publicly known to be closely connected).
The user largely acts as if he is sharing work history discovered on LinkedIn as his primary argument despite only 2 of the 13 named individuals having Leverage listed on their LinkedIn profiles.
The user names 4 additional people as having worked at Leverage despite citing no evidence of that fact (they did not have Leverage on their LinkedIn profiles).
The user then names 7 additional individuals who did not have Leverage on their LinkedIn profiles and who the user also did not believe to have originally worked at Leverage.
The user neglects to include any context or timelines from LinkedIn, e.g. whether there had been a recent change in work history at the point of the ecosystem dissolution, whether the individuals in question started at Leverage and then moved to Paradigm, started at Paradigm and then moved to Leverage, or started at either Leverage or Paradigm and then moved on to other projects. It is also unclear which, if any, continued to work at either Leverage or Paradigm after the ecosystem dissolution.
More than 2 years later, after reading a clarifying and detailed post about the relevant history of Leverage and Paradigm (which included a request to protect people's identities) and discovering that the Forum mods had removed the named individuals from his comment, the only edit the user decided to make was to deanonymize the names he’d compiled of people previously affiliated with the Leverage ecosystem.
When this post was initially brought to our attention in July of 2020, along with an explanation of possible negative consequences for the people listed (including ~4 individuals who the user was spreading potential misinformation about), we tried to get in touch with the then-anonymous user, and when we were unable to, we redacted the names from the comment and left an explanation for how the comment could make its point without using the personal information of the named individuals.
At the time, we had been informed that the user was mistaken about the work history for some of the people he listed, in large part due to relying on his incorrect personal assumptions. We did not consider the way that some of the 4 (and possibly others) might’ve intentionally excluded Leverage from their work histories, as we were focused on the ones who were incorrectly identified as having worked at Leverage and the potential consequences of that misinformation. Without yet knowing or investigating the full extent of the anonymous user’s posting history across multiple accounts, we did not suspect a pattern of hostile posts. Because of these factors, we did not evaluate whether this post might be a case of doxing.
In Dec 2021, Cathleen, one of the 4 who had been listed as working at Leverage (despite no record on LinkedIn), published a detailed account of her experience at Leverage/Paradigm. In it, she shared her perspective on harassment and ill-will that had come from the EA and Rationalist community members, and the negative effects of misinformation spread via public community forums. She explained why she had intentionally excluded Leverage from her LinkedIn many years prior and asked that people protect her identity as well as the identities of others from the Leverage ecosystem due to the risk of cancellation and harassment.
A few days later, the EA Forum user (who had revealed his real identity a couple months prior) returned to his anonymous post from Sept 2019 and deanonymized the first and last names of all 13 individuals he had previously named. This included Cathleen as well as the other 3 individuals who he attributed to Leverage (despite no record on LinkedIn). He accompanied the edits with a false/misleading comment (using the anonymous account) minimizing the substantive merit of prior requests for corrections to his post and claiming that all of the relevant information had actually been originally drawn from LinkedIn.
(At a cursory glance, it’s difficult to determine the most natural interpretation of the scope of Cathleen’s request, in order to assess the likelihood that the user was knowingly violating her wishes. We initially had a quite narrow interpretation, reading the quote out of context, but I think the situation becomes clearer if you take the time to read her entire post or the section that the quote was pulled from, entitled "We want to move forward with our lives (and not be cancelled)", which includes a direct reference to LinkedIn/her intent to keep her work history private.)
After receiving a new complaint about the potential harm of listing individuals' names, including the spread of misinformation caused by the user’s updated post, we reviewed the case and this time found no violation. As a compromise, we did offer to ask the user if they would be willing to encode the full names to help protect the individuals from potential negative effects arising from a simple google search.
We have since realized that many people (including people on our moderation team) took the user at his word without carefully reviewing the post. We had become confused about the specifics (falsely believing that he was sharing publicly available information from LinkedIn and thus believing that the information could be reasonably treated as true and that the objections raised were splitting hairs). We also did not accurately recognize the general nature/intent of his original post nor the potential negative effects of allowing the information to stand, and we did not evaluate the deanonymizing edits in the context of Cathleen’s recent public request and voiced concerns.
In Oct 2021, when the user had revealed his identity and his use of multiple anonymous accounts, we also failed to review the complete body of evidence and the ways that his actions had potentially violated our norms (e.g. using multiple anonymous accounts to convey similar views on a topic), as well as notice that the full pattern of posts indicated a type of ill-will that we discourage and that is especially relevant given the sensitive nature of the topic of Leverage/Paradigm.
In retrospect, we recognize that while we would like to give users the benefit of the doubt, when there are complaints of doxing, harassment, or other poor behavior, it makes sense for us to look more carefully at the situation and potentially draw on CEA’s Community Health team’s expertise in assisting individuals who flag that they’ve been wronged by a user on the Forum.
Something else we did not consider (because we unfortunately don’t have the bandwidth to track all the goings on in the EA and Rationalist communities) is that the level of threat experienced by people who had previously been part of the Leverage ecosystem had become quite high. In evaluating cases of disclosing private information or even assembling and publishing public information, context matters.
As an example:
On the face of it, it seems fine to have openly communist views or be LGBTQ, but history has shown us that during certain eras, e.g. the Second Red Scare in the US, creating and posting lists of such people (even if true or otherwise individually knowable) would likely subject them to harassment or worse.
It is not an excuse if someone else could create a similarly damaging list, and it doesn’t seem right to ask people to hide their work history from potential employers on a professional networking platform for the sole purpose of protecting themselves from being subject to defamatory public posts from ill-willed and/or ill-informed EAs and Rationalists. It is already unfortunate that the damage to the reputation of the relevant orgs has made it difficult for individuals to decide when and how to associate themselves with their former projects.
Guilt by association is not a good faith argument here, and at a minimum, it seems reasonable to honor individual wishes for Forum users to refrain from using their full names in affiliation with prior projects when requested (and to be careful not to do so in a way that falsely implies that the person is (or should be) ashamed of their affiliation).
After reviewing the overall situation, we think it’s important for users as well as moderators to recognize that posts about former Leverage/Paradigm staff do not happen in a vacuum. We strongly condemn the sharing of information about an individual’s prior professional or social affiliation in a way that intentionally or negligently exposes them to undue negative consequences.
If people are proud of their work at an organization but feel the need to disassociate themselves from that org publicly, it seems like something has gone wrong.
Given the overall pattern of posts from this user’s accounts across the EA Forum as well as Less Wrong from 2018-2021, it seems plausible to our team that the actions of this user may have actually been a significant contributing factor in fomenting negative sentiment towards this group of people. In light of that, and given the decision to list their full names in this comment in 2019 and then to edit the comment to include them again in 2021, after Cathleen’s detailed post (which includes information relevant to the comment’s hypothesis as well as a request and argument for privacy), we find it harder to defend an interpretation in which there was no intent to cause harm to the named individuals.
Further, it is our understanding that only a fraction of those named worked for Leverage or Paradigm at the time of the original post, and only ~1 to 2 of those named worked for Leverage or Paradigm at the time of the subsequent deanonymizing edit; given that, and assuming that both the argument for the post's relevance on the EA Forum and the argument for wrongdoing rest solely on the pattern of employment at these orgs, the weighing of potential benefit/value vs. cost/harm to prior project members seems particularly clear.
Intimidation and harassment can be executed in subtle ways, and while intent can be hard to ascertain, we encourage participants on the Forum to put in extra effort to ward against their posts landing in a gray area.
We don’t think that every case of bad behavior needs to fit neatly into our listed norms (and in evaluating cases like these, we also think it makes sense to revisit our listed norms to see if we should make changes for clarity or scope)[1], but it seems clear to us that the type of behavior exhibited by this user across their anonymous accounts is neither generous nor collaborative and it also seems likely to interfere with good discourse (not least of all by creating a hostile environment for some members of the community).
While we wish we would’ve done better, given our knowledge at the time, we don’t see this as a major failing, but we do recognize the harm that was caused and we want to emphasize that the use of anonymous accounts to harass individuals or groups is not something that we tolerate.
We have referred this case to the CEA Community Health team for further review. They will look at the totality of content from this user on this and related topics, examining patterns, severity, as well as the time period spanned by relevant posts and comments on the EA Forum as well as LW, as a way of assessing potential and actual negative impacts and intent. With the permission of relevant parties, they will also review registered complaints about the user’s behavior. With their input, we will deliberate further and decide whether there is mitigating action that the Forum moderators can and should take in this particular case.
If you have a world improvement related issue that you believe needs public attention but aren’t sure how to navigate it while minimizing unnecessary harm, we encourage you to reach out to the CEA Community Health team who can help organize your thoughts and perhaps mediate a discussion where more information can be exchanged before escalating to a public post. We recognize that in a situation where you suspect a conspiracy or are otherwise suspicious of others’ actions, it may be harder to prioritize the discussion norms of the Forum, but it is in those moments that the norms are most important to respect.
Be kind.
Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.
Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.
Stay on topic.
No spam. This forum is for discussions about improving the world, not the promotion of services.
Be honest.
Don’t mislead or manipulate.
Communicate your uncertainty and the true reasons behind your beliefs as much as you can.
*(The Forum moderators are currently grappling with an issue that may be relevant to situations involving sensitive topics like these: it does not violate our norms to inadvertently publish false or misleading information – but in the case that a correction or material clarification is made and the OP doesn’t update their post or comment, an argument could be made that the user is either in violation of the norm of scout mindset/willingness to update their view, or (if they do update their understanding but don’t update their post) they could be in violation of knowingly/deliberately spreading misinformation. We generally have not wanted to act as the arbiters of truth, so it’s not yet clear how to best moderate a situation like this.)
To share a brief thought, the above comment gives me a bad juju because it puts a contested perspective into a forceful and authoritative voice, while being long enough that one might implicitly forget that this is a hypothetical authority talking[1]. So it doesn't feel to me like a friendly conversational technique. I would have preferred it to be in the first person.
García Márquez has a similar but longer thing going on in The Handsomest Drowned Man in the World, where everything after "if that magnificent man had lived in the village" is a hypothetical.
I fairly frequently have conversations with people who are excited about starting their own project and, within a few minutes, convince them that they would learn less starting a project than they would working for someone else. I think this is basically the only opinion I have where I can regularly convince EAs to change their mind in a few minutes of discussion and, since there is now renewed interest in starting EA projects, it seems worth trying to write down.
It's generally accepted that optimal learning environments have a few properties:
You are doing something that is just slightly too hard for you.
In startups, you do whatever needs to get done. This will often be things that are way too easy (answering a huge number of support requests) or way too hard (pitching a large company CEO on your product when you've never even sold bubblegum before).
Established companies, by contrast, put substantial effort into slotting people into roles that are approximately at their skill level (though you still usually need to put in proactive effort to learn things at an established company).
Repeatedly practicing a skill in "chunks"
Similar to the last point, established companies have a "rhythm" where, e.g., one month per year everyone prioritizes writing up reflections on how the sales cycle is going, commenting on each other's writeups, and updating their own. Startups do things by the seat of their pants, which means employees are usually rapidly switching between tasks.
Feedback from experts/mentorship
Startup accelerators like YCombinator partially address this, but still a defining characteristic of starting your own project is that you are doing the work without guidance/oversight.
Moreover, even supposing you learn more at a startup, it's worth thinking about what it actually is you learn. I know way more about the laws regarding healthcare insurance than I did before starting a company, but that knowledge isn't super useful to me outside the startup context.
This isn't a 100% universal knockdown argument – some established companies suck for professional development, and some startups are really great. But by default, I would expect startups to be worse for learning.
I think I agree with this. Two things that might make starting a startup a better learning opportunity than your alternative, in spite of it being a worse learning environment:
You are undervalued by the job market (so you can get more opportunities to do cool things by starting your own thing)
You work harder in your startup because you care about it more (so you get more productive hours of learning)
All the things you mention are skills too though: knowing how to handle tasks that are too hard or just tasks you've never done before, prioritising between many easy and hard tasks, being able to work without oversight or clear rhythm, working in a fast-paced environment, knowing how to attract people to work with you, etc.
I lowkey feel these skills are less common and more valuable to society than many other skills. Guess it depends which skills you wish to pick up.
Working for/with people who are good at those skills seems like a pretty good bet to me.
E.g. "knowing how to attract people to work with you" – if person A has a manager who was really good at attracting people to work with them, and their manager is interested in mentoring, and person B is just learning how to attract people to work with them from scratch at their own startup, I would give very good odds that person A will learn faster.
Yeah definitely. I don't want to claim that learning is impossible at a startup – clearly it's possible – just that, all else equal, learning usually happens faster at existing companies.
It depends on what you want to learn. At a startup, people will often get a lot more breadth of scope than they would otherwise in an established company. Yes, you might not have in-house mentors or seasoned pros to learn from, but these days motivated people can fill in the holes outside the org.
The food sector has witnessed a surge in the production of plant-based meat alternatives that aim to mimic various attributes of traditional animal products; however, overall sensory appreciation remains low. This study employed open-ended questions, preference ranking, and an identification question to analyze sensory drivers and barriers to liking four burger patties, i.e., two plant-based (one referred to as pea protein burger and one referred to as animal-like protein burger), one hybrid meat-mushroom (75% meat and 25% mushrooms), and one 100% beef burger. Untrained participants (n=175) were randomly assigned to blind or informed conditions in a between-subject study. The main objective was to evaluate the impact of providing information about the animal/plant-based protein source/type, and to obtain product descriptors and liking/disliking levels from consumers. Results from the ranking tests for blind and informed treatments showed that the animal-like protein [Impossible] was the most preferred product, followed by the 100% beef burger. Moreover, in the blind condition, there was no significant difference in preferences between the beef burger and the hybrid and pea protein burgers. In the blind tasting, people preferred the pea protein burger over the hybrid one, contrary to the results of the informed tasting, which implies the existence of affecting factors other than pure hedonistic enjoyment. In the identification question, although consumers correctly identified the beef burger under the blind condition, they still preferred the animal-like burger.
Thanks for the comment and the followup comments by you and Michael, Ben. First, it's really cool that Impossible was preferred to beef burgers in a blind test! Even if the test is not completely fair! Impossible has been around for a while, and obviously they would've been pretty excited to do a blind taste test earlier if they thought they could win, which is evidence that the product has improved somewhat over the years.
I want to quickly add an interesting tidbit I learned from food science practitioners[1] a while back:
Blind taste tests are not necessarily representative of "real" consumer food preferences.
By that, I mean I think most laymen who think about blind taste tests believe that there's a Platonic taste attribute that's captured well by blind taste tests (or captured except for some variance). So if Alice prefers A to B in a blind taste test, this means that Alice in some sense should like A more than B. And if she buys (at the same price) B instead of A at the supermarket, that means either she was tricked by good marketing, or she has idiosyncratic non-taste preferences that makes her prefer B to A (eg positive associations with eating B with family or something).
I think this is false. Blind taste tests are just pretty artificial, and they do not necessarily reflect real world conditions where people eat food. This difference is large enough to sometimes systematically bias results (hence the worry about differentially salted Impossible burgers and beef burgers).
People who regularly design taste tests usually know that there are easy ways that they can manipulate taste tests so people will prefer more X in a taste test, in ways that do not reflect more people wanting to buy more X in the real world. For example, I believe adding sugar regularly makes products more "tasty" in the sense of being more highly rated in a taste test. However, it is not in fact the case that adding high amounts of sugar automatically makes a product more commonly bought. This is generally understood as people in fact having genuinely different food preferences in taste test conditions than consumer real world decisions.
Concrete example: Pepsi consistently performs better than Coca-Cola in blind taste tests. Yet most consumers consistently buy more Coke than Pepsi. Many people (including many marketers, like the writers of the hyperlinks above) believe that this is strong evidence that Coke just has really good brand/marketing, and is able to sell an inferior product well to the masses.
Personally, I'm not so sure. My current best guess is that this discrepancy is best explained by consumers' genuine drink preferences being different in blind taste tests from real-world use cases. As a concrete operationalization, if people make generic knock-offs of Pepsi and Coke with alternative branding, I would expect faux Pepsi (that tastes just like Pepsi) to perform better in blind taste tests than faux Coke (that tastes just like Coke), but for more people to buy faux Coke anyway.
For Impossible specifically, I remember doing a blind taste test in 2016 between Impossible beef and regular beef, and thinking that personally I liked the Impossible burger more.[2] But I also remember distinctly that the Impossible burger had a much stronger umami taste, which naively seems to me like exactly the type of thing that more taste testers will prefer in blind test conditions than real-world conditions.
This is a pretty long-winded comment, but I hope other people find this interesting!
This is lore, so it might well be false. I heard this from practitioners who sounded pretty confident, and it made logical sense to me, but this is different from the claims being actually empirically correct. Before writing this comment, I was hoping to find an academic source on this topic that I could quickly summarize, but I was unable to find one quickly. So unfortunately my reasoning transparency here is basically on the level of "trust me bro :/"
To be clear I think it's unlikely for this conclusion to be shared by most taste testers, for the aforementioned reason that if Impossible believed this, they would've done public taste tests way before 2023.
Be less ambitious

I don't have a huge sample size here, but the founders I've spoken to since the "EA has a lot of money so you should be ambitious" era started often seem to be ambitious in unhelpful ways. Specifically: I think they often interpret this advice to mean something like "think about how you could hire as many people as possible" and then they waste a bunch of resources on some grandiose vision without having validated that a small-scale version of their solution works.
Founders who instead work by themselves or with one or two people to try to really deeply understand some need and make a product that solves that need seem way more successful to me.[1]
Think about failure

The "infinite beta" mentality seems quite important for founders to have. "I have a hypothesis, I will test it, if that fails I will pivot in this way" seems like a good frame, and I think it's endorsed by standard startup advice (e.g. lean startup).
Of course, it's perfectly coherent to be ambitious about finding a really good value proposition. It's just that I worry that "be ambitious" primes people to be ambitious in unhelpful ways.
Two days after posting, SBF, who the thread lists as the prototypical example of someone who would never make a plan B, seems to have executed quite the plan B.
If your content is viewed by 100,000 people, making it more concise by one second saves an aggregate of one day across your audience. Respecting your audience means working hard to make your content shorter.
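The arithmetic behind that claim, as a rough illustration (one second saved per view, treating every view as a full read or watch):

```typescript
// Rough illustration of the aggregate-time claim above.
const views = 100_000;
const secondsSavedPerView = 1;

const totalSeconds = views * secondsSavedPerView;  // 100,000 s
const totalHours = totalSeconds / 3600;            // ~27.8 hours
const totalDays = totalHours / 24;                 // ~1.16 days

console.log(`${totalHours.toFixed(1)} hours ≈ ${totalDays.toFixed(2)} days`);
// Prints "27.8 hours ≈ 1.16 days": trimming one second saves roughly a day of audience time.
```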
When the 80k podcast describes itself as "unusually in depth," I feel like there's a missing mood: maybe there's no way to communicate the ideas more concisely, but this is something we should be sad about, not a point of pride.[1]
I'm unfairly picking on 80k; I'm not aware of any long-form content which has this mood that I claim is missing.
This is a thoughtful post and a really good sentiment IMO!
When the 80k podcast describes itself as "unusually in depth," I feel like there's a missing mood: maybe there's no way to communicate the ideas more concisely, but this is something we should be sad about, not a point of pride.
As you touched on, I’m not sure 80k is a good negative example, to me it seems like a positive example of how to handle this?
In addition to a tight intro, 80k has a great highlight section that, to me, looks like someone smart tried to solve this exact problem, balancing many considerations.
This highlight section has good takeaways and is well organized with headers. I guess this is useful for the 90% of people who only browse the content for 1 minute.
Thanks for the pushback! I agree that 80k cares more about the use of their listeners' time than most podcasters, although this is a low bar.
80k is operating under a lot of constraints, and I'm honestly not sure if they are actually doing anything incorrectly here. Notably, the fancy people who they get on the podcast probably aren't willing to devote many hours to rephrasing things in the most concise way possible, which really constrains their options.
I do still feel like there is a missing mood though.
Closing comments on posts

If you are the author of a post tagged "personal blog" (which notably includes all new Bostrom-related posts) and you would like to prevent new comments on your post, please email forum@centerforeffectivealtruism.org and we can disable them.
We know that some posters find the prospect of dealing with commenters so aversive that they choose not to post at all; this seems worse to us than posting with comments turned off.
Democratizing risk post update

Earlier this week, a post was published criticizing democratizing risk. This post was deleted by the (anonymous) author. The forum moderation team did not ask them to delete it, nor are we aware of their reasons for doing so. We are investigating some likely Forum policy violations, however, and will clarify the situation as soon as possible.
First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work. These advocates successfully garnered substantial media attention, and this seems to have irritated the most relevant established scientific communities (cryobiology and chemistry, respectively), both because many of the established scientists felt that something with no compelling scientific backing was getting more attention than their own “real” work, and because some of them (inaccurately) suspected that media attention for cryonics and MNT had translated into substantial (but unwarranted) funding for both fields.
Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields, and this contributed further to tensions between advocates of cryonics and MNT and the established scientific communities from which they could have most naturally recruited scientific talent and research funding.
Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded. For example, in the case of cryonics: according to an historical account by a cryonicist[26] (who may of course be biased), established cryobiologists organized to repeatedly label cryonicists as frauds until cryonicists threatened a lawsuit; they passed over cryonics-associated cryobiologists for promotions within the professional societies, or asked them to resign from those societies, and they also blocked cryonicists from obtaining new society memberships via amendments to the bylaws; they threatened to boycott the only supplier of storage vessels suitable for cryonics, forcing cryonicists to build their own storage vessels; they wrote a letter to the California Board of Funeral Directors and Embalmers urging them to investigate cryonicists and shut them down; and so on.
We are banning stone and their alternate account for one month for messaging users and accusing others of being sock puppets, even after the moderation team asked them to stop. If you believe that someone has violated Forum norms such as creating sockpuppet accounts, please contact the moderators.
I have recently been wondering what my expected earnings would be if I started another company. I looked back at the old 80k blog post arguing that there is some skill component to entrepreneurship, and noticed that, while serial entrepreneurs do have a significantly higher chance of a successful exit on their second venture, they raise their first rounds at substantially lower valuations. (Table 4 here.)
It feels so obvious to me that someone who's started a successful company in the past will be more likely to start one in the future, and I continue to be baffled why the data don't show this.[1]
I have heard some VCs say that they don't fund serial entrepreneurs because humans only have enough energy to start one company; I always thought this was kind of dumb, but maybe there is some truth to it.
Although the data don't rule this out either. Companies founded by second-time entrepreneurs might be less ambitious, raise money earlier, or have some other difference which is consistent with second-time entrepreneurs being more successful
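One way to see why the Table 4 result complicates the expected-earnings question: a higher chance of a successful exit and a lower first-round valuation (and hence more founder dilution) pull in opposite directions, and can even cancel out. The numbers below are purely made up to illustrate the structure, not estimates from the linked data:

```typescript
// All numbers are made-up placeholders to show the structure of the trade-off,
// not estimates drawn from the 80k post or Table 4.
interface Scenario {
  pSuccessfulExit: number;        // probability of a successful exit
  exitValue: number;              // company value at exit ($)
  founderOwnershipAtExit: number; // founder's remaining stake after dilution
}

const expectedFounderPayoff = (s: Scenario): number =>
  s.pSuccessfulExit * s.exitValue * s.founderOwnershipAtExit;

const firstTimeFounder: Scenario = {
  pSuccessfulExit: 0.15,
  exitValue: 50_000_000,
  founderOwnershipAtExit: 0.2,
};

// Higher exit odds, but a lower first-round valuation means giving up more equity early.
const serialFounder: Scenario = {
  pSuccessfulExit: 0.25,
  exitValue: 50_000_000,
  founderOwnershipAtExit: 0.12,
};

console.log(expectedFounderPayoff(firstTimeFounder)); // 1,500,000
console.log(expectedFounderPayoff(serialFounder));    // 1,500,000: the two effects can wash out
```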
Wild guesses as someone who knows very little about this:
I wonder if it's because people have sublinear returns on wealth, so their second company would be more mission driven and less optimized for making money. Also, there might be some selection bias in who needs to raise money vs being self funded.
But if I had to bet I would say that it's mostly noise, and there's not enough data to have a strong prior.
This post points out that brain preservation (cryonics) is potentially quite cheap on a $/QALY basis because people who are reanimated will potentially live for a very long time with very high quality of life.
It seems reasonable to assume that reanimated people would funge against future persons, so I'm not sure if this is persuasive for those who don't adopt person affecting views, but for those who do, it's plausibly very cost-effective.
This is interesting because I don't hear much about person affecting longtermist causes.
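To make the claimed structure explicit, here is a back-of-the-envelope version of the $/QALY argument. Every number below is a placeholder invented for illustration, not a figure from the linked post:

```typescript
// Placeholder numbers only, to show the shape of the $/QALY argument; none come from the post.
const costOfPreservation = 20_000;  // $ (hypothetical)
const pRevival = 0.02;              // probability of eventual revival (hypothetical)
const yearsLivedIfRevived = 1_000;  // (hypothetical)
const qualityWeight = 1.0;          // QALY weight per year if revived (hypothetical)

const expectedQalys = pRevival * yearsLivedIfRevived * qualityWeight; // 20 QALYs
const costPerQaly = costOfPreservation / expectedQalys;               // $1,000 per QALY

console.log(costPerQaly);
// The post's point: if revived people live very long, high-quality lives, even a small
// revival probability can push $/QALY low, at least for those with person-affecting views.
```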
A question I'm curious about regarding "person-affecting longtermism":
1. Person-affecting ethics are usually summarized as "caring about making people happy, not about making happy people". But what if you are not trying to create happy people in the future, but you know that someone else will in fact cause the creation of large numbers of future people. Will you care about making these people happy?
2. Also, if you apply a discount rate to longtermism, then I'm assuming the discount rate also applies to present people living very long lives - their present years should be worth more than their future years, no?
So for your stance to hold true I guess someone has to apply a zero (or low) discount rate in addition to accepting person-affecting views, and also not find more cost-effective opportunities to make future people happy as per 1?
Possible Vote Brigading
We have received an influx of people creating accounts to cast votes and comments over the past week, and we are aware that people who feel strongly about human biodiversity sometimes vote brigade on sites where the topic is being discussed. Please be aware that voting and discussion about some topics may not be representative of the normal EA Forum user base.
Huh, seems like you should just revert those votes, or turn off voting for new accounts. Seems better than just having people be confused about vote totals.
And maybe add a visible "new account" flag -- I understand not wanting to cut off existing users creating throwaways, but some people are using screenshots of forum comments as evidence of what EAs in general think.
Arguably also beneficially if you thought that we should typically make an extra effort to be tolerant of 'obvious' questions from new users.
Thanks! Yeah, this is something we've considered, usually in the context of trying to make the Forum more welcoming to newcomers, but this is another reason to prioritize that feature.
I agree.
Yeah, I think we should probably go through and remove people who are obviously brigading (eg tons of votes in one hour and no other activity), but I'm hesitant to do too much more retroactively. I think it's possible that next time we have a discussion that has a passionate audience outside of EA we should restrict signups more, but that obviously has costs.
When you purge user accounts you automatically revoke their votes. I wouldn't be very hesitant to do that.
How do you differentiate someone who is sincerely engaging and happens to have just created an account now from someone who just wants their viewpoint to seem more popular and isn't interested in truth seeking?
Or are you saying we should just purge accounts that are clearly in the latter category, and accept that there will be some which are actually in the latter category but we can't distinguish from the former?
I think being like "sorry, we've reverted votes from recently signed-up accounts because we can't distinguish them" seems fine. Also, in my experience abusive voting patterns are usually very obvious, where people show up and only vote on one specific comment or post, or on content of one specific user, or vote so fast that it seems impossible for them to have read the content they are voting on.
How about: getting a lot of downvotes from new accounts doesn't decrease your voting-power and doesn't mean your comments won't show up on the frontpage?
Half a dozen of my latest comments have responded to HBDers. Since they get a notification it doesn't surprise me that those comments get immediate downvotes which hides them from the frontpage and subsequently means that they can easily decrease my voting-power on this forum (it went from 5 karma for a strong upvote to now 4 karma for a strong upvote).
Giving brigaders the power to hide things from the frontpage and decide which people have more voting-power on this forum seems undesirable.
Note: I went through Bob's comments and think it likely they were brigaded to some extent. I didn't think they were in general excellent, but they certainly were not negative-karma comments. I strong-upvoted the ones that were below zero, which was about three or four.
I think it is valid to use the strong upvote as a means of countering brigades, at least where a moderator has confirmed there is reason to believe brigading is active on a topic. My position is limited to comments below zero, because the harmful effects of brigades suppressing good-faith comments from visibility and affirmatively penalizing good-faith users are particularly acute. Although there are mod-level solutions, Ben's comments suggest they may have some downsides and require time, so I feel a community corrective that doesn't require moderators to pull away from more important tasks has value.
I also think it is important for me to be transparent about what I did and accept the community's judgment. If the community feels that is an improper reason to strong upvote, I will revert my votes.
Edit: is to are
I agree.
Could you set a minimum karma threshold (or account age or something) for your votes to count? I would expect even a low threshold like 10 would solve much of the problem.
Yeah, interesting. I think we have a lot of lurkers who never get any karma and I don't want to entirely exclude them, but maybe some combo like "10 karma or your account has to be at least one week old" would be good.
Yeah I think that would be a really smart way to implement it.
Do the moderators think the effect of vote brigading reflect support from people who are pro-HBD or anti-HBD?
The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:
The user in question said this information came from searching LinkedIn for people who had listed themselves as having worked at Leverage and related organizations.
This is not "doxing" and it’s unclear to us why Kerry would use this term: for example, there was no attempt to connect anonymous and real names, which seems to be a key part of the definition of “doxing”. In any case, we do not consider this to be a violation of our norms.
At one point Forum moderators got a report that some of the information about these people was inaccurate. We tried to get in touch with the then-anonymous user, and when we were unable to, we redacted the names from the comment. Later, the user noticed the change and replaced the names. One of CEA’s staff asked the user to encode the names to allow those people more privacy, and the user did so.
Kerry says that a former Leverage staff member “requests that people not include her last name or the names of other people at Leverage” and indicates the user broke this request. However, the post in question requests that the author’s last name not be used in reference to that post, rather than in general. The comment in question doesn’t refer to the former staff member’s post at all, and was originally written more than a year before the post. So we do not view this comment as disregarding someone’s request for privacy.
Kerry makes several other accusations, and we similarly do not believe them to be violations of this Forum's norms. We have shared our analysis of these accusations with Leverage; they are, of course, entitled to disagree with us (and publicly state their disagreement), but the moderation team wants to make clear that we take enforcement of our norms seriously.
We would also like to take this opportunity to remind everyone that CEA’s Community Health team serves as a point of contact for the EA community, and if you believe harassment or other issues are occurring we encourage you to reach out to them.
How I wish the EA Forum had responded
I’ve found that communicating feedback/corrections often works best when I write something that approximates what I would’ve wished the other person had originally written.
Because of the need to sync more explicitly on a number of background facts and assumptions (and due to not having time for edits/revisions), my draft is longer than I think a moderator’s comment would need to be, were the moderation team to be roughly on the same page about the situation. While I am the Cathleen being referenced, I have had minimal contact with Leverage 2.0 and the EA Forum moderation team, so I expect this draft to be imperfect in various ways, while still pointing at useful and important parts of reality.
Here I’ve made an attempt to rewrite what I wish Ben West had posted in response to Kerry’s tweet thread:
The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:
We care a lot about ensuring that the EA Forum is a welcoming place where people are free to discuss important issues related to world improvement. While disagreement and criticism are an important part of that, we want to be careful not to allow for abuse to take place on our platform, and so we take such reports seriously.
After reviewing the situation, we have compiled the following response (our full review is still in process but we wanted to share what we have so far while the issue is live):
While Leverage was not a topic that we had flagged as “sensitive” back in Sept 2019 when the then-anonymous user originally made his post, the subsequent discussion around the individuals and organizations who were part of the Leverage/Paradigm ecosystem prior to its dissolution in June 2019 has led it to be classified as a sensitive topic to which we expend more scrutiny and are more diligent about enforcing our norms.
In reviewing the particular post referenced above, we found a number of odd elements:
When this post was initially brought to our attention in July of 2020, along with an explanation of possible negative consequences for the people listed (including ~4 individuals who the user was spreading potential misinformation about), we tried to get in touch with the then-anonymous user, and when we were unable to, we redacted the names from the comment and left an explanation for how the comment could make its point without using the personal information of the named individuals.
At the time, we had been informed that the user was mistaken about the work history for some of the people he listed, in large part due to relying on his incorrect personal assumptions. We did not consider the way that some of the 4 (and possibly others) might’ve intentionally excluded Leverage from their work histories, as we were focused on the ones who were incorrectly identified as having worked at Leverage and the potential consequences of that misinformation. Without yet knowing or investigating the full extent of the anonymous user’s posting history across multiple accounts, we did not suspect a pattern of hostile posts. Because of these factors, we did not evaluate whether this post might be a case of doxing.
In Dec 2021, Cathleen, one of the 4 who had been listed as working at Leverage (despite no record on LinkedIn), published a detailed account of her experience at Leverage/Paradigm. In it, she shared her perspective on harassment and ill-will that had come from the EA and Rationalist community members, and the negative effects of misinformation spread via public community forums. She explained why she had intentionally excluded Leverage from her LinkedIn many years prior and asked that people protect her identity as well as the identities of others from the Leverage ecosystem due to the risk of cancellation and harassment.
A few days later, the EA Forum user (who had revealed his real identity a couple months prior) returned to his anonymous post from Sept 2019 and deanonymized the first and last names of all 13 individuals he had previously named. This included Cathleen as well as the other 3 individuals who he attributed to Leverage (despite no record on LinkedIn). He accompanied the edits with a false/misleading comment (using the anonymous account) minimizing the substantive merit of prior requests for corrections to his post and claiming that all of the relevant information had actually been originally drawn from LinkedIn.
(At a cursory glance, it’s difficult to determine the most natural interpretation of the scope of Cathleen’s request, in order to assess the likelihood that the user was knowingly violating her wishes. We initially had a quite narrow interpretation, reading the quote out of context, but I think the situation becomes clearer if you take the time to read her entire post or the section that the quote was pulled from, entitled "We want to move forward with our lives (and not be cancelled)", which includes a direct reference to LinkedIn/her intent to keep her work history private.)
After receiving a new complaint about the potential harm of listing individuals' names, including the spread of misinformation caused by the user’s updated post, we reviewed the case and this time found no violation. As a compromise, we did offer to ask the user if they would be willing to encode the full names to help protect the individuals from potential negative effects arising from a simple google search.
We have since realized that many people (including people on our moderation team) took the user at his word without carefully reviewing the post. We had become confused about the specifics (falsely believing that he was sharing publicly available information from LinkedIn and thus believing that the information could be reasonably treated as true and that the objections raised were splitting hairs). We also did not accurately recognize the general nature/intent of his original post nor the potential negative effects of allowing the information to stand, and we did not evaluate the deanonymizing edits in the context of Cathleen’s recent public request and voiced concerns.
In Oct 2021, when the user had revealed his identity and his use of multiple anonymous accounts, we also failed to review the complete body of evidence and the ways that his actions had potentially violated our norms (e.g. using multiple anonymous accounts to convey similar views on a topic), as well as notice that the full pattern of posts indicated a type of ill-will that we discourage and that is especially relevant given the sensitive nature of the topic of Leverage/Paradigm.
In retrospect, we recognize that while we would like to give users the benefit of the doubt, when there are complaints of doxing, harassment, or other poor behavior, it makes sense for us to look more carefully at the situation and potentially draw on CEA’s Community Health team’s expertise in assisting individuals who flag that they’ve been wronged by a user on the Forum.
Something else we did not consider (because we unfortunately don’t have the bandwidth to track all the goings on in the EA and Rationalist communities) is that the level of threat experienced by people who had previously been part of the Leverage ecosystem had become quite high. In evaluating cases of disclosing private information or even assembling and publishing public information, context matters.
As an example:
On the face of it, it seems fine to have openly communist views or be LGBTQ, but history has shown us that during certain eras, e.g. the Second Red Scare in the US, creating and posting lists of such people (even if true or otherwise individually knowable) would likely subject them to harassment or worse.
The fact that someone else could create a similarly damaging list is not an excuse, and it doesn't seem right to ask people to hide their work history from potential employers on a professional networking platform for the sole purpose of protecting themselves from defamatory public posts by ill-willed and/or ill-informed EAs and Rationalists. It is already unfortunate that the damage to the reputation of the relevant orgs has made it difficult for individuals to decide when and how to associate themselves with their former projects.
Guilt by association is not a good faith argument here, and at a minimum, it seems reasonable to honor individual wishes for Forum users to refrain from using their full names in affiliation with prior projects when requested (and to be careful not to do so in a way that falsely implies that the person is (or should be) ashamed of their affiliation).
After reviewing the overall situation, we think it’s important for users as well as moderators to recognize that posts about former Leverage/Paradigm staff do not happen in a vacuum. We strongly condemn the sharing of information about an individual’s prior professional or social affiliation in a way that intentionally or negligently exposes them to undue negative consequences.
If people are proud of their work at an organization but feel the need to disassociate themselves from that org publicly, it seems like something has gone wrong.
Given the overall pattern of posts from this user's accounts across the EA Forum as well as Less Wrong from 2018-2021, it seems plausible to our team that the actions of this user may have been a significant contributing factor in fomenting negative sentiment towards this group of people. In light of that, and given the decision to list their full names in this comment in 2019 and then to edit the comment to include them again in 2021, after Cathleen's detailed post (which includes information relevant to the comment's hypothesis as well as a request and argument for privacy), we find it harder to defend an interpretation where there was not an intent to cause harm to the named individuals.
Further, it is our understanding that only a fraction of those named worked for Leverage or Paradigm at the time of the original post, and only ~1 to 2 of those named worked for Leverage or Paradigm at the time of the subsequent deanonymizing edit. Given that, and assuming that the case for both the relevancy of the post on the EA Forum and the claim of wrongdoing relies solely on the pattern of employment at these orgs, the weighing of potential benefit/value vs. cost/harm to prior project members seems particularly clear.
Intimidation and harassment can be executed in subtle ways, and while intent can be hard to ascertain, we encourage participants on the Forum to put in extra effort to ward against their posts landing in a gray area.
We don't think that every case of bad behavior needs to fit neatly into our listed norms (and in evaluating cases like these, we also think it makes sense to revisit our listed norms to see if we should make changes for clarity or scope)[1], but it seems clear to us that the type of behavior exhibited by this user across their anonymous accounts is neither generous nor collaborative, and it also seems likely to interfere with good discourse (not least of all by creating a hostile environment for some members of the community).
While we wish we had done better, given our knowledge at the time we don't see this as a major failing. But we do recognize the harm that was caused, and we want to emphasize that the use of anonymous accounts to harass individuals or groups is not something that we tolerate.
We have referred this case to the CEA Community Health team for further review. They will look at the totality of content from this user on this and related topics, examining the patterns, severity, and time period spanned by relevant posts and comments on the EA Forum and LW, as a way of assessing potential and actual negative impacts and intent. With the permission of relevant parties, they will also review registered complaints about the user's behavior. With their input, we will deliberate further and decide whether there is mitigating action that the Forum moderators can and should take in this particular case.
If you have a world improvement related issue that you believe needs public attention but aren’t sure how to navigate it while minimizing unnecessary harm, we encourage you to reach out to the CEA Community Health team who can help organize your thoughts and perhaps mediate a discussion where more information can be exchanged before escalating to a public post. We recognize that in a situation where you suspect a conspiracy or are otherwise suspicious of others’ actions, it may be harder to prioritize the discussion norms of the Forum, but it is in those moments that the norms are most important to respect.
*(The Forum moderators are currently grappling with an issue that may be relevant to situations involving sensitive topics like these: it does not violate our norms to inadvertently publish false or misleading information – but in the case that a correction or material clarification is made and the OP doesn’t update their post or comment, an argument could be made that the user is either in violation of the norm of scout mindset/willingness to update their view, or (if they do update their understanding but don’t update their post) they could be in violation of knowingly/deliberately spreading misinformation. We generally have not wanted to act as the arbiters of truth, so it’s not yet clear how to best moderate a situation like this.)
To share a brief thought, the above comment gives me a bad juju because it puts a contested perspective into a forceful and authoritative voice, while being long enough that one might implicitly forget that this is a hypothetical authority talking[1]. So it doesn't feel to me like a friendly conversational technique. I would have preferred it to be in the first person.
Garcia Márquez has a similar but longer thing going on in The Handsomest Drowned Man In The World, where everything after "if that magnificent man had lived in the village" is a hypothetical.
(fwiw I didn't mind the format and felt like this was Cathleen engaging in good faith.)
I would have so much respect for CEA if they had responded like this.
Startups aren't good for learning
I fairly frequently have conversations with people who are excited about starting their own project and, within a few minutes, convince them that they would learn less starting a project than they would working for someone else. I think this is basically the only opinion I have where I can regularly convince EAs to change their mind in a few minutes of discussion, and since there is now renewed interest in starting EA projects, it seems worth trying to write down.
It's generally accepted that optimal learning environments have a few properties:
Moreover, even supposing you learn more at a startup, it's worth thinking about what it actually is you learn. I know way more about the laws regarding healthcare insurance than I did before starting a company, but that knowledge isn't super useful to me outside the startup context.
This isn't a 100% universal knockdown argument – some established companies suck for professional development, and some startups are really great. But by default, I would expect startups to be worse for learning.
I think I agree with this. Two things that might make starting a startup a better learning opportunity than your alternative, in spite of it being a worse learning environment:
All the things you mention are skills too though: knowing how to handle tasks that are too hard or just tasks you've never done before, prioritising between many easy and hard tasks, being able to work without oversight or a clear rhythm, working in a fast-paced environment, knowing how to attract people to work with you, etc.
I lowkey feel these skills are less common and more valuable to society than many other skills. Guess it depends which skills you wish to pick up.
Thanks! I'm not sure I fully understand your comment – are you implying that the skills you mention are easier to learn in a startup?
Unsurprisingly, I disagree with that view :)
Yes, I was implying these skills are easier to learn in a startup.
I'd be keen to know your view. Where do you feel is a better place to pick up such skills?
Working for/with people who are good at those skills seems like a pretty good bet to me.
E.g. "knowing how to attract people to work with you" – if person A has a manager who was really good at attracting people to work with them, and their manager is interested in mentoring, and person B is just learning how to attract people to work with them from scratch at their own startup, I would give very good odds that person A will learn faster.
Can you give some advice about the topic of attracting good people to work with you, or have any writeups you like?
Thank you, that makes sense. So - being a cofounder / early employee could work? (Assuming the founder has these skills)
Yeah definitely. I don't want to claim that learning is impossible at a startup – clearly it's possible – just that, all else equal, learning usually happens faster at existing companies.
Makes sense!
It depends on what you want to learn. At a startup, people will often get a lot more breadth of scope than they would otherwise in an established company. Yes, you might not have in-house mentors or seasoned pros to learn from, but these days motivated people can fill in the holes outside the org.
It depends what you want to learn
As you said.
(I don't see why to break it up more than that)
Plant-based burgers now taste better than beef
https://www.sciencedirect.com/science/article/abs/pii/S0963996923003587
Interesting! Some thoughts:
(I couldn't get access to the paper.)
This Twitter thread points out that the beef burger was less heavily salted.
Thanks for the comment and the followup comments by you and Michael, Ben. First, it's really cool that Impossible was preferred to beef burgers in a blind test! Even if the test is not completely fair! Impossible has been around for a while, and obviously they would've been pretty excited to do a blind taste test earlier if they thought they could win, which is evidence that the product has improved somewhat over the years.
I want to quickly add an interesting tidbit I learned from food science practitioners[1] a while back:
Blind taste tests are not necessarily representative of "real" consumer food preferences.
By that, I mean I think most laymen who think about blind taste tests believe that there's a Platonic taste attribute that's captured well by blind taste tests (or captured except for some variance). So if Alice prefers A to B in a blind taste test, this means that Alice in some sense should like A more than B. And if she buys (at the same price) B instead of A at the supermarket, that means either she was tricked by good marketing, or she has idiosyncratic non-taste preferences that make her prefer B to A (eg positive associations with eating B with family or something).
I think this is false. Blind taste tests are just pretty artificial, and they do not necessarily reflect real world conditions where people eat food. This difference is large enough to sometimes systematically bias results (hence the worry about differentially salted Impossible burgers and beef burgers).
People who regularly design taste tests usually know that there are easy ways to manipulate a taste test so that people will prefer more of X in the test, in ways that do not reflect more people wanting to buy X in the real world. For example, I believe adding sugar regularly makes products more "tasty" in the sense of being more highly rated in a taste test. However, it is not in fact the case that adding high amounts of sugar automatically makes a product more commonly bought. This is generally understood as people having genuinely different food preferences in taste test conditions than in their real-world consumer decisions.
Concrete example: Pepsi consistently performs better than Coca-Cola in blind taste tests. Yet most consumers consistently buy more Coke than Pepsi. Many people (including many marketers, like the writers of the hyperlinks above) believe that this is strong evidence that Coke just has really good brand/marketing, and is able to sell an inferior product well to the masses.
Personally, I'm not so sure. My current best guess is that this discrepancy is best explained by consumers' genuine drink preferences being different in blind taste tests from real-world use cases. As a concrete operationalization, if people made generic knock-offs of Pepsi and Coke with alternative branding, I would expect faux Pepsi (that tastes just like Pepsi) to perform better in blind taste tests than faux Coke (that tastes just like Coke), but for more people to buy faux Coke anyway.
For Impossible specifically, I remember doing a blind taste test in 2016 between Impossible beef and regular beef, and thinking that personally I liked the Impossible burger more.[2] But I also remember distinctly that the Impossible burger had a much stronger umami taste, which naively seems to me like exactly the type of thing that more taste testers will prefer in blind test conditions than in real-world conditions.
This is a pretty long-winded comment, but I hope other people find this interesting!
This is lore, so it might well be false. I heard this from practitioners who sounded pretty confident, and it made logical sense to me, but this is different from the claims being actually empirically correct. Before writing this comment, I was hoping to find an academic source on this topic I can quickly summarize, but I was unable to find it quickly. So unfortunately my reasoning transparency here is basically on the level of "trust me bro :/"
To be clear I think it's unlikely for this conclusion to be shared by most taste testers, for the aforementioned reason that if Impossible believed this, they would've done public taste tests way before 2023.
Reversing start up advice
In the spirit of reversing advice you read, some places where I would give the opposite advice of this thread:
Be less ambitious
I don't have a huge sample size here, but the founders I've spoken to since the "EA has a lot of money so you should be ambitious" era started often seem to be ambitious in unhelpful ways. Specifically: I think they often interpret this advice to mean something like "think about how you could hire as many people as possible" and then they waste a bunch of resources on some grandiose vision without having validated that a small-scale version of their solution works.
Founders who instead work by themselves or with one or two people to try to really deeply understand some need and make a product that solves that need seem way more successful to me.[1]
Think about failure
The "infinite beta" mentality seems quite important for founders to have. "I have a hypothesis, I will test it, if that fails I will pivot in this way" seems like a good frame, and I think it's endorsed by standard start up advice (e.g. lean startup).
Of course, it's perfectly coherent to be ambitious about finding a really good value proposition. It's just that I worry that "be ambitious" primes people to be ambitious in unhelpful ways.
Two days after posting, SBF, who the thread lists as the prototypical example of someone who would never make a plan B, seems to have executed quite the plan B.
Longform's missing mood
If your content is viewed by 100,000 people, making it more concise by one second saves an aggregate of one day across your audience. Respecting your audience means working hard to make your content shorter.
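To spell out that arithmetic, here is a minimal sketch that uses only the illustrative audience size and one-second figure from the paragraph above:

```python
# Back-of-the-envelope: aggregate audience time saved by trimming one second of content.
viewers = 100_000          # audience size from the example above
seconds_saved_each = 1     # one second trimmed per viewer

total_seconds = viewers * seconds_saved_each
print(total_seconds / 3600)    # ~27.8 hours
print(total_seconds / 86400)   # ~1.16 days of aggregate audience time
```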
When the 80k podcast describes itself as "unusually in depth," I feel like there's a missing mood: maybe there's no way to communicate the ideas more concisely, but this is something we should be sad about, not a point of pride.[1]
I'm unfairly picking on 80k, I'm not aware of any long-form content which has this mood that I claim is missing ↩︎
This is a thoughtful post and a really good sentiment IMO!
As you touched on, I'm not sure 80k is a good negative example; to me it seems like a positive example of how to handle this?
In addition to a tight intro, 80k has a great highlights section that, to me, looks like someone smart tried to solve this exact problem, balancing many considerations.
This highlights section has good takeaways and is well organized with headers. I guess this is useful for the 90% of people who only browse the content for a minute.
Thanks for the pushback! I agree that 80k cares more about the use of their listeners' time than most podcasters, although this is a low bar.
80k is operating under a lot of constraints, and I'm honestly not sure if they are actually doing anything incorrectly here. Notably, the fancy people who they get on the podcast probably aren't willing to devote many hours to rephrasing things in the most concise way possible, which really constrains their options.
I do still feel like there is a missing mood though.
Closing comments on posts
If you are the author of a post tagged "personal blog" (which notably includes all new Bostrom-related posts) and you would like to prevent new comments on your post, please email forum@centerforeffectivealtruism.org and we can disable them.
We know that some posters find the prospect of dealing with commenters so aversive that they choose not to post at all; this seems worse to us than posting with comments turned off.
See the updates here.
@lukeprog's investigation into Cryonics and Molecular nanotechnology seems like it may have relevant lessons for the nascent attempts to build a mass movement around AI safety:
We are banning stone and their alternate account for one month for messaging users and accusing others of being sock puppets, even after the moderation team asked them to stop. If you believe that someone has violated Forum norms such as creating sockpuppet accounts, please contact the moderators.
An EA Limerick
(Lacey told me this was not good enough to actually submit to the writing contest, so publishing it as a short form.)
Nice!
I have recently been wondering what my expected earnings would be if I started another company. I looked back at the old 80K blog post arguing that there is some skill component to entrepreneurship, and noticed that, while serial entrepreneurs do have a significantly higher chance of a successful exit on their second venture, they raise their first rounds at substantially lower valuations. (Table 4 here.)
It feels so obvious to me that someone who's started a successful company in the past will be more likely to start one in the future, and I continue to be baffled why the data don't show this.[1]
I have heard some VCs say that they don't fund serial entrepreneurs because humans only have enough energy to start one company; I always thought this was kind of dumb, but maybe there is some truth to it.
Though the data don't rule this out either: companies founded by second-time entrepreneurs might be less ambitious, raise money earlier, or have some other difference which is consistent with second-time entrepreneurs being more successful.
Wild guesses as someone that knows very little about this:
I wonder if it's because people have sublinear returns on wealth, so their second company would be more mission-driven and less optimized for making money. Also, there might be some selection bias in who needs to raise money vs. being self-funded.
But if I had to bet I would say that it's mostly noise, and there's not enough data to have a strong prior.
Person-affecting longtermism
This post points out that brain preservation (cryonics) is potentially quite cheap on a $/QALY basis because people who are reanimated will potentially live for a very long time with very high quality of life.
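As a rough illustration of how that $/QALY argument works, here is a hedged sketch; all numbers below are hypothetical placeholders chosen to show the structure of the calculation, not figures from the linked post:

```python
# Hypothetical back-of-the-envelope cost-effectiveness sketch; none of these
# numbers come from the linked post, they are placeholders to show the structure.
preservation_cost = 25_000       # assumed up-front cost of preservation ($)
p_reanimation = 0.05             # assumed probability reanimation ever works
qalys_if_reanimated = 1_000      # assumed QALYs gained conditional on success

expected_qalys = p_reanimation * qalys_if_reanimated   # 50 expected QALYs
cost_per_qaly = preservation_cost / expected_qalys     # $500 per QALY
print(f"{expected_qalys:.0f} expected QALYs, ${cost_per_qaly:,.0f} per QALY")
```

The point of the sketch is just that a large enough QALY payoff conditional on success can dominate the calculation even at low success probabilities.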
It seems reasonable to assume that reanimated people would funge against future persons, so I'm not sure if this is persuasive for those who don't adopt person affecting views, but for those who do, it's plausibly very cost-effective.
This is interesting because I don't hear much about person affecting longtermist causes.
Person-affecting ethics are usually summarized as "caring about making people happy, not about making happy people". But what if you are not trying to create happy people in the future, yet you know that someone else will in fact cause the creation of large numbers of future people? Will you care about making these people happy?
So for your stance to hold true, I guess someone has to apply a zero (or low) discount rate in addition to accepting a person-affecting view, and also not find more cost-effective opportunities to make future people happy, as per 1?