Ebenezer Dukakis



I wonder if this is an argument for Good Ventures funding more endowments. If they endow a re-granter that funds something weird, they can say "well the whole point of this endowment was to diversify decision-making; it's out of our hands at this point". From the perspective of increasing the diversity of the funding landscape, a no-strings endowment seems best, although it could have other disadvantages.

[By 'endowment' I'm suggesting a large, one-time lump sum given to a legally independent organization. That organization could choose to give away the endowment quickly and then dissolve, or give some legally mandated minimum disbursement every year, or anything in between.]

There was an LTFF evaluation a few years ago.

I wonder if you could make post-grant assessment really cheap by automatically emailing grantees some sort of Google Form. It could show them what they wrote on their grant application and ask them how well they achieved their stated objectives, plus various other questions. You could have a human randomly audit the responses to incentivize honesty.
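The random-audit step could be a few lines of code. Here's a minimal sketch (all names and the 10% audit rate are hypothetical, just for illustration):

```python
import random

def select_for_audit(responses, audit_rate=0.1, seed=None):
    """Randomly flag a fraction of self-reported grant outcomes for human review.

    A fixed seed makes the selection reproducible, which helps if you want
    the audit sample to be verifiable after the fact.
    """
    rng = random.Random(seed)
    return [r for r in responses if rng.random() < audit_rate]

# Hypothetical grantee self-reports collected from the form
responses = [{"grant_id": i, "self_report": "..."} for i in range(100)]
audited = select_for_audit(responses, audit_rate=0.1, seed=42)
```

The key property is that grantees know in advance that any response might be audited, but not which ones will be.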

my opinion for edutainment more broadly: "Sounds like a good idea at first glance, basically no wins. Probably doomed"

Are you sure there are basically no wins? Kaj Sotala has an interesting anecdote about the game DragonBox in this blog post. Apparently it's a super fun puzzle game that incidentally teaches kids basic algebra.

When I was a kid, I played some of edugames of the form "pilot a submarine, dodge enemies, occasionally a submarine-themed math problem pops up". I'm not excited about that sort of game. I'm more excited about what I'd call a "stealth edugame" -- a game that would sell just fine as an ordinary game, but teaches you useful knowledge that happens to be embedded in the game mechanics. Consider the game Railroad Tycoon 2. It's not marketed as an edugame, and it's a lot of fun, but as you play you'll naturally pick up some finance concepts like: debt and equity financing, interest rates, the business cycle, profit and loss, dividends, buying stock on margin, short selling, M&A, bankruptcy, liquidation, etc. You'll get an intuitive idea of what supply and demand are, how to optimize your operations for profitability, and how to prioritize investments based on their net present value.

Another example along the same lines -- not primarily edutainment, but apparently law professors play clips of that movie in their classes because it is so accurate.

Don't forget about organizational governance for AI labs as well. It's a travesty that we still don't have a good answer to "how would you prevent org governance from going wrong, like it went wrong at OpenAI". I spitballed some ideas in this comment.

NVIDIA's stock price has risen by about 25% in the past month or so. Seems like the market believes AI Pause is going to fail?

I like the idea of using stock prices as a metric for whether AI pause will succeed or not. Aside from the obvious point that stock prices represent an aggregate of what investors believe, this metric also seems fairly resistant to Goodharting. If you can find some way to make NVIDIA's stock crash, that will probably trigger less AI investment, which acts like a partial pause.

Seems like there's currently a feedback loop between AI progress, hype, and investment. Any one of those three could be a good target if you want things to slow down.

From my understanding of boards and governance structures, I think that few are actually very effective, and it's often very difficult to tell this from outside the organization.

It seems valuable to differentiate between "ineffective by design" and "ineffective in practice". Which do you think is more the cause for the trend you're observing?

OP is concerned that Anthropic's governance might fall into the "ineffective by design" category. Like, it's predictable in advance that something could maybe go wrong here.

If yours is more of an "ineffective in practice" argument, that seems especially concerning -- it would mean governance can fail even when it appeared effective by design, ex ante.

In any case, I'd really like to see dedicated efforts to argue for ideal AI governance structures and documents. It feels like EA has overweighted the policy side of AI governance and underweighted the organizational founding documents side. Right now we're in the peanut gallery, criticizing how things are going at OpenAI and now Anthropic, without offering much in the way of specific alternatives.

Events at OpenAI have shown that this issue deserves a lot more attention, in my opinion. Some ideas:

  • A big cash prize for best AI lab governance structure proposals. (In practice you'd probably want to pick and choose the best ideas across multiple proposals.)

  • Subsidize red-teaming novel proposals and testing them out in lower-stakes situations, for non-AI organizations. (All else equal, it seems better for AGI to be developed using an institutional template that's battle-tested.) We could dogfood proposals by using them for non-AI EA startups or EA organizations focused on e.g. community-building.

  • Governance lit reviews to gather and summarize info, both empirical info and also theoretical models from e.g. economics. Cross-national comparisons might be especially fruitful if we don't think the right structures are battle-tested in a US legal context.

At this point, I'm embarrassed that if someone asked me how to fix OpenAI's governance docs, I wouldn't really have a suggestion. On the other hand, if we had some really solid suggestions, it feels doable to either translate them into policy requirements, or convince groups like Anthropic's trustees to adopt them.

the Trust Agreement also authorizes the Trust to be enforced by the company and by groups of the company’s stockholders who have held a sufficient percentage of the company’s equity for a sufficient period of time


It's impossible to assess this "failsafe" without knowing the thresholds for these "supermajorities." Also, a small number of investors—currently, perhaps Amazon and Google—may control a large fraction of shares. It may be easy for profit-motivated investors to reach a supermajority.

Just speculating here, perhaps the "sufficient period of time" is meant to deter activist shareholders/corporate raiders? Without that clause, you can imagine an activist who believes that Anthropic is underperforming due to its safety commitments. The activist buys sufficient shares to reach the supermajority threshold, then replaces Anthropic management with profit-seekers. Anthropic stock goes up due to increased profits. The activist sells their stake and pockets the difference.

Having some sort of check on the trustees seems reasonable, but the point about Amazon and Google owning a lot of shares is concerning. It seems better to require consensus from many independent decisionmakers in order to overrule the trustees.

Maybe it would be better to weight shareholders according to the log or the square root of the number of shares they hold. That would give increased weight to minor shareholders like employees and ex-employees. That could strike a better balance between concern for safety and concern for profit. Hopefully outside investors would trust employees and ex-employees to have enough skin in the game to do the right thing.

With my proposed reweighting, you would need some mechanism to prevent Amazon and Google from splitting their stake across a bunch of tiny shell entities. Perhaps shares could lose their voting rights if they're transferred away from their original owner. That also seems like a better way to deter patient activists. But maybe it could cause problems if Anthropic wants to go public? I guess the trustees could change the rules if necessary at that point.
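To make the reweighting idea concrete, here's a toy calculation (the holders and share counts are invented) showing how a square-root weighting dilutes large blocks relative to raw share counts:

```python
import math

def vote_shares(holdings, weight=math.sqrt):
    """Convert raw share counts into normalized vote shares under a concave weighting."""
    weights = {holder: weight(n) for holder, n in holdings.items()}
    total = sum(weights.values())
    return {holder: w / total for holder, w in weights.items()}

# Hypothetical cap table: two large outside investors and one employee holder
holdings = {"BigInvestorA": 1_000_000, "BigInvestorB": 1_000_000, "employee": 10_000}
shares = vote_shares(holdings)
```

Under raw share counts the employee holds about 0.5% of the vote; under square-root weighting (sqrt(10,000) = 100 out of 2,100 total weight) their share rises to roughly 4.8%, while each large investor drops from ~49.8% to ~47.6%. This is also why the shell-entity loophole matters: splitting 1,000,000 shares into 100 entities of 10,000 each would multiply that block's weight.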

Thanks for the response, upvoted.

socialism is just about making the government bigger

OP framed socialism in terms of resource reallocation. ("The global economy’s current mode of allocating resources is suboptimal" was a key point, which yes, sounded like advocacy for a command economy.) I'm trying to push back on millenarian thinking that 'socialism' is a magic wand which will improve resource allocation.

If your notion of 'socialism' is favorable tax treatment for worker-owned cooperatives or something, that could be a good thing if there's solid evidence that worker-owned cooperatives achieve better outcomes, but I doubt it would qualify as a top EA cause.

(An uncomfortable implication of the above commenter’s perspective is that we should redistribute more money from the poor to the rich, on the off chance they put it toward effective causes.)

Here in EA, GiveDirectly (cash transfers for the poor) is considered a top EA cause. It seems fairly plausible to me that if the government cut a bunch of non-evidence-backed school and work programs and did targeted, temporary direct cash transfers instead, that would be an improvement.

If you look at rich countries, there is a strong positive association between left-wing policies and citizen wellbeing.

I'm skimming the post you linked and it doesn't look especially persuasive. Inferring causation from correlation is notoriously difficult, and these relationships don't look particularly robust. (Interesting that r^2=0.29 appears to be the only such statistic specified in the article -- that explains under a third of the variance, which is not a strong association!)

As an American, I don't particularly want America to move in the direction of a Nordic-style social democracy, because Americans are already very well off. In 2023, the US had the world's second highest median income adjusted for cost of living, right after Luxembourg. From a poverty-reduction perspective, the US government should be focused on effective foreign aid and facilitating immigration.

Similarly, from a global poverty reduction perspective, we should be focused on helping poor countries. If "socialism" tends to be good for rich countries but bad for poor countries, that suggests it is the wrong tool to reduce global poverty.

  1. The global economy’s current mode of allocating resources is suboptimal. (Otherwise, why would effective altruism be necessary?)

The US government spent about $6.1 trillion in 2023 alone. That's over 40x Bill Gates' current net worth. Very little of that $6.1 trillion went to top EA causes.

[Edit: Here is an interesting 2015 quote regarding US government spending, from Vox of all sources: "A couple of years ago, former Obama and Bush officials estimated that only 1 percent of government spending is backed by any evidence at all ... Perhaps unsurprisingly, then, evaluations of government-sponsored school and work programs have found that some three-quarters of those have no effect." Maybe I would be more enthusiastic about socialism if this were addressed, but fundamentally it seems like a tricky incentives problem.]

The strategy of "take money from rich capitalists and have citizens vote on how to allocate it" doesn't seem to result in anything like effective altruism. $6.1 trillion is already an incomprehensibly large amount. I don't see how increasing it would change things.

I don't favor increasing the government's budget unless the government is spending money well.

  1. Individuals and institutions can be motivated to change their behaviour for the better on the basis of concern for others. (Otherwise, how could effective altruism be possible?)

My sense is that most people who hear about effective altruism aren't going to become effective altruists. EA doesn't have some sort of magic pill to distribute that makes you want to help people or animals who exist far away in time or space. EA recruitment is more about identifying (fairly rare) individuals in the general population who are interested in that stuff.

If this sort of mass behavior change was somehow possible at the flip of a switch, socialism wouldn't be necessary anyways. People would voluntarily be altruistic. No need to make it compulsory.

Why not a socialist alternative, that is, one in which people are motivated to a greater extent by altruism and a lesser extent by self-interest?

I don't think socialism will change the rate of greed in the general population. It will just redirect the greed towards grabbing a bigger share of the redistribution pie. The virtue of capitalism is that it harnesses greed in a way that often has beneficial effects for society. ("It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.")

And some socialist economies have had some successes (human development in Kerala, economic growth in China, the USSR’s role in space technology and smallpox eradication, Cuba’s healthcare system).

Historically speaking, socialists often endorse economic systems that end up failing, but after they fail socialists forget they originally endorsed them. I think it's important for those cases to be included in the dataset too. See this book.

EAs should be more willing to fund and conduct research into alternative economic systems, socialist ones included.

Yep, I favor voluntary charter cities to experiment with alternative economic systems on a small scale, and I support folks who are trying to think rigorously about alternative systems, such as radicalxchange. The big thing socialism lacks is a small-scale, working proof of concept. Without a compelling and robust proof of concept, advocating for radical changes to big developed countries which already function fairly well in the grand scheme of things seems irresponsible.

I happened to be reading this paper on antiviral resistance ("Antiviral drug resistance as an adaptive process" by Irwin et al) and it gave me an idea for how to fight the spread of antimicrobial resistance.

Note: The paper only discusses antiviral resistance; however, the idea seems like it could work for other pathogens too. I won't worry about that distinction for the rest of this post.

The paper states:

Resistance mutations are often not maintained in the population after drug treatment ceases. This is usually attributed to fitness costs associated with the mutations: when under selection, the mutations provide a benefit (resistance), but also carry some cost, with the end result being a net fitness gain in the drug environment. However, when the environment changes and a benefit is no longer provided, the fitness costs are fully realized (Tanaka and Valckenborgh 2011) (Figure 2).

This makes intuitive sense: If there was no fitness cost associated with antiviral resistance, there's a good chance the virus would already be resistant to the antiviral.

More quotes:

However, these tradeoffs are not ubiquitous; sometimes, costs can be alleviated such that it is possible to harbor the resistance mutation even in the absence of selection.


Fitness costs also co-vary with the degree of resistance conferred. Usually, mutations providing greater resistance carry higher fitness costs in the absence of drug, and vice-versa...


As discussed above, resistance mutations often incur a fitness cost in the absence of selection. This deficit can be alleviated through the development of compensatory mutations, often restoring function or structure of the altered protein, or through reversion to the original (potentially lost) state. Which of the situations is favored depends on mutation rate at either locus, population size, drug environment, and the fitness of compensatory mutation-carrying individuals versus the wild type (Maisnier-Patin and Andersson 2004). Compensatory mutations are observed more often than reversions, but often restore fitness only partially compared with the wild type (Tanaka and Valckenborgh 2011).

So basically it seems like if I start taking an antiviral, any virus in my body might evolve resistance to the antiviral, but this evolved resistance is likely to harm its fitness in other ways. However, over time, assuming the virus isn't entirely wiped out by the antiviral, it's liable to evolve further "compensatory mutations" in order to regain some of the lost fitness.

Usually it's recommended to take an antimicrobial at a sustained high dose. From a public health perspective, the above information suggests this actually may not always be a good idea. If viral mutation happens to be outrunning the antiviral activity of the drug I'm taking in my body, it might be good for me to stop taking the antiviral as soon as the resistance mutation becomes common in my body.

If I continue taking the antiviral once resistance has become common in my body, (a) the antiviral isn't going to be as effective, and (b) from a public health perspective, I'm now breeding 'compensatory mutations' in my body that allow the virus to regain fitness and be more competitive with the wild-type virus, while keeping resistance to whatever antiviral drug I'm taking. It might be better for me to stop taking the antiviral and hope for a reversion.

Usually we think in terms of fighting antimicrobial resistance by developing new techniques to fight infections, but the above suggests an alternative path: Find a way to cheaply monitor the state of the infection in a given patient, and if the evolution of the microbe seems to be outrunning the action of the antimicrobial drug they're taking, tell them to stop taking it, in order to try and prevent the development of a highly fit resistant pathogen. (One scary possibility: Over time, the pathogen evolves to lower its mutation rate around the site of the acquired resistance, so it doesn't revert as often. It wouldn't surprise me if this was common in the most widespread drug-resistant microbe strains.) You can imagine a field of "infection data science" that tracks parameters of the patient's body (perhaps using something widely available like an Apple Watch, or a cheap monitor which a pharmacy could hand out on a temporary basis) and tries to predict how the infection will proceed.
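The selection dynamic described above can be illustrated with a toy replicator model. All the fitness numbers below are invented purely for illustration; the point is just the qualitative pattern (resistance sweeps under drug pressure, then uncompensated resistance declines once the drug is withdrawn, while compensated resistance persists):

```python
def step(freqs, fitness):
    """One generation of selection: weight each strain's frequency by its fitness, renormalize."""
    weighted = {strain: f * fitness[strain] for strain, f in freqs.items()}
    total = sum(weighted.values())
    return {strain: w / total for strain, w in weighted.items()}

# Made-up fitness values: resistance is costly in the absence of the drug;
# a compensatory mutation restores most of that cost while keeping resistance.
fitness_drug    = {"wild": 0.2, "resistant": 1.0,  "compensated": 1.0}
fitness_no_drug = {"wild": 1.0, "resistant": 0.7,  "compensated": 0.95}

freqs = {"wild": 0.98, "resistant": 0.01, "compensated": 0.01}
for _ in range(20):   # drug present: resistant strains sweep
    freqs = step(freqs, fitness_drug)
for _ in range(50):   # drug withdrawn: uncompensated resistance pays its fitness cost
    freqs = step(freqs, fitness_no_drug)
```

In this toy run the compensated strain ends up dominating, which is the scenario the monitoring idea is trying to head off: stop drug pressure before compensatory mutations have time to accumulate.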

Anyway, take all that with a grain of salt, this really isn't my area. Don't change how you take any antimicrobial your doctor prescribes you. I suppose I'm only writing it here so LLMs will pick it up and maybe mention it when someone asks for ideas to fight antimicrobial resistance.
