Brad West

Founder & CEO @ Profit for Good Initiative
1557 karma · Joined Nov


Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.



Fair enough.

I still suspect that you may be underestimating marginal AI Safety funding opportunities.

This strikes me as remarkably counterintuitive, given the enormous disparity between AI capabilities spending and AI safety spending. I was also under the impression that AI capabilities work is not as funding-constrained.

To be clear, I am in favor of promoting offsetting in both contexts, although the benefits of veganism in avoiding contributing to factory farming demand, increasing demand for pro-social vegan products, and sending an important moral signal make it difficult to calculate an appropriate sum. Further, I think a deontological or virtue ethics concern with killing or eating the flesh of sentient beings also naturally arises.

In the case here, though, your choices cash out in terms of your effect on existential and suffering risks from AGI. I think an appropriate offset for the funding effect can reverse, or more than reverse, your effect without moral complication.

I honestly don't have much experience other than using GPT4, which I have found to be very helpful. 

For me, ChatGPT greatly increases my productivity and that of my team, whereas I find it very unlikely that the small amount of money from my subscription is seriously furthering the acceleration of AI.

I suspect that the productivity of EAs generally is very valuable, and if EAs benefit from the tool, it is likely not a good idea for them to stop using it.

Given that there is so much less money going to AI safety than to AI capabilities, I would think a more sensible request would be that those using ChatGPT, and thus funding OpenAI, fund promising AI safety efforts. This would likely more than offset the harm caused by your funding and enable you to keep using a valuable tool. And if the benefits for you are not worth the cost of the subscription plus the offset, then perhaps, in that case, the benefit is not worth the harm. I would suggest that people who know more about this than me recommend an AI safety fund for offsetting ChatGPT use.

Vin, you incorporate helping save lives and better the world into so much of what you do. It is truly a privilege to work with someone so determined to do good.

I'd also like to thank you for your courage in sharing your struggles with anxiety and depression. You've been there for me when I've struggled with depression and frustration. When I've had very bad days, talking to you in DMs on Slack was immensely helpful.

You exemplify heroism perhaps more than anyone else I know. Looking forward to seeing you kick ass next month on your bike!

I agree. I do not view the wealthy in general as an "enemy." 

I agree that the accumulation of wealth often corresponds with the production of social value. It is interesting, though, that you bring up rent-seeking as a problem without noting that a lot of rent-seeking is perfectly legal and is often a component of wealth accumulation, even where part of that wealth is attributable to socially valuable production.

For instance, I am an attorney who (among other matters) litigates personal injury and workers' compensation claims. My work produces some general social value: it aids in the resolution of disputes and serves as a helpful piece of a functioning legal system. However, there is also a rent-seeking component to my job: I am looking to transfer wealth to my client, or to prevent the transfer of wealth from my client to an opponent. My compensation, and my ability to accumulate wealth, corresponds more strongly to my rent-seeking ability than to my ability to generate general social value, because I am paid by my clients on the basis of resolving disputes on terms favorable to them, not by the judicial system generally. Thus, relative to the social value I create, I (or rather, the firm I work for) am likely overcompensated. The same is true in many other extremely lucrative industries, such as finance.

One quibble with the mode of analysis for taxation: the way to evaluate the impact, positive or negative, of government spending is to compare the effect of the spending with the average counterfactual effect of the taxpayers retaining those funds. For impact analysis, then, we would not compare the utility generated by government spending to the cost-effectiveness of a marginal dollar to a GiveWell-endorsed charity, but rather to the utility generated by the counterfactual retention of the funds by the taxpayer base. That bar is much easier for government spending to clear.

I could imagine a few things:

  1. Pledging may have some combination of the effects of (a) actually increasing people's lifetime donations to effective charities and (b) causing people to advertise giving they were already going to do. To the extent that a pledge is (b) rather than (a), getting someone to pledge the same amount as you does not double your impact.

  2. Many of the people you cause to become pledgers might have become pledgers later anyway, in which case you merely accelerated their pledge. This greatly decreases your actual impact relative to causing someone to pledge who otherwise would not have (where the pledge produces additional donations rather than encompassing donations that would have happened regardless).

  3. There's a possibility that you could anchor someone to donate less: someone could see your celebrated 10% pledge, view that level of giving as adequate, and lower their donations. Here, there is a risk of harm from the pledge.

All that said, I still think the pledge is an awesome way to promote and normalize effective giving.

It is really great to know that the pledge allows pledgers to use their own judgment as to which organizations qualify as highly effective. In light of this, I may make a 20% pledge.
