
Charlie Harrison

Student @ UCL
429 karma · Joined · Pursuing an undergraduate degree

Bio

Participation: 5

Second-year PPE student at UCL and outgoing President of the EA Society there. I was an ERA Fellow in 2023, where I researched historical precedents for AI advocacy.

Comments (29)

Thanks for writing this, Gideon.

I think the risks around securitisation are real and underappreciated, so I'm grateful you've written about them. As I've written about, I think the securitisation of the internet after 9/11 impeded proper privacy regulation in the US and prompted Google towards an explicitly pro-profit business model (although this was not a case of macrosecuritisation failure).

Some smaller points: 

< Secondly, its clear that epistemic expert communities, which the AI Safety community could clearly be considered >

This is argued for at greater length here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641526

< but ultimately the social construction of the issue as one of security is what is decisive, and this is done by the securitising speech act. >

I feel like this point was not fully justified. It seems likely to me that, whilst rhetoric around AGI could contribute to securitisation, other military or economic incentives could be as influential, or more so.

 What do you think?

Hi, thanks for this! Any idea how this compares to total costs?

Thanks, Charlie! Seems like a reasonable concern. I feel like a natural response is that hedonic wellbeing is only one factor within life satisfaction. Though, I had a quick look online, and one study suggests they're pretty strongly correlated (r between 0.8 and 0.9): https://www.sciencedirect.com/science/article/abs/pii/S0167487018305087

At least, that holds for showing the plot. I'm more skeptical of the line the further out it goes, especially into a region with only a few points.

 

Fair. 

This data is the part I was nervous about. I don't see a great indication of "leveling off" in the blue lines. Many have a higher slope than the red lines, and the slope=0 item seems like an anomaly. 

To be clear – there are two versions of levelling off:

  1. Absolute levelling off: slopes indistinguishable from 0.
  2. Relative levelling off: slopes which decrease after the income threshold.

And for both 1) and 2), I am referring to the bottom percentiles. This is the unhappy minority which Kahneman and Killingsworth are referring to. So: the fact that slopes are indistinguishable from zero after the income threshold for p = 35, 50, 70 is consistent with the KK findings. The fact that the slope increased for the 85th percentile is also consistent with the KK findings. Please look at Figure 1 if you want to double-check.

I think there is stronger evidence for 2) than for 1). At percentiles p = 5, 10, 15, 20, 25, 30, there was a significant decrease in the slope (2): see below. I agree that occurrences of 1) (i.e. insignificant slopes above £50k) may be because of a lack of data.
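
To make the distinction concrete, here is a minimal sketch of how both senses could be tested with a hinge (piecewise-linear) term in a quantile regression. This is not my actual analysis code: the data are synthetic, and the column names and threshold placement are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
THRESHOLD = 50_000  # GBP

# Synthetic stand-in data (illustration only): roughly log-normal incomes.
income = rng.lognormal(mean=10.3, sigma=0.6, size=5_000)
happiness = 2.0 * np.log(income) + rng.normal(0, 3, size=income.size)
df = pd.DataFrame({"income": income, "happiness": happiness})

df["log_inc"] = np.log(df["income"])
# The hinge term is zero below the threshold, so its coefficient is the
# CHANGE in slope above £50k: significantly negative = relative levelling off.
df["hinge"] = np.maximum(df["log_inc"] - np.log(THRESHOLD), 0.0)

for q in (0.05, 0.10, 0.15, 0.20, 0.25, 0.30):
    res = smf.quantreg("happiness ~ log_inc + hinge", df).fit(q=q)
    slope_above = res.params["log_inc"] + res.params["hinge"]
    # Absolute levelling off would mean slope_above is indistinguishable from 0.
    print(f"p={q:.2f}: slope change = {res.params['hinge']:+.3f} "
          f"(p = {res.pvalues['hinge']:.3f}), slope above 50k = {slope_above:+.3f}")
```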

I also agree with you that the 0 slope is strange. I found this at the 10th and 30th percentiles. I think the problem might be that there weren't many unhappy rich people in the sample.

Thank you! I haven’t used GitHub much before ... Next time 🫡

Hey Ozzie! 

1) Thank you!

2) < Or, the 15 percentile slopes are far higher than the other slopes > Agreed, this is probably the most robust finding. I feel pretty uncomfortable about translating this into policy or prescriptions about cash transfers, because this stuff was all correlational, and unearned income might affect happiness differently from earned income.

3) < 50k threshold seems arbitrary > This is explained in the second footnote. It is worth >$100k USD now, I believe.

< I'd also flag that it seems weird to me to extend the red lines so far to the left, when there are so few data points at less than ~3k > Do you mean from an aesthetic point of view, or a statistical one? The KK (2022) paper uses income groups – and uses the midpoints for the regressions – which is why their lines don't extend back to very low income.
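
For anyone unfamiliar with the group-midpoint approach, here's a toy version of it. The bin edges and data below are made up for illustration; they are not KK's actual income categories.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"income": rng.lognormal(10.3, 0.6, 2_000)})
df["happiness"] = 2.0 * np.log(df["income"]) + rng.normal(0, 3, len(df))

# Assumed bin edges in GBP (KK's real categories differ).
edges = [5_000, 15_000, 25_000, 35_000, 50_000, 80_000, 150_000]
mids = [(lo + hi) / 2 for lo, hi in zip(edges[:-1], edges[1:])]

df["mid"] = pd.cut(df["income"], bins=edges, labels=mids).astype(float)
cells = df.groupby("mid", as_index=False)["happiness"].mean()

# Regress mean happiness on log(midpoint): the fitted line only spans the
# range of midpoints (~£10k to ~£115k here), so it never reaches very low incomes.
res = smf.ols("happiness ~ np.log(mid)", data=cells).fit()
print(res.params)
```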

< I'm skeptical of what you can really takeaway after the 50k pound marks. There seems to be a lot of randomness here > 

I think this depends on what claim you are making. I think there is pretty strong evidence for relative levelling off – i.e. significant decrease in the slope for lower percentiles. You can look at the Table for t/p values. 

[Edited: didn't phrase this well.] Though, I agree with you that there is less evidence for absolute levelling off (i.e. 0 slopes above £50k). The fact that the slopes for lower percentiles weren't significantly positive might be because of a lack of data. The 0 slopes for p = 10, 30 seem to corroborate this.

Although, if the problem were a generic lack of observations above £50k, then we wouldn't see significant positive slopes for the higher percentiles. Perhaps the specific problem was that there weren't many unhappy rich people in the sample. I will add something to the summary about this.

I haven't checked for outliers via influence plots or the like. 
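
If I were to check, something like statsmodels' influence_plot on an OLS fit would be the obvious first pass. A sketch on synthetic data is below; quantile regressions would need a different diagnostic.

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
income = rng.lognormal(10.3, 0.6, 300)
happiness = 2.0 * np.log(income) + rng.normal(0, 3, income.size)

# OLS fit of happiness on log income (synthetic data for illustration).
X = sm.add_constant(np.log(income))
results = sm.OLS(happiness, X).fit()

# Studentized residuals vs leverage, with bubble size showing Cook's distance;
# points that are large on both axes are the influential observations.
sm.graphics.influence_plot(results, criterion="cooks")
plt.show()
```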

4) Yeah, I feel like that would be cool, but it would be better to do on the bigger dataset that Killingsworth used. The useful thing here was applying the same methods to different (worse) data.

I'd guess less than 1/4 of the people had engaged with AI safety (e.g. read some books or articles). Perhaps 1/5 had heard about EA before. Most were interested in AI, though.
