NunoSempere

Researcher @ Shapley Maximizers
11223 karma
nunosempere.com

Bio

I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers ÖU. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
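For readers unfamiliar with the metric in that quote: a Brier score is the mean squared error between probabilistic forecasts and binary outcomes, and a relative Brier score compares teams on the same set of questions. Here is a minimal sketch; the exact normalization the CSET-Foretell competition used is not reproduced here.

```python
# Minimal sketch of Brier scoring for binary questions; lower is better.
# A "relative" Brier score compares teams' scores on the same questions;
# the competition's exact normalization is an assumption left out here.

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Sharper, well-calibrated forecasts score lower:
confident_team = brier_score([0.95, 0.9, 0.1], [1, 1, 0])  # 0.0075
hedged_team = brier_score([0.6, 0.6, 0.4], [1, 1, 0])      # 0.16
print(confident_team, hedged_team)
```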


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy: they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my taste.
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts. (I've partially mitigated this by writing my own minimalistic frontend.)
  • These two issues made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value. And I haven't left the forum entirely: I remain subscribed to its RSS, and generally tend to at least skim all interesting posts.


I used to do research on longtermism, forecasting, and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool that aggregates predictions from many different platforms and that I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship. Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter, which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality in 2017, 2018, 2019, 2020, and 2022; worked as a contractor on various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.


You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>

Sequences (3)

  • Vantage Points
  • Estimating value
  • Forecasting Newsletter

Comments (1091)

Topic contributions (14)

I, and perhaps others, would be curious about a bit more of a postmortem. It seems like this was a beloved and valuable project. Why did it get shut down? Did you get burnt out? Was it a lack of funds? Any requests? Is there something that the EA community could have done better?

Because that seems like enough time to have something good now.

I tend to agree with this perspective, though I would also add that I think that not investing more in longtermist evaluation seven years ago was a mistake.

"I don't know about ACE because I don't stay up to date on animals but I bet it's similar there."

I am a bit more familiar with ACE, and my impression is that you are right.

Nice! 

Readers might also be interested in the Linux utility version of this: https://github.com/NunoSempere/PredictResolveTally

  • Retrospective grant evaluations of longtermist projects
  • EA red teaming project

I am very amenable to either of these. If someone is starting these, or if they are convinced that these could be super valuable, please do get in touch.

I mean, I think it does exceed some level of rudeness, in that you entertain the hypothesis that Karnofsky might not be an impeccable boy scout, which some people might consider rude. But I also think that it's fine to exceed that threshold, so ¯\_(ツ)_/¯

I think you may have a model where you don't want to have comments above a given level of rudeness/sarcasm/impoliteness/political incorrectness, etc. However, I would prefer a model where you give a warning or a ban when a comment or a user exceeds some rudeness-to-value threshold, as I think that would provide more value: I would want to keep the rude comments if they produce enough value to be worth it.
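To make that tradeoff concrete, here is a minimal sketch of the kind of threshold rule I have in mind; the numeric scales and the tolerance constant are made up for illustration.

```python
# Minimal sketch of a rudeness-to-value moderation rule, with made-up
# scales: tolerate a rude comment when it produces enough value.

def should_moderate(rudeness: float, value: float, tolerance: float = 2.0) -> bool:
    """Warn or ban only when the rudeness isn't paid for by the value."""
    return rudeness > tolerance * value

# A rude but very valuable comment stays; a mildly rude, low-value one doesn't.
print(should_moderate(rudeness=8, value=9))    # False: the value covers the rudeness
print(should_moderate(rudeness=3, value=0.5))  # True: not worth it
```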

And I think that you do want to have the disagreeable people push back, to discourage fake group consensus.
