Any updates here? I share Devon's concern: this news also makes me less likely to want to donate via EA Funds. At worst, the fear would be this: so much transparency is lost that donations go into mysterious black holes rather than funding effective organizations. What steps can be taken to convince donors that that's not what's happening?
Thanks for your engaging insights!
this sounds like you're talking about a substantive concept of rationality
Yes indeed!
Substantive concepts of rationality always go under moral non-naturalism, I think.
I'm unclear on why you say this. It certainly depends on how exactly 'non-naturalism' is defined.
One contrast between the Gert-inspired view I've described and the views of some objectivists about reasons or substantive rationality (e.g. Parfit) is that the latter tend to talk about reasons as brute normative facts. Sometimes it seems they have no story to tell about why those facts are what they are. But the view I've described does have a story to tell. The story is that we had a certain robust agreement in our responses toward harms (aversion to harms and puzzlement toward those who lack the aversion). Then, as we developed language, we developed terms to refer to the things that tend to elicit these responses.
Is that potentially the subject of the 'natural' sciences? It depends: it seems to be the subject not of physical sciences but of psychological and linguistic sciences. So it depends whether psychology and linguistics are 'natural' sciences. Does this view hold that facts about substantive rationality are not identical with or reducible to any natural properties? It depends on whether facts about death, pain, injury, and dispositions are reducible to natural properties.
It's not clear to me that the natural/non-natural distinction applies all that cleanly to the Gert-inspired view I've delineated. At least not without considerably clarifying both the natural/non-natural distinction and the Gert-inspired view.
you can be a constructivist in two different ways: Primarily as an intersubjectivist metaethical position, and "secondarily" as a form of non-naturalism.
This seems like a really interesting point, but I'm still a little unclear on it.
Rambling a bit
It's helpful to me that you've pointed out that my Gert-inspired view has an objectivist element at the 'normative bedrock' level (some form of realism about harms & rationality) and a constructivist element at the level of choosing first-order moral rules ('what would impartial, rational people advocate in a public system?').
A question that I find challenging is, 'Why should I care about, or act on, what impartial, rational people would advocate in a public system?' (Why shouldn't I just care about harms to, say, myself and a few close friends?) Constructivist answers to that question seem inadequate to me. So it seems we are forced to choose between two unsatisfying answers. On the one hand, we might choose a minimally satisfying realism that asserts that it's a brute fact that we should care about people and apply moral rules to them impartially; it's a brute fact that we 'just see'. On the other hand, we might choose a minimally satisfying anti-realism that asserts that caring about or acting on morality is not actually something we should do; the moral rules are what they are, and we can choose to follow them if our heart is in it, but there's not much more to it than hypotheticals.
So you know who's asking: I happen to consider myself a realist, but closest to the intersubjectivism you've delineated above. The idea is that morality is the set of rules that impartial, rational people would advocate as a public system. Rationality is understood, roughly speaking, in terms of the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure. There's not much more objective or "facty" about rationality than the fact that basically all vertebrates are disposed to be averse to those things, and it's rather puzzling for someone not to be. People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there's nothing much more objective or "facty" about whether the flower is red than that ordinary human language users on earth are disposed to see and label it as red.
I don't know whether or not you'd label that as objectivism about color or about rationality/harm. But I'd classify it as a weak form of realism and objectivism because people can be incorrect, and those who are not reliably disposed to identify cases correctly would be considered blind to color or to harm.
These things I'm saying are influenced by Joshua Gert, who holds very similar views. You may enjoy his work, including his Normative Bedrock (2012) or Brute Rationality (2004). He is in turn influenced by his late father Bernard Gert, whose normative ethical theory Josh's metaethics work complements.
One thought is that if morality is not real, then we would not have reasons to do altruistic things. However, I often encounter anti-realists making arguments about which causes we should prioritize, and why. A worry about that is that if morality boils down to mere preference, then it is unclear why anyone else should share the anti-realist's preference.
The vague term "great" gets used a lot in this post. If possible, using more precise language about what you're looking for—what counts as "great" in the sense you're using the term—could be helpful moving forward. By homing in on the particular kind of skill you're seeking, you'll help identify the people who have those skills. And you may help yourselves confirm which specific skills are truly essential to the position you're seeking to fill.
(Also, I think there are more ways to be a "great" software engineer than being able to write a substantial pull request for a major machine learning library with minimal ramp-up time. So other wording can help you be more precise, as well as kinder to engineers who are great in other ways.)