The primary problem you mention is exaggerating the importance of your project. That is a fundamental issue with every grant. Every grantmaker wants to fund projects with maximum impact per dollar.
There is an incentive to aggrandize your work, but there's a counterincentive against bullshitting. A lot of the work of reviewing grants is having a well-tuned bullshit detector.
I don't think there's any way around the tension between those two factors. You can change the goalposts, but there's always a goal, and a claim of efficiency in moving toward that goal.
The other issue here is with the intended use of the grant money. If these organizations really only want to fund projects that improve our chances of survival and flourishing, that is their choice. If that's their goal, there has to be a chain of logic for how that is going to happen. Sometimes grantmakers come up with that chain of logic, and so they fund projects like "better understanding health psychology" because they believe accomplishing that will produce a better world. The organizations you mention are trying to be broad by allowing anyone to convince them that their unique project will make the world better with a good $/benefit ratio. This work can't be skipped, but it can be shared by grantmaker and applicant.
Therefore, I'd suggest that they add something like: "If you don't have a grand narrative, that's fine; we might have a grand narrative for your work that you're not seeing. Of course, it helps your odds if you do have a convincing account of how your project achieves our goal (X) at a good cost ratio, in case we don't have one."
My career to date has been mostly funded by US government grants. These do not require a well-thought-out grand narrative, or any other sort of direct causal reasoning about impacts. I believe this is disastrous. It shifts most of the competition toward cultural knowledge of the granting agency and of the types of individuals likely to be reviewers. And by not requiring much explicit logic about likely outcomes, and therefore about payoff ratios, the government is wasting money like crazy. It effectively funds projects that "sound like good work" to the people already doing similar work, which creates a clique mentality divorced from the actual impact of the funded work.
My experience with EA organization granting processes has been vastly better, primarily based on their focus on the careful payoff logic you seem to be arguing against.