By creating certain agents in a scenario where it is (basically) guaranteed that some agents or other will exist, we determine how many unfulfilled preferences the future contains. Sensible person-affecting views still prefer agent-creating decisions that lead to fewer frustrated future preferences over decisions that lead to more.
EDIT: Look at it this way: we are not choosing between futures with zero subjects of welfare and futures with non-zero subjects, where person-affecting views are indeed indifferent so long as the future with subjects has net-positive utility. Rather, we are choosing between two agent-filled futures: one with human agents and another with AIs. Sensible person-affecting views prefer the future with fewer unfulfilled preferences over the one with more, when both futures contain agents. So to make a person-affecting case against AIs replacing humans, you need to take into account whether AIs replacing humans leads to more or fewer frustrated preferences existing in the future, not just whether it frustrates the preferences of currently existing agents.
It shows that merely being person-affecting doesn't let you argue that, because current human preferences are the only ones that exist now and they oppose extinction, person-affecting utilitarians need not compare what a human-ruled future would be like to what an AI-ruled future would be like when deciding whether AIs replacing humans would be net bad from a utilitarian perspective. But maybe I was wrong to read you as denying that.
I don't think you can get from the procreation asymmetry to the claim that only current, and not future, preferences matter. Even if you think that people being brought into existence and having their preferences fulfilled has no greater value than their not coming into existence at all, you might still want to block the existence of unfulfilled future preferences. Indeed, it seems any sane view has to accept that harms to future people, if they do exist, are bad; otherwise it would be okay to bring about unlimited future suffering, so long as the people who will suffer don't exist yet.
Not an answer to your original question, but beware of taking answers to the Metaculus question as reflecting when AGI will arrive, if by "AGI" you mean AI that will rapidly transform the world, or that can perform literally every task humans perform as well as almost all humans. If you look at the resolution criteria, all the question requires to resolve yes is that there is a model able to pass four specific hard benchmarks. Passing a benchmark is not the same as performing well at all aspects of an actual human office or lab job. Furthermore, none of these benchmarks actually requires being able to store memories long-term and act coherently on a timescale of weeks, two of the main things current models lack. It is a highly substantial assumption that any AI which can pass the Turing test, do well on a test of subject-matter knowledge, code like a top human over relatively short timescales, and put together a complicated model car can do every economically significant task, succeed in carrying out long-term plans, or have enough common sense and adaptability in practice to fully replace a white-collar middle manager or a plumber.
Not that this means you shouldn't be thinking about how to optimize your career for an age where AI can do a lot of tasks currently done by humans, or even that AGI isn't imminent. But it's a pet hate of mine when people use that particular Metaculus question to say "see, completely human-level-or-above-on-everything transformative AI is coming soon", when that doesn't really match the resolution criteria.
I don't know enough about moral uncertainty and the parliamentary model to say.
It's worth saying that although people in EA favour approaches to moral uncertainty that reject "just pick the theory you think is most likely to be true and make decisions based on it, ignoring the others", some philosophers have actually defended views along those lines: https://brian.weatherson.org/RRM.pdf
It's pretty crucial how much less weight you place on future people, right? If you weight their lives at, say, 1/1000 of the weight of saving the life of a current person, and there are in expectation going to be a million times more people in the future than exist currently, then most of the value of preventing extinction will still come from the fact that it allows future people to come into existence.
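To make the arithmetic concrete (a rough sketch using the illustrative numbers above, not a claim about actual population forecasts):

$$10^6 \times \tfrac{1}{1000} = 10^3,$$

so even with the heavy discount, future people carry roughly a thousand times the combined moral weight of the present generation, and the value of preventing extinction is still dominated by them.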
I think for me, part of the issue with your posts on this (which, to be clear, I think are net positive: they really push at significant weak points in ideas widely held in the community) is that you seem to be vacillating between three different ideas, in a way that conceals that one of them, taken on its own, sounds super-crazy and evil:
1) Actually, if AI development were to literally lead to human extinction, that might be fine, because it might lead to higher utility.
2) We should care about humans harming sentient, human-like AIs as much as we care about AIs harming humans.
3) In practice, the benefits of AI development to current people outweigh the risks, and the only moral views which say that we should ignore this and pause in the face of even tiny risks of extinction from AI (because there are so many more potential humans in the future) in fact, when taken seriously, imply 1), which nobody believes.
1) feels extremely bad to me, basically a sort of Nazi-style view on which genocide is fine if the replacing people are superior utility generators (or, I guess, inferior but sufficiently more numerous). 1) is plausibly a consequence of classical utilitarianism (maybe even of some person-affecting versions of it, I think), but I take this to be a reason to reject pure classical utilitarianism, not a reason to endorse 1). 2) and 3), on the other hand, seem reasonable to me. But the thing is that you seem, at least sometimes, to be taking AI moral patienthood as a reason to push on in the face of uncertainty about whether AI will literally kill everyone. And that seems more like 1) than 2) or 3). 1)-style reasoning supports the idea that AI moral patienthood is a reason for pushing on with AI development even in the face of human extinction risk, but as far as I can tell 2) and 3) don't. At the same time, though, I don't think you mean to endorse 1).