Anthropic has some of the same people who originally made GPT-3, so their replicating GPT-3 is a different sort of diffusion than another lab doing it. I'd guess they internally matched GPT-3 when they'd existed for ~6 months, so mid-2021. Claude, their public beta model, has similar performance to ChatGPT in conversation but hasn't been benchmarked.
Redwood Research has not attempted to train any large language models from scratch, and hasn't even reproduced GPT-2. Redwood does have employees who've worked at OpenAI, and could likely reproduce GPT-3 if needed.
GPT-3 is algorithmically easier to reproduce, in the sense that it's a simpler architecture with fewer and more robust hyperparameters. Its engineering is harder to reproduce, because it needs more model parallelism. People have estimated GPT-3 to cost ~$5M to train and Stable Diffusion ~$300k, which is similar to the $36k number you quoted.
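For intuition on where the ~$5M figure comes from, here's a back-of-the-envelope sketch using the common 6·N·D FLOPs rule of thumb. The parameter count is GPT-3's published 175B; the token count, GPU throughput, utilization, and price per GPU-hour are all rough assumptions of mine, not official figures:

```python
# Rough GPT-3 training-cost estimate; every constant below is an assumption.
params = 175e9                 # GPT-3 parameter count (published)
tokens = 300e9                 # approx. training tokens (assumed)
total_flops = 6 * params * tokens          # ~3.15e23 FLOPs (6*N*D rule of thumb)

peak_flops = 125e12            # assumed V100-class tensor-core peak, FLOP/s
utilization = 0.30             # assumed fraction of peak actually achieved
gpu_seconds = total_flops / (peak_flops * utilization)
gpu_hours = gpu_seconds / 3600             # ~2.3M GPU-hours

price_per_gpu_hour = 1.50      # assumed bulk/cloud price, USD
print(f"~{gpu_hours / 1e6:.1f}M GPU-hours, ~${gpu_hours * price_per_gpu_hour / 1e6:.1f}M")
```

With these assumptions it lands around $3–4M, so the oft-quoted ~$5M is plausible; Stable Diffusion's much smaller model and dataset shrink the same arithmetic by more than an order of magnitude.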
The implicit bets on religiosity in AI safety aren't that it'll be unpopular, only that it won't influence decision-making / powerful actors. If/when AGI arises, it'll initially be controlled by governments/companies/citizens of rich countries. Of all the belief systems, AI safety people worry most about Confucianism, because it has the most real influence on decision-making (the CCP).
There are also like 3 different ways 2+2!=4 could be the case:
Outer universe with different math - We're a simulation inside a different universe that runs on different math where 2+2!=4, but the math inside our universe is consistent. This is the same as 2+2=4 for most purposes. This is imaginable, I think...
Active demon - There's a demon that controls all your inputs, in a way that's inconsistent with any reasonable mathematics, but you can't tell. This is the least likely, and if it were true I wouldn't even consider myself a person.
Math is flawed - The whole concept of arithmetic, or all of mathematics, is inconsistent, and it's impossible to construct a system where you can actually prove 2+2=4. This doesn't necessarily mean two apples and two apples would stop making four apples - it just means the apples behave the way they do for reasons other than arithmetic. This is conceivable.
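For concreteness, "a system where you can actually prove 2+2=4" is something we do have today; here's a minimal sketch in Lean 4 (any proof assistant or formalization of Peano arithmetic would do), just to pin down what the third scenario is denying:

```lean
-- 2 + 2 = 4 for natural numbers: both sides reduce to the same value,
-- so the proof is just reflexivity.
example : 2 + 2 = 4 := rfl

-- The same fact via Lean's decision procedure for decidable propositions.
example : 2 + 2 = 4 := by decide
```

The third scenario amounts to claiming that every such system is secretly inconsistent, not that the apples misbehave.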
Open Phil only has so much energy and cognitive diversity to spend evaluating your project, and if your project is too weird for that, they just can't fund it. Instead, you can donate to, work on, or volunteer for weird projects yourself, and convince others on the forum to do the same. There are even other billionaires out there, and ways to get your charter cities or whatever without enormous philanthropic funding. If you're the only one who believes, maybe you need to bet.
Near-term AI risk: simulated social status. The loss of social status is a big downside of playing video games all day or abusing drugs. GPT-3 and friends are probably capable of giving you believable simulated social status, far beyond what current video games provide. This could be a big therapeutic boon, as well as a threat to people's happiness and contribution to society.
I used to believe "if you kill something, look it in the eyes as you do," but after experiencing it, I don't anymore. I was at my grandparents' house, and a mouse trap they set caught a mouse by crushing its lower spine. It was still very much alive. I watched it for a minute, then took it outside and cut its head off with my thumb. It just made me feel cold. Wouldn't kill by hand again.
Edit the Bible. It is the most-replicated piece of information in history, and thus probably the best vehicle for a cultural or intellectual agenda. Finding the right edits would not be easy, because the Bible would need to retain the qualities that made it so viral in the first place.
Edits could include reducing the misogyny and anti-LGBTQ content, valuing the happiness and suffering of all beings, and putting more faith in reason. Adding more reason could easily undermine the persuasive power of the Bible, but something could probably be done.
The New Testament was written in Greek, roughly between 50 and 100 AD, so "the team" of time travellers would need to learn ancient Greek (the known parts now, all the unrecorded parts once they arrived), then either go back to around 1 or 2 BC and influence the verbal recitations and early manuscripts, or perhaps arrive around 50 AD and write the official Bible, or influence those who wrote it.
It feels very plausible to me that "how many people know about biorisk threat models" is the most important lever for impacting biorisk. I've heard that many state bioweapons programs were started because states found out that other states thought bioweapons were powerful. If mere rumors caused them to invest millions in bioweapons, then preventing those rumors would have been an immensely powerful intervention, and preventing further such rumors is critically important.