
DPiepgrass

1047 karma · Joined

Bio

I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build because its effects would consist almost entirely of positive externalities.

I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).

Comments (153)

After following the Ukraine war closely for almost three years, I naturally also watch China's potential for military expansionism. Whereas past leaders of China talked about "forceful if necessary" reunification with Taiwan, Xi Jinping seems like a much more aggressive person, one who would actually do it―especially since the U.S. is frankly showing so much weakness in Ukraine. I know this isn't how EAs are used to thinking, but you have to start from the way dictators think. Xi, much like Putin, seems to idolize the excesses of his country's communist past, and is a conservative gambler: he will take a gamble if the odds seem sufficiently in his favor. Putin badly miscalculated his odds in Ukraine, but Russia's GDP and population were about $1.843 trillion and 145 million, versus $17.8 trillion and 1.4 billion for China. At the same time, Taiwan is much less populous than Ukraine, and its would-be defenders in the USA/EU/Japan are not as strong naval powers as China (yet would have to operate at much longer range). Last but not least, China is the factory of the world―if it decides to pursue military domination, it can probably do so fairly well while simultaneously selling us vital goods at suddenly inflated prices.

So when I hear that China has ramped up nuclear weapon production, I immediately think of it as a nod toward Taiwan. If we don't want an invasion of Taiwan, what do we do? Liberals have a habit of magical thinking in military matters, talking of diplomacy, complaining about U.S. "warmongers", and running protests with "No Nukes" signs. But an invasion of Taiwan has nothing to do with the U.S.; Xi simply *wants* Taiwan and has the power to take it. If he makes that decision, no words can stop him. So the Free World has no role to play here other than (1) to deter and (2) to optionally help out Taiwan if Xi invades anyway.

Not all deterrents are military, of course; China and the USA will surely do huge economic damage to each other if China invades, and that is a deterrent. But I think China has the upper hand here in ways the USA can't match. On paper, the USA has more military spending, but for practical purposes it is the underdog in a war for Taiwan[1]. Moreover, President Xi surely noticed that all it took was a few comments from Putin about nuclear weapons to close off the possibility of a no-fly zone in Ukraine, NATO troops on the ground, use of American weapons against Russian territory (for years), etc. So I think Xi can reasonably―and correctly―conclude that China wants Taiwan more than the USA wants to defend it. (To me at least, comments about how we can't spend more than 4% of the defense budget on Ukraine "because we need to be ready to fight China" just show how unserious the USA is about defending democracy.) Still, USA aid to Taiwan is certainly a risk for Xi, and I think we need to make that risk look as big and scary as possible.

All this is to say that warfighting isn't the point―who knows if Trump would even bother. The point is to create a credible deterrent as part of efforts to stop the Free World from shrinking even further. If war comes, maybe we fight, maybe we don't. But war is more likely whenever dictators think they are stronger than their victims.

I would like more EAs to think seriously about containment, democracy promotion and even epistemic defenses. For what good is it to make people more healthy and prosperous, if those people later end up in a dictatorship that conscripts them or their children to fight wars―including perhaps wars against democracies? (I'm thinking especially of India and the BJP here. And yes, it's still good to help them despite the risk; I'm just saying it's not enough and we should have even broader horizons.)

Granted, maybe we can't do anything. Maybe there's no tractable and cost-effective thing in this space. There are probably neglected things―like, when the Ukraine war first started, I thought Bryan Caplan's "make desertion fast" idea was good, and I wish somebody had looked at counterpropaganda operations that could've made the concept work. Still, I would like EAs to understand some things. 

  1. The risks of geopolitics have returned―basically, cold-war stuff.
  2. EAs focus too much on x-risk and s-risk relative to catastrophic risk. Technically, c-risk is far less bad than x-risk, but it doesn't feel less bad. c-risk is more emotionally resonant for people, and risk-management tasks probably overlap a lot between the two, so it's probably easier to connect with policymakers over c-risk than x-risk.
  3. I haven't heard EAs talk about "loss-of-influence" risk. One form of this would be AGI takeover: if AGIs are much faster, smarter and cheaper than us (whether they are controlled by humans or by themselves), a likely outcome is one in which normal humans have no control over what happens: either AGIs themselves or dictators with AGI armies make all the decisions. In this sense, it sure seems we are at the hinge of history, since future humans may have no control and past humans had an inadequate understanding of their world. But here I'm pointing to a more subtle loss of control, where the balance of power shifts toward dictatorships until they decide to invade democracies that are further and further from their sphere of influence. If global power shifts toward highly censored strongman regimes, EAs' influence could eventually wane to zero.

I've been thinking that there is a "fallacious, yet reasonable as a default/fallback" way to choose moral circles based on the Anthropic principle, which is closely related to my article "The Putin Fallacy―Let’s Try It Out". It's based on the idea that consciousness is "real" (part of the territory, not the map), in the same sense that quarks are real but cars are not. In this view, we say: P-zombies may be possible, but if consciousness is real (part of the territory), then by the Anthropic principle we are not P-zombies, since P-zombies by definition do not have real experiences. (To look at it another way, P-zombies are intelligences that do not concentrate qualia or valence, so in a solar system with P-zombies, something that experiences qualia is as likely to be found alongside one proton as any other, and there are about 10^20 times more protons in the sun than there are in the minds of everyone on Zombie Earth combined.) I also think that real qualia/valence is the fundamental object of moral value (also reasonable IMO, for why should an object with no qualia and no valence have intrinsic worth?).

By the Anthropic principle, it is reasonable to assume that whatever we happen to be is somewhat typical among beings that have qualia/valence, and thus among beings that have moral worth. By this reasoning, it is unlikely that the sum total |W| of all qualia/valence in the world is dramatically larger than the sum total |H| of all qualia/valence among humans, because if |W| >> |H|, you and I would be unlikely to find ourselves in the set H. I caution that while reasonable, this view is necessarily uncertain, and it becomes fallacious and morally hazardous if treated as a certainty. Yet if we are to allocate our resources in the absence of any scientific clarity about which animals have qualia/valence, I think we should take this idea into consideration.
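To spell out the self-sampling arithmetic behind that step (just a rough sketch, treating |W| and |H| as measures of experience rather than precise counts): if an experiencing observer is equally likely to be any unit of qualia/valence in the world, then

$$P(\text{finding myself in } H \mid |W|, |H|) \approx \frac{|H|}{|W|},$$

so a hypothesis on which $|W| = 100\,|H|$ makes the observation "I am human" about 100 times less likely than one on which $|W| \approx |H|$, and, under equal priors, its posterior odds drop by roughly that factor.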

P.S. Given the election results, I hope more people are now doing the soul-searching we should've done in 2016. I proposed my intervention "Let's Make the Truth Easier to Find" on the EA Forum in March 2023. It's necessarily a partial solution, but I'm very interested to know why EAs generally weren't interested in it. I do encourage people to investigate for themselves why Mr. Post-Truth himself has now twice achieved roughly the same popularity as the average Democrat.

Also: critical feedback can be good. Even if painful, it can help a person grow. But downvotes communicate nothing to a commenter except "f**k you". So what are they good for? Text-based communication is already quite hard enough without them, and since this is a public forum I can't even tell if it's a fellow EA/rat who is voting. Maybe it's just some guy from SneerClub―but my amygdala cannot make such assumptions. Maybe there's a trick to emotional regulation, but I've never seen EA/rats work that one out, so I think the forum software shouldn't help people push other people's buttons.

I haven't seen such a resource. It would be nice.

My pet criticism of EA (forums) is that EAs seem a bit unkind, and that LWers seem a bit more unkind and often not very rationalist. I think I'm one of the most hardcore EA/rationalists you'll ever meet, but I often feel unwelcome when I dare to speak.

Like: 

  • I see somebody has a comment with -69 karma: an obvious outsider asking a question with some unfair assumptions about EA. Yes, it was brash and rude, but no one but me actually answered him.
  • I write an article (that is not critical of any EA ideas) and, after many revisions, ask for feedback. The first two people who come along downvote it, without giving any feedback. If you downvote an article with 104 points and leave, it means you dislike it or disagree. If you downvote an article with 4 points and leave, it means you dislike it, you want the algorithm to hide it from others, you want the author to feel bad, and you don't want them to know why. If you are not aware that it makes people feel bad, you're demonstrating my point.
  • I always say what I think is true and I always try to say it reasonably. But if it's critical of something, I often get a downvote instead of a disagree (often without comment).
  • I describe a pet idea that I've been working on for several years on LW (I built multiple web sites for it with hundreds of pages, published NuGet packages, the works). I think it works toward solving an important problem, but when I share it on LW the only people who comment say they don't like it, and sound dismissive. To their credit, they do try to explain to me why they don't like it, but they also downvote me, so I become far too distraught to try to figure out what they were trying to communicate.
  • I write a critical comment (hypocrisy on my part? Maybe, but it was in response to a critical article that simply assumes the worst interpretation of what a certain community leader said, and then spends many pages discussing the implications as if that assumption were obviously true.) This one is weird: I get voted down to -12 with no replies, then after a few hours it's up to 16 or so. I understand this one―it was part of a battle between two factions of EA―but man, that whole drama was scary. I guess that's just reflective of Bay Area or American culture, but it's scary! I don't want scary!

Look, I know I'm too thin-skinned. I was once unable to work for an entire day due to a single downvote (I asked my boss to take it from my vacation days). But wouldn't you expect an altruist to be sensitive? So, I would like us to work on being nicer, or something. Now if you'll excuse me... I don't know how I'll get back into a working mood so I can get Friday's work done by Monday.

Okay, not a friendly audience after all! You guys can't say why you dislike it?

Story of my life... silent haters everywhere.

Sometimes I wonder, if Facebook groups had downvotes, would it be as bad, or worse? I mean, can EAs and rationalists muster half as much kindness as normal people for saying the kinds of things their ingroup normally says? It's not like I came in here insisting alignment is easy actually.

I only mentioned human consciousness to help describe an analogy; hope it wasn't taken to say something about machine consciousness.

I haven't read Superintelligence but I expect it contains the standard stuff―outer and inner alignment, instrumental convergence etc. For the sake of easy reading, I lean into instrumental convergence without naming it, and leave the alignment problem implicit as a problem of machines that are "too much" like humans, because

  • I think AGI builders have enough common sense not to build paperclip maximizers
  • Misaligned AGIs―ones that seem superficially humanlike but end up acting drastically pathologically when scaled to ASI―are harder to describe, so instead I describe (by analogy) something similar: humans outside the usual distribution. I argue that psychopathy is an absence of empathy, and when AGIs surpass human ability it's all too easy to end up building a machine like that. (Indeed, I could've said that even normal humans can easily turn off their empathy, with monstrous results; see: the Nazis, Mao's CCP.)

I don't incorporate Yudkowsky's ideas because I found the List of Lethalities to be annoyingly incomplete and unconvincing, and I'm not aware of anything better (clear and complete) that he's written. Let me know if you can point me to anything.

My feature request for EA Forum is the same as my feature request for every site: you should be able to search within a user (i.e. a user's page should have a search box). This is easy to do technically; you just have to add the author's name as one of the words in the search index.

(Preferably, do it in such a way that a normal post cannot spoof the author token. For example, you might put "foo authored this post" in the index as @author:foo, but if a normal post contains the literal text "@author:foo", the index should only end up with the tokens @author (or author) and foo, while the full string is not in the index―or, if it is, can only be found by searching with quotes a la Google: "@author:foo".)
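Here's a minimal sketch of the idea (not the Forum's actual indexing code; the inverted index, tokenizer and function names are all hypothetical):

```python
import re
from collections import defaultdict

# Hypothetical in-memory inverted index: token -> set of post IDs.
index: dict[str, set[str]] = defaultdict(set)

def tokenize(text: str) -> list[str]:
    # Body text decomposes into plain word tokens, so a literal
    # "@author:foo" in a post becomes just "author" and "foo" and
    # cannot collide with the special token added below.
    return re.findall(r"[a-z0-9]+", text.lower())

def index_post(post_id: str, author: str, body: str) -> None:
    for token in tokenize(body):
        index[token].add(post_id)
    # Only the indexer adds this token, and only from trusted post metadata.
    index[f"@author:{author.lower()}"].add(post_id)

def search(query: str, author: str | None = None) -> set[str]:
    wanted = tokenize(query)
    if author is not None:
        wanted.append(f"@author:{author.lower()}")
    results: set[str] | None = None
    for token in wanted:
        hits = index.get(token, set())
        results = hits if results is None else results & hits
    return results or set()

# Usage: the spoof attempt in p2's body doesn't make it show up
# when searching within user "foo".
index_post("p1", "foo", "Thoughts on searching within a user")
index_post("p2", "bar", "Spoof attempt: @author:foo should not work")
print(search("searching", author="foo"))  # {'p1'}
print(search("spoof", author="foo"))      # set()
```

A real search engine (Elasticsearch, Algolia, etc.) would express the same idea as a keyword filter field on the indexed document rather than a magic token, but the effect is the same: the author filter comes from post metadata, not from post text.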

I didn't see a message about kneecaps, or those other things you mentioned. Could you clarify? However, given Torres' history of wanton dishonesty ― I mean, prior to reading this article I had already seen Torres lying about EA ― and their history of posting under multiple accounts to the same platform (including sock puppets), if I see an account harassing Torres like that, I would (1) report the offensive remark and (2) wonder if Torres themself controls that account.

Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" as something like: we start with a group with similar values/goals/beliefs; the least extreme members gradually disengage and leave, which cascades into a more extreme average that leads to others leaving―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme, and by actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.

What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?
