David Mathers🔸

People who deny animal consciousness are often working with a background assumption that anything can in principle be perceived unconsciously, and that in practice loads of unconscious representation goes on in the human brain. It's not clear what use a conscious pain is over and above an unconscious perception of bodily damage.

I'm working on a "who has funded what in AI safety" doc. Surprisingly, when I looked up Lightspeed Grants online (https://lightspeedgrants.org/), I couldn't find any list of what they have funded. Does anyone know where I could find such a list?

"You can have bits of your visual field, for instance, that you're not introspectively aware of but are part of your consciousness"  Maybe, but in the current context this is basically begging the question, whereas I've at least sketched an argument (albeit one you can probably resist without catastrophic cost). 

EDIT: Strictly speaking, I don't think people with the Dennettian view have to or should deny that there is phenomenally conscious content that isn't in fact introspectively accessed. What they do/should deny is that there is p-conscious content that you couldn't access even if you tried. 

I've seen Dan Dennett (in effect) argue for it as follows: if a human adult subject reports NOT experiencing something in a lab experiment, and we're sure they're sincere and were paying attention to what they were experiencing, that is immediately pretty much 100% proof that they are not having a conscious experience of that thing, no matter what is going on in the purely perceptual (functional) regions of their brain and how much it resembles typical cases of a conscious experience of that thing. The best explanation for this is that it's just part of our concept of "conscious" that a conscious experience is one that you're (at least potentially) introspectively aware that you're having.

Indeed (my point, not Dennett's), this is how we found out that there is such a thing as "unconscious perception": we found out that information about external things can get into the brain through the eye without the person being aware that that information is there. If we didn't think that conscious experiences are ones you're (at least potentially) introspectively aware of having, it's not clear why this would be evidence for the existence of unconscious perception. But almost all consciousness scientists and philosophers of mind accept that unconscious perception can happen.

Here's Dennett (from a paper co-authored with someone else) in his own words on this, critiquing a particular neuroscientific theory of consciousness: 

"It is easy to imagine what a conversation would sound like between F&L and a patient (P) whose access to the locally recurrent activity for color was somehow surgically removed. F&L: ‘You are conscious of the redness of the apple.’ P: ‘I am? I don’t see any color. It just looks grey. Why do you think I’m consciously experiencing red?’ F&L: ‘Because we can detect recurrent processing in color areas in your visual cortex.’ P: ‘But I really don’t see any color. I see the apple, but nothing colored. Yet you still insist that I am conscious of the color red?’ F&L: ‘Yes, because local recurrency correlates with conscious awareness.’ P: ‘Doesn’t it mean something that I am telling you I’m not experiencing red at all? Doesn’t that suggest local recurrency itself isn’t sufficient for conscious awareness?"

I don't personally endorse Dennett's view on this: I give to animal causes, I think it is a big mistake to be so sure of the view that you ignore the risk of animal suffering entirely, and I don't think we can just assume that animals can't be introspectively aware of their own experiences. But I don't think the view itself is crazy or inexplicable, and I have moderate credence (25%, maybe?) that it is correct.

Just the stuff I already said about the success he seems to have had. It is also true that many people hate him and think he's ridiculous, but I think that makes him polarizing rather than disastrous. I suppose you could phrase it as "he was a disaster in some ways but a success in others" if you want to.

I hate Trump as much as anyone, but it seems unlikely EA can make much difference here, given how many other well-resourced, powerful actors are trying to shape outcomes in US politics.

Yeah, I'm not a Yudkowsky fan. But I think the fact that he mostly hasn't been a PR disaster is striking, surprising, and not much remarked upon, including by people who are big fans.

The thing about Yudkowsky is that, yes, on the one hand, every time I read him I think he surely must come across as super-weird and dodgy to "normal" people. But on the other hand, it seems like he HAS done really well at getting people to take his ideas seriously. Sam Altman was trolling Yudkowsky on Twitter a while back about how many of the people running/founding AGI labs had been inspired to do so by his work. And he got invited to write on AI governance for TIME despite having no formal qualifications or significant scientific achievements whatsoever. If we actually look at his track record, he has done pretty well at convincing influential people to adopt what were once extremely fringe views, whilst also succeeding in being seen by the wider world as one of the most important proponents of those views, despite an almost complete lack of mainstream, legible credentials.

My gut instinct, given how Trump seems to view the world (i.e. in terms of personal loyalty to him), is that Ivanka Trump retweeting Situational Awareness may actually have been a more significant moment.

I think he is pretty clearly an EA, given he used to help run the Future Fund, or at most only very recently an ex-EA. Having said that, it's not clear to me that this means "EAs" are at fault for everything he does.
