Tuesday 11/06

Am I Just a Brain in a Vat?

Readings

Texts

Cases

Video

Synopsis

Today we finished watching Ex Machina and briefly discussed its philosophical import, focusing on two questions the film raises in particular:

  1. Does Ava understand--in the sense that "Watson does not understand it won Jeopardy"--that she is escaping Nathan's 'prison'?
  2. Is there something it is like to be Ava?

The problem in each case, of course, is that we have at most behavioral evidence to cite. She could merely be acting as if she wanted to escape, as if she relished her newfound freedom and the experiences it would bring her, just as she was clearly acting as if she cared for Caleb.

As loathsome a character as Nathan was, he was correct in observing that "proving an AI is really hard".

Of course, proving an AI is very nearly the same problem we face in proving each other. We take it as a bedrock assumption that there is something it is like to be our friends and lovers. They have subjective experiences just as we do. They understand us when we talk with them, just as we understand them when they talk with us. Yet all we have to go on is behavioral evidence: how they act with us and how they react to us.

How can we be sure those around us are not simply biological automatons, devoid of thoughts and feelings?

This, roughly, is the Problem of Other Minds. Given the private and perspectival nature of subjective experience (remember: it is my subjective experience in the sense that there is something it is like to be me, but it is also my subjective experience in the sense that what it is like to be me is inaccessible to anyone other than me), it is compatible with everything I can ascertain that I alone enjoy subjective experience; there are no other minds. This is a kind of solipsism, rooted ultimately in a skeptical challenge about the impossibility of our having access to another's subjective experience.
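To make the behavioral-evidence point vivid, here is a minimal sketch in Python, with every class and function name invented for illustration: a Mind we simply stipulate to have inner experience and an Automaton with no inner life at all nevertheless exhibit exactly the same behavior, so an observer confined to outputs cannot tell them apart.

```python
# A toy illustration of the Problem of Other Minds. All names are
# hypothetical; the point is only that identical behavior is compatible
# with the presence or absence of inner experience.

class Mind:
    """An agent we stipulate to have subjective experience."""

    def respond(self, prompt: str) -> str:
        self._feel_something(prompt)  # private experience: never observable
        return f"That's interesting: {prompt}"

    def _feel_something(self, prompt: str) -> None:
        pass  # stands in for genuine, inaccessible inner experience


class Automaton:
    """An agent with no inner life at all, only behavior."""

    def respond(self, prompt: str) -> str:
        return f"That's interesting: {prompt}"


def distinguishable(a, b, prompts) -> bool:
    # The observer sees outputs only, never what produced them.
    return any(a.respond(p) != b.respond(p) for p in prompts)


prompts = ["Do you understand me?", "Is there something it is like to be you?"]
print(distinguishable(Mind(), Automaton(), prompts))  # False
```

On behavioral evidence alone the two are indistinguishable, which is just Nathan's problem with Ava, and ours with one another.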

Another kind of skepticism intrudes as we think about the ramifications of subjective experience. Our individual subjective experiences exhaust our (individual) worlds, if you will. That is to say, each of us is confined in our understanding of the nature of the world to our subjective experiences (perceptions) of it. I know that this is a keyboard upon which I type and not, say, a hippopotamus because my perception of it is as of a keyboard and not a hippopotamus.

What, though, are perceptions? Aren't they ultimately tied to physical (read, 'causal') processes that impinge on specific neural structures? I see a keyboard and not a hippopotamus before me because of the way light-waves are variously absorbed and reflected by the keyboard, striking my retina and causing cascades of neural events down my optic nerve.

Suppose we were to interrupt all of these neural events and stimulate the nerves artificially. In theory we could make me see a hippopotamus and not a keyboard. Going further, could we (again, in theory at least) do the same for all of my senses, giving me the feel, smell, taste (eww!), and sounds of the hippopotamus as well?

If we can interrupt and artificially feed, if you will, all the neural processes that ultimately underwrite (somehow) my subjective experience, then I don't really need a body at all. I can just be a brain in a vat, hooked up to a computer which is busily simulating all of the experiences I take to be real by stimulating the very nerves that would formerly have assured me of the reality of the keyboard, and not the hippopotamus.
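Here is a minimal sketch of that scenario, assuming (as the thought experiment does) that experience is fixed entirely by the stream of incoming neural signals; all function names are invented for illustration.

```python
# A toy sketch of the brain-in-a-vat scenario, under the assumption that
# experience depends only on incoming neural signals. Names are hypothetical.

def experience(signal_stream):
    """The brain's point of view: a function of incoming signals alone."""
    return [f"I seem to see: {signal}" for signal in signal_stream]


def real_world():
    # Light reflected from an actual keyboard strikes an actual retina.
    yield "keyboard"


def vat_simulator():
    # A computer stimulates the optic nerve with the very same pattern.
    yield "keyboard"


# From the inside, the two sources are indistinguishable:
print(experience(real_world()) == experience(vat_simulator()))  # True
```

If the signals match, the experiences match; nothing on the brain's side of the optic nerve could reveal which source it is hooked up to.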

Maybe I'm even given a choice. I can be a brain in a vat, the subject of whatever experiences I might relish having, or I can continue on in the 'vat' of bone and muscle I currently occupy.

And how do I know I haven't already made that choice?

You see the problem: There is no way for me to tell whether I am a brain in a vat!

So what is reality?