April 23, 2005

2 suggested paths from mechanism to consciousness

Irrationality is a necessary (and perhaps sufficient) condition for consciousness?? Think about it this way. You have a system like Drescher's which compiles statistics to direct its actions in a "world." For consciousness, the most important aspect of such a system is that which

"is defined not so much by its particular set of primitives as by its ways of combining structures to form larger ones, and by its means of abstraction -- its means of forming new units of representation that allows the details of their implementation to be ignored." (Drescher, Made Up Minds, p10)

As I read it, what this means is we get a system which combines its primitive processes into higher-level ones, so that you get a "commanding" program with any number of functions, or sub-routines. For non-programmers, this means you could have the lower-level processes carrying out the detailed statistical analyses that determine actions, while the so-called commanding program is "unaware" of those activities. All it needs to operate is a "Yes" or "No" from the sub-routine based on its detailed statistical analysis.

What you end up with is a system that, if it is indeed ignoring the primitive functions, is unaware of its own internal processes and of how its actions are determined. If it receives a command, say, to stack 3 blocks that are in its world, the primitive statistical processes are going to run in order to determine the locations of said blocks and make the parts move to execute the command. The higher-order process of the system, though, undergoes only the "experience" of receiving the command, finding the blocks, and doing it. If you could ask it what it is doing, it would say it was following the order, not that it was carrying out statistical analysis. I believe this is the first step towards conscious machines. There is nothing it is like to be a mechanism which merely compiles statistics and uses them to push buttons on and off. And maybe there is nothing it is like to be a higher-level program which "sees" only the results of such compiling and uses the "Yes" or "No" to push other buttons on and off. But maybe there is something it is like to be a higher-higher-higher ... -level program which pulls vast amounts of different types of input together in one orderly mechanism and is ignorant of the extraordinarily complex underworkings.
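
A minimal sketch of the kind of hierarchy I mean, in Python (the function names and the toy statistics are mine, not Drescher's; a real schema mechanism would be far more involved):

    import random

    def looks_like_block(readings):
        # Low-level routine: crunches many noisy sensor readings and
        # collapses all that statistical detail into a single Yes/No.
        return sum(readings) / len(readings) > 0.5

    def find_block(world):
        # Mid-level routine: scans locations, consuming only the Yes/No.
        for location, readings in world.items():
            if looks_like_block(readings):
                return location
        return None

    def stack_three_blocks(world):
        # The "commanding" program. Its whole "experience" is: got the
        # command, found the blocks, stacked them. The statistics that
        # did the real work are invisible from up here.
        stacked = []
        for _ in range(3):
            location = find_block(world)
            if location is None:
                break
            stacked.append(world.pop(location))
        return stacked

    # A toy "world": five locations, each with ten noisy readings.
    world = {(x, 0): [random.random() for _ in range(10)] for x in range(5)}
    stack_three_blocks(world)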

Why could this be true? Because it could be true that we are such systems. The complex set of processes that goes into pouring myself a bowl of cereal feels, from the inside, orders of magnitude simpler than it actually is.

9 Comments:

Blogger Dan(iel) said...

This is only one path, despite my title. I decided to leave the other one out for another post. Sorry.

4/23/2005 02:19:00 AM  
Blogger Ignacio Prado said...

This is cool. The key question is re:

Dan(iel) said . . .

"What you end up with is a system, if it is indeed ignoring the primitive functions, that is unaware of its own internal processes and how its actions are determined."

The question is: why, since we are epistemically closed (at least right now, maybe in principle) to how our actions are determined, assume that they are entirely determined at the sub-personal level?

4/23/2005 03:27:00 PM  
Blogger Dub! said...

Hi Dan,

"Irrationality is a necessary (and perhaps sufficient) condition for consciousness??"

What's this sentence all about? Is this the second path that was left out for another post?

"maybe there is nothing it is like to be a higher-level program which "sees" only the results of such compiling and uses the "Yes" or "No" to push other buttons on and off. But maybe there is something it is like to be a higher-higher-higher ... -level program which pulls vast amounts of different types of input together in one orderly mechanism and is ignorant of the extraordinarily complex underworkings."

What's the distinction between a higher-level and a higher-higher-level program? As long as you have the right sort of input-output transformation, I don't know if it matters whether you're running three subprograms or five thousand. From the top-down perspective, the system is blind to the number of subprograms it's actually running.

Ignacio said:

"why, since we are epistemically closed (at least right now, maybe in principle) to how our actions are determined, assume that they are entirely determined at the sub-personal level?"

Not sure I understand this. By sub-personal level, do you mean processes within the brain? Are you asking why we should assume that our actions are determined by physical processes, or are you asking something else?

4/23/2005 04:26:00 PM  
Blogger Ignacio Prado said...

Hi Richard,

If I understand Dan(iel)'s post (correct me if I am way off, Dan(iel)), I think he is in a way defining or explicating "irrationality" as: (i) necessary blindness at a higher-order command level to information at a lower-order level of a cognitive system, and/or (ii) necessary blindness by the higher-order command level to the causal complexity of the deterministic, lower-order mechanisms that produce action (therefore creating a need for unitary / agential consciousness, to help the system navigate while having imperfect access to its own informational and causal states). Jesus, this is way too much jargon.

The "personal" or consciousness emerges because the sub-personal can't organize itself efficiently without it(therefore, consciousness is the necessary product of "irrationality" in his sense). This is pretty much Dennett's line, no?

This is cool because traditionally "irrationality" just means blindness to salient information that is available to an agent. So we get an updated explication of "irrationality" using cognitive science.

My question was motivated by the following concern: why would we need the illusion of consciousness if the processes at the sub-personal level are in fact deterministic? A fully deterministic mechanism should have no need for illusions of agency: it just does what it deterministically needs to do. An epiphenomenal higher-order system that just says "yes" or "no" to the results of deterministic causal processes seems unnecessary and wasteful by evolutionary standards. This is Meixner's point in the excerpt of the book I linked to below. Consciousness only seems necessary if we assume an indeterministic world.

If we start, as we always do, from the first-person perspective trying to justify our beliefs to each other (including our beliefs about what the first-person perspective is), and then we give an analysis of the question of what the first-person is by saying that the first-person is a kind of necessary illusion of agency created by the lower-order complexities and "irrationality" in the system, then what is the criterion for justifying the belief that the mechanistic model Dan(iel) proposes is the fundamental one and not just a projection out of the first-person to explain itself in objective terms? What is it that gives metaphysical priority to one level over the other? If we can't actually (because of principled epistemic limitations) account for all the sub-personal facts that we claim are determining what is going on at the personal level, then why assume they are there? We don't need this assumption to do cognitive science, right? So what's the compelling need to slim the ontology down? Longest question ever.

4/23/2005 05:38:00 PM  
Blogger Dan(iel) said...

1. Yes, the bit about irrationality is the #2 path I was saving for another post, but Ignacio has pointed out how it is connected to this one.

2. Richard asks what the difference is between a higher-level program and a higher-higher-higher-level one. I was troubled by this question as I wrote this post, but I wanted to get the post up quickly, and I knew the question would come up. Answer? Not sure. It's possible that there is something it is like to be the first higher-level one, but I personally doubt it. It is still way too simple a mechanism. As we saw from Marr's "Vision" chapter, even the fly's vision system is mechanistic.

But if you are committed to there being no important difference when you get a deep enough hierarchy, I suspect you have to be a dualist. Take humans. We have consciousness, i.e., there is something it is like to be a human. We also have oodles of processes going on physically in our brains which result in our conscious experience. We are unaware of said processes, only the end result. There must be an algorithm getting followed under our skulls to get from the physical processes to experience. If you disagree, I think you must be a dualist. (Broadly similar) algorithms are implemented in computers and machines. If they can make us conscious, they can make the machine conscious.

At what point does that happen? I think Hofstadter is on the right track that there is no discrete point at which, voila, you have consciousness, but that it happens in degrees. The more complex the hierarchy, the more conscious (???). Sounds like an empirical question. After we start learning how brain processes result in consciousness, we can analyze lower animals on up to us.

--Sorry if I am all over the place. I'm typing frantically and just sending it up.

3. Ignacio writes: "why, since we are epistemically closed (at least right now, maybe in principle) to how our actions are determined, assume that they are entirely determined at the sub-personal level?"

Good question; this is a *third* potential track to an indication of consciousness. If you cannot determine what a system is doing, or why it is doing what it is doing, simply from examining its underlying structure, maybe *that's* when we have consciousness. But there's a paradox here. Here's a rough sketch: I'm thinking of a machine, or robot, that we program in such a way that it makes its own determinations about what actions to perform. On one level the machine is acting independently, making its own decisions; on another level, it is merely executing a program. Which one is right? I'm not sure. Can we simply look at the programming algorithm to answer the question "What is it doing?" or "Why is it doing X?" I'm not sure of that either. Is this kind of programming possible? I don't know that either!

But if it is, and we have to ask it what it's doing, as opposed to examining the programming, I think we might have a conscious system.

4. What does this have to do with irrationality? I think Ignacio and I are on to the same point, but he puts it differently than I do. (And yes, I think Dennett would assent to most or all of this.) Rationality involves behaving according to some set of rules or algorithm - logic, basically. A purely mechanistic system is "rational" in this sense. It "follows orders" on a near perfectly consistent basis. But if you can create a system like the one I describe, which is programmed to figure out its own actions, there's a sense in which the system has room to stray from the mechanism, leading to irrationality. So the irrational aspect of consciousness maybe coincides with my previous idea.

I also realize there is a difference between a system programmed to make its own decisions, and a system with lower-order processes it is unaware of. I apologize for interchanging the two a bit, but since this is informal I am not worrying too much about it. I only hope to have generated some ideas and it would be great to see what you guys think.

4/23/2005 07:22:00 PM  
Blogger Dan(iel) said...

As for your "longest question ever": that's the one that REALLY bakes the noodle. Can I write it off as an external question? :)

4/23/2005 07:25:00 PM  
Blogger Winston said...

I think it's possible that having many layers of abstraction (I think that's roughly what you're getting at) is a necessary condition for consciousness, but I doubt it's a sufficient condition. It's often useful to look at real-world examples, and we fortunately have a perfect model. It's the computer right in front of you.

One of the lessons that is ingrained early into the mind of every computer programmer is the idea of separating implementation from interface. If you're writing a program to, say, do some physics simulation, you may find yourself often calculating the position of a body at time t, given its initial position and velocity. If you find yourself doing the same thing over and over again, it's useful to write a specific function for it. Suppose it's called CalcPosition in this case. You give the input variables to CalcPosition, and it returns some output.

One of the virtues of using the CalcPosition function is that you don't need to know how it reaches its answer; you just trust that it does reach the right answer. You can specify the interface of CalcPosition like this: it takes initial position, velocity, and final time, and returns the position at the final time. Then you can get someone else to implement it; they figure out how to actually do it. If their implementation is inefficient, you could replace it with a better algorithm, and maintain the same external interface.
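
Winston's CalcPosition, sketched in Python (the constant-velocity formula is my guess at an implementation; any other one could sit behind the same interface):

    def CalcPosition(initial_position, velocity, final_time):
        # Interface: initial position, velocity, final time in; position out.
        # Implementation: simple constant-velocity motion, x = x0 + v*t.
        # A fancier numerical integrator could replace this body and no
        # calling code would ever know.
        return initial_position + velocity * final_time

    CalcPosition(0.0, 2.0, 3.0)  # -> 6.0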

These lower-level components are used by higher-order functions. If we're programming a robot to walk, we might have functions like CalcFootTrajectory, which might make use of CalcPosition. Higher-order functions might be something like WalkForward or GoToStore. Even these commands can be implemented in different ways; maybe WalkForward will be implemented with a Monty Python-esque funny walk, and GoToStore might be implemented by walking or crawling or by taking the scenic route there, and so on. A toy sketch of that layering follows below.
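
Only the function names here come from the comment; the bodies are placeholder implementations of my own:

    def CalcPosition(x0, v, t):
        return x0 + v * t  # same one-liner as above

    def CalcFootTrajectory(start, step_length):
        # Uses the lower-level routine; how it does so is hidden from
        # everything above it.
        return [CalcPosition(start, step_length, t / 10.0) for t in range(11)]

    def WalkForward(position, steps):
        # Could just as well be a Monty Python-esque funny walk; callers
        # of WalkForward couldn't tell the difference.
        for _ in range(steps):
            position = CalcFootTrajectory(position, 0.5)[-1]
        return position

    def GoToStore(position, store_position):
        # Top of the hierarchy: it only "experiences" walking until it
        # arrives, not the trajectory math happening underneath.
        while position < store_position:
            position = WalkForward(position, 1)
        return position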

Almost every piece of software on your computer is written this way. Microsoft Windows consists of layer upon layer upon layer of computer code. When you click the Send button in your email program, that calls a specific function. At this level, there's no need to be aware of all the tricky details of sending stuff over the internet; that gets handled by lower level code.

Anyway, there are plenty of ways you can keep piling it on that will result in something completely useless and uninteresting. This is why I say that simply having many levels doesn't seem to be a sufficient condition for consciousness.


I'm not really sure in what sense the thing is irrational... In point 4 above, you say that the system has room to stray from its mechanism -- but it can't. Since we've tacitly stipulated that this is a computer, it must always be following some algorithm. Maybe you mean that it can be mistaken about which algorithm it's running? What I'm wondering is this: who or what, exactly, is mistaken?

Here's a question along similar lines (maybe it's really the same one?). If, for consciousness, it's necessary for a system to be unaware of what's going on underneath, doesn't that imply that the system is already "aware" of something and therefore conscious?

4/25/2005 01:06:00 AM  
Blogger Dub! said...

Oooh, very good post, Winston.

Also, I'm not sure that having many levels is necessary for consciousness. It's not obvious that you couldn't make a neural net, with no clearly demarcated compositional subsystems, that is functionally identical to a multi-layered GOFAI program.
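
A tiny concrete case of that (my own toy example, not from the thread): XOR computed once as a layered program of named subroutines, and once by a hard-wired 2-2-1 threshold net with no subroutine structure at all. The two are functionally identical on every input:

    def xor_gofai(a, b):
        # Layered and compositional: XOR built from named subroutines.
        def NOT(x): return 1 - x
        def AND(x, y): return x * y
        def OR(x, y): return min(x + y, 1)
        return AND(OR(a, b), NOT(AND(a, b)))

    def xor_net(a, b):
        # Same input-output behavior from a tiny threshold network with
        # hand-set weights; no part of it is a demarcated subroutine.
        step = lambda x: 1 if x >= 0 else 0
        h1 = step(a + b - 0.5)      # fires if at least one input is on
        h2 = step(-a - b + 1.5)     # fires unless both inputs are on
        return step(h1 + h2 - 1.5)  # fires only if both hidden units fire

    assert all(xor_gofai(a, b) == xor_net(a, b)
               for a in (0, 1) for b in (0, 1))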

4/25/2005 12:12:00 PM  
Blogger Ignacio Prado said...

Daniel said,

"Rationality involves behaving according to some set of rules or algorithm - logic, basically. "

But rationality (or at least intelligence) is more than this: it is responding to your situation appropriately. Sadly, George W. Bush consistently acts upon some basic algorithms, but they are no sign of intelligence in his case.

4/29/2005 01:53:00 AM  
