November 2, 2004

Determinism and Prediction

This post is a response to Daniel's post below. Be sure to read his post -- otherwise, this one won't make much sense. I originally put it in the comments of his post, but I think it's interesting enough to post as its own topic, since it's something I've thought about a lot. Also, maybe we'll get more lively discussion this way? I have a feeling that the usual blog format (each post with its own comments) isn't the best format for a broad, ongoing discussion -- not that I think that everybody posting their own new entry whenever they have anything to say is the best approach either. We'll just have to use some discretion, I think. If you think I'm wrong, feel free to express your views as a comment on this post, where no one will ever see it.

Anyway... I've spent a lot of time puzzling over the problem of prediction and determination, spurred on by interminable discussions of Newcomb's problem. I've decided that the problem that Daniel describes, the paradox of action and prediction in a deterministic world, is not really a problem at all, but an illusion, one that I hope to dispel here.

Say we have a prediction machine P. Given the state of a system S at time 0, it can predict the state of S at time t, before t actually comes about. I'd like to note that we actually have devices that can predict what a system is going to do (e.g., a computer calculating a projectile's trajectory), but these differ from our machine P in an important way: they simplify the problem by making certain assumptions about the system and throwing out irrelevant information. In a deterministic world with P being a perfect predictor of any S in that world, P must take into account every bit of information about the initial state of S. This, I think, is not an unreasonable picture.
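To make the contrast concrete, here's a rough Python sketch of the kind of simplifying predictor I have in mind -- the numbers and names are just made up for illustration. It projects a projectile's position from its initial state while ignoring air resistance, wind, and everything else a perfect P would have to account for:

    # A toy predictor that simplifies its system: it assumes only gravity
    # acts on the projectile and throws everything else away.
    G = 9.81  # gravitational acceleration, m/s^2

    def predict_position(x0, y0, vx, vy, t):
        """Predict where the projectile will be at time t under gravity alone."""
        x = x0 + vx * t
        y = y0 + vy * t - 0.5 * G * t * t
        return x, y

    # Where will it be 2 seconds after launch?
    print(predict_position(0.0, 0.0, 10.0, 15.0, 2.0))  # -> roughly (20.0, 10.38)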

The problem arises when you try to use P to predict the behavior of a system that contains P itself. If P's output/behavior is dependent on its prediction of what it is going to do, this is an impossible feat. (There is the trivial case where all possible predictions would lead to the same output, but that situation isn't very interesting. I'm going to focus on the case where P's output is conditional on its prediction of its own output.) Suppose P could predict its own output, and its behavior was conditional on that prediction. It has two output lights, labeled A and B. You ask it to make predictions this way: "If you predict John Kerry will win the election, turn on A; otherwise turn on B." In this case, we make the following request: "If you predict that you will turn on light A, turn on light B. Otherwise turn on light A." So if it predicts A, then it'll flash B, and if it predicts B, it'll flash A. Now we flip the switch to make P go. Which light turns on? We've stipulated that P can predict what P is going to do, but to actually do so, it must see to the end of an infinite regress. The absurd conclusion gives us a reductio, so we should reject the claim that there can exist a P that makes self-prediction-conditional actions.
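To see the regress in miniature, here's a hedged Python sketch of the two-light setup -- the function names are mine, and the predictor is of course imaginary. The only way P can know its own output is to simulate itself, and that simulation never bottoms out:

    def predict_own_output():
        """What P predicts it will do: to find out, it has to run itself."""
        return act_on_prediction()

    def act_on_prediction():
        """Our request: if you predict A, turn on B; otherwise turn on A."""
        prediction = predict_own_output()
        return "B" if prediction == "A" else "A"

    try:
        print(act_on_prediction())
    except RecursionError:
        print("P never finishes predicting itself")  # the regress has no end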

So the Minority Report-like paradox that you propose isn't possible, because in this case, the predictor's output necessarily depends on its prediction of its output. Here's the picture: P predicts your action, but since your action depends on its prediction, it must first predict its own prediction. And as we've seen, that's not possible. You could take a step back and say that the predictor just makes a very good guess instead of a perfect prediction (as in the actual Minority Report movie), but then the paradox is no longer a paradox. It's still a conundrum, to be sure, but of a different sort. It becomes a question of whether or not you believe the predictor, how much reason you have to doubt it, and so on.

Now it's time to take back some of what I said: I stated that P was impossible, but I don't mean that in the strict sense of the word. Let me put it this way: if P were possible, we would have to give up a lot of things we take for granted; we'd have to cut out a huge swath of our web of belief. Though I haven't actually done it yet, I'm pretty sure that I could concoct a system in which, using a similar method, a group of predictors acts as a Turing machine that can predict its own output, thus solving the halting problem. The halting problem is undecidable: no Turing machine program can determine, for every Turing machine program and input, whether that program will eventually halt. As I understand it, this result is closely related to Gödel's incompleteness theorem, which states that in any consistent formal system powerful enough to represent arithmetic, there are statements that can neither be proven nor disproven within the system. So if P existed, it would be a physical method of settling statements that logic alone leaves undecided. If we can make P, then perhaps we could decide the truth of a Gödel sentence -- "This statement is not provable" -- after all. If we could physically prove things that we can't logically prove, well, then the world is a much weirder place.
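For what it's worth, the textbook argument that no halting-oracle can exist fits in a few lines of Python. This is the standard diagonal construction, not anything specific to this post, and halts is of course hypothetical:

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("no such procedure can exist")

    def contrary(program):
        """Do the opposite of whatever the oracle predicts about program(program)."""
        if halts(program, program):
            while True:   # loop forever if the oracle says we'd halt
                pass
        return            # halt at once if the oracle says we'd loop

    # Feed contrary to itself: if halts(contrary, contrary) is True, then
    # contrary(contrary) loops forever; if it's False, contrary(contrary)
    # halts. Either way the oracle is wrong, so no correct halts can exist.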

This scenario of revising logic is unlikely, but it's happened before: Quine and Putnam suggested that quantum mechanics necessitated adoption of a nonstandard system of logic, so I suppose that in a very broad sense, P is possible (but exceedingly unlikely, given what we know about the world now). That said, I don't think they had anything quite this radical in mind.

Addendum: It's 8 hours later, and I've changed my mind. I think it's completely impossible for the predictor to exist. Forget all that clever stuff about Gödel and the halting problem. In the flashing-light setup above, there's simply no answer it can give that will be correct.
