November 2, 2004

Determinism and Prediction

This post is a response to Daniel's post below. Be sure to read his post -- otherwise, this one won't make much sense. I originally put it in the comments of his post, but I think it's interesting enough to post as its own topic, since it's something I've thought about a lot. Also, maybe we'll get more lively discussion this way? I have a feeling that the usual blog format (each post with its own comments) isn't the best format for a broad, ongoing discussion -- not that I think that everybody posting their own new entry whenever they have anything to say is the best either. We'll just have to use some discretion, I think. If you think I'm wrong, feel free to express your views as a comment on this post, where no one will ever see it.

Anyway... I've spent a lot of time puzzling over the problem of prediction and determination, spurred on by interminable discussions of Newcomb's problem. I've decided that the problem that Daniel describes, the paradox of action and prediction in a deterministic world, is not really a problem at all, but an illusion, one that I hope to dispel here.

Say we have a prediction machine P. Given the state of a system S at time 0, it can predict the state of S at time t, before t actually comes about. I'd like to note that we actually have devices that can predict what a system is going to do (e.g., a computer calculating a projectile trajectory), but these differ from our machine P in an important way: they simplify the problem by making certain assumptions about the system and throwing out irrelevant information. In a deterministic world with P being a perfect predictor of any S in that world, P must take into account every bit of information about the initial state of S. This, I think, is not an unreasonable picture.

The problem arises when you try to use P to predict the behavior of a system that contains P itself. If P's output/behavior is dependent on its prediction of what it is going to do, this is an impossible feat. (There is the trivial case where all possible predictions would lead to the same output, but that situation isn't very interesting. I'm going to focus on cases where P's output is conditional on P's prediction of P's output.) Suppose P could predict its own output, and its behavior was conditional on that prediction. It has two output lights, labeled A and B. You ask it to make predictions this way: "If you predict John Kerry will win the election, turn on A; otherwise turn on B." In this case, we make the following request: "If you predict that you will turn on light A, turn on light B. Otherwise turn on light A." So if it predicts A, then it'll flash B, and if it predicts B, it'll flash A. Now we flip the switch to make P go. Which light turns on? We've stipulated that P can predict what P is going to do, but to actually do so, it must see to the end of an infinite regress. The absurd conclusion gives us a reductio, so we should reject that there can exist a P that makes self-prediction-conditional actions.
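
The two-light setup can be made concrete in a few lines of code. In this sketch (Python; all the names are illustrative, not a real machine), run_predictor asks P for its self-prediction and then carries out the mandated behavior; whichever answer P gives, its actual output contradicts it:

```python
def run_predictor(predict):
    """Ask P which light it will turn on, under the instruction:
    'if you predict A, turn on B; otherwise turn on A'."""
    prediction = predict()                      # P's prediction of its own output
    actual = "B" if prediction == "A" else "A"  # the mandated behavior
    return prediction, actual

# Neither possible answer can be correct:
for answer in ("A", "B"):
    prediction, actual = run_predictor(lambda: answer)
    assert prediction != actual
```

The point of the sketch is that the contradiction doesn't depend on how clever P is: the instruction itself guarantees that every possible self-prediction is falsified by the behavior it triggers.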

So the Minority Report-like paradox that you propose isn't possible, because in this case, the predictor's output necessarily depends on its prediction of its output. Here's the picture: P predicts your action, but since your action depends on its prediction, it must first predict its prediction. And as we've seen, that's not possible. You could take a step back and say that the predictor just makes a very good guess instead of a perfect prediction (as in the actual Minority Report movie) but then the paradox is no longer a paradox. It's still a conundrum, to be sure, but of a different sort. It becomes a question of whether or not you believe the predictor, and how much reason you have to doubt it, and so on.

Now it's time to take back some of what I said: I stated that P was impossible, but I don't mean that in the strict sense of the word. Let me put it this way: if P were possible, we would have to give up a lot of things we take for granted; we'd have to cut out a huge swath of our web of belief. Though I haven't actually done it yet, I'm pretty sure that I could concoct a system in which, using a similar method, a group of predictors act as a Turing machine that can predict its own output, thus solving the halting problem. The unsolvability of the halting problem is the thesis that no Turing machine can determine, for every Turing machine program, whether that program will halt. As I understand it, this result is closely related to Godel's incompleteness theorem, which states that in any formal system powerful enough to represent arithmetic, there are statements that cannot be proven true or false. So if P existed, it would be a physical method of proving an unprovable (via logic) statement true or false. If we can make P, then perhaps we could decide the truth value of "This statement is false" after all. If we could physically prove things that we can't logically prove, well, then the world is a much weirder place.
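
The diagonal argument behind the unsolvability of the halting problem has exactly the shape of the two-light machine. Here's a toy sketch of it (a drastic simplification in which "halting" is collapsed to a boolean return value, so the contradiction can be checked directly; halts and make_diagonal are illustrative names, not a real decider):

```python
def make_diagonal(halts):
    """Given any candidate halt-decider, build the program that does
    the opposite of whatever the decider predicts about it. In this
    toy model, a program 'halts' iff calling it returns True."""
    def diagonal():
        return not halts(diagonal)  # do the opposite of the verdict
    return diagonal

# Whatever the candidate decider answers, it is wrong about diagonal:
for verdict in (True, False):
    diagonal = make_diagonal(lambda prog: verdict)
    assert diagonal() != verdict
```

In the real proof, diagonal loops forever instead of returning True, but the structure is the same: the program consults the prediction about itself and then defeats it, just as our machine P was instructed to do with its lights.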

This scenario of revising logic is unlikely, but it's happened before: Quine and Putnam suggested that quantum mechanics necessitated adoption of a nonstandard system of logic, so I suppose that in a very broad sense, P is possible (but exceedingly unlikely, given what we know about the world now). That said, I don't think they had anything quite this radical in mind.

Addendum: It's 8 hours later, and I've changed my mind. I think it's completely impossible for the predictor to exist. Forget all that clever stuff about Godel and the halting problem. In the flashing light setup above, there's simply no answer it can give that will be correct.

9 Comments:

Blogger Winston said...

I think the example of the predictor with the A and B lights predicting which light it will turn on might seem a bit remote and abstract, so here's another example: Suppose the predictor has buttons labeled X and Y, and you ask it to predict which button you will press. But before you ask it that question, you have it planned out that if it says X, then you will press Y, and vice versa. Here the predictor simply can't give a correct prediction, because the prediction interacts with the very action it is predicting. (I realize there may be some weaselly solutions to the predictor's actions, like "it won't turn either on" or "it'll flash X and Y," but suppose your instruction to it is specific enough to rule out anything but your pressing X or Y.)

11/03/2004 10:24:00 PM  
Blogger Dub! said...

Continuing from the discussion being conducted in Daniel's topic:

Winston, you write,

"there's no reason that a predictor couldn't have access to a summary of its own state from which it could deduce that it's working properly and will continue to do so. This is just a straightforward reduction."

I'm not sure what you mean by a 'summary' of a machine's state. If it throws out noise that could potentially be relevant, then that's not a straightforward reduction at all. If you're saying that it doesn't throw out any potentially relevant noise... well, that's exactly the kind of thing that Godel's theorem rules out.

Let's go through the example you give in your postscript:

"Suppose you have a computer program stored in memory, and that each bit has three possible states: the normal states 1 and 0, and the error state (akin to the errant electron) is 0.5."

I was going to object to your treatment of 'bits' as fundamental, but it looks like you foresaw this objection:

"You might object that the program can't tell if the hardware it's running on is error-free, but that's irrelevant. The abstract system is itself meant to be taken as the deterministic "universe," not the physical realization of it on an actual computer. I think that if we're going to assume a deterministic universe, then this abstract formal system is a perfectly acceptible model of it. Picture Conway's Game of Life -- that can serve as a model of an abstract system which is a deterministic universe."

I still think that in the example you give, you're not treating the predictor's universe as a closed formal system. Let's see if you'd agree with this setup: It's possible to model a Turing Machine in Conway's Game of Life universe. Let's say we do so, and we build a prediction machine out of that Turing Machine. You want to say that this Turing Machine can have perfect knowledge of the state of its Life universe, correct?
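
(For concreteness: the determinism of the Life universe is just the fact that the next state is a pure function of the current state. A minimal sketch in Python, with the universe represented as a set of live cells:)

```python
from collections import Counter

def life_step(live):
    """One tick of Conway's Game of Life. `live` is a set of (x, y)
    cells; the entire next state is a pure function of the current
    state -- determinism in its simplest form."""
    # Count, for every cell adjacent to a live cell, its live neighbors.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2, perfectly predictably:
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```

Nothing outside the function influences the next state, which is what makes the Life world such a clean model of a closed deterministic universe -- and why it's a good arena for asking whether a predictor built inside it can know its own state.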

"If they're [the bits] all ones and zeros, then the program can be certain that all its future states will be error free."

You're missing an important premise: "*and* the program knows its states are all zeros and ones."

"It could sweep through its entire memory, bit by bit, and upon finding all ones and zeros, it could conclude that all its future states will be error-free. This is the sort of reduction I mentioned before."

No, it can't do that! That would produce a Godel sentence, which is a logical fallacy. The Turing Machine can't have knowledge of all the bits that compose it.

I think you're conflating "the memory of the Turing Machine predictor in the Life Universe" with "the memory of the computer on which the Life Universe is running". Think of it this way: how would the machine determine whether a bit is in the zero or one state? If it just reads it off the memory of the computer that the Life universe is running on, it is accessing something outside the Life universe. More importantly, if it somehow did get the information, how would it store it? It has to somehow store that information in its memory system within the Life Universe, and that stored information has to itself take stock of the state of the memory system. This isn't possible.

11/04/2004 06:37:00 PM  
Blogger Winston said...

A recap:

W: It could sweep through its entire memory, bit by bit, and upon finding all ones and zeros, it could conclude that all its future states will be error-free. This is the sort of reduction I mentioned before.

R: No, it can't do that! That would produce a Godel sentence, which is a logical fallacy. The Turing Machine can't have knowledge of all the bits that compose it.

Yes it can. What you're referring to is that it can't have simultaneous knowledge of all the bits that compose it -- that would be a logical fallacy. But it can scan each bit one at a time and examine it. If you really, really don't believe that this is possible, I could show you a short C program that does exactly what I described. It would scan each bit in its memory; if the bit is 0 or 1, then it continues to the next bit; if it's 0.5, then it reports an error. It doesn't need to know its entire memory state at one time; all the bits get written into the same piece of memory, just at different times. That they're changing as you run it isn't really a big issue; you just want to make sure that there are no 0.5s. Of course, if you actually run this on a computer, it won't tell you anything interesting, because it won't find 0.5 as the state of a bit.
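
Something like this sketch, in Python rather than C for brevity (memory modeled as a list of cell values, with 0.5 standing in for the hypothetical error state):

```python
def scan_memory(cells):
    """Sweep the memory one cell at a time. Only one cell is ever
    examined at once, so no simultaneous snapshot of the whole state
    is needed. Returns the index of the first error cell, or None if
    every cell is a clean 0 or 1."""
    for i, cell in enumerate(cells):
        if cell not in (0, 1):
            return i  # found the 0.5 "errant electron" state
    return None

assert scan_memory([0, 1, 1, 0]) is None    # clean memory
assert scan_memory([0, 1, 0.5, 0]) == 2     # error detected at index 2
```

The loop only ever holds the current cell's value, which is the whole point: a sequential sweep sidesteps the need for the simultaneous self-snapshot we agree is impossible.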

But anyway, we still have the problem of error-checking the error-checking routine. I think this sort of skepticism is inescapable, even with the Laplacean Demon outside the world it's making predictions about. So if we're going to say that a predictor can never really predict what it's going to do because it can't know if it's going to make a mistake, then we'd have to level the same criticism at the Laplacean Demon.

Anyway, here's an example of a straightforward reduction. You have a perfect cube of atoms arranged in a perfect lattice, that's moving at some velocity, and you want to predict the position of every single atom in it at time t. Instead of tracking the trajectory of every single atom, you can just find the center at time 0 and the position of each atom relative to the center, calculate where the center will be at time t, and then add the relative position of each atom to that value.
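
In code, that shortcut might look like this (a minimal sketch; positions are (x, y, z) tuples, and the cube is assumed to stay perfectly rigid):

```python
def predict_positions(offsets, center0, velocity, t):
    """Rigid-body shortcut: instead of integrating every atom's
    trajectory, advance only the center of mass and re-apply each
    atom's fixed offset from it."""
    cx = center0[0] + velocity[0] * t
    cy = center0[1] + velocity[1] * t
    cz = center0[2] + velocity[2] * t
    return [(cx + ox, cy + oy, cz + oz) for (ox, oy, oz) in offsets]

# Two atoms straddling the center, cube drifting along x at speed 2:
atoms = predict_positions([(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)],
                          center0=(0.0, 0.0, 0.0),
                          velocity=(2.0, 0.0, 0.0), t=3.0)
assert atoms == [(7.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
```

The reduction works precisely because the rigidity assumption makes the per-atom information redundant: one trajectory plus a table of fixed offsets carries everything the full simulation would.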

11/04/2004 07:24:00 PM  
Blogger Dub! said...

Two quick comments (I gotta get some other work done!)

1) "if we're going to say that a predictor can never really predict what it's going to do because it can't know if it's going to make a mistake, then we'd have to level the same criticism at the Laplacean Demon."

I agree, but that's not what I'm arguing against. The issue is whether a Laplacean demon can make perfect predictions about a universe it is in (whether it is skeptical about being perfect or not!). If a Laplacean demon is looking at a deterministic universe from the outside, then it *can* make 100% perfect predictions about that universe... though it will never be quite sure that the knowledge is perfect. But that's knowledge about itself, not the universe... it can still be perfect without knowing it's perfect. I thought that we were debating whether a Laplacean demon could make perfect predictions from inside its universe. I claim that that's an impossibility.

2) "If you really, really don't believe that this it's possible, I could show you a short C program that does exactly what I described. ... It doesn't need to know it's entire memory state at one time; all the bits get written into the same bit of memory, just at different times."

I'm really losing track of this example. Let me try to rebuild it. The C program is written on bits in a computer. The program scans each bit that it is composed of, and when it scans each bit, it checks to make sure it is not a 0.5. We agree that it can't store all the information about its state at one time in the system itself.

So this proves that it can tell that none of its states are 0.5s. Where does this get us? I thought you wanted to say that because it could prove that there weren't 0.5s, it could predict things with perfect accuracy.

I introduced the errant electron as an example of something about its physical system that it probably wasn't considering. Is your model just to show that it *could* consider that? That's fine, but in so doing, it will have to ignore other information about its state that could potentially have consequences it doesn't predict. Maybe there's a piece of really crappy code in there that it isn't considering, such that when it 'chooses' to raise the wrecking ball, it will end up lowering it by mistake. My point about the electron was that the system necessarily ignores *some* information about itself, not a specific piece of information. An electron would just be a likely thing for it to ignore; if it pays attention to certain electrons, it'll have to ignore other bits of information.

(By the way, this was a really clumsy sentence on my part: "That would produce a Godel sentence, which is a logical fallacy". Firstly, it's an impossibility, not a fallacy, and secondly, it doesn't really "produce a Godel sentence" at all, whatever that means. I hate that these comments can't be edited... Blakely, no opening this up to the public or to our professors until I learn to think before I write!)

11/04/2004 08:08:00 PM  
Blogger Winston said...

Well, we should make sure we have things straight: I'm saying that there could be a perfect predictor in the sense that it won't make mistakes, but there can't be one that's capable of predicting what it, itself, will do, in every possible case. But there could be one that can make some predictions about itself, within limits. I'm not sure if the Laplacean Demon has other connotations, so I'm going to try to avoid it.

I didn't mean to say that if the computer program has no 0.5s, it can know that its predictions are accurate -- there could very well be some crappy code that causes it to lower the wrecking ball in an attempt to save my life. And I agree that no program could vet itself to make sure there are no mistakes without starting with the assumption that some error-checking part of it is error-free. So it's still always vulnerable to skepticism about itself.

My point is that there could be an error-free predictor, and the way I read your point is that there could be an error in the predictor that the predictor is unaware of, but those two possibilities aren't mutually exclusive. To rule out my claim, you'd have to argue that there must be an error in the predictor. Like your take on the Demon, my predictor can't prove to itself that it's error-free. But I'm just saying that it could exist, not that it could be certain of itself.

11/04/2004 09:09:00 PM  
Blogger Dub! said...

Things are clearer to me now.

"I'm just saying that it could exist, not that it could be certain of itself."

I still want to say that it can't exist.

"My point is that there could be an error-free predictor [that is, a machine that has knowledge of the future], and the way I read your point is that there could be an error in the predictor that the predictor is unaware of,"

Well, I don't know if I want to say there's an *error* it's not aware of. I want to say that the system is unaware of the material state it is in. So when you say, "To rule out my claim, you'd have to argue that there must be an error in the predictor," I think the word 'error' is confusing the argument I'm making. I just have to argue that the predictor cannot know its entire physical state. And we both agree that that's true.

Here's a more formal version of my argument. Tell me where you disagree.

P1) The physical state of the predictor has effects in the material world.
P2) The predictor cannot know everything about its physical state. There are facts about its physical state it does not know.
P3) Unknown facts about the predictor's physical state might cause its prediction about the universe to be wrong.
C) The machine's prediction might be wrong.

It seems to me that you'll try to deny P3, but I don't see how you can do this. The machine can't rule out that the state it's in will affect the world in relevant ways, because ruling that out would involve knowing something about the physical facts it knows nothing about - namely, that they will not affect its prediction.

11/04/2004 10:16:00 PM  
Blogger Winston said...

Framed this way, I'd say that I'm denying that the conclusion is relevant. It all hinges on the word "might." That word suggests an epistemic shortcoming, but if you have complete knowledge, from a third-person perspective of a deterministic world, it makes little sense -- the predictor either is right or it isn't. So yes, you can say that, given your or the predictor's limited knowledge, it might be wrong. But given complete knowledge (from the third-person perspective) it either is right or it is wrong. I'm just picking a predictor that happens to be right; it happens that there are no unknown physical facts that mess it up.

The proper way to look at the argument is this:
P1) The physical state of the predictor has effects in the material world.
P2) The predictor cannot know everything about its physical state. There are facts about its physical state it does not know.
P3) Unknown (to the predictor) facts about the predictor's physical state might (relative to the predictor's body of knowledge) cause its prediction about the universe to be wrong.
C) The machine's prediction might (relative to the predictor's body of knowledge) be wrong.

It seems that you're saying that every predictor might be wrong in every prediction it makes. The question is: says who? From the third person perspective with complete knowledge of a deterministic world, there is no might, only is. P3 simply evaporates from this perspective.

You wrote, "The machine can't rule out that the state its in will not affect the world in relevant ways...." To that I say again that I don't care if the machine can or can't rule out that it's making a mistake. Whether it can prove to itself that it's correct is not what is at stake; what we care about is whether it is right or is wrong. The Demon, in the exact same way, can't rule out that it's making a mistake in its predictions. You have to choose your perspective: the perspective of the predictor/Demon, or a perspective external to the predictor/Demon. You can't take the first-person perspective relative to my predictor and conclude it can't make perfect predictions, and then take the third-person perspective relative to the Demon and conclude that it can make perfect predictions. If you apply the same perspective to each of the predictors, you get the same conclusion about their predictive powers.

11/04/2004 11:30:00 PM  
Blogger Dub! said...

Okay, let's talk about everything from the third person perspective (I don't think that's quite the right term for this, though).

From the third person perspective, the demon is always right about his predictions. Hands down. That's why he is a deterministic predictor.

From the third person perspective, there are two scenarios. One is that the predictor is wrong about its prediction (maybe even always wrong). This in itself is enough to distinguish the in-the-world predictor from a Laplacean demon. The second is that it gets it right. And you're stipulating that that's the only prediction machine you choose to talk about. But so what? *I* might make a prediction that it will rain tomorrow. If it happens to occur, we wouldn't want to say that I'm a deterministic predictor.

For something to be a deterministic predictor, it must be that for every arrangement of matter in the universe, the predictor makes a correct prediction about what will happen next. Moreover, it seems like there has to be some sort of justification. Getting lucky about some arrangements doesn't count (interestingly, there are shades of the Gettier problem here).

Let's say I get a coin from the mint and decide it's going to be my prediction machine. I flip it; if it's heads, it predicts that it's going to rain tomorrow; if it's tails, it predicts that it's not going to rain. It turns up heads, and I promptly melt it so it can't make any more predictions. Tomorrow it rains. Is it a deterministic predictor? If not, how is it different from your deterministic predictor that just happens to be lucky that its own state is not adversely affecting its prediction?

11/05/2004 12:26:00 AM  
Blogger Winston said...

Point taken that the predictor can't just be right, but it must be right for a good reason. I agree that a coin-flipper that just happens to be right all the time doesn't deserve to be called a deterministic predictor. But that doesn't rule out the possibility of a predictor that is:
A) Right all the time (excluding certain self-prediction cases)
B) Right for a good reason (a deterministic prediction via simulation and/or reduction)

I don't see anything wrong with calling the weather service a predictor (far from perfect, of course) and not calling a coin-flipper a predictor of rain. It's because the weather service has some good reasons to make its predictions and the coin-flipper doesn't. The justification may just be a matter of degree. You say the burden is on me to show how a coin flip and something like the weather service are different; I say that the burden is on you to show how they're the same. If you require that everything be absolutely proved correct, or else it's just equivalent to a coin flip, then you're taking the path of nihilistic skepticism. I'm only claiming that there has to be some reasonable justification; to be called a deterministic predictor is to impose some conditions on how it operates, but I have no problem with that.

You wrote, "From the third person perspective, the demon is always right about his predictions. Hands down. That's why he is a deterministic predictor." I just want to be precise here: I think what you meant was that the demon is a deterministic predictor, therefore he's right all the time. Otherwise he could be a lucky coin-flipper and fit your discription of a deterministic predictor. Implicit in this definition is that the Demon make no mistakes. But the question is why doesn't he make any mistakes?

Let me define a "predictor" as one that is mostly right, but sometimes wrong, and a "Predictor" as one that is always right. Similarly, a "demon" is mostly right, whereas a "Demon" is always right. And all these things make predictions for good reasons. So you say that your Demon is always right, by definition. I say my Predictor is similarly always right, by definition.

I think we can boil this conflict down to our conceptions of modal terms. To back up a bit, I think you're saying that it's possible that my predictor could make mistakes; therefore it's not a true deterministic predictor. I say it's not possible, because I'm only looking at ones that don't make mistakes. I'm automatically excluding all those predictors that have errant electrons or whatever, so no matter what information you fed my predictor, it would give you the right prediction (excluding certain self-prediction cases, blah, blah, blah).

To sum up: I think you're saying "There could be no perfect predictors because every possible predictor could possibly be wrong," to which I say, "This wrongness is possible to whom? I'm stipulating that my Predictor is never wrong, just like you and your Demon." Now we have a muddled morass of epistemic and metaphysical possibility. It seems that no matter how hard you try to avoid it, every philosophical argument ends up in metaphysics or philosophy of language. We just might not be able to agree on how to use possibility here. Cue possible worlds.

11/05/2004 02:06:00 AM  
