November 30, 2004


Everyone in this department seems to love/hate talking about Newcomb's problem, so I might as well feed the fire. Here is a new take on the Newcomb problem by Nick Bostrom. All you two-boxers can just suck it.

By the way, someone else post or comment. It's getting lonely.

Consider the following twist on the Newcomb problem.
There are two boxes in front of you and you are asked to choose between taking only box B or taking both box A and box B. Box A contains $1,000. Box B will contain either nothing or $1,000,000. What B will contain is (or will be) determined by Predictor, who has an excellent track record of predicting your choices. There are two possibilities. Either Predictor has already made his move by predicting your choice and putting a million dollars in B iff he predicted that you will take only B (as in the standard Newcomb problem); or else Predictor has not yet made his move but will wait and observe what box you choose and then put a million dollars in B iff you take only B. In cases like this, Predictor makes his move before the subject roughly half of the time. However, there is a Metapredictor, who has an excellent track record of predicting Predictor's choices as well as your own. You know all this. Metapredictor informs you of the following truth: either you choose A and B, and Predictor will make his move after you make your choice; or else you choose only B, and Predictor has already made his choice. Now, what do you choose?
“Piece of cake!” says a naïve non-causal decision theorist. She takes just box B and walks off, her pockets bulging with a million dollars.
But if you are a causal decision theorist you seem to be in for a hard time. The additional difficulty you face compared to the standard Newcomb problem is that you don’t know whether your choice will have a causal influence on what box B contains. If Predictor made his move before you make your choice, then (let us assume) your choice doesn’t affect what’s in the box. But if he makes his move after yours, by observing what choice you made, then you certainly do causally determine what B contains. A preliminary decision about what to choose seems to undermine itself. If you think you will choose two boxes then you have reason to think that your choice will causally influence what’s in the boxes, and hence that you ought to take only one box. But if you think you will take only one box then you should think that your choice will not affect the contents, and thus you would be led back to the decision to take both boxes; and so on ad infinitum.
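The self-undermining deliberation can be made vivid with a toy loop (the modeling here is my own sketch, not Bostrom's): under the Metapredictor's disjunction, a tentative decision to take both boxes entails that Predictor moves after you, which makes one-boxing the causal recommendation, and vice versa.

```python
# Toy model of the causal decision theorist's deliberation in the
# Meta-Newcomb problem. By the Metapredictor's disjunction: choosing both
# boxes means Predictor moves after you (your choice is causally
# efficacious); choosing only B means Predictor has already moved (it is
# not). Causal reasoning then says: take one box when your choice affects
# the contents, both boxes when it does not.

def next_decision(tentative):
    causally_efficacious = (tentative == "two-box")
    return "one-box" if causally_efficacious else "two-box"

decision = "two-box"
history = []
for _ in range(6):
    history.append(decision)
    decision = next_decision(decision)

print(history)  # alternates forever: ['two-box', 'one-box', 'two-box', ...]
```

The loop never settles, which is just the "ad infinitum" regress above in executable form.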


November 25, 2004

Philosophy Break-Up Lines

My favorites:

The Solipsist: It’s not you, it’s me.
The Anti-Solipsist: There’s someone else.
The Presentist: There just isn’t any future for us.
The Eternalist: At least we’ll always have that weekend in Paris.
The Nominalist: I'm afraid of commitment.
The Moorean: Here’s one hand. Here’s another. What do I need you for?

Post your own!


November 23, 2004


I just read an article in the NY Times about devices that translate one type of stimulus to another. For example, one of the devices translates visual input from a video camera to tactile stimulus on the user's tongue. And the user can "see", at least to some degree. Another guy who has no feeling in his hands was able to feel once again by using a special glove and some sort of stimulus device on his forehead.

What's especially interesting in the latter case is that the guy claims the sensation was as though it came from his fingertips. On the BrainPort web site, there are some video clips of a blind man learning to use the video camera-tongue device. His perceptions seem to be somewhere between mentally translating tactile sensations and "real" vision. This is a grown adult (who was once able to see) learning to do this. I'd bet that a newborn baby with one of these things would be able to function very naturally in the world -- I'll have to restrict myself to talking about how people behave and report the sensation, because, not having experienced it, I have no idea how to talk about "what it's like."

I find this totally fascinating and tremendously relevant to philosophy of mind -- when I told my roommate Jonathan about this, he looked at me for a moment and then said, "I want to know what it's like to be a bat."


November 18, 2004

Amazing Research Tool

Google just released Google Scholar, a search tool that indexes online academic journals and articles. Type in the name of a book or article you like, and it can pull up all the papers it can find that cite that work. (Example: here are the articles it can find that cite George Ainslie's Breakdown of Will). Very cool.


November 10, 2004

No Authority?

I was very excited that Blakely created this blog but haven't had any time to even read it. It's that time of the semester when things are starting to pile up and you never know if you are coming or going. But Stephen White canceled Epistemology tonight, so I have some unexpected free time. Then on the way home I heard something on NPR that has awakened me from a confused slumber (well, not really, but it definitely has me thinking).

I think the show was Fresh Air; they always have various editorial commentaries. Tonight a woman who teaches high school English was talking about the struggle of teaching high school students. She talked about the way students struggle without authority when interpreting books like Huck Finn. She said that you start to get the idea that maybe Huck isn't the most reliable narrator, and Twain draws the characters in such a way as to make it ambiguous who the good guys and bad guys are. We aren't sure what to think of the different characters, and it isn't clear who we should be rooting for. The teacher said that this lack of authority is difficult for her students, and they are constantly asking her to settle these issues for them. They want to know whether Hamlet is crazy or just acting. She gave a third example that I thought was really good, but now I am drawing a blank.

What she was trying to stress is that what she and any good teacher try to do is to get the students to think about these questions and answer them for themselves.

So why am I writing about this? Well, it really struck me that this is one of my primary problems with my own study of philosophy. Obviously philosophy is all about learning arguments from various perspectives, thinking about them, and then drawing our own conclusions about them. We try to use rigorous critical thinking and reasoning in our analysis, but ultimately it is up to us, the individual philosophers, to figure out how to interpret things and criticize them. (There may be another branch of philosophy in which you spend the time and effort trying to figure out what so-and-so meant when they said whatever they said. I don't think this is very interesting philosophy, and it certainly isn't what I want to be studying. That should be more a matter of the history of philosophy than actual philosophy, but this is beside the point, almost.)

But what struck me was how much I identify with the students desperately clinging to some solid ground or truth (I think those were the speaker's words, or at least a close paraphrase). I don't have a very strong background in philosophy, and I am always trying to figure out what it is I am supposed to be getting from the various arguments. What did so-and-so mean when they said what they said? What are the standard interpretations, and am I getting them? I am always trying to extract from my professors what I am supposed to think about the various philosophic positions. But really it isn't their job to break it down for me in that way. Maybe they are just supposed to make the questions clear enough that I can think about them and do the analysis for myself. Maybe I need to quit struggling for the right interpretation and analysis and just find my own.

But as I am writing this, that just seems like an obvious point. Of course we are supposed to be looking for our own philosophic voice. Nevertheless, I am not sure I want to go all the way (still clinging?). I am here to get my philosophic footing and a part of that must be learning the sort of history of philosophy stuff that doesn’t really interest me that much. Don’t I have to learn what others have said and what the traditional interpretations are? Don’t I need a sort of broad view, lay of the land sort of thing before I can jump in to do my own work?

The question I am trying to ask is: how much of my study here should be about assimilating information versus really doing philosophy? Can you assimilate without doing philosophy? Can you do philosophy without assimilating?


November 2, 2004


Blakely, this is the COOLEST. Thanks for doing this. For my first post, I may as well inflict my web habit on you guys and offer up some hopefully unknown philosophy-related links. Warning: very naturalized. May not meet your standard of relevance to philosophy.
(Probably the two most prominent philosophy blogs.)

Water's Water Everywhere
(An especially excellent article on recent analytic philosophy by Jerry Fodor published in the LRB. Links to his other LRB articles are on the bottom-right of the page.)


(A great linguistics blog)

The Loom
(A science blog)

And finally, some assorted comics that are kind of about philosophy (NSFW, maybe? Bob the Angry Flower is done by a fellow Edmontonian):

I'll post some responses to Winston's and Daniel's posts later. Anyone else have any good philosophy links? Post them in the comments! (Winston, I don't think it's going to be much of a problem to conduct discussions in the comments sections. It's what is done at most other philosophy blog sites, and it seems to work out fine).


Determinism and Prediction

This post is a response to Daniel's post below. Be sure to read his post -- otherwise, this one won't make much sense. I originally put it in the comments of his post, but I think it's interesting enough to post as its own topic, since it's something I've thought about a lot. Also, maybe we'll get more lively discussion this way? I have a feeling that the usual blog format (each post with its own comments) isn't the best format for a broad, ongoing discussion -- not that I think that everybody posting their own new entry whenever they have anything to say is the best either. We'll just have to use some discretion, I think. If you think I'm wrong, feel free to express your views as a comment on this post, where no one will ever see it.

Anyway... I've spent a lot of time puzzling over the problem of prediction and determination, spurred on by interminable discussions of Newcomb's problem. I've decided that the problem that Daniel describes, the paradox of action and prediction in a deterministic world, is not really a problem at all, but an illusion, one that I hope to dispel here.

Say we have a prediction machine P. Given the state of a system S at time 0, it can predict the state of S at time t, before t actually comes about. I'd like to note that we actually have devices that can predict what a system is going to do (e.g., a computer calculating a projectile's trajectory), but these differ from our machine P in an important way: they simplify the problem by making certain assumptions about the system and throwing out irrelevant information. In a deterministic world with P being a perfect predictor of any S in that world, P must take into account every bit of information about the initial state of S. This, I think, is not an unreasonable picture.
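To illustrate the kind of real-world predictor I mean (a toy sketch of my own, not anything from the article): a trajectory calculator "predicts" only by assuming away drag, wind, and everything else about the system, which is exactly what the perfect predictor P cannot do.

```python
# The kind of "predictor" we actually have: it predicts a projectile's
# position at time t, but only under idealizing assumptions (no air
# resistance, uniform gravity) that discard almost all information
# about the real system.

def predict_position(x0, y0, vx, vy, t, g=9.81):
    """Closed-form kinematics under the stated simplifying assumptions."""
    x = x0 + vx * t
    y = y0 + vy * t - 0.5 * g * t * t
    return (x, y)

# Launch from the origin at 10 m/s horizontally and vertically;
# where is the projectile after one second?
print(predict_position(0.0, 0.0, 10.0, 10.0, 1.0))
```

A perfect predictor of an arbitrary deterministic system gets no such shortcuts; it needs the complete initial state.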

The problem arises when you try to use P to predict the behavior of a system that contains P itself. If P's output/behavior is dependent on its prediction of what it is going to do, this is an impossible feat. (There is the trivial case where all possible predictions would lead to the same output, but that situation isn't very interesting. I'm going to focus on cases where P's output is conditional on P's prediction of P's output.) Suppose P could predict its own output, and its behavior was conditional on that prediction. It has two output lights, labeled A and B. You ask it to make predictions this way: "If you predict John Kerry will win the election, turn on A; otherwise turn on B." In this case, we make the following request: "If you predict that you will turn on light A, turn on light B. Otherwise turn on light A." So if it predicts A, then it'll flash B, and if it predicts B, it'll flash A. Now we flip the switch to make P go. Which light turns on? We've stipulated that P can predict what P is going to do, but to actually do so, it must see to the end of an infinite regress. The absurd conclusion gives us a reductio, so we should reject the claim that there can exist a P that makes self-prediction-conditional actions.
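The two-light setup can be checked exhaustively (a minimal sketch of my own): a prediction is consistent only if the behavior it induces matches the prediction itself, and neither candidate passes.

```python
# A minimal sketch of the two-light setup. P must output a light ("A" or
# "B"), and the request makes its behavior the opposite of whatever it
# predicts about itself. We check every candidate prediction for
# consistency: a prediction is consistent only if the behavior it induces
# equals the prediction.

def behavior(prediction):
    """The request: if you predict A, turn on B; otherwise turn on A."""
    return "B" if prediction == "A" else "A"

consistent = [p for p in ("A", "B") if behavior(p) == p]
print(consistent)  # -> []: there is no prediction P can make that comes out true
```

The empty list is the whole point: the regress never bottoms out in a self-consistent answer.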

So the Minority Report-like paradox that you propose isn't possible, because in this case, the predictor's output necessarily depends on its prediction of its output. Here's the picture: P predicts your action, but since your action depends on its prediction, it must first predict its prediction. And as we've seen, that's not possible. You could take a step back and say that the predictor just makes a very good guess instead of a perfect prediction (as in the actual Minority Report movie) but then the paradox is no longer a paradox. It's still a conundrum, to be sure, but of a different sort. It becomes a question of whether or not you believe the predictor, and how much reason you have to doubt it, and so on.

Now it's time to take back some of what I said: I stated that P was impossible, but I don't mean that in the strict sense of the word. Let me put it this way: if P were possible, we would have to give up a lot of things we take for granted; we'd have to cut out a huge swath of our web of belief. Though I haven't actually done it yet, I'm pretty sure that I could concoct a system in which, using a similar method, a group of predictors acts as a Turing machine that can predict its own output, thus solving the halting problem. The halting problem is the result that no Turing machine program can decide, for every Turing machine program and input, whether that program will halt. As I understand it, the halting problem is closely related to Gödel's first incompleteness theorem, which states that any consistent formal system powerful enough to represent arithmetic contains true statements that cannot be proven within the system. So if P existed, it would be a physical method of proving an unprovable (via logic) statement true or false. If we can make P, then perhaps we could decide the truth value of "This statement is false" after all. If we could physically prove things that we can't logically prove, well, then the world is a much weirder place.
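The connection to the halting problem runs through the same diagonal trick as the two-light setup. Here is the classic argument as a sketch (the function names are hypothetical; the whole point is that no real `halts` can exist):

```python
# Suppose, for contradiction, we had a total function halts(f, x) that
# returns True iff f(x) eventually halts. This is a hypothetical oracle:
# the halting problem says no such function can exist, so here it only
# raises.

def halts(f, x):
    raise NotImplementedError("assumed halting oracle; provably impossible")

def diagonal(f):
    """Do the opposite of whatever the oracle says f does on itself."""
    if halts(f, f):
        while True:   # loop forever if f(f) supposedly halts
            pass
    else:
        return        # halt if f(f) supposedly loops

# Asking whether diagonal(diagonal) halts is contradictory either way,
# just as asking P to predict its own light is: the oracle's answer
# forces the opposite behavior.
```

This is the same structure as the flashing-light request: the device's answer is fed back as the thing it must contradict.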

This scenario of revising logic is unlikely, but it's happened before: Quine and Putnam suggested that quantum mechanics necessitated adoption of a nonstandard system of logic, so I suppose that in a very broad sense, P is possible (but exceedingly unlikely, given what we know about the world now). That said, I don't think they had anything quite this radical in mind.

Addendum: It's 8 hours later, and I've changed my mind. I think it's completely impossible for the predictor to exist. Forget all that clever stuff about Godel and the halting problem. In the flashing light setup above, there's simply no answer it can give that will be correct.


November 1, 2004

Does determinism matter?

Hi all,

Had a thought today about determinism in P&E that I want to explore. In light of Heidegger's Dasein, can we put the determinism we get out of physical reductionism into the same place as external-world skepticism? That is, maybe we're brains in vats or being toyed with by an evil demon, but so what? It doesn't feel that way, so it doesn't really affect our lives at all. Similarly, if we somehow found out tomorrow that our lives and actions and decisions ARE reducible to physical principles and everything is determined, it wouldn't change the way we live our lives, since it never at any moment FEELS as if that is what is going on. I for one would still care about all the same things, still make conscious decisions, and still have the same goals. I would still "take care" in the world, as Heidegger would say. For that is what it is to be human. If we lost this condition of "taking care" we could no longer be classified as human beings (Dasein).

Now, this is all said under the assumption that we somehow discovered determinism was true without being able to use it to know in advance everything that was going to happen. Naturally, if we could, we WOULD make some changes in our cares and decisions and goals. For example, if we could foresee that Harry was going to commit some heinous crime next week, we might find it in our power to prevent this from happening, but problems would arise immediately here.

First, we would also foresee ourselves trying to prevent Harry from committing the crime; yet if we genuinely foresaw the crime, he must somehow commit it anyway. Then why go through the trouble of trying to prevent it? Or, if we DID successfully prevent it, then what we foresaw was wrong. But to be fair to determinism and escape the imminent paradox: we never would have foreseen him committing the crime; we would have foreseen ourselves foreseeing the crime and preventing it, not the crime actually happening.

Does anyone want to pick up the next step here? Does this make sense or sound interesting? I know it is a lot like "Minority Report," but I've never seen it. No time to continue this at the moment, but I want to point out that there is still a paradox here, and also that moral issues are raised (as they always are when determinism is involved).


Great idea!

Thanks, Blakely! I think this is a great idea. I hope it takes off.


I thought it would be good to have a weblog to use as an all-encompassing discussion board. To, you know, share the love. If we like this, we could (depending on whether the idea of professors finding it excites us or terrifies us) 1) place a link to it on the department site or 2) make a more professional-looking blog hosted on the department site. Enjoy!