May 9, 2007

Probable But Still Unjustifiable

I am attempting to construct an argument against the widely accepted thesis that one may justifiably believe that p based on evidence that makes p probable but which does not guarantee that p. In short, I wish to argue that any belief based on evidence that makes p probable, but with a probability less than 1, is unjustified. My argument utilises a lottery-type analysis†. Imagine a lottery composed of n tickets in which n is large enough to make the following claim putatively true, according to the standard probabilistic analysis, of some particular ticket, t1: S may justifiably believe that her ticket, t1, will lose. For example, most probability theorists would hold that in a lottery of 1,000,000 tickets in which one ticket must win but only one ticket can win, S may justifiably believe that her ticket, t1, will lose. (Of course, S does not know that her ticket will lose, but on the view I wish to impugn she may still justifiably believe that her ticket will lose. You may make n as large as necessary to motivate the relevant intuitions.)
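The tension the argument exploits can be checked numerically. Here is a minimal Python sketch of the lottery setup, using a small illustrative n (the structure is identical for n = 1,000,000):

```python
# A toy model of the lottery: one winner drawn uniformly from n tickets.
# n is kept small so the probabilities can be computed by exact enumeration.
n = 6
outcomes = range(n)  # outcome w means: ticket w wins

# Each individual claim "ticket i will lose" is highly probable...
p_t1_loses = sum(1 for w in outcomes if w != 0) / n
print(p_t1_loses)  # 0.8333... = (n - 1) / n

# ...but the conjunction "every ticket will lose" is impossible,
# since exactly one ticket must win.
p_all_lose = sum(1 for w in outcomes if all(w != i for i in range(n))) / n
print(p_all_lose)  # 0.0
```

As n grows, each individual "ti will lose" claim approaches probability 1 while the conjunction stays at probability 0, which is exactly the gap the reductio trades on.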

I take it as a truism that a subject may not justifiably believe a set of inconsistent propositions which she recognises to be inconsistent. My argument will take the form of a reductio beginning with the assumption, “S may justifiably believe that her ticket, t1, will lose”, and concluding with the negation of the aforementioned truism. Assuming that the first premise is the least plausible of all the premises in my argument, then my argument should establish that my first premise ought to be rejected. I would greatly appreciate any feedback concerning the structure, validity or soundness of my argument, or questions regarding any of my assumptions or steps. My reductio runs as follows:



(1) S may justifiably believe that her ticket, t1, will lose.


(2) If S may justifiably believe that t1 will lose, then she may also justifiably believe that t2 will lose, she may justifiably believe that t3 will lose ... she may justifiably believe that ticket tn will lose.


(3) S may justifiably believe that tickets t1, t2 ... tn will lose. [from (1) and (2)]


(4) S may justifiably believe that either t1 will not lose or t2 will not lose ... or tn will not lose.


(5) Propositions of the following form comprise an inconsistent set: (a) p1, p2 ... pn, either not-p1 or not-p2 ... or not-pn.


(6) S recognises that propositions of the following form comprise an inconsistent set: (a*) t1 will lose ... tn will lose, either t1 will not lose ... or tn will not lose.


(7) S may justifiably believe a set of inconsistent propositions that she recognises to be inconsistent. [from (3), (4), (5), and (6)]

Prima facie, (1)-(7) only shows that a subject is not justified in believing something she recognises to be inconsistent. Such cases fall under the umbrella of what Jonathan Sutton has dubbed "known unknowns"—namely, instances in which the subject is aware that she does not have the knowledge in question. But this argument seems ineffective against certain types of "unknown unknowns"—i.e., cases in which the subject does not know that she does not know. Specifically, (1)-(7) does not seem to apply to cases in which the subject fails to recognise that a certain set of her beliefs is inconsistent. In such cases, (6) would fail to apply. Thus, for all that has been shown, (1) may be true in cases in which the subject does not recognise her beliefs to be inconsistent. (Moreover, once we have dispensed with the tendentious Cartesian notion of the transparency of the mental, a subject's failure to recognise such an inconsistency in her beliefs becomes a live possibility.)

At least two points should be noted in reply. For starters, we may widen the domain of known unknowns to include beliefs that a subject is in a position to know (say, via reflection alone). Since S is in a position to recognise that the propositions are inconsistent, assuming she is rationally competent, (6) still applies. Alternatively, we may simply note that the failure on S's part is a rational one, which (on even the narrowest J-internalist reading) would ex hypothesi render her belief unjustifiable. Given these considerations, the conclusion of the argument seems generalisable to all cases of belief based on evidence that renders the belief likely with a probability of <1.

† See Dana Nelkin's paper “The Lottery Paradox, Knowledge and Rationality” for a discussion of the lottery paradox regarding knowledge and justifiably held belief.




22 Comments:

Blogger Rachael said...

I know my epistemology teacher tried to convince me that "one may justifiably believe that p based on evidence that makes p probable but which does not guarantee that p" is a "widely accepted thesis," but is that really correct? How are you defining widely? More importantly, how are you (and all these 'wide accepters') defining belief?

The reason I'm asking is that I made an argument very similar to yours during introductory philosophy, when the lottery paradox came up in Quine and Ullian's 'The Web of Belief'. What I found was that I was using a much more rigid definition of "belief" than my teacher and peers.

I told the professor that I did not believe the wall beside me was grey, but merely believed that it was probably grey, given that I could take a few basic premises for granted. (I had to laugh at myself for this when I read Raymond Smullyan's 'An Epistemological Nightmare'.) In the case of the lottery ticket, the same: I merely believed that it would probably lose, recognizing the possibility that it might win.

However, it seemed that my professor and peers were content to use belief in a more relaxed (and perhaps more common) sense. For them, saying "I believe the ticket (t1) will probably lose" was effectively the same as saying "I believe the ticket (t1) will lose," as in: "I believe x is so, but I could be wrong."

Do you think the problem here could be a confusion rooted in tricky semantics?

I suppose the problem probably is more than semantics. But really, I have a hard time imagining how anyone could disagree with you otherwise.

OK--so I'm just some wee and flailing undergraduate with dreams of attending Tufts (which is why I spy on your blog here). I hope my questions are welcome! :)

* Note, I've used the second person to refer to Mr. Archer, but everyone is welcome to reply to my comment.

5/10/2007 09:14:00 AM  
Blogger AVERY ARCHER said...

Rachael,
I think you put your finger right on the pulse of the issue; namely, a certain ambiguity in the word believe. Often, we use “I believe” to identify something we are intellectually committed to. For example, I believe that protons exist and that they are not merely theoretical posits. Sometimes we use “I believe” to identify something we take ourselves to know. I believe I'm currently looking at a computer monitor. This is also something that I know! However, we frequently use “I believe” to indicate something we merely hold to be likely. Consider the following exchange:

(A): “Is it 2:30pm yet?”
(B): “I'm not sure, but I believe it is.”

(B) seems like a perfectly natural thing to say (though on at least one reading of “believe” it would be self-contradictory). I take (B) to be expressing something along the following lines:

(B*): “I'm not sure, but it seems likely.”

I agree that often when we say that we believe that p, what we are actually saying is that p is likely or has a high probability of being true. However, this cannot be the notion of belief that factors into a JTB account of knowledge, since to know something we must be intellectually committed to it, and not merely think it likely. (Consider, for example, the contrast between merely believing your ticket will lose (read: likely to lose) and actually knowing that your ticket has lost after the results of the drawing have been announced.) If I am right that knowing p requires that one be intellectually committed to p (rather than merely think p likely), the type of belief that factors into a JTB account of knowledge must implicate intellectual commitment.

There is, of course, an alternative way of construing the present debate (and here is where I lay my McDowellian cards on the table): to dispense with the JTB account altogether and offer a disjunctive analysis of knowledge and belief. On the disjunctive view one either knows that p or merely believes that p. The former is factive (i.e., probability = 1) while the latter is not (i.e., probability < 1), and there is no common factor shared between the two. More precisely, knowledge is not to be understood as a special kind of belief or belief plus something else (e.g., justification and truth).

However, for the time being (and out of respect for my non-McDowellian colleagues) I would like to stick to the non-disjunctive option—namely, the possibility that there are (at least) two notions of belief, one implicating intellectual commitment to p and one implicating the thought that p is likely. To the extent that our concern is actually about knowledge (i.e., justified true belief), it is belief as intellectual commitment that is at play. Thus, in the above criticism of the “orthodox” view I have simply taken for granted something which may itself be called into question: to believe that p is to be intellectually committed to p.

5/10/2007 10:04:00 AM  
Blogger Larry Hamelin said...

I think the best way to look at probabilism is as a differentiator rather than a "justifier": one may believe that p is better than not-p iff the probability that p is true (given the evidence?) is greater than the probability that not-p is true.

In this case, S believes that the probability that her ticket t1 will lose is greater than the probability that her ticket will win. She is making a comparison, not a justification.

The rules for combining probabilities are different from the rules for combining binary true/false statements. Once we start combining tickets, we are comparing different propositions from the original: the probability that both ticket t1 will lose and ticket t2 will lose vs. the probability that either ticket t1 will win or ticket t2 will win.

I think your term "intellectual commitment" is vague. Does intellectual commitment to p entail belief in the absolute truth of p? Or does it merely entail that one will make real-world decisions on the basis of a probabilistic analysis?

If the former, why? If I had to bet my life on t1 (or any individual ticket) by itself either winning or losing, I would, of course, bet my life on it losing, even though I'm still perfectly aware that there is a small chance that I would die.

5/15/2007 09:50:00 AM  
Blogger Unknown said...

(3) and (4) are worded somewhat differently, but to the casual reader they seem to be at odds. It seems counter-intuitive, if not outright contradictory, to argue that S may justifiably believe that t1, t2...tn will both lose and not lose.

5/16/2007 09:23:00 AM  
Blogger Larry Hamelin said...

Lyric: Archer appears to be making a proof by contradiction: Because (3) and (4) are in contradiction, premise (1), or, more properly

(J) if P(~p) < ε, then one may justifiably believe that p

from which (1) is derived, is disproved.

In a mathematical sense, (J) is trivially fallacious for any ε, because the semantic meanings of "and" and "or" differ between probabilism and boolean logic. Specifically, the distributivity of the probabilistic P function over the connectives is complex and differs when p and q are mutually dependent:

P(p) and/or P(q) ≠ P(p and/or q) (if p and q are dependent, as in the lottery problem)
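The point about dependence can be verified directly. A small Python sketch, using an illustrative 10-ticket lottery (the size is arbitrary):

```python
# In a fair n-ticket lottery with exactly one winner, the events
# "t1 loses" and "t2 loses" are dependent: they share the same draw.
n = 10
p_lose = (n - 1) / n                # P(t1 loses) = P(t2 loses) = 0.9

# Treating the events as independent gives the wrong answer:
naive_both_lose = p_lose * p_lose   # ≈ 0.81

# In fact both lose exactly when the winner is neither t1 nor t2:
actual_both_lose = (n - 2) / n      # 0.8

assert naive_both_lose != actual_both_lose
```

The discrepancy is small for large n, but it is precisely the sign that "and" inside the P function cannot be computed by multiplying the individual probabilities here.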

5/17/2007 07:34:00 AM  
Blogger Clayton Littlejohn said...

Avery,

I wish I could have come earlier so that I might have said what Rachael did, but I was away on vacation and under strict orders to stay away from computers.

Anyway, about this:
“I am attempting to construct an argument against the widely accepted thesis that one may justifiably believe that p based on evidence that makes p probable but which does not guarantee that p.”

I'm rather sympathetic to the project, but it seems we should distinguish:
(a) One may justifiably believe that p on evidence that makes p probable but which does not guarantee that p.
(b) One may justifiably believe that p on evidence that transparently only serves to make p probable but which does not guarantee that p.

I'm of the opinion that (b) is false, but not that (a) is. Consider cases of 'covert' lotteries, cases in which it seems from the point of view of the subject that the case is not a lottery case (i.e., one in which the truth of the relevant belief hangs upon the outcome of a chancy, lottery-like process whose outcome is not yet known), and distinguish them from cases of overt lotteries.

It strikes me that in cases of overt lotteries, you shouldn't believe outright that claims whose truth hangs upon the outcomes of a lottery are true, and I take it that your argument makes some significant progress towards establishing this. However, the argument does not apply to cases in which the truth of the relevant belief secretly depends upon how a lottery plays out, and so cannot have the full generality you initially suggested that it did.

5/18/2007 12:08:00 PM  
Blogger Criminally Bulgur said...

Hi Avery,

I don't have much of substance to add, as this is not my field at all.

I do have one minor point, though, which may be related to the question.

(FYI, I learned this at an intra-departmental colloquium last fall at Tufts. I can put you in touch with the relevant Professors, if you like).

You say here:

"(Consider, for example, the contrast between merely believing your ticket will lose (read: likely to lose) and actually knowing that your ticket has lost after the results of the drawing have been announced.)"

When you put 'knows' in italics, it may garner a different sense than it does without it. For example, something approaching certainty may be a requirement for knowledge when one is asked if one knows something to be the case. The sense would be similar to 'actually knows' or 'strictly knows.' Maybe italicizing something for emphasis is the pragmatic equivalent of indicating that the reader should use the word in its strict sense, where 'strict' = def. 'approaching some ideal limit of exactitude.'

5/18/2007 02:35:00 PM  
Blogger AVERY ARCHER said...

Barefoot Bum,
I like your differentiator vs justifier distinction. As you may have guessed, I ultimately wish to argue against a probabilistic model of epistemic justification, so your suggestion suits my purposes well.

I agree with you that the locution “intellectual commitment” is a bit vague. I'm still in the process of ironing out precisely what it entails, but roughly the idea is as follows: by my lights, the concept of justification is not a probabilistic one (i.e., no proposition with a probability < 1 is justified). In brief, I see justification as factive. This is a position I hope to spell out and defend further in the near future.

Now, it is a truism that some of our beliefs are more certain than others. On my view, this fact is not to be explained in terms of variations in the level of justification (whatever that's supposed to mean) but in terms of variations in the object of justification (i.e., the content of the belief being justified). In the case of “strong beliefs”, the belief content that constitutes the object of justification is the fact that p simpliciter, while in the case of “weak beliefs”, the belief content that constitutes the object of justification is the fact that p is likely. Note, when I say that the object of justification for weak beliefs is “the fact that p is likely”, I'm not suggesting that justification itself is a probabilistic notion. Rather, I'm suggesting that the object of justification (i.e., the belief) has a probabilistic content.

This is where my worrisome locution “intellectual commitment” comes in. In the case of “strong beliefs” the object of justification is the fact that p simpliciter. This is the idea I try to capture with the notion of being “intellectually committed” to p. Belief as intellectual commitment is supposed to stand in contrast to the type of belief that merely implicates the likelihood (i.e., probability greater than 0.5 but less than 1) of the object of belief. Put differently, we may say that strong and weak beliefs differ in their truth-conditions. Strong beliefs are made true by the fact that p is true, while weak beliefs are made true by the fact that p is probable.

For a further discussion of my distinction between strong and weak beliefs see my post “Strongly Believe vs. Weakly Believe”.

5/19/2007 04:46:00 AM  
Blogger AVERY ARCHER said...

Clayton,
I agree that when taken at face value my argument fails to establish the falsity of (a). In fact, I seem to make this very point in the objection I consider towards the end of the post. This makes my concluding declaration all the more curious:

“the conclusion of the argument seems generalisable to all cases of belief based on evidence that renders the belief likely with a probability of <1”.

This final pronouncement seems clearly wrong. Now, as it happens, I do take (a) to be false. By my lights, the distinction you draw between (a) and (b) is analogous to Sutton's solecistic distinction between “known unknowns” and “unknown unknowns”, both of which I take to be unjustified. However, it is abundantly clear that the falsity of (a) is not something established by the present argument. Or is it?

Now suppose we were to replace my opening line:

“I am attempting to construct an argument against the widely accepted thesis that one may justifiably believe that p based on evidence that makes p probable but which does not guarantee that p.”

with the following:

“I would like to argue that it is a conceptual requirement that justification be factive.”

Now on the orthodox view, it is part of our concept of justification that a subject's belief that p may be justified by defeasible evidence—that is, evidence that falls short of the fact that p. However, once we admit that some type or token body of evidence is defeasible, we find ourselves in the position of the aforementioned lottery subject. For example, suppose we were to hold that at best, perceptual experience puts a subject in touch with defeasible evidence for some empirical proposition p. Such a posture would remove all entitlement we may have for believing that p since we would be admitting that perceptual experience, at best, only makes p likely.

I believe it is when my argument is interpreted along these lines that it provides support for my more general conclusion. The distinction between “known unknowns” and “unknown unknowns”, between (a) and (b), is no longer salient since we are now dealing with the conceptual requirements for justification. In sum, the contention of my argument is most charitably interpreted as follows: to suggest that the concept of justification is not factive is to suggest that defeasible evidence may justify. But to admit that defeasible evidence may justify is equivalent to saying that a “known unknown” may be justified. But that a “known unknown” may be justified is the very conclusion my argument impugns.

5/19/2007 01:18:00 PM  
Blogger Larry Hamelin said...

Avery,

If you're going to argue against a probabilistic model of justification, simply demonstrating that a switch between probability semantics and boolean semantics entails a contradiction is not a particularly forceful argument. If one uses probabilism, one has to stay with probabilistic semantics all the way through.

I think you would also be better served by focusing on how probabilism is actually used in epistemic justification. The big problem with your example is that the "probability that ticket t1 will win" is (in a manner of speaking) a property of the ticket itself, not a property of our knowledge about the ticket. Even recasting the example in a specifically epistemic way (i.e., our limited knowledge about the physical causes which will determine the specific ticket chosen) simply makes our knowledge about the game probabilistic, not our knowledge about the ticket.

I am an engineer, and my company produces software to manage scientific experiments. One component of the software analyzes the data using various statistical (i.e. probabilistic) methods.

The issue is too detailed to describe fully in a comment (I might write about it on my own blog in the near future, time permitting), but I suggest you contact someone with an advanced degree in statistics and pose the following question: "My t-Test shows a p-value of 0.04; what is the probability that the difference between the control and test groups caused the difference between the mean values of the measured parameter?"

Her answer should be, "you can't tell." Find out why. Two days of deep conversation with just such a statistician on just this topic was immensely philosophically illuminating.
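One way to see why the statistician should answer "you can't tell": the p-value reports P(result at least this extreme | H0), not P(H0 | result), and converting one into the other requires a prior the t-Test does not supply. A toy Bayesian sketch, with every number invented purely for illustration:

```python
# A p-value of 0.04 reports P(result at least this extreme | H0),
# which is not P(H0 | result). Getting the latter needs a prior.
p_extreme_given_h0 = 0.04   # what the t-Test's p-value reports
p_extreme_given_h1 = 0.50   # assumed power against the alternative
prior_h0 = 0.90             # assumed prior; the p-value is silent on this

# Bayes' theorem: P(H0 | extreme result)
evidence = prior_h0 * p_extreme_given_h0 + (1 - prior_h0) * p_extreme_given_h1
posterior_h0 = prior_h0 * p_extreme_given_h0 / evidence
print(round(posterior_h0, 3))  # 0.419: nowhere near 0.04
```

Change the assumed prior and the posterior moves, while the p-value stays fixed at 0.04, which is the sense in which the p-value alone cannot answer the causal question.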

5/20/2007 12:56:00 PM  
Blogger Larry Hamelin said...

Sorry: "... demonstrating that a switch between probability semantics and boolean semantics entails a contradiction..."

5/20/2007 07:46:00 PM  
Blogger AVERY ARCHER said...

Barefoot Bum,
I agree with you that the probability that ticket t1 will win is not a property of our knowledge about the ticket. It could not be since knowledge is factive. Rather, I attack a conception of probabilistic justification that gives rise to something like the following sufficiency claim:

(*) A proposition ψ is justified if P(ψ) > t

where P(ψ) is a function representing the probability of ψ on the relevant evidence, and t is a threshold value close to 1. Construed along these lines, the challenge facing probability theorists is to characterise P(ψ) in a way that is both rationally consistent and yet falls short of probability 1. The lottery argument is meant to show that no such characterisation is available. Thus, I see the P-function as a representation of the probability that the relevant proposition is true, given the available evidence. In the case of the lottery, one's body of evidence is constituted by one's knowledge that the lottery is fair, that it is composed of a million tickets, that only one ticket will win, etc. I see this body of evidence as only making it likely that ticket t1 will lose, which means that one's evidence only defeasibly supports the relevant proposition “ticket t1 will lose”. (Note: the present characterisation is meant to remain neutral with regard to the many competing theories on how such evidential probabilities should be conceived.)
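A minimal sketch of how (*) runs into the lottery problem, assuming an arbitrary illustrative threshold of 0.99 and a 1,000-ticket lottery (neither figure is fixed by the post):

```python
def justified(prob, t=0.99):
    """The sufficiency claim (*): a proposition counts as justified
    when its evidential probability exceeds the threshold t."""
    return prob > t

n = 1000
p_ti_loses = (n - 1) / n        # 0.999: each "ti will lose" clears t
assert justified(p_ti_loses)

p_some_ticket_wins = 1.0        # guaranteed by the lottery's setup
assert justified(p_some_ticket_wins)

# (*) therefore certifies all n + 1 propositions
# {t1 loses, ..., tn loses, some ticket wins},
# yet no assignment of outcomes makes them jointly true.
```

Raising the threshold does not help: for any t < 1 one can pick n large enough that (n - 1)/n exceeds t, reinstating the inconsistent set.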

Also, I don't see how your point about t-distributions fits in here. Is the suggestion that the proposition “ticket t1 will win” be taken as a null hypothesis? Notice, I am concerned with the opposite claim—i.e., that ticket t1 will lose—and with showing that the subject is unjustified in believing it. I don't doubt that an examination of t-distributions may be philosophically instructive, but I fail to see its relevance to my argument. (I do hope you decide to post on this subject, since I could certainly do with some clarification on this score.)

5/22/2007 03:03:00 AM  
Blogger AVERY ARCHER said...

Ignacio,
I think there is something to be said in defence of the claim that we often employ the word “know” loosely. However, I would not go as far as to suggest that our concept of knowledge is itself ambiguous. Specifically, I do not think knowledge is ever less than factive. This is why I prefer to speak of an ambiguity in terms of belief rather than knowledge. For example, compare the following two cases:

A: Is it 2:30pm yet?
B: I'm not sure, but I believe so.

and,

A*: Is it 2:30pm yet?
B*: I'm not sure, but I know it is.

While B does not seem problematic, B* is certainly contradictory. I am therefore quite wary of attempts to draw distinctions between “strict” knowing and some lesser (non-factive?) sense of knowing. The aforementioned reservation notwithstanding, I wouldn't mind hearing more about the position you have adumbrated. Why don't you send me an email with the relevant information.

BTW, wasn't the Tufts graduation last weekend?

5/22/2007 09:25:00 AM  
Blogger Larry Hamelin said...

Avery,

The issue is obviously drawing a factive conclusion from a probabilistic statement, which your original post validly proves entails a contradiction.

The proof, however, does not demonstrate that probabilism is false, it demonstrates only that we have to choose between factive knowledge and probabilism.

The reason why I exhort you to look into t-Tests and the like is that science, the endeavor most pragmatically effective at generating what sure looks to me like knowledge, relies almost exclusively on such statistical analysis, and the underlying philosophy is subtle and fascinating.

5/22/2007 10:07:00 AM  
Blogger Criminally Bulgur said...

Hi Avery,

I fear I am really hijacking your thread now. One last post, though ;).

You say here:

I think there is something to be said in defence of the claim that we often employ the word “know” loosely. However, I would not go as far as to suggest that our concept of knowledge is itself ambiguous. Specifically, I do not think knowledge is ever less than factive.

Is the idea the following?: in putting an utterance of ‘knows’ in italics and/or with vocal stress, we are pragmatically checking to see if a specific use of the word ‘knows’ is in line with the concept, which is always factive?

That seems plausible, but for the sake of playing Devil’s Advocate. . .

I think you are right that this

A*: Is it 2:30pm yet?
B*: I'm not sure, but I know it is.


sounds awful, maybe even ungrammatical. But I guess the question is whether it sounds awful for pragmatic reasons or reasons having to do with the essence of our concept of knowledge.

(I am assuming, to simplify, that this pragmatics-semantics distinction can be made precisely enough to answer the question, which is I guess part of what I would ultimately want to dispute, but that’s an argument for another thread!).

For example, take the following:

C*: I don’t know that I am not a Brain in a Vat, but I know that I have two hands.

I think C* is true, even though it sounds infelicitous.

I think uttering B* sounds worse than uttering C*, but that may just be because the context is relatively familiar in the former case, whereas it is not in the latter.

In effect, we know what it would take to settle the question of whether or not it was 2:30 PM, whereas we don't know what it would take to settle the question of whether or not we are BIVs.

Here, 'we know' = def. 'we have an established, socially available procedure of possible verification that competent users of the English language understand.'

We know--in this sense--that you can’t both know that it is a certain clock time and not know that it is that clock time, because we know you could just, for example, look at your watch to settle the question.

By contrast, we don't know what it would take to settle the question of whether or not we are BIVs. That, one might argue, is why C* doesn't sound as bad as B*.

But again, this is not really my area, so I am not sure I have any settled opinions about this. This is just some half-remembered Wittgenstein On Certainty.

Finally, you ask

BTW, wasn't the Tufts graduation last weekend?

Yes, but I could not attend. It was a little rainy from what I hear.

Best,

Ignacio

5/22/2007 12:38:00 PM  
Blogger Larry Hamelin said...

Another thought that occurs to me: 10^-6 is not, in probabilistic terms, particularly small: there are, by definition, about 6,000 people in the world right now who are "one in a million". Odds of 10^-50 to 10^-500 (and more extreme) routinely occur in even superficial probabilistic analysis—and don't get me started on Ramsey numbers!

Logically or mathematically speaking the inference from improbability to falsity is always fallacious; our inference, however, that no actual person will ever actually duplicate a random ordering of a standard deck of cards seems pretty safe.
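For a sense of the scale behind that "pretty safe" inference, the card-shuffle figure is easy to compute (a quick sketch):

```python
import math

# Number of distinct orderings of a standard 52-card deck:
orderings = math.factorial(52)
print(len(str(orderings)))  # 68 digits: roughly 8.07e67

# Probability that one random shuffle duplicates a given ordering:
p_match = 1 / orderings
assert p_match < 1e-67
```

Even granting billions of shuffles per person over all of human history, the total number of trials is vanishingly small next to 52!, which is why the inference feels safe despite being, strictly speaking, fallible.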

5/22/2007 12:39:00 PM  
Blogger Aidan said...

Some comments, in perhaps increasing order of substantiveness:

Premise 2 seems to (at least) require that S knows that each ticket in the lottery has an equal chance of winning. It seems prudent to make that explicit. (Weighted lotteries have been the subject of some discussion, after all.)

The move from (1) and (2) to (3) seems to require some multi-premise version of a principle that says that 'may be justifiably believed' is closed under known logical implication. Is there any reason to think the fallibilist is committed to such a principle? (My initial reaction is to think there's no such commitment - see below).

(7) simply doesn't follow from the lines you say it does, as far as I can see. You need to show the following. Firstly, that S *recognizes* in the lottery case you offer that she would believe something of the form of (a) from premise (5), were she to form all the relevant beliefs. Secondly, that S's beliefs in that case would nonetheless be justified. Only then do you get that your premises entail (7), contradicting your initial assumption that no belief in something of the form of (a) can be justified.

And I don't think your premises straightforwardly entail either of these. Suppose that (6) is true, so S recognizes that propositions of the form (a*) comprise an inconsistent set. It then follows from (6) that she recognizes that her own justifiable beliefs in this case comprise an inconsistent set *only if* she recognizes that her own justifiable beliefs are of the form (a*). But why should we accept this? The things that S may justifiably believe about this lottery don't seem to need to be propositions she's so much as explicitly contemplated (especially if our choice n for number of tickets is very large, as you've recommended). So why should S be so aware of the structure of the beliefs she could justifiably form about the lottery, and the logical relations they bear to one another?

As for the second requirement to get (7) to follow, we again need some kind of closure principle for 'may be justifiably believed'; your argument requires that if S may justifiably believe {t1 will lose, t2 will lose...tn will lose}, and she may justifiably believe that either t1 will not lose...or tn will not lose, then she may justifiably believe their union. That again doesn't seem mandatory for fallibilism as you characterize it (indeed, on the face of it, it looks implausible; on this view evidence E can make p probable enough that a belief that p counts as justified, even though E gives p a probability less than 1).

5/22/2007 04:44:00 PM  
Blogger Criminally Bulgur said...

To be explicit:

(1) I know I have two hands.
(2) I don't know that I am not a Brain in a Vat.
(3) If I don't know that I am not a BIV, then I don't know that I have two hands (because if I am a BIV, I don't have two hands).

I guess I reject (3). So let's take this dialog, parallel in structure to Avery's from his last comment, with an addition:

A1: Do you have two hands?
B1: I am not sure (because I might be a BIV), but I have no reason to think I am a BIV, so, for all intents and purposes, I know that I have two hands.
A2: But you don't know it, if you are not sure that you are not a BIV?
B2: Well, that's just semantics. You understand my position.

5/23/2007 11:16:00 AM  
Blogger Larry Hamelin said...

Wikipedia has some good references about probability and its interpretations:

Probability Interpretations
Frequency Probability (a.k.a. Frequentism, a.k.a. Aleatory probability)
Bayesian probability (a.k.a. Epistemic probability, a.k.a. plausibility)
Null hypothesis

5/24/2007 02:20:00 PM  
Blogger AVERY ARCHER said...

Ignacio,
I would actually prefer to stay away from BIV-type examples, since it is part of the view I advocate that sceptical scenarios are importantly disanalogous to everyday-type scenarios. To wit, I do not believe that the intuitions elicited by the one type of case necessarily carry over into the other. Consequently, I will attempt to clarify my position without recourse to discussion of BIVs.

When I say that belief is ambiguous I am not merely suggesting that we sometimes use “believe” loosely. (I take amenability to loose usage to be a characteristic of most English words, including “know”.) Rather, I am making the additional claim/proposal that belief statements have disjunctive truth conditions. They may be made true either by the fact that p is true, in the case of strong beliefs, or by the fact that p is likely, in the case of weak beliefs.

According to my proposal, it is possible to be sure about something one believes and it is a fortiori possible to believe something of which one is sure. Given this fact, there are instances in which believing that p implicates being sure that p. Of any such instance it would be inconsistent to say that one both believed that p, but was unsure that p. (This is the type of belief that factors into a JTB account of knowledge.) However, there are other instances in which it would be perfectly consistent to say one believed that p, but was unsure that p (such as the examples mentioned earlier). The same cannot be said of knowledge. There are no instances in which one may both know that p and be unsure that p. (Or at least so I claim.) In sum, it is in the stronger sense of possessing disjunctive truth conditions that I argue that “believing” is ambiguous while “knowing” is not.

To suggest that a single word may have two different truth conditions is to say that the word has two different meanings. My claim, then, is that the word “know” has only one meaning. It is important to see how this relates to the question of emphasis that you raised earlier (such as the use of italics). In general, emphasis does not change the meaning of a word (i.e., its truth-conditions). For example, consider the use of the word “killed” in the following two sentences:

(A) “He killed her.”
(B) “He *killed* her.”

It would be a mistake to assume that we have two meanings of “killed”, only one of which implicates the death of the victim. Emphasis merely calls our attention to what is salient. Thus, (B) may call attention to the fact that the victim is really dead, as opposed to merely pretending to be dead as on a movie set. Even so, this should not be taken to imply that the word “killed” sometimes means a less than fatal act. (Notice, this remains true even if we sometimes (loosely) use expressions like “he killed her” to describe what takes place on a movie set.) In relevant respects, non-factive knowledge is analogous to non-fatal killing.

5/27/2007 03:38:00 AM  
Blogger Criminally Bulgur said...

Avery,

I think you are right that we have to separate loose use from straight-up ambiguity, such as the "river bank" and "federal bank" case.

The question is what to do with family-resemblance words such as 'game' or 'field', which seem most analogous to your case of 'belief':

(1) Miracle Gro has come up with a new treatment for fields.
(2) They played the American Football game on the field.

Is this one sense of 'field' or two?

The implicature on (1) would suggest that whatever is picked out by 'field' in the relevant sense would have to be made of grass (i.e., the field has to be made of grass for (1) to be true on a literal reading).

The implicature on (2), however, does not entail that the field is made of grass on a literal reading (it might be made of artificial turf).

You might try to explain the difference by appealing to some compositionality principle. So, for example, perhaps 'Miracle Gro' is doing the work of getting the relevant implicature in (1). But then the problem reemerges, for you can use the phrase 'Miracle Gro has come up with X' in cases that don't necessarily imply a grassy object:

(3) Miracle Gro has come up with a new way of packaging their product guide.

Admittedly, this is a problem for far more than 'belief,' but the question for your example would, I guess, be whether or not

(4) 'Miracle Gro' is to 'field' as 'lottery ticket' is to 'believes.'

I have some thoughts about this, but nothing conclusive. I might work up a post on my blog eventually. . .

5/28/2007 03:09:00 PM  
Blogger Larry Hamelin said...

There are no instances in which one may both know that p and be unsure that p. (Or at least so I claim.)

This is the crux of the biscuit. In order to count any scientific knowledge as actual knowledge, one must either embed epistemic probabilism in the statement of knowledge ("I know (i.e., I'm sure) that it's probable that gravity exists") or accept "I know p but I'm unsure that p" as legitimate.

The first seems attractive, but might lead to problems of self-reference, specifically: "Do I know that it's probable that I know that it's probable that ...?" and your brain is right back in the vat.

I think it's better to just bite the bullet and say, "I know that p but I'm unsure that p."

6/22/2007 10:34:00 AM  
