Here’s Mike Huemer’s second set of responses to me and you.


About Bryan’s Comments

Thanks again to Bryan, and the readers who commented on his post, for their thoughts about Part 2. This is all cool and interesting. I’ll just comment on a few questions and points of disagreement.

1. Real World Hypothesis (RWH) vs. Brain-in-a-Vat Hypothesis (BIVH)

Why, though, couldn’t we race the Real World theory against the Simulation-of-the-Real-World theory?

Good question. We can think of it like this: we have a theory, B (for “brain in a vat”), and some evidence E. B doesn’t predict E very well, because there are so many other things that are about equally compatible with B. What you could do is add some auxiliary assumptions onto B, producing (B&A). And you could pick A specifically so that it entails E, given B.

In Bayesian terms, this has the effect of increasing your likelihood (the P(E|H) term in Bayes’ Theorem). But it also reduces your prior (P(H)) by at least the same factor. So there’s no free lunch.

In other words, now the problem is just going to be that the new theory (B&A) has a low prior probability, because it stipulates that the scientists program the computer in a particular way, where this is one out of a very large number of ways they could program it (if there were such scientists).
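To make the “no free lunch” point precise (a quick derivation, assuming A is picked so that B&A entails E): by Bayes’ Theorem,

P(B&A|E) = P(E|B&A) · P(B&A) / P(E) = 1 · P(B) · P(A|B) / P(E).

And since B&A entails E, we have P(A|B) ≤ P(E|B), so

P(B&A|E) ≤ P(B) · P(E|B) / P(E) = P(B|E).

So the padded theory can never end up more probable than the original one: whatever the auxiliary assumption adds in likelihood, it takes away from the prior.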

Btw, this might not initially sound to you like it’s a super-specific stipulation (“they program a simulation of the real world”). But I think it really is very specific, comparatively speaking. You have to include the stipulation that they make a simulation that is perfect, with no glitches or program bugs or processing delays that would reveal the simulation. This is possible but is true of only a very narrow range of simulations that could exist. You have to stipulate that they decide to simulate an ordinary life in a society much less advanced than their own. Again, possible, but only a narrow range of the intentions they could have, and not one that makes a whole lot of sense. By comparison, if we found that our lives maximized enjoyment, or aesthetic beauty for observers, or intellectually interesting situations, or anything else other than “looking just like a normal mundane life”, then we’d have some evidence for the BIVH.

To what extent is this approach [direct realism] compatible with just saying that the reasonable Bayesian prior probability assigns overwhelming [probability] to the Real World story?

Also a good question. It’s different from assigning a high prior to the RW Hypothesis, because on the direct realist approach your justification depends on actually having sensory experiences. On the “high prior” story, you should believe the RWH even before having any sensory experiences (if you could somehow still understand the RWH at that time).

The direct realist approach is more like assigning a high prior probability to “Stuff is generally the way it appears” (or “if there appears to be a real world, then there is”). I think it might be a requirement of rationality that one assign a high prior to this.

2. The definition of “knowledge”

[W]hat’s wrong with the slightly modified view that when we call X “knowledge,” we almost always mean that X is a “justified, true belief”?  … [W]e can think of “justified, true belief” as a helpful description of knowledge rather than a strict definition.  What if anything is wrong with that?

I think “justified, true belief” is a fair approximation to the meaning of “knowledge”. (A closer approximation is “justified, true belief with no defeaters”.) That is to say, when we call x “knowledge”, we mean something close to that. I don’t think we ever mean exactly that, though. E.g., when you say that someone knows something, part of what you’re saying is that they’re not in a Gettier case (a case where a justified belief is true only by luck, as when you believe it’s noon because a stopped clock happens to read noon at exactly noon). That’s always implied by your statement (even if Gettier cases aren’t something anyone is thinking about at the moment), so you never merely mean that the person has a JTB.

About Reader Comments

part of the problem with questions like “Is there a God?” is not that they are meaningless or that they have no answer. Rather, it’s that they are unanswerable.

I don’t see why that’s unanswerable. That is, I think we can and do have evidence for or against the existence of God. Granted, none of it is conclusive evidence. But then, we also don’t have conclusive evidence for or against any scientific theory, yet we shouldn’t say that scientific questions are “unanswerable” (should we?).

The second example [goodness of polygamy], however, is not a factual question, and will depend on what each particular culture considers good and bad.

It is a factual question! I can’t find that passage right now (it’s not on 87-8 in my copy), so I’m not sure what point I was making. But I explain my arguments against moral relativism in ch. 13.

I too found the discussion of ‘polygamy is wrong’ to be ignoring the ambiguity of the proposition

I doubt that I was doing that. Just take whatever interpretation of the phrase you want, assume that is understood, and then read the passage with that sense. If you think there are three senses of “wrong”, say, wrong1, wrong2, and wrong3, just substitute “Polygamy is wrong1”, and then read the rest of the passage as normal. Again, I’m not sure where this passage is, so I am not sure what its actual point was.

…Phenomenal Conservatism, which says that we are entitled to presume that whatever seems to us to be the case is in fact the case, unless and until we have reasons to think otherwise.

This sounds exactly like Popperian fallibilism, since you are admitting that the moment you get a good reason to think that what seems true to you is false, you should doubt it, and thus the original “foundation” is still fallible.

This isn’t Popper’s point. Popper’s point is not merely that we are fallible and should give up our beliefs if we find evidence against them. Pretty much everyone agrees with that for almost all beliefs, and so that would not be a distinctively Popperian point (nor would Popper have gotten famous for saying that). What is distinctive of Popper is that he thinks that you never get any reason at all to believe that any scientific theory is true. (Most people can’t believe that Popper thinks that, because it’s so crazy, and so they just refuse to interpret him as saying that, no matter how clearly he says it. If you don’t believe me, see the Popper quotations in this post: https://fakenous.net/?p=1239.)

“BIV is a bad explanation because from it anything goes and so is not really an explanation” … Michael goes on to give an unnecessary argument (not even completely expressed in the book) with made-up probabilities

I left the full argument out of the book because I thought it would be too complicated for undergrad students. In case you’re interested, this is where the argument is explained more fully: https://philpapers.org/rec/HUESTA.

The Deutsch argument doesn’t sound adequate to me, since it doesn’t explain why the BIV theory is unlikely to be true. The remark “from it anything goes” is indeed the start of an explanation, but it sounds like Deutsch does not give the rest of it. He infers that the BIV theory isn’t an explanation, which doesn’t follow at all. If there were a BIV, and its experiences were really caused by scientists stimulating it, then that, trivially, would be the explanation of its experiences.

I don’t think my argument was unnecessary, because what I did was to actually explain why the BIV theory should be rejected. As far as I can tell (not having read Deutsch’s book, but just going from Benjamin’s comment), Deutsch doesn’t actually say why we’re not likely to be BIVs.

But “your belief is not justified” is not by itself a good reason to change it, especially if the belief to which you should change is also not justified.

I think this is a misunderstanding. I didn’t mean that you should change to another unjustified belief. I meant that, according to the skeptic, you should change from believing to not believing (whatever it is they’re calling unjustified). Why? That’s just what “unjustified” means. If you think that it can be rational to hold an “unjustified” belief, then I just don’t know what you mean by “unjustified”.

In order to create an accurate description of “only” the brain in its vat, the scientists, and the brain apparatus — as if that were all that existed, without relying on the simple rules of physics playing out from a (presumably simple) original condition — you would need an absolutely absurd quantity of description.

I think this argument is assuming that (i) the BIV theorist has to give a detailed description of the actual state of the brain, the scientists, etc., without stating the laws of nature (?), but (ii) the Real World theory only states the general laws, and doesn’t have to specify any boundary conditions. (?)

But I can’t figure out why Hellestal assumes that. Why wouldn’t the BIVH and RWH both assume the same laws? And in order to get any empirical predictions, both would of course have to add some information about the configuration of the physical world (some initial conditions). The BIV theory would have to add information about the state of the BIV. The RWH would have to add information about the state of the real world.

Maybe you’re assuming that the BIVH says that there is only a BIV, and nothingness outside the BIV’s lab. Of course that’s a ridiculous theory. But that’s not how anyone understands it. The BIV theory specifies that there is a BIV (and stuff for stimulating it, etc.), and it doesn’t make claims about whatever else is going on outside the BIV’s room.

the low-entropy starting condition of our universe … I don’t know why this puzzles anyone. A low-entropy starting condition is a mathematically simpler starting condition. That is literally what “low entropy” means: simplicity.

I don’t agree with the last statement. Entropy is more precisely defined in terms of the measure of the phase-space region that corresponds to a given macroscopic condition. Low entropy corresponds to a small region, and high entropy to a large region. Almost all of the phase space is occupied by the highest-entropy state, thermal equilibrium. On the standard way of assigning probabilities in thermodynamics, “low entropy” basically means “improbable”, not “simple”.
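In symbols (the standard Boltzmann definition, stated just for concreteness):

S = k · log W

where W is the measure (volume) of the phase-space region corresponding to the given macrostate and k is Boltzmann’s constant. On the standard uniform probability measure over phase space, the probability of a macrostate is proportional to W. So low entropy means small W, which means improbable; nothing in the definition mentions simplicity.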

For our world: You need a universe, which should ideally have simple rules of physics and a simple (low-entropy) starting condition. Then the simple rules of physics need to play out to, eventually, create us. And that’s it.

But that’s not enough to explain your evidence. To explain your evidence, you need there to be, e.g., tables, and giraffes, and 8 planets, and Mount Rushmore, etc. You have experiences of all those things, so the RWH says that all those things really exist. On the BIVH, by contrast, you still only need the scientists and the brain-stimulating equipment to explain all those experiences. There doesn’t have to actually be, say, a Mount Rushmore.

To my surprise, Prof. Huemer largely neglects social epistemology.

True. I was trying to keep the book to a manageable length. However, I plan to do an epistemology text next, and it will at least include testimony and peer disagreement.

The burning question I have: is the typo in the footnote on p.95 on purpose?

No. In fact, I don’t even see the typo. (?)

I think the skeptic’s claim that we cannot know anything with 100% certainty must be correct.

Do you really mean that, or do you just mean that we can’t know controversial beliefs with certainty? (E.g., progressives do not in fact know the optimal minimum wage.) Would you say that you’re uncertain whether you exist? Is it uncertain that A=A?

…our most solid cases of knowledge are built from cooperation. In the example with the octopus, if my friend is next to me and is also seeing the octopus, and we both talk about what we are seeing, and our descriptions match, the likelihood that I am truly seeing the octopus goes up.

True, but notice also that this is not actually a typical case. If you see a normal physical object in normal conditions, you’re not going to be asking your friend whether he sees it too. I’ve never done that in my life. Why not? Because I don’t need to: I already know what I see.

The cases in which you actually need to check with other people involve theoretical claims. Like, you’ve just given an argument against the minimum wage, and you ask your friend who is an economist to check it. I do that sort of thing all the time. And yes, that definitely increases the likelihood of being correct.
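To illustrate that with a toy calculation (the numbers are made up, just to show the structure): in odds form, Bayes’ Theorem says

posterior odds = prior odds × likelihood ratio.

Suppose my argument starts at even odds (1:1) of being sound, and my economist friend is 10 times more likely to endorse it if it’s sound than if it’s flawed. Then his endorsement moves the odds to 10:1, i.e., from 50% to about 91%. Independent checks compound multiplicatively in this way.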

This undermines the first claim above (“…our most solid cases of knowledge…”). No, our most solid cases of knowledge are things like immediate observations, made in normal conditions (good lighting, no hallucinogens, etc.). The theoretical knowledge that is commonly produced cooperatively (like science) is typically much less solid (less likely to be true, more likely to be revised in the future), even after we’ve gone through that cooperative process. Of course, that’s not because the cooperative process is bad; it’s because theoretical claims are inherently harder to know and easier to be wrong about. Which is why we feel the need to work together on them in the first place.