Thursday, March 15, 2018

Something that has no reasonable numerical epistemic probability

I think I can give an example of something that has no reasonable (numerical) epistemic probability.

Consider Goedel’s Axiom of Constructibility. Goedel proved that if the Zermelo-Fraenkel (ZF) axioms are consistent, they are also consistent with Constructibility (C). We don’t have any strong arguments against C.

Now, either we have a reasonable epistemic probability for C or we don’t.

If we don’t, here is my example of something that has no reasonable epistemic probability: C.

If we do, then note that Goedel showed that ZF + C implies the Axiom of Choice, and hence implies the existence of non-measurable sets. Moreover, C implies that there is a well-ordering W on the universe of all sets that is explicitly definable in the language of set theory.
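For reference, the formal facts being invoked are standard (they go back to Goedel's 1938 consistency proof):

\[
\mathrm{Con}(\mathrm{ZF}) \rightarrow \mathrm{Con}(\mathrm{ZF} + C), \qquad \mathrm{ZF} + C \vdash \mathrm{AC},
\]

and ZF + C proves that a particular formula of the language of set theory defines a well-ordering of the universe of sets; W below is the ordering so defined.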

Now consider some physical quantity Q where we know that Q lies in some interval [x − δ, x + δ], but we have no more precise knowledge. If C is true, let U be the W-smallest non-measurable subset of [x − δ, x + δ].

Assuming that we do have a reasonable epistemic probability for C, here is my example of something that has no reasonable epistemic probability: C is false or Q is a member of U.

Logical closure accounts of necessity

A family of views of necessity (e.g., Peacocke, Sider, Swinburne, and maybe Chalmers) identifies a family F of special true statements that get counted as necessary—say, statements giving the facts about the constitution of natural kinds, the axioms of mathematics, etc.—and then says that a statement is necessary if and only if it can be proved from F. Call these “logical closure accounts of necessity”. There are two importantly different variants: on one “F” is a definite description of the family and on the other “F” is a name for the family.

Here is a problem. Consider:

  1. Statement (1) cannot be proved from F.

If you are worried about the explicit self-reference in (1), I should be able to get rid of it by a technique similar to the diagonal lemma used to prove Goedel's incompleteness theorems. Now, either (1) is true or it's false. If it's false, then it can be proved from F. Since F is a family of truths, it follows that a falsehood can be proved from truths, and that would be the end of the world. So it's true, and thus it cannot be proved from F. But if it cannot be proved from F, then by the logical closure account it is not necessary, and hence it is contingently true.
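For the record, here is how the diagonalization would go, assuming that provability-from-F is expressible in the relevant language (a sketch, not part of the original argument): the diagonal lemma yields a sentence S such that

\[
S \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner S \urcorner),
\]

so that S says, without indexical self-reference, that it is not provable from F. S can then play the role of (1).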

Thus (1) is true, but, being contingent, it is false in some possible world w. In w, (1) can be proved from F, and hence in w (1) is necessary. So (1) is false at w and yet possibly necessary, in violation of the Brouwer Axiom ◇□p → p of modal logic (and hence of S5). Thus:

  2. Logical closure accounts of necessity require the denial of the Brouwer Axiom and S5.

But things get even worse for logical closure accounts. For an account of necessity had better itself not be a contingent truth. Thus, a logical closure account of necessity, if true in the actual world, will also be true in w. Now in w run the earlier argument showing that (1) is true. Thus, (1) is true in w. But (1) was false in w. Contradiction! So:

  3. Logical closure accounts of necessity can at best be contingently true.

Objection: This is basically the Liar Paradox.

Response: This is indeed my main worry about the argument. I am hoping, however, that it is more like Goedel’s Incompleteness Theorems than like the Liar Paradox.

Here's how I think the hope can be satisfied. The Liar Paradox and its relatives arise from unbounded application of semantic predicates like “is (not) true”. By “unbounded”, I mean that one is free to apply the semantic predicates to any sentence one wishes. Now, if F is a name for a family of statements, then it seems that (1) (or its definite description variant akin to that produced by the diagonal lemma) has no semantic vocabulary in it at all. If F is a description of a family of statements, there might be some semantic predicates there. For instance, it could be that F is explicitly said to include “all true mathematical claims” (Chalmers will do that). But then it seems that the semantic predicates are bounded—they need only be applied in the special kinds of cases that come up within F. It is a central feature of logical closure accounts of necessity that the statements in F be a limited class of statements.

Well, not quite. There is still a possible hitch. It may be that there is semantic vocabulary built into “proved”. Perhaps there are rules of proof that involve semantic vocabulary, such as Tarski’s T-schema, and perhaps these rules involve unbounded application of a semantic predicate. But if so, then the notion of “proof” involved in the account is a pretty problematic one and liable to license Liar Paradoxes.

One might also worry that my argument that (1) is true explicitly used semantic vocabulary. Yes: but that argument is in the metalanguage.

Tuesday, March 13, 2018

A third kind of moral argument

The most common kind of moral argument for theism is that theism better fits with there being moral truths (either moral truths in general, or some specific kind of moral truths, like that there are obligations) than alternative theories do. Often, though not always, this argument is coupled with a divine command theory.

A somewhat less common kind of argument is that theism better explains how we know moral truths. This argument is likely to be coupled with an evolutionary debunking argument to the effect that if naturalism and evolution were true, our moral beliefs might be true, and might even be reliably formed, but wouldn't be knowledge.

But there is a third kind of moral argument that one doesn't meet much at all in philosophical circles—though I suspect it is not uncommon popularly—and it is that theism better explains why we have moral beliefs. The reason we don't meet this argument much in philosophical circles is probably that there seem to be very plausible evolutionary explanations of moral beliefs in terms of kin selection and/or cultural selection. Social animals as clever as we are benefit as a group from moral beliefs that discourage secret anti-cooperative selfishness.

I want to try to rescue the third kind of moral argument in this post in two ways. First, note that moral beliefs are only one of several solutions to the problem of discouraging secret selfishness. Here are three others:

  • belief in karmic laws of nature on which uncooperative individuals get very undesirable reincarnatory outcomes

  • belief in an afterlife judgment by a deity on which uncooperative individuals get very unpleasant outcomes

  • a credence of around 1/2 in an afterlife judgment by a deity on which uncooperative individuals get an infinitely bad outcome (cf. Pascal's Wager; see the expected-value sketch after this list).
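The logic of the third option is worth displaying (a sketch with illustrative utilities of my own choosing): if r > 0 is the credence in the judgment and b the finite benefit of secret selfishness, then

\[
EU(\text{selfishness}) = (1 - r)\,b + r\,(-\infty) = -\infty,
\]

so, as in Pascal's Wager, even a middling credence in an infinitely bad outcome swamps any finite gain.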

These three options make one think that cooperativeness is prudent, but not that it is morally required. Moreover, they are arguably more robust drivers of cooperative behavior than beliefs about moral requirement. Admittedly, though, the first two of the above might lead to moral beliefs as part of a theory about the operation of the karmic laws or the afterlife judgment.

Let’s assume that there are important moral truths. Still, P(moral beliefs | naturalism) is not going to exceed 1/2. On the other hand, P(moral beliefs | God) is going to be high, because moral truths are exactly the sort of thing we would expect God to ensure our belief in (through evolutionary means, perhaps). So, the fact of moral belief will be evidence for theism over naturalism.

The second approach to rescuing the moral argument is deeper and I think more interesting. Moreover, it generalizes beyond the moral case. This approach says that a necessary condition for moral beliefs is being able to have moral concepts. But to have moral concepts requires semantic access to moral properties. And it is difficult to explain on contemporary naturalistic grounds how we have semantic access to moral properties. Our best naturalistic theories of reference are causal, but moral properties on contemporary naturalism (as opposed to, say, the views of a Plato or an Aristotle) are causally inert. Theism, however, can nicely accommodate our semantic access to moral properties. The two main theistic approaches to morality ground morality in God or in an Aristotelian teleology. Aristotelian teleology allows us to have a causal connection to moral properties—but then Aristotelian teleology itself calls for an explanation of our teleological properties that theism is best suited to give. And approaches that ground morality in God give God direct semantic access to moral properties, which semantic access God can extend to us.

This generalizes to other kinds of normativity, such as epistemic and aesthetic: theism is better suited to providing an explanation of how we have semantic access to the properties in question.

Conscious computers and reliability

Suppose the ACME AI company manufactures an intelligent, conscious and perfectly reliable computer, C0. (I assume that the computers in this post are mere computers, rather than objects endowed with soul.) But then a clone company manufactures a copy of C0, call it C1, out of slightly less reliable components. And another clone company makes a slightly less reliable clone of C1, call it C2. And so on. At some point in the cloning sequence, say at C10000, we reach a point where the components produce completely random outputs.

Now, imagine that all the devices from C0 through C10000 happen to get the same inputs over a certain day, and that all their components do the same things. In the case of C10000, this is astronomically unlikely, as its super-unreliable components produce completely random outputs.

Now, C10000 is not computing. Its outputs are no more the results of intelligence than a copy of Hamlet typed by the proverbial monkeys is the result of intelligent authorship. By the same token, C10000 is not conscious on computational theories of consciousness.

On the other hand, C0’s outputs are the results of intelligence and C0 is conscious. The same is true for C1, since if intelligence or consciousness required complete reliability, we wouldn’t be intelligent and conscious. So somewhere in the sequence from C0 to C10000 there must be a transition from intelligence to lack thereof and somewhere (perhaps somewhere else) a transition from consciousness to lack thereof.

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

More generally, this means that given functionalism about mind, there must be a dividing line in measures of reliability between cases of consciousness and ones of unconsciousness.

I wonder if this is a problem. I suppose if the dividing line is somehow natural, it’s not a problem. I wonder if a natural dividing line of reliability can in fact be specified, though.

Monday, March 12, 2018

The usefulness of having two kinds of quantifiers

A central Aristotelian insight is that substances exist in a primary way and other things—say, accidents—in a derivative way. This insight implies that use of a single existential quantifier ∃x for both substances and forms does not cut nature at the joints as well as it can be cut.

Here are two pieces of terminology that together not only capture the above insight about existence, but do a lot of other (but closely related) ontological work:

  1. a fundamental quantifier ∃u over substances.

  2. for any y, a quantifier ∃_y x over all the (immediate) modes (tropes) of y.

We can now define:

  • a is a substance iff ∃u(u = a)

  • b is an (immediate) mode of a iff ∃_a x(x = b)

  • f is a substantial form of a substance a iff a is a substance and ∃_a x(x = f): substantial forms are immediate modes of substances

  • b is a (first-level) accident of a substance a iff a is a substance and ∃_a x ∃_x y(y = b & y ≠ x): first-level accidents are immediate modes of substantial forms, distinct from these forms (this qualifier is needed so that God wouldn't count as having any accidents)

  • f is a substantial form iff ∃u ∃_u x(x = f)

  • b is a (first-level) accident iff ∃u ∃_u x ∃_x y(y = b & y ≠ x).

This is a close variant on the suggestion here.

Friday, March 9, 2018

A regress of qualitative difference

According to heavyweight Platonism, qualitative differences arise from differences between the universals being instantiated. There is a qualitative difference between my seeing yellow and your smelling a rose. This difference has to come from the difference between the universals seeing yellow (Y) and smelling a rose (R). But one doesn’t get a qualitative difference from being related in the same way to numerically but not qualitatively different things (compare: being taller than Alice is not qualitatively different from being taller than Bea if Alice and Bea are qualitatively the same—and in particular, of the same height). Thus, if the qualitative difference between my seeing yellow and your smelling a rose comes from being related by instantiation to different things, namely Y and R, then this presupposes that the two things are themselves qualitatively different. But this qualitative difference between Y and R depends on Y and R exemplifying different—and indeed qualitatively different—properties. And so on, in a regress!

Intrinsic attribution

  1. If heavyweight Platonism is true, all attribution of attributes to a subject is grounded in facts relating the subject to abstracta.

  2. Intrinsic attribution is never grounded in facts relating a subject to something distinct from itself.

  3. There are cases of intrinsic attribution with a non-abstract subject.

  4. If heavyweight Platonism is true, each case of intrinsic attribution to a non-abstract subject is grounded in facts relating that object to something other than itself. (By 1 and 2)

  5. So, if heavyweight Platonism is true, there are no cases of intrinsic attribution to a non-abstract subject. (By 2 and 4)

  6. So, heavyweight Platonism is not true. (By 3 and 5)

Here, however, is a problem with 3. All cases of attribution to a creature are grounded in the creature’s participation in God. Hence, no creature is a subject of intrinsic attribution. And God’s attributes are grounded in a relation between God and the Godhead. But by divine simplicity, God is the Godhead. Since the Godhead is abstract, God is abstract (as well as being concrete) and hence God does not provide an example of intrinsic attribution with a non-abstract subject.

I still feel that there is something to the above argument. Maybe the sense in which a creature’s attributes are grounded in the creature’s participation in God is different from the sense of grounding in 2.

Friday, March 2, 2018

Wishful thinking

Start with this observation:

  1. Commonly used forms of fallacious reasoning are typically distortions of good forms of reasoning.

For instance, affirming the consequent is a distortion of the probabilistic fact that if we are sure that if p then q, then learning q is some evidence for p (unless q already had probability 1 or p had probability 0 or 1). The ad hominem fallacy of appeal to irrelevant features in an arguer is a distortion of a reasonable questioning of a person’s reliability on the basis of relevant features. Begging the question is, I suspect, a distortion of an appeal to the obviousness of the conclusion: “Murder is wrong. Look: it’s clear that it is!”
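For reference, the probabilistic fact in symbols (a standard Bayesian identity, not in the original): if P(q | p) = 1 while 0 < P(p) and P(q) < 1, then

\[
P(p \mid q) = \frac{P(q \mid p)\,P(p)}{P(q)} = \frac{P(p)}{P(q)} > P(p),
\]

so learning q genuinely raises the probability of p; affirming the consequent distorts this by treating the evidential boost as proof.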


  2. Wishful thinking is a commonly used form of fallacious reasoning.

  2. So, wishful thinking is probably a distortion of a good form of reasoning.

I suppose one could think that wishful thinking is one of the exceptions to rule (1). But to be honest, I am far from sure there are any exceptions to rule (1), despite my cautious use of “typically”. And we should avoid positing exceptions to generally correct rules unless we have to.

So, if wishful thinking is a distortion of a good form of reasoning, what is that good form of reasoning?

My best answer is that wishful thinking is a distortion of correct probabilistic reasoning on the basis of the true claim that:

  4. Typically, things go right.

The distortion consists in the fact that in the fallacy of wishful thinking one is reasoning poorly, likely because one is doing one or more of the following:

  5. confusing things going as one wishes them to go with things going right,

  6. ignoring defeaters to the particular case, or

  7. overestimating the typicality mentioned in (4).

Suppose I am right about (4) being true. Then the truth of (4) calls out for an explanation. I know of four potential explanations of (4):

  i. Theism: God creates a good world.

  ii. Optimalism: everything is for the best.

  iii. Aristotelianism: rightness is a matter of lining up with the telos, and causal powers normally succeed at getting at what they are aiming at.

  iv. Statisticalism: norms are defined by what is typically the case.

I think (iv) is untenable, so that leaves (i)-(iii).

Now, optimalism gives strong evidence for theism. First, theism would provide an excellent explanation for optimalism (Leibniz). Second, if optimalism is true, then there is a God, because that’s for the best (Rescher).

Aristotelianism also provides evidence for theism, because it is difficult to explain naturalistically where teleology comes from.

So, thinking through the fallacy of wishful thinking provides some evidence for theism.

Thursday, March 1, 2018

Superpositions of conscious states

Consider this thesis:

  1. Reality is never in a superposition of two states that differ with respect to what, if anything, observers are conscious of.

This is one of the motivators for collapse interpretations of quantum mechanics. Now, suppose that S is an observable that describes some facet of conscious experience. Then according to (1), reality is always in some eigenstate of S.

Suppose that at the beginning t0 of some interval I of times, reality is in eigenstate ψ0. Now, suppose that collapse does not occur during I. By continuity considerations, then, over I reality cannot evolve to a state orthogonal to ψ0 without passing through a state that is a superposition of ψ0 and something else. In other words, over a collapse-free interval of time, the conscious experience that is described by S cannot change if (1) is true.
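The continuity point can be made precise (a sketch, assuming norm-continuous unitary evolution): if ψ(t0) = ψ0 and ψ(t1) is orthogonal to ψ0 for some t1 in I, then the continuous function t ↦ |⟨ψ0, ψ(t)⟩| goes from 1 to 0, so at some intermediate t we have 0 < |⟨ψ0, ψ(t)⟩| < 1, and then

\[
\psi(t) = \langle \psi_0, \psi(t)\rangle\,\psi_0 + \chi, \qquad \chi \perp \psi_0,\ \chi \neq 0,
\]

is a nontrivial superposition of ψ0 with something orthogonal to it.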

What if collapse happens? That doesn't seem to help. There are two plausible options: collapses are temporally discrete or temporally dense. If they are temporally dense, then by the quantum Zeno effect, with probability one there is no change with respect to S. If they are temporally discrete, then suppose that t1 is the first time after t0 at which collapse causes the system to enter a state ψ1 orthogonal to ψ0. But for collapse to be able to do that, the pre-collapse state would have had to assign some weight to ψ1 while still assigning some weight to ψ0, and that would violate (1).

(There might also be some messy story where there are some temporally dense and some temporally isolated collapses. I haven't figured out exactly what to say about that, other than that it is in danger of being ad hoc.)

So, whether collapse happens or not, it seems that (1) implies that there is no change with respect to conscious experience. But clearly the universe changes with respect to conscious experience. So, it seems we need to reject (1). And this rejection seems to force us into some kind of weird many-worlds interpretation on which we have superpositions of incompatible experiences.

There are, however, at least two places where this argument can be attacked.

First, the thesis that conscious experience is described by observables understood (implicitly) as Hermitian operators can be questioned. Instead, one might think that conscious states correspond to subsets of the Hilbert space, subsets that may not even be linear subspaces.

Second, one might say that (1) is false, but nothing weird happens. We get weirdness from the denial of (1) if we think that a superposition of, say, seeing a square and seeing a circle is some weird state that has a seeing-a-square aspect and a seeing-a-circle aspect (this is weird in different ways depending on whether you take a multiverse interpretation). But we need not think that. We need not think that if a quantum state ψ1 corresponds to an experience E1 and a state ψ2 corresponds to an experience E2, then ψ = a1ψ1 + a2ψ2 corresponds to some weird mix of E1 and E2. Perhaps the correspondence between physical and mental states in this case goes like this:

  1. when |a1| ≫ |a2|, the state ψ still gives rise to E1

  2. when |a1| ≪ |a2|, the state ψ gives rise to E2

  3. when a1 and a2 are similar in magnitude, the state ψ gives rise to no conscious experience at all (or gives rise to some other experience, perhaps one related to E1 and E2, or perhaps one that is entirely unrelated).

After all, we know very little about which conscious states are correlated with which physical states. So, it could be that there is always a definite conscious state in the universe. I suppose, though, that this approach also ends up denying that we should think of conscious states as corresponding in the most natural way to the eigenvectors of a Hermitian operator.

Wednesday, February 28, 2018

More on pain and presentism

Imagine two worlds, in both of which I am presently in excruciating pain. In world w1, this pain began a nanosecond ago and will end in a nanosecond. In w2, the pain began an hour ago and will end in an hour.

In world w1, I am hardly harmed if I am harmed at all. Two nanoseconds of pain, no matter how bad, are just about harmless. It would be rational to accept two nanoseconds of excruciating pain in exchange for any non-trivial good. But in world w2, things are really bad for me.

An eternalist has a simple explanation of this: even if each of the two-nanosecond pains has only a tiny amount of badness, in w2 I really have 3.6 × 10¹² of them (two hours is 7.2 × 10¹² nanoseconds), and that's really bad.

It seems hard, however, for a presentist to explain the difference between the two worlds. For of the 3.6 × 10¹² two-nanosecond pains I receive in w2, only one really exists. And there is one that really exists in w1. Where is the difference? Granted, in w2, I have received trillions of these pains and will receive trillions more. But right now only one pain exists. And throughout the two hours of pain, at any given time, only one of the pains exists—and that one pain is insignificant.

Here is my best way of trying to get the presentist out of this difficulty. Pain is like audible sound. You cannot attribute an audible sound to an object in virtue of how the object is at one moment of time, or even a very, very short interval of time. You need at least 50 microseconds to get an audible sound, since you need one complete period of air vibration, and the period of the highest audible frequency, around 20 kHz, is 1/20000 s = 50 microseconds (I am assuming that 50 microseconds doesn't count as “very, very short”). When the presentist says that there is an audible sound at t, she must mean that there was air vibration going on some time before t and/or there will be air vibration going on for some time after t. Likewise, to be in pain at t requires a non-trivial period of time, much longer than two nanoseconds, during which some unpleasant mental activity is going on.

How long is that period? I don't know. A tenth of a second, maybe? But maybe for an excruciating pain, that activity needs to go on for longer, say half a second. Suppose so. Can I re-run the original argument, but using a half-second pulse of excruciating pain in place of the two-nanosecond excruciating pain? I am not sure. For a half-second of excruciating pain is not insignificant.

Collapse and the continuity of consciousness

One version of the quantum Zeno effect is that if you collapse a system's wavefunction with respect to a measurement often enough, the measured quantity is not going to change.
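Here is the standard back-of-the-envelope version of the effect (a sketch, with ℏ = 1 and unitary evolution between measurements): for short times t, the survival probability of a state ψ under a Hamiltonian H is

\[
|\langle \psi, e^{-iHt}\psi\rangle|^2 = 1 - (\Delta H)^2\,t^2 + O(t^4),
\]

so after n projective measurements spaced T/n apart, the probability that every measurement still finds the system in ψ is roughly

\[
\left(1 - (\Delta H)^2 (T/n)^2\right)^n \to 1 \quad \text{as } n \to \infty.
\]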

Thus, if observation causes collapse, and you look at a pot of water on the stove often enough, it won’t boil. In particular, if you are continuously (or just at a dense set of times) observing the pot of water, then it won’t boil.

But of course watched pots do boil. Hence:

  • If observation causes collapse, consciousness is not temporally continuous (or temporally dense).

And the conclusion is what we would expect if causal finitism were true. :-)

Tuesday, February 27, 2018

A problem for Goedelian ontological arguments

Goedelian ontological arguments (e.g., mine) depend on axioms of positivity. Crucially for the argument, these axioms entail that any two positive properties are compatible (i.e., something can have both).
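To see where the entailment comes from, recall the two relevant Goedel-style axioms (a standard reconstruction, writing P(A) for “A is positive”): exactly one of a property and its negation is positive, and whatever is entailed by a positive property is positive. If A and B were positive but incompatible, then A would entail ¬B, so

\[
P(A) \wedge \Box\forall x\,(A(x) \rightarrow \neg B(x)) \;\Rightarrow\; P(\neg B),
\]

and P(¬B) together with P(B) contradicts the first axiom. Hence positive properties must be pairwise compatible.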

But I now worry whether it is true that any two positive properties are compatible. Let w0 be our world—where worlds encompass all contingent reality. Then, plausibly, actualizing w0 is a positive property that God actually has. But now consider another world, w1, which is no worse than ours. Then actualizing w1 is a positive property, albeit one that God does not actually have. But it is impossible that a being actualize both w0 and w1, since worlds encompass all contingent reality and hence it is impossible for two of them to be actual. (Of course, God can create two or more universes, but then a universe won’t encompass all contingent reality.) Thus, we have two positive properties that are incompatible.

Another example. Let E be the ovum and S1 the sperm from which Socrates originated. There is another possible world, w2, at which E and a different sperm, S2, result in Kassandra, a philosopher every bit as good and virtuous as Socrates. Plausibly, being friends with Socrates is a positive property. And being friends with Kassandra is a positive property. But also plausibly there is no possible world where both Socrates and Kassandra exist, and you can't be friends with someone who doesn't exist (we can make that stipulative). So, being friends with Socrates and being friends with Kassandra are incompatible and yet positive.

I am not completely confident of the counterexamples. But if they do work, then the best fix I know for the Goedelian arguments is to restrict the relevant axioms to strongly positive properties, where a property is strongly positive just in case having the property essentially is positive. (One may need some further tweaks.) Essentially actualizing w0 precludes being able to actualize anything else, and hence isn't positive. Likewise, essentially being friends with Socrates limits one to existing only in worlds where Socrates does, and hence isn't positive. But, alas, the argument becomes more complicated and hence less plausible with the modification.

Another fix might be to restrict attention to positive non-relational properties, but I am less confident that that will work.

Voluntariness of beliefs

The following claims are incompatible:

  1. Beliefs are never under our direct voluntary control.

  2. Beliefs are high credences.

  3. Credences are defined by corresponding decisional dispositions.

  4. Sometimes, the decisional dispositions that correspond to a high credence are under our direct voluntary control.

Here is a reason to believe 4: We have the power to resolve to act a certain way. When successful, exercising the power of resolution results in a disposition to act in accordance with the resolution. Among the things that in some cases we can resolve to do is to make the decisions that would correspond to a high credence.

So, I think we should reject at least one of 1-3. My inclination is to reject both 1 and 3.

Friday, February 23, 2018

More on wobbling of priors

In two recent posts (here and here), I made arguments based on the idea that wobbliness in priors translates to wobbliness in posteriors. The posts, while mathematically correct, neglect an epistemologically important fact: a wobble in a prior may be offset by a countervailing wobble in a Bayes' factor, resulting in a steady posterior.

Here is an example of this phenomenon. Either a fair coin or a two-headed coin was tossed by Carl. Alice thinks Carl is a normally pretty honest guy, and so she thinks it’s 90% likely that a fair coin was tossed. Bob thinks Carl is tricky, and so he thinks there is only a 50% chance that Carl tossed the fair coin. So:

  • Alice's prior for heads is (0.9)(0.5)+(0.1)(1.0) = 0.55

  • Bob's prior for heads is (0.5)(0.5)+(0.5)(1.0) = 0.75.

But now Carl picks up the coin, mixes up which side was at the top, and both Alice and Bob have a look at it. It sure looks to them like there is a head on one side of it and a tail on the other. As a result, they both come to believe that the coin is very, very likely to be fair, and when they update their credences on their observation of the coin, they both come to have a credence of roughly 0.5 that the coin landed heads.

But a difference in priors should translate to a corresponding difference in posteriors given the same evidence, since the force of evidence is just the addition of the logarithm of the Bayes’ factor to the logarithm of the prior odds ratio. How could they both have had such very different priors for heads, and yet a very similar posterior, given the same evidence?
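In odds form, this is just Bayes' theorem (a standard identity, stated here for reference):

\[
\log \frac{P(H \mid E)}{P(\neg H \mid E)} = \log \frac{P(H)}{P(\neg H)} + \log \frac{P(E \mid H)}{P(E \mid \neg H)},
\]

so if the Bayes' factor (the last term) were the same for both agents, different prior odds would have to yield correspondingly different posterior odds.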

The answer is this. If the only relevant difference between Alice's and Bob's beliefs were their priors for heads, then indeed they couldn't get the same evidence and both end up very close to 0.5. But their Bayes' factors also differ.

  • For Alice: P(looks fair | heads)≈0.82; P(looks fair | tails)≈1; Bayes’ factor for heads vs. tails ≈0.82

  • For Bob: P(looks fair | heads)≈0.33; P(looks fair | tails)≈1; Bayes’ factor for heads vs. tails ≈0.33.

Thus, for Alice, that the coin looks fair is pretty weak evidence against heads, lowering her credence from 0.55 to around 0.5, while for Bob, that the coin looks fair is moderate evidence against heads, lowering his credence from 0.75 to around 0.5. Both end up at roughly the same point.
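Since this kind of arithmetic is easy to garble, here is a small sanity check in Python (a sketch; the modeling assumption that the coin looks fair exactly when it is fair is mine, chosen to reproduce the numbers above):

```python
# Alice/Bob coin example: same evidence, different priors AND different
# Bayes' factors, converging on the same posterior.

def analyze(p_fair):
    """p_fair: prior probability that Carl tossed the fair coin."""
    p_heads = p_fair * 0.5 + (1 - p_fair) * 1.0  # prior for heads
    # Assume "looks fair" happens exactly when the coin is fair.
    p_looks_fair_given_heads = (p_fair * 0.5) / p_heads
    p_looks_fair_given_tails = 1.0               # tails entails a fair coin
    bayes_factor = p_looks_fair_given_heads / p_looks_fair_given_tails
    posterior_odds = (p_heads / (1 - p_heads)) * bayes_factor
    posterior = posterior_odds / (1 + posterior_odds)
    return p_heads, bayes_factor, posterior

for name, p_fair in [("Alice", 0.9), ("Bob", 0.5)]:
    prior, bf, post = analyze(p_fair)
    print(f"{name}: prior for heads = {prior:.2f}, "
          f"Bayes factor = {bf:.2f}, posterior = {post:.2f}")
# Alice: prior for heads = 0.55, Bayes factor = 0.82, posterior = 0.50
# Bob:   prior for heads = 0.75, Bayes factor = 0.33, posterior = 0.50
```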

Thus, we cannot assume that a difference with respect to a proposition in the priors translates to a corresponding difference in the posteriors. For there may also be a corresponding difference in the Bayes’ factors.

I don’t know if the puzzling phenomena in my two posts can be explained away in this way. But I don’t know that they can’t.

A slightly different causal finitist approach to finitude

The existence of non-standard models of arithmetic makes defining finitude problematic. A finite set is normally defined as one that can be numbered by a natural number, but what is a natural number? The Peano axioms sadly underdetermine the answer: there are non-standard models.
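The classic reason there are such models is worth recalling (standard model theory, not special to this post): add a new constant c to the language of arithmetic, together with the axioms

\[
c > 0, \quad c > 1, \quad c > 2, \quad \ldots
\]

Any finite subset of these, together with the (first-order) Peano axioms, is satisfiable in the standard natural numbers by interpreting c as a large enough number. So by the compactness theorem the whole collection has a model, and that model contains an element greater than every standard natural number.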

Now, causal finitism is the metaphysical doctrine that nothing can have an infinite causal history. Causal finitism allows for a very neat and pretty intuitive metaphysical account of what a natural number is:

  • A natural number is a number one can causally count to starting with zero.

Causal counting is counting where each step is causally dependent on the preceding one. Thus, you say “one” because you remember saying “zero”, and so on. The causal part of causal counting excludes a case where monkeys are typing at random and by chance type up 0, 1, 2, 3, 4. If causal finitism is false, the above account is apt to fail: it may be possible to count to infinite numbers, given infinite causal sequences.
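As a toy illustration, one can model causal counting as a chain of steps, each holding a reference to the step that caused it (a sketch; the class and method names are my own, not from the post):

```python
# Toy model of causal counting: each step causally depends on the
# preceding one, so a count has a causal history we can walk back.

class CountStep:
    def __init__(self, value, cause=None):
        self.value = value  # the number uttered at this step
        self.cause = cause  # the step this utterance causally depends on

    def causal_history(self):
        """List the values in this step's causal history, latest first."""
        step, history = self, []
        while step is not None:
            history.append(step.value)
            step = step.cause
        return history

zero = CountStep(0)
one = CountStep(1, cause=zero)   # saying "one" because one remembers "zero"
two = CountStep(2, cause=one)
print(two.causal_history())      # [2, 1, 0]: a finite causal chain
```

If causal finitism is true, every such chain of causes must terminate after finitely many steps, which is what lets the account pick out exactly the standard natural numbers.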

While we can then plug this into the standard definition of a finite set, we can also define finitude directly:

  • A finite set or plurality is one whose elements can be causally counted.

One of the reasons we want an account of the finite is so we get an account of proof. Imagine that every day of a past eternity I said: “And thus I am the Queen of England.” Each day my statement followed from what I said before, by reiteration. And trivially all premises were true, since there were no premises. Yet the conclusion is false. How can that be? Well, because what I gave wasn’t a proof, as proofs need to be finite. (I expect we often don’t bother to mention this point explicitly in logic classes.)

The above account of finitude gives an account of the finitude of proof. But interestingly, given causal finitism, we can give an account of proof that doesn’t make use of finitude:

  • To causally prove a conclusion from some assumptions is to utter a sequence of steps, where each step’s being uttered is causally dependent on its being in accordance with the rules of the logical system.

  • A proof is a sequence of steps that could be uttered in causally proving.

My infinite “proof” that I am the Queen of England cannot be causally given if causal finitism is true, because then each day’s utterance will be causally dependent on the previous day’s utterance, in violation of causal finitism. However, interestingly, the above account of proof does not guarantee that a proof is finite. A proof could contain an infinite number of steps. For instance, uttering an axiom or stating a premise does not need to causally depend on previous steps, but only on one’s knowledge of what the axioms and premises are, and so causal finitism does not preclude having written down an infinite number of axioms or premises. However, what causal finitism does guarantee is that the conclusion will only depend on a finite number of the steps—and that’s all we need to make the proof be a good one.

What is particularly nice about this approach is that the restriction of proofs to being finite can sound ad hoc. But it is very natural to think of the process of proving as a causal process, and of proofs as abstractions from the process of proving. And given causal finitism, that’s all we need.