This essay is presented from the perspective of the philosophy of probability.

It is of the nature of a horse to have two eyes. Such is the ancient and naive ‘explanation’. However, that is not an explanation or even an answer. At best, it is simply a statement of the observation that prompts the question. At worst, it is a cop-out that evades the question.

More recent observations are that predatory animals have two eyes in a single plane, enabling binocular vision. In contrast, animals that are prey to others, like the horse, have their eyes placed approximately in two planes, namely the two sides of their heads. Such placement affords a nearly complete view of the sphere from which a predator’s attack may come.

In prey, the placement of the two eyes approximately defines the axis of a globe of visual sensation, over which the eyes are uniformly distributed. In the case of the fewest eyes, namely two, the eyes lie at the ends of the axis of the globe. The axis may be viewed as the diameter of one longitudinal circumference.

The more eyes distributed uniformly over the globe of visual sensation, the more complete and uniform would be the monitoring of the sphere of predatory attack. Let the algorithm for adding uniformly distributed eyes to the globe of visual sensation be the addition of equally spaced longitudinal circumferences, where each circumference has a number of eyes equal to twice the number of circumferences, while maintaining the two original eyes at the axis of the globe.

By this algorithm, the relationship between the number of equally spaced eyes, N, and the number of longitudinal circumferences, n, would be: N = (n^2 − (n − 1)) × 2. Each of the n circumferences carries 2n eyes, but the two axial eyes are common to all n circumferences, so each is counted n − 1 times too many: N = 2n^2 − 2(n − 1). For n = 1, 2, 3 and 4, the number of uniformly distributed eyes, N, is 2, 6, 14 and 26. The practical upper limit would be determined by the size of the eyes and the size of a horse’s head. Also, the practical number of eyes would be diminished by those positions on the virtual globe of visual sensation which would be eliminated by the neck of the horse.
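The tally can be checked with a short script (a sketch; the function name is mine):

```python
def eyes(n):
    """Total eyes for n longitudinal circumferences, each carrying 2*n eyes,
    with the two axial eyes shared by every circumference."""
    return (n**2 - (n - 1)) * 2

print([eyes(n) for n in range(1, 5)])  # the essay's values: 2, 6, 14, 26
```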

Why do horses have two eyes? Horses have two eyes as a uniform distribution of visual sensors forming a global base for monitoring an attack from any point of the sphere of predation.

It should be apparent that the fact that prey animals within the scope of human observation, such as the horse, have just two eyes is one possibility among several. It happens that in our universe the number is two. However, the fact that our observation is limited to our earth in our universe voids the rationale that claims it is of some fundamental character of a horse to have two eyes. Indeed, it has two eyes in our universe, but horses may have any of a range of numbers of eyes, notably greater than two, in other regions of the multiverse. It is the multiverse which explains the fact that we observe within our universe just one of many possibilities, where, in accord with probability, the number of eyes of earth horses is two.

It is evident that the existence of the multiverse in cosmology is not a consequence solely of the science of physics and the numerical values of the physical constants in our universe. The multiverse is harmonious with biology as well and with what seems like a simple question, ‘Why do horses have two eyes?’



This essay is presented by its author on the supposition of his virtual assignment to the debate side expressed by the title. It is prompted by the impression that published views, on the con side of the debate, typically dismiss the pro side as intellectually and philosophically trivial. Consequently, the con side has not adequately addressed the issue of debate.

The issue or thesis is that human knowledge of material reality is the inference of mathematical probability. Hahn and Wiker (Answering the New Atheism, p 10) accuse Dawkins of an irrational faith in chance when Dawkins has explicitly denied chance as a solution (The God Delusion, p 119-120). Feser (The Last Superstition) does not even discuss mathematical probability, although identifying Dawkins as his main philosophical opponent. In a few instances Feser uses the word, probability, but in the sense of human certitude, not in the mathematical sense.

The Historical Issues

There were two dichotomies with which the ancient Greek philosophers wrestled. One was the discrete and the continuous. The other was the particular and the universal.

The Discrete and the Continuous

Zeno of Elea was a proponent of the discrete to the denial of the continuous. This took the form of a discrete analysis of motion. Any linear local motion takes a finite time to proceed halfway, leaving the remainder of the motion in the same situation. If local motion were real, it would take an infinite number of finite increments of time, and also of distance, to complete the motion. Therefore, motion is an illusion. From this perspective, it is assumed that the discrete is real. When subjected to discrete analysis, motion, which is continuous, is seen to be untenable.

Heraclitus of Ephesus took the opposite view. Everything is always changing. It is change, which is real. Things as entities, i.e. as implicitly stable, are mental constructs. They are purely logical. It is continuous fluidity which is reality.

The Particular and the Universal

It was apparent to both Plato and his student, Aristotle, that the object of sense knowledge was particular, completely specified. In contrast, intellectual concepts were universal, not characterized by particularity, but compatible with a multitude of incompatible particulars. Plato proposed that sense knowledge of the particular was a prompt to intellectual knowledge, the recollection of a memory from when the human soul, apart from the body, had known the universals.

Aristotle proposed that material entities or substances were composites of two principles. One was intelligible and universal, the substantial form. The other was the principle of individuation or matter, which enabled the expression of that universal form in a complete set of particulars. The human soul had the power to abstract the universal form from a phantasm presented to it by sense knowledge of the individual material entity in its particularities.

From this binary division into the two principles of substantial form and matter arose the concept of formal causality. The form of an entity made an entity to be what it was. It was the formal cause, whereas the particular material substance, as a composite of form and matter, was the effect. Thus, cause and effect were binary variables. The cause is absent, 0, or present, 1, and its effect was correspondingly binary as absent, 0, or present, 1. Thereby, the philosophy of formal causality was tied to the discrete mathematics of binary arithmetic.

The Modern Assessment of Form

This discrete and binary view of formal causality was subtly undermined in the 19th century. What led to its demise was the study of variation in biological forms. Darwin proposed that the modification of biological forms was due to the generation of variants by random mutation and their differential survival due to natural selection.

Superficially this appeared to be consonant with the distinction of one substantial form, or identity of one species, as discretely distinct from another. However, it was soon realized that the spectrum of seemingly discrete and graduated forms was, in its limit, continuous variation. One species in an evolutionary line did not represent a discretely distinct substantial form from the next substance in the spectrum. Rather, they were related by continuous degree. The distinction of one biological form from another, as substantial, was an imposition of the human mind on biological reality. To save at least the jargon of Aristotelian philosophy, it could be said that the evolutionary and graduated differences among living things were accidental differences among individuals of one substantial form, namely the substantial form, living thing.

The Resultant Modern Assessment of Efficient Causality

Apart from formal causality, Aristotle also identified efficient causality, namely the transition of potency to act. This would include all change, both substantial change and local motion. In keeping with the limitations of binary arithmetic, efficient causality and its effect were identified as absent, 0, or present, 1. However, concomitant to the implication of the random mutation of forms, which renders the substantial form of living things a continuum, is the implication of mathematical probability as the outcome of an event. Just as the mutation of forms defined a continuous spectrum for formal causality, probability defines a continuous spectrum, from 0 to 1, for efficient causality. Efficient causality is the probability of an outcome, the probability of an event. The outcome or event as the effect is within a continuous spectrum and proportional to its continuous efficient cause, which is mathematical probability. Thus, the inference of mathematical probability as the mode of human knowledge of material reality frees efficient causality and its effect from the restrictions of binary arithmetic.

Causality was no longer discrete and binary. Causality was the amplitude from 0 to 1 of the continuous variable, probability. Causality had now the nuance of degree, made possible by the rejection of discrete, binary arithmetic in favor of continuity. The magnitude of the effect was directly proportional to the amplitude of the cause. The simplicity of discrete, binary arithmetic, which is so satisfying to the human mind, was replaced by what we see in nature, namely degree.

A Clearer Understanding of Chance

Hume had rejected the idea of efficient causality. He claimed that what we propose as cause and effect is simply a habit of association over a sequence of events. In this view, we label as an effect the next in a series of events according to what we anticipate due to our habit of association. The understanding of probability as causality having amplitude restores cause and effect, negating Hume’s denial.

Mathematical probability is the fractional concentration of an element, x, of quantity, n, in a logical set of N elements. This fraction, n/N, has a lower limit of 0 as n → 0. The limit, 0, is a non-fraction. The upper limit of the fraction, probability, n/N, as n → N is 1, a non-fraction. These non-fractional limits represent the old, binary conception of causality. Properly understood, these limits demarcate the continuum of probability, the continuum of efficient causality.

The binary definition of chance was an effect of 1, where the cause was 0. In recognizing probability as efficient causality, this does not change. No one offers chance as an explanation (The God Delusion, p 119-120). In the context of probability, however, the binary concept of chance yields to a properly nuanced understanding. Chance is directional within the continuum of probability. Causality tends toward chance as the probability tends toward 0. This is mathematically the same as improbability increasing toward 1. Consequently, Dawkins notes that a decrease in probability is moving away from chance by degree, “I want to continue demonstrating the problem which any theory of life must solve: how to escape from chance.” (The God Delusion, p 120). This escape from chance by degree is explicit, “The answer is that natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so.” (The God Delusion, p 121)

Often in common parlance, chance and probability are synonyms: The chance or probability of heads in flipping a coin is one-half. In recognizing probability as the spectrum of efficient causality they are not synonyms. Chance is properly understood as directional movement toward the lower end of the spectrum of probability.

Mathematical Probability and Human Certitude Merge

The recognition of efficient causality as the continuum of probability introduces a distinction between mathematical chance as directional and mathematical probability as spectrum. On the other hand, this recognition merges the meaning of mathematical probability and probability in the sense of an individual’s certitude of the truth of a proposition.

In the Aristotelian discrete, binary view of efficient causality, an individual’s certitude of the truth of a proposition, though commonly labeled probability, was strictly qualitative and subjective. One could, of course, describe one’s certitude on a numerical scale, but this was simply a subjective accommodation. For example, stating a numerical degree of one’s certitude is just for the fun of it within a discussion of politics by TV pundits. In spite of adopting an arbitrary scale, such as zero to ten, to express a pundit’s certitude, human certitude was still recognized as qualitative.

The recognition of efficient causality as the continuum of mathematical probability, implies that human knowledge is the inference of mathematical probability and, indeed, a matter of degree. There is no distinction between the probability of efficient causality and the degree of certitude of human knowledge. Human certitude, which was thought to be qualitative, is quantitative because human knowledge is the inference of mathematical probability.

Final Causality

Final causality or purpose is characteristic of human artifacts. However enticing as it may be, it is simply anthropomorphic to extrapolate purpose from human artifacts to material reality (The God Delusion, p 157). In the binary context of form and matter, it was quite easy to give in to the temptation. Once binary arithmetic was discarded with respect to formal and efficient causality, the temptation vanished. The continuity of probability not only erased the discrete distinctions among forms, but melded formal causality and efficient causality into the one continuous variable of probability. Final causality is identifiable in human artifacts and in a philosophy based on binary arithmetic. It serves no purpose in a philosophy based on the continuity arising from the inference of mathematical probability from material reality.

Conclusion: Regarding the Existence of God

Binary arithmetic was Aristotle’s basis for the distinction of substantial form and matter in solving the problem of the particular and the universal. The form was the intelligible principle which explained the composite, the particular substance. The composite was identified as the nature of the individual material entity. However, this implied a discrete distinction between the nature of the individual substance and its existence. One binary solution led to another binary problem: How do you explain the existence of the individual when its form, in association with matter, merely explains its nature? The Aristotelian solution lay in claiming there must be a being, outside of human experience, in which there was no discrete distinction between nature and existence. That being would be perfectly integral in itself. Thereby, it would be its own formal, efficient and final causes. Its integrity would be the fix needed to amend the dichotomy of the nature and existence of the entities within our experience.

Both the problem and its solution arise out of the mindset of binary arithmetic. The problem is to explain a real, discrete distinction between nature and existence in material entities. Its solution is God, an integral whole. In contrast, the problem does not arise in the philosophy of probability, which expands philosophical understanding to permit the concept of mathematical continuity. That philosophy allows the human epistemological inference of mathematical probability. Probability and its inference from material reality, do not require a dichotomy between formal and efficient causality. In that inference, expressed as amplitude, both form and existence are integral. There is no need of a God, an external source, to bind into a whole that which is already integral in itself.

In Aristotelian philosophy, it is said that there is only a logical distinction between God’s nature and God’s existence, whereas there is a real distinction of nature and existence in created entities. The philosophy of probability avoids the dichotomies arising out of Aristotelian binary arithmetic. In the philosophy of probability there is only a logical distinction between formal and efficient causality in material things. There is no real dichotomy for a God to resolve.

It is perfectly acceptable in thought to characterize material processes as mathematically random. For example, the roll of two dice is characterized as random such that their sum of seven is said to have a probability of 1/6. The equation of radioactive decay may be characterized as the probability function of the decay of a radioactive element. Wave equations in quantum mechanics may be characterized as probability functions. However, such valid mathematical characterizations do not attest to randomness and probability as being characteristics of material reality. Rather, such characterizations attest to mathematical randomness and probability as being characteristic of human knowledge in its limitations.
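The dice figure can be confirmed by enumerating the logical set itself (a sketch; the variable names are mine):

```python
from itertools import product
from fractions import Fraction

# The logical set of all ordered outcomes of two dice: 36 elements.
outcomes = list(product(range(1, 7), repeat=2))

# Fractional concentration of the element 'sum is seven' in the set.
p_seven = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))
print(p_seven)  # 1/6
```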

If randomness and probability were characteristic of material reality at any level, including the atomic and sub-atomic level, material reality would be inherently unintelligible in itself. Material reality would be inexplicable and as such inherently mysterious. Yet, to view material reality as inherently mysterious is superstition. Superstition denies causality by claiming that material results are a mystery in themselves, e.g. that they are materially random.

It is an erroneous interpretation to hold that quantum mechanics requires material reality to be random and probable in itself. Wave equations may be viewed as probability functions only in the same sense that the result of rolling dice is mathematically probable. That sense is in the suspension of human knowledge of material causality at the level of physical forces for the sake of a mathematical utility without the denial of material causality.

A commenter on a recent post at (ref. 1), was so enamored with the validity, utility and beauty of the mathematics of quantum mechanics that he declared, “This randomness is inherent in nature.” Indeed it is inherent in nature, i.e. in human nature in the limitations of the human knowledge of the measurable properties of material reality.

Material reality is not random in its nature. The nature of material reality, in light of the utility of application of the mathematics of probability or in light of perceiving a mathematical function as one of probability, is not a question within the scope of science or mathematics. The nature of material reality is always a question within philosophy. In contrast, the mathematical and scientific question is the suitability of specific mathematics in representing the relationships among the measurable properties of material reality including those properties, which can only be detected and measured with instruments.

Let it be noted that scientific knowledge cannot demonstrate the fundamental invalidity of human sensory knowledge and human intellectual knowledge because the validity of scientific knowledge depends on the fundamental validity of these.

It has been recognized since the time of Aristotle that the human intellect is extrinsically dependent in its activity upon a sensual phantasm, i.e. a composite of sense knowledge. This and all visualizations or imaginative representations are necessarily restricted to the scope of the senses, although the intellect is not. Consequently, science at the atomic and sub-atomic level cannot consist in an analysis of visual or imaginative simulations, which are confined to the scope of human sensation. Rather, the science consists in the mathematics, which identifies quantitative relationships among instrumental measurements. It would be a fool’s quest to attempt to determine a one-to-one correspondence between the science and an imaginative representation of the atomic and sub-atomic level, or to constrain the understanding of the science to such a representation (Ref. 2).

Remarkably, in an analogy of a wave function in quantum mechanics as a probability function which collapses into a quantum result, the physicist Stephen M. Barr did not choose an example of mathematical probability (Ref. 3). He could have proposed an analogy of mathematical probability simulated by flipping a coin. While the coin is rotating in the air after being flipped, it could be viewed as a probability function of heads of 50%, which collapses into a quantum result of heads, namely one, or tails, namely zero, upon coming to rest on the ground.

Instead, he chose an example where the meaning of probability is not mathematical, but qualitative.

Mathematical probability is the fractional concentration of an element in a logical set, e.g. heads has a fractional concentration of 50% in the logical set of two logical elements with the nominal identities of heads and tails. A coin is a material simile.

A completely unrelated meaning of the word, probability, is an individual’s personal lack of certitude of the truth of a statement. Examples: ‘I probably had eggs for breakfast in the past two weeks’ or ‘Jane will probably pass the French exam.’ These statements identify no set of elements or anything quantitative. Personal human certitude is qualitative. Yet, we are bent upon quantitatively rating the certitude with which we hold our personal judgments.

Barr succumbs to this penchant for quantifying personal certitude. He illustrates the collapse of a wave function in quantum mechanics with the seemingly objective quantitative statement:
“This is where the problem begins. It is a paradoxical (but entirely logical) fact that a probability only makes sense if it is the probability of something definite. For example, to say that Jane has a 70% chance of passing the French exam only means something if at some point she takes the exam and gets a definite grade. At that point, the probability of her passing no longer remains 70%, but suddenly jumps to 100% (if she passes) or 0% (if she fails). In other words, probabilities of events that lie in between 0 and 100% must at some point jump to 0 or 100% or else they meant nothing in the first place.”
Barr mistakenly thinks that probability, whether referring to mathematics or to human certitude, refers to coming into existence, to happening. In fact, both meanings are purely static. The one refers to the composition of mathematical sets, although its jargon may imply occurrence or outcome. The other refers to one’s opinion of the truth of a statement, which may be predictive. That Jane has a 70% chance, or will probably pass the French exam, obviously expresses the certitude of some human’s opinion, which has no objective measurement even if arrived at by some arbitrary algorithm.

Probability in mathematics is quantitative, but static. It is the fractional composition of logical sets. Probability in the sense of human certitude, like justice, is a quality. It cannot be measured because it is not material. This, however, does not diminish our penchant for quantifying everything (Ref. 4).

Barr’s identification of probability, as potential prior to its transition to actuality in an outcome, is due to taking the jargon of the mathematics of sets for the mathematics of sets itself. We say that the outcome of flipping a coin had a probability of 50% heads prior to flipping, which results in an outcome or actuality of 100% or 0%. What we mean to illustrate by such a material simulation is a purely static relationship involving the fractional concentration of the elements of logical sets. The result of the coin flip illustrates the formation or definition of a population of new sets of elements based on a source set of elements. In this case the source set is a set of two elements of different IDs. The newly defined population of sets consists of one set identical to the original set, or, if you wish, a population of any multiple of such sets.

Another illustration is defining a population of sets of three elements each, based on the probabilities of a source set of two elements of different nominal IDs, such as A and B. The population is identified by eight sets. One set is a set of three elements, A, at a probability (fractional concentration) of 12.5% in the newly defined population of sets. One set is a set of three elements, B, at a probability of 12.5%. Three sets are of two elements A and one element B, at a probability of 37.5%. Three sets are of two elements B and one element A, at a probability of 37.5%. The relationships are purely static. We may imagine the sets as being built by flipping a coin. Indeed, we use such jargon in discussing the mathematics of the relationship of sets. The flipping of a coin in the ‘building’ of the population of eight sets, or multiples thereof, is a material simulation of the purely logical concept of random selection. Random selection is the algorithm for defining the fractional concentrations of the population of eight new sets based on the probabilities of the source set. It is only jargon, satisfying to our sensual imagination, in which the definitions of the eight new sets in terms of the fractional concentration of their elements are viewed as involving a transition from potency to act or probability to outcome. The mathematics, in contrast to the analogical imaginative aid, is the logic of static, quantitative relationships among the source set and the defined population of eight new sets.
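The eight-set population described above can be enumerated statically, with no ‘flipping’ at all (a sketch; the names are mine):

```python
from itertools import product
from collections import Counter

# Every ordered set of three elements drawn from the source set {A, B}.
population = list(product("AB", repeat=3))

# Fractional concentration of each composition in the population of eight sets.
counts = Counter("".join(sorted(s)) for s in population)
fractions = {comp: n / len(population) for comp, n in counts.items()}
print(fractions)  # AAA and BBB at 12.5%; AAB and ABB at 37.5%
```

The relationships are defined by enumeration alone; ‘random selection’ enters only as the imaginative jargon for this static tabulation.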

Random selection, or random mutation, is not a material process. It is a logical concept within an algorithm, which defines a logical population of sets based on the probabilities of a logical source set.

It is a serious error to conflate mathematical probability with the certitude of human judgment. It is also a serious error to believe that either refers to coming into existence or to the transition from potency to act, which are subjects of philosophical inquiry.

Ref. 1 “When Randomness Becomes Superstition”

Ref. 2 “Random or Non-random, Math Option or Natural Principle?”

Ref. 3 “Does Quantum Physics Make It Easier to Believe in God?”

Ref. 4 “The Love of Quantification”

Richard Dawkins has extensively discussed arithmetic. The theme of The God Delusion is that there is an arithmetical solution to the improbability of evolution in a one-off event, namely gradualism, whereas there is no arithmetical solution to the improbability of God. Obviously, the ‘improbability’ of God cannot be solved by gradualism.

It is encouraging that Richard Dawkins is interested in mathematics. If he were to learn to correct his mistakes in math, he might do very well in re-educating those whom he has deceived in mathematics, science and philosophy due to his errors in arithmetic.

The following is a math quiz based on problems in arithmetic addressed by Richard Dawkins and his answers, whether implicit or explicit, in his public work. I present this as a helpful perspective in the delineation of Dawkins’ public errors in arithmetic.

1) What is the opposite of +1?
Correct Answer: -1
Student Dawkins: Zero. Let us then take the idea of a spectrum of probabilities seriously between extremes of opposite certainty. The spectrum is continuous, but can be represented by the following seven milestones along the way:
Strong positive, 100%; Short of 100%; Higher than 50%; 50%; Lower than 50%; Short of zero; Strong negative, 0% (p 50-51, Ref. 1).
Those, who aver that we cannot say anything about the truth of the statement, should refuse to place themselves anywhere in the spectrum of probabilities, i.e. of certitude. (p 51, Ref. 1)
Critique of Dawkins’ answer:
On page 51 of The God Delusion, Dawkins devotes a paragraph to discussing the fact that his spectrum of certitude, from positive to its negative opposite, does not accommodate ‘no opinion’. Yet, he fails to recognize what went so wrong that there is no place in his spectrum for ‘no opinion’. The reason there is no place is that he has identified a negative opinion as zero, rather than as -1. If he had identified a negative opinion as -1, which is the opposite of his +1 for a positive opinion, then ‘no opinion’ would have had a place in his spectrum of certitude at its midpoint of zero. Instead, Dawkins discusses the distinction between temporarily true zeros in practice and permanently true zeros in principle, neither of which is accommodated by his spectrum ‘between two extremes of opposite certainty’, in which the opposite extreme of positive is not negative, but a false zero.

2) Is probability in the sense of rating one’s personal certitude of the truth of a statement a synonym for mathematical probability, which is the fractional concentration of an element in a logical set? For example, is probability used univocally in these two concepts: (a) The probability of a hurricane’s assembling a Boeing 747 while sweeping through a scrapyard. (b) The probability of a multiple of three in the set of integers, one through six?
Correct Answer: No. The probability of a hurricane’s assembling a Boeing 747 does not identify, even implicitly, a set of elements, one of which is the assembling of a Boeing 747. Probability as a numerical rating of one’s personal certitude of the truth of such a proposition has nothing to do with mathematical probability. In contrast to the certitude of one’s opinion about the capacity of hurricanes to assemble 747’s, the probability of a multiple of 3 in the set of integers, one through six, namely 1/3, is entirely objective.
Student Dawkins: Probability as the rating of one’s personal certitude of the truth of a proposition, has the same spectrum of definition as mathematical probability, namely 0 to +1 (p 50, Ref. 1). The odds against assembling a fully functioning horse, beetle or ostrich by randomly shuffling its parts are up there in the 747 territory of the chance that a hurricane, sweeping through a scrapyard would have the luck to assemble a Boeing 747. (p 113, Ref. 1) There is only one meaning of probability, whether it is the probability of the existence of God, the probability of a hurricane’s assembling a Boeing 747, the probability of success of Darwinian evolution based on the random generation of mutations or the probability of seven among the mutations of the sum of the paired output of two generators of random numbers to the base, six.

3) In arithmetic, is there a distinction between the factors of a product and the parts of a sum? Is the probability of a series of probabilities, the product or the sum of the probabilities of the series?
Correct Answers: Yes; The product. The individual probabilities of the series are its factors.
Student Dawkins: Yes; The relationship of a series of probabilities is more easily assessed from the perspective of improbability. An improbability can be broken up into smaller pieces of improbability. Those, who insist that the probability of a series is the product of the probabilities of the series, don’t understand the power of accumulation. Only an obscurantist would point out that if a large piece of improbability can be broken up into smaller pieces of improbability as the parts of a sum, i.e. as parts of an accumulation, then it must be true that its complement, the corresponding small piece of probability, is concomitantly broken up into larger pieces of probability, where the larger pieces of probability are the parts, whose sum (accumulation) equals the small piece of probability. (p 121, Ref. 1).
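The correct answer to item 3 can be illustrated numerically (a sketch of the arithmetic; the sample series is mine, not Dawkins’):

```python
from math import prod

# A short series of independent probabilities.
series = [0.5, 0.2, 0.1]

# The probability of the whole series is the product of its factors...
p_series = prod(series)  # roughly 0.01

# ...not the sum of its parts; a sum treats probabilities as parts of an
# accumulation, and for a longer series it can even exceed 1.
print(p_series, sum(series))
```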

4) Jack and Jill go to a carnival. They view one gambling stand where for one dollar, the gambler can punch out one dot of a 100 dot card, where each dot is a hidden number from 00 to 99. The 100 numbers are randomly distributed among the dots of each card. If the gambler punches out the dot containing 00, he wins a kewpie doll. Later they view another stand where for one dollar, the gambler gets one red and one blue card, each with 10 dots. The hidden numbers 0 to 9 are randomly distributed among the dots of each card. If in one punch of each card, the gambler punches out red 0 and blue 0, he wins a kewpie doll. This second stand has an interesting twist, lacking in the first stand. A gambler, of course, may buy as many sets of one red card and one blue card as he pleases at one dollar per set. However, he need not pair up the cards to win a kewpie doll until after he punches all of the cards and examines the results.
(a) If a gambler buys one card from stand one and one pair of cards from stand two, what are his respective probabilities of winning a kewpie doll?
Correct Answer: The probability is 1/100 for both.
Student Dawkins: The probability of winning in one try at the first stand is 1/100. At the second stand the probability of winning is smeared out into probabilities of 1/10 for each of the two tries.
(b) How many dollars’ worth of cards must a gambler buy from each stand to reach a level of probability of roughly 50% that he will win at least one kewpie doll?
Correct Answers: $69 worth or 69 cards from stand one yields a probability of 50.0%. $12 worth or 24 cards (12 sets) from stand two yields a probability of 51.5%. (A probability closer to 50% for the second stand is not conveniently defined.)
Student Dawkins: A maximum of $50 and 50 cards from stand one yields a probability of 50%. A maximum of $49 and 49 cards from stand one yields a probability of 49%. A maximum of $14 and 28 cards from stand two yields a probability of 49%. (A probability closer to 50% for the second stand is not conveniently defined.)
(c) In the case described in (b), is the probability of winning greater at stand two?
Correct Answer: No
Student Dawkins: Yes
(d) In the case described in (b), is winning more efficient or less efficient in terms of dollars and in terms of total cards at the second carnival stand?
Correct Answer: More efficient. The second stand is based on two sub-stages of Darwinian evolution compared to the first stand, which is based on one overall stage of Darwinian evolution. The gradualism of sub-stages is more efficient in the number of random mutations while having no effect on the probability of evolutionary success. Efficiency is seen in the lower input of $12 or 24 random mutations compared to $69 or 69 random mutations to produce the same output, namely the probability of success of roughly 50%.
Student Dawkins: Efficiency is irrelevant. It’s all about probability. The gradualism of stand two breaks up the improbability of stand one into smaller pieces of improbability. (p 121, Ref. 1)
This problem is an illustration of two mutation sites of ten mutations each. I analyzed these relationships in Ref. 2, using an illustration of three mutation sites of six mutations each. In that illustration, I introduced two other modifications. One modification was that the winning number was unknown to the gambler. The other was that the gambler could choose the specific numbers on which to bet, so his tries or mutations were non-random. With the latter deviation from the Darwinian algorithm, the probability of winning a kewpie doll required a maximum of 216 non-random tries for the first stand and a maximum of 18 non-random tries for the second stand. The gradualism of the second stand smears out the luck required by the first stand. The increased probability of winning a kewpie doll at the second stand is due to the fact that one need not get his luck in one big dollop, as one does at the first stand. He can get it in dribs and drabs. It takes, respectively at the two stands, maxima of 216 and 18 tries for a probability of 100% of winning a kewpie doll. Consequently and respectively, it would take maxima of 125 and 15 tries to achieve a probability of 57.9% of winning a kewpie doll. Whether one compares 216 tries for the first stand to 18 tries for the second stand or 125 tries to 15 tries, the probability of winning a kewpie doll is greater at the second stand because it takes fewer tries. (See also the Wikipedia explanation, which is in agreement with Student Dawkins, Ref. 3)
Another example of extreme improbability is the combination lock of a bank vault. A bank robber could get lucky and hit upon the combination by chance. In practice the lock is designed with enough improbability to make this tantamount to impossible. But imagine a poorly designed lock. When each dial approaches its correct setting the vault door opens another chink. The burglar would home in on the jackpot in no time. ‘In no time’ indicates greater probability than that of his opening the well-designed lock. Any distinction between probability of success and efficiency in time is irrelevant. Also, any distinction between the probability of success and efficiency in tries, whether the tries are random mutations or non-random mutations, is irrelevant. (p 122, Ref. 1)

5) If packages of 4 marbles each are based on a density of 1 blue marble per 2 marbles, how many blue marbles will a package selected at random contain?
Correct Answer: 2 blue marbles
Student Dawkins: 2 blue marbles.

6) If packages of 4 marbles each are based on a probability of blue marbles of 1/2, how many blue marbles will a package selected at random contain?
Correct Answer: Any number from 0 to 4 blue marbles.
Student Dawkins: 2 blue marbles. This conclusion is so surprising, I’ll say it again: 2 blue marbles. My calculation would predict that with the odds of success at 1 to 1, each package of 4 marbles would contain 2 blue marbles. (p 138, Ref. 1)
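The contrast between questions 5 and 6 can be checked numerically. The following is a minimal sketch (mine, not the author's; the function name is my own) of the binomial distribution behind the correct answer to question 6, using only Python's standard library:

```python
from math import comb

def p_blue(k, n=4, p=0.5):
    """Probability that a package of n marbles, each independently blue
    with probability p, contains exactly k blue marbles (binomial)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

dist = {k: p_blue(k) for k in range(5)}
# Every count from 0 to 4 has a non-zero probability; exactly 2 blue
# marbles occurs with probability 6/16 = 0.375, i.e. in well under
# half of the packages, contrary to Student Dawkins' answer.
```

A density of 1 blue per 2 marbles (question 5) fixes the composition of every package; a probability of 1/2 (question 6) fixes only the distribution over packages.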

1. The God Delusion
2. minute 4:25

Where the total number of generated mutations, x, is random, and n is the number of different mutations, the Probability of success, P, equals 1- ((n – 1)/n)^x.
For n = 100 and x = 69, P = 50.0%
For n = 10 and x = 12, P = 71.7%. For P^2 = 51.5%, the sum of x = 24
Where the total number of generated mutations, x, is non-random, and n is the number of different mutations, the Probability of success, P, equals x/n.
For n = 100 and x = 50, P = 50%
For n = 100 and x = 49, P = 49%
For n = 10 and x = 7, P = 70.0%. For P^2 = 49%, the sum of x = 14
For n = 216 and x = 216, P = 100%
For n = 6 and x = 6, P = 100%. For P^3 = 100%, the sum of x = 18
For n = 216 and x = 125, P = 57.9%
For n = 6 and x = 5, P = 83.3%. For P^3 = 57.9%, the sum of x = 15
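The two formulas above can be verified directly. A short sketch (the function names are mine, not the author's) reproducing the listed values:

```python
def p_random(n, x):
    # Probability of at least one success in x random tries,
    # each try succeeding with probability 1/n.
    return 1 - ((n - 1) / n) ** x

def p_nonrandom(n, x):
    # Probability of success after x distinct, non-repeating tries
    # over n possibilities.
    return x / n

# Random tries: 1 - (99/100)**69 is roughly 50.0%,
# and (1 - (9/10)**12)**2 is roughly 51.5%.
# Non-random tries: 50/100 = 50%, (7/10)**2 = 49%,
# and (5/6)**3 = 125/216, roughly 57.9%.
```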

Science is the determination of the mathematical relationships inherent in the measurable properties of material reality. Oftentimes, associated with the mathematical relationships of science is a visual narrative pleasing to the human imagination. Such a narrative is essentially extrinsic to the science.

The Relationship of the Intellect and the Imagination

From the time of Aristotle it has been recognized that human intellectual knowledge is immaterial, an abstraction from material reality. However, Aristotle noted that the action of the intellect is dependent upon a phantasm, a composite of the sense knowledge obtained through the animal senses of man. Thus the human intellect, though immaterial in its nature and activity, is extrinsically dependent upon the sensual phantasm in order to function and to be aware. Without such a sensual phantasm, the human intellect is inactive.

The scope of the senses, including the sensual phantasm, is limited. Material reality, however, has properties beyond the scope of the senses, which properties can be measured instrumentally. Material properties smaller than this scope have been designated micro, and those larger than this scope, macro. Although the human intellect is unlimited, the human imagination is limited to the mid-range scope, which is that of human sensation. The dilemma arises when the sensual imagination attempts to constrain intellectual thought to its level of sensation.

In the Context of Science

A simple scientific relationship is that among the measured values of pressure, volume and temperature of a gas. Gas, confined to a tank, will increase in pressure when its temperature is increased. The science is the mathematical relationship among the instrumentally measured values of pressure, volume and temperature. Associated with the mathematics is a popular visual narrative. The gas is composed of molecules, depicted as tiny ‘micro-sized’ balls in motion. Pressure is depicted as the balls striking the internal walls of the tank. When the temperature is increased the balls move faster, so they strike the walls with greater force, thereby increasing the pressure. This is a harmless visual narrative at the level of human sensation and pleasing to the human imagination. In contrast, the mathematical relationship among the measured values of pressure, volume and temperature is the science.

However, there are scientific relationships, which are not accompanied by visual narratives satisfying to the human imagination. Some measured properties of light are related by equations, which are accompanied by a visual narrative of particles of light. Other properties of light are related by equations which are accompanied by a visual narrative of waves of light. Yet, the human imagination finds the particle and wave narratives visually incompatible in spite of the validity of the mathematical relationships among the measured values in the two different results.

The Classic Dilemma

Collimated light is unidirectional. When it is passed through a slit it produces a pattern of intensity, which may be viewed as a pattern of particles or quanta or photons of light distributed about a norm. The distribution may also be viewed as a continuous curve symmetrical about a single peak (See figure 5 at the end of reference 1).

When collimated light is passed through two adjacent slits the result is a pattern of intensity, which may be viewed as a pattern of particles or quanta of light distributed about several norms. The distribution may be viewed as a continuous curve of a series of individually symmetrical peaks (See figures 3 and 4 at the end of reference 1). This multi-peak pattern is best described mathematically, i.e. scientifically, as due to the interference of two waves of light emanating from point sources as they emerge from the slits.

The purpose of the experimental setup in reference 1 is to track light, quantum by quantum. In our human imagination a single isolated particle, i.e. a discrete particle, cannot act as a continuous wave.

This same purpose was that of the experiment described in reference 2. The rationale of this experiment is that interference requires two waves, whereas a single particle or quantum of light cannot behave as two waves. (This rationale suspends the validity of the quantum/wave mathematics applied at the micro-level and imposes a dictum of the sensory-level human imagination: Particles in the human imagination are discrete particles, not continuous waves.) The experiment proposes that collimated light as an individual quantum is in transit along one of two paths when a de-collimating device or beam-splitter is introduced or not introduced farther along where the two paths exit collimation and cross. The human imagination demands that the beam-splitter cannot change the particle into two waves once the particle is in transit along one or the other path. Nevertheless, the presence of the beam-splitter introduces wave interference. The human imagination reaches the conclusion, ‘Then one decides the photon shall have travelled by one route or by both routes after it has already done the travel.’ This is an imaginative impossibility.

The experiment in reference 1 is clearer than that of reference 2. When the collimation of the light is maintained even though the light is tracked photon by photon, the intensity pattern is of a single peak (figure 5) by detectors D3 and D4 identified in figure 2. When the collimated light is released from collimation by mixing the paths, it displays wave interference in the cumulative pattern of the individually tracked photons in figures 3 and 4 from detectors D1 and D2 of figure 2, respectively.

Another experiment uses two telescopes to maintain the collimation of light. Without the telescopes the light from the two sources interferes as waves. If the light, maintained as collimated by the two telescopes, is permitted to mix after exiting the telescopes, it interferes as waves. (See reference 3).

The dilemma of the imagination is, ‘How can a single quantum be ‘mixed’ with nothing by a beam-splitter and thereby act as a wave?’

Attempts to Placate the Imagination within the Context of Physics

Some physicists pacify their imaginations by viewing the fundamental equation as that of the wave and view the detection of a quantum as 1 and the non-detection as 0, the two possible outcomes of a probability event. The wave function is viewed as an expression of probability, the outcome of which is yes (one) or no (zero). An outcome is said to be the collapse of the wave function. As we shake a pair of dice in our cupped hands, the probability of the sum, seven, is one-sixth, but when the dice have landed the probability, which was one-sixth, has collapsed to 1 (seven) or 0 (non-seven). Unlike the eleven discrete probabilities, which are the sums of two dice, a wave as a probability function may be viewed as indefinite in probability between 0 and 1. In this view, the imagination states that the quantum does not exist except in the potency (probability) of a wave. The wave collapses due to an observation. As a result of observation, the quantum comes into existence or doesn’t. Upon observation, the probability function no longer exists. Thus a quantum exists only if observed.

The author of reference 1 concludes, ‘Ho-hum another experimental proof of quantum mechanics’. Another expression rejecting the need to placate the human imagination has been expressed as ‘Shut up and calculate!’ (See reference 4).

The need to placate the human imagination has been carried to two extremes. One extreme is to claim that observation creates material reality, such as ‘The moon is demonstrably not there, unless someone is looking.’ (See reference 5). The other extreme is to claim that there are as many universes as there are possible material outcomes. A quantum we observe is valid within our universe, but not universally valid across multi-verses. What exists in our universe in our observation is characteristic of our universe. All other possibilities exist in a multitude of universes encompassed in what appears in our universe to be a probability function, the wave (reference 6).

The middle course is to recognize that the compatibility of the discrete and the continuous is a sticky wicket, not to the intellect, but to the sensual imagination. The photon vs. wave phenomenon of light is just one example where the human imagination tends to deny the compatibility between the discrete and the continuous.

The Central Problem, Imagining the Concept of Continuity

The most fundamental concept in mathematics is the counting of discrete elements. Material objects are easily counted. Another concept in the basic development of mathematics is the fractionation of a discrete element. Because mathematics is an abstraction from material reality, it is intellectually possible not only to fractionate an element into two parts, but to fractionate it into an unlimited number of parts. You can’t do that with an apple or any material thing, even of an apparently uniform structure. Our sensual imagination cannot keep up with our intellect. Our intellect has conceived the abstract idea of continuity.

An independent variable is continuous over its domain if its value can be specified to any arbitrary precision.

Suppose I had a building with a frontage of thirty meters. I could delineate two contiguous parallel parking spaces of ten meters each demarcated with painted end stripes twenty centimeters wide and having one stripe at the midpoint. That would be a 20.6 meter, continuous segment, of the frontage. However, if I asked a painter to paint the three twenty centimeter stripes to a precision of 10^(-11) meters, he would think I was crazy. He would note that the diameter of the electron cloud of the hydrogen atom was 10^(-10) meters and I was asking for a painted line to the precision of a tenth of the diameter of a hydrogen atom. He would point out that the abstract concept of mathematical continuity to such a precision did not apply to the context of the level of human sensation and specifically to the painting of lines in a street. He wouldn’t deny the validity of the mathematical concept of continuity. The painter would merely note that the human imagination did not have the power to concretize the abstract concepts of mathematics in the manner and context, which I proposed.

One of my favorite apparent incompatibilities of a mathematical concept with material reality, as viewed by the human imagination, involves the probability of the sequence of a deck of playing cards, namely 1 in 8.06 x 10^67. Intellectually we are content with emulating a ‘random selection’ from a set of that size by shuffling the deck. Yet a set of that size can only be logical, far beyond the scope of the human imagination. It would be foolish to claim that the shuffled result was the collapse of the fundamental reality, the probability function, into a materially observable event, where the collapse of the probability function was essentially due to the observation. It would likewise be foolish to claim that the shuffled result was a material event in my universe, but that the entire set of possible permutations of a deck of cards must exist in some other material universe(s), rather than solely in human logic. In both views, shuffling brings one material event into observable existence out of ‘real’ probabilities or ‘real’ worlds equaling 8.06 x 10^67. These views require material reality to conform to human thought in its subjection to the human imagination. The proper view is to require human judgments about reality to conform to reality, while also recognizing the immateriality of human thought and logic, which frees the human intellect from subordination to the human imagination, i.e. subordination solely to sense knowledge.

Note that adding two jokers to the deck would increase the ‘real’ probabilities and the number of ‘real’ worlds by a factor greater than 1000 to 2.3 x 10^71. The mass of the earth is a mere 5.97 x 10^36 nanograms.
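The two factorial counts can be verified directly; a brief sketch (mine, not the author's), using Python's standard library:

```python
from math import factorial

deck = factorial(52)         # distinct sequences of a 52-card deck
with_jokers = factorial(54)  # sequences after adding two jokers

# deck is roughly 8.07 x 10**67; the two jokers multiply the count
# of sequences by exactly 54 * 53 = 2862, a factor greater than 1000.
```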

The mathematics of probability concerns the identification of logical sets solely according to their fractional composition of logical elements. When this mathematics is applied to material sets, the material properties of the elements are completely irrelevant. The IDs of the elements are purely nominal. The elements have no relevant material properties. Material elements may be used only in emulation of the mathematics, while ignoring their material properties. Thus, the mathematics may be used as a tool of ignorance of the material properties of the elements.

To the extent that a wave function is considered a probability function, it renders the mathematics a tool of ignorance, i.e. a tool to compensate for ignorance of the material properties underlying the phenomena being studied. It is a serious error to take probability as fundamental reality or as a characteristic of reality, rather than as a mathematical tool of abstract logic.

1. Excerpts from “A Delayed Choice Quantum Eraser” by Yoon-Ho Kim et al., Phys. Rev. Lett. 84:1–5 (2000), with commentary by Ross Rhodes.
2. Wheeler’s delayed choice experiment by Alain Aspect

3. Wheeler’s Classic delayed choice experiment
4. Copenhagen interpretation
5. Video of the faith and science conference in 2011, at time 1:09:45
6. Chapter 11, Quantum Time, by Sean Carroll

Mathematical probability is the fractional concentration of an element in a logical set. The word, probability, has the aura of the rationality of mathematics. Its synonym, chance, connotes a lack of rationale. The perfect deal is defined as four hands, each a flush of thirteen cards. Its probability is its fractional concentration and equals the probability of every other variant four-hand deal, namely one divided by the total number of variant four-hand deals. The probability is 4.474 x 10^(-28).
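The stated figure follows if the four hands are treated as unordered, so that the total number of variant four-hand deals is 52! divided by (13!)^4 and by 4!. A sketch (mine, not the author's) reproducing the figure:

```python
from math import factorial

# Count of variant four-hand deals, treating the four 13-card hands
# as unordered (divide the multinomial count by 4!).
deals = factorial(52) // (factorial(13) ** 4 * factorial(4))

# The perfect deal is one such variant, so its probability is its
# fractional concentration in the set: roughly 4.474 x 10**-28.
p_perfect = 1 / deals
```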

The following is an excerpt from the essay, “What is modern in the new atheism? – the inference of probability”, which was printed in the Delta Epsilon Sigma Journal, Volume LVII, Issue 2 (2012) and Volume LVIII, Issue 1 (2013). Links to the pdf files of the entire journal issues are:

Imposing an arbitrary extra-mathematical numerical limit on probability

This brings us to the most common error shared not only by Dawkins and his critics, but by many others. It is a variant of the argument of ‘the perfect deal’, which, due to its low probability, cannot be explained by chance. The argument goes by many names. Dawkins calls it ‘the problem of improbability’, by which he claims that the improbability of evolution in a one-off event is ‘far beyond the reach of chance’. The argument is also that of ‘irreducible complexity’: most complexities are explained by mathematical probability, but not those of a probability close to zero. The argument is also called the anthropic argument. In this form the argument claims that the combined probability of the various factors necessary for life on earth is so close to zero that the combination of factors cannot be due to chance.

The argument is based on the distinction between the connotations of the synonyms, chance and probability. Its general form is: The probability of this outcome is so close to zero that this outcome cannot be due to chance. The argument implies that chance is some numerical limit imposed from outside the mathematics of probability.

The argument is a mathematical self-contradiction. It states that the fractional concentration of this element in this set is so close to zero that it cannot be the fractional concentration of this element in this set. The problem of improbability, under whatever name, is a fiction. Dawkins’ thesis in The God Delusion is that there is no solution to the fictitious problem of the improbability of God, whereas Darwin’s theory solves the fictitious problem of the improbability of evolution in a one-off event.

It is a fiction. There is no finite limit to the number of elements in a set. Consequently, there is neither a lower limit greater than zero for the fractional concentration of a specific element in a set nor an upper limit less than one for improbability. For a set of n unique elements, the probability, 1/n, cannot be too close to zero to be a valid probability, nor can the improbability, 1 – (1/n), be too close to one. All numerical values of probability are of equal validity, irrespective of how close they are to zero.

Both testimony and evidence for a jury point to a past material event, which cannot be experimentally repeated. The evidence must have a material component as well as an intelligible component. It is essentially the intelligible component of the evidence which points back to the intelligible component of the past material event.

Although material evidence may be inherently intelligible, the extent to which it is usefully intelligible depends upon the state of scientific knowledge and engineering art. Consider a blood sample at a crime scene, which both the defense and the prosecution stipulate is from the perpetrator of the crime. Sixty years ago the sample could not have been analyzed for its DNA profile, even though the sample inherently contained that intelligibility. Suppose the sample was the same blood type as that of the defendant, who was convicted on this and other testimony and evidence. Suppose DNA analysis would have demonstrated that the blood sample from the crime scene was not that of the defendant. The defendant would have been exonerated by stipulation, except for the state of scientific knowledge and engineering art at that time.

Notice that the evidence consisted not only of the blood sample from the crime scene, but also a biological sample from the defendant. The intelligence was inherent in the material evidence, but it required human intellectual acuity to recognize the intelligence. It was the intelligible component of the biological sample of the defendant which did or failed to point back to the intelligible component of the blood sample.

It is generally agreed that the DNA complement of the nucleus of a biological cell is the entire biological code of the individual of the species. This is the basis of cloning. As a code, the DNA is intelligible, but it is a determinate code. The code is determinate biologically and can be deciphered experimentally, e.g. as the three base code of messenger RNA (mRNA) for amino acid incorporation into a polypeptide. There is no arbitrariness in the code because it is material. The lack of arbitrariness in the code makes it eminently amenable to deciphering it by experimentation.

It is the determinate intelligence in material reality that makes experimental science possible. In contrast, human intelligence is a grade above the intelligibility of material things. The codes employed by human intelligence are not determinate but are arbitrary. House, maison and casa are arbitrary codes for the same concept as are dog, chien and perro for another concept.

It was the universality of human concepts, including those which apply to material things, in contrast to the particularity of material things, which led Aristotle and eventually the entirety of western civilization to recognize (1) that material things are composed of a principle of intelligibility and a principle of particularity, (2) that in humans the principle of intelligibility is immaterial (Whereas intellectual concepts such as dog are universal, sense knowledge is limited to individual material dogs) and (3) that humans have the power to perceive the intelligible natures of material things because of the immaterial nature of each human’s principle of animation. This is in contrast to the principle of animation of animals, which only have sense knowledge of the particular.

This is not only in accord with, but explains the very meaning of evidence both in the jury box and in the laboratory. In the laboratory, it is not simply data that are accumulated. What is achieved is the intellectual recognition of the mathematical relationships discoverable in data. These logical relationships are inherent in the intelligible principles which are constitutive components of material things. In contrast, humans have an immaterial principle of animation, which renders them intellectual agents. This immaterial principle of animation enables humans to comprehend the universal, which includes the mathematical relationships determined through the experimental measurement of the properties of material things. It is the immateriality of the principle of animation of humans which is the basis of human knowledge and communication. It is also the principle which enables the possibility of argument.

A full argument is a series of individual arguments. A premise of any individual argument may be an immaterial fact completely independent of material reality. This is possible even though all humanly known facts are traceable in origin to human knowledge of the immaterial intelligent component of a material thing. This ultimate dependence of human intellectual knowledge upon material reality is an extrinsic dependence. Every explanation of the material must itself be immaterial to be an explanation. This immateriality, this intrinsic independence of material reality, which characterizes the premises evoked during the course of argumentation, is quite evident in mathematics, but it is also apparent in other areas such as science and philosophy.

In contrast to the character of jury box evidence, some website com box authors adopt the prejudice that evidence is material which points as material to some other, which is also simply material. They do not admit that the universality of human knowledge requires an agent of intelligence, which is immaterial. They do not admit that the existence of material things, which are indifferent by nature to existence, requires for the explanation of their existence an immaterial being, which explains its own existence. They see no distinction between the intelligible, determinate character of the particular material thing and the universality of its human conceptualization. They freely admit the existence of particular, determinate intelligible codes in material things, such as the codes of DNA and mRNA, but they deny that the arbitrary codes of human invention require an intelligent, immaterial agent. They skipped Western Thought 101, with its confluence of Greek Aristotelian philosophy and the Judeo-Christian revelation. It was the recognition of the Logos, both in Greek philosophy and the Judeo-Christian revelation, which identified material reality as fundamentally intelligible. This led to modern science, which is fully integrated with that philosophy, which is fully integrated with that revelation.

It was Aristotle who first recognized by analysis of our everyday experience of reality that material things had to be composites of a principle of intelligibility and a principle of particularity. He also noted (1) that humans were not simply materially sensitive agents, but intellectual agents, which required that their principle of animation had to be immaterial and (2) that there had to be a transcendent being, who was pure act, in order to explain the existence of the things of our daily experience, which are indifferent to existence. Temporally parallel to these developments in Greek thought was the gradual revelation to the Jews. This was notable in the concepts of individual intelligence and free will, but most of all in the existence of pure act. Revelation expressed the concept of pure act more clearly and eloquently than did Aristotle. Pure Act declared, “I am who am” (Ex 3:14), culminating in “Before Abraham came to be, I am” (Jn 8:58).

Today, without the recognition of the immaterial components of human animation and human knowledge and without the recognition of the intelligible component of evidence, there could be no basis for argument in the com boxes. Com box disputation of that to which evidence points would be simply the clash of material entities, if indeed it could even be recognized as that. It would be meaningless.

I challenge anyone, who views human knowledge as purely material, such as electronic activity in the brain, to explain the functioning (and the purpose) of argumentation as well as to explain the character of evidence.