The distinction between random and non-random is a distinction in mathematical logic. Is it also a distinction in nature, a distinction between natural principles?

Consider a single mutation site at which fifty-two different mutations are possible. An analogy would be a playing card.
(1) Let each of the fifty-two different mutations be generated deliberately and one mutation be selected randomly, discarding the rest.
(2) Let fifty-two mutations be generated randomly and the selection of a specified mutation, if generated, be deliberate, discarding the rest.

These two algorithms are grossly the same. They present a proliferation of mutations followed by its reduction to a single mutation. They differ in whether the proliferation is identified as random or non-random and whether the reduction to a single mutation is identified as random or non-random.

Apply these two mathematical algorithms analogically to playing cards.

For the evolution of the Ace of Spades, the first algorithm would begin with a deck of fifty-two cards followed by selecting one card at random from the deck. If it is the Ace of Spades, it is kept. If not, it is discarded. The probability of evolutionary success would be 1/52 = 1.9%.

For the evolution of the Ace of Spades by the second algorithm, fifty-two decks of cards would be used to select randomly one card from each deck. The resulting pool of fifty-two cards would be sorted, discarding all cards except for copies of the Ace of Spades, if any. The probability of evolutionary success would be 1 – (51/52)^52 = 63.6%.

The probability of success of the second algorithm can be increased by increasing the number of random mutations generated. If 118 mutations are generated randomly, the probability of this pool’s containing at least one copy of the Ace of Spades is 90%.
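The card percentages above follow from the standard at-least-one-copy formula, P = 1 – ((n – 1)/n)^x. The short Python sketch below is offered only as a computational aid (the function name is my own, not part of the analogy):

```python
# Probability that a pool of x random draws from n equally likely
# alternatives contains at least one copy of a specified alternative.
def p_at_least_one(n: int, x: int) -> float:
    return 1 - ((n - 1) / n) ** x

# First algorithm: one random draw from a single 52-card deck.
print(round(1 / 52, 3))                   # 0.019 (1.9%)

# Second algorithm: 52 random draws, keeping any Ace of Spades.
print(round(p_at_least_one(52, 52), 3))   # 0.636 (63.6%)

# With 118 random draws the probability rises to about 90%.
print(round(p_at_least_one(52, 118), 3))  # 0.899 (90%)
```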

Notice that of the two processes, the generation of mutations and their differential survival, one is arbitrarily represented as mathematically random while the other is arbitrarily represented as mathematically non-random.

Also, notice that in the material analogy of the mathematics, the analog of randomness is human ignorance and lack of knowledgeable control. In its materiality, ‘random selection’ of a playing card is a natural, scientifically delineable, non-random, material process.

In the mathematics of probability, random selection is solely a logical relationship of logical elements of logical sets. It is only analogically applied to material elements and sets of material elements. The IDs of the elements are purely nominal. Measurable properties, which are the subject of science, and which are implicitly associated with the IDs, are completely irrelevant to the mathematical relationships. A set of seven elements consisting of four sheep and three roofing nails has the exact same mathematical relationships of randomness and probability as a set consisting of four elephants and three sodium ions.

In the logic of the mathematics of probability, the elementary composition of sets is arbitrary. The logic does not apply to material things as such because the IDs of elements and the IDs of sets can only be nominal due to the logical relationships defined by the mathematics. This is in contrast to the logic of the syllogism in which the elementary composition of sets is not arbitrary. The logic of the syllogism does apply to material things, but only if the material things are not arbitrarily assigned as elements to sets, but are assigned as elements to sets according to their natural properties, which properties are irrelevant to the mathematics of probability. The logic of the syllogism applies to material things, if the IDs are natural rather than nominal.

Charles Darwin published The Origin of Species in 1859. Meiosis, which is essential to the detailed modern scientific knowledge of genetic inheritance, was discovered in 1876 by Oscar Hertwig. In the interim, Gregor Mendel applied mathematical probability, as a tool compensating for ignorance of the details of genetics, to the inheritance of flower color in peas. The conclusion was not that the material processes of genetics are random. The conclusion was that the material processes involved binary division of genetic material in parents and its recombination in their offspring. The binary division and recombination are now known in scientific detail as meiosis and fertilization.

The mathematics of randomness and probability, which can be applied only analogically to material, serves as a math of the gaps in the scientific knowledge of the details of material processes.

Consider the following two propositions. Can both be accepted as compatible, as applying the mathematics of randomness and probability optionally to one process or the other? Can either be rejected as scientifically untenable in principle, without rejecting the other by that same principle?
(I) The generation of biological mutations is random, while their differential survival is due to natural, non-random, scientifically delineable, material processes.
(II) The generation of biological mutations is due to natural, non-random, scientifically delineable, material processes, while their differential survival is random.

My detailed answers are contained in the essay, “The Imposition of Belief by Government”, Delta Epsilon Sigma Journal, Vol. LIII, p 44-53 (2008). My answers are also readily inferred from the context in which I have presented the questions in this essay.

In its recent decision in Derwin v. State U., the State Supreme Court ordered the State University to award Charles Derwin the degree of doctor of philosophy. Derwin admitted that his case, which he lost in all the lower courts, depended upon one sarcastic statement made in writing by Prof. Stickler of the faculty panel that heard his defense of his graduate thesis. The bylaws of the University in awarding the degree of doctor of philosophy require unanimous approval of the faculty panel by written yes or no voting. The members of the panel are free to offer verbal or written criticism during and after the defense, but must mark their ballots simply yes or no. However, in casting the lone negative vote, Prof. Stickler wrote as an addendum to his ‘No’, “If Derwin and his major advisor were to submit an article for publication reporting the experimental results of Derwin’s thesis, I suggest they submit it to The Journal of Random Results or to The Journal of Darwinian Evolution.”

Derwin’s legal team argued that Stickler violated the university bylaws by adding the written addendum, as well as academic decorum by its sarcasm. Further, and most importantly, they argued that Prof. Stickler exposed his own incompetence to judge the thesis by his attempt to belittle Darwinian Evolution. By this, Stickler had disqualified himself as a judge on the thesis panel. The State Supreme Court agreed and ordered the State University to award the degree in accord with the university bylaws requiring unanimous approval by the thesis panel. The Court noted that the university bylaws allow for a panel of six to eight faculty members. The panel that heard Derwin’s defense of his thesis consisted of seven, including Prof. Stickler. The Court also ruled that the academic-level arguments presented in the lower courts by both sides regarding ‘random results’ in general, and ‘random mutation’ in the particular case of Darwinian evolution, were simply of academic interest and irrelevant to the legal case.

For their academic interest those arguments are presented here.

Prof. Stickler stated that random experimental results are of no scientific value and that Derwin conceded the results he reported in his thesis could be characterized as random. Stickler argued that even those who contend that genetic mutation is random claim that Darwinian evolution is non-random, and therefore scientific, even though their claim is erroneous. Stickler attributed the following quote to Emeritus Prof. Richard Dawkins of Oxford University as his response to the question, “Could you explain the meaning of non-random?” Dawkins replied, “Of course, I could. It’s my life’s work. There is random genetic variation and non-random survival and non-random reproduction. . . That is quintessentially non-random. . . . Darwinian evolution is a non-random process. . . . It (evolution) is the opposite of a random process.” (Ref. 1)

Stickler stated that Dawkins’ argument, that Darwinian evolution is non-random and therefore scientific, does not hold water. He noted that the pool of genetic mutants subjected to natural selection in Darwinian evolution is formed by random mutation. The pool’s containing the mutant capable of surviving natural selection is a matter of probability. Consequently, the success of natural selection cannot be 100%. Evolutionary success is equal to the probability of the presence of at least one copy of the survivable mutant in the pool subjected to natural selection and is therefore random.

In his rebuttal, Derwin agreed with Stickler that Darwinian evolution was indeed characterized by probability and randomness. However, the universal scientific acceptance of Darwinian evolution indicates that random results are indeed scientific, which he noted was the pertinent issue in the case.

Stickler’s counter argument was to note that Darwinian evolution is based on data consisting of a series of cycles, each cycle consisting of the proliferation of genetically variant forms and their diminishment to a single form. Darwinian evolution explains such cycles by the hypothesis of the random generation of genetic variants and their reduction to singularity by natural differential survival. Stickler claimed the data could also be explained by what he called ‘The inverse Darwinian Theory of Evolution’. The inverse theory explains the same cyclic data of the standard theory, but as the non-random, natural generation of genetic variants by scientifically identifiable material processes and the reduction of this pool of genetic variants to singularity by random, differential survival.

Deciding which hypothesis, if either, was valid would require some ingenuity beyond the stipulated data of the proliferation and diminishment of variant genetic mutations. He noted, however, that the inverse hypothesis would be rejected a priori by the claim that we know at least some of the scientific factors affecting differential survival, so it could not be hypothesized that differential survival was random. This, Stickler claimed, demonstrates that randomness and probability cannot be proffered as a scientific explanation. He said, “If randomness is rejected a priori as scientifically untenable as an explanation of variant survival because differential survival is due to scientific material processes, then randomness must be rejected a priori as scientifically untenable as an explanation for the generation of genetic variants for the same reason. If randomness is sauce for the goose of variant generation, randomness must potentially be sauce for the gander of variant survival. In fact it is sauce, i.e. the mathematics of randomness and probability is a tool of ignorance to cover a gap in scientific knowledge. It was the absence of scientific knowledge of genetics in the mid-nineteenth century that made it seem plausible at that time to propose ignorance of the scientific knowledge of material processes, that is, to propose random changes, as part and parcel of a scientific theory.”

In response, Derwin noted that quantum mechanics, perhaps the most basic of the sciences, is recognized as founded on probability and therefore randomness. (Ref.2).

1) (minute 38:56)


Amaryllis is a form of lily and as such its petals are in sets of three. It has two such sets, one fore and one aft. One set may be roughly identified by direction as north, southeast and southwest; the other set as south, northeast and northwest. These sets are natural, measurable properties of the amaryllis plant and are useful in the science of taxonomy.

The mathematics of sets may be characteristically embodied in nature and when so, it forms part of the base of science. However, that is not to say that the entirety of the mathematics of sets is characteristically embodied in nature. One important exception is the mathematics of probability.

Probability is the fractional concentration of an element in a logical set. The mathematics of probability is centered in the formation of other logical sets from a source set, where all sets are identified by their probabilities, i.e. their fractional compositions.

In discussing the mathematics of probability where the source set is one of two subsets of three unique elements, it would not be inappropriate to refer to the amaryllis flower as a visual aid in discussing what is essentially logical and in no way characteristic of the nature of the amaryllis flower.

In randomly forming sets of two petals where the source set is the amaryllis flower, what are the probabilities of the population of sets defined in terms of fore and aft petals? The population defined is of four sets of two petals each. One set consists of two fore petals. One set consists of two aft petals. Each of the other two sets of the population consists of one fore petal and one aft petal. The probability of sets of two fore or two aft petals in the population of sets is 25%, while the probability of a set of one fore and one aft petal is 50%.

One could imagine placing one set of six petals from an amaryllis flower in each of four hundred pairs of hats and blindly selecting a petal from each of two paired hats. One would expect the distribution of paired sets of petals to be roughly one hundred of two fore petals, one hundred of two aft petals and two hundred of one fore and one aft petal. This would represent a material simulation of the purely logical concept of probability.
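The hat experiment can be mimicked in a few lines of Python. The sketch below is only a computational aid, and it assumes each hat of a pair holds a full set of six petals, three fore and three aft, so that each blind draw is fore or aft with equal probability:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
hat = ['fore'] * 3 + ['aft'] * 3
counts = {'two fore': 0, 'two aft': 0, 'one of each': 0}
for _ in range(400):
    a, b = random.choice(hat), random.choice(hat)
    if a == b == 'fore':
        counts['two fore'] += 1
    elif a == b == 'aft':
        counts['two aft'] += 1
    else:
        counts['one of each'] += 1
print(counts)  # roughly 100 / 100 / 200, matching the 25% / 25% / 50% logic
```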

What is the probability of a set of eighteen randomly selected amaryllis petals containing at least one ‘north’ petal? The answer, P, equals 1 – ((n – 1)/n)^x, where n = 6 and x = 18, i.e. P = 1 – (5/6)^18 = 96.2%.

Notice that these questions in the logic of probability have nothing to do with amaryllis flowers or petals or their manipulation. The petals and their manipulation, imagined or actual, are merely aids in a discussion of pure logic. It is materially impossible to select any material thing at random from a set of material things. Selection is always explicable in terms of the material forces involved; thus, it is always non-random. It is by convention that we equate human ignorance of the details of the material process of selection with mathematical randomness.

We say that the probability of one sperm fertilizing a mammalian egg is one in millions. What we mean is that the fractional concentration of any one sperm is one in millions and that we are ignorant of the detailed non-random physical, chemical and biological processes by which one sperm of the natural set fertilizes the egg.

It is, of course, permissible to use the mathematics of probability in many instances when for any number of reasons we are ignorant of the scientific explanation of material processes. However, we must be constantly aware that the mathematics of probability characterizes human ignorance and not material reality when it is used as a tool to compensate for a lack of knowledge.

The mathematics of probability is an exercise in logic, unrelated to the nature of material things and their measurable properties. In contrast, science is the determination of the mathematical relationships among the measurable properties of things, which properties are characteristic of the nature of material things.

In my previous post (Ref. 1), I noted that John Lennox concurs with Richard Dawkins that replacing a single cycle of Darwinian evolution with a series of sub-cycles ‘drastically increases the probabilities’ (Ref. 2).

However, Lennox is not in full agreement with Dawkins’ view. Lennox’ primary criticism of Dawkins’ interpretation of the biological version of Darwinian evolution is that it is a circular argument: “And strangest of all, the very information that the mechanisms are supposed to produce is apparently already contained somewhere within the organism, whose genesis he claims to be simulating by the process.” Lennox’ misinterpretation of Dawkins’ view lies in the phrase ‘within the organism’ (Ref. 3). For Dawkins and Darwin, natural selection is essentially inanimate and external to the evolving organism. It is natural selection which defines the target.

Dawkins has identified the generation of biological forms (at whatever level, such as that of the genome or the morphology of a bird’s wing) as random, while natural selection is non-random, thereby rendering, in Dawkins’ judgment, evolution overall as ‘quintessentially non-random’ (Ref. 4). Thus, consistently with the algorithm of Darwinian evolution, it is random mutation, which is ‘a blind, mindless, unguided process’, not natural selection, as Lennox alleges (Ref. 3).

Natural selection is entirely external to the organism and essentially physical, not biological. Natural selection may be described as biologically blind, mindless and unguided, while being physically sighted, intelligent and guided. Natural selection is an ecological niche, defined in purely inanimate, physical and chemical terms.

In the biological simulation of the Darwinian algorithm, life is plastic by means of random mutation. This plasticity is allowed survivable, discrete expressions only within the molds of environmental constraints. These biological expressions appear to us as biological norms when in fact they are environmental norms.

Typically we think in terms of biological norms, such as mice, amoebae, algae and tulips. According to Darwinian evolution and natural selection in particular, these are not biological norms. What is biological is simply potency working through random generation (mutation). What appears to us to be biological norms are really environmental, i.e. physical, norms, evidenced in biological terms. We are aware of the existence of the inanimate environmental niches by the existence of the biological forms, once randomly generated, which now fill them. The environmental norms are physically defined and physically formed. They are merely filled with living matter, which coincidentally expresses a biological form compatible with the determining environment.

In the biological simulation of the Darwinian algorithm, natural selection is completely external to the organism and may indeed be viewed as an existent target, where the culling of ‘failed’ mutations is one aspect of the overall physical processes of natural selection. The Darwinian algorithm, in itself and as presented by Dawkins, is not circular because the target is not within the organism as alleged by Lennox (Ref. 3).

It should be noted that Dawkins is in error in claiming that the overall algorithm of Darwinian evolution is ‘quintessentially non-random’ simply because the last part of the algorithm, namely natural selection is non-random. Though natural selection is non-random in itself, the probability of success of natural selection depends upon the probability of the presence of the survivable mutant in the pool of mutants randomly generated. Thus, the end result of the Darwinian algorithm is random. Darwinian evolution is quintessentially random. Darwinian random mutation is the generation of random numbers as Dawkins has illustrated in his excellent metaphor of the combination lock (Ref. 5).


2. “God’s Undertaker”, page 165
3. “God’s Undertaker”, page 167
4. Minute 39:10
5. “The God Delusion”, page 122

My previous post (Ref. 1), may have given the false impression that no one agreed with Richard Dawkins’ explanation of smearing out the luck of Darwinian evolution. This post hopefully corrects that impression.

Dawkins has stated “. . . natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so.” (Ref. 2) What does this mean? We can tell from his illustrations (Ref. 3 – 4).

It would seem that Dawkins is saying that the probability of the generation of a given number by a random number generator is increased by the introduction of natural selection. This doesn’t fly. Natural selection doesn’t generate mutations. It culls mutations. It permits only copies of one particular mutation to survive. It doesn’t affect the generation of the survivable mutation, from which generation its probability arises.

Consider a single mutation site defining six different mutations; the six faces of a die, for example, define six different mutations. In this example of a total of six defined mutations, the probability of the random generation of at least one copy of the number 6, for a total of one randomly generated mutation, is 1/6 = 16.7%, with or without natural selection. Similarly, the probability of the random generation of at least one copy of 6, for a total of six randomly generated mutations, is 1 – (5/6)^6 = 66.5%, with or without natural selection. In Darwinian evolution natural selection has no effect on probability (Ref. 1). It merely eliminates superfluous mutations, whether the superfluous mutations have been generated randomly or non-randomly.
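These die values can be computed directly; the Python sketch below is merely a computational aid:

```python
# Probability of at least one copy of a specified face in x random rolls
# of an n-faced die (here n = 6 defined mutations).
def p_at_least_one(n: int, x: int) -> float:
    return 1 - ((n - 1) / n) ** x

print(round(p_at_least_one(6, 1), 3))  # 0.167 (16.7%)
print(round(p_at_least_one(6, 6), 3))  # 0.665 (66.5%)
```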

It is apparent that Dawkins is not assessing the role of natural selection, but analyzing the replacement of a single cycle of random mutation and natural selection with several sub-cycles. In Ref. 3, he compares a single cycle affecting three mutation sites of six mutations each to three sub-cycles, each affecting a single site. The replacement of a single cycle with a series of sub-cycles has no effect on probability. Rather, it increases the efficiency of random mutation. Yet, Dawkins does not identify this as efficiency in mutation due to sub-cycles. He calls it ‘smearing out the luck’, as if the probability of success changed from 1/216 to 1/18. Dawkins is comparing 216 non-random mutations to 18 non-random mutations at a probability of success of 100% (Ref. 4).

Some of Dawkins’ reviewers have agreed with him. The Wikipedia review says the comparison is between probabilities of 1/216 and 1/18 (Ref. 5). More remarkably, in referring to a set of twenty-eight mutation sites of twenty-seven mutations each, John Lennox cites a probability of 10^(-31) and one billion mutations for a single cycle compared to the probability and the number of mutations for a series of 28 sub-cycles (Ref. 6). A computer simulation of the sub-cycles reached a probability of 1 in a maximum of 43 mutations per sub-cycle. In accord with Dawkins, Lennox refers to this as drastically increasing the probabilities. Superficially, Lennox’ comparison appears to imply an increase in probability due to the introduction of sub-cycles.

However, the Darwinian algorithm with sub-cycles, as well as this example of it, does not increase probability. In fact, the comparison in Lennox’ example implies efficiency in the number of mutations due to sub-cycling with no change to the probability. A more appropriate comparison would have been at a probability of 90% for both the single overall cycle and for the series of 28 sub-cycles. This would compare 2.3 x (27)^28, i.e. roughly 2.7 x 10^40, mutations for the single cycle to 4144 mutations for the series of 28 sub-cycles at the same probability of success, namely 90%. The 4144 mutations are 148 mutations for each of 28 sub-cycles, where the probability of success for each sub-cycle is 99.6%. This yields a probability of 90% for the series of 28.
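These figures can be checked directly. The sketch below (plain Python, a computational aid only) reproduces the per-sub-cycle and overall probabilities, and shows that the 2.3 factor is ln 10:

```python
import math

n, sites = 27, 28  # 28 mutation sites, 27 possible mutations each

# Per-sub-cycle probability of success with 148 random mutations per site.
p_sub = 1 - ((n - 1) / n) ** 148
print(round(p_sub, 3))           # 0.996

# Overall probability for the series of 28 independent sub-cycles.
print(round(p_sub ** sites, 2))  # 0.9

# Mutations needed for 90% success in a single cycle over all 27^28
# alternatives: roughly ln(10) * 27^28, i.e. the 2.3 x 27^28 of the text.
print(f"{math.log(10) * n ** sites:.1e}")  # about 2.8e+40
```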

Contrasting non-random vs random mutation within the algorithm of Darwinian evolution for a single cycle also shows that natural selection has no effect upon probability. For non-random mutation, one mutation yields a probability of 1/n. This increases linearly to a probability of 1 as the number of non-random mutations reaches n. Natural selection merely culls the superfluous mutants. Similarly, random mutation starts out at a probability of 1/n with one mutation and asymptotically approaches 1 as the number of random mutations increases. When the number of random mutations is, respectively, n, 2.3n, 4.6n and 11.5n, the respective probabilities are 63%, 90%, 99% and 99.999%. Here too, natural selection merely culls the superfluous mutants.

Another common error in the evaluation of Darwinian evolution is to attribute temporal and material constraints to random mutation. Due to the fact that Darwinian evolution is strictly a logical algorithm of random mutation and natural selection, it is not subject to any temporal or material constraints. It is material simulations, not the logical algorithm, which can exceed such constraints. Also, in a material simulation, there is no increase in time or material due to a random mutation compared to a non-random mutation.

There are 52 factorial or 8.06 x 10^67 different sequences of 52 elements. The inverse of this is the probability of any sequence. In a material simulation, how many decks of cards and how long does it take to generate randomly any sequence, if shuffling for five seconds is granted to be a random selection? The answer is one deck and five seconds. Granted this, how many decks of cards and how long would it take to generate a pool of decks of cards containing at least one copy of a particular sequence at a probability of 90%? The answer is 2.3 x 8.06 x 10^67 decks and 5 x 2.3 x 8.06 x 10^67 seconds. If we apply the Darwinian algorithm of a single cycle of random mutation and natural selection, these paired numbers of decks and seconds are required for a probability of success of evolution of 90%. This exceeds by far any practical temporal and material limits. However, if we are content with any value of probability, then we would be content with one random mutation. Natural selection does not affect probability. It merely culls superfluous, randomly generated mutants. If we trust success to just one random mutation, then there is no need for natural selection, while the material and temporal requirements of the simulation are insignificant, namely one deck and five seconds.
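Granting the stipulations above, the numbers of decks and seconds can be computed directly; the sketch below is only a computational aid:

```python
import math

# Number of distinct orderings of a deck of 52 cards.
sequences = math.factorial(52)
print(f"{sequences:.3e}")  # 8.066e+67

# Decks needed for a 90% chance that at least one shuffled deck shows a
# specified sequence; the 2.3 factor in the text is ln(10).
decks_90 = math.log(10) * sequences
print(f"{decks_90:.1e}")      # about 1.9e+68 decks
print(f"{5 * decks_90:.1e}")  # about 9.3e+68 seconds at 5 seconds per shuffle
```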

Indeed, we must be content with any and every value of probability. I have argued that no value of probability represents a ‘problem of improbability’. To claim that ‘the probability of this outcome is so close to zero that it could not be due to chance’ is a self-contradiction. Of course, I am not claiming that probability is to be accepted as an explanation. Rather, if probability is accepted in any instance as an explanation, then in no instance can it be rejected as an explanation on the basis of its numerical value, irrespective of how close it is to zero. (Ref. 7). Similarly, the acceptance of probability as an explanation is not bolstered by a value of probability closer to 1.

(2) “The God Delusion”, page 121
(3) “The God Delusion”, page 122
(4) minute 4:25
(5) Growing Up in the Universe, Part 3
(6) “God’s Undertaker Has Science Buried God?” Page 165-167

According to Richard Dawkins the problem which any theory of life must solve is how to escape from chance (Ref. 1), how to solve the problem of improbability (Ref. 2).

Darwinian evolution consists not simply in a single cycle of random mutation and natural selection, but the gradualism of a series of such cycles. According to Dawkins, the problem of improbability of a single cycle is solved by the gradualism due to cycles, each of which is terminated by natural selection. Although Dawkins has declared that the meaning of non-random is his life’s work (Ref 3), it is really gradualism which is his key to explaining the randomness and improbability of Darwinian evolution.

What follows are analyses of four explanations Dawkins has offered to elucidate the role of gradualism in Darwinian evolution. The explanations focus on: (1) a numerical illustration involving three mutation sites of six mutations each, (2) the breakup of a large piece of improbability into smaller pieces, (3) the analysis of the vector of evolution into its component vectors of improbability and gradualism and (4) the gradualism of mutant forms in an ordered sequence approaching mathematical continuity.

Dawkins presents these explanations as in the mainstream of the scientific acceptance of Darwinian evolution. He is not attempting to re-interpret Darwinian evolution or to present any departure from it. His explanations are sufficiently explicit that they are a clear testimony to the coherent mathematics underlying Darwinian evolution in spite of Dawkins’ errors in explaining the mathematics.

The Role of Gradualism, a Numerical Illustration (Ref 4)

A set of three mutation sites of six mutations each defines 216 different mutations, which is 6 x 6 x 6. The 216 mutations may be viewed as an ordered sequence consisting of the initial mutation, the final mutation and 214 ordered intermediate mutations. Let a pool of one copy of each mutation be generated and subjected to natural selection. Natural selection would cull all but one mutation, the final mutation. The pool of 216 would have been non-randomly generated. The probability of success of natural selection would be 100%.

Let this single cycle of mutation and natural selection be replaced by the gradualism of three cycles, where each cycle affects one of the mutation sites independently of the other two. In each cycle a pool of six non-randomly generated mutations would be subjected to natural selection. A total of 18 mutations would be non-randomly generated. The probability of success of natural selection for each cycle would be 100% and the overall probability of success of natural selection would be 100%, i.e. 100% x 100% x 100%.

The gradualism of sub-cycles would generate a total of 18 mutations consisting of the two endpoints, but only 16 of the ordered 214 intermediates defined by evolution overall. Gradualism introduces large gaps, not just missing links, in the actually generated spectrum of ordered mutations in comparison to the ordered spectrum defined by the mutation sites. Gradualism introduces an efficiency factor of 216/18 = 12 in non-random mutations without any change in the probability of success of natural selection, which is 100% for both the overall cycle and the series of sub-cycles of gradualism.

Although such is Dawkins’ illustration of gradualism, it is not his interpretation. Dawkins refers to the non-random mutations as ‘tries’, implying that they are random mutations. He refers to a one in 216 chance compared to three chances of one in 6. He calls gradualism smearing out the luck into cumulative small dribs and drabs of luck. Dawkins erroneously believes he has illustrated random mutation rather than non-random mutation, and he erroneously claims to have illustrated not an increase in efficiency in the number of mutations due to gradualism, but an increase in the probability of success of natural selection, i.e. the probability of success of Darwinian evolution, due to gradualism.

If the pools of mutations subjected to natural selection in the illustration are generated by random mutation, a similar efficiency in the number of mutations is achieved, without any change in the probability of success of natural selection. A pool of 19 randomly generated mutations in each of three sub-cycles would yield a probability of success of natural selection of 96.9% for each cycle and an overall probability of success of natural selection of 90.9%. For a single cycle of random mutation involving all three mutation sites, a pool of 516 random mutations would yield a probability of success of natural selection of 90.9%. The efficiency factor in random mutations would be 516/57 = 9 due to gradualism with no change in the probability of success of natural selection.
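These random-mutation figures, too, can be verified with a short sketch (plain Python, a computational aid only):

```python
# Probability that x random mutations over n alternatives include at least
# one copy of the specified survivable mutation.
def p_at_least_one(n: int, x: int) -> float:
    return 1 - ((n - 1) / n) ** x

# Three sub-cycles of 19 random mutations each (n = 6 per site).
p_sub = p_at_least_one(6, 19)
print(round(p_sub, 3), round(p_sub ** 3, 3))  # 0.969 0.909

# A single cycle over all three sites (n = 216) needs 516 random mutations.
print(round(p_at_least_one(216, 516), 3))     # 0.909

# Efficiency factor in mutations at the same overall probability (~90.9%).
print(round(516 / (3 * 19)))                  # 9
```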

The probability, P, of at least one copy of the mutation surviving natural selection in a pool of x randomly generated mutations with a base of n different mutations is: P = 1 – ((n -1)/n)^x.

The Breakup of a Piece of Improbability (Ref. 5)

Dawkins claims that natural selection, which terminates each cycle of random mutation and natural selection, increases the probability of evolutionary success by forming a series of cycles. The overall improbability of evolutionary success in a single cycle is said to be broken up into smaller pieces of improbability.

To break up a large piece of something into smaller pieces implies that the smaller pieces add up to the larger piece.

Probability and improbability are complements of one. They add up to one. The overall probability of a series of probabilities is the product of the probabilities forming the series, not their sum. Each value of probability in a series, if less than one, is greater than the overall probability.

The probability of any face of a red die is 1/6 = 16.7%. The improbability is 5/6 = 83.3%. The same is true of a green die. The probability of any combination of the faces of the two dice, one red and one green, is 1/36 = 2.8%. Its improbability is 35/36 = 97.2%. Suppose it were mathematically valid to say that rolling the dice individually in a series, rather than both together, ‘breaks the improbability of 97.2% up into two smaller pieces of improbability of 83.3% each’. It would then be mathematically valid to say that rolling the dice in series rather than both together, ‘breaks up the small piece of probability of 2.8% into two bigger pieces of probability of 16.7% each’. Both statements are nonsense. They confuse multiplication and division with addition and subtraction. Yet, Richard Dawkins claims that those who insist that the overall probability of a series is the product of the probabilities don’t understand ‘the power of accumulation’, i.e. the power of addition. He claims that ‘natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each piece is slightly improbable, but not prohibitively so.’ (Ref. 5) In other words, gradualism increases the probability of evolutionary success.

The probability of the outcome of the roll of three dice, one red, one green and one blue, is 1/216. If each die is rolled separately, the probability of each roll is 1/6 and the overall probability is 1/216. If the probabilities of the three individual rolls were cumulative, as Dawkins indicates (Ref. 5), the overall probability of the series would be 1/6 + 1/6 + 1/6 = 1/2. Yet, Dawkins implies the accumulation would be a probability of 1/18 (Ref. 4). Similarly, Wikipedia states, ‘The probability of unlocking the combination, in three separate phases, falls to one in eighteen.’ (Ref. 6). The non-random mutations of three individual sites of six mutations each, e.g. those of three dice, add up to 18. The fraction, 1/18, is not the probability of the series of three individual probabilities. It is simply the mathematical inverse of the 18 total mutations.
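The dice arithmetic in the two preceding paragraphs can be verified exactly with Python’s fractions module:

```python
from fractions import Fraction

p = Fraction(1, 6)   # probability of a specified face of one die

# Two dice: the overall probability is the product of the probabilities.
print(p * p)                              # 1/36
# The two 'pieces of improbability' do not add up to 35/36:
print(Fraction(5, 6) + Fraction(5, 6))    # 5/3

# Three dice rolled in series: still the product.
print(p * p * p)                          # 1/216
# Treating the series as cumulative (additive) gives:
print(p + p + p)                          # 1/2
# 1/18 is merely the inverse of the 18 total mutations (3 sites x 6):
print(Fraction(1, 3 * 6))                 # 1/18
```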

The Analysis of the Vector of Evolution (Ref. 7)

Dawkins proposes a parable of Darwinian evolution, ‘Climbing Mt. Improbable’. In the parable, evolution is a vector slope, the sum of a vertical vector, improbability, and a horizontal vector, gradualism. If gradualism is zero, then the vector of evolution is solely a vector of improbability. In the parable, the role of gradualism in Darwinian evolution is to change the slope of the vector of evolution from infinity, when gradualism equals zero, to some finite value, when gradualism is greater than zero. Dawkins implies that gradualism in the parable decreases the improbability. It does not. Both in Dawkins’ parable and in Darwinian evolution, the overall improbability is unaffected by gradualism, whether or not the vectors of evolution and gradualism are incremental. Also, improbability is not a vector. Consequently, Dawkins’ metaphor of Darwinian evolution as a vector slope, equaling the sum of its horizontal and vertical vectors, is irrelevant both to improbability and to Darwinian evolution.

The Spectrum of Ordered Mutations as Approaching Mathematical Continuity (Ref. 8)

In Darwinian evolution, each cycle of random mutation and natural selection defines a finite set of different mutations, n, which is the base of random numbers generation. The pool of x generated random numbers is subjected to natural selection, i.e. the pool is subjected to a filter at a discrimination ratio of 1/n, which culls all mutations except copies of the one mutation which survives natural selection. The set of n different mutations is an ordered set, where the initial mutation is the lowest and the mutation, surviving natural selection, is the highest mutation in the ordered sequence.
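A cycle of this kind can be sketched as a simulation, assuming the filter simply culls every mutation that is not a copy of the survivor (the function and variable names below are mine):

```python
import random

def cycle(n, x, target, rng):
    # One cycle: generate a pool of x random mutations on a base of n,
    # then apply natural selection as a filter at a discrimination ratio
    # of 1/n, culling all mutations except copies of the target.
    pool = [rng.randrange(n) for _ in range(x)]
    return [m for m in pool if m == target]

# Estimate the success rate over many trials; it should approach
# 1 - ((n - 1)/n)**x, i.e. about 63.6% for n = x = 52.
rng = random.Random(0)
trials = 100_000
hits = sum(bool(cycle(52, 52, 0, rng)) for _ in range(trials))
print(hits / trials)   # ~0.636
```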

Dawkins notes that, if all of the intermediates between the lowest and the highest mutation in a biological evolutionary sequence survived, the ordered sequence would be recognized as a sequence approaching mathematical continuity (Ref. 8). Furthermore, Dawkins claims that all of the intermediates have been biologically and randomly generated, but are typically extinct. Man and the pig have a common evolutionary ancestor. ‘But for the extinction of the intermediates which connect humans to the ancestor we share with pigs (it pursued its shrew-like existence 85 million years ago in the shadow of the dinosaurs), and but for the extinction of the intermediates that connect the same ancestor to modern pigs, there would be no clear separation between Homo sapiens and Sus scrofa.’ (Ref. 8).

Dawkins would be right within the context of Darwinian evolution, except for one feature of Darwinian evolution which he demonstrated in Reference 4. Gradualism in Darwinian evolution is highly efficient in the generation of random mutations precisely because it does not generate most of the intermediates. Due to the efficiency of gradualism, as Dawkins has shown, there are gaps in the actually generated set of intermediates in comparison to the mathematically defined spectrum of ordered mutations. Most of the mathematically defined intermediates, which connect humans to the ancestor we share with pigs, are not absent due to extinction; they are absent because they were never biologically generated, thanks to the efficiency of the gradualism of Darwinian evolution. Darwinian evolution is not characterized by missing links in the fossil record. In its mathematical definition, it is characterized by gaping discontinuities in the sequence of ordered mutations actually generated in comparison to the sequence of ordered mutations defined, but not generated.

In 1991 (Ref. 4), Dawkins demonstrated that there are discontinuities in the spectrum of generated mutants due to gradualism. In 2011 (Ref. 8), he claimed that gradualism ensures the generation of the complete spectrum, and that any discontinuities are due to extinction, not to a lack of generation.


Dawkins does not understand the mathematics of gradualism or its role in Darwinian evolution. Yet, understanding the meaning of random/non-random, which he has identified as his life’s work, is integral to understanding the mathematics of and the role of gradualism in Darwinian evolution. The role of gradualism in Darwinian evolution is to increase the efficiency of random mutation. It has no effect on probability.

1. Page 120, “The God Delusion”
2. Page 121, “The God Delusion”
3. Minute 38:55
4. Minute 4:25
5. Page 121, “The God Delusion”
6. Growing Up in the Universe, Part 3
7. Page 121-2, “The God Delusion”

Science is the determination of the mathematical relationships inherent in the measurable properties of material reality. Oftentimes, associated with the mathematical relationships of science is a visual narrative pleasing to the human imagination. Such a narrative is essentially extrinsic to the science.

The Relationship of the Intellect and the Imagination

From the time of Aristotle it has been recognized that human intellectual knowledge is immaterial, an abstraction from material reality. However, Aristotle noted that the action of the intellect is dependent upon a phantasm, a composite of the sense knowledge obtained through the animal senses of man. Thus the human intellect, though immaterial in its nature and activity, is extrinsically dependent upon the sensual phantasm in order to function and to be aware. Without such a sensual phantasm, the human intellect is inactive.

The scope of the senses, including the sensual phantasm, is limited. Material reality, however, has properties beyond the scope of the senses, properties which can be measured instrumentally. Material properties smaller than this scope have been designated ‘micro’, and those larger than this scope, ‘macro’. Although the human intellect is unlimited, the human imagination is limited to the mid-range scope, which is that of human sensation. The dilemma arises when the sensual imagination attempts to constrain intellectual thought to its level of sensation.

In the Context of Science

A simple scientific relationship is that among the measured values of pressure, volume and temperature of a gas. Gas confined to the unchanging volume of a tank will increase in pressure when its temperature is increased, and vice versa. The science is the mathematical relationship among the instrumentally measured values of pressure, volume and temperature. Associated with the mathematics is a popular visual narrative. The gas is composed of molecules, depicted as tiny ‘micro-sized’ balls in motion. Pressure is depicted as the balls striking the internal walls of the tank. When the temperature is increased the balls move faster, so they strike the walls with greater force, thereby increasing the pressure. This is a harmless visual narrative at the level of human sensation and pleasing to the human imagination. In contrast, the mathematical relationship among the measured values of pressure, volume and temperature is the science.
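The mathematical relationship itself can be stated without any molecular narrative. A minimal sketch, assuming an ideal gas at fixed volume (Gay-Lussac’s law), with illustrative values:

```python
# Gay-Lussac's law: at constant volume, P1/T1 = P2/T2 (T in kelvin).
# The numbers below are illustrative assumptions, not measurements.
def pressure_after_heating(p1, t1, t2):
    return p1 * t2 / t1

# Heating a sealed tank from 300 K to 450 K raises 100 kPa to 150 kPa.
print(pressure_after_heating(100.0, 300.0, 450.0))   # 150.0
```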

However, there are scientific relationships which are not accompanied by visual narratives satisfying to the human imagination. Some measured properties of light are related by equations which are accompanied by a visual narrative of particles of light. Other properties of light are related by equations which are accompanied by a visual narrative of waves of light. Yet the human imagination finds the particle and wave narratives visually incompatible, in spite of the validity of the mathematical relationships among the measured values in the two different results.

The Classic Dilemma

Collimated light is unidirectional. When it is passed through a slit it produces a pattern of intensity, which may be viewed as a pattern of particles or quanta or photons of light distributed about a norm. The distribution may also be viewed as a continuous curve symmetrical about a single peak (See figure 5 at the end of reference 1).

When collimated light is passed through two adjacent slits the result is a pattern of intensity, which may be viewed as a pattern of particles or quanta of light distributed about several norms. The distribution may be viewed as a continuous curve of a series of individually symmetrical peaks (See figures 3 and 4 at the end of reference 1). This multi-peak pattern is best described mathematically, i.e. scientifically, as due to the interference of two waves of light emanating from point sources as they emerge from the slits.
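The multi-peak pattern described here is conventionally modeled as a single-slit envelope modulating a two-slit interference term. A sketch under textbook assumptions (the slit width, separation, and wavelength below are illustrative, not taken from reference 1):

```python
import math

def intensity(theta, a=1e-5, d=5e-5, lam=5e-7):
    # Two-slit intensity: single-slit (sinc^2) envelope times a cos^2
    # interference term. Slit width a, separation d, and wavelength lam
    # are illustrative values in meters; theta is the angle in radians.
    alpha = math.pi * a * math.sin(theta) / lam
    beta = math.pi * d * math.sin(theta) / lam
    envelope = 1.0 if alpha == 0 else (math.sin(alpha) / alpha) ** 2
    return envelope * math.cos(beta) ** 2

print(intensity(0.0))   # 1.0 (the central maximum)
```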

The purpose of the experimental setup in reference 1 is to track light, quantum by quantum. In our human imagination a single isolated particle, i.e. a discrete particle, cannot act as a continuous wave.

This same purpose was that of the experiment described in reference 2. The rationale of this experiment is that interference requires two waves, whereas a single particle or quantum of light cannot behave as two waves. (This rationale suspends the validity of the quantum/wave mathematics applied at the micro-level and imposes a dictum of the sensory-level human imagination: Particles in the human imagination are discrete particles, not continuous waves.) The experiment proposes that collimated light as an individual quantum is in transit along one of two paths when a de-collimating device or beam-splitter is introduced or not introduced farther along where the two paths exit collimation and cross. The human imagination demands that the beam-splitter cannot change the particle into two waves once the particle is in transit along one or the other path. Nevertheless, the presence of the beam-splitter introduces wave interference. The human imagination reaches the conclusion, ‘Then one decides the photon shall have travelled by one route or by both routes after it has already done the travel.’ This is an imaginative impossibility.

The experiment in reference 1 is clearer than that of reference 2. When the collimation of the light is maintained, even though the light is tracked photon by photon, the intensity pattern is that of a single peak (figure 5), as recorded by detectors D3 and D4 identified in figure 2. When the collimated light is released from collimation by mixing the paths, it displays wave interference in the cumulative pattern of the individually tracked photons in figures 3 and 4, from detectors D1 and D2 of figure 2, respectively.

Another experiment uses two telescopes to maintain the collimation of light. Without the telescopes, the light from the two sources interferes as waves. Even when the light is maintained as collimated by the two telescopes, if it is permitted to mix after exiting the telescopes, it interferes as waves. (See reference 3.)

The dilemma of the imagination is, ‘How can a single quantum be ‘mixed’ with nothing by a beam-splitter and thereby act as a wave?’

Attempts to Placate the Imagination within the Context of Physics

Some physicists pacify their imaginations by viewing the fundamental equation as that of the wave, and by viewing the detection of a quantum as 1 and its non-detection as 0, the two possible outcomes of a probability event. The wave function is viewed as an expression of probability, the outcome of which is yes, one, or no, zero. An outcome is said to be the collapse of the wave function. As we shake a pair of dice in our cupped hands, the probability of the sum, seven, is one-sixth, but when the dice have landed the probability, which was one-sixth, has collapsed to 1 (seven) or 0 (non-seven). Unlike the eleven discrete probabilities, which are the sums of two dice, a wave as a probability function may be viewed as indefinite in probability between 0 and 1. In this view, the imagination states that the quantum does not exist except in the potency (probability) of a wave. The wave collapses due to an observation. As a result of observation, the quantum comes into existence or doesn’t. Upon observation, the probability function no longer exists. Thus a quantum exists only if observed.
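The dice analogy can be made exact by enumeration; before the roll is observed, each of the eleven discrete sums has a definite probability:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of a red die and a green die.
outcomes = list(product(range(1, 7), repeat=2))

# Before the roll is observed, the sum 'seven' has probability 1/6;
# once observed, that probability 'collapses' to 1 or 0.
sevens = [o for o in outcomes if sum(o) == 7]
print(Fraction(len(sevens), len(outcomes)))   # 1/6

# The eleven discrete sums, 2 through 12, each with a definite count:
dist = {s: sum(1 for o in outcomes if sum(o) == s) for s in range(2, 13)}
print(len(dist), sum(dist.values()))          # 11 36
```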

The author of reference 1 concludes, ‘Ho-hum, another experimental proof of quantum mechanics’. Another expression rejecting the need to placate the human imagination is ‘Shut up and calculate!’ (See reference 4.)

The need to placate the human imagination has been carried to two extremes. One extreme is to claim that observation creates material reality, such as ‘The moon is demonstrably not there, unless someone is looking.’ (See reference 5). The other extreme is to claim that there are as many universes as there are possible material outcomes. A quantum we observe is valid within our universe, but not universally valid across multi-verses. What exists in our universe in our observation is characteristic of our universe. All other possibilities exist in a multitude of universes encompassed in what appears in our universe to be a probability function, the wave (reference 6).

The middle course is to recognize that the compatibility of the discrete and the continuous is a sticky wicket, not to the intellect, but to the sensual imagination. The photon vs. wave phenomenon of light is just one example where the human imagination tends to deny the compatibility between the discrete and the continuous.

The Central Problem, Imagining the Concept of Continuity

The most fundamental concept in mathematics is the counting of discrete elements. Material objects are easily counted. Another concept in the basic development of mathematics is the fractionation of a discrete element. Because mathematics is an abstraction from material reality, it is intellectually possible not only to fractionate an element into two parts, it is possible to fractionate it into an unlimited number of parts. You can’t do that with an apple or any material thing even of an apparently uniform structure. Our sensual imagination cannot keep up with our intellect. Our intellect has conceived the abstract idea of continuity.

An independent variable is continuous over its domain if its value can be specified to any arbitrary precision.

Suppose I had a building with a frontage of thirty meters. I could delineate two contiguous parallel parking spaces of ten meters each demarcated with painted end stripes twenty centimeters wide and having one stripe at the midpoint. That would be a 20.6 meter, continuous segment, of the frontage. However, if I asked a painter to paint the three twenty centimeter stripes to a precision of 10^(-11) meters, he would think I was crazy. He would note that the diameter of the electron cloud of the hydrogen atom was 10^(-10) meters and I was asking for a painted line to the precision of a tenth of the diameter of a hydrogen atom. He would point out that the abstract concept of mathematical continuity to such a precision did not apply to the context of the level of human sensation and specifically to the painting of lines in a street. He wouldn’t deny the validity of the mathematical concept of continuity. The painter would merely note that the human imagination did not have the power to concretize the abstract concepts of mathematics in the manner and context, which I proposed.

One of my favorite apparent incompatibilities of a mathematical concept with material reality, as viewed by the human imagination, involves the probability of the sequence of a deck of playing cards, namely 1 in 8.06 x 10^67. Intellectually we are content with emulating a ‘random selection’ from a set of that size by shuffling the deck. Yet a set of that size can only be logical, far beyond the scope of the human imagination. It would be foolish to claim that the shuffled result was the collapse of the fundamental reality, the probability function, into a materially observable event, where the collapse of the probability function was essentially due to the observation. It would likewise be foolish to claim that the shuffled result was a material event in my universe, but that the entire set of possible permutations of a deck of cards must exist in some other material universe(s), rather than solely in human logic. In both views, shuffling brings one material event into observable existence out of ‘real’ probabilities or ‘real’ worlds equaling 8.06 x 10^67. These views require material reality to conform to human thought in its subjection to the human imagination. The proper view is to require human judgments about reality to conform to reality, while also recognizing the immateriality of human thought and logic, which frees the human intellect from subordination to the human imagination, i.e. subordination solely to sense knowledge.

Note that adding two jokers to the deck would increase the ‘real’ probabilities and the number of ‘real’ worlds by a factor greater than 1000, to 2.3 x 10^71. The mass of the earth is a mere 5.97 x 10^36 nanograms.
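These figures follow directly from the factorials; a short check in Python:

```python
import math

perms = math.factorial(52)            # orderings of a 52-card deck
print(f"{perms:.2e}")                 # 8.07e+67

# Two jokers multiply the count by 53 * 54 = 2862, a factor over 1000.
print(math.factorial(54) // perms)    # 2862
print(f"{math.factorial(54):.2e}")    # 2.31e+71
```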

The mathematics of probability concerns the identification of logical sets solely according to their fractional composition of logical elements. When this mathematics is applied to material sets, the material properties of the elements are completely irrelevant. The IDs of the elements are purely nominal. The elements have no relevant material properties. Material elements may be used only in emulation of the mathematics, while ignoring their material properties. Thus, the mathematics may be used as a tool of ignorance of the material properties of the elements.

To the extent that a wave function is considered a probability function, it renders the mathematics a tool of ignorance, i.e. a tool to compensate for ignorance of the material properties underlying the phenomena being studied. It is a serious error to take probability as fundamental reality or as a characteristic of reality, rather than as a mathematic tool of abstract logic.

1. Excerpts from “A Delayed Choice Quantum Eraser” by Yoon-Ho Kim et al., Phys. Rev. Lett. 84:1-5 (2000), with commentary by Ross Rhodes.
2. Wheeler’s delayed choice experiment, by Alain Aspect
3. Wheeler’s classic delayed choice experiment
4. Copenhagen interpretation
5. Video of the faith and science conference in 2011, at time 1:09:45
6. Chapter 11, Quantum Time, by Sean Carroll

