
In an exchange of comments with Phil Rimmer on the website StrangeNotions.com, I attempted to explain the distinction between probability and efficiency. The topic deserves this fuller exposition.

I have argued that Richard Dawkins does not understand Darwinian evolution because he claims that replacing a single stage of random mutation and natural selection with a series of sub-stages increases the probability of evolutionary success. In The God Delusion (p 121) he calls this ‘solving the problem of improbability’, i.e. the problem of low probability. My claim is that replacing the single stage with the series of sub-stages increases the efficiency of mutation while having no effect upon the probability of success.

Using Dawkins’ example of three mutation sites of six mutations each, I have illustrated this efficiency at a probability level of 89.15%, where the series requires only 54 random mutations, while the single stage requires 478.

It may be noted that at a given number of mutations, the probability of success is greater for the series than for the single stage. A numerical example is at 54 total mutations. For the series, the probability of success is 89.15%, whereas at 54 total mutations the probability for the single stage is only 22.17%. The series has the greater probability of success at a total of 54 mutations.

This would appear to be a mortal blow to my argument. It would seem that Richard Dawkins correctly identifies the role of the series of sub-stages as increasing the probability of success, while not denying its role of increasing the efficiency of mutation. It would seem that Bob Drury errs, not in identifying the role of the series as increasing the efficiency of mutation, but in denying its role in increasing the probability of evolutionary success.

Hereby, I address this apparently valid criticism of my position.

The Two Equations of Probability as a Function of Random Mutations

The probability of evolutionary success for the single stage, PSS, as a function of the total number of random mutations, MSS, is:
PSS = 1 – (215/216)^MSS

The probability of evolutionary success for the series of three sub-stages, PSR, as a function of the total number of random mutations per sub-stage, MSUB, is:
PSR = (1 – (5/6)^MSUB)^3.

For the series, the total number of mutations is 3 x MSUB.
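
These two equations are easy to check numerically. Here is a minimal sketch in Python (the function names are mine, for illustration only):

```
# Probability of success for the single stage after m random mutations.
def p_single(m):
    return 1 - (215 / 216) ** m

# Probability of success for the series of three sub-stages,
# with m_sub random mutations in each sub-stage.
def p_series(m_sub):
    return (1 - (5 / 6) ** m_sub) ** 3

print(p_series(18))   # 18 per sub-stage, 54 in total: ~0.8915
print(p_single(478))  # ~0.8912, i.e. ~478 mutations for the same level
print(p_single(54))   # ~0.2217 at the series' total of 54 mutations
```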

Comparison of Probability at the Initial Condition

At zero mutations, both probabilities are zero. Initially, the probability of the two processes, namely the single stage and the series of sub-stages, is the same.

For the single stage at one random mutation, which is the minimum for a positive value of probability, the probability of success is 1/216 = 0.46%.

For the series of three stages, at one random mutation per stage, which is the minimum for a positive value of probability, the probability of success is (1/6)^3 = 1/216 = 0.46%. At this level of probability, the single stage has the greater mutational efficiency. It takes the series three random mutations to achieve the same probability of success as the single stage achieves in one random mutation.

Comparison of the Limit of Probability

For both the single stage and for the series of three stages, the limit of probability with the increasing number of mutations is the asymptotic value of 100% probability.

Comparison of the Method of Increasing Probability

For both the single stage and for the series of three stages, the method of increasing the probability is the same, namely increasing the number of random mutations. For both, probability is a function of the number of random mutations.

Comparison of the Intermediate Values between the Initial Condition and the Limit

For both the single stage and for the series of three stages, probability varies, but continually increases from the initial condition toward the limit.

Except for totals of fewer than six mutations, i.e. two per sub-stage, at every level of probability the series requires fewer mutations than does the single stage. Correspondingly, at any number of mutations of six or greater, the series has a higher value of probability than the single stage. Thus, if the comparison is at a constant value of probability, the series requires fewer mutations. If the comparison is at a constant value of mutations, the series has a higher value of probability.
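
A short loop over the same two equations (a Python sketch reusing the functions above; totals are multiples of three so the series gets a whole number of mutations per sub-stage) makes both comparisons visible:

```
for total in (3, 6, 9, 12, 54):
    p_ss = 1 - (215 / 216) ** total
    p_sr = (1 - (5 / 6) ** (total // 3)) ** 3
    print(total, round(p_ss, 4), round(p_sr, 4))
# 3:  0.0138 vs 0.0046 -- single stage ahead below six total mutations
# 6:  0.0275 vs 0.0285 -- series ahead from six mutations onward
# 54: 0.2217 vs 0.8915
```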

Apparent Conclusion

Richard Dawkins is right in that the series increases the probability of success, without denying that it also increases the efficiency of mutation. Bob Drury is wrong in denying the increase in probability.

The Apparent Conclusion Is False, in Consideration of the Concept of Efficiency

Both the single stage and the series of sub-stages are able to achieve any value of probability over the range from zero toward the asymptotic limit.

Efficiency is the ratio of output to input. One system or process is more efficient than another if its efficiency is numerically greater. There is no difficulty in comparing two processes where the efficiency of each is constant. In such a case, output is zero when input is zero, and output is a linear function of input, having a constant positive slope. The process with the higher slope is the more efficient. However, in cases where the efficiencies vary, the comparison of efficiencies must be made at the same value of the numerator of the efficiency ratio, i.e. the output, or at the same value of the denominator, the input.

In this comparison of the single stage vs. the series of sub-stages, the output is probability and the input is the number of random mutations. Remember that both processes increase probability by the same means, namely by increasing the number of random mutations. That is, output increases with increasing input. Remember also that the two processes approach the same limit of probability asymptotically.

Dawkins’ comparison of replacing the single stage with a series of sub-stages is the comparison of two processes.

In the numerical examples above, we can calculate and compare the efficiencies of the two processes at a constant output, e.g. a probability of 89.15%, and at a constant input, e.g. 54 mutations.

At the constant output of 89.15%, the efficiency for the single stage is 89.15/478 = 0.19. For the series of sub-stages the efficiency is 89.15/54 = 1.65. The mutational efficiency is greater for the series than for the single stage at the constant output of 89.15% for both processes.

At the constant input of 54 mutations, the probability for the single stage is P = 1 – (215/216)^54 = 22.17%. Therefore, the efficiency is 22.17/54 = 0.41. At this constant input, the efficiency for the series is 89.15/54 = 1.65. The mutational efficiency is greater for the series than for the single stage at the constant input of 54 mutations for both processes.

At the 89.15% probability level, the series is greater in mutational efficiency than the single stage by a factor of 478/54 = 8.85.

Further evidence that Dawkins is illustrating an increase in efficiency, and not an increase in probability, is that he compares the temporal efficiencies of two computer programs. For both programs, the input of the number of random mutations is equated with the time of operation from initiation to termination. Termination is upon the random inclusion of one specific mutation. The sub-stage-based program typically wins the race against the single-stage-based program. This demonstrates the greater mutational efficiency of the series of sub-stages, not a greater probability of success.

In the numerical example of three sites of six mutations each, the specific mutation would be one of 216. Let us modify the computer program races slightly. This will give us a greater insight into the meaning of probability and the meaning of efficiency.

Let each program be terminated after 54 and 478 mutations for the series and the single stage, respectively. If the comparison is performed 10,000 times, one would anticipate that, on average, both programs would contain at least one copy of the specific mutation in 8,915 of the trials and no copies in 1,085 of the trials. The series program would be more efficient because it took only 54 mutations, or units of time, compared to 478 mutations, or units of time, for the single stage program to achieve a probability of 89.15%.
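
This modified race can be simulated directly. Below is a sketch of such a simulation, on my assumption of the setup just described:

```
import random

TRIALS = 10_000

# Single stage: 478 random draws, each one of 216 equally likely mutations;
# success means at least one copy of the one specific mutation.
def single_stage_success():
    return any(random.randrange(216) == 0 for _ in range(478))

# Series: three sub-stages of 18 draws each (54 in total); each sub-stage
# must produce at least one copy of its one specific mutation out of six.
def series_success():
    return all(any(random.randrange(6) == 0 for _ in range(18))
               for _ in range(3))

print(sum(single_stage_success() for _ in range(TRIALS)))  # ~8,912 of 10,000
print(sum(series_success() for _ in range(TRIALS)))        # ~8,915 of 10,000
```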

For the numerical illustration of three mutation sites of six mutations each, both the single stage and the series of sub-stages have the same initial probability of success greater than zero, namely 0.46%. Both can achieve any value of probability short of the asymptotic value of 100%. They do not differ in the probability of success attainable.

It doesn’t matter whether we compare the relative efficiencies of the series vs. the single stage at a constant output or a constant input, the series has the greater mutational efficiency for total mutations greater than six.

For the numerical illustration of three mutation sites of six mutations each, at a probability of 89.15%, the series is greater in mutational efficiency by a factor of 8.85. At 90% probability, the factor of efficiency is 8.9 in favor of the series. At a probability of 99.9999%, the factor of efficiency is 12.1 in favor of the series.

Analogy to a Different Set of Two Equations

Let the distance traveled by two autos be plotted as a function of fuel consumption. Distance increases with the amount of fuel consumed. Let the distance traveled at every value of fuel consumption be greater for auto two than for auto one. Similarly, at every value of distance traveled, auto two would have used less fuel than auto one. My understanding would be inadequate and lacking comprehension, if I said that replacing auto one with auto two increases the distance traveled. It would be equally inane to say that auto two solves the problem of too low a distance. My understanding would be complete and lucid, if I said that replacing auto one with auto two increases fuel efficiency.

There is a distinction between distance and fuel efficiency. Understanding the comparison between the two autos is recognizing it as a comparison of fuel efficiency. Believing it to be a comparison of distances is a failure to understand the comparison.

For both the single stage and the series of sub-stages of evolution, probability increases with the number of random mutations. Except for the minimum number for the sub-series, at every greater number of random mutations, the probability is greater for the series of sub-stages than for the single stage of evolution. Similarly, except for the minimum positive value, at every value of probability, the series requires fewer random mutations. My understanding would be inadequate and lacking comprehension, if I said that replacing the single stage with the series increases the probability attained. It would be equally inane to say that the series solves the problem of too low a probability. My understanding would be complete and lucid, if I said that replacing the single stage with the series increases mutational efficiency.

The role of a series of sub-stages in replacing a single stage of random mutation and natural selection is to increase the efficiency of random mutation while having no effect on the probability of evolutionary success. This is evident by comparing the equations of probability for the series and for the single stage as functions of the number of random mutations. This is the very comparison proposed by Richard Dawkins for the sake of understanding evolution. He misunderstood it as “a solution to the problem of improbability” (The God Delusion, page 121), i.e. as solving the problem of too low a probability.

There is a distinction between probability and mutational efficiency. Understanding the comparison between the series of sub-stages and the single stage is recognizing it as a comparison of mutational efficiency. Believing it to be a comparison of probabilities is a failure to understand the comparison.

In order to understand Darwinian evolution, every high school student must know the basic arithmetic involved. The following example illustrates the mathematical explanation of Darwinian evolution by evolutionary biologist Richard Dawkins (The God Delusion, page 121).

The Example

The improbability of the result of the simultaneous flip of a coin, the roll of a die and the random selection of a card from a deck is 99.84%, i.e. 1 – (0.5 x 0.1667 x 0.0192). Sequentially flipping a coin, rolling a die and randomly selecting a card breaks up the improbability of 99.84% into three smaller pieces, namely 50%, which is 1 – 0.5; 83.33%, which is 1 – 0.1667; and 98.08%, which is 1 – 0.0192. This is how natural selection works.
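
The arithmetic of the example can be checked in a few lines. A sketch, using exact fractions (the probability of a specific card is 1/52):

```
from fractions import Fraction

coin, die, card = Fraction(1, 2), Fraction(1, 6), Fraction(1, 52)

print(float(1 - coin * die * card))  # overall improbability: ~0.9984
print(float(1 - coin))               # 0.5
print(float(1 - die))                # ~0.8333
print(float(1 - card))               # ~0.9808
```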

The Explanation

“(N)atural selection is a cumulative process which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so. When large numbers of these slightly improbable events are stacked up in series, the end product is very very improbable indeed, improbable enough to be far beyond the reach of chance. It is these end products that form the subjects of the creationist’s wearisomely recycled argument. The creationist completely misses the point, because he (women should once not mind being excluded by the pronoun) insists on treating the genesis of statistical improbability as a single, one-off event. He doesn’t understand the power of accumulation.”

Dawkins illustrated this with his own numerical example of three mutation sites of six mutations each. Taking random mutation as a one-off event affecting all three sites simultaneously, the improbability of the outcome is 99.54% for one random mutation, i.e. 1 – (1/216). Subjecting each site individually to random mutation, the improbability of each of the three stages is 83.33%, i.e. 1 – (1/6), for one random mutation in each sub-stage. In biological evolution a sub-stage is terminated by natural selection. Thereby, natural selection breaks the improbability of 99.54% into three smaller pieces, each of which is 83.33%.

The creationist would claim that there is no change in the improbability. The overall improbability of the series equals the improbability of the single one-off stage. The improbability would be 99.54% in both the single stage affecting all three sites and in the series of three sub-stages, each affecting only one site. The creationist mistakenly thinks that the probabilities of a series multiply, thereby yielding the same improbability for the one-off event and for the series. The creationist doesn’t understand the power of accumulation, which applied to a single, one-off event, breaks up the improbability into smaller pieces of improbability.

Coordinately, the power of accumulation must break up the small piece of probability of the single, one-off event into three larger pieces of probability forming the series. The small piece of probability of the single, one-off event is 0.46%, i.e. 1/216, while each of the three larger pieces of probability, into which the small piece is broken, is 16.67%, i.e. 1/6.

According to Dawkins, the creationist thinks the probability of the single, one-off event is equal to the product, not the sum, of a sub-series of probabilities. Thereby, according to Dawkins, the creationist is oblivious to the power of accumulation, i.e. to the power of addition in arithmetic.

Evaluation

In fact it is Dawkins who has the arithmetic wrong. The probabilities of a series are the factors, whose multiplication product is the overall probability of the series. The overall probability of a series is not the sum of the probabilities of a series, nor is the overall improbability of a series the sum of the improbabilities of the series. The overall improbability is not broken up into smaller pieces, which are united by accumulation, i.e. by summation. Neither is the overall probability broken up into larger pieces, which are united by accumulation, i.e. by summation.
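
Plain arithmetic settles the point. A sketch, using the stage probabilities of Dawkins’ own example:

```
stage_probabilities = [1 / 6, 1 / 6, 1 / 6]  # the three sub-stages

overall = 1.0
for p in stage_probabilities:
    overall *= p

print(overall)                   # 1/216 ~ 0.0046: the overall probability
print(sum(stage_probabilities))  # 0.5: a sum, which is NOT the overall probability
```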

Dawkins’ confusion of the arithmetical operations of addition and multiplication leads him to the false belief that sub-staging in Darwinian evolution increases the probability of evolutionary success. It also blinds Dawkins to what natural selection truly accomplishes through sub-stages. Through sub-staging, natural selection increases the efficiency of random mutation.

In the case of one random mutation per sub-stage the probability of evolutionary success per sub-stage is 16.67%. Overall the probability of success is 0.46%, while the total number of mutations for the three stages is three. To achieve this same probability of success, i.e. 0.46%, the single stage requires only one mutation. At the 0.46% level of success, the single stage is more efficient in the number of mutations by a factor of three. However, at higher levels of evolutionary success, this quickly changes, resulting in greater mutational efficiency for the series of sub-stages.

At two random mutations per sub-stage for a total of six, the probability of success overall is 2.85%, while for that level of success, the single stage also requires six mutations by rounding down from a calculated 6.23 mutations. At three random mutations per sub-stage for a total of nine, the probability of success overall is 7.48%, while for that level of success, the single stage requires sixteen random mutations by rounding down from a calculated 16.75 mutations.

Mutational efficiency in favor of sub-stages increases with the level of evolutionary success. Eighteen random mutations per sub-stage, for a total of fifty-four random mutations, yield an overall probability of evolutionary success of 89.15%. To achieve the 89.15% level of probability of success, the single, one-off stage requires 478 random mutations.

For levels of 0.46%, 2.85%, 7.48% and 89.15% of the probability of Darwinian evolutionary success, the efficiency factor for random mutations in favor of the series of sub-stages goes from less than one, namely 1/3, to 6/6 to 16/9 to 478/54. That last ratio is an efficiency factor of 8.85.

By confusing multiplication and addition, Dawkins fails to understand the role of sub-stages in Darwinian evolution. Sub-staging has no effect on the probability of evolutionary success. Rather, it increases the efficiency of random mutation in Darwinian evolution.

Note

The probability, P, of Darwinian evolutionary success for a stage of random mutation and natural selection is a function of n, the total number of different mutations defined by a stage, and of x, the number of random mutations occurring in that stage. P = 1 – ((n – 1)/n)^x. The probability of a series is the product of the probabilities of the stages in the series.
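
Expressed as code, the note reads as follows (a sketch; the helper names are mine):

```
# P = 1 - ((n - 1)/n)^x for one stage of n defined mutations and x random mutations.
def stage_probability(n, x):
    return 1 - ((n - 1) / n) ** x

# The probability of a series is the product of the probabilities of its stages.
def series_probability(stages):
    p = 1.0
    for n, x in stages:
        p *= stage_probability(n, x)
    return p

print(stage_probability(216, 478))        # single stage: ~0.8912
print(series_probability([(6, 18)] * 3))  # three sub-stages: ~0.8915
```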


This essay is presented by its author on the supposition of his virtual assignment to the debate side expressed by the title. It is prompted by the impression that published views on the con side of the debate typically dismiss the pro side as intellectually and philosophically trivial. Consequently, the con side has not adequately addressed the issue of debate.

The issue or thesis is that human knowledge of material reality is the inference of mathematical probability. Hahn and Wiker (Answering the New Atheism, p 10) accuse Dawkins of an irrational faith in chance when Dawkins has explicitly denied chance as a solution (The God Delusion, p 119-120). Feser (The Last Superstition) does not even discuss mathematical probability, although identifying Dawkins as his main philosophical opponent. In a few instances Feser uses the word, probability, but in the sense of human certitude, not in the mathematical sense.

The Historical Issues

There were two dichotomies with which the ancient Greek philosophers wrestled. One was the discrete and the continuous. The other was the particular and the universal.

The Discrete and the Continuous

Zeno of Elea was a proponent of the discrete to the denial of the continuous. This took the form of a discrete analysis of motion. Any linear local motion takes a finite time to proceed halfway, leaving the remainder of the motion in the same situation. If local motion were real, it would take an infinite number of finite increments of time, and also of distance, to complete the motion. Therefore, motion is an illusion. From this perspective, it is assumed that the discrete is real. When subjected to discrete analysis, motion, which is continuous, is seen to be untenable.

Heraclitus of Ephesus took the opposite view. Everything is always changing. It is change, which is real. Things as entities, i.e. as implicitly stable, are mental constructs. They are purely logical. It is continuous fluidity which is reality.

The Particular and the Universal

It was apparent to both Plato and his student, Aristotle, that the object of sense knowledge was particular, completely specified. In contrast, intellectual concepts were universal, not characterized by particularity, but compatible with a multitude of incompatible particulars. Plato proposed that sense knowledge of the particular was a prompt to intellectual knowledge, recalling a memory when the human soul, apart from the body, had known the universals.

Aristotle proposed that material entities or substances were composites of two principles. One was intelligible and universal, the substantial form. The other was the principle of individuation or matter, which enabled the expression of that universal form in a complete set of particulars. The human soul had the power to abstract the universal form from a phantasm presented to it by sense knowledge of the individual material entity in its particularities.

From this binary division into the two principles of substantial form and matter arose the concept of formal causality. The form of an entity made an entity to be what it was. It was the formal cause, whereas the particular material substance, as a composite of form and matter, was the effect. Thus, cause and effect were binary variables. The cause is absent, 0, or present, 1, and its effect was correspondingly binary as absent, 0, or present, 1. Thereby, the philosophy of formal causality was tied to the discrete mathematics of binary arithmetic.

The Modern Assessment of Form

This discrete and binary view of formal causality was subtly undermined in the 19th century. What led to its demise was the study of variation in biological forms. Darwin proposed that the modification of biological forms was due to the generation of variants by random mutation and their differential survival due to natural selection.

Superficially this appeared to be consonant with the distinction of one substantial form, or identity of one species, as discretely distinct from another. However, it was soon realized that the spectrum of seemingly discrete and graduated forms was, in its limit, continuous variation. One species in an evolutionary line did not represent a discretely distinct substantial form from the next substance in the spectrum. Rather, they were related by continuous degree (http://www.richarddawkins.net/news_articles/2013/1/28/the-tyranny-of-the-discontinuous-mind#). The distinction of one biological form from another, as substantial, was an imposition of the human mind on biological reality. To save at least the jargon of Aristotelian philosophy, it could be said that the evolutionary and graduated differences among living things were accidental differences among individuals of one substantial form, namely the substantial form, living thing.

The Resultant Modern Assessment of Efficient Causality

Apart from formal causality, Aristotle also identified efficient causality, namely the transition of potency to act. This would include all change, both substantial change and local motion. In keeping with the limitations of binary arithmetic, efficient causality and its effect were identified as absent, 0, and present, 1. However, concomitant to the implication of the random mutation of forms, which renders the substantial form of living things a continuum, is the implication of mathematical probability as the outcome of an event. Just as the realization that the mutation of forms defined a continuous spectrum for formal causality, probability defines a continuous spectrum from 0 to 1, for efficient causality. Efficient causality is the probability of an outcome, the probability of an event. The outcome or event as the effect is within a continuous spectrum and proportional to its continuous efficient cause, which is mathematical probability. Thus, the inference of mathematical probability as the mode of human knowledge of material reality, frees efficient causality and its effect from the restrictions of binary arithmetic.

Causality was no longer discrete and binary. Causality was the amplitude from 0 to 1 of the continuous variable, probability. Causality had now the nuance of degree, made possible by the rejection of discrete, binary arithmetic in favor of continuity. The magnitude of the effect was directly proportional to the amplitude of the cause. The simplicity of discrete, binary arithmetic, which is so satisfying to the human mind, was replaced by what we see in nature, namely degree.

A Clearer Understanding of Chance

Hume had rejected the idea of efficient causality. He claimed that what we propose as cause and effect is simply a habit of association of a sequence of events. In this view, we label as an effect the next in a series of events according to what we anticipate due to our habit of association. The understanding of probability as causality having amplitude restores cause and effect, negating Hume’s denial.

Mathematical probability is the fractional concentration of an element, x, of quantity, n, in a logical set of N elements. This fraction, n/N, has a lower limit of 0 as n → 0. The limit, 0, is a non-fraction. The upper limit of the fraction, probability, n/N, as n → N is 1, a non-fraction. These non-fractional limits represent the old, binary conception of causality. Properly understood, these limits demarcate the continuum of probability, the continuum of efficient causality.

The binary definition of chance was an effect of 1, where the cause was 0. In recognizing probability as efficient causality, this does not change. No one offers chance as an explanation (The God Delusion, p 119-120). In the context of probability, however, the binary concept of chance yields to a properly nuanced understanding. Chance is directional within the continuum of probability. Causality tends toward chance as the probability tends toward 0. This is mathematically the same as improbability increasing toward 1. Consequently, Dawkins notes that a decrease in improbability is a moving away from chance by degree: “I want to continue demonstrating the problem which any theory of life must solve: how to escape from chance.” (The God Delusion, p 120). This escape from chance by degree is explicit: “The answer is that natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so.” (The God Delusion, p 121)

Often in common parlance, chance and probability are synonyms: The chance or probability of heads in flipping a coin is one-half. In recognizing probability as the spectrum of efficient causality they are not synonyms. Chance is properly understood as directional movement toward the lower end of the spectrum of probability.

Mathematical Probability and Human Certitude Merge

The recognition of efficient causality as the continuum of probability introduces a distinction between mathematical chance as directional and mathematical probability as spectrum. On the other hand, this recognition merges the meaning of mathematical probability and probability in the sense of an individual’s certitude of the truth of a proposition.

In the Aristotelian, discrete, binary view of efficient causality, an individual’s certitude of the truth of a proposition, though commonly labeled probability, was strictly qualitative and subjective. One could, of course, describe his certitude on a numerical scale, but this was simply a subjective accommodation. For example, a TV pundit might state a numerical degree of certitude within a discussion of politics just for the fun of it. In spite of adopting an arbitrary scale, such as zero to ten, to express a pundit’s certitude, human certitude was still recognized as qualitative.

The recognition of efficient causality as the continuum of mathematical probability, implies that human knowledge is the inference of mathematical probability and, indeed, a matter of degree. There is no distinction between the probability of efficient causality and the degree of certitude of human knowledge. Human certitude, which was thought to be qualitative, is quantitative because human knowledge is the inference of mathematical probability.

Final Causality

Final causality or purpose is characteristic of human artifacts. However enticing as it may be, it is simply anthropomorphic to extrapolate purpose from human artifacts to material reality (The God Delusion, p 157). In the binary context of form and matter, it was quite easy to give in to the temptation. Once binary arithmetic was discarded with respect to formal and efficient causality, the temptation vanished. The continuity of probability not only erased the discrete distinctions among forms, but melded formal causality and efficient causality into the one continuous variable of probability. Final causality is identifiable in human artifacts and in a philosophy based on binary arithmetic. It serves no purpose in a philosophy based on the continuity arising from the inference of mathematical probability from material reality.

Conclusion: Regarding the Existence of God

Binary arithmetic was Aristotle’s basis for the distinction of substantial form and matter in solving the problem of the particular and the universal. The form was the intelligible principle which explained the composite, the particular substance. The composite was identified as the nature of the individual material entity. However, this implied a discrete distinction between the nature of the individual substance and its existence. One binary solution led to another binary problem: How do you explain the existence of the individual when its form, in association with matter, merely explains its nature? The Aristotelian solution lay in claiming there must be a being, outside of human experience in which there was no discrete distinction between nature and existence. That being would be perfectly integral in itself. Thereby, it would be its own formal, efficient and final causes. Its integrity would be the fix needed to amend the dichotomy of the nature and existence of the entities within our experience.

Both the problem and its solution arise out of the mindset of binary arithmetic. The problem is to explain a real, discrete distinction between nature and existence in material entities. Its solution is God, an integral whole. In contrast, the problem does not arise in the philosophy of probability, which expands philosophical understanding to permit the concept of mathematical continuity. That philosophy allows the human epistemological inference of mathematical probability. Probability and its inference from material reality, do not require a dichotomy between formal and efficient causality. In that inference, expressed as amplitude, both form and existence are integral. There is no need of a God, an external source, to bind into a whole that which is already integral in itself.

In Aristotelian philosophy, it is said that there is only a logical distinction between God’s nature and God’s existence, whereas there is a real distinction of nature and existence in created entities. The philosophy of probability avoids the dichotomies arising out of Aristotelian binary arithmetic. In the philosophy of probability there is only a logical distinction between formal and efficient causality in material things. There is no real dichotomy for a God to resolve.


Gradualism in Darwinian evolution is identified as the replacement of a conceptual overall cycle of random mutation and natural selection with an actual, gradual series of sub-cycles. It is assumed that the series of sub-cycles randomly generates all the mutations defined by the conceptual overall cycle, but in stages rather than in one gigantic evolutionary cycle. The gigantic set of mutations is itself a graded set, not of sub-cycles, but of mutations.

The mutations defined in each sub-cycle are a subset of the graded set of mutations defined by the overall cycle. Each sub-cycle consists of subjecting its subset of mutations to random generation and the resulting pool of random mutations to natural selection.

The gradualism of sub-cycles is often taken to be synonymous with the gradualism represented by the entire graduated sequence of mutations defined by the associated conceptual overall cycle of evolution.

Everyone agrees that replacing a single cycle of Darwinian evolution by a series of sub-cycles yields a series of sub-cycles, each of which has a probability of evolutionary success greater than the probability of success overall. This is simple arithmetic. The product of a series of factors, each a fraction of 1, is less than any of its factors.

However, proponents of Intelligent Design claim that there are some biological structures that cannot be assembled gradually in a series of subsets because survival in the face of natural selection requires the full functionality of the surviving mutant in each subset.

There are in fact two distinct Intelligent Design arguments against Neo-Darwinism. One argument is entitled, irreducible complexity. The other argument is the argument of gradualism presented by Stephen Meyer (Ref. 1). Both of the Intelligent Design arguments cite complex biological structures such as the ‘motor assembly’ of the bacterial flagellum. In opposition to Neo-Darwinism, the Intelligent Design arguments claim that it is the integrity of the assembled unit that confers functionality and thereby survivability when subjected to natural selection. Those mutants, which have partial assemblies, have no functionality and therefore no survivability based on functionality.

Intelligent Design’s Irreducible Complexity Argument

This argument acknowledges that the gradualism of sub-cycles increases the probability of evolutionary success in terms of the probability of the individual sub-cycle. However, it is argued that the integrity of a cited biological assembly requires a single cycle of such a size that it thereby has a level of probability too low to be acceptable. The assembly is not reducible, by a series of sub-cycles, to a lower level of complexity without sacrificing survivability. The lower the complexity, the greater the probability of evolutionary success. Thus, the level of complexity is irreducible by means of sub-cycles to a low level of complexity, which would raise the probability of each sub-cycle to an acceptably high level of probability. According to this argument, Darwinian evolution fails on the basis of probability.

Intelligent Design’s Gradualism Argument

Stephen Meyer’s Intelligent Design argument (Ref. 1) ignores the numeric values of probability and an alleged value of probability above which probability becomes large enough to serve as an explanation. Rather, the argument concentrates on the proposition that the gradualism of Darwinian evolution requires the actual generation of the entire, graduated spectrum of mutations. If there is a sub-sequence of graduated mutations, which have no functionality and therefore have no survivable utility, then the terminal of this sub-sequence could never be generated. The ‘motor assembly’ of the bacterial flagellum is cited as one example. Consequently, the evolution of such a terminal mutation is incompatible with gradualism, which is a key characteristic of Darwinian evolution.

According to this argument Darwinian evolution fails on two grounds with respect to gradualism. (1) The complete biological assembly, in the cited instances, cannot be assembled gradually because the necessary precursors, namely incomplete biological assemblies, would not pass the test of natural selection to which each precursor would be subjected in its sub-stage of gradualism. (2) The fossil record has gaps in the spectrum of mutations, which gaps are incompatible with gradualism.

Critique of Both Irreducible Complexity and Darwinian Evolution with Respect to Probability

Probability is defined over the range of 0 to 1. There is no logical basis for dividing this continuous range of definition into two segments: one segment from 0 up to an arbitrary point, where probability is too small to serve as an explanation; a second segment from that point through 1, where probability is numerically large enough to serve as an explanation.

The Irreducible Complexity Argument claims there are cycles of evolution for which the probability is in the segment near zero. Because these cycles cannot be replaced by sub-cycles, the gradualism required by evolution cannot be achieved. The Neo-Darwinian response is that there are no such cycles. Any cycle, which due to its size would have too low a probability, superficially may appear to be indivisible into sub-cycles, but is not so in fact.

Both Irreducible Complexity and Darwinian evolution claim that it is the replacement of a cycle by a series of sub-cycles which solves the ‘problem of improbability’ (Ref. 2).

Granted that each probability of a series of probabilities is larger than the overall or net probability, the net probability remains constant. The net probability is the product of the probabilities in the series. Consequently, it is nonsensical to say ‘natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so’ (Ref. 2). The individual probabilities of the sub-cycles in the series do not change the net probability of evolution.

Critique of Both the Intelligent Design and the Darwinian Evolution Claims of Gradualism

Meyer’s Intelligent Design argument agrees with Neo-Darwinism that the gradualism of sub-cycles ensures the generation of the entire spectrum of graded mutations defined by the overall cycle of evolution (Refs. 1 and 3). What they agree upon is false. The fact is that the role of sub-cycles is to increase the efficiency of mutation by eliminating the possibility of the generation of most of the graded mutations in the defined spectrum. Although he misinterpreted what he was demonstrating, Dawkins did an admirable job of demonstrating this efficiency (Ref. 4).

In Ref. 4, Dawkins used an example of three mutation sites of six mutations each to illustrate the efficiency of sub-cycles, where efficiency is achieved by eliminating the possibility of the generation of most intermediate evolutionary forms. The excellence of his illustration of increased mutational efficiency is not vitiated by the fact that Dawkins mistakenly thought he was illustrating an increase in the probability of evolutionary success by natural selection. The net probability of success is unaffected by the introduction of sub-cycles (Ref. 5).

Three mutation sites of six mutations each defines 216 different graded mutations, i.e. 6 x 6 x 6 = 216. These mutations are the two end points and 214 intermediates. Let the 216 mutations of the graded spectrum be designated 000 to 555.

In a single cycle of Darwinian evolution, all 216 different mutations are liable to be generated randomly. In Dawkins’ illustration of gradualism, a series of three sub-cycles replaces the single overall cycle. Each of the three sub-cycles in the series subjects only one site out of the three to random generation and natural selection, independently of the other two sites. This entails only six different mutations per sub-cycle. In the first sub-cycle, the six mutations are between 00'0' and 00'5' inclusively. The six mutations of the second sub-cycle are between 0'0'5 and 0'5'5 inclusively. The six mutations of the third sub-cycle are between '0'55 and '5'55 inclusively.

Although there are six possible different mutations per sub-cycle, in the second sub-cycle mutation 005 is a duplicate of 005 of the first sub-cycle. In the third sub-cycle mutation 055 is a duplicate of 055 of the second sub-cycle. That yields only 16 different mutations in total, which are liable to random generation, not 18.

In the single overall cycle there are no missing links or missing gaps in the spectrum of 216 mutations which are liable to random mutation. These are 000 to 555, for a total of 216.

In the first sub-cycle of the illustration, all six graded mutations are liable to be randomly generated, i.e. 00’0′ to 00’5’. In the second sub-cycle the six mutations liable to be randomly generated are separated by gaps in the graded spectrum. The gaps are of 5 mutations each. The six mutations which can be generated in the second sub-cycle are 0’0’5, 0’1’5, 0’2’5, 0’3’5, 0’4’5 and 0’5’5. The first gap comprises the five mutations between 0’0’5 and 0’1’5. These are 010, 011, 012, 013 and 014. There are 5 gaps of 5 mutations each, for a total of 25 mutations of the overall spectrum which cannot be generated in the second sub-cycle due to the gradualism of sub-cycles.

In the third sub-cycle of the illustration, the six mutations which are liable to be randomly generated are ‘0’55, ‘1’55, ‘2’55, ‘3’55, ‘4’55 and ‘5’55. Between each of these mutations there is a gap of 35 graded mutations which cannot be generated due to the gradualism of sub-cycles. For the first gap, the 35 are 100 to 154, inclusive. The total of different graded mutations, which cannot be generated in the third sub-cycle, is 35 x 5 = 175.

The totals of different mutations for the three sub-cycles are as follows. Sub-cycle one: 6 mutations possibly generated, 0 mutations in non-generated gaps. Sub-cycle two: 5 mutations possibly generated, 25 mutations in non-generated gaps. Sub-cycle three: 5 mutations possibly generated, 175 mutations in non-generated gaps. Totals for the three sub-cycles: 16 mutations possibly generated, 200 mutations in non-generated gaps.
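
These counts are easy to confirm by enumeration. A sketch, designating the 216 mutations as three-digit strings to the base six:

```
# Mutations liable to random generation in each sub-cycle of the illustration.
sub_cycle_1 = {f"00{k}" for k in range(6)}  # 000 through 005
sub_cycle_2 = {f"0{k}5" for k in range(6)}  # 005, 015, ..., 055
sub_cycle_3 = {f"{k}55" for k in range(6)}  # 055, 155, ..., 555

reachable = sub_cycle_1 | sub_cycle_2 | sub_cycle_3
print(len(reachable))        # 16 distinct mutations possibly generated
print(216 - len(reachable))  # 200 mutations in non-generated gaps
```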

For a critique of gradualism from the perspective of probabilities see Refs. 5 and 6.

Conclusion

Both its proponents (Ref. 3) and its critics (Ref. 1) assume that a key characteristic of Darwinian evolution is the generation of a complete spectrum of graded mutations. This shared view assumes that the generation of all mutations in this spectrum is facilitated by the gradualism of a series of sub-cycles of random mutation and natural selection. This is false. The Darwinian algorithm of random mutation and natural selection, applied in series, ensures that most of the mutations, defined by the overall graded spectrum, cannot be generated. The role of sub-staging in Darwinian evolution is the increased efficiency of mutation due to the non-generation of most of the mutations comprising the defined graded spectrum. This results in huge gaps in the spectrum of mutations actually generated.

To the typical bystander (Ref. 7), the debate between Intelligent Design and Neo-Darwinism appears to be one of science vs. science or, as the Dover Court ruled, faith vs. science. In fact, the arguments of both sides are based on their mutual misunderstanding of the arithmetical algorithm, which is Darwinian evolution.

References

1. “Darwin’s Doubt” with Stephen Meyer, http://vimeo.com/81215936
2. “The God Delusion”, page 121
3. http://www.richarddawkins.net/news_articles/2013/1/28/the-tyranny-of-the-discontinuous-mind#
4. http://www.youtube.com/watch?v=JW1rVGgFzWU minute 4:25
5. https://theyhavenowine.wordpress.com/2014/04/04/dawkins-on-gradualism/
6. https://theyhavenowine.wordpress.com/2014/04/10/smearing-out-the-luck/
7. http://www.ncregister.com/blog/pat-archbold/they-call-them-theories-for-a-reason

Note: The single quote marks are used simply to highlight the mutation site in question.

Richard Dawkins has extensively discussed arithmetic. The theme of The God Delusion is that there is an arithmetical solution to the improbability of evolution in a one-off event, namely gradualism, whereas there is no arithmetical solution to the improbability of God. Obviously, the ‘improbability’ of God cannot be solved by gradualism.

It is encouraging that Richard Dawkins is interested in mathematics. If he were to learn to correct his mistakes in math, he might do very well in re-educating those whom he has deceived in mathematics, science and philosophy due to his errors in arithmetic.

The following is a math quiz based on problems in arithmetic addressed by Richard Dawkins and his answers, whether implicit or explicit, in his public work. I present this as a helpful perspective in the delineation of Dawkins’ public errors in arithmetic.

1) What is the opposite of +1?
Correct Answer: -1
Student Dawkins: Zero. Let us then take the idea of a spectrum of probabilities seriously between extremes of opposite certainty. The spectrum is continuous, but can be represented by the following seven milestones along the way:
Strong positive, 100%; Short of 100%; Higher than 50%; 50%; Lower than 50%; Short of zero; Strong negative, 0% (p 50-51, Ref. 1).
Those who aver that we cannot say anything about the truth of the statement should refuse to place themselves anywhere in the spectrum of probabilities, i.e. of certitude. (p 51, Ref. 1)
Critique of Dawkins’ answer:
On page 51 of The God Delusion, Dawkins devotes a paragraph to discussing the fact that his spectrum of certitude, from positive to its negative opposite, does not accommodate ‘no opinion’. Yet, he fails to recognize what went so wrong that there is no place in his spectrum for ‘no opinion’. The reason that there is no place is that he has identified a negative opinion as zero, rather than as -1. If he had identified a negative opinion as -1, which is the opposite of his +1 for a positive opinion, then ‘no opinion’ would have had a place in his spectrum of certitude at its midpoint of zero. Instead, Dawkins discusses the distinction between temporarily true zeros in practice and permanently true zeros in principle, neither of which is accommodated by his spectrum ‘between two extremes of opposite certainty’, in which the opposite extreme of positive is not negative, but a false zero.

2) Is probability in the sense of rating one’s personal certitude of the truth of a statement a synonym for mathematical probability, which is the fractional concentration of an element in a logical set? For example, is probability used univocally in these two concepts: (a) The probability of a hurricane’s assembling a Boeing 747 while sweeping through a scrapyard. (b) The probability of a multiple of three in the set of integers, one through six?
Correct Answer: No. The probability of a hurricane’s assembling a Boeing 747 does not identify, even implicitly, a set of elements, one of which is the assembling of a Boeing 747. Probability as a numerical rating of one’s personal certitude of the truth of such a proposition has nothing to do with mathematical probability. In contrast to the certitude of one’s opinion about the capacity of hurricanes to assemble 747’s, the probability of a multiple of 3 in the set of integers, one through six, namely 1/3, is entirely objective.
Student Dawkins: Probability as the rating of one’s personal certitude of the truth of a proposition, has the same spectrum of definition as mathematical probability, namely 0 to +1 (p 50, Ref. 1). The odds against assembling a fully functioning horse, beetle or ostrich by randomly shuffling its parts are up there in the 747 territory of the chance that a hurricane, sweeping through a scrapyard would have the luck to assemble a Boeing 747. (p 113, Ref. 1) There is only one meaning of probability, whether it is the probability of the existence of God, the probability of a hurricane’s assembling a Boeing 747, the probability of success of Darwinian evolution based on the random generation of mutations or the probability of seven among the mutations of the sum of the paired output of two generators of random numbers to the base, six.

3) In arithmetic, is there a distinction between the factors of a product and the parts of a sum? Is the probability of a series of probabilities, the product or the sum of the probabilities of the series?
Correct Answers: Yes; The product. The individual probabilities of the series are its factors.
Student Dawkins: Yes; The relationship of a series of probabilities is more easily assessed from the perspective of improbability. An improbability can be broken up into smaller pieces of improbability. Those, who insist that the probability of a series is the product of the probabilities of the series, don’t understand the power of accumulation. Only an obscurantist would point out that if a large piece of improbability can be broken up into smaller pieces of improbability as the parts of a sum, i.e. as parts of an accumulation, then it must be true that its complement, the corresponding small piece of probability, is concomitantly broken up into larger pieces of probability, where the larger pieces of probability are the parts, whose sum (accumulation) equals the small piece of probability. (p 121, Ref. 1).

4) Jack and Jill go to a carnival. They view one gambling stand where for one dollar, the gambler can punch out one dot of a 100 dot card, where each dot is a hidden number from 00 to 99. The 100 numbers are randomly distributed among the dots of each card. If the gambler punches out the dot containing 00, he wins a kewpie doll. Later they view another stand where for one dollar, the gambler gets one red and one blue card, each with 10 dots. The hidden numbers 0 to 9 are randomly distributed among the dots of each card. If in one punch of each card, the gambler punches out red 0 and blue 0, he wins a kewpie doll. This second stand has an interesting twist, lacking in the first stand. A gambler, of course, may buy as many sets of one red card and one blue card as he pleases at one dollar per set. However, he need not pair up the cards to win a kewpie doll until after he punches all of the cards and examines the results.
(a) If a gambler buys one card from stand one and one pair of cards from stand two, what are his respective probabilities of winning a kewpie doll?
Correct Answer: The probability is 1/100 for both.
Student Dawkins: The probability of winning in one try at the first stand is 1/100. At the second stand the probability of winning is smeared out into probabilities of 1/10 for each of the two tries.
(b) How many dollars’ worth of cards must a gambler buy from each stand to reach a level of probability of roughly 50% that he will win at least one kewpie doll?
Correct Answers: $69 worth or 69 cards from stand one yields a probability of 50.0%. $12 worth or 24 cards (12 sets) from stand two yields a probability of 51.5%. (A probability closer to 50% for the second stand is not conveniently defined.)
Student Dawkins: A maximum of $50 and 50 cards from stand one yields a probability of 50%. A maximum of $49 and 49 cards from stand one yields a probability of 49%. A maximum of $14 and 28 cards yields a probability of 49%. (A probability closer to 50% for the second stand is not conveniently defined.)
(c) In the case described in (b), is the probability of winning greater at stand two?
Correct Answer: No
Student Dawkins: Yes
(d) In the case described in (b), is winning more efficient or less efficient in terms of dollars and in terms of total cards at the second carnival stand?
Correct Answer: More efficient. The second stand is based on two sub-stages of Darwinian evolution compared to the first stand, which is based on one overall stage of Darwinian evolution. The gradualism of sub-stages is more efficient in the number of random mutations while having no effect on the probability of evolutionary success. Efficiency is seen in the lower input of $12 or 24 random mutations compared to $69 or 69 random mutations to produce the same output, namely the probability of success of roughly 50%.
Student Dawkins: Efficiency is irrelevant. It’s all about probability. The gradualism of stand two breaks up the improbability of stand one into smaller pieces of improbability. (p 121, Ref. 1)
This problem is an illustration of two mutation sites of ten mutations each. I analyzed these relationships in Ref. 2, using an illustration of three mutation sites of six mutations each. In that illustration, I introduced two other modifications. One modification was that the winning number was unknown to the gambler. The other was that the gambler could choose the specific numbers on which to bet, so his tries or mutations were non-random. With the latter deviation from the Darwinian algorithm, the probability of winning a kewpie doll required a maximum of 216 non-random tries for the first stand and a maximum of 18 non-random tries for the second stand. The gradualism of the second stand smears out the luck required by the first stand. The increased probability of winning a kewpie doll at the second stand is due to the fact that one need not get his luck in one big dollop, as one does at the first stand. He can get it in dribs and drabs. It takes, respectively at the two stands, maxima of 216 and 18 tries for a probability of 100% of winning a kewpie doll. Consequently and respectively, it would take maxima of 125 and 15 tries to achieve a probability of 57.9% of winning a kewpie doll. Whether one compares 216 tries for the first stand to 18 tries for the second stand or 125 tries to 15 tries, the probability of winning a kewpie doll is greater at the second stand because it takes fewer tries. (See also the Wikipedia explanation, which is in agreement with Student Dawkins, Ref. 3)
Another example of extreme improbability is the combination lock of a bank vault. A bank robber could get lucky and hit upon the combination by chance. In practice the lock is designed with enough improbability to make this tantamount to impossible. But imagine a poorly designed lock. When each dial approaches its correct setting the vault door opens another chink. The burglar would home in on the jackpot in no time. ‘In no time’ indicates greater probability than that of his opening the well-designed lock. Any distinction between probability of success and efficiency in time is irrelevant. Also, any distinction between the probability of success and efficiency in tries, whether the tries are random mutations or non-random mutations, is irrelevant. (p 122, Ref. 1)

5) If packages of 4 marbles each are based on a density of 1 blue marble per 2 marbles, how many blue marbles will a package selected at random contain?
Correct Answer: 2 blue marbles
Student Dawkins: 2 blue marbles.

6) If packages of 4 marbles each are based on a probability of blue marbles of 1/2, how many blue marbles will a package selected at random contain?
Correct Answer: Any number from 0 to 4 blue marbles.
Student Dawkins: 2 blue marbles. This conclusion is so surprising, I’ll say it again: 2 blue marbles. My calculation would predict that with the odds of success at 1 to 1, each package of 4 marbles would contain 2 blue marbles. (p 138, Ref. 1)

References:
1. The God Delusion
2. http://www.youtube.com/watch?v=JW1rVGgFzWU minute 4:25
3. http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe#Part_3:_Climbing_Mount_Improbable

Calculations:
Where the total number of generated mutations, x, is random, and n is the number of different mutations, the probability of success, P, equals 1 – ((n – 1)/n)^x.
For n = 100 and x = 69, P = 50.0%
For n = 10 and x = 12, P = 71.8%. For P^2 = 51.5%, the sum of x = 24
Where the total number of generated mutations, x, is non-random, and n is the number of different mutations, the Probability of success, P, equals x/n
For n = 100 and x = 50, P = 50%
For n = 100 and x = 49, P = 49%
For n = 10 and x = 7, P = 70.0%. For P^2 = 49%, the sum of x = 14
For n = 216 and x = 216, P = 100%
For n = 6 and x = 6, P = 100%. For P^3 = 100%, the sum of x = 18
For n = 216 and x = 125, P = 57.9%
For n = 6 and x = 5, P = 83.3%. For P^3 = 57.9%, the sum of x = 15
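
These calculations can be reproduced with two small functions, a sketch following the two formulas above:

```
def p_random(n, x):      # random generation: P = 1 - ((n - 1)/n)^x
    return 1 - ((n - 1) / n) ** x

def p_nonrandom(n, x):   # non-random generation: P = x/n
    return x / n

print(p_random(100, 69))        # ~0.500: stand one, 69 cards
print(p_random(10, 12) ** 2)    # ~0.515: stand two, 24 cards in total
print(p_nonrandom(100, 50))     # 0.50
print(p_nonrandom(10, 7) ** 2)  # 0.49: 14 cards in total
print(p_nonrandom(216, 125))    # ~0.579
print(p_nonrandom(6, 5) ** 3)   # ~0.579: 15 tries in total
```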

My previous post (Ref. 1) may have given the false impression that no one agreed with Richard Dawkins’ explanation of smearing out the luck of Darwinian evolution. This post hopefully corrects that impression.

Dawkins has stated, “. . . natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so.” (Ref. 2) What does this mean? We can tell from his illustrations (Refs. 3 and 4).

It would seem that Dawkins is saying that the probability of the generation of a given number by a random number generator is increased by the introduction of natural selection. This doesn’t fly. Natural selection doesn’t generate mutations; it culls them. It permits only copies of one particular mutation to survive. It doesn’t affect the generation of the survivable mutation, from which that mutation’s probability arises.

Consider a single mutation site defining six different mutations; the six faces of a die will serve. In this example of a total of six defined mutations, the probability of the random generation of at least one copy of the number 6, in a total of one randomly generated mutation, is 1/6 = 16.7%, with or without natural selection. Similarly, the probability of the random generation of at least one copy of 6, in a total of six randomly generated mutations, is 1 – (5/6)^6 = 66.5%, with or without natural selection. In Darwinian evolution natural selection has no effect on probability (Ref. 1). It merely eliminates superfluous mutations, whether the superfluous mutations have been generated randomly or non-randomly.
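The claim can be demonstrated directly. A minimal Python sketch (the simulation design is mine), in which natural selection culls the pool only after generation:

import random

def cycle(rolls):
    pool = [random.randint(1, 6) for _ in range(rolls)]
    generated = 6 in pool                      # success before any selection
    survivors = [m for m in pool if m == 6]    # selection culls the rest
    return generated, bool(survivors)

N = 100_000
results = [cycle(6) for _ in range(N)]
print(sum(g for g, s in results) / N)      # about 66.5% = 1 - (5/6)^6
print(all(g == s for g, s in results))     # True: selection changed no outcome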

It is apparent that Dawkins is not assessing the role of natural selection, but analyzing the replacement of a single cycle of random mutation and natural selection with several sub-cycles. In Ref. 3, he compares a single cycle affecting three mutation sites of six mutations each to three sub-cycles, each affecting a single site. The replacement of a single cycle with a series of sub-cycles has no effect on probability. Rather, it increases the efficiency of random mutation. Yet, Dawkins does not identify this as an efficiency in mutation due to sub-cycles. He calls it ‘smearing out the luck’, as if the probability of success changed from 1/216 to 1/18. In fact, Dawkins is comparing 216 non-random mutations to 18 non-random mutations at a probability of success of 100% (Ref. 4).

Some of Dawkins’ reviewers have agreed with him. The Wikipedia review says the comparison is between probabilities of 1/216 and 1/18 (Ref. 5). More remarkably, in referring to a set of twenty-eight mutation sites of twenty-seven mutations each, John Lennox cites a probability of 10^(-31) and one billion mutations for a single cycle, compared to the probability and the number of mutations for a series of 28 sub-cycles (Ref. 6). A computer simulation of the sub-cycles reached a probability of 1 in a maximum of 43 mutations per sub-cycle. In accord with Dawkins, Lennox refers to this as drastically increasing the probabilities. Superficially, Lennox’s comparison appears to imply an increase in probability due to the introduction of sub-cycles.

However, the Darwinian algorithm with sub-cycles, as well as this example of it, does not increase probability. In fact, the comparison in Lennox’s example implies efficiency in the number of mutations due to sub-cycling, with no change to the probability. A more appropriate comparison would have been at a probability of 90% for both the single overall cycle and the series of 28 sub-cycles. This would compare 2.3 x (27)^28, i.e. roughly 2.7 x 10^40, mutations for the single cycle to 4144 mutations for the series of 28 sub-cycles at the same probability of success, namely 90%. The 4144 mutations are 148 mutations for each of 28 sub-cycles, where the probability of success for each sub-cycle is 99.6%. This yields a probability of 90% for the series of 28.
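For readers who wish to verify these figures, a minimal Python sketch (148 mutations per sub-cycle is the figure used above):

import math

n_single = 27 ** 28                    # about 1.2 x 10^40 defined sequences
x_single = math.log(10) * n_single     # mutations for 90% in a single cycle
print(f"{x_single:.2e}")               # about 2.76e40, i.e. roughly 2.7 x 10^40

p_sub = 1 - (26 / 27) ** 148           # 148 random mutations per sub-cycle
print(round(p_sub, 4))                 # 0.9963 per sub-cycle
print(round(p_sub ** 28, 3))           # 0.9 for the series of 28 sub-cycles
print(148 * 28)                        # 4144 mutations in total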

Contrasting non-random with random mutation, within the algorithm of Darwinian evolution for a single cycle, also shows that natural selection has no effect upon probability. For non-random mutation, one mutation yields a probability of 1/n. This increases linearly to a probability of 1 as the number of non-random mutations reaches n. Natural selection merely culls the superfluous mutants. Similarly, random mutation starts out at a probability of 1/n with one mutation and asymptotically approaches 1 as the number of random mutations increases. When the number of random mutations is, respectively, n, 2.3n, 4.6n and 11.5n, the respective probabilities are 63%, 90%, 99% and 99.999%. Here too, natural selection merely culls the superfluous mutants.
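A short Python sketch contrasts the two curves (the choice of n = 216 is mine; any n gives the same percentages):

def p_nonrandom(n, x):
    return min(x / n, 1.0)             # linear; reaches 1 at x = n

def p_random(n, x):
    return 1 - ((n - 1) / n) ** x      # asymptotic; never quite reaches 1

n = 216
for mult in (1, 2.3, 4.6, 11.5):
    x = round(mult * n)
    print(x, p_nonrandom(n, x), round(p_random(n, x), 5))
# random mutation: about 63%, 90%, 99% and 99.999% at n, 2.3n, 4.6n and 11.5n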

Another common error in the evaluation of Darwinian evolution is to attribute temporal and material constraints to random mutation. Because Darwinian evolution is strictly a logical algorithm of random mutation and natural selection, it is not subject to any temporal or material constraints. It is material simulations, not the logical algorithm, that can exceed such constraints. Also, in a material simulation, a random mutation requires no more time or material than a non-random mutation.

There are 52 factorial, or 8.06 x 10^67, different sequences of 52 elements. The inverse of this is the probability of any one sequence. In a material simulation, how many decks of cards and how much time does it take to generate randomly any sequence, if shuffling for five seconds is granted to be a random selection? The answer is one deck and five seconds. Granted this, how many decks of cards and how long would it take to generate a pool of decks containing at least one copy of a particular sequence at a probability of 90%? The answer is 2.3 x 8.06 x 10^67 decks and 5 x 2.3 x 8.06 x 10^67 seconds. If we apply the Darwinian algorithm of a single cycle of random mutation and natural selection, these paired numbers of decks and seconds are required for a probability of evolutionary success of 90%. This exceeds by far any practical temporal and material limits. However, if we are content with any value of probability, then we would be content with one random mutation. Natural selection does not affect probability. It merely culls superfluous, randomly generated mutants. If we trust success to just one random mutation, then there is no need for natural selection, while the material and temporal requirements of the simulation are insignificant, namely one deck and five seconds.
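The arithmetic can be checked in a few lines of Python (the five-second shuffle is the stipulation above):

import math

sequences = math.factorial(52)         # about 8.07e67 different sequences
decks_90 = math.log(10) * sequences    # decks for a 90% chance of one sequence
seconds_90 = 5 * decks_90              # at five seconds per shuffle
print(f"{sequences:.2e}")              # 8.07e67
print(f"{decks_90:.2e} decks")         # about 1.86e68 decks
print(f"{seconds_90:.2e} seconds")     # about 9.28e68 seconds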

Indeed, we must be content with any and every value of probability. I have argued that no value of probability represents a ‘problem of improbability’. To claim that ‘the probability of this outcome is so close to zero that it could not be due to chance’ is a self-contradiction. Of course, I am not claiming that probability is to be accepted as an explanation. Rather, if probability is accepted in any instance as an explanation, then in no instance can it be rejected as an explanation on the basis of its numerical value, irrespective of how close it is to zero (Ref. 7). Similarly, the acceptance of probability as an explanation is not bolstered by a value of probability closer to 1.

References:
(1) https://theyhavenowine.wordpress.com/2014/04/04/dawkins-on-gradualism/
(2) “The God Delusion”, page 121
(3) “The God Delusion”, page 122
(4) http://www.youtube.com/watch?v=JW1rVGgFzWU, minute 4:25
(5) Growing Up in the Universe, Part 3, http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe
(6) “God’s Undertaker: Has Science Buried God?”, pages 165-167
(7) https://theyhavenowine.wordpress.com/2013/12/30/too-improbable-to-be-due-to-chance/

According to Richard Dawkins, the problem which any theory of life must solve is how to escape from chance (Ref. 1), how to solve the problem of improbability (Ref. 2).

Darwinian evolution consists not simply in a single cycle of random mutation and natural selection, but in the gradualism of a series of such cycles. According to Dawkins, the problem of improbability of a single cycle is solved by the gradualism of cycles, each of which is terminated by natural selection. Although Dawkins has declared that the meaning of non-random is his life’s work (Ref. 3), it is really gradualism which is his key to explaining the randomness and improbability of Darwinian evolution.

What follows are analyses of four explanations Dawkins has offered to elucidate the role of gradualism in Darwinian evolution. The explanations focus on: (1) a numerical illustration involving three mutation sites of six mutations each, (2) the breakup of a large piece of improbability into smaller pieces, (3) the analysis of the vector of evolution into its component vectors of improbability and gradualism and (4) the gradualism of mutant forms in an ordered sequence approaching mathematical continuity.

Dawkins presents these explanations as in the mainstream of the scientific acceptance of Darwinian evolution. He is not attempting to re-interpret Darwinian evolution or to present any departure from it. His explanations are sufficiently explicit to testify clearly to the coherent mathematics underlying Darwinian evolution, in spite of Dawkins’ errors in explaining that mathematics.

The Role of Gradualism, a Numerical Illustration (Ref. 4)

A set of three mutation sites of six mutations each defines 216 different mutations, which is 6 x 6 x 6. The 216 mutations may be viewed as an ordered sequence consisting of the initial mutation, the final mutation and 214 ordered intermediate mutations. Let a pool of one copy of each mutation be generated and subjected to natural selection. Natural selection would cull all but one mutation, the final mutation. The pool of 216 would have been non-randomly generated. The probability of success of natural selection would be 100%.

Let this single cycle of mutation and natural selection be replaced by the gradualism of three cycles, where each cycle affects one of the mutation sites independently of the other two. In each cycle a pool of six non-randomly generated mutations would be subjected to natural selection. A total of 18 mutations would be non-randomly generated. The probability of success of natural selection for each cycle would be 100% and the overall probability of success of natural selection would be 100%, i.e. 100% x 100% x 100%.

The gradualism of sub-cycles would generate a total of 18 mutations consisting of the two endpoints, but only 14 of the ordered 214 intermediates defined by evolution overall. Gradualism introduces large gaps, not just missing links, in the actually generated spectrum of ordered mutations in comparison to the ordered spectrum defined by the mutation sites. Gradualism introduces an efficiency factor of 216/18 = 12 in non-random mutations without any change in the probability of success of natural selection, which is 100% for both the overall cycle and the series of sub-cycles of gradualism.
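The gap structure can be enumerated directly. A minimal Python sketch, assuming the sub-cycles fix the sites one at a time from left to right, as in the illustration:

from itertools import product

defined = set(product(range(1, 7), repeat=3))    # all 216 defined mutations
generated = set()
state = [1, 1, 1]                                # the initial mutation
for site in range(3):                            # three sub-cycles
    for value in range(1, 7):                    # six non-random mutations each
        state[site] = value
        generated.add(tuple(state))              # 18 generated, 16 unique

intermediates = generated - {(1, 1, 1), (6, 6, 6)}
print(len(defined), len(intermediates))          # 216 defined, only 14 intermediates generated
print(216 / 18)                                  # efficiency factor of 12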

Although such is Dawkins’ illustration of gradualism, it is not his interpretation. Dawkins refers to the non-random mutations as ‘tries’, implying that they are random mutations. He refers to a one in 216 chance compared to three chances of one in 6. He calls gradualism smearing out the luck into cumulative small dribs and drabs of luck. Dawkins erroneously believes he has illustrated random mutation rather than non-random mutation, and he thinks he has illustrated not an increase in the efficiency of mutation due to gradualism, but an increase in the probability of success of natural selection, i.e. the probability of success of Darwinian evolution.

If the pools of mutations subjected to natural selection in the illustration are generated by random mutation, a similar efficiency in the number of mutations is achieved, without any change in the probability of success of natural selection. A pool of 19 randomly generated mutations in each of three sub-cycles would yield a probability of success of natural selection of 96.9% for each cycle and an overall probability of success of natural selection of 90.9%. For a single cycle of random mutation involving all three mutation sites, a pool of 516 random mutations would yield a probability of success of natural selection of 90.9%. The efficiency factor in random mutations would be 516/57 = 9 due to gradualism with no change in the probability of success of natural selection.

The probability, P, of at least one copy of the mutation surviving natural selection in a pool of x randomly generated mutations with a base of n different mutations is: P = 1 – ((n – 1)/n)^x.
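Checking the figures above against this formula (a minimal Python sketch):

def p_success(n, x):
    # Probability of at least one copy in a pool of x random mutations.
    return 1 - ((n - 1) / n) ** x

print(round(p_success(6, 19), 3))        # 0.969 for each sub-cycle
print(round(p_success(6, 19) ** 3, 3))   # 0.909 for the series, 57 mutations in all
print(round(p_success(216, 516), 3))     # 0.909 for the single cycle
print(round(516 / 57, 1))                # efficiency factor of about 9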

The Breakup of a Piece of Improbability (Ref. 5)

Dawkins claims that natural selection, which terminates each cycle of random mutation and natural selection, increases the probability of evolutionary success by forming a series of cycles. The overall improbability of evolutionary success in a single cycle is broken up into smaller pieces of improbability.

To break up a large piece of something into smaller pieces implies that the smaller pieces add up to the larger piece.

Probability and improbability are complements; they add up to one. The overall probability of a series of probabilities is the product of the probabilities forming the series, not their sum. Each value of probability in a series, if less than one, is greater than the overall probability.

The probability of any face of a red die is 1/6 = 16.7%. The improbability is 5/6 = 83.3%. The same is true of a green die. The probability of any combination of the faces of the two dice, one red, one green, is 1/36 = 2.8%. Its improbability is 35/36 = 97.2%. Suppose it were mathematically valid to say that rolling the dice individually in a series, rather than both together, ‘breaks the improbability of 97.2% up into two smaller pieces of improbability of 83.3% each’. It would then be mathematically valid to say that rolling the dice in series rather than both together ‘breaks up the small piece of probability of 2.8% into two bigger pieces of probability of 16.7% each’. Both statements are nonsense. They confuse multiplication and division with addition and subtraction. Yet, Richard Dawkins claims that those who insist that the overall probability of a series is the product of the probabilities don’t understand ‘the power of accumulation’, i.e. the power of addition. He claims that ‘natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each piece is slightly improbable, but not prohibitively so.’ (Ref. 5) In other words, gradualism increases the probability of evolutionary success.
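A short simulation makes the point concrete (a Python sketch; the colors merely label the dice):

import random

N = 100_000
hits = 0
for _ in range(N):
    red = random.randint(1, 6)
    green = random.randint(1, 6)
    if (red, green) == (6, 6):    # one particular combination of the two faces
        hits += 1
print(hits / N)                   # about 1/36 = 2.8%, the product 1/6 x 1/6
# Rolling the dice one after the other instead of together changes nothing;
# the overall probability is still the product, never the sum 1/6 + 1/6.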

The probability of the outcome of the roll of three dice, one red, one green and one blue, is 1/216. If each die is rolled separately, the probability of each roll is 1/6 and the overall probability is 1/216. If the probabilities of the three individual rolls were cumulative, as Dawkins indicates (Ref. 5), the overall probability of the series would be 1/6 + 1/6 + 1/6 = 1/2. Yet, Dawkins implies the accumulation would be a probability of 1/18 (Ref. 4). Similarly, Wikipedia states, ‘The probability of unlocking the combination, in three separate phases, falls to one in eighteen.’ (Ref. 6). The non-random mutations of three individual sites of six mutations each, e.g. those of three dice, add up to 18. The fraction, 1/18, is not the probability of the series of three individual probabilities. It is simply the mathematical inverse of the 18 total mutations.

The Analysis of the Vector of Evolution (Ref. 7)

Dawkins proposes a parable of Darwinian evolution, ‘Climbing Mt. Improbable’. In the parable, evolution is a vector slope, the sum of a vertical vector, improbability, and a horizontal vector, gradualism. If gradualism is zero, then the vector of evolution is solely a vector of improbability. In the parable, the role of gradualism in Darwinian evolution is to change the slope of the vector of evolution from infinity, when gradualism equals zero, to some finite value, when gradualism is greater than zero. Dawkins implies that gradualism in the parable decreases the improbability. It does not. Both in Dawkins’ parable and in Darwinian evolution, the overall improbability is unaffected by gradualism, whether or not the vectors of evolution and gradualism are incremental. Also, improbability is not a vector. Consequently, Dawkins’ metaphor of Darwinian evolution as a vector slope, equaling the sum of its horizontal and vertical vectors, is irrelevant both to improbability and to Darwinian evolution.

The Spectrum of Ordered Mutations as Approaching Mathematical Continuity (Ref. 8)

In Darwinian evolution, each cycle of random mutation and natural selection defines a finite set of different mutations, n, which is the base of random number generation. The pool of x generated random numbers is subjected to natural selection, i.e. the pool is subjected to a filter at a discrimination ratio of 1/n, which culls all mutations except copies of the one mutation that survives natural selection. The set of n different mutations is an ordered set, where the initial mutation is the lowest and the mutation surviving natural selection is the highest in the ordered sequence.
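As a concrete rendering of one such cycle, a minimal Python sketch (the encoding of the ordered set as the integers 1 through n is mine):

import random

n, x = 6, 12                               # n defined mutations, x random ones
pool = sorted(random.randint(1, n) for _ in range(x))
print(pool)                                # e.g. [1, 1, 2, 3, 3, 4, 5, 6, 6, ...]
survivors = [m for m in pool if m == n]    # the filter at discrimination ratio 1/n
print(survivors)                           # only copies of the highest mutation survive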

Dawkins notes that, if all of the intermediates between the lowest and the highest mutation in a biological evolutionary sequence survived, the ordered sequence would be recognized as a sequence approaching mathematical continuity (Ref. 8). Furthermore, Dawkins claims that all of the intermediates have been biologically and randomly generated, but are typically extinct. Man and the pig have a common evolutionary ancestor. ‘But for the extinction of the intermediates which connect humans to the ancestor we share with pigs (it pursued its shrew-like existence 85 million years ago in the shadow of the dinosaurs), and but for the extinction of the intermediates that connect the same ancestor to modern pigs, there would be no clear separation between Homo sapiens and Sus scrofa.’ (Ref. 8).

Dawkins would be right within the context of Darwinian evolution, except for one feature of Darwinian evolution which he himself demonstrated in Reference 4. Gradualism in Darwinian evolution is highly efficient in the generation of random mutations, precisely by not generating most of the intermediates. Due to the efficiency of gradualism, as Dawkins has shown, there are gaps in the actually generated set of intermediates in comparison to the mathematically defined spectrum of ordered mutations. Most of the mathematically defined intermediates, which connect humans to the ancestor we share with pigs, are not absent due to extinction; they are absent because they were never biologically generated, thanks to the efficiency of the gradualism of Darwinian evolution. Darwinian evolution is not characterized by missing links in the fossil record. In its mathematical definition, it is characterized by gaping discontinuities in the sequence of ordered mutations actually generated, in comparison to the sequence of ordered mutations defined, but not generated.

In 1991 (Ref. 4), Dawkins demonstrated that there are discontinuities in the spectrum of generated mutants due to gradualism. In 2011 (Ref. 8), he claimed that gradualism ensures the generation of the complete spectrum and that any discontinuities are due to extinction, not to a lack of generation.

Conclusion

Dawkins does not understand the mathematics of gradualism or its role in Darwinian evolution. Yet, understanding the meaning of random/non-random, which he has identified as his life’s work, is integral to understanding both the mathematics of gradualism and its role in Darwinian evolution. The role of gradualism in Darwinian evolution is to increase the efficiency of random mutation. It has no effect on probability.

References:
1. Page 120, “The God Delusion”
2. Page 121, “The God Delusion”
3. http://www.youtube.com/watch?v=tD1QHO_AVZA, minute 38:55
4. http://www.youtube.com/watch?v=JW1rVGgFzWU, minute 4:25
5. Page 121, “The God Delusion”
6. Growing Up in the Universe, Part 3, http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe
7. Pages 121-122, “The God Delusion”
8. http://www.richarddawkins.net/news_articles/2013/1/28/the-tyranny-of-the-discontinuous-mind#