
In an exchange of comments with Phil Rimmer on the website StrangeNotions.com, I attempted to explain the distinction between probability and efficiency. The topic deserves this fuller exposition.

I have argued that Richard Dawkins does not understand Darwinian evolution because he claims that replacing a single stage of random mutation and natural selection with a series of sub-stages increases the probability of evolutionary success. In The God Delusion (p. 121) he titles this ‘solving the problem of improbability’, i.e. the problem of low probability. My claim is that replacing the single stage with the series of sub-stages increases the efficiency of mutation while having no effect upon the probability of success.

Using Dawkins’ example of three mutation sites of six mutations each, I have illustrated the efficiency at a probability level of 89.15%, where the series requires only 54 random mutations, while the single stage requires 478.

It may be noted that at a given number of mutations, the probability of success is greater for the series than for the single stage. A numerical example is 54 total mutations: for the series, the probability of success is 89.15%, whereas for the single stage it is only 22.17%. The series has the greater probability of success at a total of 54 mutations.

This would appear to be a mortal blow to my argument. It would seem that Richard Dawkins correctly identifies the role of the series of sub-stages as increasing the probability of success, while not denying its role of increasing the efficiency of mutation. It would seem that Bob Drury errs, not in identifying the role of the series as increasing the efficiency of mutation, but in denying its role in increasing the probability of evolutionary success.

Hereby, I address this apparently valid criticism of my position.

The Two Equations of Probability as a Function of Random Mutations

The probability of evolutionary success for the single stage, PSS, as a function of the total number of random mutations, MSS, is:
PSS = 1 – (215/216)^MSS

The probability of evolutionary success for the series of three sub-stages, PSR, as a function of the total number of random mutations per sub-stage, MSUB, is:
PSR = (1 – (5/6)^MSUB)^3.

For the series, the total number of mutations is 3 x MSUB.
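
To make the comparison easy to check, here is a minimal sketch in Python (the language, the function names p_single and p_series, and the printed test values are my own choices, not Dawkins') that implements the two equations exactly as written above.

```python
# Sketch of the two probability equations above.
# p_single(m): probability of success for the single stage after m random mutations.
# p_series(m_sub): probability of success for the series of three sub-stages,
#                  with m_sub random mutations per sub-stage (3 * m_sub in total).

def p_single(m):
    return 1 - (215 / 216) ** m

def p_series(m_sub):
    return (1 - (5 / 6) ** m_sub) ** 3

print(p_single(478))  # ~0.891, i.e. ~89.1% for the single stage at 478 mutations
print(p_series(18))   # ~0.891, i.e. ~89.1% for the series at 18 per sub-stage (54 total)
print(p_single(54))   # ~0.222, i.e. ~22.2% for the single stage at 54 mutations
```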

Comparison of Probability at the Initial Condition

At zero mutations, both probabilities are zero. Initially, the probability of both processes, namely the single stage and the series of sub-stages, is the same.

For the single stage at one random mutation, which is the minimum for a positive value of probability, the probability of success is 1/216 = 0.46%.

For the series of three stages, at one random mutation per stage, which is the minimum for a positive value of probability, the probability of success is (1/6)^3 = 1/216 = 0.46%. At this level of probability, the single stage has the greater mutational efficiency. It takes the series three random mutations to achieve the same probability of success as the single stage achieves in one random mutation.
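
The two minimum values can be verified with the same equations. The following sketch (Python, with the same assumed function names as above) evaluates them.

```python
# Minimum positive probabilities: one mutation for the single stage,
# one mutation per sub-stage (three in total) for the series.
def p_single(m):
    return 1 - (215 / 216) ** m

def p_series(m_sub):
    return (1 - (5 / 6) ** m_sub) ** 3

print(p_single(1))  # 1/216, about 0.46%, reached with 1 mutation
print(p_series(1))  # (1/6)^3 = 1/216, about 0.46%, reached with 3 mutations in total
```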

Comparison of the Limit of Probability

For both the single stage and for the series of three stages, the limit of probability with the increasing number of mutations is the asymptotic value of 100% probability.

Comparison of the Method of Increasing Probability

For both the single stage and for the series of three stages, the method of increasing the probability is the same, namely increasing the number of random mutations. For both, probability is a function of the number of random mutations.

Comparison of the Intermediate Values between the Initial Condition and the Limit

For both the single stage and the series of three stages, probability increases continually from the initial condition toward the limit.

Except for totals of fewer than six mutations, i.e. two per sub-stage, at every level of probability the series requires fewer mutations than does the single stage. Correspondingly, at any total of six or more mutations, the series has a higher value of probability than the single stage. Thus, if the comparison is at a constant value of probability, the series requires fewer mutations; if the comparison is at a constant value of mutations, the series has a higher value of probability.
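
A short calculation, again only a sketch built on the two equations above, shows both halves of this comparison and the crossover near six total mutations.

```python
# Compare the two processes at equal totals of random mutations
# (multiples of three, so the series gets a whole number per sub-stage).
def p_single(m_total):
    return 1 - (215 / 216) ** m_total

def p_series(m_total):
    return (1 - (5 / 6) ** (m_total // 3)) ** 3

for m in (3, 6, 9, 27, 54):
    print(m, round(p_single(m), 4), round(p_series(m), 4))
# At 3 mutations the single stage is ahead (about 1.38% vs 0.46%); from 6 mutations
# onward the series is ahead (about 2.85% vs 2.75% at 6, and 89.1% vs 22.2% at 54).
```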

Apparent Conclusion

Richard Dawkins is right in that the series increases the probability of success, without denying that it also increases the efficiency of mutation. Bob Drury is wrong in denying the increase in probability.

The Apparent Conclusion Is False, in Consideration of the Concept of Efficiency

Both the single stage and the series of sub-stages are able to achieve any value of probability over the range from zero toward the asymptotic limit.

Efficiency is the ratio of output to input. One system or process is more efficient than another if its efficiency is numerically greater. There is no difficulty in comparing two processes where the efficiency of both systems is constant. In such a case, output is zero when input is zero, and output is a linear function of input, having a constant positive slope. The process with the higher slope is more efficient than the other. However, in cases where the efficiencies vary, the comparison of efficiencies must be made at the same value of the numerator of the ratio, i.e. the output, or at the same value of the denominator, the input.

In this comparison of the single stage vs. the series of sub-stages, the output is probability and the input is the number of random mutations. Remember that both processes increase probability by the same means, namely by increasing the number of random mutations. That is, output increases with increasing input. Also, remember that the two processes do not differ in their limit: both approach the same limit of probability asymptotically.

Dawkins’ comparison of replacing the single stage with a series of sub-stages is the comparison of two processes.

In the numerical examples above, we can calculate and compare the efficiencies of the two processes at a constant output, e.g. 89.15% probability, and at a constant input, e.g. 54 mutations.

At the constant output of 89.15%, the efficiency for the single stage is 89.15/478 = 0.19. For the series of sub-stages, the efficiency is 89.15/54 = 1.65. The mutational efficiency is greater for the series than for the single stage at the constant output of 89.15% for both processes.

At the constant input of 54 mutations, the probability for the single stage is P = 1 – (215/216)^54 = 22.17%. Therefore, the efficiency is 22.17/54 = 0.41. At this constant input, the efficiency for the series is 89.15/54 = 1.65. The mutational efficiency is greater for the series than for the single stage at the constant input of 54 mutations for both processes.
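
These efficiency figures can be reproduced with the following sketch, which takes efficiency as percent probability per random mutation, as in the definition above; the function names are again my own.

```python
# Efficiency = output / input = probability (in percent) per random mutation.
def p_single(m):
    return 100 * (1 - (215 / 216) ** m)

def p_series(m_total):
    return 100 * (1 - (5 / 6) ** (m_total // 3)) ** 3

# Constant output (about 89.15% probability): the series needs far fewer mutations.
print(p_single(478) / 478)  # ~0.19
print(p_series(54) / 54)    # ~1.65

# Constant input (54 mutations): the series attains the higher probability.
print(p_single(54) / 54)    # ~22.2 / 54 ~ 0.41
print(p_series(54) / 54)    # ~89.1 / 54 ~ 1.65
```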

At the 89.15% probability level, the series is greater in mutational efficiency than the single stage by a factor of 478/54 = 8.8.

Further evidence that Dawkins is illustrating an increase in efficiency and not an increase in probability is that he compares the temporal efficiencies of two computer programs. For both programs, the input of the number of random mutations is equated with the time of operation from initiation to termination. Termination is upon the random inclusion of one specific mutation. The program based on the sub-stages typically wins the race against the program based on the single stage. This demonstrates the greater mutational efficiency of the series of sub-stages, not its greater probability of success.

In the numerical example of three sites of six mutations each, the specific mutation would be one of 216. Let us modify the computer program races slightly. This will give us a greater insight into the meaning of probability and the meaning of efficiency.

Let each program be terminated after 54 and 478 mutations for the series and the single stage, respectively. If the comparison is performed 10,000 times, one would anticipate that, on average, each program would contain at least one copy of the specific mutation in roughly 8,915 of the trials and no copies in roughly 1,085 of the trials. The series program would be more efficient because it took only 54 mutations, or units of time, compared to 478 mutations, or units of time, for the single stage program to achieve a probability of 89.15%.
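
The modified race can also be simulated directly. The following Monte Carlo sketch is my own illustration in Python, not Dawkins' program; the target combination (0, 0, 0), the function names, and the fixed budgets of 478 and 18 draws are assumptions made only to mirror the description above.

```python
import random

SITES, VALUES, TRIALS = 3, 6, 10_000  # three sites, six possible mutations each

def single_stage_success(budget=478):
    """Draw `budget` random three-site combinations; success if the one specific
    combination, taken here as (0, 0, 0), appears at least once."""
    target = (0,) * SITES
    return any(tuple(random.randrange(VALUES) for _ in range(SITES)) == target
               for _ in range(budget))

def series_success(budget_per_stage=18):
    """Each sub-stage draws values for one site only; overall success requires
    every sub-stage to hit its target value within its own budget."""
    return all(any(random.randrange(VALUES) == 0 for _ in range(budget_per_stage))
               for _ in range(SITES))

single = sum(single_stage_success() for _ in range(TRIALS))
series = sum(series_success() for _ in range(TRIALS))
print(single, series)  # each roughly 8,900 out of 10,000 trials
```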

For the numerical illustration of three mutation sites of six mutations each, both the single stage and the series of sub-stages have the same minimum positive probability of success, namely 0.46%. Both can achieve any value of probability short of the asymptotic value of 100%. They do not differ in the probability of success attainable.

Whether we compare the relative efficiencies of the series vs. the single stage at a constant output or at a constant input, the series has the greater mutational efficiency for totals of six or more mutations.

For the numerical illustration of three mutation sites of six mutations each, at a probability of 89.15%, the series is greater in mutational efficiency by a factor of 8.8. At 90% probability, the factor of efficiency is 8.9 in favor of the series. At a probability of 99.9999%, the factor of efficiency is 12.1 in favor of the series.
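
These factors can be recovered, again as a sketch resting on the two equations, by solving each equation for the number of mutations required to reach a target probability; fractional mutations are allowed here for simplicity, which is why the factors come out marginally higher than the whole-number figures quoted above.

```python
import math

# Mutations required to reach a target probability p, solving each equation for
# the number of mutations (fractional values allowed for simplicity).
def mutations_single(p):
    return math.log(1 - p) / math.log(215 / 216)

def mutations_series(p):
    return 3 * math.log(1 - p ** (1 / 3)) / math.log(5 / 6)

for p in (0.8915, 0.90, 0.999999):
    factor = mutations_single(p) / mutations_series(p)
    print(p, round(factor, 1))
# Roughly 8.9, 9.0, and 12.1: the advantage of the series grows as the
# target probability rises.
```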

Analogy to a Different Set of Two Equations

Let the distance traveled by two autos be plotted as a function of fuel consumption. Distance increases with the amount of fuel consumed. Let the distance traveled at every value of fuel consumption be greater for auto two than auto one. Similarly, at every value of distance traveled, auto two would have used less fuel than auto one. My understanding would be inadequate and lacking comprehension, if I said that replacing auto one with auto two increases the distance traveled. It would be equally inane to say that auto two solves the problem of too low a distance. My understanding would be complete and lucid, if I said that replacing auto one with auto two increases fuel efficiency.

There is a distinction between distance and fuel efficiency. Understanding the comparison between the two autos is recognizing it as a comparison of fuel efficiency. Believing it to be a comparison of distances is a failure to understand the comparison.

For both the single stage and the series of sub-stages of evolution, probability increases with the number of random mutations. Except for the smallest numbers of mutations, at every greater number of random mutations the probability is greater for the series of sub-stages than for the single stage of evolution. Similarly, except for the minimum positive value, at every value of probability the series requires fewer random mutations. My understanding would be inadequate and lacking comprehension, if I said that replacing the single stage with the series increases the probability attained. It would be equally inane to say that the series solves the problem of too low a probability. My understanding would be complete and lucid, if I said that replacing the single stage with the series increases mutational efficiency.

The role of a series of sub-stages in replacing a single stage of random mutation and natural selection is to increase the efficiency of random mutation while having no effect on the probability of evolutionary success. This is evident by comparing the equations of probability for the series and for the single stage as functions of the number of random mutations. This is the very comparison proposed by Richard Dawkins for the sake of understanding evolution. He misunderstood it as “a solution to the problem of improbability” (The God Delusion, page 121), i.e. as solving the problem of too low a probability.

There is a distinction between probability and mutational efficiency. Understanding the comparison between the series of sub-stages and the single stage is recognizing it as a comparison of mutational efficiency. Believing it to be a comparison of probabilities is a failure to understand the comparison.
