This essay is presented by its author as if he had been assigned to the side of the debate expressed by the title. It is prompted by the impression that published views on the con side of the debate typically dismiss the pro side as intellectually and philosophically trivial. Consequently, the con side has not adequately addressed the issue under debate.

The issue or thesis is that human knowledge of material reality is the inference of mathematical probability. Hahn and Wiker (Answering the New Atheism, p 10) accuse Dawkins of an irrational faith in chance, when Dawkins has explicitly denied chance as a solution (The God Delusion, p 119-120). Feser (The Last Superstition) does not discuss mathematical probability at all, although he identifies Dawkins as his main philosophical opponent. In a few instances Feser uses the word 'probability', but in the sense of human certitude, not in the mathematical sense.

The Historical Issues

There were two dichotomies with which the ancient Greek philosophers wrestled. One was the discrete and the continuous. The other was the particular and the universal.

The Discrete and the Continuous

Zeno of Elea was a proponent of the discrete to the denial of the continuous. This took the form of a discrete analysis of motion. Any linear local motion takes a finite time to proceed halfway, leaving the remainder of the motion in the same situation. If local motion were real, it would require an infinite number of finite increments of time, and of distance, to complete. Therefore, motion is an illusion. From this perspective, the discrete is assumed to be real; when subjected to discrete analysis, motion, which is continuous, is seen to be untenable.

Heraclitus of Ephesus took the opposite view. Everything is always changing. It is change that is real. Things as entities, i.e. as implicitly stable, are mental constructs. They are purely logical. It is continuous fluidity that is reality.

The Particular and the Universal

It was apparent to both Plato and his student, Aristotle, that the object of sense knowledge was particular, completely specified. In contrast, intellectual concepts were universal: not characterized by particularity, but compatible with a multitude of incompatible particulars. Plato proposed that sense knowledge of the particular was a prompt to intellectual knowledge, recalling a memory from when the human soul, apart from the body, had known the universals.

Aristotle proposed that material entities or substances were composites of two principles. One was intelligible and universal, the substantial form. The other was the principle of individuation or matter, which enabled the expression of that universal form in a complete set of particulars. The human soul had the power to abstract the universal form from a phantasm presented to it by sense knowledge of the individual material entity in its particularities.

From this binary division into the two principles of substantial form and matter arose the concept of formal causality. The form of an entity made the entity to be what it was. It was the formal cause, whereas the particular material substance, as a composite of form and matter, was the effect. Thus, cause and effect were binary variables: the cause was absent, 0, or present, 1, and its effect was correspondingly absent, 0, or present, 1. Thereby, the philosophy of formal causality was tied to the discrete mathematics of binary arithmetic.

The Modern Assessment of Form

This discrete and binary view of formal causality was subtly undermined in the 19th century. What led to its demise was the study of variation in biological forms. Darwin proposed that the modification of biological forms was due to the generation of variants by random mutation and their differential survival due to natural selection.

Superficially this appeared to be consonant with the distinction of one substantial form, or identity of one species, as discretely distinct from another. However, it was soon realized that the spectrum of seemingly discrete and graduated forms was, in its limit, continuous variation. One species in an evolutionary line did not represent a discretely distinct substantial form from the next substance in the spectrum. Rather, they were related by continuous degree (see http://www.richarddawkins.net/news_articles/2013/1/28/the-tyranny-of-the-discontinuous-mind#). The distinction of one biological form from another, as substantial, was an imposition of the human mind on biological reality. To save at least the jargon of Aristotelian philosophy, it could be said that the evolutionary and graduated differences among living things were accidental differences among individuals of one substantial form, namely the substantial form, living thing.

The Resultant Modern Assessment of Efficient Causality

Apart from formal causality, Aristotle also identified efficient causality, namely the transition of potency to act. This includes all change, both substantial change and local motion. In keeping with the limitations of binary arithmetic, efficient causality and its effect were identified as absent, 0, or present, 1. However, concomitant with the implication of the random mutation of forms, which renders the substantial form of living things a continuum, is the implication of mathematical probability as the outcome of an event. Just as the mutation of forms defines a continuous spectrum for formal causality, probability defines a continuous spectrum, from 0 to 1, for efficient causality. Efficient causality is the probability of an outcome, the probability of an event. The outcome or event, as the effect, lies within a continuous spectrum and is proportional to its continuous efficient cause, which is mathematical probability. Thus, the inference of mathematical probability as the mode of human knowledge of material reality frees efficient causality and its effect from the restrictions of binary arithmetic.

Causality was no longer discrete and binary. Causality was the amplitude from 0 to 1 of the continuous variable, probability. Causality now had the nuance of degree, made possible by the rejection of discrete, binary arithmetic in favor of continuity. The magnitude of the effect was directly proportional to the amplitude of the cause. The simplicity of discrete, binary arithmetic, which is so satisfying to the human mind, was replaced by what we see in nature, namely degree.

A Clearer Understanding of Chance

Hume had rejected the idea of efficient causality. He claimed that what we propose as cause and effect is simply a habit of association of a sequence of events. In this view, we label as an effect the next in a series of events, according to what we anticipate due to our habit of association. The understanding of probability as causality having amplitude restores cause and effect, negating Hume's denial.

Mathematical probability is the fractional concentration of an element, x, of quantity, n, in a logical set of N elements. This fraction, n/N, has a lower limit of 0 as n → 0. The limit, 0, is a non-fraction. The upper limit of the fraction, probability, n/N, as n → N is 1, a non-fraction. These non-fractional limits represent the old, binary conception of causality. Properly understood, these limits demarcate the continuum of probability, the continuum of efficient causality.
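Stated compactly in symbols (a restatement of the definition just given, nothing more):

$$P(x) \;=\; \frac{n}{N}, \qquad \lim_{n \to 0}\frac{n}{N} = 0, \qquad \lim_{n \to N}\frac{n}{N} = 1,$$

where the non-fractional limits, 0 and 1, are the endpoints demarcating the continuum.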

The binary definition of chance was an effect of 1, where the cause was 0. In recognizing probability as efficient causality, this does not change. No one offers chance as an explanation (The God Delusion, p 119-120). In the context of probability, however, the binary concept of chance yields to a properly nuanced understanding. Chance is directional within the continuum of probability. Causality tends toward chance as the probability tends toward 0. This is mathematically the same as improbability increasing toward 1. Consequently, Dawkins notes that a decrease in improbability is a movement away from chance by degree: "I want to continue demonstrating the problem which any theory of life must solve: how to escape from chance." (The God Delusion, p 120). This escape from chance by degree is explicit: "The answer is that natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so." (The God Delusion, p 121)

Often in common parlance, chance and probability are synonyms: the chance or probability of heads in flipping a coin is one-half. In recognizing probability as the spectrum of efficient causality, they are not synonyms. Chance is properly understood as directional movement toward the lower end of the spectrum of probability.

Mathematical Probability and Human Certitude Merge

The recognition of efficient causality as the continuum of probability introduces a distinction between mathematical chance as directional and mathematical probability as spectrum. On the other hand, this recognition merges the meaning of mathematical probability and probability in the sense of an individual’s certitude of the truth of a proposition.

In the Aristotelian discrete, binary view of efficient causality, an individual's certitude of the truth of a proposition, though commonly labeled 'probability', was strictly qualitative and subjective. One could, of course, describe one's certitude on a numerical scale, but this was simply a subjective accommodation. For example, stating a numerical degree of one's certitude within a discussion of politics by TV pundits was just for the fun of it. In spite of adopting an arbitrary scale, such as zero to ten, to express a pundit's certitude, human certitude was still recognized as qualitative.

The recognition of efficient causality as the continuum of mathematical probability implies that human knowledge is the inference of mathematical probability and, indeed, a matter of degree. There is no distinction between the probability of efficient causality and the degree of certitude of human knowledge. Human certitude, which was thought to be qualitative, is quantitative because human knowledge is the inference of mathematical probability.

Final Causality

Final causality or purpose is characteristic of human artifacts. However enticing it may be, it is simply anthropomorphic to extrapolate purpose from human artifacts to material reality (The God Delusion, p 157). In the binary context of form and matter, it was quite easy to give in to the temptation. Once binary arithmetic was discarded with respect to formal and efficient causality, the temptation vanished. The continuity of probability not only erased the discrete distinctions among forms, but melded formal causality and efficient causality into the one continuous variable of probability. Final causality is identifiable in human artifacts and in a philosophy based on binary arithmetic. It serves no purpose in a philosophy based on the continuity arising from the inference of mathematical probability from material reality.

Conclusion: Regarding the Existence of God

Binary arithmetic was Aristotle's basis for the distinction of substantial form and matter in solving the problem of the particular and the universal. The form was the intelligible principle which explained the composite, the particular substance. The composite was identified as the nature of the individual material entity. However, this implied a discrete distinction between the nature of the individual substance and its existence. One binary solution led to another binary problem: how to explain the existence of the individual when its form, in association with matter, merely explains its nature? The Aristotelian solution lay in claiming there must be a being, outside of human experience, in which there is no discrete distinction between nature and existence. That being would be perfectly integral in itself. Thereby, it would be its own formal, efficient and final cause. Its integrity would be the fix needed to amend the dichotomy of the nature and existence of the entities within our experience.

Both the problem and its solution arise out of the mindset of binary arithmetic. The problem is to explain a real, discrete distinction between nature and existence in material entities. Its solution is God, an integral whole. In contrast, the problem does not arise in the philosophy of probability, which expands philosophical understanding to permit the concept of mathematical continuity. That philosophy allows the human epistemological inference of mathematical probability. Probability and its inference from material reality do not require a dichotomy between formal and efficient causality. In that inference, expressed as amplitude, both form and existence are integral. There is no need of a God, an external source, to bind into a whole that which is already integral in itself.

In Aristotelian philosophy, it is said that there is only a logical distinction between God’s nature and God’s existence, whereas there is a real distinction of nature and existence in created entities. The philosophy of probability avoids the dichotomies arising out of Aristotelian binary arithmetic. In the philosophy of probability there is only a logical distinction between formal and efficient causality in material things. There is no real dichotomy for a God to resolve.

 

Gradualism in Darwinian evolution is identified as the replacement of a conceptual overall cycle of random mutation and natural selection with an actual, gradual series of sub-cycles. It is assumed that the series of sub-cycles randomly generates all the mutations defined by the conceptual overall cycle, but in stages rather than in one gigantic evolutionary cycle. The gigantic set defined by the overall cycle is itself a graded set, not of sub-cycles, but of mutations.

The mutations defined in each sub-cycle are a subset of the graded set of mutations defined by the overall cycle. Each sub-cycle consists of subjecting its subset of mutations to random generation and the resulting pool of random mutations to natural selection.

The gradualism of sub-cycles is often taken to be synonymous with the gradualism represented by the entire graduated sequence of mutations defined by the associated conceptual overall cycle of evolution.

Everyone agrees that replacing a single cycle of Darwinian evolution with a series of sub-cycles yields sub-cycles, each of which has a probability of evolutionary success greater than the overall probability of success. This is simple arithmetic: the product of a series of factors, each a fraction of 1, is less than any of its factors.

However, proponents of Intelligent Design claim that there are some biological structures that cannot be assembled gradually in a series of subsets because survival in the face of natural selection requires the full functionality of the surviving mutant in each subset.

There are in fact two distinct Intelligent Design arguments against Neo-Darwinism. One is the argument from irreducible complexity. The other is the argument from gradualism presented by Stephen Meyer (Ref. 1). Both Intelligent Design arguments cite complex biological structures, such as the 'motor assembly' of the bacterial flagellum. In opposition to Neo-Darwinism, they claim that it is the integrity of the assembled unit that confers functionality, and thereby survivability, when subjected to natural selection. Those mutants which have partial assemblies have no functionality and therefore no survivability based on functionality.

Intelligent Design’s Irreducible Complexity Argument

This argument acknowledges that the gradualism of sub-cycles increases the probability of evolutionary success in terms of the probability of the individual sub-cycle. However, it is argued that the integrity of a cited biological assembly requires a single cycle of such a size that it has a level of probability too low to be acceptable. The assembly is not reducible, by a series of sub-cycles, to a lower level of complexity without sacrificing survivability. The lower the complexity, the greater the probability of evolutionary success. Thus, the level of complexity is irreducible, by means of sub-cycles, to a low level of complexity which would raise the probability of each sub-cycle to an acceptably high level. According to this argument, Darwinian evolution fails on the basis of probability.

Intelligent Design’s Gradualism Argument

Stephen Meyer's Intelligent Design argument (Ref. 1) ignores numeric values of probability, including any alleged value above which probability becomes large enough to serve as an explanation. Rather, the argument concentrates on the proposition that the gradualism of Darwinian evolution requires the actual generation of the entire, graduated spectrum of mutations. If there is a sub-sequence of graduated mutations which have no functionality, and therefore no survivable utility, then the terminal of this sub-sequence could never be generated. The 'motor assembly' of the bacterial flagellum is cited as one example. Consequently, the evolution of such a terminal mutation is incompatible with gradualism, which is a key characteristic of Darwinian evolution.

According to this argument Darwinian evolution fails on two grounds with respect to gradualism. (1) The complete biological assembly, in the cited instances, cannot be assembled gradually because the necessary precursors, namely incomplete biological assemblies, would not pass the test of natural selection to which each precursor would be subjected in its sub-stage of gradualism. (2) The fossil record has gaps in the spectrum of mutations, which gaps are incompatible with gradualism.

Critique of Both Irreducible Complexity and Darwinian Evolution with Respect to Probability

Probability is defined over the range of 0 to 1. There is no logical basis for dividing this continuous range of definition into two segments: one segment from 0 up to an arbitrary point, where probability is too small to serve as an explanation; a second segment from that point through 1, where probability is numerically large enough to serve as an explanation.

The Irreducible Complexity Argument claims there are cycles of evolution for which the probability is in the segment near zero. Because these cycles cannot be replaced by sub-cycles, the gradualism required by evolution cannot be achieved. The Neo-Darwinian response is that there are no such cycles. Any cycle which, due to its size, would have too low a probability may superficially appear to be indivisible into sub-cycles, but is not so in fact.

Both Irreducible Complexity and Darwinian evolution claim that it is the replacement of a cycle by a series of sub-cycles which solves the ‘problem of improbability’ (Ref. 2).

Granted that each probability of a series of probabilities is larger than the overall or net probability, the net probability remains constant. The net probability is the product of the probabilities in the series. Consequently, it is nonsensical to say ‘natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so’ (Ref. 2). The individual probabilities of the sub-cycles in the series do not change the net probability of evolution.
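This arithmetic can be checked directly. A minimal sketch, with illustrative numbers not drawn from any biological example:

```python
from fractions import Fraction

# Exact arithmetic: splitting one cycle into sub-cycles leaves the net
# probability unchanged, since the net is the product of the factors.
overall = Fraction(1, 1000)               # illustrative overall probability
sub_cycles = [Fraction(1, 10)] * 3        # three sub-cycles, each more probable

net = Fraction(1)
for p in sub_cycles:
    net *= p

print(net == overall)                     # True: the net probability is unchanged
print(all(p > net for p in sub_cycles))   # True: each factor exceeds the product
```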

Critique of Both the Intelligent Design and the Darwinian Evolution Claims of Gradualism

Meyer's Intelligent Design argument agrees with Neo-Darwinism that the gradualism of sub-cycles ensures the generation of the entire spectrum of graded mutations defined by the overall cycle of evolution (Refs. 1 and 3). That to which they agree is false. In fact, the role of sub-cycles is to increase the efficiency of mutation by eliminating the possibility of the generation of most of the graded mutations in the defined spectrum. Although he misinterpreted what he was demonstrating, Dawkins did an admirable job of demonstrating this efficiency (Ref. 4).

In Ref. 4, Dawkins used an example of three mutation sites of six mutations each to illustrate the efficiency of sub-cycles, where efficiency is achieved by eliminating the possibility of the generation of most intermediate evolutionary forms. The excellence of his illustration of increased mutational efficiency is not vitiated by the fact that Dawkins mistakenly thought he was illustrating an increase in the probability of evolutionary success by natural selection. The net probability of success is unaffected by the introduction of sub-cycles (Ref. 5).

Three mutation sites of six mutations each define 216 different graded mutations, i.e. 6 x 6 x 6 = 216. These mutations are the two end points and 214 intermediates. Let the 216 mutations of the graded spectrum be designated 000 to 555.

In a single cycle of Darwinian evolution, all 216 different mutations are liable to be generated randomly. In Dawkins' illustration of gradualism, a series of three sub-cycles replaces the single overall cycle. Each of the three sub-cycles in the series subjects only one site of the three to random generation and natural selection, independently of the other two sites. This entails only six different mutations per sub-cycle. In the first sub-cycle, the six mutations are between 00'0' and 00'5' inclusively. The six mutations of the second sub-cycle are between 0'0'5 and 0'5'5 inclusively. The six mutations of the third sub-cycle are between '0'55 and '5'55 inclusively.

Although there are six possible different mutations per sub-cycle, mutation 005 of the second sub-cycle is a duplicate of 005 of the first sub-cycle, and mutation 055 of the third sub-cycle is a duplicate of 055 of the second sub-cycle. That yields a total of only 16 different mutations liable to random generation, not 18.

In the single overall cycle there are no missing links, i.e. no gaps, in the spectrum of 216 mutations liable to random generation. These are 000 to 555, for a total of 216.

In the first sub-cycle of the illustration, all six graded mutations are liable to be randomly generated, i.e. 00'0' to 00'5'. In the second sub-cycle, the six mutations liable to be randomly generated are separated by gaps in the graded spectrum. The gaps are of 5 mutations each. The six mutations which can be generated in the second sub-cycle are 0'0'5, 0'1'5, 0'2'5, 0'3'5, 0'4'5 and 0'5'5. The first gap comprises the five mutations between 0'0'5 and 0'1'5, namely 010, 011, 012, 013 and 014. There are 5 gaps of 5 mutations each, for a total of 25 mutations of the overall spectrum which cannot be generated in the second sub-cycle due to the gradualism of sub-cycles.

In the third sub-cycle of the illustration, the six mutations which are liable to be randomly generated are '0'55, '1'55, '2'55, '3'55, '4'55 and '5'55. Between each pair of these mutations there is a gap of 35 graded mutations which cannot be generated due to the gradualism of sub-cycles. For the first gap, the 35 are 100 to 154, inclusive. The total of different graded mutations which cannot be generated in the third sub-cycle is 35 x 5 = 175.

The totals of different mutations for the three sub-cycles are: sub-cycle one, 6 mutations possibly generated, 0 mutations in non-generated gaps; sub-cycle two, 5 mutations possibly generated, 25 mutations in non-generated gaps; sub-cycle three, 5 mutations possibly generated, 175 mutations in non-generated gaps. Totals: 16 mutations possibly generated, 200 mutations in non-generated gaps.
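The counting above is easily reproduced. A minimal sketch, using base-6 strings 000 to 555 for the 216 mutations, as in the text:

```python
from itertools import product

# All 216 mutations of three base-6 sites, written as strings 000 .. 555.
all_mutations = {''.join(d) for d in product('012345', repeat=3)}
assert len(all_mutations) == 216

# The three sub-cycles: each varies one site while the already selected
# sites are held fixed, exactly as described in the illustration.
sub1 = {'00' + d for d in '012345'}        # 000 .. 005
sub2 = {'0' + d + '5' for d in '012345'}   # 005 .. 055
sub3 = {d + '55' for d in '012345'}        # 055 .. 555

generated = sub1 | sub2 | sub3
print(len(generated))                      # 16 (18 less the duplicates 005 and 055)
print(len(all_mutations - generated))      # 200 mutations in non-generated gaps
```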

For a critique of gradualism from the perspective of probabilities see Refs. 5 and 6.

Conclusion

Both its proponents (Ref. 3) and its critics (Ref. 1) assume that a key characteristic of Darwinian evolution is the generation of a complete spectrum of graded mutations. This shared view assumes that the generation of all mutations in this spectrum is facilitated by the gradualism of a series of sub-cycles of random mutation and natural selection. This is false. The Darwinian algorithm of random mutation and natural selection, applied in series, ensures that most of the mutations, defined by the overall graded spectrum, cannot be generated. The role of sub-staging in Darwinian evolution is the increased efficiency of mutation due to the non-generation of most of the mutations comprising the defined graded spectrum. This results in huge gaps in the spectrum of mutations actually generated.

To the typical bystander (Ref. 7), the debate between Intelligent Design and Neo-Darwinism appears to be one of science vs. science or, as the Dover Court ruled, faith vs. science. In fact, the arguments of both sides are based on their mutual misunderstanding of the arithmetical algorithm, which is Darwinian evolution.

References

1. "Darwin's Doubt" with Stephen Meyer, http://vimeo.com/81215936
2. The God Delusion, p 121
3. http://www.richarddawkins.net/news_articles/2013/1/28/the-tyranny-of-the-discontinuous-mind#
4. http://www.youtube.com/watch?v=JW1rVGgFzWU, minute 4:25
5. https://theyhavenowine.wordpress.com/2014/04/04/dawkins-on-gradualism/
6. https://theyhavenowine.wordpress.com/2014/04/10/smearing-out-the-luck/
7. http://www.ncregister.com/blog/pat-archbold/they-call-them-theories-for-a-reason

Note: The single quote marks are used simply to highlight the mutation site in question.

It is perfectly acceptable in thought to characterize material processes as mathematically random. For example, the roll of two dice is characterized as random, such that a sum of seven is said to have a probability of 1/6. The equation of radioactive decay may be characterized as the probability function of the decay of a radioactive element. Wave equations in quantum mechanics may be characterized as probability functions. However, such valid mathematical characterizations do not attest to randomness and probability as being characteristics of material reality. Rather, they attest to mathematical randomness and probability as being characteristic of human knowledge in its limitations.

If randomness and probability were characteristic of material reality at any level, including the atomic and sub-atomic level, material reality would be inherently unintelligible in itself. Material reality would be inexplicable and as such inherently mysterious. Yet, to view material reality as inherently mysterious is superstition. Superstition denies causality by claiming that material results are a mystery in themselves, e.g. that they are materially random.

It is an erroneous interpretation to hold that quantum mechanics requires material reality to be random and probable in itself. Wave equations may be viewed as probability functions only in the same sense that the result of rolling dice is mathematically probable. That sense is the suspension of human knowledge of material causality at the level of physical forces for the sake of mathematical utility, without the denial of material causality.

A commenter on a recent post at catholicstand.com (Ref. 1) was so enamored with the validity, utility and beauty of the mathematics of quantum mechanics that he declared, "This randomness is inherent in nature." Indeed it is inherent in nature, i.e. in human nature, in the limitations of the human knowledge of the measurable properties of material reality.

Material reality is not random in its nature. Whether the nature of material reality is random, in light of the utility of applying the mathematics of probability or of perceiving a mathematical function as one of probability, is not a question within the scope of science or mathematics. The nature of material reality is always a question within philosophy. In contrast, the mathematical and scientific question is the suitability of specific mathematics in representing the relationships among the measurable properties of material reality, including those properties which can only be detected and measured with instruments.

Let it be noted that scientific knowledge cannot demonstrate the fundamental invalidity of human sensory knowledge and human intellectual knowledge because the validity of scientific knowledge depends on the fundamental validity of these.

It has been recognized since the time of Aristotle that the human intellect is extrinsically dependent in its activity upon a sensual phantasm, i.e. a composite of sense knowledge. This and all visualizations or imaginative representations are necessarily restricted to the scope of the senses, although the intellect is not. Consequently, science at the atomic and sub-atomic level cannot consist in an analysis of visual or imaginative simulations, which are confined to the scope of human sensation. Rather, the science consists in the mathematics which identifies quantitative relationships among instrumental measurements. It would be a fool's quest to attempt to determine a one-to-one correspondence between the science and an imaginative representation of the atomic and sub-atomic level, or to constrain the understanding of the science to such a representation (Ref. 2).

Remarkably, in an analogy of a wave function in quantum mechanics as a probability function which collapses into a quantum result, the physicist Stephen M. Barr did not choose an example of mathematical probability (Ref. 3). He could have proposed an analogy of mathematical probability simulated by flipping a coin. While the coin is rotating in the air, it could be viewed as a probability function of heads of 50%, which collapses into a quantum result of heads, namely one, or tails, namely zero, upon coming to rest on the ground.

Instead, he chose an example where the meaning of probability is not mathematical, but qualitative.

Mathematical probability is the fractional concentration of an element in a logical set, e.g. heads has a fractional concentration of 50% in the logical set of two logical elements with the nominal identities of heads and tails. A coin is a material simile.

A completely unrelated meaning of the word 'probability' is an individual's personal lack of certitude of the truth of a statement. Examples: 'I probably had eggs for breakfast in the past two weeks' or 'Jane will probably pass the French exam.' These statements identify no set of elements nor anything quantitative. Personal human certitude is qualitative. Yet we are bent upon quantitatively rating the certitude with which we hold our personal judgments.

Barr succumbs to this penchant for quantifying personal certitude. He illustrates the collapse of a wave function in quantum mechanics with the seemingly objective quantitative statement:
“This is where the problem begins. It is a paradoxical (but entirely logical) fact that a probability only makes sense if it is the probability of something definite. For example, to say that Jane has a 70% chance of passing the French exam only means something if at some point she takes the exam and gets a definite grade. At that point, the probability of her passing no longer remains 70%, but suddenly jumps to 100% (if she passes) or 0% (if she fails). In other words, probabilities of events that lie in between 0 and 100% must at some point jump to 0 or 100% or else they meant nothing in the first place.”
Barr mistakenly thinks that probability, whether referring to mathematics or to human certitude, refers to coming into existence, to happening. In fact, both meanings are purely static. The one refers to the composition of mathematical sets, although its jargon may imply occurrence or outcome. The other refers to one's opinion of the truth of a statement, which may be predictive. That Jane has a 70% chance, or will probably pass the French exam, obviously expresses the certitude of some human's opinion, which has no objective measurement even if arrived at by some arbitrary algorithm.

Probability in mathematics is quantitative, but static. It is the fractional composition of logical sets. Probability in the sense of human certitude, like justice, is a quality. It cannot be measured because it is not material. This, however, does not diminish our penchant for quantifying everything (Ref. 4).

Barr's identification of probability as potential, prior to its transition to actuality in an outcome, is due to mistaking the jargon of the mathematics of sets for the mathematics of sets itself. We say that the outcome of flipping a coin had a probability of 50% heads prior to the flip, which results in an outcome or actuality of 100% or 0%. What we mean to illustrate by such a material simulation is a purely static relationship involving the fractional concentration of the elements of logical sets. The result of the coin flip illustrates the formation or definition of a population of new sets of elements based on a source set of elements. In this case the source set is a set of two elements of different IDs. The newly defined population of sets consists of one set identical to the original set or, if you wish, a population of any multiple of such sets.

Another illustration is defining a population of sets of three elements each, based on the probabilities of a source set of two elements of different nominal IDs, such as A and B. The population is identified by eight sets. One set is a set of three elements, A, at a probability (fractional concentration) of 12.5% in the newly defined population of sets. One set is a set of three elements, B, at a probability of 12.5%. Three sets are of two elements A and one element B, at a probability of 37.5%. Three sets are of two elements B and one element A, at a probability of 37.5%. The relationships are purely static. We may imagine the sets as being built by flipping a coin. Indeed, we use such jargon in discussing the mathematics of the relationship of sets. The flipping of a coin in the ‘building’ of the population of eight sets, or multiples thereof, is a material simulation of the purely logical concept of random selection. Random selection is the algorithm for defining the fractional concentrations of the population of eight new sets based on the probabilities of the source set. It is only jargon, satisfying to our sensual imagination, in which the definitions of the eight new sets in terms of the fractional concentration of their elements are viewed as involving a transition from potency to act or probability to outcome. The mathematics, in contrast to the analogical imaginative aid, is the logic of static, quantitative relationships among the source set and the defined population of eight new sets.
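The eight-set illustration can be enumerated in a few lines. A sketch, representing the population as the 2^3 ordered triples over a source set {A, B}:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# The population of eight sets of three elements built on a source set {A, B}.
population = [''.join(t) for t in product('AB', repeat=3)]

# Group by composition (order ignored) and report fractional concentrations.
counts = Counter(''.join(sorted(s)) for s in population)
for composition, n in sorted(counts.items()):
    print(composition, Fraction(n, len(population)))
# AAA 1/8 (12.5%), AAB 3/8 (37.5%), ABB 3/8 (37.5%), BBB 1/8 (12.5%)
```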

Random selection, or random mutation, is not a material process. It is a logical concept within an algorithm, which defines a logical population of sets based on the probabilities of a logical source set.

It is a serious error to conflate mathematical probability with the certitude of human judgment. It is also a serious error to believe that either refers to coming into existence or to the transition from potency to act, which are subjects of philosophical inquiry.

Ref. 1 “When Randomness Becomes Superstition” http://catholicstand.com/randomness-becomes-superstition/

Ref. 2 “Random or Non-random, Math Option or Natural Principle?” https://theyhavenowine.wordpress.com/2014/08/24/random-or-non-random-math-option-or-natural-principle/

Ref. 3 “Does Quantum Physics Make It Easier to Believe in God?” https://www.bigquestionsonline.com/content/does-quantum-physics-make-it-easier-believe-god

Ref. 4 “The Love of Quantification” https://theyhavenowine.wordpress.com/2013/08/11/the-love-of-quantification-2/

Richard Dawkins has extensively discussed arithmetic. The theme of The God Delusion is that there is an arithmetical solution to the improbability of evolution in a one-off event, namely gradualism, whereas there is no arithmetical solution to the improbability of God. Obviously, the ‘improbability’ of God cannot be solved by gradualism.

It is encouraging that Richard Dawkins is interested in mathematics. If he were to learn to correct his mistakes in math, he might do very well in re-educating those whom he has deceived in mathematics, science and philosophy due to his errors in arithmetic.

The following is a math quiz based on problems in arithmetic addressed by Richard Dawkins and his answers, whether implicit or explicit, in his public work. I present this as a helpful perspective in the delineation of Dawkins’ public errors in arithmetic.

1) What is the opposite of +1?
Correct Answer: -1
Student Dawkins: Zero. Let us then take the idea of a spectrum of probabilities seriously between extremes of opposite certainty. The spectrum is continuous, but can be represented by the following seven milestones along the way:
Strong positive, 100%; short of 100%; higher than 50%; 50%; lower than 50%; short of zero; strong negative, 0% (pp 50-51, Ref. 1).
Those who aver that we cannot say anything about the truth of the statement should refuse to place themselves anywhere in the spectrum of probabilities, i.e. of certitude. (p 51, Ref. 1)
Critique of Dawkins’ answer:
On page 51 of The God Delusion, Dawkins devotes a paragraph to the fact that his spectrum of certitude, from positive to its negative opposite, does not accommodate 'no opinion'. Yet he fails to recognize what went so wrong that there is no place in his spectrum for 'no opinion'. The reason there is no place is that he has identified a negative opinion as zero, rather than as -1. If he had identified a negative opinion as -1, the opposite of his +1 for a positive opinion, then 'no opinion' would have had a place in his spectrum of certitude at its midpoint of zero. Instead, Dawkins discusses the distinction between temporarily true zeros in practice and permanently true zeros in principle, neither of which is accommodated by his spectrum 'between two extremes of opposite certainty', in which the opposite extreme of positive is not negative, but a false zero.

2) Is probability in the sense of rating one’s personal certitude of the truth of a statement a synonym for mathematical probability, which is the fractional concentration of an element in a logical set? For example, is probability used univocally in these two concepts: (a) The probability of a hurricane’s assembling a Boeing 747 while sweeping through a scrapyard. (b) The probability of a multiple of three in the set of integers, one through six?
Correct Answer: No. The probability of a hurricane's assembling a Boeing 747 does not identify, even implicitly, a set of elements, one of which is the assembling of a Boeing 747. Probability as a numerical rating of one's personal certitude of the truth of such a proposition has nothing to do with mathematical probability. In contrast to the certitude of one's opinion about the capacity of hurricanes to assemble 747s, the probability of a multiple of 3 in the set of integers one through six, namely 1/3, is entirely objective.
Student Dawkins: Probability as the rating of one's personal certitude of the truth of a proposition has the same spectrum of definition as mathematical probability, namely 0 to +1 (p 50, Ref. 1). The odds against assembling a fully functioning horse, beetle or ostrich by randomly shuffling its parts are up there in the 747 territory of the chance that a hurricane, sweeping through a scrapyard, would have the luck to assemble a Boeing 747 (p 113, Ref. 1). There is only one meaning of probability, whether it is the probability of the existence of God, the probability of a hurricane's assembling a Boeing 747, the probability of success of Darwinian evolution based on the random generation of mutations, or the probability of seven among the mutations of the sum of the paired output of two generators of random numbers to the base six.

3) In arithmetic, is there a distinction between the factors of a product and the parts of a sum? Is the probability of a series of probabilities, the product or the sum of the probabilities of the series?
Correct Answers: Yes; The product. The individual probabilities of the series are its factors.
Student Dawkins: Yes; the relationship of a series of probabilities is more easily assessed from the perspective of improbability. An improbability can be broken up into smaller pieces of improbability. Those who insist that the probability of a series is the product of the probabilities of the series don't understand the power of accumulation. Only an obscurantist would point out that if a large piece of improbability can be broken up into smaller pieces of improbability as the parts of a sum, i.e. as parts of an accumulation, then it must be true that its complement, the corresponding small piece of probability, is concomitantly broken up into larger pieces of probability, where the larger pieces of probability are the parts whose sum (accumulation) equals the small piece of probability (p 121, Ref. 1).

4) Jack and Jill go to a carnival. They view one gambling stand where for one dollar, the gambler can punch out one dot of a 100 dot card, where each dot is a hidden number from 00 to 99. The 100 numbers are randomly distributed among the dots of each card. If the gambler punches out the dot containing 00, he wins a kewpie doll. Later they view another stand where for one dollar, the gambler gets one red and one blue card, each with 10 dots. The hidden numbers 0 to 9 are randomly distributed among the dots of each card. If in one punch of each card, the gambler punches out red 0 and blue 0, he wins a kewpie doll. This second stand has an interesting twist, lacking in the first stand. A gambler, of course, may buy as many sets of one red card and one blue card as he pleases at one dollar per set. However, he need not pair up the cards to win a kewpie doll until after he punches all of the cards and examines the results.
(a) If a gambler buys one card from stand one and one pair of cards from stand two, what are his respective probabilities of winning a kewpie doll?
Correct Answer: The probability is 1/100 for both.
Student Dawkins: The probability of winning in one try at the first stand is 1/100. At the second stand the probability of winning is smeared out into probabilities of 1/10 for each of the two tries.
(b) How many dollars’ worth of cards must a gambler buy from each stand to reach a level of probability of roughly 50% that he will win at least one kewpie doll?
Correct Answers: $69 worth or 69 cards from stand one yields a probability of 50.0%. $12 worth or 24 cards (12 sets) from stand two yields a probability of 51.5%. (A probability closer to 50% for the second stand is not conveniently defined.)
Student Dawkins: A maximum of $50 and 50 cards from stand one yields a probability of 50%. A maximum of $49 and 49 cards from stand one yields a probability of 49%. A maximum of $14 and 28 cards yields a probability of 49%. (A probability closer to 50% for the second stand is not conveniently defined.)
(c) In the case described in (b), is the probability of winning greater at stand two?
Correct Answer: No
Student Dawkins: Yes
(d) In the case described in (b), is winning more efficient or less efficient in terms of dollars and in terms of total cards at the second carnival stand?
Correct Answer: More efficient. The second stand is based on two sub-stages of Darwinian evolution compared to the first stand, which is based on one overall stage of Darwinian evolution. The gradualism of sub-stages is more efficient in the number of random mutations while having no effect on the probability of evolutionary success. Efficiency is seen in the lower input of $12 or 24 random mutations compared to $69 or 69 random mutations to produce the same output, namely the probability of success of roughly 50%.
Student Dawkins: Efficiency is irrelevant. It’s all about probability. The gradualism of stand two breaks up the improbability of stand one into smaller pieces of improbability. (p 121, Ref. 1)
This problem is an illustration of two mutation sites of ten mutations each. I analyzed these relationships in Ref. 2, using an illustration of three mutation sites of six mutations each. In that illustration, I introduced two other modifications. One modification was that the winning number was unknown to the gambler. The other was that the gambler could choose the specific numbers on which to bet, so his tries or mutations were non-random. With the latter deviation from the Darwinian algorithm, winning a kewpie doll required a maximum of 216 non-random tries at the first stand and a maximum of 18 non-random tries at the second stand. The gradualism of the second stand smears out the luck required by the first stand. The increased probability of winning a kewpie doll at the second stand is due to the fact that one need not get his luck in one big dollop, as one does at the first stand. He can get it in dribs and drabs. It takes, respectively at the two stands, maxima of 216 and 18 tries for a probability of 100% of winning a kewpie doll. Consequently and respectively, it would take maxima of 125 and 15 tries to achieve a probability of 57.9% of winning a kewpie doll. Whether one compares 216 tries for the first stand to 18 tries for the second stand, or 125 tries to 15 tries, the probability of winning a kewpie doll is greater at the second stand because it takes fewer tries. (See also the Wikipedia explanation, which is in agreement with Student Dawkins, Ref. 3)
Another example of extreme improbability is the combination lock of a bank vault. A bank robber could get lucky and hit upon the combination by chance. In practice the lock is designed with enough improbability to make this tantamount to impossible. But imagine a poorly designed lock. When each dial approaches its correct setting the vault door opens another chink. The burglar would home in on the jackpot in no time. 'In no time' indicates greater probability than that of his opening the well-designed lock. Any distinction between probability of success and efficiency in time is irrelevant. Also, any distinction between the probability of success and efficiency in tries, whether the tries are random mutations or non-random mutations, is irrelevant. (p 122, Ref. 1)

5) If packages of 4 marbles each are based on a density of 1 blue marble per 2 marbles, how many blue marbles will a package selected at random contain?
Correct Answer: 2 blue marbles
Student Dawkins: 2 blue marbles.

6) If packages of 4 marbles each are based on a probability of blue marbles of 1/2, how many blue marbles will a package selected at random contain?
Correct Answer: Any number from 0 to 4 blue marbles.
Student Dawkins: 2 blue marbles. This conclusion is so surprising, I’ll say it again: 2 blue marbles. My calculation would predict that with the odds of success at 1 to 1, each package of 4 marbles would contain 2 blue marbles. (p 138, Ref. 1)
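The correct answer can be checked with the binomial distribution. A minimal sketch:

```python
from math import comb
from fractions import Fraction

# Packages of 4 marbles, each marble blue with probability 1/2: the count of
# blue marbles per package is binomial, so any count from 0 to 4 can occur.
for k in range(5):
    print(k, 'blue marbles:', Fraction(comb(4, k), 2 ** 4))
# 0: 1/16, 1: 1/4, 2: 3/8, 3: 1/4, 4: 1/16
# Exactly 2 blue marbles occurs with probability only 3/8 = 37.5%.
```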

References:
1. The God Delusion
2. http://www.youtube.com/watch?v=JW1rVGgFzWU, minute 4:25
3. http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe#Part_3:_Climbing_Mount_Improbable

Calculations:
Where the total number of generated mutations, x, is random, and n is the number of different mutations, the probability of success is P = 1 - ((n - 1)/n)^x.
For n = 100 and x = 69, P = 50.0%.
For n = 10 and x = 12, P = 71.7%; for P^2 = 51.5%, the total of x is 24.
Where the total number of generated mutations, x, is non-random, and n is the number of different mutations, the probability of success is P = x/n.
For n = 100 and x = 50, P = 50%.
For n = 100 and x = 49, P = 49%.
For n = 10 and x = 7, P = 70.0%; for P^2 = 49%, the total of x is 14.
For n = 216 and x = 216, P = 100%.
For n = 6 and x = 6, P = 100%; for P^3 = 100%, the total of x is 18.
For n = 216 and x = 125, P = 57.9%.
For n = 6 and x = 5, P = 83.3%; for P^3 = 57.9%, the total of x is 15.
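These calculations can be reproduced with a short script, a direct transcription of the two formulas above:

```python
def p_random(n, x):
    """Probability of at least one success in x random tries over n outcomes."""
    return 1 - ((n - 1) / n) ** x

def p_nonrandom(n, x):
    """Probability of success in x distinct, non-random tries over n outcomes."""
    return x / n

print(f'{p_random(100, 69):.1%}')       # 50.0%  (stand one, 69 cards)
print(f'{p_random(10, 12) ** 2:.1%}')   # 51.5%  (stand two, 12 sets of 2 cards)
print(f'{p_nonrandom(100, 50):.0%}')    # 50%    (non-random, stand one)
print(f'{p_nonrandom(216, 125):.1%}')   # 57.9%
print(f'{p_nonrandom(6, 5) ** 3:.1%}')  # 57.9%
```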

The distinction between random and non-random is a distinction in mathematical logic. Is it also a distinction in nature, a distinction between natural principles?

Consider one mutation site of fifty-two different mutations. An analogy would be a playing card.
(1) Let each of the fifty-two different mutations be generated deliberately and one mutation be selected randomly, discarding the rest.
(2) Let fifty-two mutations be generated randomly and the selection of a specified mutation, if generated, be deliberate, discarding the rest.

These two algorithms are grossly the same. They present a proliferation of mutations followed by its reduction to a single mutation. They differ in whether the proliferation is identified as random or non-random and whether the reduction to a single mutation is identified as random or non-random.

Apply these two mathematical algorithms analogically to playing cards.

For the evolution of the Ace of Spades, the first algorithm would begin with a deck of fifty-two cards followed by selecting one card at random from the deck. If it is the Ace of Spades, it is kept. If not, it is discarded. The probability of evolutionary success would be 1/52 = 1.9%.

For the evolution of the Ace of Spades by the second algorithm, fifty-two decks of cards would be used to select randomly one card from each deck. The resulting pool of fifty-two cards would be sorted, discarding all cards except for copies of the Ace of Spades, if any. The probability of evolutionary success would be 1 – (51/52)^52 = 63.6%.

The probability of success of the second algorithm can be increased by increasing the number of random mutations generated. If 118 mutations are generated randomly, the probability of this pool’s containing at least one copy of the Ace of Spades is 90%.
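A sketch of the three probabilities just cited, under the stated assumptions of the card analogy:

```python
# Algorithm (1): one random pick from one deck, keep only the Ace of Spades.
p1 = 1 / 52
# Algorithm (2): one random pick from each of 52 decks, then sort the pool.
p2 = 1 - (51 / 52) ** 52
# Enlarging the random pool to 118 picks raises the probability to ~90%.
p3 = 1 - (51 / 52) ** 118

print(f'{p1:.1%}  {p2:.1%}  {p3:.1%}')   # 1.9%  63.6%  89.9%
```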

Notice that, of the two processes, the generation of mutations and their differential survival, either may arbitrarily be represented as mathematically random while the other is arbitrarily represented as mathematically non-random.

Also, notice that in the material analogy of the mathematics, the analog of randomness is human ignorance and lack of knowledgeable control. In its materiality, ‘random selection’ of a playing card is a natural, scientifically delineable, non-random, material process.

In the mathematics of probability, random selection is solely a logical relationship of logical elements of logical sets. It is only analogically applied to material elements and sets of material elements. The IDs of the elements are purely nominal. Measurable properties, which are the subject of science, and which are implicitly associated with the IDs, are completely irrelevant to the mathematical relationships. A set of seven elements consisting of four sheep and three roofing nails has the exact same mathematical relationships of randomness and probability as a set consisting of four elephants and three sodium ions.

In the logic of the mathematics of probability, the elementary composition of sets is arbitrary. The logic does not apply to material things as such, because the IDs of elements and of sets can only be nominal within the logical relationships defined by the mathematics. This is in contrast to the logic of the syllogism, in which the elementary composition of sets is not arbitrary. The logic of the syllogism does apply to material things, but only if the material things are assigned as elements to sets not arbitrarily, but according to their natural properties, which properties are irrelevant to the mathematics of probability. The logic of the syllogism applies to material things if the IDs are natural rather than nominal.

Charles Darwin published The Origin of Species in 1859. Meiosis, which is essential to the detailed modern scientific knowledge of genetic inheritance, was discovered in 1876 by Oscar Hertwig. In the interim, Gregor Mendel applied mathematical probability, as a tool of ignorance of the details of genetics, to the inheritance of flower color in peas. The conclusion was not that the material processes of genetics are random. The conclusion was that the material processes involved binary division of genetic material in parents and its recombination in their offspring. The binary division and recombination are now known in scientific detail as meiosis and fertilization.

The mathematics of randomness and probability, which can be applied only analogically to material processes, serves as a math of the gaps in the scientific knowledge of the details of those processes.

Consider the following two propositions. Can both be accepted as compatible, as applying the mathematics of randomness and probability optionally to one process or the other? Can either be rejected as scientifically untenable in principle, without rejecting the other by that same principle?
(I) The generation of biological mutations is random, while their differential survival is due to natural, non-random, scientifically delineable, material processes.
(II) The generation of biological mutations is due to natural, non-random, scientifically delineable, material processes, while their differential survival is random.

My detailed answers are contained in the essay, “The Imposition of Belief by Government”, Delta Epsilon Sigma Journal, Vol. LIII, p 44-53 (2008). My answers are also readily inferred from the context in which I have presented the questions in this essay.

In its recent decision in Derwin v. State U., the State Supreme Court ordered the State University to award Charles Derwin the degree of doctor of philosophy. Derwin admitted that his case, which he lost in all the lower courts, depended upon one sarcastic statement made in writing by Prof. Stickler of the faculty panel which heard the defense of his graduate thesis. The bylaws of the University, in awarding the degree of doctor of philosophy, require unanimous approval of the faculty panel by written yes or no voting. The members of the panel are free to offer verbal or written criticism during and after the defense, but must mark their ballots simply yes or no. However, in casting the lone negative vote, Prof. Stickler wrote as an addendum to his 'No', "If Derwin and his major advisor were to submit an article for publication reporting the experimental results of Derwin's thesis, I suggest they submit it to The Journal of Random Results or to The Journal of Darwinian Evolution."

Derwin’s legal team argued that Stickler violated the university bylaws by adding the written addendum as well as academic decorum by its sarcasm. Further, and most importantly, they argued that Prof. Stickler exposed his own incompetence to judge the thesis by his attempt to belittle Darwinian Evolution. By this Stickler had disqualified himself as a judge of the thesis panel. The State Supreme Court agreed and ordered the State University to award the degree in accord with the university bylaws requiring unanimous approval by the thesis panel. The Court noted that the university bylaws allow for a panel of six to eight faculty members. The panel, which heard the defense of his thesis by Derwin, consisted of seven, including Prof. Stickler. The Court also ruled that the academic level arguments presented in the lower courts by both sides regarding ‘random results’ in general and ‘random mutation’ in the particular case of Darwinian evolution, were simply of academic interest and irrelevant to the legal case.

For their academic interest those arguments are presented here.

Prof. Stickler stated that random experimental results are of no scientific value and that Derwin conceded that the results reported in his thesis could be characterized as random. Stickler argued that even those who contend that genetic mutation is random claim that Darwinian evolution is non-random, and therefore scientific, even though their claim is erroneous. Stickler attributed the following quote to Emeritus Prof. Richard Dawkins of Oxford University as his response to the question, "Could you explain the meaning of non-random?" Dawkins replied, "Of course, I could. It's my life's work. There is random genetic variation and non-random survival and non-random reproduction. . . That is quintessentially non-random. . . . Darwinian evolution is a non-random process. . . . It (evolution) is the opposite of a random process." (Ref. 1)

Stickler stated that Dawkins’ argument that Darwinian evolution is non-random and therefore, scientific, does not hold water. He noted that the pool of genetic mutants subjected to natural selection in Darwinian evolution is formed by random mutation. The pool’s containing the mutant capable of surviving natural selection is a matter of probability. Consequently, the success of natural selection cannot be 100%. Evolutionary success is equal to the probability of the presence of at least one copy of the survivable mutant in the pool subjected to natural selection and is therefore random.

In his rebuttal, Derwin agreed with Stickler that Darwinian evolution was indeed characterized by probability and randomness. However, the universal scientific acceptance of Darwinian evolution indicates that random results are indeed scientific, which he noted was the pertinent issue in the case.

Stickler's counter argument was to note that Darwinian evolution is based on data consisting of a series of cycles, each cycle consisting of the proliferation of genetically variant forms and their diminishment to a single form. Darwinian evolution explains such cycles by the hypothesis of the random generation of genetic variants and their reduction to singularity by natural differential survival. Stickler claimed the data could also be explained by what he called 'The Inverse Darwinian Theory of Evolution'. The inverse theory explains the same cyclic data as the standard theory, but as the non-random, natural generation of genetic variants by scientifically identifiable material processes and the reduction of this pool of genetic variants to singularity by random differential survival.

Deciding which hypothesis, if either, was valid would require some ingenuity beyond the stipulated data of the proliferation and diminishment of variant genetic mutations. He noted, however, that the inverse hypothesis would be rejected a priori by the claim that we know at least some of the scientific factors affecting differential survival, so it could not be hypothesized that differential survival was random. This, Stickler claimed, demonstrates that randomness and probability cannot be proffered as a scientific explanation. He said, "If randomness is rejected a priori as scientifically untenable as an explanation of variant survival, because differential survival is due to scientific material processes, then randomness must be rejected a priori as scientifically untenable as an explanation of the generation of genetic variants, for the same reason. If randomness is sauce for the goose of variant generation, randomness must potentially be sauce for the gander of variant survival. In fact it is sauce, i.e. the mathematics of randomness and probability is a tool of ignorance to cover a gap in scientific knowledge. It was the absence of the scientific knowledge of genetics in the mid-nineteenth century that made it seem plausible at the time to propose ignorance of the scientific knowledge of material processes, that is, to propose random changes, as part and parcel of a scientific theory."

In response, Derwin noted that quantum mechanics, perhaps the most basic of the sciences, is recognized as founded on probability and therefore randomness (Ref. 2).

References:
1) http://www.youtube.com/watch?v=tD1QHO_AVZA (minute 38:56)
2) https://theyhavenowine.wordpress.com/2014/03/26/quantum-or-wave-dilemma-of-the-imagination/


Amaryllis is a form of lily and, as such, its petals are in sets of three. It has two such sets, one fore and one aft. One set may be roughly identified by direction as north, southeast and southwest; the other as south, northeast and northwest. These sets are natural measurable properties of the amaryllis plant and are useful in the science of taxonomy.

The mathematics of sets may be characteristically embodied in nature and, when so, it forms part of the base of science. However, that is not to say that the entirety of the mathematics of sets is characteristically embodied in nature. One important exception is the mathematics of probability.

Probability is the fractional concentration of an element in a logical set. The mathematics of probability is centered in the formation of other logical sets from a source set, where all sets are identified by their probabilities, i.e. their fractional compositions.

In discussing the mathematics of probability where the source set is one of two subsets of three unique elements, it would not be inappropriate to refer to the amaryllis flower as a visual aid in discussing what is essentially logical and in no way characteristic of the nature of the amaryllis flower.

In randomly forming sets of two petals, where the source set is the amaryllis flower, what are the probabilities of the population of sets defined in terms of fore and aft petals? The population defined is of four sets of two petals each. One set consists of two fore petals. One set consists of two aft petals. Each of the other two sets of the population consists of one fore petal and one aft petal. The probability of sets of two fore or two aft petals in the population of sets is 25%, while the probability of a set of one fore and one aft petal is 50%.

One could imagine placing one set of six petals from an amaryllis flower in each of four hundred pairs of hats and blindly selecting a petal from each of two paired hats. One would expect the distribution of paired sets of petals to be roughly one hundred of two fore petals, one hundred of two aft petals and two hundred of one fore and one aft petal. This would represent a material simulation of the purely logical concept of probability.
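Such a simulation can itself be sketched. A minimal Monte Carlo version, with an arbitrary fixed seed so the run is repeatable:

```python
import random

# Monte Carlo version of the 400-pairs-of-hats thought experiment: each hat
# holds the six petals (3 fore, 3 aft); draw one petal blindly from each hat
# of a pair and tally the pair types.
random.seed(0)
hat = ['fore'] * 3 + ['aft'] * 3
tally = {'fore-fore': 0, 'aft-aft': 0, 'mixed': 0}

for _ in range(400):
    a, b = random.choice(hat), random.choice(hat)
    key = f'{a}-{b}' if a == b else 'mixed'
    tally[key] += 1

print(tally)   # roughly {'fore-fore': 100, 'aft-aft': 100, 'mixed': 200}
```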

What is the probability of a set of eighteen randomly selected amaryllis petals containing at least one 'north' petal? The answer, P, equals 1 - ((n - 1)/n)^x, where n = 6 and x = 18. P = 96.2%.
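A direct evaluation of the formula confirms the figure:

```python
# Direct evaluation of P = 1 - ((n - 1)/n)^x for n = 6 petal positions and
# x = 18 random selections.
n, x = 6, 18
print(f'{1 - ((n - 1) / n) ** x:.1%}')   # 96.2%
```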

Notice that these questions in the logic of probability have nothing to do with amaryllis flowers or petals or their manipulation. The petals and their imagined (or actual) manipulation are merely visual aids in a discussion of pure logic. It is materially impossible to select any material thing at random from a set of material things. Selection is always explicable in terms of the material forces involved. Thus, it is always non-random. It is by convention that we equate human ignorance of the details of the material process of selection with mathematical randomness.

We say that the probability of one sperm fertilizing a mammalian egg is one in millions. What we mean is that the fractional concentration of any one sperm is one in millions and that we are ignorant of the detailed non-random physical, chemical and biological processes by which one sperm of the natural set fertilizes the egg.

It is, of course, permissible to use the mathematics of probability in many instances when for any number of reasons we are ignorant of the scientific explanation of material processes. However, we must be constantly aware that the mathematics of probability characterizes human ignorance and not material reality when it is used as a tool to compensate for a lack of knowledge.

The mathematics of probability is an exercise in logic, unrelated to the nature of material things and their measurable properties. In contrast, science is the determination of the mathematical relationships among the measurable properties of things, which properties are characteristic of the nature of material things.
