Every depiction of motion requires a reference that is not in motion. We may identify that which is not in motion as a reference frame or even a point of orientation. Every depiction or description of motion, whether verbal or mathematical, may be rendered as a geometrical sketch. If nothing else, the sketchpad itself, of whatever medium, is a frame of reference.

The Ultimate or Absolute Reference Frame

Of course, there is an ultimate or absolute reference frame. It is the self-orientation of the individual observer of motion or the self-orientation of the individual device recording motion. One cannot get out of one’s own observational skin, but one can do so analytically, through a depiction of motion from a point of orientation of one’s choice.

Funny Thing about Communicating a Depiction of Motion

Obviously, the most valid communication of an observed motion to someone else would be a record of the motion as experienced. The funny thing is that, in our own minds as well as in our communication of an observed motion to others, we do not typically present the motion as it was observed by us. We typically express it to others (and even to ourselves) analytically, for the sake of simplicity, effective communication, and effective self-memorization.

If I were to direct you to the grocery store from my house, the most realistic presentation would be to show you a record of the actual motion as observed by a camcorder in the passenger seat of my car and aimed out the windshield. However, for the clarity of communication, I wouldn’t do that. Instead, I would give you analytical directions on how to drive to the grocery store. I would say, “Make a left turn out of my driveway. Turn left at the second traffic signal. Turn left at the third traffic signal and you will be in the parking lot of the grocery store.” That analysis is readily communicated and followed. However, the route is not the one I would take. The route I would take would pass through only one traffic signal and pass through one 4-way stop. However, an analysis of that route is not as easily communicated or as clearly landmarked as the depiction above involving five traffic signals.

The key to depicting and communicating motion is not in the details of its visual experience. The key to depiction is analytical simplicity. This is particularly evident in the choice of a point of orientation (or the choice of a reference frame) for the analysis. The choice of a point of analytical orientation is not right or wrong. The choice is usually a question of simplicity.

The analytical directions above from my house through five traffic signals to the grocery store are based on an observational orientation point that is the driver of the car. This is the same orientation point of my experience as a driver, but it skips most of the visual details of a camcorder. If the directions were based on a map, marked to show the route from my house to the grocery store, the observational orientation point of the analytical depiction would be that of a virtual observer above the route. An analyst of motion has the option of considering himself as a virtual observer outside the analytical depiction or as a virtual observer within the depiction. An actual observer of motion has no choice but to observe from his actual vantage point.

Comparison of Two Analytical Depictions

As an example, both of the following two depictions or sketches are of only three objects, namely three Spheres, A, B, and C. Both depictions are on a two-dimensional sketchpad.

[Sketch 1 and Sketch 2 appeared here: three Spheres, A, B, and C, on the two-dimensional sketchpad.]

In Sketch 1, Sphere A is represented as a circle at the top center of the pad, but it is in the background, i.e. beyond the surface of the plane of the sketchpad.

Sphere B is in a circular orbit around point B’. Sphere B itself is represented by six circles at different locations along its orbital path. Point B’ is in the plane of the sketchpad.

Sphere C is in a vertical line with Sphere A and point B’. This alignment appears at the center of the sketchpad. However, Sphere C is in the foreground, i.e. above the plane of the sketchpad.

Thus, the centers of Spheres A and C and point B’ define a plane perpendicular to the plane of the sketchpad, a plane that intersects the sketchpad in a vertical line.

The virtual observer is outside of the range of the three spheres and is elevated above the plane of Sphere B’s orbit. (If the virtual observer were in the plane of Sphere B’s orbit, the orbit would appear as a horizontal line rather than as the oval depicted.) The virtual observer would describe Sphere B as in a planar orbit, whose center point, B’, is stationary with respect to his observation point. Spheres A and C are also stationary with respect to the virtual observation point.

In Sketch 2, Sphere A has been moved forward but kept in the plane defined by Spheres A and C and point B’. It now lies on a line perpendicular to the plane of Sphere B’s orbit, a line that intersects the plane of the orbit at its center point, B’; i.e. Sphere A is on the axis of Sphere B’s orbit. Also, in Sketch 2, Sphere A has been moved to a point along the axis just above the orbital plane of Sphere B.

Let it be that the optics of the virtual observer were such that he couldn’t distinguish whether the location of Sphere A is that of Sketch 1 or that of Sketch 2. Depth perception and vertical perception in the case of Sphere A are lacking due to a deficiency in the optics employed by the virtual observer.

Notice that the illumination of Sketches 1 and 2 is implicitly from the same direction from which the Spheres are viewed by the virtual observer, so that all three Spheres are fully lighted in the locations in which they are depicted.

A Different Source of Illumination

Let the illumination implicit in Sketches 1 and 2 be turned off. Also, let Sphere A be the source of light. The light emitted by Sphere A would be reflected off of the surface of Sphere B. Let the virtual observer still lack depth and vertical perception with respect to the location of Sphere A relative to the planar orbit of Sphere B. However, let the new optics employed be such that the virtual observer could discern the pattern of light reflected by Sphere B in the six locations of Sphere B depicted. The location of Sphere A relative to the planar orbit of Sphere B could then be determined based on the pattern of light reflected.

The Two Depictions Modified when Sphere A is the Source of Light

[Sketches 1 and 2, modified with Sphere A as the source of light, appeared here.]
Given the relative location of Sphere A in Sketch 1 with A as a source of light, the observed pattern of reflection from Sphere B, in the six depicted locations in its orbit, would be from the top of Sphere B as depicted.

Given the relative location of Sphere A in Sketch 2 with A as a source of light, the observed pattern of reflection from Sphere B, in the six depicted locations in its orbit, would vary as depicted.

Let it be that the pattern of light from Sphere A, reflected by Sphere B and detected by the virtual observer, is that of Sketch 2 with A as a source of light. The virtual observer would be prompted to state that Sphere B was in orbit around Sphere A.

Implicitly in all four Sketches, the views of Spheres A and B would not be significantly altered, if the location of the virtual observer were Sphere C.

Conclusion

All four Sketches would not be significantly altered, if the implicit location of the virtual observer were explicitly identified as that of Sphere C. Consequently, all four depictions are essentially Sphere-C-ocentric depictions in which Spheres A and C are not in relative motion to one another.

For a set of two analytical depictions comparable to the set of two Sketches with A as a light source, see the University of Arizona archives. These archives present two locations of Sphere A, the Sun, with respect to the orbit of Venus (Sphere B). The two different patterns of Sunlight reflected from Venus are viewed by an observer on Sphere C, the Earth, which is depicted as not in relative motion to the Sun. These two analytical depictions are thus comparable to Sketches 1 and 2 with Sphere A as a source of light.

In the referenced University of Arizona archives, simplicity of analytical depiction is achieved by presenting the Sun and Earth as stationary with respect to one another. No one accepts that as true, yet this does not invalidate these two analytical depictions for their purpose, which is simplicity in illustrating two different patterns in the phases of Venus dependent upon the location of the Sun with respect to the orbit of Venus.

Another example: Simplicity of analytical expression is achieved by depicting the Earth as not rotating on its axis when the Sun’s diurnal motion relative to the Earth is accurately identified in common parlance by Sunrise and Sunset.

It is simplicity of expression and communication which determines how we analytically depict motions. An analytical depiction of motion in its choice of orientation is validated by simplicity and utility, not by its being an exclusionary singularity of what is true. A different choice of orientation may be rejected for being more complex and less useful, but not for being inaccurate.

In the case of the two depictions of motion yielding different patterns in the phases of Venus in the University of Arizona archives, the orientation is the same for both depictions. It is geocentric. The depictions are equal in simplicity and utility. The two depictions differ in the location of the Sun relative to a static orbit of Venus and a static Earth. With the Earth as the orientation point for both depictions, the static location of the Sun could be determined by the pattern of the phases of Venus actually observed, which is that of the second depiction. It would be wrong to discredit these two depictions because they represent the Sun as static with respect to the Earth. That is not their purpose. In fact to represent the Sun’s motion with respect to the Earth during the cycle of the orbit of Venus would obscure the purpose of these analytical depictions.

In his numerical exposition of his analogy, “Climbing Mount Improbable”, Richard Dawkins forgot that Darwinian evolution involves random mutation in the generation of a pool of mutations. Granting the legitimacy of a pool of mutations non-randomly generated, I discussed the errors within his analogy in my previous essay. In this essay I discuss why Dawkins was oblivious to his forgetting that Darwinian evolution involves random mutation.

The Two Algorithms of Darwinian Evolution

Darwinian evolution consists of two algorithms, i.e. two logical processes. The first algorithm is the random generation of a pool of integers to the base N, where the probability of the generation of any integer is 1/N. The second algorithm is the screening of each integer in the pool at a discrimination ratio of 1/N. This screening, called natural selection, culls all integers except one specific integer, all copies of which, if there are any, survive screening.
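
As a sketch of these two algorithms in code (the values of N, the pool size, and the one surviving integer are assumptions chosen for illustration; none is specified above):

```python
import random

N = 6            # the base: the number of defined mutations (assumed)
POOL_SIZE = 10   # assumed size of the randomly generated pool
SURVIVOR = 3     # the one specific integer that passes screening (assumed)

# First algorithm: random generation of a pool of integers to the base N,
# each integer generated with probability 1/N.
pool = [random.randrange(N) for _ in range(POOL_SIZE)]

# Second algorithm: non-random screening ('natural selection') at a
# discrimination ratio of 1/N; all copies of the one specific integer
# survive, every other integer is culled.
survivors = [i for i in pool if i == SURVIVOR]

print(pool, "->", survivors)
```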

Of course, the algorithms are not typically expressed in such stark mathematical lingo. Darwinian evolution is typically described as the formation of a pool of biological mutations, by random mutation of an existing species, followed by the survival via natural selection of but one of these mutations. Thereby a new biological species is formed, because it alone among the mutations was capable of fitting a new ecological niche, the entrance to which is guarded by natural selection.

A Blunter Description

A blunter description of Darwinian evolution would be: A pair of algorithms by which a numerical ratio, 1/N, is initially labelled random selection and subsequently relabeled non-random selection.

In a non-biological but material analogy, a card, selected at random from a deck, has a probability of 1/52 of being any specific card. If the one card, in this pool of one random mutation, is then non-randomly screened for Aces of Spades, the non-random discrimination ratio of screening is 1/52. The probability that the card will survive the non-random screening is 1/52.

In a similar analogy, one card is selected at random from each of 120 different decks of cards. Each card, randomly selected, has a probability of 1/52 of being any specific card. Each card in the pool of 120 mutations is then non-randomly screened for Aces of Spades. All other specific cards are culled. The non-random discrimination ratio is 1/52. The probability that at least one card will survive the non-random screening is 90%.
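
These card figures are easy to verify; a minimal check using only the numbers given above:

```python
# One card from one deck: probability 1/52 of being the Ace of Spades.
p_one_deck = 1 / 52

# One card from each of 120 decks: probability that at least one of the
# 120 randomly selected cards survives the 1/52 screen.
p_at_least_one = 1 - (51 / 52) ** 120

print(f"{p_one_deck:.4f}")      # 0.0192
print(f"{p_at_least_one:.1%}")  # about 90.3%, the essay's 90%
```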

Dawkins’ Numerical Analogy of Climbing Mount Improbable

In 1991, Richard Dawkins proposed a numerical analogy to Darwinian evolution. He called it “Climbing Mount Improbable”. It concerned a single stage of Darwinian evolution affecting three mutation sites of six mutations each, contrasted to a series of three substages, where each substage affected only one of the three mutation sites. On page 122 of The God Delusion, Dawkins cited a similar analogy of a bank robber’s attempting to open two different bank vaults having combination locks with the same number of dials. The combination lock of the one vault was constructed so that the dials were interdependent in controlling one locking mechanism, while for the other vault, the dials were independent of one another, each controlling its own locking mechanism.

In 1991, Dawkins noted that in his numerical analogy, the single stage defined 6 x 6 x 6 = 216 mutations, while each substage defined 6 mutations, for a total of 18 mutations for the series of three substages. In this analogy to Darwinian evolution, there was no random mutation. Instead, the single stage pool of mutations was a given. The pool consisted of one copy of each of the 216 defined mutations. Similarly, there was no random mutation in any of the three substages. In each substage a pool of 6 mutations was a given; each pool contained one copy of each of the six mutations defined by that substage.

Dawkins’ analogy was solely to natural selection in Darwinian evolution, completely forgetting that the random generation of mutations is characteristic of Darwinian evolution. Not only was random mutation forgotten, but natural selection in the analogy differed fundamentally from Darwinian natural selection.

Outline of Natural Selection in Darwinian Evolution

In Darwinian evolution, natural selection screens every mutation in a pool of randomly generated mutations at a discrimination ratio of 1/N, where N is the number of different mutations defined in that stage of evolution. All those mutations in the pool that are the one specific mutation, if any, survive screening by natural selection. Mutations other than that one specific mutation are culled by the screening process. In Darwinian evolution, all mutations in the pool are subjected to screening by natural selection. They are not screened consecutively, as if natural selection were a humanly engineered screening device that could only test mutations consecutively, one at a time.

Outline of Natural Selection in Dawkins’ Analogy

In Dawkins’ analogy natural selection screens the mutations in the pool consecutively, one at a time. It stops the screening when the one and only specific mutation in the pool tests positive, thereby passing the screen of natural selection. Thus, not all mutations other than the one specific mutation are culled. Dawkins affirms these two requirements, (1) consecutive screening and (2) the stopping of screening, by stating that the maximum number of tries to achieve the success of natural selection in the single stage is 216. Each ‘try’ is the screening of a mutation in a linear sequence. This consecutive screening ceases with the positive screen test, which would occur at a maximum of 216 screenings or ‘tries’, but only if the one and only specific mutation that can test positive is last in the line of mutations consecutively screened.
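
A sketch of this version of natural selection, as the essay characterizes it (the identity of the one passing mutation is an arbitrary assumption for the illustration):

```python
import random

N = 216
pool = list(range(N))        # one copy of each defined mutation
random.shuffle(pool)         # the random linear screening sequence
passing_mutation = 57        # assumed: the one and only positive mutation

tries = 0
for mutation in pool:
    tries += 1               # each screening of a mutation is one 'try'
    if mutation == passing_mutation:
        break                # screening stops at the positive test

# 'tries' never exceeds 216, and reaches 216 only when the passing
# mutation happens to be last in the random sequence.
print(tries)
```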

Dawkins Mistakes the Discrimination Ratio of Natural Selection for a Probability

Dawkins mistakes the discrimination ratio of natural selection, 1/216, for a probability, when in fact in Darwinian evolution, natural selection is non-random. It is a constant, a screening ratio.

For the given pool, namely one copy of each defined mutation, and where mutations are screened consecutively in a random linear sequence, i.e. within his strange analogy, Dawkins is right that the probability of a mutation in the pool passing the screen of natural selection is 1/216, when the total screened is 1.

This probability increases to 2/216, then to 3/216, etc. as mutations consecutively fail the screen of natural selection. Nevertheless, the screening by natural selection remains non-random at a constant discrimination ratio of 1/216, for every mutation screened, even though the linear screening sequence is of random order.

It is apparent that Dawkins confused the discrimination ratio of natural selection with the probability of the generation of mutation, which is numerically the same, 1/216. This is excusable because the essence of Darwinian evolution is labeling a numerical ratio, initially as the probability of the random generation of a mutation and then relabeling the same numerical ratio as the non-random discrimination ratio of natural selection. Dawkins included both random selection and non-random selection in his analogy in keeping with Darwinian evolution. However, instead of identifying the generation of mutations as random and the natural selection screening of mutations for survival as non-random in the Darwinian tradition, Dawkins applied both labels to the screening process of natural selection.

Dawkins made the same errors in his explanation of his analogy for each of the three substages of the series.

Gradualism: The Single Stage vs. the Series of Substages in Dawkins’ Version of Natural Selection

In Dawkins’ version, Natural Selection ceases screening mutations in a random sequence as soon as the one specific mutation that yields a positive result is screened. This is a maximum of six for each substage in his numerical illustration and results in the 100% probability of success of natural selection. For the series of three substages the maximum is a total of 18 mutations screened. In contrast, the maximum number of screenings necessary for 100% success for the single stage is 216.

What happens if only 5 screenings in a random sequence are applied in each substage, whether or not a positive screen test occurs? The probability of success of natural selection would be 5/6 for each substage, yielding a probability of (5/6)^3 for the series, i.e. 57.87%. To attain this probability for the single stage, the number of screenings in the random sequence would be a total of 125. For the single stage, 125/216 = 57.87% = (5/6)^3.

In Dawkins’ scheme of random natural selection, at a probability of success of 100%, the series of substages is more efficient in the total number of mutations screened by a factor of 216/18 = 12. The efficiency in favor of the series at a probability of success of 57.87% is 125/15 = 8.33.
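
A quick check of this arithmetic (exact fractions avoid any rounding):

```python
from fractions import Fraction

p_substage = Fraction(5, 6)      # 5 screenings out of 6 per substage
p_series = p_substage ** 3       # the series of three substages
p_single = Fraction(125, 216)    # 125 screenings out of 216, single stage

print(float(p_series))           # 0.5787..., i.e. 57.87%
print(p_series == p_single)      # True: the two probabilities are equal
print(216 / 18, 125 / 15)        # efficiency factors: 12.0 and 8.33...
```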

The Designs and the Roles of Designers in Dawkins’ Analogy of Darwinian Evolution

In Dawkins’ analogy to gradualism in Darwinian evolution, a bicycle lock is designed with three dials of six positions each, for a total of 216 defined mutations of the lock, which the designer incorporates into his artifact. In the mechanism, one of these mutations is designed to open the lock. This lock represents Darwinian evolution without gradualism. The designer makes a second lock in which the dials are not interdependent, but in which each dial controls its own locking mechanism by one of six mutations. This lock incorporates the gradualism of Darwinian evolution.

In the analogy, the pool of mutations is defined by the designing locksmith as one copy of each mutation. His artifact, the lock, incorporates this as a mechanical pool. Thus, the generation of mutations is non-random.

In Dawkins’ analogy a second intelligent agent performs the action of natural selection. There would seem to be no problem with this as an analogy because natural selection in Darwinian evolution is non-random. However, this second intelligent agent is a bicycle thief, who is ignorant of the locksmith’s design with respect to the mutation that opens the lock. In his ignorance, he must employ random selection. Of course, in his own mind he would screen mutations in an orderly fashion, and he would screen them consecutively in that “orderly” sequence. Nevertheless, any list composed by the bicycle thief, though non-random to him, would be random with respect to the actual unlocking mutation, due to the thief’s ignorance.

Conclusion

In his analogy, “Climbing Mount Improbable”, Richard Dawkins forgot that Darwinian evolution involves random mutation in the generation of a pool of mutations. However, he preserved a form of random selection in his analogy in that natural selection screened a pool of mutations consecutively in a random sequence.

Dawkins converted natural selection, which is imposed by an environmental niche or ecosystem in Darwinian evolution, into a bicycle thief, capable of screening only one mutation at a time in a linear, random sequence. The thief stopped the screening and culling, once the one and only specific mutation in Dawkins’ analogy yielded a positive screen test. Thereby, Dawkins could identify the ‘maximum’ number of ‘tries’ by the thief as 216 (and 6 for each substage). However, by stopping the screening by ‘natural selection’ with the positive screen test, all those mutations, waiting in line still to be screened, survive, even though each, according to Dawkins’ analogy, would have been culled, if screened.

Of course, it is not inherently wrong to choose intelligent agents or human artifacts as analogies to illustrate scientific theories. Such an analogy does fail when it identifies the logically random (the generation of mutations) as non-random and the logically non-random (the differential survival of mutations) as random.

Mathematical probability is solely logical. It can be applied to material sets only by analogy. In such analogies, human knowledge of material structures and/or forces is equated with non-random selection, while human ignorance of material structures and/or forces is equated with random selection. In his analogy, Dawkins did this, but not in accord with the Darwinian algorithms of evolution.

Richard Dawkins’ life’s work, through the clarity of his arguments and in his own words, is the affirming of the absurdity of Darwinian evolution.

Superficially it would appear that Richard Dawkins’ life’s work is demonstrating how the Darwinian evolution of complex biological structures is not absurd. Dawkins believes that he has demonstrated how the gradualism of evolution provides an escape from chance, an escape from absurdity. In his 2012 debate with Cardinal Pell, Dawkins identified Darwinian evolution and its non-randomness as his life’s work. However, over the years from 1991 to 2006, Dawkins had presented a cogent argument which concluded that Darwinian evolution is an absurdity. The expressions ‘life’s work’, ‘how to escape from chance’, and ‘absurd’ are from Dawkins’ own lexicon, which he employed in his characterization of Darwinian evolution.

Two Errors of Commission Plus Two Errors of Omission, 1991

In his 1991 lecture, “Climbing Mount Improbable,” Dawkins elucidated the role of gradualism in Darwinian evolution by contrasting (1) Darwinian evolution in a single stage involving three mutation sites of six mutations each to (2) Darwinian evolution in a series of three substages, serially involving the three mutation sites, one at a time. The single stage was analogous to attempting to reach the summit of Mt. Improbable in a single leap at a side of the mountain that was a sheer cliff. The series of substages was analogous to a gradual ascent up Mt. Improbable by a gentle slope opposite to the sheer cliff. Dawkins made two errors of commission and two errors of omission, which led to an erroneous conclusion.

Instead of concluding that sub-staging increased the efficiency of mutation while having no effect on the probability of success of natural selection, Dawkins falsely concluded that the gradualism of sub-staging increased the probability of success of natural selection. Gradualism thereby provided the escape from chance and absurdity. (“Chance is not a solution, . . . and no sane biologist ever said that it was.”, The God Delusion, p 119-120)

In 1991, Dawkins noted that three mutation sites, of six mutations each, identifies 216 different mutations. Each of the 216 different mutations is liable to generation in a single stage of Darwinian evolution. In contrast, only 16 of these 216 different mutations are liable to generation in the series of three substages of Darwinian evolution, where, in each sub-stage, mutation is restricted to one of the three mutation sites.

In his illustration of Darwinian evolution, Dawkins used the word ‘try’ in a discussion of probability. In the context of probability, ‘try’ is a synonym for ‘random selection’.

Dawkins made an error of commission by labelling each of these 216 different mutations a try, rather than correctly as a mutation. Further, Dawkins committed a second error of commission by stating that 216 was the maximum number of tries to achieve success in the single stage. In fact, 216 is the minimum size of a set containing each of the defined mutations required to achieve a probability of 100% success of natural selection in a single stage of non-random Darwinian evolution.

Similarly, Dawkins identified 18 as the maximum number of tries, rather than as the sum of the minimum sizes of the three subsets, each containing the six defined mutations, required to achieve a probability of 100% success in the series of substages of non-random Darwinian evolution.

Dawkins made an error of omission by failing to identify the set of mutations (not tries) as non-randomly generated. By this omission and the erroneous use of the word ‘try’, Dawkins implied that the generation of the mutations was random. Dawkins made a second error of omission by failing to identify 100% as the probability of success of natural selection for both the single stage and the series of three sub-stages. By this omission with respect to 100% probability and the use of the word ‘try’, Dawkins implied that the success of natural selection differed in probability between the single stage and the series of sub-stages. That was his erroneous conclusion, namely that the series of substages increases the probability of success of natural selection, when compared to the probability of success of the single stage.

His errors of commission and omission led Dawkins to conclude that the single stage had a probability of success of natural selection equal to 1/216, while each of the sub-stages in the series had the greater probability of success of natural selection of 1/6.

In contrast to his false conclusions, what Dawkins had actually demonstrated was that the series of substages was more efficient with respect to the non-random generation of mutations than was the single stage at a level of probability of 100% success of natural selection. The mutational efficiency factor in favor of the series of sub-stages was 216/18 or 12 fold.

When the generation of mutations is random, as it is in Darwinian evolution, similar mutational efficiencies in favor of the series of sub-stages are achieved. The 89.17% level of probability of success of natural selection requires the generation of 479 random mutations in the single stage, while that level of probability overall for the series of substages requires a total of only 54 random mutations. The mutational efficiency factor is 479/54 or 8.87 in favor of the series of substages. (The probability of success within each substage is 96.25%, due to the random generation of 18 mutations per substage. Thereby, 89.17%, which is 96.25% cubed, is the overall probability for the series of substages, while the overall total number of mutations is 54.)
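
These percentages follow from the figures already given and can be checked directly (the formulas below simply restate the essay’s numbers: a 1/216 target ratio for the single stage and a 1/6 target ratio per substage):

```python
# Single stage: 479 random mutations, each with probability 1/216 of
# being the one mutation that survives natural selection.
p_single = 1 - (215 / 216) ** 479

# Series: 18 random mutations per substage (54 in total), each with
# probability 1/6 of being the substage's surviving mutation.
p_substage = 1 - (5 / 6) ** 18
p_series = p_substage ** 3

print(f"{p_single:.2%}")    # ~89.17%
print(f"{p_substage:.2%}")  # ~96.24%
print(f"{p_series:.2%}")    # ~89.15%, essentially the single-stage value
print(f"{479 / 54:.2f}")    # 8.87, the mutational efficiency factor
```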

The Problem of Improbability and Its Mathematical Solution, 2006

In 2006, Dawkins identified low values of the probability of evolutionary success of large single stages of Darwinian evolution both as “the problem of how to escape from chance” (The God Delusion, p 120) and as “the problem of improbability” (p 121). Dawkins had erroneously convinced himself in 1991 that replacing a single stage of Darwinian evolution with a series of substages increases the probability of success of natural selection, when in fact it has no effect on the probability. Instead, it increases the efficiency of mutation as Dawkins, himself, had demonstrated, but didn’t notice.

Nonetheless, Dawkins described the series of substages as a process of accumulation. Accumulation cannot refer to the probabilities of the substages, which as factors of multiplication, yield the same overall probability of success as the single stage. An accumulating process can only refer to a sum. In this case, it refers to the sum of the mutations of the series of substages, which total is dramatically less than the number of mutations of the single stage for any given level of probability of success of natural selection. What Dawkins had demonstrated is that replacing a single stage of Darwinian evolution with a series of substages increases the efficiency of mutation, while having no effect on the probability of success of natural selection. This is an important insight into the algorithms of Darwinian evolution. We should be grateful to Dawkins for providing this insight to us by his 1991 demonstration.

Alluding to his erroneous solution to the ‘problem of improbability’, which in analogy he had dubbed “Climbing Mount Improbable”, Dawkins identified the single stage of Darwinian evolution as ‘absurd’, as attempting to leap in a single bound from the foot of Mt. Improbable to its summit at a side of the mountain that is a sheer cliff (p 122). In the next sentence he identified his alleged mathematical solution, namely a series of substages, as analogous to walking up the gentle slope of the mountain on a side opposite to the sheer cliff. He referred to the solution to absurdity, not as Darwinian evolution in a series of substages, but simply as ‘evolution’. Dawkins didn’t realize (1) that he had just identified Darwinian evolution in a single stage as ‘absurd’ because of its low value of probability of success and (2) that the series of substages had the same overall, and thereby the same ‘absurd’, probability of success.

Dawkins Utterly Rejects Darwinian Evolution in Principle in 2011

In 2006, characterizing Darwinian evolution of a complex biological structure in a single stage both as ‘absurd’ and as ‘prohibitively improbable’, Dawkins claimed that a series of substages of Darwinian evolution rescues Darwinian evolution in a single stage from absurdity and permits Darwinian evolution to ‘escape from chance’, by solving the ‘problem of improbability’. The series of substages “breaks the problem of improbability up into small pieces. Each piece is slightly improbable, but not prohibitively so.” (The God Delusion, p 120-122)

However, in 2011, Dawkins affirmed a principle which utterly rejects his distinction in kind among values of probability, whether the kinds are absurd vs. non-absurd or prohibitive vs. non-prohibitive. By this principle there can be no such distinctions in kind, and consequently there can be no problem of improbability in Darwinian evolution requiring a solution.

This valid principle averred by Dawkins is: Any two values of a variable, which is continuous over its range of definition, differ in degree and not in kind. Dawkins labeled the violation of this principle an act of ‘tyranny of the discontinuous mind’.

In principle, Darwinian evolution cannot be either absurd or not absurd, depending on the numerical value of the probability of success of natural selection. Darwinian evolution cannot be either prohibitive or not prohibitive, depending on the numerical value of the probability of success of natural selection.

Probability is a variable continuous over its range of definition, 0 to 1. Every value of probability in this range is of equal validity. The probability of 1 element in a set of 20 is 5.0%. The number of linear sequences of 20 elements is a set of 2.4 quintillion sequences (2.4 x 10^18). The probability of any of these linear sequences is 4.1 x 10^-17%. These two values of probability differ in degree by a factor of roughly 10^17, but they do not differ in kind, because no two values of probability differ from one another in kind, only in degree. This is the valid principle affirmed by Richard Dawkins in his 2011 essay, “The Tyranny of the Discontinuous Mind”.
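
These two values, and the factor between them, can be checked with a few lines (nothing here beyond the arithmetic stated in the paragraph):

```python
import math

p_element = 1 / 20                       # 1 element in a set of 20
n_sequences = math.factorial(20)         # linear sequences of 20 elements
p_sequence = 1 / n_sequences

print(f"{p_element:.1%}")                # 5.0%
print(f"{n_sequences:.2e}")              # 2.43e+18 sequences
print(f"{100 * p_sequence:.1e} %")       # 4.1e-17 %
print(f"{p_element / p_sequence:.1e}")   # 1.2e+17, the factor of degree
```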

Mutational Variants as Differing in Degree, Not Kind

In the 2011 essay, “The Tyranny of the Discontinuous Mind”, Dawkins not only affirms that values of a continuous variable, such as probability, differ only in degree, not kind, over the range of numerical definition, but also that mutants in an evolutionary line of descent differ only in degree, not kind. Furthermore, Dawkins claimed that the continuity of mutational forms would be apparent if the intermediate mutants, having been randomly generated, had not become extinct.

But for the extinction of the intermediates which connect humans to the ancestor we share with pigs (it pursued its shrew-like existence 85 million years ago in the shadow of the dinosaurs), and but for the extinction of the intermediates that connect the same ancestor to modern pigs, there would be no clear separation between Homo sapiens and Sus scrofa. . . .
It is only the discontinuous mind that insists on drawing a hard and fast line between a species and the ancestral species that birthed it. Evolutionary change is gradual: there never was a line, never a line between any species and its evolutionary precursor.

Dawkins didn’t realize that he was a victim of the tyranny of his own discontinuous mind, when he made the distinction of kinds in the continuous variable probability, namely prohibitive and non-prohibitive, as well as, absurd and not absurd.

He also failed to realize that he had demonstrated an important feature of Darwinian evolution due to its not occurring in large, single stages, but through its gradual occurrence in series of substages. That feature is the efficiency of mutation, by which most intermediates do not become extinct. They cannot become extinct, because they cannot be generated due to Darwinian gradualism.

In Dawkins’ illustration (three mutation sites of 6 mutations each), the 214 different intermediates between the initial mutation and the mutation surviving natural selection are all liable to be generated in a single stage of Darwinian evolution. In stark contrast, the series of three substages precludes the generation of 200 of the 214 intermediates. In the series only 14 different intermediates are liable to generation, in addition to the initial mutation and the mutation surviving the final natural selection of the series.

It is not “But for the extinction of the intermediates”. It is, as is evident by Dawkins’ own demonstration, that the gradualism of Darwinian evolution eliminates the possibility of the generation of most intermediates. It is not by extinction, but by Darwinian definition that most intermediates do not and never have existed. We have Richard Dawkins to thank for this insight. Similarly, it is by definition within the Darwinian algorithms, that the generation of mutations is random and the differential survival of mutations is non-random, rather than vice versa.

Conclusion

Within any intellectual discipline, self-consistency is necessary as a mark of validity.

Richard Dawkins has significantly advanced our understanding of Darwinian evolution, most notably (1) by identifying the role of gradualism as increasing the efficiency of mutation while having no effect on the probability of success of natural selection and (2) by acknowledging the principle that any two values of the probability of success of natural selection differ only in degree, not kind.

That the series of sub-stages has the same probability of success of natural selection as the single stage means: In a single stage of Darwinian evolution, the shrew-like creature, living in the shadow of the dinosaurs, had the same probability of giving direct birth to a modern human, which survived a single culling by natural selection, as that shrew-like creature had of being the ancient ancestor of the modern human through a series of substages of Darwinian evolution.

Dawkins demonstrated that in a single stage, all possible mutational intermediates between the shrew-like creature and modern man are liable to be its direct offspring. Further, he showed that the series of substages, by changing the shrew-like creature from immediate parent into an ancient ancestor of the modern human, eliminates the possible generation of most of the intermediates. In other words, Dawkins demonstrated that the role of substages is not to increase the probability of success of natural selection, but to increase the efficiency of mutation. The total number of intermediates is less in the series because many intermediate mutations, defined by and liable to be generated in a single stage, cannot be generated in the series of substages.

The importance in advancing our understanding of the Darwinian algorithms, for which we are indebted to Richard Dawkins, is not diminished by his woeful inconsistencies: (1) in failing to realize the importance of his demonstration of mutational efficiency in gradual Darwinian evolution, and (2) in distorting Darwinian evolution by his false distinctions of kind, namely, of absurd/not absurd and prohibitive/not prohibitive. He thereby violated the principle, which he acknowledged in 2011, that any two values of a continuous variable, such as probability, differ in degree, not kind.

I hope that someday soon, Richard Dawkins will be given an award for his elucidation of the role of gradualism in Darwinian evolution. Gradualism increases the efficiency of mutational generation by eliminating the possibility of the generation of most mutational intermediates between species in a line of descent, while having no effect on the probability of success of natural selection.

The probability of the distributions of N elements into S subsets is illustrated using the dealing of playing cards as an analogy.

Analogy of a Full Deck

The number of different deals of 4 hands of 13 cards is:

52!/[4!(13!)^4] = 2 235 197 406 895 366 368 301 560 000 ≈ 2.235 x 10^27

The probability of any deal is the inverse of the number of different deals:

4!(13!)^4 / 52! = 4.4738777743527142877118854689812 x 10^-28

Note: The symbol, !, is factorial, e.g., 4! = 4 x 3 x 2 x 1 = 24

Four Flushes

One of these trillions of trillions of deals is four hands, each a flush of 13 cards.

On page 25 of Answering the New Atheism, Hahn and Wiker propose that the reasonable suspicion, based on the value of probability, is that somebody stacked the deck, if the result of a deal in bridge is four hands, each a flush of a single suit (spades, hearts, diamonds, and clubs). However, since this is the probability of every deal of 4 hands of 13 cards each, one must reach the same conclusion after every deal in bridge, namely that someone stacked the deck. Hahn and Wiker concur with Richard Dawkins that some values of probability can serve as explanations, while others cannot (The God Delusion, page 121f, Dawkins’ identification of and solution to a ‘problem of improbability’ for low, but not high, values of probability). This is false because all values of probability are of equal validity. A probability of a deal in bridge at the tiny value of 4.4738777743527142877118854689812 x 10^-28 has the same validity as a probability of 0.5 for the flip of a coin. They are equal in validity as ratios of a subset to a set. There is no need to solve ‘the problem of improbability’ of a deal in bridge, just as there is no need to solve ‘the problem of improbability’ of the flip of a coin.

The General Formula

Given a set of N elements, where each element has a dual identity of S-E, such that N may be viewed as S subsets of E elements each and also as E subsets of S elements each, then the total number of different distributions, D, if the N elements are randomly divided into S subsets (i.e. if the elements are distributed irrespective of their individual identity) is:

D = (N!)/[(S!)×(E!)^S]

For every deal in bridge, N = 52, S = 4, and E = 13, as illustrated above.

N! in the formula is the number of different sequences of a set of N elements. This number is diminished by S! in the denominator. S! is the number of sequences of S sets. We are not distinguishing sets by their order.

E! is the number of sequences within each set. We don’t care about the order of the elements in a set, just as a poker player with a hand containing four aces and a fifth card doesn’t care about the order in which the aces and the fifth card are dealt. We have to divide by E! for each set, i.e. we divide by (E!)^S.
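
The formula is easy to check numerically against the bridge figures above; a minimal sketch:

```python
import math

def distributions(N, S, E):
    """Number of distributions D = N!/[S!(E!)^S]; requires N = S*E."""
    assert N == S * E, "each element needs the dual identity S-E"
    return math.factorial(N) // (math.factorial(S) * math.factorial(E) ** S)

D = distributions(52, 4, 13)
print(D)         # 2235197406895366368301560000, i.e. about 2.235 x 10^27
print(1 / D)     # about 4.474e-28, the probability of any one deal
```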

The validity of the formula may be illustrated for sets much smaller than N = 52.

Examples of Low Values of N

For N = 6 cards, S = 2 suits (e.g. R and G), and E = 3 (e.g. 1, 2, 3)

D = (N!)/[(S!)×(E!)^S]

D = (6!)/[(2!)×(3!)^2] = 10

The 10 possible distributions, into two subsets of three elements each (where the sequence of the two subsets and the sequences within each of the two subsets are irrelevant) are:
1) R1, R2, R3 and G1, G2, G3
2) R1, R2, G1 and R3, G2, G3
3) R1, R2, G2 and R3, G1, G3
4) R1, R2, G3 and R3, G1, G2
5) R1, R3, G1 and R2, G2, G3
6) R1, R3, G2 and R2, G1, G3
7) R1, R3, G3 and R2, G1, G2
8) R2, R3, G1 and R1, G2, G3
9) R2, R3, G2 and R1, G1, G3
10) R2, R3, G3 and R1, G1, G2

The probability of any distribution is 1/D = 1/10 = 0.1
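
The same count can be confirmed by brute enumeration (a check added here for illustration; pairing each hand with its complement mirrors the irrelevance of subset order):

```python
from itertools import combinations

cards = ["R1", "R2", "R3", "G1", "G2", "G3"]
seen = set()
for hand in combinations(cards, 3):
    rest = tuple(c for c in cards if c not in hand)
    # The order of the two subsets is irrelevant, so store the pair sorted.
    seen.add(tuple(sorted([hand, rest])))

print(len(seen))   # 10, matching D and the list above
```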

An even simpler set would be N = 4, S = 2, and E = 2.

One such set would be the four playing cards: The Ace of Hearts, AH; the Deuce of Hearts, DH; the Ace of Spades, AS; and the Deuce of Spades, DS.

The number of different distributions or deals of two sets, S, and two cards per set, E, would yield:

D = (N!)/[(S!)×(E!)^S]

D = (4!)/[(2!)×(2!)^2] = 3

The three possible distributions are:

1) AH, DH and AS, DS
2) AH, AS and DH, DS
3) AH, DS and DH, AS

Remember, of the two sets, S, whether one is distinguished as first rather than second is of no consequence, nor is the order of the two cards within each set. The probability of any distribution is 1/D = 1/3 = 0.3333…

Note:

Probability is the ratio of a subset to a set. Randomness refers to the algorithmic processing of a set of elements according to their generic identity as members of a set. Non-randomness refers to the algorithmic processing of a set of elements according to each element’s specific identity.
An analogy to a random algorithm is the dealing of cards face down: the cards are processed according to their generic identity. They all have the same generic identity, namely the design on the back of the cards. An analogy to a non-random algorithm is the processing of the cards by each player as he sorts the cards dealt to him, according to their specific identity, namely each card’s face identity.

The Definition:

Science is the inference of mathematical relationships among the properties of material reality as those properties are quantified through the art of measurement, thereby achieving an understanding of the natures of material things at the level of their common material properties.

Example One: Biological Taxonomy

Biological Taxonomy classifies living things, themselves a mathematical subset, into further subsets, often on the basis of geometry or simply length. The set of living things is a subset of the set of material things.

As an illustration of how taxonomy fulfills the definition of science, consider a field guide to the birds of North America, and in particular the distinction between the Eastern bluebird and the robin. In order to avoid too much tedium, not all of the subsets involved in the identification of the Eastern bluebird and the robin are explicitly stated below.

North America is a subset of the land masses of the earth. Birds are a subset of warm-blooded animals which lay eggs and possess four appendages, of which one pair are wings. Thrushes, which typically have spotted breasts, a robin-like bill, and a diet of worms and fruit, are a subset of songbirds. Robins and bluebirds are two subsets of thrushes, which differ not only in color but in length, 8.5″ and 5.5″, respectively.

Notice that science is analytical. Yet, although detailed, the analysis of science does not match the subtle detail of sense observation itself, as is evident in the following. The taxonomic identification above is crude and would still be crude even if much more elaborate. In contrast, our sense knowledge is comprehensive and refined. This is evident in how easily one could distinguish among a starling, a grackle, and a robin in dim light, just by the way each walks. Yet, it would be difficult to verbalize adequately the subtle sensual analysis upon which these distinctions in walking depend.

Also, notice that measurement is scientific even though it need not be as precise as it often is through the use of sophisticated instrumentation made possible by the art of engineering.

Example Two: The Ratio of Force to Acceleration

A well-established inference in Physics is that the ratio of an unbalanced force to its corresponding acceleration for any given body is a constant, which has been named, mass. This inference requires instrumental measurements, which in turn are dependent upon standards of force, mass, length, and time. That standards of measurement are human choices is evident in the history of the current standard for time. Indeed measurement itself is a human activity.

Metaphysical Time and Quantified Time

The distinction between metaphysical time and quantified time helps elucidate the character of scientific knowledge.

Fundamentally, time is a quality. It is the existential property of being mutable, i.e. subject to change. In contrast, we almost always think of time as a quantity. However, the quantification of time is the human activity of comparing (measuring) one motion with another motion chosen as a standard. The distinction between time as the existential quality of mutability and the human concept of quantified time is evident in a proposal by the Caltech cosmologist Sean Carroll.

Carroll proposed that a thoroughly defined, and thereby possible, universe would be one consisting of a single elementary particle moving through Newtonian Space (Video, beginning about minute 9:35). However, a single elementary particle defines no space at all, let alone Newtonian three-dimensional space. It would take two elementary particles to define a space of one dimension.

Although Carroll’s universe of one elementary particle moving through three-dimensional space is an impossibility due to its self-inconsistencies, it does highlight the distinction between real time (the condition of mutability) and quantified time (a human thought of comparison, i.e. measurement). In Carroll’s simplest possible universe of one elementary particle, there could be no location, no mutability (metaphysical time), and no quantified time.

In this vein of conjecture, the simplest possible universe exhibiting location and real, i.e. metaphysical, time (mutability) would be a universe of two elementary particles. These two particles could be mutable by moving closer together and/or farther apart in the one dimension which they define. Thereby there would be metaphysical time, i.e. there would exist two material entities subject to change. However, this mutability could not be quantified as time. It takes two independent motions to quantify time by the human act of comparing one motion to the other, one of the motions having been chosen as a standard of measurement, and in a universe consisting of only two existent material particles there is no second independent motion to serve as that standard.

Carroll’s universe of one particle moving in three-dimensional space is not a possibility. His particle is not a material entity. Rather, it is a locus of mathematical points as a function of a linear algebraic variable, t. Consequently, his universe is solely a mathematical construct of human thought, which has no potential for existence.

The validity of the mathematical equations representing the measurable properties of material things does “not imbue mathematics with the ability to impart ontological status”, as the physicist Alexander Sich has noted (Video 1:05:20). Indeed, since humans lack the power of creation, when we speak of something’s ‘potential for existence’, we are speaking analogically of the entities we do experience as existent.

Kantian Science vs. Materiality

A common definition of science is methodological, but it is also backwards. It is commonly said that science is the testing of an hypothesis through experimentation. In this definition, science starts with the human mind, as do Kant’s categories of the mind, and then proceeds to matter for verification. This order is apparent in Sean Carroll’s proposal of a possible universe, which is fully identified by the human mind and therein fails to correspond to anything that could be materially existent.

The proper definition of science starts with our material experience of existing things and infers mathematical relationships inherent in the natures of material entities as those natures are materially expressed in measurable properties.

A definition is a statement of the meaning of a word, whereas a synonym is another word of the same meaning. A synonym may serve as a definition, if the meaning of the synonym is already known, while one is ignorant of the meaning of the word.

It would appear as if a valid definition of probability is ‘the likelihood of an event’. However, likelihood is as vague a term as probability. This can be seen if likelihood is defined as ‘the probability of an event’. Thus, to state that probability is the likelihood of an event, is not to define probability, but to identify likelihood as a synonym for probability.

In contrast, probability can be defined without the synonym of likelihood. Probability is the ratio of the favorable cases of an event to the total of all possible cases or outcomes of an event. What is wrong with this definition? It is based on human ignorance of facts and not upon facts themselves.

An application of this definition will show that it requires a veil of human ignorance. It is said that seven, as the sum of two rolled ‘fair’ dice, has a probability of one-sixth. However, this requires human ignorance of the forces to which the dice are subjected in rolling them. The actual forces to which the dice are subjected determine exactly the outcome of the event and the sum evident in that outcome. Given these forces, the outcome will be a sum of seven or a sum of non-seven. The result has nothing to do with other cases or outcomes. Thus, the roll of a pair of dice is always a singular determinate case, having nothing in fact to do with (1) ‘favorable cases’ and (2) ‘all possible cases’. There are no other cases, except in the human imagination. To characterize the singular case as a probability is to place it within human imagination, or more accurately, to veil it in human ignorance.

Does this mean that there is no definition of probability, which is not conditioned by human ignorance? No, the unconditional definition of probability is purely mathematical. It is the ratio of a subset to a set. For example the set of all possible sums of two integers, each of which may be an integer from 1 through 6, is a set of 36 elements. The size of the subset of sevens among this set is 6. Thus the probability of the sum seven for this set is one-sixth. When we play games involving the roll of two dice we typically adopt the relationships of probability of the subsets of this set of 36 as a mutual convention.
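
The set of 36 and its subset of sevens can be enumerated directly:

```python
from itertools import product

# All ordered pairs of integers 1 through 6: a set of 36 elements.
sums = [a + b for a, b in product(range(1, 7), repeat=2)]
sevens = [s for s in sums if s == 7]

print(len(sums), len(sevens))    # 36 6
print(len(sevens) / len(sums))   # 0.1666..., i.e. one-sixth
```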

It should be evident that when we treat the definition of probability as the ratio of the favorable cases of an event to the total of all possible cases or outcomes of an event, we are proposing an analogy, which is based on human ignorance.

The definitions of random and non-random are also relevant to the meaning of probability.

Several physical constants are said to be fine-tuned to providing the conditions necessary for life on earth. Is this pro, con, or neutral to the existence of God?

A Philosophical Argument for God

Everything is explicable in its existence. The things of whose existence we know directly within the scope of human sensation we know through their properties, the intelligibility of which is explained by the natures of those entities. The one intelligible aspect of these entities, that is not explained by their natures, is their very existence. Therefore, there must exist an intelligent agent beyond human experience which explains its own intelligibility and its own existence as well as the intelligibility and existence of all other entities. This intelligent agent’s, this being’s, nature must be To Exist. If it were not, it too would require an intelligent agent to explain its existence. We call this being, God.

Relevance of Fine-tuning

Underpinning this argument is the premise: Every intelligible effect is due to an intelligent agent. Fine-tuning of the universe, if the universe is fine-tuned, is an intelligible effect, which requires an intelligent agent.

A Philosophical Justification of Fine-Tuning without Positing the Existence of a God

Human artifacts are characterized by order. Human artifacts are the effects of human intelligence. The only intelligent agents of which we are aware are humans. Yet, when we observe order in natural things, we gratuitously identify such order as the effect of an intelligent agent. Since this agent, gratuitously generated by our unwarranted anthropocentric extrapolation, is, by definition, beyond the scope of human observation, we identify it as some super human, and call this figment of our extrapolation, God. The rational course is to accept order in nature as given and recognize that it is only the order of human artifacts which we can identify as the intelligible effects of intelligent agents, namely human beings. (This is Richard Dawkins’ argument. The God Delusion, p. 157: “In the case of a man-made artifact, such as a watch, the designer was really an intelligent engineer. It is tempting to apply the same logic to an eye or a wing, a spider or a person.” He claims that the order evident in a biological entity is not due to an intelligent agent, but is due to natural selection, the ordering power of the environment.)

The Agnostic Assessment

An agnostic accepts neither of these arguments, content that both of their conclusions are equally probable, while neither argument nor probability nudges him toward or away from belief in God.