
Amaryllis is a form of lily and as such its petals are in sets of three. It has two such sets, one fore and one aft. One set may be roughly identified by direction as north, southeast and southwest. The other set as south, northeast and northwest. These sets are natural, measurable properties of the amaryllis plant and are useful in the science of taxonomy.

The mathematics of sets may be characteristically embodied in nature and when so, it forms part of the base of science. However, that is not to say that the entirety of the mathematics of sets is characteristically embodied in nature. One important exception is the mathematics of probability.

Probability is the fractional concentration of an element in a logical set. The mathematics of probability is centered in the formation of other logical sets from a source set, where all sets are identified by their probabilities, i.e. their fractional compositions.

In discussing the mathematics of probability where the source set is one of two subsets of three unique elements, it would not be inappropriate to refer to the amaryllis flower as a visual aid in discussing what is essentially logical and in no way characteristic of the nature of the amaryllis flower.

In randomly forming sets of two petals where the source set is the amaryllis flower, what are the probabilities of the population of sets defined in terms of fore and aft petals? The population defined is of four sets of two petals each. One set consists of two fore petals. One set consists of two aft petals. Each of the other two sets of the population consist of one fore petal and one aft petal. The probability of sets of two fore or two aft petals in the population of sets is 25%, while the probability of a set of one fore and one aft petal is 50%.

One could imagine placing one set of six petals from an amaryllis flower in each of four hundred pairs of hats and blindly selecting a petal from each of two paired hats. One would expect the distribution of paired sets of petals to be roughly one hundred of two fore petals, one hundred of two aft petals and two hundred of one fore and one aft petal. This would represent a material simulation of the purely logical concept of probability.
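
A minimal simulation of the hat experiment, in Python (a sketch; the names are illustrative, and each blind draw is assumed equally likely to take any of the six petals):

```python
import random

# A sketch of the hat experiment: each hat holds all six petals
# (three fore, three aft) and a blind draw is assumed equally likely
# to take any petal.
def draw_pair():
    hat = ['fore'] * 3 + ['aft'] * 3
    return random.choice(hat), random.choice(hat)

counts = {'two fore': 0, 'two aft': 0, 'one of each': 0}
for _ in range(400):
    a, b = draw_pair()
    if a != b:
        counts['one of each'] += 1
    else:
        counts['two fore' if a == 'fore' else 'two aft'] += 1

print(counts)  # roughly {'two fore': 100, 'two aft': 100, 'one of each': 200}
```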

What is the probability of a set of eighteen randomly selected amaryllis petals containing at least one ‘north’ petal? The answer, P, equals 1 – ((n – 1)/n)^x, where n = 6 and x = 18. P ≈ 96.2%.
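
The arithmetic can be checked in one line:

```python
# Checking P = 1 - ((n - 1)/n)^x with n = 6 and x = 18.
n, x = 6, 18
P = 1 - ((n - 1) / n) ** x
print(f'{P:.1%}')  # 96.2%
```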

Notice that these questions in the logic of probability have nothing to do with amaryllis flowers or petals or their manipulation. The petals and their visual (or actual) manipulation are entirely visual aids in a discussion of pure logic. It is materially impossible to select any material thing at random from a set of material things. Selection is always explicable in terms of the material forces involved. Thus, it is always non-random. It is by convention that we equate human ignorance of the details of the material process of selection with mathematical randomness.

We say that the probability of one sperm fertilizing a mammalian egg is one in millions. What we mean is that the fractional concentration of any one sperm is one in millions and that we are ignorant of the detailed non-random physical, chemical and biological processes by which one sperm of the natural set fertilizes the egg.

It is, of course, permissible to use the mathematics of probability in many instances when for any number of reasons we are ignorant of the scientific explanation of material processes. However, we must be constantly aware that the mathematics of probability characterizes human ignorance and not material reality when it is used as a tool to compensate for a lack of knowledge.

The mathematics of probability is an exercise in logic, unrelated to the nature of material things and their measurable properties. In contrast, science is the determination of the mathematical relationships among the measurable properties of things, which properties are characteristic of the nature of material things.

In my previous post (Ref. 1), I noted that John Lennox concurs with Richard Dawkins that replacing a single cycle of Darwinian evolution with a series of sub-cycles ‘drastically increases the probabilities’ (Ref. 2).

However, Lennox is not in full agreement with Dawkins’ view. Lennox’ primary criticism of Dawkins’ interpretation of the biological version of Darwinian evolution is that it is a circular argument, “And strangest of all, the very information that the mechanisms are supposed to produce is apparently already contained somewhere within the organism, whose genesis he claims to be simulating by the process.” Lennox’ misinterpretation of Dawkins’ view lies in the phrase ‘within the organism’ (Ref. 3). For Dawkins and Darwin, natural selection is essentially inanimate and external to the evolving organism. It is natural selection which defines the target.

Dawkins has identified the generation of biological forms (at whatever level, such as that of the genome or the morphology of a bird’s wing) as random, while natural selection is non-random, thereby rendering, in Dawkins’ judgment, evolution overall as ‘quintessentially non-random’ (Ref. 4). Thus, consistently with the algorithm of Darwinian evolution, it is random mutation, which is ‘a blind, mindless, unguided process’, not natural selection, as Lennox alleges (Ref. 3).

Natural selection is entirely external to the organism and essentially physical, not biological. Natural selection may be described as biologically blind, mindless and unguided, while being physically sighted, intelligent and guided. Natural selection is an ecological niche, defined in purely inanimate, physical and chemical terms.

In the biological simulation of the Darwinian algorithm, life is plastic by means of random mutation. This plasticity is allowed survivable, discrete expressions only within the molds of environmental constraints. These biological expressions appear to us as biological norms when in fact they are environmental norms.

Typically we think in terms of biological norms, such as mice, amoebae, algae and tulips. According to Darwinian evolution and natural selection in particular, these are not biological norms. What is biological is simply potency working through random generation (mutation). What appears to us to be biological norms are really environmental, i.e. physical, norms, evidenced in biological terms. We are aware of the existence of the inanimate environmental niches by the existence of the biological forms, once randomly generated, which now fill them. The environmental norms are physically defined and physically formed. They are merely filled with living matter, which coincidentally expresses a biological form compatible with the determining environment.

In the biological simulation of the Darwinian algorithm, natural selection is completely external to the organism and may indeed be viewed as an existent target, where the culling of ‘failed’ mutations is one aspect of the overall physical processes of natural selection. The Darwinian algorithm, in itself and as presented by Dawkins, is not circular because the target is not within the organism as alleged by Lennox (Ref. 3).

It should be noted that Dawkins is in error in claiming that the overall algorithm of Darwinian evolution is ‘quintessentially non-random’ simply because the last part of the algorithm, namely natural selection, is non-random. Though natural selection is non-random in itself, the probability of success of natural selection depends upon the probability of the presence of the survivable mutant in the pool of mutants randomly generated. Thus, the end result of the Darwinian algorithm is random. Darwinian evolution is quintessentially random. Darwinian random mutation is the generation of random numbers, as Dawkins has illustrated in his excellent metaphor of the combination lock (Ref. 5).

References:

1. http://theyhavenowine.wordpress.com/2014/04/10/smearing-out-the-luck/
2. “God’s Undertaker”, page 165
3. “God’s Undertaker”, page 167
4. http://www.youtube.com/watch?v=tD1QHO_AVZA, minute 39:10
5. “The God Delusion”, page 122

My previous post (Ref. 1), may have given the false impression that no one agreed with Richard Dawkins’ explanation of smearing out the luck of Darwinian evolution. This post hopefully corrects that impression.

Dawkins has stated “. . . natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each of the small pieces is slightly improbable, but not prohibitively so.” (Ref. 2). What does this mean? We can tell from his illustrations (Refs. 3–4).

It would seem that Dawkins is saying that the probability of the generation of a given number by a random numbers generator is increased by the introduction of natural selection. This doesn’t fly. Natural selection doesn’t generate mutations. It culls mutations. It permits only copies of one particular mutation to survive. It doesn’t affect the generation of the survivable mutation, from which its probability arises.

Consider a single mutation site defining six different mutations, as the six faces of a die define six different outcomes. In this example of a total of six defined mutations, the probability of the random generation of at least one copy of the number, 6, in a total of one randomly generated mutation is 1/6 = 16.7%, with or without natural selection. Similarly, the probability of the random generation of at least one copy of 6 in a total of six randomly generated mutations is 1 – (5/6)^6 = 66.5%, with or without natural selection. In Darwinian evolution natural selection has no effect on probability (Ref. 1). It merely eliminates superfluous mutations, whether the superfluous mutations have been generated randomly or non-randomly.
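
A sketch of the die example, checking the analytic figures by simulation (the culling of natural selection happens after generation, so it cannot change them):

```python
import random

# Analytic figures: at least one 6 in one roll, and in six rolls.
n = 6
print(f'{1 - ((n - 1) / n) ** 1:.1%}')  # 16.7%
print(f'{1 - ((n - 1) / n) ** 6:.1%}')  # 66.5%

# Simulation: culling (keeping only the 6s) is applied after generation,
# so the chance that at least one 6 was generated is unchanged by it.
trials = 100_000
hits = sum(any(random.randint(1, n) == 6 for _ in range(6))
           for _ in range(trials))
print(f'{hits / trials:.1%}')  # close to 66.5%
```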

It is apparent that Dawkins is not assessing the role of natural selection, but analyzing the replacement of a single cycle of random mutation and natural selection with several sub-cycles. In Ref. 3, he compares a single cycle affecting three mutation sites of six mutations each to three sub-cycles, each affecting a single site. The replacement of a single cycle with a series of sub-cycles has no effect on probability. Rather, it increases the efficiency of random mutation. Yet, Dawkins does not identify this as efficiency in mutation due to sub-cycles. He calls it ‘smearing out the luck’, as if the probability of success changed from 1/216 to 1/18. Dawkins is comparing 216 non-random mutations to 18 non-random mutations at a probability of success of 100% (Ref. 4).

Some of Dawkins’ reviewers have agreed with him. The Wikipedia review says the comparison is between probabilities of 1/216 and 1/18 (Ref. 5). More remarkably, in referring to a set of twenty-eight mutation sites of twenty-seven mutations each, John Lennox cites a probability of 10^(-31) and one billion mutations for a single cycle compared to the probability and the number of mutations for a series of 28 sub-cycles (Ref. 6). A computer simulation of the sub-cycles reached a probability of 1 in a maximum of 43 mutations per sub-cycle. In accord with Dawkins, Lennox refers to this as drastically increasing the probabilities. Superficially, Lennox’ comparison appears to imply an increase in probability due to the introduction of sub-cycles.

However, the Darwinian algorithm with sub-cycles, as well as this example of it, does not increase probability. In fact, the comparison in Lennox’ example implies efficiency in the number of mutations due to sub-cycling with no change to the probability. A more appropriate comparison would have been at a probability of 90% for both the single overall cycle and for the series of 28 sub-cycles. This would compare 2.3 x (27)^28, i.e. roughly 2.7 x 10^40, mutations for the single cycle to 4144 mutations for the series of 28 sub-cycles at the same probability of success, namely 90%. The 4144 mutations are 148 mutations for each of 28 sub-cycles, where the probability of success for each sub-cycle is 99.6%. This yields a probability of 90% for the series of 28.
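
A short check of these figures (a sketch assuming, per the example, 28 sites of 27 variants each and the at-least-one-copy formula used throughout this post):

```python
import math

# 28 sites of 27 variants each, per the example quoted from Lennox.
n_sites, n_vars = 28, 27
base = n_vars ** n_sites  # about 1.2e40 defined mutations

# Single cycle: random mutations needed for a 90% probability of success.
# (log1p avoids rounding (base - 1)/base to exactly 1.0 for so large a base.)
x_single = math.log(1 - 0.90) / math.log1p(-1 / base)
print(f'{x_single:.2e}')  # about 2.76e40, i.e. 2.3 * 27**28

# Series of 28 sub-cycles, 148 random mutations each:
p_sub = 1 - ((n_vars - 1) / n_vars) ** 148
print(f'{p_sub:.1%}, overall {p_sub ** n_sites:.1%}')  # 99.6%, 90.0%
```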

Contrasting non-random vs random mutation, within the algorithm of Darwinian evolution for a single cycle, also shows that natural selection has no effect upon probability. For non-random mutation, one mutation yields a probability of 1/n. This increases linearly to a probability of 1 as the number of non-random mutations reaches n. Natural selection merely culls the superfluous mutants. Similarly, random mutation starts out at a probability of 1/n with one mutation and asymptotically approaches 1 as the number of random mutations increases. When the number of random mutations is, respectively, n, 2.3n, 4.6n and 11.5n, the respective probabilities are 63%, 90%, 99% and 99.999%. Here too, natural selection merely culls the superfluous mutants.
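
These percentages follow from the at-least-one-copy formula; a brief check (for any large base n, the result depends only on the ratio x/n):

```python
# At-least-one-copy probability for x random mutations over a base of n.
n = 216  # illustrative base; any large base gives the same percentages
for k in (1.0, 2.3, 4.6, 11.5):
    p = 1 - (1 - 1 / n) ** (k * n)
    print(f'x = {k}n: P = {p:.3%}')
# roughly 63%, 90%, 99% and 99.999%
```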

Another common error in the evaluation of Darwinian evolution is to attribute temporal and material constraints to random mutation. Due to the fact that Darwinian evolution is strictly a logical algorithm of random mutation and natural selection, it is not subject to any temporal or material constraints. It is material simulations, not the logical algorithm, which can exceed such constraints. Also, in a material simulation, there is no increase in time or material due to a random mutation compared to a non-random mutation.

There are 52 factorial or 8.06 x 10^67 different sequences of 52 elements. The inverse of this is the probability of any sequence. In a material simulation, how many decks of cards and how long does it take to generate randomly any sequence, if shuffling for five seconds is granted to be a random selection? The answer is one deck and five seconds. Granted this, how many decks of cards and how long would it take to generate a pool of decks of cards containing at least one copy of a particular sequence at a probability of 90%? The answer is 2.3 x 8.06 x 10^67 decks and 5 x 2.3 x 8.06 x 10^67 seconds. If we apply the Darwinian algorithm of a single cycle of random mutation and natural selection, these paired numbers of decks and seconds are required for a probability of success of evolution of 90%. This exceeds by far any practical temporal and material limits. However, if we are content with any value of probability, then we would be content with one random mutation. Natural selection does not affect probability. It merely culls superfluous, randomly generated mutants. If we trust success to just one random mutation, then there is no need for natural selection, while the material and temporal requirements of the simulation are insignificant, namely one deck and five seconds.
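
The deck-of-cards arithmetic can be checked directly; a short sketch using Python's exact integer factorial:

```python
import math

# The deck-of-cards arithmetic from the text.
seqs = math.factorial(52)   # 8.07e67 distinct sequences
print(f'{seqs:.3e}')

# Decks (and five-second shuffles) for a 90% chance that the pool
# contains at least one copy of a particular sequence: the factor
# 2.3 comes from solving 1 - (1 - 1/seqs)**x = 0.9 for x.
decks = 2.3 * seqs
print(f'{decks:.2e} decks, {5 * decks:.2e} seconds')
```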

Indeed, we must be content with any and every value of probability. I have argued that no value of probability represents a ‘problem of improbability’. To claim that ‘the probability of this outcome is so close to zero that it could not be due to chance’ is a self-contradiction. Of course, I am not claiming that probability is to be accepted as an explanation. Rather, if probability is accepted in any instance as an explanation, then in no instance can it be rejected as an explanation on the basis of its numerical value, irrespective of how close it is to zero (Ref. 7). Similarly, the acceptance of probability as an explanation is not bolstered by a value of probability closer to 1.

References:
1. http://theyhavenowine.wordpress.com/2014/04/04/dawkins-on-gradualism/
2. "The God Delusion", page 121
3. "The God Delusion", page 122
4. http://www.youtube.com/watch?v=JW1rVGgFzWU, minute 4:25
5. Growing Up in the Universe, Part 3, http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe
6. "God's Undertaker: Has Science Buried God?", pages 165-167
7. http://theyhavenowine.wordpress.com/2013/12/30/too-improbable-to-be-due-to-chance/

According to Richard Dawkins the problem which any theory of life must solve is how to escape from chance (Ref. 1), how to solve the problem of improbability (Ref. 2).

Darwinian evolution consists not simply in a single cycle of random mutation and natural selection, but in the gradualism of a series of such cycles. According to Dawkins, the problem of improbability of a single cycle is solved by the gradualism due to cycles, each of which is terminated by natural selection. Although Dawkins has declared that the meaning of non-random is his life’s work (Ref. 3), it is really gradualism which is his key to explaining the randomness and improbability of Darwinian evolution.

What follows are analyses of four explanations Dawkins has offered to elucidate the role of gradualism in Darwinian evolution. The explanations focus on: (1) a numerical illustration involving three mutation sites of six mutations each, (2) the breakup of a large piece of improbability into smaller pieces, (3) the analysis of the vector of evolution into its component vectors of improbability and gradualism and (4) the gradualism of mutant forms in an ordered sequence approaching mathematical continuity.

Dawkins presents these explanations as in the mainstream of the scientific acceptance of Darwinian evolution. He is not attempting to re-interpret Darwinian evolution or to present any departure from it. His explanations are sufficiently explicit that they are a clear testimony to the coherent mathematics underlying Darwinian evolution in spite of Dawkins’ errors in explaining the mathematics.

The Role of Gradualism, a Numerical Illustration (Ref. 4)

A set of three mutation sites of six mutations each defines 216 different mutations, which is 6 x 6 x 6. The 216 mutations may be viewed as an ordered sequence consisting of the initial mutation, the final mutation and 214 ordered intermediate mutations. Let a pool of one copy of each mutation be generated and subjected to natural selection. Natural selection would cull all but one mutation, the final mutation. The pool of 216 would have been non-randomly generated. The probability of success of natural selection would be 100%.

Let this single cycle of mutation and natural selection be replaced by the gradualism of three cycles, where each cycle affects one of the mutation sites independently of the other two. In each cycle a pool of six non-randomly generated mutations would be subjected to natural selection. A total of 18 mutations would be non-randomly generated. The probability of success of natural selection for each cycle would be 100% and the overall probability of success of natural selection would be 100%, i.e. 100% x 100% x 100%.

The gradualism of sub-cycles would generate a total of 18 mutations consisting of the two endpoints, but only 16 of the ordered 214 intermediates defined by evolution overall. Gradualism introduces large gaps, not just missing links, in the actually generated spectrum of ordered mutations in comparison to the ordered spectrum defined by the mutation sites. Gradualism introduces an efficiency factor of 216/18 = 12 in non-random mutations without any change in the probability of success of natural selection, which is 100% for both the overall cycle and the series of sub-cycles of gradualism.

Although such is Dawkins’ illustration of gradualism, it is not his interpretation. Dawkins refers to the non-random mutations as ‘tries’, implying that they are random mutations. He refers to a one in 216 chance compared to three chances of one in 6. He calls gradualism smearing out the luck into cumulative small dribs and drabs of luck. Dawkins erroneously believes he has illustrated random mutation rather than non-random mutation, and he erroneously claims to have illustrated, not an increase in efficiency in the number of mutations due to gradualism, but an increase in the probability of success of natural selection, i.e. the probability of success of Darwinian evolution, due to gradualism.

If the pools of mutations subjected to natural selection in the illustration are generated by random mutation, a similar efficiency in the number of mutations is achieved, without any change in the probability of success of natural selection. A pool of 19 randomly generated mutations in each of three sub-cycles would yield a probability of success of natural selection of 96.9% for each cycle and an overall probability of success of natural selection of 90.9%. For a single cycle of random mutation involving all three mutation sites, a pool of 516 random mutations would yield a probability of success of natural selection of 90.9%. The efficiency factor in random mutations would be 516/57 = 9 due to gradualism with no change in the probability of success of natural selection.

The probability, P, of at least one copy of the mutation surviving natural selection in a pool of x randomly generated mutations with a base of n different mutations is: P = 1 – ((n -1)/n)^x.
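
Plugging the figures quoted above into this formula confirms them; a short check:

```python
# Plugging the text's numbers into the formula above.
def P(n, x):
    return 1 - ((n - 1) / n) ** x

p_sub = P(6, 19)             # one sub-cycle of 19 random mutations
print(f'{p_sub:.1%}')        # 96.9%
print(f'{p_sub ** 3:.1%}')   # 90.9% overall for the three sub-cycles
print(f'{P(216, 516):.1%}')  # 90.9% for a single cycle of 516 mutations
print(f'{516 / (3 * 19):.1f}')  # efficiency factor, about 9
```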

The Breakup of a Piece of Improbability (Ref. 5)

Dawkins claims that natural selection, which terminates each cycle of random mutation and natural selection, increases the probability of success of natural selection by forming a series of cycles. The overall improbability of evolutionary success in a single cycle is broken up into smaller pieces of improbability.

To break up a large piece of something into smaller pieces implies that the smaller pieces add up to the larger piece.

Probability and improbability are complements of one. They add up to one. The overall probability of a series of probabilities is the product of the probabilities forming the series, not their sum. Each value of probability in a series, if less than one, is greater than the overall probability.

The probability of any face of a red die is 1/6 = 16.7%. The improbability is 5/6 = 83.3%. The same is true of a green die. The probability of any combination of the faces of the two dice, one red and one green, is 1/36 = 2.8%. Its improbability is 35/36 = 97.2%. Suppose it were mathematically valid to say that rolling the dice individually in a series, rather than both together, ‘breaks the improbability of 97.2% up into two smaller pieces of improbability of 83.3% each’. It would then be mathematically valid to say that rolling the dice in series rather than both together, ‘breaks up the small piece of probability of 2.8% into two bigger pieces of probability of 16.7% each’. Both statements are nonsense. They confuse multiplication and division with addition and subtraction. Yet, Richard Dawkins claims that those who insist that the overall probability of a series is the product of the probabilities don’t understand ‘the power of accumulation’, i.e. the power of addition. He claims that ‘natural selection is a cumulative process, which breaks the problem of improbability up into small pieces. Each piece is slightly improbable, but not prohibitively so.’ (Ref. 5) In other words, gradualism increases the probability of evolutionary success.

The probability of the outcome of the roll of three dice, one red, one green and one blue is 1/216. If each die is rolled separately the probability of each roll is 1/6 and the overall probability is 1/216. If the probabilities of the three individual rolls were cumulative as Dawkins indicates (Ref. 5), the overall probability of the series would be 1/6 + 1/6 +1/6 = 1/2. Yet, Dawkins implies the accumulation would be a probability of 1/18 (Ref. 4). Similarly, Wikipedia states, ‘The probability of unlocking the combination, in three separate phases, falls to one in eighteen.’ (Ref. 6). The non-random mutations of three individual sites of six mutations each, e.g. those of three dice, add up to 18. The fraction, 1/18, is not the probability of the series of three individual probabilities. It is simply the mathematical inverse of the 18 total mutations.

The Analysis of the Vector of Evolution (Ref. 7)

Dawkins proposes a parable of Darwinian evolution, ‘Climbing Mt. Improbable’. In the parable, evolution is a vector slope, the sum of a vertical vector, improbability, and a horizontal vector, gradualism. If gradualism is zero, then the vector of evolution is solely a vector of improbability. In the parable, the role of gradualism in Darwinian evolution is to change the slope of the vector of evolution from infinity when gradualism equals zero to some finite value, when gradualism is greater than zero. Dawkins implies that gradualism in the parable decreases the improbability. It does not. Both in Dawkins’ parable and in Darwinian evolution, the overall improbability is unaffected by gradualism, whether or not the vectors of evolution and gradualism are incremental. Also, improbability is not a vector. Consequently, Dawkins’ metaphor of Darwinian evolution as a vector slope, equaling the sum of its horizontal and vertical vectors, is irrelevant both to improbability and to Darwinian evolution.

The Spectrum of Ordered Mutations as Approaching Mathematical Continuity (Ref. 8)

In Darwinian evolution, each cycle of random mutation and natural selection defines a finite set of different mutations, n, which is the base of random numbers generation. The pool of x generated random numbers is subjected to natural selection, i.e. the pool is subjected to a filter at a discrimination ratio of 1/n, which culls all mutations except copies of the one mutation which survives natural selection. The set of n different mutations is an ordered set, where the initial mutation is the lowest and the mutation, surviving natural selection, is the highest mutation in the ordered sequence.

Dawkins notes that, if all of the intermediates between the lowest and the highest mutation in a biological evolutionary sequence survived, the ordered sequence would be recognized as a sequence approaching mathematical continuity (Ref. 8). Furthermore, Dawkins claims that all of the intermediates have been biologically and randomly generated, but are typically extinct. Man and the pig have a common evolutionary ancestor. ‘But for the extinction of the intermediates which connect humans to the ancestor we share with pigs (it pursued its shrew-like existence 85 million years ago in the shadow of the dinosaurs), and but for the extinction of the intermediates that connect the same ancestor to modern pigs, there would be no clear separation between Homo sapiens and Sus scrofa.’ (Ref. 8).

Dawkins would be right within the context of Darwinian evolution, except for one feature of Darwinian evolution which he demonstrated in Reference 4. Gradualism in Darwinian evolution is highly efficient in the generation of random mutations, by not generating most of the intermediates. Due to the efficiency of gradualism, as Dawkins has shown, there are gaps in the actually generated set of intermediates in comparison to the mathematically defined spectrum of ordered mutations. Most of the mathematically defined intermediates, which connect humans to the ancestor we share with pigs, are not absent due to extinction; they are absent because they were never biologically generated, thanks to the efficiency of gradualism of Darwinian evolution. Darwinian evolution is not characterized by missing links in the fossil record. In its mathematical definition, it is characterized by gaping discontinuities in the sequence of ordered mutations actually generated in comparison to the sequence of ordered mutations defined, but not generated.

In 1991 (Ref. 4), Dawkins demonstrated that there are discontinuities in the spectrum of generated mutants due to gradualism. In 2011 (Ref. 8), he claimed that gradualism ensures the generation of the complete spectrum, and that any discontinuities are due to extinction, not to a lack of generation.

Conclusion

Dawkins does not understand the mathematics of gradualism or its role in Darwinian evolution. Yet, understanding the meaning of random/non-random, which he has identified as his life’s work, is integral to understanding the mathematics of and the role of gradualism in Darwinian evolution. The role of gradualism in Darwinian evolution is to increase the efficiency of random mutation. It has no effect on probability.

References:
1. "The God Delusion", page 120
2. "The God Delusion", page 121
3. http://www.youtube.com/watch?v=tD1QHO_AVZA, minute 38:55
4. http://www.youtube.com/watch?v=JW1rVGgFzWU, minute 4:25
5. "The God Delusion", page 121
6. Growing Up in the Universe, Part 3, http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe
7. "The God Delusion", pages 121-122
8. http://www.richarddawkins.net/news_articles/2013/1/28/the-tyranny-of-the-discontinuous-mind#

Science is the determination of the mathematical relationships inherent in the measurable properties of material reality. Oftentimes, associated with the mathematical relationships of science is a visual narrative pleasing to the human imagination. Such a narrative is essentially extrinsic to the science.

The Relationship of the Intellect and the Imagination

From the time of Aristotle it has been recognized that human intellectual knowledge is immaterial, an abstraction from material reality. However, Aristotle noted that the action of the intellect is dependent upon a phantasm, a composite of the sense knowledge obtained through the animal senses of man. Thus the human intellect, though immaterial in its nature and activity, is extrinsically dependent upon the sensual phantasm in order to function and to be aware. Without such a sensual phantasm, the human intellect is inactive.

The scope of the senses, including the sensual phantasm, is limited. Material reality, however, has properties beyond the scope of the senses, which properties can be measured instrumentally. Material properties smaller than this scope have been designated, micro, and larger than this scope, macro. Although the human intellect is unlimited, the human imagination is limited to the mid-range scope, which is that of human sensation. The dilemma arises when the sensual imagination attempts to constrain intellectual thought to its level of sensation.

In the Context of Science

A simple scientific relationship is that among the measured values of pressure, volume and temperature of a gas. Gas confined to the unchanging volume of a tank, will increase in pressure when its temperature is increased and vice versa. The science is the mathematical relationship among the instrumentally measured values of pressure, volume and temperature. Associated with the mathematics is a popular visual narrative. The gas is composed of molecules, depicted as tiny ‘micro-sized’ balls in motion. Pressure is depicted as the balls striking the internal walls of the tank. When the temperature is increased the balls move faster, so they strike the walls with greater force, thereby increasing the pressure. This is a harmless visual narrative at the level of human sensation and pleasing to the human imagination. In contrast, the mathematical relationship among the measured values of pressure, volume and temperature is the science.
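
For a dilute gas, the standard instance of this mathematical relationship is the familiar ideal-gas law; stated in LaTeX:

```latex
% Ideal-gas law: P = pressure, V = volume, n = amount of gas (moles),
% R = gas constant, T = absolute temperature. At fixed V and n,
% pressure rises in direct proportion to absolute temperature.
\[
  PV = nRT
  \qquad\Longrightarrow\qquad
  \frac{P_1}{T_1} = \frac{P_2}{T_2} \quad \text{(constant } V, n\text{)}
\]
```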

However, there are scientific relationships, which are not accompanied by visual narratives satisfying to the human imagination. Some measured properties of light are related by equations, which are accompanied by a visual narrative of particles of light. Other properties of light are related by equations which are accompanied by a visual narrative of waves of light. Yet, the human imagination finds the particle and wave narratives visually incompatible in spite of the validity of the mathematical relationships among the measured values in the two different results.

The Classic Dilemma

Collimated light is unidirectional. When it is passed through a slit it produces a pattern of intensity, which may be viewed as a pattern of particles or quanta or photons of light distributed about a norm. The distribution may also be viewed as a continuous curve symmetrical about a single peak (See figure 5 at the end of reference 1).

When collimated light is passed through two adjacent slits the result is a pattern of intensity, which may be viewed as a pattern of particles or quanta of light distributed about several norms. The distribution may be viewed as a continuous curve of a series of individually symmetrical peaks (See figures 3 and 4 at the end of reference 1). This multi-peak pattern is best described mathematically, i.e. scientifically, as due to the interference of two waves of light emanating from point sources as they emerge from the slits.

The purpose of the experimental setup in reference 1 is to track light, quantum by quantum. In our human imagination a single isolated particle, i.e. a discrete particle, cannot act as a continuous wave.

This same purpose was that of the experiment described in reference 2. The rationale of this experiment is that interference requires two waves, whereas a single particle or quantum of light cannot behave as two waves. (This rationale suspends the validity of the quantum/wave mathematics applied at the micro-level and imposes a dictum of the sensory-level human imagination: Particles in the human imagination are discrete particles, not continuous waves.) The experiment proposes that collimated light as an individual quantum is in transit along one of two paths when a de-collimating device or beam-splitter is introduced or not introduced farther along where the two paths exit collimation and cross. The human imagination demands that the beam-splitter cannot change the particle into two waves once the particle is in transit along one or the other path. Nevertheless, the presence of the beam-splitter introduces wave interference. The human imagination reaches the conclusion, ‘Then one decides the photon shall have travelled by one route or by both routes after it has already done the travel.’ This is an imaginative impossibility.

The experiment in reference 1 is clearer than that of reference 2. When the collimation of the light is maintained even though the light is tracked photon by photon, the intensity pattern is of a single peak (figure 5) by detectors D3 and D4 identified in figure 2. When the collimated light is released from collimation by mixing the paths, it displays wave interference in the cumulative pattern of the individually tracked photons in figures 3 and 4 from detectors D1 and D2 of figure 2, respectively.

Another experiment uses two telescopes to maintain the collimation of light. Without the telescopes the light from the two sources interferes as waves. If the light, maintained as collimated by the two telescopes, is permitted to mix after exiting the telescopes it interferes as waves. (See reference 3).

The dilemma of the imagination is, ‘How can a single quantum be ‘mixed’ with nothing by a beam-splitter and thereby act as a wave?’

Attempts to Placate the Imagination within the Context of Physics

Some physicists pacify their imaginations by viewing the fundamental equation as that of the wave and view the detection of a quantum as 1 and the non-detection as 0, the two possible outcomes of a probability event. The wave function is viewed as an expression of probability, the outcome of which is yes (one) or no (zero). An outcome is said to be the collapse of the wave function. As we shake a pair of dice in our cupped hands, the probability of the sum, seven, is one-sixth, but when the dice have landed the probability, which was one-sixth, has collapsed to 1 (seven) or 0 (non-seven). Unlike the discrete probabilities of the eleven sums of two dice, a wave as a probability function may be viewed as indefinite in probability between 0 and 1. In this view, the imagination states that the quantum does not exist except in the potency (probability) of a wave. The wave collapses due to an observation. As a result of observation, the quantum comes into existence or doesn’t. Upon observation, the probability function no longer exists. Thus a quantum exists, only if observed.

The author of reference 1 concludes, ‘Ho-hum, another experimental proof of quantum mechanics’. Another expression rejecting the need to placate the human imagination is ‘Shut up and calculate!’ (See reference 4).

The need to placate the human imagination has been carried to two extremes. One extreme is to claim that observation creates material reality, such as ‘The moon is demonstrably not there, unless someone is looking.’ (See reference 5). The other extreme is to claim that there are as many universes as there are possible material outcomes. A quantum we observe is valid within our universe, but not universally valid across multi-verses. What exists in our universe in our observation is characteristic of our universe. All other possibilities exist in a multitude of universes encompassed in what appears in our universe to be a probability function, the wave (reference 6).

The middle course is to recognize that the compatibility of the discrete and the continuous is a sticky wicket, not to the intellect, but to the sensual imagination. The photon vs. wave phenomenon of light is just one example where the human imagination tends to deny the compatibility between the discrete and the continuous.

The Central Problem, Imagining the Concept of Continuity

The most fundamental concept in mathematics is the counting of discrete elements. Material objects are easily counted. Another concept in the basic development of mathematics is the fractionation of a discrete element. Because mathematics is an abstraction from material reality, it is intellectually possible not only to fractionate an element into two parts, it is possible to fractionate it into an unlimited number of parts. You can’t do that with an apple or any material thing even of an apparently uniform structure. Our sensual imagination cannot keep up with our intellect. Our intellect has conceived the abstract idea of continuity.

An independent variable is continuous over its domain if its value can be specified to any arbitrary precision.

Suppose I had a building with a frontage of thirty meters. I could delineate two contiguous parallel parking spaces of ten meters each demarcated with painted end stripes twenty centimeters wide and having one stripe at the midpoint. That would be a 20.6 meter, continuous segment, of the frontage. However, if I asked a painter to paint the three twenty centimeter stripes to a precision of 10^(-11) meters, he would think I was crazy. He would note that the diameter of the electron cloud of the hydrogen atom was 10^(-10) meters and I was asking for a painted line to the precision of a tenth of the diameter of a hydrogen atom. He would point out that the abstract concept of mathematical continuity to such a precision did not apply to the context of the level of human sensation and specifically to the painting of lines in a street. He wouldn’t deny the validity of the mathematical concept of continuity. The painter would merely note that the human imagination did not have the power to concretize the abstract concepts of mathematics in the manner and context, which I proposed.

One of my favorite apparent incompatibilities of a mathematical concept with material reality, as viewed by the human imagination, involves the probability of the sequence of a deck of playing cards, namely 1 in 8.06 x 10^67. Intellectually we are content with emulating a ‘random selection’ from a set of that size by shuffling the deck. Yet a set of that size can only be logical, far beyond the scope of the human imagination. It would be foolish to claim that the shuffled result was the collapse of the fundamental reality, the probability function, into a materially observable event, where the collapse of the probability function was essentially due to the observation. It would likewise be foolish to claim that the shuffled result was a material event in my universe, but that the entire set of possible permutations of a deck of cards must exist in some other material universe(s), rather than solely in human logic. In both views, shuffling brings one material event into observable existence out of ‘real’ probabilities or ‘real’ worlds equaling 8.06 x 10^67. These views require material reality to conform to human thought in its subjection to the human imagination. The proper view is to require human judgments about reality to conform to reality, while also recognizing the immateriality of human thought and logic, which frees the human intellect from subordination to the human imagination, i.e. subordination solely to sense knowledge.

Note that adding two jokers to the deck would increase the ‘real’ probabilities and the number of ‘real’ worlds by a factor greater than 1000 to 2.3 x 10^71. The mass of the earth is a mere 5.97 x 10^36 nanograms.

The mathematics of probability concerns the identification of logical sets solely according to their fractional composition of logical elements. When this mathematics is applied to material sets, the material properties of the elements are completely irrelevant. The IDs of the elements are purely nominal. The elements have no relevant material properties. Material elements may be used only in emulation of the mathematics, while ignoring their material properties. Thus, the mathematics may be used as a tool of ignorance of the material properties of the elements.

To the extent that a wave function is considered a probability function, it renders the mathematics a tool of ignorance, i.e. a tool to compensate for ignorance of the material properties underlying the phenomena being studied. It is a serious error to take probability as fundamental reality or as a characteristic of reality, rather than as a mathematical tool of abstract logic.

References:
1. "A Delayed Choice Quantum Eraser" by Yoon-Ho Kim et al., Phys. Rev. Lett. 84:1-5 (2000), excerpts with commentary by Ross Rhodes. http://www.bottomlayer.com/bottom/kim-scully/kim-scully-web.htm
2. Wheeler's delayed choice experiment, by Alain Aspect
3. Wheeler's classic delayed choice experiment. http://www.bottomlayer.com/bottom/kim-scully/kim-scully-web.htm
4. Copenhagen interpretation. http://en.wikipedia.org/wiki/Copenhagen_interpretation
5. Video of the faith and science conference in 2011, at time 1:09:45. http://www.franciscan.edu/faculty/SichA/
6. Chapter 11, Quantum Time, by Sean Carroll. http://preposterousuniverse.com/eternitytohere/quantum/

I must begin with the disclaimer that I do not agree with the proponents of the Intelligent Design argument that it is a scientific argument or that it is even valid. However, I also do not agree with the common critiques of the ID argument especially in the contention that such critiques are themselves scientific.

This essay argues against the claim of John Derbyshire in the American Spectator (http://spectator.org/articles/57159/does-intelligent-design-provide-plausible-account-lifes-origins?page=3) that Michael Behe has not tried to get his scientific views expressed in the ‘scientific arena’. Derbyshire states, “You should at least try, as ID-ers like Behe obviously haven’t.”

Also, the animosity toward Michael Behe is so intense that he cannot state that the time interval between unexpected genetic mutations cannot be determined a priori but must be experimentally determined, without his opponents labelling his view as ‘naïve’ in a scientific journal.

My previous post on this blog noted the importance of distinguishing between probability and density. This post is another example. I composed the following essay some time ago, but it is timely in view of Derbyshire’s current essay:

Evolution in the Darwinian lexicon is a mathematical process which initially characterizes a particular numerical fraction as a random selection ratio and finally as a non-random selection ratio. Darwinian evolution itself has nothing to do with rates with respect to time. However, in some material applications the question of temporal rates will arise. It is therefore important to distinguish on the one hand, the two selection rates, which are essential to the concept of Darwinian evolution and are numerically equal to each other and, on the other hand, any temporal rates, which refer to particular material contexts.

Darwinian evolution consists of one or more stages of two selection processes applied essentially to the same logical set of n unique elements. They are random mutation and natural selection.

In each stage, the first selection process is random selection, at a ratio of 1/n. The selection is repeated multiple times randomly forming a pool of elements in which the probability of the pool’s containing each of the n elements is close to 1, due to the multiplicity of random selection. Randomness means that there is no rationale governing selection, so that the probability of selection, i.e. the rate of selection, is the fractional concentration of any element in the initial set, namely, 1/n. Note that random selection, which does not discriminate among the n different elements of the initial set, is a rate of selection with respect to the number of elements in the set. It is not the rate of selection with respect to time. Time is irrelevant.

In each stage, the second selection process is non-random (or if you prefer, ‘natural’) at a ratio of 1/n. This selection process is applied to each of the elements in the randomly formed pool. Since it follows a rationale, which recognizes each element in the initial set, it is the discrimination ratio of 1/n in its processing, by selection, of each of the elements in the random pool. Note that the non-random selection, which discriminates among the n different elements of the initial set, is a rate of selection with respect to the number of elements in the set. It is not the rate of selection with respect to time. Time is irrelevant.

These two mathematical selection processes constitute the mathematical transformation known as Darwinian evolution. In each stage, the process involves the initial labeling of a given numerical selection ratio as random and the final labeling of this same numerical selection ratio as non-random. Time and rates of selection with respect to time are intrinsically irrelevant.

Selection rates with respect to time can only be introduced alongside the completely distinct mathematical selection rates of Darwinian evolution by means of the analogical application of Darwinian evolution to material phenomena. The temporal rates of material changes can even lead to mistaking a non-Darwinian temporal material process for the purely logical process of Darwinian random mutation.

Random mutation is the random selection of an element from a logical set. Population genetics is concerned with the inheritance of a genetic marker based on its material density in a breeding population.

Consider a non-Darwinian scheme affecting two nucleotide sites in a genomic map. There are four different nucleotide pairs and consequently four possible variants for any site in a genomic map. Let some fraction of a population possess at one site a different specific base pair from the specific base pair of the rest of the population. Let the material density in the population for this inheritable marker be m1. For a second site, let the material density in the population for a second marker be m2. These material densities, m1 and m2 may be referred to as the respective probabilities of possessing these inheritable markers by a member of the population selected at random.

Contrast these probabilities with the probability of selecting a number at random to the base four. The probability is 1/4, which is the probability of the random selection from the set of four variant genomes identified by random mutation at a single site of a genomic map. Random mutation, of course, is random selection from a logical set. If the four variant genomes were a set of material elements, the product of a random mutation would have to exist before the random mutation occurred.

Consider a population in which there are two distinct subsets, each identified by an inheritable marker. Over generations a subset of individuals in the population will form in which both of these markers are present. Durrett and Schmidt (Genetics 181: 821-822, 2009 http://www.genetics.org/content/181/2/821.full) showed that the mean waiting time for two inheritable markers to be inherited by the same individual of a population, one marker with a material density of m1 in the population and the other with a material density of m2 (when the first marker is neutral with respect to reproductive efficiency) is a function of m1 and a function of the square root of m2.

Contrast the mean generational waiting time for occurrence within a population with Darwinian random mutation of numbers to the base 16. Sixteen is the number of different genomes defined by random mutation at two genomic sites. Given a mere 72 Darwinian generators, at a probability of 99%, at least one copy of any of the 16 genomes, defined by variance at two nucleotide sites, will be generated in the length of time to generate one individual genome.
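
One way to check the figure of 72 generators at 99% (a sketch, assuming each generator is an independent random draw over the base of 16):

```python
# 72 independent random draws over a base of 16 (two sites of four
# nucleotide variants each): chance that the pool contains at least
# one copy of any particular genome.
n, x = 16, 72
print(f'{1 - ((n - 1) / n) ** x:.1%}')  # about 99%
```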

To label a change as random, where randomness implies the random selection of an element from a set of n unique elements, is to identify the change as the product of a random numbers generator to the base n. To label such a change as a random mutation, without adverting to it as the product of a random numbers generator, can lead to misunderstanding. Abandoning the concept of Darwinian random numbers generation at nucleotide sites in genomes, yet persisting with the terminology of random mutation at these sites, can be misleading.

Durrett and Schmidt, whose field of interest is population genetics, characterized the mathematics of Darwinian random mutation, i.e. the mathematics of random numbers generation, as naïve. Yet, they implied that Darwinian random mutation was their topic by the phrase, ‘the mean waiting time for two mutations to occur in the same individual’. Their actual topic was one of population genetics, which would have been evident, if they had used the phrase, ‘the mean generational time for two genetic markers to be inherited by the same individual, given their initial densities in the population’. Similarly, they alluded to the size of a subpopulation using the misleading phrase, ‘the probability of a mutation’, rather than the more clearly descriptive, ‘the material density or fractional concentration of an inheritable genetic marker within the breeding population’. The density of an inheritable marker in a population is not the probability of a mutation.

It is important to distinguish between pure mathematics and its applications. This distinction is pertinent to the distinction between probability and density.

Probability is the fractional concentration of an element in a purely logical set. Density is the fractional concentration of a physical element in a physical set. Let the probability of blue marbles in a set of logical marbles be 0.5. For a comparable physical set of marbles the density of blue marbles is 1 blue marble per 2 marbles. In contrast to the probability, the density of blue marbles is not 0.5, but 0.5 blue marbles per marble.

The necessity of such a distinction is seen in the following example. If we form (even logical) packages of 2 marbles each on the basis of a density of 0.5 blue marbles per marble, all packages will have exactly one and only one blue marble per package. If we form logical packages of two marbles each on the basis of a probability of blue marbles of 0.5, we will have defined a logical population of packages in which the proportions of packages with 0, 1 and 2 blue marbles are 1:2:1.
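
A minimal simulation of the contrast (hypothetical code, assuming each marble in a probability-formed package is independently blue with probability 0.5):

```python
import random

# Forming packages of 2 by density: every package gets exactly one blue
# marble. Forming them by probability 0.5: contents follow a binomial law.
trials = 100_000
counts = [0, 0, 0]  # packages holding 0, 1 or 2 blue marbles
for _ in range(trials):
    blues = sum(random.random() < 0.5 for _ in range(2))
    counts[blues] += 1
print([round(c / trials, 2) for c in counts])  # roughly [0.25, 0.5, 0.25]
```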

Note: the formation of material sets on the basis of the probabilities of a source set can only be analogous to or a simulation of the purely logical concepts of mathematical probability.

Probability is purely a logical concept. The formation of new sets based on the probabilities of a source set is called random selection from the source set. This is strictly logical. No physical thing can be selected with a total disregard to its physical properties, yet this is what is required by the concept of random selection. Thus the formation of new physical sets based on the probabilities of a physical source set, can only be an analogy of the mathematics. In such analogies, human ignorance of the IDs during selection is equated with randomness.

In contrast to probability, which is solely logical, density can be characteristic of material reality. An example is the density of carbon atoms in glucose, which is one carbon atom per four atoms.

An example of confusing density and probability is the calculation of the number of earthlike planets in the universe by Richard Dawkins in The God Delusion, page 138. He conservatively estimates the number of planets in the universe as one billion, billion. Assuming a density of 1 earthlike planet per billion planets, he correctly calculates the number of earthlike planets in the universe as 1 billion.

That, however, is not how Dawkins describes his calculation. Instead of assuming a density of 1 earthlike planet per billion planets, Dawkins claims he is assuming a staggeringly, absurdly, stupefyingly low probability of earthlike planets of 1 per billion. If Dawkins were assuming a probability rather than a density, he would not have been calculating the number of earthlike planets in the universe. He would have been defining a population of universes, which would cover the spectrum from 0 earthlike planets per universe to 1 billion, billion earthlike planets per universe. In the defined population, modal universes would contain exactly 1 billion earthlike planets. However, this mode would be a very small percentage of the total population of universes. This very small percentage would be the probability that a randomly selected universe would contain exactly 1 billion earthlike planets.
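
A rough check of how small that modal percentage is (a sketch using the normal approximation at the mode; the 10^18 planets and 1-in-a-billion figures are Dawkins' own assumptions as quoted):

```python
import math

# Planets per universe ~ Binomial(1e18, 1e-9), mean 1e9, per Dawkins'
# figures. The chance that a universe holds *exactly* the modal one
# billion earthlike planets, by the normal approximation at the mode:
mean = 1e18 * 1e-9
p_mode = 1 / math.sqrt(2 * math.pi * mean)
print(f'{p_mode:.2e}')  # about 1.3e-05
```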

In my next post I will present a more subtle and intriguing instance of conflating probability and density.
