Relevant extracts from my posts of March 1999
I'm not a theist (at most a pantheist), but an evolutionist
even more consistent than the Darwinists, because I do not accept
such a discontinuity in evolution as the magical appearance of
consciousness some billions of years after the 'big bang'.
It is quite normal that material in agreement with one's own
prejudices seems to be fact whereas the rest seems to be opinion!
It is my, Johnson's and many others' right to find it disquieting (or
even absurd) that "the mind is merely what the brain does" and
that "our thoughts and theories are products of mindless forces".
It is not difficult to make computer games or AI programs where
chance also influences the result. It may even happen that the
result is one not expected by the programmers, but I do not believe
in results which are in principle different from what the
programmers could have thought possible.
In any case, such algorithms are evidence for a guided evolution
(there are engineers and programmers) rather than for a purely
random one.
J>" As we have seen, Darwinian evolution is by definition unguided
J>" and purposeless, and such evolution cannot in any meaningful
J>" sense be theistic.
> It is neither totally unguided, as natural selection is indeed
> guided by the environmental pressures with which an organism
> is faced, nor purposeless, as its function is to make populations
> more fit within their environments. It certainly is not designed
> to result in human beings, or to move toward "higher" life forms.
> If he had said, "intelligently guided" and "apparently purposeless,"
> then he would have been more accurate.
If for a first organism with the power of reproduction to appear
only twenty different conditions, each with a probability of 0.001,
were necessary, then the probability of such an organism appearing
would be only 10^-60. Even if 19 of the 20 conditions were fulfilled,
the organism could not replicate and the evolution of life could not
start. In reality, certainly more conditions with lower probabilities
are necessary for such a self-replicating system with the additional
capacity to undergo further improvement by mutation and selection.
Probabilities must be multiplied!
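The multiplication rule invoked here can be made concrete in a few lines. A minimal sketch in Python, where the twenty probabilities of 0.001 each are the post's illustrative assumption rather than measured values:

```python
from math import prod

def joint_probability(probabilities):
    """Probability that all independent conditions are fulfilled:
    the product of the individual probabilities."""
    return prod(probabilities)

# Illustrative assumption from the text: 20 independent conditions,
# each with probability 0.001.
p_all = joint_probability([0.001] * 20)
print(p_all)  # on the order of 10**-60
```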
Within the neo-Darwinian framework one must not take it for granted
that the "first organism" "certainly would have been vastly
overpowered and driven to extinction by its more advanced
children who were born after successive mutation and selection."
Certainly at least 20 (independent) conditions, each with a
probability of at most 0.001, are necessary for a self-replicating
system to appear. If fewer were needed, it should be possible to show
how such a system could appear and work, or even to produce one in
the laboratory.
If reductionist causal laws cannot explain life, neo-Darwinism is
refuted and it is necessary to look for new entities or principles
which can explain life. I'm sure that neo-Darwinism will seem to
future generations a completely absurd theory, because it denies
the most obvious.
But I think that you overestimate science and scientists.
Galileo is representative of modern science. At a time when
the Copernican theory was already spreading, he usurped
this theory and fought the theories of Johannes Kepler.
Kepler had been the first to substantially surpass the
astronomy of Aristarchus, by demolishing the whole epicycle
theory and by introducing modern physical laws.
Kepler's explanation of life, which was quite similar to mine,
was also ridiculed and fought.
For a self-replicating system, at least 20 molecules which are at
least as complex as nucleotides or amino acids are necessary.
According to neo-Darwinism, the movements of molecules depend on
random thermal collisions (apart from chemical and physical laws).
Now I assume that 20 molecules are enough for a self-replicating
system to appear, if every molecule has the right position in space.
A further simplification is needed. I assume that all 20 molecules
are in a cube which is subdivided into 1000 mini-cubes, and for the
right position nothing more is required than the center of gravity of
the molecule being located within the right mini-cube.
In this simplified case, the probability for the self-replicating
system to appear is 10^-60. Common sense is enough to show that for
a self-replicating system to appear in nature, the probability is
much, much lower still.
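The mini-cube model can be checked numerically. A sketch under exactly the simplifications stated above (20 molecules, a cube subdivided into 1000 mini-cubes, one correct mini-cube per molecule):

```python
# Each molecule must have its centre of gravity in 1 particular
# mini-cube out of 1000, independently of the others.
n_molecules = 20
n_minicubes = 1000

p_one = 1 / n_minicubes          # 0.001 for a single molecule
p_system = p_one ** n_molecules  # all 20 in place at once
print(p_system)  # about 10**-60
```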
It is possible to calculate upper limits of such probabilities, because
we can recognize conditions which must be fulfilled in any case.
That's actually news to me: the existence of "replicating molecules
around 32 molecules long". The only possibility I see is the following:
there are RNA enzymes 32 bases long which replicate by base
pairing. Are you sure that such molecules actually have been
discovered, molecules replicating independently? However, even if
they actually exist, they are not enough to start evolution.
Do you know the results of Miller's experiment? A mixture of simple
organic molecules. This result is almost irrelevant to the explanation
of life and evolution.
What is a self-replicating protein? To assume that every
collision between molecules corresponds to a new arrangement
of a sequence of 32 amino acids seems absurd to me. Incorrect
chemical bonds between amino acids are possible. Bonds with
other molecules cannot be excluded. Where could a soup with
such a high proportion of amino acids have existed?
The concepts 'causality' and 'finality' are very old philosophical
concepts. Maybe until the time of Descartes (1596-1650) they had
equal rights in philosophy and science. There is, however, no
a priori reason why 'causality' should be more scientific than 'finality'.
Johnson's concept 'naturalistic' is in some respect almost the same
as my concept 'reductionist', but there is also a (maybe only linguistic)
difference: according to my usage of 'natural' there is nothing
supernatural. Final laws or souls are totally natural entities.
In this context it may be interesting to look at the history of
'naturalism'. A certainly questionable and maybe subjective
simplification is the assumption that there was an evolution from
animism to polytheism, to monotheism with God outside the world,
to monotheism with God inside the world, to pantheism and finally
to atheism. The difference between atheism and pantheism is not
big, because in pantheism 'God' is only a synonym for 'world' and
'nature', or means a special aspect of the world.
The basis of modern science was built in the 17th century. One of
the first consistent naturalists was Baruch Spinoza (1632-1677),
who explained the world in a panpsychist and panmaterialist way:
space or matter is one aspect of the world (or of God or of nature),
and thinking or consciousness a second. Johannes Kepler (1571-
1630) had explained the world in a quite similar way, based on a
monotheism with God inside the world.
Both Kepler and Spinoza were fought and ridiculed especially by
theologians but also by scientists. The alternative was the
philosophy of Descartes: on the one hand was the material world
and on the other human souls and God. Animals were considered pure
machines without consciousness. The current scientific world view
is based on the philosophy of Descartes. The big inconsistency of
Cartesianism (animals as pure machines, humans having souls)
was removed by removing the concept 'soul' (and 'God').
So why do you consider panpsychism as something supernatural?
One main reason for its defeat was that it was a naturalistic
explanation of the world not in agreement with theology.
According to my usage of the word 'reductionist', Descartes'
philosophy is reductionist with the exception of the concepts
'human soul' (and 'God'), whereas all explanations involving final or
teleological principles, or based on vitalism or panpsychism are not.
Many enzymes work at defined places in a cell. If we create an
enlarged model, where enzymes are like little balls, then the volume
of the whole cell is about 1000 cubic metres. Imagine this
situation concretely: a little ball must come very near to a
substrate, and substrate recognition even depends on the correct
alignment of the little ball. In addition to that, enzymes often
have to pass through cell
membranes in order to reach their destination. What is the moving
force of enzymes? It cannot be electromagnetic attraction or
repulsion. So the moving force must primarily depend on random
thermal motions (as Brownian movements do).
You certainly will object that we do not know well enough the
chemistry of enzymes in order to conclude that: there may always
be the needed chemical forces responsible for the 'apparently'
very purposeful motions of enzymes. This implies that the information
for these motions to desired destinations is somehow stored in
the amino acid sequence of an enzyme, in addition to the
information for folding, substrate specificity and so on, because
even similar enzymes can have very different destinations. A
mutation could change a transcription factor in such a way that the
protein would search for its usual substrate in the wrong chromosome.
The present-day scientific ignorance is no better evidence for
reductionism than for panpsychism! But is it really a necessary
ignorance? Ignorance is often the result of false premises.
I'm convinced that physical laws as described by classical physics
or by QM cannot be responsible for the fact that living organisms
evolved and survive. The often-cited 'complex dynamic systems',
such as the appearance of ordered vortices, waves or similar things,
don't affect evolution much more than the appearance of solar
systems does. And the appearance of crystals (carefully studied
by Kepler) is rather evidence for panpsychism than for reductionism.
One must not confuse logical reasoning with empirical facts.
Calculations of probability must be based on clear and sound
assumptions, but the calculations themselves must not be
influenced by empirical facts. From the fact that evolution has
occurred we cannot conclude that it can be explained on the basis
of the generally accepted metaphysical principles of current science.
My use of 'reductionism' is: to reduce all phenomena of nature to
a purely material basis.
I am a friend of a rationalist epistemology. But I am aware that
the theories by which we explain the world cannot be explained
themselves in the same way. The rationalist epistemology is
sometimes used to explain the simple (gravitation) by something
complicated (material gravitons).
Now it's me who is afraid that the explanation of the purposeful
movements of enzymes by an entirely mechanical "traffic control"
misses the point. How do you explain such a "transport system"?
Are there some kind of currents in the cell or even some kind of
taxis which transport the enzymes to their needed destinations?
When I wrote "the information for these motions to desired
destinations is somehow stored in the amino acid sequence
of an enzyme" I did not mean some kind of "bar code" on the
enzyme, which is interpreted by a cellular transport system being
responsible for the motions of the enzyme. It seems to me even
much more difficult to explain how such a "traffic control" could
transport all the enzymes to their very different destinations
depending on such a "bar code".
That enzymes may consist of several parts or domains and that
one such domain can be responsible for the destination of the
whole enzyme is obvious. There is no clear dividing line between
simple proteins and protein complexes.
It is actually a fact that "all theories that impute knowledge or
purpose to subcellular or cellular structures have been discarded"
by mainstream science. But my suspicion is that reductionist
(materialist) prejudices have been the main reason.
Normally the first who detects a new mechanism has the
possibility to provide a philosophical interpretation. I do not
agree with Monod's interpretation, because it is not based on
facts. The same facts can be interpreted in very different ways.
According to Monod's interpretation, every system we have
examined carefully enough can be called mechanical.
In my model, little proteins have diameters of a few millimetres,
e.g. myoglobin with 153 amino acids about 4.5 mm x 3.5 mm x 2.5 mm.
The major groove of DNA is about 2 mm wide. The whole human
DNA (both chromosome sets) is about 2000 km long. The major
groove is even longer. If recognition by transcription factors
depends on direct contact, then a transcription factor must
come very near (maybe 1 mm or 2 mm) to its destination and
should even have the correct alignment. A normal living cell
with a diameter of about 10 m would consist of 10^12 different
cubes of 1 mm length.
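The numbers in this scale model all follow from one conversion factor, 1 nm of reality to 1 mm of model (a magnification of 10^6). A sketch reproducing the round figures above:

```python
# Model rule: 1 nm in reality corresponds to 1 mm in the model,
# i.e. a magnification of 10**6.

def nm_to_model_mm(x_nm):
    """Real size in nanometres -> model size in millimetres."""
    return x_nm  # numerically identical by the model's definition

# Total human DNA (both chromosome sets): roughly 2 m = 2e9 nm real.
dna_model_mm = nm_to_model_mm(2e9)
dna_model_km = dna_model_mm / 1e6   # mm -> km
print(dna_model_km)                 # 2000.0

# A cell of 10 micrometres = 1e4 nm diameter becomes 1e4 mm = 10 m.
cell_model_mm = nm_to_model_mm(1e4)
print(cell_model_mm / 1000)         # 10.0

# Treating the model cell as a 10 m cube: number of 1 mm mini-cubes
# (a sphere of that diameter would hold about half as many).
n_cubes = cell_model_mm ** 3
print(f"{n_cubes:.0e}")             # 1e+12
```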
There is an essential difference between panpsychism and vitalism.
Vitalism starts with dead matter and introduces some kind of 'vital
forces' or souls. In panpsychism, however, there is no dead matter,
because even elementary particles have two aspects, a 'materialist'
one and a 'vitalist' one.
Just as a human soul influences the behaviour of a human body,
some kind of primitive soul is responsible for the astonishingly
complex behaviour of photons.
The immune system has turned out to be more complex than assumed
by the "clonal selection theory". To kill or inactivate those
cells whose antibodies react to "self" is not so easy.
There was once the following problem:
How can the immune system distinguish between "self"
and "foreign"?
It was 'resolved' in this way:
Cells or antibodies reacting to "self" are inactivated or
killed by the immune system whereas those reacting to
"foreign" are not.
Once I read somewhere (or dreamt that I had read) that if we add
antigens to one of two glasses containing the same fresh blood, the
corresponding antibodies also appear in the glass without antigens.
If this is true, it would be very strong evidence for the psychon
theory.
> You are also overly fond of your own hypotheses, a usually-fatal
> affliction in science.
It is necessary to have a concrete imagination of the proportions
between cells, enzymes, molecules and so on. Therefore I have
introduced the enlarged model where 1 mm corresponds to 1 nm.
The 'diameters' of enzymes are then in the order of a few millimetres
and the 'diameter' of a water molecule is about 0.3 mm (there is
room for 33.3 water molecules in 1 cubic millimetre).
It seems that you do not know the theory of Brownian motion.
Einstein calculated in a 1905 paper that a particle with a
diameter of 0.001 mm (the size of a bacterium) would undergo
an average displacement of 0.0008 mm in a second and of 0.006 mm
(less than the length of a normal cell) in a minute (at a
temperature of 17°C). The average motions per unit time of
enzymes are certainly longer because enzymes are much smaller.
The bigger the particle, the slower its random thermal motion.
The reason is simple: random collisions with the many surrounding
molecules can cancel each other out, and the remaining change in
momentum does not increase proportionally to the mass of the
moving particle.
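Einstein's figures can be reproduced from the Stokes-Einstein relation D = kT/(6*pi*eta*r) together with his mean-displacement formula lambda = sqrt(2*D*t). A sketch, assuming a viscosity of about 1.08 mPa·s for water at 17 °C (the exact value Einstein used may differ slightly):

```python
from math import pi, sqrt

K_B = 1.381e-23  # Boltzmann constant, J/K
T   = 290.15     # 17 degrees Celsius in kelvin
ETA = 1.08e-3    # viscosity of water at 17 C, Pa*s (assumed value)
R   = 0.5e-6     # radius of a 1-micrometre (0.001 mm) particle, m

# Stokes-Einstein diffusion coefficient for a small sphere:
D = K_B * T / (6 * pi * ETA * R)

def mean_displacement(t_seconds):
    """Einstein's root-mean-square displacement along one axis."""
    return sqrt(2 * D * t_seconds)

print(mean_displacement(1.0))   # roughly 9e-7 m, i.e. ~0.0008-0.0009 mm
print(mean_displacement(60.0))  # roughly 7e-6 m, i.e. ~0.006-0.007 mm
```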
In our model, a normal living cell with a diameter of about
10 m consists of 10^12 different cubes of 1 mm length. So it's
rather difficult for a substrate to meet its enzyme in the right
orientation "just by randomly bouncing around in the cell".
Don't forget that most of the 10^14 random collisions, primarily
with water molecules, have no effect at all! Every square millimetre
of the enzyme surface corresponds to about 10 water molecules.
At most 5 percent of the hundreds of different mitochondrial
proteins are coded by mitochondrial DNA. The proteins even have
to pass through a double membrane in order to reach their
destination. According to the very convincing endosymbiont theory,
at least most of these proteins (or their ancestors) were once
coded by the mitochondrial DNA itself. How do you explain the
fact that the proteins could still find their destinations within
the mitochondrion even after the transfer of their genes to the
nucleus?
>> The voyage of transcription factors to their destinations can be
>> compared with the voyages of migratory birds and other migratory animals.
[ Theory of Brownian motion ]
I don't understand: according to Einstein's calculation, a
bacterium-sized particle travels 0.8 and not 800 micrometers in a
second. The mean path is proportional to the square root of both
the absolute temperature and the time, and inversely proportional
to the square root of the diameter (of a spherical particle). The
mean path of a spherical enzyme with a diameter of 10 nanometers is
roughly 10 micrometers in a second and roughly 10 nanometers in a
microsecond. For a mean path of 1 nanometer (about three times the
length of a water molecule), ten nanoseconds are needed! The mean
path of a little protein such as trypsin in 10 nanoseconds is
about 2 nanometers.
Why is a 100-fold increase in time needed for 10-fold increase in
the mean path? Because the movements are purely random and
the particle will come back to the starting point over and over
again (in infinite time).
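The two scaling laws used here (mean path proportional to the square root of time and inversely proportional to the square root of diameter) can be checked by scaling from Einstein's reference value of about 0.8 micrometres per second for a 1-micrometre particle:

```python
from math import sqrt

# Reference point taken from the text: ~0.8 micrometres of mean path
# in 1 second for a particle of 1 micrometre diameter.
REF_PATH = 0.8e-6  # m
REF_DIAM = 1e-6    # m
REF_TIME = 1.0     # s

def mean_path(diameter, t):
    """Scale the reference: path ~ sqrt(t) and ~ 1/sqrt(diameter)."""
    return REF_PATH * sqrt((t / REF_TIME) * (REF_DIAM / diameter))

print(mean_path(10e-9, 1.0))   # ~8e-6 m: roughly 10 micrometres/second
print(mean_path(10e-9, 1e-6))  # ~8e-9 m: roughly 10 nanometres/microsecond

# A 10-fold longer mean path requires a 100-fold longer time:
print(mean_path(REF_DIAM, 100.0) / REF_PATH)  # 10-fold
```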
"The 'diameters' of enzymes are [in our model] in the order of a
few millimetres and the 'diameter' of a water molecule is about
0.3 mm (there is room for 33.3 water molecules in 1 cubic
millimetre)."
Please imagine very concretely the many water molecules
(0.3 millimeters) colliding with the enzyme (e.g. a diameter of
some millimeters). Think about the effect of such a collision on
a protein whose mass is thousands of times bigger than the
mass of the water molecule.
Simple replicating proteins are not enough to start evolution,
because they cannot continuously evolve to proteins which
are coded by RNA or DNA. Take for instance a folded protein
with an amino acid length of 32. Can you imagine some
mechanism by which the sequence information is transferred
to an RNA or DNA molecule 96 nucleotides long (or 64 in a
genetic precursor code)?
How big is the probability that DNA or RNA sequences which
correspond to the already-evolved proteins could have appeared
by chance? How probable is it that proteins, being nothing
more than dead matter, could invent the genetic code?
According to the psychon theory, enzymes are like primitive
animals. At least proteins and RNA enzymes should have evolved
independently at first. Self-replicating proteins at some point
learned to use short RNA templates in order to accelerate
the production of new proteins. This technique was
continuously improved not only by pure chance but also by
final laws of nature (one could call it God).
The emergence of the genetic code can be compared with
the emergence of human languages, and the emergence of
the highly complex living cell with the emergence of modern
cities. All species and evolutionary innovations were designed
in a similar way to how houses, cars or ships have been designed.
But at the same time they have evolved by chance, in a similar
way to how houses, cars and ships have evolved.
If it is true that living cells using the modern genetic code
appeared very early on the earth, then it is highly improbable that
this very complex code evolved for the first time on earth. Such an
information transfer from 'cosmic ancestors' to the earth is
possible, because an essential part of the information of
living systems is stored in the immaterial psychons.
We cannot reproduce the behaviour of enzymes of the early
earth because their behaviour depends on psychons which have
further evolved. So amino acid sequences which would have folded
millions or billions of years ago, today do not fold any more.
In the same way, sequences which fold today did not fold on the
early earth.
With this example I intend only to show how few conditions give such
a low probability of 10^-60. If for a self-replicating system only
60 building blocks were needed and the probability of a correct
chemical bond with the others were 10% for each of the 60 blocks,
then the final probability of the system would also come to 10^-60.
20^-32 = 2.33 * 10^-42
we have a probability of only
2^-2 = 0.25 !!!
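The arithmetic in these fragments can be verified directly (the 60 building blocks with a 10% bond probability are the post's assumption):

```python
# 60 building blocks, each bonding correctly with probability 0.1:
p_chain = 0.1 ** 60
print(p_chain)               # on the order of 10**-60

# Distinct sequences of length 32 over the 20 amino acids,
# and the probability of one particular sequence:
n_sequences = 20 ** 32
p_sequence = 20.0 ** -32
print(f"{n_sequences:.2e}")  # about 4.29e+41
print(f"{p_sequence:.2e}")   # 2.33e-42
```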
... And it is not possible to impress me with scientific papers
which presuppose the very thing that should be explained.
It is clear that every obstacle to forming a correct chain of
amino acids can be removed by some techniques or by some special
assumptions. But they represent conditions to which we also
must ascribe a probability.
"Mineral-catalyzed or assisted peptide formation" is not available
everywhere. Each of the 20 amino acids must then be available in
high proportions exactly in those places where "mineral-catalyzed
or assisted peptide formation" is possible.
>> I assume that the 10^14 interactions/sec are meant in the
>> following way: two neighbouring molecules collide with a
>> frequency of 10^14 Hertz.
The existence of real self-replicating proteins (the Lee protein
is only self-ligating) would be strong evidence for the psychon
theory (in the same way that prions and inteins are strong
evidence for this theory).
There are proteins with very different behaviours.
Even if we take for granted that a 32-element protein creates
correct amino acid sequences of the needed length, a big
problem remains. There are 20^32 different sequences.
So general chemical and physical laws must lead to:
I guarantee you, that's completely absurd.