Sunday, December 28, 2008

Neanderthal Timeline

The Neanderthal (Homo neanderthalensis) or Neandertal was a species of the genus Homo that inhabited Europe and parts of western Asia. The first proto-Neanderthal traits appear in Europe as early as 350,000 years ago. By 130,000 years ago, full-blown Neanderthal characteristics had appeared, and by 50,000 years ago Neanderthals had disappeared from Asia, although they did not go extinct in Europe until 33,000 to 24,000 years ago, perhaps 15,000 years after Homo sapiens had migrated into Europe.


Thursday, December 25, 2008

LUCA: Last Universal Common Ancestor

In trying to find the elements necessary for the origin of life, another important question is: "what was the last universal common ancestor of life?"

This 3.8-billion-year-old organism was not the creature usually imagined. The prevailing belief has been that LUCA was a heat-loving or hyperthermophilic organism, like those odd organisms living today in the hot vents along the mid-ocean ridges, at temperatures above 90 degrees Celsius.
However, the new data suggest that LUCA was actually sensitive to warmer temperatures and lived in a climate below 50 degrees Celsius.

The research compared genetic information from modern organisms to characterize the ancient ancestor of all life on earth. Researchers identified common genetic traits among animals, plants, and bacteria, and used them to create a tree of life with branches representing separate species. These all stemmed from the same trunk: LUCA, whose genetic makeup the researchers then characterized further.

The RNA Connection to the Origin of Life
What this means is that, in the origin-of-life question, an important step has been taken towards reconciling conflicting ideas about LUCA. In particular, the new results are much more compatible with the theory of an early RNA world, in which early life on Earth was based on ribonucleic acid (RNA) rather than deoxyribonucleic acid (DNA).

RNA is particularly sensitive to heat and is unlikely to be stable in the hot temperatures of the early Earth. But the data indicate that LUCA found a cooler micro-climate in which to develop, which helps resolve this paradox and shows that environmental micro-domains played a critical role in the development of life on Earth.

Tuesday, December 23, 2008

RNA and the Origin of Life

What were the conditions necessary for the formation of life? Some scientists believe that RNA was responsible for this development. RNA, the single-stranded precursor to DNA, normally grows one nucleic base at a time, extending sequentially like a linked chain. The problem is that in the primordial world RNA molecules had no enzymes to catalyze this reaction, and while RNA growth can proceed naturally, the rate would be so slow that the RNA could never get more than a few pieces long (for as nucleic bases attach to one end, they can also drop off the other).

The RNA mechanism for overcoming this thermodynamic barrier has been studied by incubating short RNA fragments in water at different temperatures and pH. In an acidic environment at temperatures below 70 degrees Celsius, RNA pieces ranging from 10 to 24 bases in length could naturally fuse into larger fragments, generally within 14 hours.

In this process, the RNA fragments came together as double-stranded structures and then joined at the ends. The fragments did not have to be the same size, but the efficiency of the reactions depended on fragment size: the larger the better, until an optimal efficiency is reached around 100 bases, after which it drops again.

The researchers note that this spontaneous fusing, or ligation, would be a simple way for RNA to overcome initial barriers to growth and reach a biologically important size; at around 100 bases long, RNA molecules can begin to fold into functional, 3D shapes.

Saturday, December 20, 2008

RadioActive Dating - Potassium-Argon

Potassium-Argon dating

The element potassium (symbol K) occurs as three nuclides: K39, K40, and K41. Only K40 is radioactive; the other two are stable. K40 can decay in two different ways: it can break down into either calcium or argon. The ratio of calcium formed to argon formed is fixed and known. Therefore the amount of argon formed provides a direct measurement of the amount of potassium-40 that has decayed since the specimen was originally formed.

Because argon is an inert gas, it could not have been chemically bound in the mineral when it was first formed from molten magma. Any argon present in a mineral containing potassium-40 must therefore have been formed as the result of radioactive decay. F, the fraction of K40 remaining, is equal to the amount of potassium-40 in the sample divided by the sum of the potassium-40 in the sample plus the calculated amount of potassium-40 required to produce the amount of argon found. The age can then be calculated from equation (1).
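
Here, as a minimal sketch, is the decay arithmetic behind that calculation. The half-life and branching fraction are standard published values rather than numbers quoted in the post, and the function and sample figures are hypothetical:

```python
import math

# Standard physical values (assumed, not from the post).
K40_HALF_LIFE_YEARS = 1.25e9    # half-life of potassium-40
ARGON_BRANCH_FRACTION = 0.109   # ~10.9% of K-40 decays yield Ar-40; the rest yield Ca-40

def k_ar_age(k40_atoms: float, ar40_atoms: float) -> float:
    """Estimate a mineral's age from measured K-40 and radiogenic Ar-40."""
    # Total K-40 that has decayed, inferred from the argon via the fixed branching ratio.
    k40_decayed = ar40_atoms / ARGON_BRANCH_FRACTION
    # F, the fraction of the original K-40 remaining, as defined in the text.
    f = k40_atoms / (k40_atoms + k40_decayed)
    # Radioactive decay: F = (1/2)**(t / half_life)  =>  t = half_life * log2(1/F)
    return K40_HALF_LIFE_YEARS * math.log(1.0 / f, 2)

# Hypothetical sample in which 10% of the original K-40 has decayed:
print(f"{k_ar_age(k40_atoms=9.0e10, ar40_atoms=0.109e10):.3e} years")  # ~1.9e8
```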

In spite of the fact that it is a gas, the argon is trapped in the mineral and can't escape. (Creationists claim that argon escape renders age determinations invalid. However, any escaping argon would lead to a determined age younger, not older, than the actual one. The creationist "argon escape" theory therefore does not support their young-earth model.)

The argon age determination of the mineral can be confirmed by measuring the loss of potassium. In old rocks, there will be less potassium present than was required to form the mineral, because some of it has been transmuted to argon. The decrease in the amount of potassium required to form the original mineral has consistently confirmed the age as determined by the amount of argon formed.

Thursday, December 18, 2008

Human Migration From Asia To Americas

Finding: A new set of ideas on migration to North America has been proposed. One is a hypothesis that maps the peopling process during the pioneering phase and well beyond; the other is that there was much more genetic diversity in the founder population than was previously believed.
The Conventional View
Questions about human migration from Asia to the Americas have perplexed anthropologists for decades, but as scenarios about the peopling of the New World come and go, the big questions have remained. Do the ancestors of Native Americans derive from only a small number of “founders” who trekked to the Americas via the Bering land bridge? How did their migration to the New World proceed? Did climate change have anything to do with it? And finally, what took them so long?

Changing the Conventional View
A phylogeographic analysis of a new mitochondrial genome dataset allows scientists to draw several conclusions.
30,000 years ago - clades
First, the ancestral population on its way to the Americas paused in Beringia long enough for specific mutations to accumulate. These mutations separate the New World founder lineages from their Asian sister-clades. (A clade is a group of mitochondrial DNAs (mtDNAs) that share a recent common ancestor. Sister-clades are two such groups whose respective common ancestors are themselves closely related.)

Another way to express this: the ancestors of Native Americans who first left Siberia for greener pastures, perhaps as much as 30,000 years ago, came to a standstill in Beringia – a landmass that existed during the last glacial maximum, extending from northeastern Siberia to western Alaska and including the Bering land bridge – and they were isolated there long enough – as much as 15,000 years – to mature and differentiate genetically from their Asian sister-clades.
Lineages are distributed quickly not gradually
Founding lineages or haplotypes are uniformly distributed across North and South America instead of exhibiting a nested structure from north to south. So after the Beringian standstill, the initial north-to-south migration occurred in a swift pioneering process, not a gradual diffusion.

Bi-directional Migrations to North America then Back to Beringia
The DNA data also suggest a lot more going back and forth than was previously suspected of populations during the past 30,000 years in Northeast Asia and North America. The dataset analysis shows that after the initial peopling of Beringia, there were a series of back migrations to Northeast Asia as well as forward migrations to the Americas from Beringia. There was a bi-directional gene flow between Siberia and the North American Arctic.

Using Mitochondrial datasets from populations in the Americas and East Asia
To investigate the pioneering phase in the Americas, a research team – a group of geneticists from around the world – pooled their genomic datasets and analyzed 623 complete mitochondrial DNAs (mtDNAs) from the Americas and Asia, including 20 new complete mtDNAs from the Americas and seven from Asia.
What Mitochondrial DNA data reflects
Mitochondrial DNA – that is, DNA found in organelles rather than in the cell nucleus – is considered to be of separate evolutionary origin and is inherited from only one parent: the mother. The sequence dataset was used to direct genotyping in 20 American and 26 Asian populations.

The Discovery of 3 New Sub-Clades
The team identified three new sub-clades that incorporate nearly all of Native American haplogroup C mtDNAs – all of them widely distributed in the New World, but absent in Asia; and they defined two additional founder groups, which differ by several mutations from the Asian-derived ancestral clades.

Disconnect in migration dates
Did the migration occur quickly or slowly? Migration may have begun 30,000 years ago, but the earliest archeological evidence of arrival dates to only 15,000 years ago.
The point of departure places Homo sapiens at the Yana Rhinoceros Horn Site in Siberia as early as 30,000 years before the present, but the earliest archaeological site at the southern end of South America is dated to only 15,000 years ago.

Two possible scenarios
First, the ancestors of Native Americans peopled Beringia before the Last Glacial Maximum, but remained locally isolated – likely because of ecological barriers – until entering the Americas 15,000 years before the present (the Beringian incubation model, BIM).
The second is that the ancestors of Native Americans did not reach Beringia until just before 15,000 years before the present, and then moved continuously on into the Americas, being recently derived from a larger parent Asian population (direct colonization model, DCM).

The conclusion of the study
The team set out to test the two hypotheses: one, that Native Americans’ ancestors moved directly from Northeast Asia to the Americas; the other, that Native American ancestors were isolated from other Northeast Asian populations for a significant period of time before moving rapidly into the Americas all the way down to Tierra del Fuego.

The data supports the second hypothesis: The ancestors of Native Americans peopled Beringia before the Last Glacial Maximum, but remained locally isolated until entering the Americas at 15,000 years before the present. So they moved into the Americas quickly.

Saturday, December 13, 2008

Genome Sequence of Neanderthal Man

Were Neanderthals and Humans connected at some time in the past? This question is still up for debate, but recently the complete mitochondrial genome of a 38,000-year-old Neanderthal has been sequenced.

At the Max-Planck Institute for Evolutionary Anthropology in Germany, scientists have reconstructed the genome sequence. They sequenced the Neanderthal mitochondria—powerhouses of the cell with their own DNA, including 13 protein-coding genes—nearly 35 times over. This coverage allowed them to distinguish differences between the Neanderthal and human genomes caused by damage to the degraded DNA extracted from ancient bone from true evolutionary changes.

This new sequence and its analysis confirm that the mitochondrial DNA of Neanderthals falls outside the variation found in humans today, and it provides no evidence of interbreeding between the two lineages, although that remains a possibility. It also shows that the last common ancestor of Neanderthals and humans lived about 660,000 years ago, give or take 140,000 years.

The new sequence revealed that the Neanderthals have fewer evolutionary changes overall, but a greater number that alter the amino acid building blocks of proteins. This suggests that the Neanderthals had a smaller population size than humans do, which makes natural selection less effective in removing mutations.

That notion is consistent with arguments made by other scientists based upon the geological record. Anthropologists argue that only a few thousand Neanderthals roamed Europe 40,000 years ago. That smaller population might have been the result of the smaller size of Europe compared to Africa. Another issue is that the Neanderthals also would have had to deal with repeated glaciations.

Tuesday, December 2, 2008

Single Main Migration Across Bering Strait


Did a relatively small number of people from Siberia who trekked across a Bering Strait land bridge some 12,000 years ago give rise to the native peoples of North and South America?

Researchers working with an international team of geneticists and anthropologists have produced new genetic evidence that's likely to hearten proponents of the land bridge theory. The study is one of the most comprehensive analyses so far among efforts to use genetic data to shed light on the topic.

The researchers examined genetic variation at 678 key locations or markers in the DNA of present-day members of 29 Native American populations across North, Central and South America. They also analyzed data from two Siberian groups. The analysis shows:

  • genetic diversity, as well as genetic similarity to the Siberian groups, decreases the farther a native population is from the Bering Strait -- adding to existing archaeological and genetic evidence that the ancestors of native North and South Americans came by the northwest route.

  • a unique genetic variant is widespread in Native Americans across both American continents -- suggesting that the first humans in the Americas came in a single migration or multiple waves from a single source, not in waves of migrations from different sources. The variant, which is not part of a gene and has no biological function, has not been found in genetic studies of people elsewhere in the world except eastern Siberia.

The researchers say the variant likely occurred shortly prior to migration to the Americas, or immediately afterwards.

The Genetic Markers for North American Populations originate in East Asia
There is reasonably clear genetic evidence that the most likely candidate for the source of Native American populations is somewhere in east Asia, the research concludes. If there were a large number of migrations, and most of the source groups didn't have the variant, then you would not see the widespread presence of the mutation in the Americas.

Studies with Genetic Markers

Researchers studied the same set of 678 genetic markers used in the new study in 50 populations around the world, to learn which populations are genetically similar and what migration patterns might explain the similarities. For North and South America, the current research breaks new ground by looking at a large number of native populations using a large number of markers.

The pattern the research uncovered -- that as the founding populations moved south from the Bering Strait, genetic diversity declined -- is what one would expect when migration is relatively recent. There has not been time yet for mutations that typically occur over longer periods to diversify the gene pool.
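
One standard way to picture this pattern, though not the method used in the study, is a serial-founder simulation: each step away from the origin passes through a small founding group, and heterozygosity (a measure of genetic diversity) drops step by step. All parameters here are illustrative:

```python
import random

def heterozygosity(pop: list) -> float:
    """Expected heterozygosity, 2p(1-p), for a two-allele population."""
    p = sum(pop) / len(pop)  # frequency of allele "1"
    return 2 * p * (1 - p)

random.seed(1)
pop = [0, 1] * 500  # origin population: two alleles at 50/50
for step in range(6):
    print(f"step {step}: H = {heterozygosity(pop):.3f}")
    founders = random.sample(pop, 20)                     # small group moves on
    pop = [random.choice(founders) for _ in range(1000)]  # regrows from founders
```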

The study also found that:

  • The study's findings hint at support for scientists who believe early inhabitants followed the coasts to spread south into South America, rather than moving in waves across the interior.

  • Assuming a migration route along the coast provides a slightly better fit with the patterns seen in genetic diversity.

  • Populations in the Andes and Central America showed genetic similarities.

  • Populations from western South America showed more genetic variation than populations from eastern South America.

  • Among closely related populations, the ones more similar linguistically were also more similar genetically.

Wednesday, November 26, 2008

RadioActive Carbon Dating

The radiocarbon dating method was developed in the 1940s by Willard F. Libby and a team of scientists at the University of Chicago. It subsequently evolved into the most powerful method of dating late Pleistocene and Holocene artifacts and geologic events up to about 50,000 years in age.

Carbon Dating

Carbon has unique properties that are essential for life on earth. Familiar to us as the black substance in charred wood, as diamonds, and as the graphite in “lead” pencils, carbon comes in several forms, or isotopes. One rare form has atoms that are 14 times as heavy as hydrogen atoms: carbon-14, or 14C, or radiocarbon. Carbon-14 is made when cosmic rays knock neutrons out of atomic nuclei in the upper atmosphere. These displaced neutrons, now moving fast, hit ordinary nitrogen (14N) at lower altitudes, converting it into 14C. Unlike common carbon (12C), 14C is unstable and slowly decays back into nitrogen, releasing energy. This instability makes it radioactive.

We can take a sample of air, count how many 12C atoms there are for every 14C atom, and calculate the 14C/12C ratio. Because 14C is so well mixed with 12C, we expect to find that this ratio is the same whether we sample a leaf from a tree or a part of your body. The C-14 within an organism is continually decaying into stable carbon isotopes, but since the organism keeps absorbing C-14 during its life, the ratio of C-14 to C-12 remains about the same as the ratio in the atmosphere. When the organism dies, the ratio of C-14 within its carcass begins to gradually decrease. The rate of decay of 14C is such that half of an amount will convert back to 14N in 5,730 years (plus or minus 40 years). This is the “half-life.” So if a sample contains one-half of the 14C found in living organisms, it has a theoretical age of one half-life, or 5,730 years; if it contains one-quarter, its theoretical age is two half-lives, or 11,460 years. Obviously, this works only for things which were once living; it cannot be used to date volcanic rocks, for example. Anything over about 50,000 years old should theoretically have no detectable 14C left. That is why radiocarbon dating cannot give millions of years. In fact, if a sample contains 14C, it is good evidence that it is not millions of years old.
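
To make the half-life arithmetic concrete, here is a minimal sketch; the function name and example values are illustrative:

```python
import math

C14_HALF_LIFE_YEARS = 5730  # half-life of carbon-14, per the text

def radiocarbon_age(fraction_of_modern: float) -> float:
    """Age from a sample's 14C/12C ratio, expressed as a fraction of the
    modern (living-organism) ratio."""
    # N/N0 = (1/2)**(t / half_life)  =>  t = half_life * log2(N0/N)
    return C14_HALF_LIFE_YEARS * math.log(1.0 / fraction_of_modern, 2)

print(radiocarbon_age(0.5))   # one half-life:   5730 years
print(radiocarbon_age(0.25))  # two half-lives: 11460 years
```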

Because the decay rate is logarithmic, radiocarbon dating has significant upper and lower limits. It is not very accurate for fairly recent deposits. In recent deposits so little decay has occurred that the error factor (the standard deviation) may be larger than the date obtained. The practical upper limit is about 50,000 years, because so little C-14 remains after almost 9 half-lives that it may be hard to detect and obtain an accurate reading, regardless of the size of the sample.

The ratio of C-14 to C-12 in the atmosphere is not constant. Although it was originally thought that there has always been about the same ratio, radiocarbon samples taken and cross dated using other techniques like dendrochronology have shown that the ratio of C-14 to C-12 has varied significantly during the history of the Earth.

This variation is due to changes in the intensity of the cosmic radiation bombardment of the Earth, and changes in the effectiveness of the Van Allen belts and the upper atmosphere to deflect that bombardment. For example, because of the recent depletion of the ozone layer in the stratosphere, we can expect there to be more C-14 in the atmosphere today than there was 20-30 years ago. To compensate for this variation, dates obtained from radiocarbon laboratories are now corrected using standard calibration tables developed in the past 15-20 years.
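
As a rough picture of how such a correction might be applied, here is a toy sketch that interpolates a raw radiocarbon age against a calibration table; the table values below are made up for illustration and are not real calibration data:

```python
import bisect

# Toy calibration "table" (made-up pairs, not real calibration data):
# (radiocarbon age, calendar age), ordered by radiocarbon age.
CURVE = [(0, 0), (1000, 950), (2000, 1990), (5000, 5730), (10000, 11400)]

def calibrate(radiocarbon_age: float) -> float:
    """Linearly interpolate a radiocarbon age against the calibration curve."""
    rc_ages = [rc for rc, _ in CURVE]
    i = bisect.bisect_left(rc_ages, radiocarbon_age)
    if i == 0:
        return float(CURVE[0][1])
    if i == len(CURVE):
        return float(CURVE[-1][1])
    (x0, y0), (x1, y1) = CURVE[i - 1], CURVE[i]
    return y0 + (y1 - y0) * (radiocarbon_age - x0) / (x1 - x0)

print(calibrate(3000.0))  # interpolated calendar age
```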

When reading archaeological reports, be sure to check if the carbon-14 dates reported have been calibrated or not.

The major developments in the radiocarbon method up to the present day involve improvements in measurement techniques and research into the dating of different materials. Briefly, the initial solid carbon method developed by Libby and his collaborators was replaced with the gas counting method in the 1950s. Liquid scintillation counting, utilising benzene, acetylene, ethanol, methanol etc., was developed at about the same time.

Today the vast majority of radiocarbon laboratories utilise these two methods of radiocarbon dating. Of major recent interest is the development of the Accelerator Mass Spectrometry method of direct C14 isotope counting.

In 1977, the first AMS measurements were conducted by teams at Rochester/Toronto and the General Ionex Corporation, and soon after at the Universities of Simon Fraser and McMaster. The crucial advantage of the AMS method is that only milligram-sized samples are required for dating. Of great public interest has been the AMS dating of carbonaceous material from prehistoric rock art sites, the Shroud of Turin and the Dead Sea Scrolls in the last few years.

The development of high-precision dating (up to ±2.0 per mille or ±16 yr) in a number of gas and liquid scintillation facilities has been of similar importance (laboratories at Belfast (N.Ireland), Seattle (US), Heidelberg (Ger), Pretoria (S.Africa), Groningen (Netherlands), La Jolla (US), Waikato (NZ) and Arizona (US) are generally accepted to have demonstrated radiocarbon measurements at high levels of precision).

The calibration research undertaken primarily at the Belfast and Seattle labs required high levels of precision, which has resulted in the extensive calibration data now available. The development of small-sample capabilities for LSC and gas labs has likewise been an important development: samples as small as 100 mg can be dated to moderate precision on minigas counters, with similar sample sizes needed using minivial technology in liquid scintillation counting.

The radiocarbon dating method remains arguably the most dependable and widely applied dating technique for the late Pleistocene and Holocene periods.

Figure: The "Curve of Knowns"

The first acid test of the new method was based upon radiocarbon dating of known age samples primarily from Egypt (the dates are shown in the diagram by the red lines, each with a ±1 standard deviation included). The Egyptian King's name is given next to the date obtained. The theoretical curve was constructed using the half-life of 5568 years. The activity ratio relates to the carbon 14 activity ratio between the ancient samples and the modern activity. Each result was within the statistical range of the true historic date of each sample.



Other forms of Radioactive Dating

There are various other radiometric dating methods used today to give ages of millions or billions of years for rocks. These techniques, unlike carbon dating, mostly use the relative concentrations of parent and daughter products in radioactive decay chains. For example, potassium-40 decays to argon-40; uranium-238 decays to lead-206 via other elements like radium; uranium-235 decays to lead-207; rubidium-87 decays to strontium-87; etc. These techniques are applied to igneous rocks, and are normally seen as giving the time since solidification.

Rubidium occurs in nature as two isotopes: radioactive Rb-87 and stable Rb-85. Rb-87 decays with a half-life of 48.8 billion years to Sr-87. This half-life is so long that the Rb-Sr method is normally only used to date rocks that are older than about 100 million years.

Tuesday, November 25, 2008

Neutral Evolution and the Shape of Our Genome

Finding: There is a growing body of evidence which shows that many of the genetic bits and pieces that drive evolutionary changes do not confer any advantages or disadvantages to humans or other animals.

The conventional view
The basic belief of evolution was that all random genetic changes that manage to stick around confer some selective advantage on the species.

But a study concludes that we are what we are largely due to random changes that are completely neutral. This study reinforces and highlights the equal, and in some cases greater, importance of neutral genetic drift.

Repeat Elements
Repeat elements are fragments of DNA containing the same repetitive sequence of chemical base pairs repeated several hundred times. Experiments demonstrate that repeat elements rose to prominence without offering any benefits to the organisms they inhabit. They are one of the major architectural markers of the human genome, making up over 40 percent of it.

Numts: Copies of mitochondrial sequences found in nuclear DNA

One type of repeat element was found while looking at genes associated with Bardet Biedl syndrome, a rare disorder. Researchers found portions of DNA that had been copied from the mitochondria, the energy-making apparatus of human cells that has its own small genome. These mitochondrial sequences are known as numts.

More Numts as the species gets more sophisticated
The whole human genome has more than 1200 such pieces of mitochondrial DNA of various lengths embedded in its chromosomes. While chimps have a comparable number, mice and rats have only around 600 numts. Since numts increase in frequency as species advance, this suggested there was some evolutionary purpose to keeping them around.

But none of these numts contained an actual gene to make a protein that does anything, nor did they seem to control the function of any nearby genes. These numts are a neutral part of our genome. If anything, they may be mildly negative since long repeat sequences can be unstable or get inserted inside genes and disrupt them.

By comparing the sequences of human numts with those in different animals, the researchers believe they have uncovered a possible reason why these potentially damaging but mostly neutral bits of DNA accumulate over time. How closely the different species' sequences match provides an estimate of when a particular sequence was inserted into the ancestor of the human genome.
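
The dating logic resembles a simple molecular clock. Here is a toy sketch with an assumed neutral substitution rate and made-up mismatch counts, not the study's actual calibration:

```python
NEUTRAL_RATE = 2.2e-9  # assumed substitutions per site per year (illustrative)

def numt_insertion_age(mismatches: int, aligned_sites: int) -> float:
    """Rough age of a numt from its divergence against the mtDNA source;
    changes accumulate on both copies, hence the factor of 2."""
    divergence = mismatches / aligned_sites
    return divergence / (2.0 * NEUTRAL_RATE)

# Hypothetical numt differing from its source at 120 of 500 aligned sites:
print(f"{numt_insertion_age(120, 500):.2e} years")  # ~5.5e7, near primate origins
```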

Numts became embedded roughly when primates emerged: 54 Million years ago
Calculations based on the location and structure of the numts revealed that most became embedded in our genome over a 10-million-year period centered roughly 54 million years ago -- right around the time when the first primates emerged. When new species emerge, their numbers, and therefore their genetic differences, are very small. The consequence is a genetic bottleneck during which any change in the genome will either be eliminated quickly or spread to the whole population quickly.

Numts expanded because they were not eliminated - they were not detrimental

Numts, being "neutral," were generally at low levels in ancient mammals, but during the primate emergence 54 million years ago, they accumulated and spread through the small early primate populations precisely because they were not detrimental enough to be eliminated. Then, as these populations expanded, numts reached stable but higher frequencies.

Saturday, November 22, 2008

Radioactive Dating: Rubidium-Strontium

Rubidium-Strontium dating: The nuclide rubidium-87 decays, with a half-life of 48.8 billion years, to strontium-87. Strontium-87 is a stable isotope; it does not undergo further radioactive decay. (Do not confuse it with the highly radioactive isotope strontium-90.) Strontium occurs naturally as a mixture of several nuclides, including the stable isotope strontium-86.

If three different strontium-containing minerals form at the same time in the same magma, each will have the same ratios of the different strontium nuclides, since all strontium nuclides behave the same chemically. (Note that this does not mean that the ratios are the same everywhere on earth. It merely means that the ratios are the same in the particular magma from which the test sample was later taken.) As strontium-87 forms, its ratio to strontium-86 will increase. Strontium-86 is a stable isotope that does not undergo radioactive change, and it is not formed as the result of any radioactive decay process, so the amount of strontium-86 in a given mineral sample will not change. Therefore the relative amounts of rubidium-87 and strontium-87 can be determined by expressing their ratios to strontium-86: Rb-87/Sr-86 and Sr-87/Sr-86. In effect, we measure the amounts of rubidium-87 and strontium-87 as ratios to an unchanging amount of strontium-86.

Because of radioactive decay, the fraction of rubidium-87 decreases from an initial value of 100% at the time of formation of the mineral and approaches zero with an increasing number of half-lives. At the same time, the fraction of strontium-87 formed increases from zero and approaches 100%. The two curves cross at half-life = 1.00, where the fractions of Rb-87 and Sr-87 each equal 0.500. At half-life = 2.00, Rb-87 = 25% and Sr-87 = 75%, and so on.
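
A short sketch of these fractions, together with the corresponding age formula; the function names and the example ratio are illustrative:

```python
import math

RB87_HALF_LIFE_YEARS = 48.8e9  # half-life of rubidium-87, per the text

# Fractions of Rb-87 remaining and Sr-87 formed after n half-lives.
for n in (0.0, 1.0, 2.0, 3.0):
    rb87 = 0.5 ** n
    print(f"half-lives: {n:.2f}  Rb-87: {rb87:.3f}  Sr-87: {1 - rb87:.3f}")

def rb_sr_age(sr87_per_rb87: float) -> float:
    """Age from the ratio of radiogenic Sr-87 to remaining Rb-87."""
    # Sr87/Rb87 = 2**(t / half_life) - 1  =>  t = half_life * log2(1 + Sr87/Rb87)
    return RB87_HALF_LIFE_YEARS * math.log(1.0 + sr87_per_rb87, 2)

print(f"{rb_sr_age(1.0):.3e} years")  # ratio 1.0 -> exactly one half-life
```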

Wednesday, November 12, 2008

Radioactive Dating: Uranium-Lead

The uranium-lead method is the longest-used dating method. It was first used in 1907, about a century ago. The uranium-lead system is more complicated than other parent-daughter systems; it is actually several dating methods put together.

Natural uranium consists primarily of two isotopes, U-235 and U-238, and these isotopes decay with different half-lives to produce lead-207 and lead-206, respectively. In addition, lead-208 is produced by thorium-232. Only one isotope of lead, lead-204, is not radiogenic.

The uranium-lead system has an interesting complication: none of the lead isotopes is produced directly from the uranium and thorium. Each decays through a series of relatively short-lived radioactive elements that each decay to a lighter element, finally ending up at lead. Since these half-lives are so short compared to U-238, U-235, and thorium-232, they generally do not affect the overall dating scheme. The result is that one can obtain three independent estimates of the age of a rock by measuring the lead isotopes and their parent isotopes. Long-term dating based on the U-238, U-235, and thorium-232 will be discussed briefly here; dating based on some of the shorter-lived intermediate isotopes is discussed later.
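
A minimal sketch of how the three parent-daughter pairs yield independent age estimates; the half-lives are standard published values, and the measurements are hypothetical numbers chosen to be concordant:

```python
import math

# Standard half-lives in years (assumed values, not quoted in the post).
HALF_LIVES = {
    ("U-238", "Pb-206"): 4.468e9,
    ("U-235", "Pb-207"): 7.04e8,
    ("Th-232", "Pb-208"): 1.405e10,
}

def age(parent_atoms: float, daughter_atoms: float, half_life: float) -> float:
    # D/P = 2**(t / half_life) - 1  =>  t = half_life * log2(1 + D/P)
    return half_life * math.log(1.0 + daughter_atoms / parent_atoms, 2)

# Hypothetical measurements: the three parent/daughter pairs from one rock
# should yield concordant ages if the system has stayed closed.
measurements = {
    ("U-238", "Pb-206"): (1000.0, 168.0),
    ("U-235", "Pb-207"): (1000.0, 1677.0),
    ("Th-232", "Pb-208"): (1000.0, 50.6),
}
for pair, (parent, daughter) in measurements.items():
    print(pair, f"{age(parent, daughter, HALF_LIVES[pair]):.2e} years")  # all ~1e9
```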

The uranium-lead system in its simpler forms, using U-238, U-235, and thorium-232, has proved to be less reliable than many of the other dating systems. This is because both uranium and lead are less easily retained in many of the minerals in which they are found. Yet the fact that there are three dating systems all in one allows scientists to easily determine whether the system has been disturbed or not.

Using slightly more complicated mathematics, different combinations of the lead isotopes and parent isotopes can be plotted in such a way as to minimize the effects of lead loss. One of these techniques is called the lead-lead technique because it determines the ages from the lead isotopes alone. Some of these techniques allow scientists to chart at what points in time metamorphic heating events have occurred, which is also of significant interest to geologists.

Monday, November 10, 2008

Intelligent Design ... by the numbers

The most endearing element of Intelligent Design is that the universe is so fine tuned that only a designer with a purpose could have made it so.

If the structure of the atom were changed just so... if the energy properties of the quarks were altered just so slightly... everything would be different. The Universe as we know it would not exist. And life would not exist.

OK...Let's see.

The Dimensions of the Game
Let's talk about baseball. Many baseball purists think that the game is perfect. It is the perfect team sport and the perfect individual sport. Defense vs. Offense. And if you change the dimensions of the game, it would be different.

Think of it. The bases are 90 feet from each other. This accounts for the batting averages being what they are. The majority of baseball players have a batting average between 250 and 275. The really good batters will have an average above 280 and up to 340. Very few if any will have a yearly batting average above 350. In the last 60 years only one player has had an average above 400. It is very hard to get a hit given the dimensions of the game.

Now suppose you changed the dimensions only a little bit. Instead of a base path of 90 feet, you changed it to 89 feet 10 inches. That's only two inches. But that would be enough to change the batting averages - not by much, but you would affect the close calls. Now instead of missing the close-call base hit by 2 inches, you make it by 2 inches. Batting averages would go up from 250 to 275, and the outstanding hitters would have averages above 350, up to 380.

Changing the dimensions of the game makes a change in the game. Other adjustments may be made to favor the pitcher: 5 balls will equal a walk; 2 strikes will equal a strikeout. Change the dimensions to favor the batter, and you change other rules to favor the pitcher.

But it is still baseball. The game has changed, because the rules have changed. But it is baseball.

The Structure of the Universe
Well, that's what the universe is like. If you change the rules so that the universe produces a set of atoms with a different atomic structure, you can have different molecular structures that adjust to different chemical rules, and hence different physical rules. The periodic table of elements would look different, but you would still have a periodic table of elements.

Fine Tuning
So the fine-tuned tinkering would not necessarily be unique. In fact, if you have a universe with the basic laws of Newton's second law, $F = ma$, and Schrödinger's equation, $i\hbar\,\partial\psi/\partial t = \hat{H}\psi$, you have this universe. Now you can take this universe and change it by tinkering with the rules. But you would get a different universe. However, there is nothing to say that it couldn't evolve life and consciousness. If it did that, if it allowed that, then you would have a universe like ours. It would permit life and consciousness. But it would not be unique.

The Argument for Design: does it show a necessary universe?
The result is that if any universe can create the conditions for life and consciousness, it undermines the argument from design. This is a finely tuned universe, but so is any universe that can create the conditions for life and consciousness. There is nothing special here; this is a condition of nature and not necessarily unique. And if it is not necessarily unique, then uniqueness is not a factor that compels belief in a designer – unless you believe that the designer is nature, in the sense that God is nature, as in Spinoza.

A necessary universe is a unique universe. A universe which can create life and consciousness is necessary only if there is no other way to create those elements; only if life and consciousness can only be created in one universe. If any universe or a great number of possible universes can create life and consciousness then those conditions are ubiquitous; they are pan-universal. So the universe is not unique among universes. It is not unique, and if it is not unique, it is not necessary.

Sunday, September 21, 2008

The HapMap Project

The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors.

The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain.

The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs.

In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations.

Tuesday, September 16, 2008

Creationism and Snapshots of the Universe

One problem that Creationism has (among many) is that it is a snapshot of the state of the Universe. By that I mean that any changes to the underlying structure cannot be made because time and space and the elements have already been spelled out by the Creator.

One way of looking at this is the notion that when the Creator made the universe he could have made a less than perfect creation or a perfect creation.

If the creation is less than perfect, end of story. There is no need to pursue creationism or Intelligent design as a model for the universe.

If the creation is perfect, as creationists believe, then in Leibniz's words, this is the "best of all possible worlds."

Well, is it? I for one don't think it is, because it could be improved by getting rid of cancer or diabetes. In other words, the universe doesn't meet the condition of being perfect if there are flaws. So the perfect snapshot that was supposedly taken never really occurred.

Friday, August 22, 2008

Creationism, Evolution and the Peppered Moth

Finding: The Peppered Moth has been used as an example of evolution in action. Recent controversy about the precise mechanism behind the change in the Peppered Moth's appearance has allowed creationists to argue that evolutionists have used an incorrect and fraudulent example of evolution. But new research has confirmed the actual environmental process by which evolution changed the appearance of the Peppered Moth.

An Example of Evolution in Action
For decades, the peppered moth was the textbook example of evolution in action, unassailable proof that Darwin got it right.

Recently, though, the peppered moth's status as an icon of evolution has been under threat. Emboldened by legitimate scientific debate over the fine details of the peppered moth story, creationists and other anti-evolutionists have orchestrated a decade-long campaign to discredit it - and with it the entire edifice of evolution.

These days you're less likely to hear about the peppered moth as proof of evolution than as proof that biologists cannot get their story straight.

Recent Controversy
The peppered moth now counts among the anti-evolutionists' most potent weapons. In the past few years it has helped them get material critical of evolution added to high-school science lessons in Ohio and Kansas, although the material has now been removed. In 2000, the authors of the widely used school textbook Biology reluctantly dropped the peppered moth in direct response to creationist attacks. The latest edition features the beaks of Galapagos finches instead.
Now, though, biologists are fighting back. Michael Majerus of the University of Cambridge recently finished an exhaustive experiment designed to repair the peppered moth's tattered reputation and reverse the creationists' advances. The preliminary results are out, and Majerus says they are enough to fully reinstate the moth as the prime example of Darwinian evolution in action.


The Peppered Moth Evolutionary Story
The textbook version of the peppered moth story is simple enough. Before the mid-19th century, all peppered moths in England were cream coloured with dark spots. In 1848, however, a "melanic" form was caught and pinned by a moth collector in Manchester. By the turn of the 20th century melanic moths had all but replaced the light form in Manchester and other industrial regions of England. The cause of the change was industrial pollution: as soot and other pollutants filled the air, trees used by peppered moths as daytime resting places were stripped of their lichens then stained black with soot. Light-coloured moths that were well camouflaged on lichen-coated trees were highly conspicuous on blackened trees. Melanic moths, in contrast, were less easily spotted by predatory birds and so survived longer, leaving more offspring than the light forms. As melanism is heritable, over time the proportion of black moths increased.
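
To see how quickly a heritable survival advantage can shift a population, here is a toy one-locus (haploid) selection sketch; the fitness values and starting frequency are made up for illustration, not measured for Biston:

```python
def next_freq(p: float, w_dark: float, w_light: float) -> float:
    """One generation of selection in a one-locus haploid toy model."""
    mean_fitness = p * w_dark + (1 - p) * w_light
    return p * w_dark / mean_fitness

p = 0.01                       # melanic frequency around 1848 (illustrative)
for _ in range(50):            # roughly one generation per year
    p = next_freq(p, w_dark=1.0, w_light=0.8)
print(f"melanic frequency after 50 generations: {p:.3f}")  # ~0.999
```
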
As with all textbook examples, however, this is a simplified account of decades of field work, genetic studies and mathematical analyses carried out by dozens of researchers. It also draws disproportionately on the flawed work of one biologist, Bernard Kettlewell of the University of Oxford.

1950 Experiments
In the 1950s Kettlewell carried out a series of classic experiments that cemented the peppered moth's iconic status. These were designed to test the previously proposed hypothesis that the rise in melanism was a result of natural selection caused by differential bird predation.

Kettlewell carried out experiments in 1953 and 1955 in polluted woodland in Rubery, near Birmingham, and unspoiled woodland in rural Dorset. In the mornings he dropped hundreds of marked moths, both light and melanic, on tree trunks, where they quickly took up resting positions. In the evenings he used moth traps to recapture them. In Birmingham, he recaptured twice as many dark as light moths. In Dorset, he found the opposite, recapturing more light moths. The obvious conclusion was that light moths were more heavily predated than dark moths in Birmingham, and vice versa in Dorset.

During these experiments Kettlewell also directly observed robins and hedge sparrows eating peppered moths. As expected, the birds noticed and ate more light-coloured moths on soot-covered trees, and more melanic ones on lichen-covered trees. This was a breakthrough, as hardly anyone in Kettlewell's time believed that birds ate moths.

Kettlewell's experiments were accepted as proof that the rise of the melanic moth was a case of evolution by natural selection, and that the agent of selection was bird predation. The peppered moth quickly found its way into textbooks, often accompanied by striking photographs of light and dark moths resting on lichen-covered and soot-stained bark.

Problems with the Experiments
But in truth there were problems with Kettlewell's experiments. Perhaps the most significant was that he released moths onto tree trunks. Although moths occasionally choose trunks as a daytime resting place, they prefer the underside of branches. Kettlewell also let his moths go during the day, even though they normally choose their resting place at night. And he released more moths than would naturally be present in an area, which may have made them more conspicuous and tempted birds to eat them even if they wouldn't normally. These problems were familiar to evolutionary biologists, many of whom tried to resolve them with experiments, but were not given a general airing until 1998, when Majerus pointed out the flaws in Kettlewell's work in his book Melanism: Evolution in action.

The Origins of the Controversy
In November 1998, Nature published a review of his book by evolutionary biologist Jerry Coyne of the University of Chicago. In it, Coyne wrote a sentence that would come back to haunt him: "For the time being we must discard Biston as a well-understood example of natural selection in action." He did not mean to imply that the peppered moth was not an example of evolution by natural selection, merely that the fine details were still lacking. "I wasn't very clear. The key phrase was well-understood."

But anti-evolution organisations such as the Discovery Institute seized on the criticism of the Kettlewell experiments. By taking Coyne's words out of context and selectively quoting him and Majerus, they managed to portray the textbook version of events as hopelessly flawed, and with it the entire theory of evolution. They also pointed at the textbook pictures - which are often staged with dead specimens - and proclaimed that the science behind those pictures was staged too.

Out of the Frying Pan into the Fire
In 2000 Majerus embarked on a large experiment designed to iron out the problems with Kettlewell's work. But things took a turn for the worse when, in 2002, journalist Judith Hooper published a popular science book called Of Moths and Men: Intrigue, tragedy & the peppered moth, in which she accused Kettlewell of manipulating his data to prove his hypothesis. Hooper's book is not a creationist text, but creationists seized on it anyway as evidence that Kettlewell was a fraud.

Problems with Hooper's Book
Numerous historians and scientists pointed out that Hooper's book is littered with factual errors, not least the accusation that Kettlewell forged his data. There is no evidence he did so. Coyne himself wrote a scathing review of Hooper's book in which he accused her of unfairly smearing Kettlewell and concluded that "industrial melanism still represents a splendid example of evolution in action". It is fair to say that this accurately represented the views of the vast majority of evolutionary biologists at the time, but by then the damage had been done.

Reworking the Experiments
Meanwhile, Majerus was steadily working through his experiment in his own garden in Cambridge. He started by identifying 103 branches that were suitable resting places for peppered moths, ranging in height from 2 to 26 metres, many of them covered in lichen. For seven years, every night from May to August, he placed nets around 12 randomly chosen branches and released a single moth into each net. Around 90 per cent were light-coloured to reflect the natural frequencies of the two forms around Cambridge.

The moths took up resting positions overnight, usually on the underside of the branch. At sunrise the next morning Majerus removed the nets and 4 hours later checked to see which moths were still there. His assumption was that, as peppered moths spend the whole day in their resting position, any that disappeared between sunrise and mid-morning had almost certainly been spotted and eaten by birds.

Because he was able to watch some of the branches from his house through binoculars, he also observed the moths being eaten by many species of bird - including robins, blackbirds, magpies and blue tits. As expected, the birds were better at spotting the dark moths than the camouflaged light ones, he says.

Majerus addressed all the flaws in Kettlewell's experiments. He let moths choose their own resting positions, he used low densities, he released them at night when they were normally active, and he used local moths at the frequencies found in nature.

Majerus presented his preliminary results at a meeting of evolutionary biologists at the University of Uppsala in Sweden. He said that over the seven years, 29 per cent of his melanic moths were eaten compared with 22 per cent of light ones. This was a statistically significant difference.
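
As a sanity check on what "statistically significant" means here, a standard two-proportion z-test can be sketched; the post reports only percentages, so the counts below are hypothetical values consistent with those rates:

```python
import math

def two_proportion_z(eaten_a: int, n_a: int, eaten_b: int, n_b: int) -> float:
    """Z statistic for the difference between two predation proportions."""
    p_a, p_b = eaten_a / n_a, eaten_b / n_b
    pooled = (eaten_a + eaten_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts consistent with the reported rates (~29% of melanic
# and ~22% of light moths eaten):
z = two_proportion_z(eaten_a=139, n_a=480, eaten_b=906, n_b=4120)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```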

As in many parts of the UK, pollution in Cambridge has declined since the adoption of clean air acts in the 1950s, and melanic moths are becoming increasingly rare, declining from 12 per cent of the population in 2001 to under 2 per cent today. According to Majerus, his results show that bird predation is the agent of this change. Birds were better at spotting dark moths than light ones, ate more of them and reduced the percentage of black moths over time. It provides the proof of evolution.

There is no doubt that the peppered moth's colour is genetically determined, so changes in the frequencies of light and dark forms demonstrate changes in gene frequencies - and that is evolution. What's more, the direction and speed at which this evolution occurred can only be explained by natural selection.

Anti-evolutionists continue to suggest there is doubt, of course, but as far as Majerus and others are concerned their claims have been debunked and the peppered moth should be reinstated as a textbook example of evolution in action - not just to teach children, but also as a direct rebuttal of anti-evolutionism. The peppered moth story is easy to understand because it involves things that we are familiar with: vision and predation and birds and moths and pollution and camouflage and lunch and death. That is why the anti-evolution lobby attacks the peppered moth story. They are frightened that too many people will be able to understand it.

Some problems with Intelligent Design

Is this the only way that life could have originated?

Intelligent design advocates make the case that the universe is finely tuned, so much so that if you changed some small feature, the gravitational constant, or the mass of the proton, then the universe would be different, and life could not have been formed. They conclude that the intelligent construction of the universe, so finely tuned, creates proof that the universe was created by an intelligent creator. In other words, the intelligent creation is the result of an intelligent creator. The universe is intelligently designed, so there must be an intelligent creator, or God.



Let's look at this argument. The main thrust is that because the universe, with its natural construction of atoms, has resulted in an organized periodic table of elements, and one of those elements, carbon, is the foundation of life, this natural construction could not have occurred if the resulting organization weren't put into place to begin with.


Let's see.


Let's talk about baseball. In baseball, home plate is 90 feet from first base. Batters build their entire careers around getting to first base as soon as possible after they hit the ball.

Good batters will get there about 30% of the time. Bad batters will get there about 20% of the time. Most batters will fall in between those two percentages.


Now let's change the dimensions of the ball field. Say that the distance is not 90 feet but 89 feet. That means that more players will get to first base. Averages may go up so that the best hitters are over 35% and the worst hitters are now batting over 25%. The statistics have changed and the players will have to adjust their game to the new dimensions.


Or suppose the length is not 90 feet but 91 feet. The opposite will occur: more players will be out. Batting averages will drop, so that the best hitters are now batting around 25% and the worst hitters around 15%. Again the statistics change, and the players have to adjust to the new dimensions.


But in both cases the game is still recognizable as baseball. The conditions change but the game is still there. So let's take this kind of argument and apply it to the universe.

Universe #1

About 13 to 15 billion years ago, the big bang occurred. Let's say that one of the results of the big bang was that certain rules, physical laws and constants were created that made the universe the way that it is. Let's call these rules and conditions by a simple term: "C", the constant for the speed of light.

One result of the rules is the periodic table of elements. There are 92 naturally occurring elements, and about 26 man-made ones.

We also know that element #6 is carbon. And carbon is the foundation of life.

Universe #2:
Now let's suppose that in Universe #2 there was a big bang as well. But this time the constant of the universe is 1.1C. It is just a little bit bigger than C is in our universe. One of the consequences is that there are 134 naturally occurring elements. Carbon is one of them, but in this universe element #48, our "Cadmium", is responsible for life, not carbon. Again in this universe there was a naturally occurring structure built around how the atoms were arranged. Life was not pre-ordained to start with carbon, but with a different element.



Universe #3.
Universe #3 is like Universe #2 and Universe #1. But in Universe #3, the constant is 0.98C, smaller than in Universe #1. As a result there are only 58 naturally occurring elements, and element #18, "Argon", creates life. And again in this universe, the atoms are arranged in such a way that the properties of the periodic table result in a very different configuration for what constitutes matter.



In all three universes, we see that it is not necessary to invoke a special design, or even to claim that the design of any one of them is unique. This means that an intelligent design in which there is only one way to create life doesn't have to be.

Tuesday, July 22, 2008

New Method Of Selecting DNA: Microarray-based Genomic Selection (MGS)

Finding: Microarray-based Genomic Selection (MGS) is a research protocol that allows scientists to extract and enrich specific large-sized DNA regions, then compare genetic variation among individuals using DNA resequencing methods. Sequencing can be done by a small staff of researchers... it is inexpensive and not labor intensive.

The new technology will allow researchers to more easily discover subtle and overlooked genetic variations that may have serious consequences for health and disease.

Problem in genetic investigation
DNA sequencing platforms do not have a simple, inexpensive method of selecting specific regions to resequence; this has been a serious barrier to detecting subtle genetic variability among individuals.


The goal of most human genetics researchers is to find variations in the genome that contribute to disease. Despite the success of the human genome project and the availability of a number of next-generation sequencing platforms, that goal has remained difficult to reach. The Emory scientists believe it will be much more obtainable thanks to MGS.

MGS uses DNA oligonucleotides (probes) arrayed on a chip at high density (microarray) to directly capture and extract the target region(s) from the genome. The probes are chosen from the reference human genome and are complementary to the target(s) to capture. Once the target is selected, resequencing arrays or other sequencing technologies can be used to identify variations.
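
A toy sketch of that capture idea: tile probes across a reference target so each probe is complementary to the genomic sequence. The probe length, step size and target sequence below are illustrative, not the published protocol's parameters:

```python
# Toy MGS-style probe tiling (parameters and sequence are illustrative).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def design_probes(reference_target: str, probe_len: int = 20, step: int = 10) -> list:
    """Tile capture probes, complementary to the reference, across the target."""
    return [reverse_complement(reference_target[i:i + probe_len])
            for i in range(0, len(reference_target) - probe_len + 1, step)]

target = "ATGGCGTACGTTAGCATCGGATCCGTAGCTAGGCTTACGATCGTAGCTAG"
for probe in design_probes(target):
    print(probe)
```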

The Emory scientists believe MGS will allow them to easily compare genetic variation among a number of individuals and relate that variation to health and disease.

The human genome project focused on sequencing just one human genome--an amazing technological feat that required a very large industrial infrastructure, hundreds of people and a great deal of money. The question since then has been: can we replicate the ability to resequence parts of the genome, or ultimately the entire genome, in a laboratory with a single investigator and a small staff? The answer is now "yes."

Geneticists have found many different types of obvious gene mutations that are deleterious to health, but more subtle variations, or variations located in parts of the genome where scientists rarely look, may also have negative consequences but are not so easily discovered.

Other methods for isolating and studying a particular region of the genome, such as PCR and BAC cloning (bacterial artificial chromosomes) are comparatively labor intensive, difficult for single laboratories to scale to large sections of the genome, and relatively expensive, says Dr. Zwick.

Whereas typical microarray technology measures gene expression, MGS is a novel use of microarrays for capturing specific genomic sequences. For the published study, a third type of microarray--a resequencing array--was used to determine the DNA sequence in the patient samples.

The logic behind the resequencing chip is that you design the chip to have the identity of the base at every single site in a reference sequence. You use the human genome reference sequence as a shell and you search for variation on the theme. This alternative new technology allows a regular-sized laboratory and single investigator to generate a great deal of data at a cost significantly less than what a sequencing center would charge.
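
A toy sketch of that calling logic, with made-up hybridization intensities: at each site, the strongest of four base-specific probe signals calls the base, and calls that differ from the reference are flagged as candidate variants:

```python
# Toy resequencing-array base caller (intensities are made up).
reference = "ACGT"
intensities = [
    {"A": 9.1, "C": 0.4, "G": 0.3, "T": 0.2},  # site 0 -> A (matches reference)
    {"A": 0.3, "C": 1.1, "G": 8.2, "T": 0.4},  # site 1 -> G (variant: ref is C)
    {"A": 0.2, "C": 0.3, "G": 9.0, "T": 0.5},  # site 2 -> G (matches reference)
    {"A": 0.1, "C": 0.2, "G": 0.4, "T": 7.8},  # site 3 -> T (matches reference)
]
calls = [max(site, key=site.get) for site in intensities]
variants = [(i, ref, call)
            for i, (ref, call) in enumerate(zip(reference, calls)) if ref != call]
print("".join(calls), variants)  # AGGT [(1, 'C', 'G')]
```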

Friday, June 20, 2008

Importance Of Gene Regulation For Common Human Disease

Finding: A new study shows that common, complex diseases are more likely to be due to genetic variation in regions that control activity of genes, rather than in the regions that specify the protein code.

Where are the regions?
This result comes from a study of the activity of almost 14,000 genes in 270 DNA samples collected for the HapMap Project. The authors looked at 2.2 million DNA sequence variants (SNPs) to determine which affected gene activity. They found that activity of more than 1300 genes was affected by DNA sequence changes in regions predicted to be involved in regulating gene activity, which often lie close to, but outside, the protein-coding regions.
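
In spirit, such an analysis asks whether expression tracks genotype at each SNP. Here is a minimal sketch with toy data, using a plain least-squares slope as a stand-in for the study's actual statistics:

```python
def per_allele_effect(genotypes: list, expression: list) -> float:
    """Least-squares slope of expression on allele dosage (0, 1 or 2)."""
    n = len(genotypes)
    mean_g = sum(genotypes) / n
    mean_e = sum(expression) / n
    cov = sum((g - mean_g) * (e - mean_e) for g, e in zip(genotypes, expression))
    var = sum((g - mean_g) ** 2 for g in genotypes)
    return cov / var

# Toy data: expression rises with each copy of the variant allele.
genotypes  = [0, 0, 1, 1, 1, 2, 2, 2, 2]
expression = [5.1, 4.9, 6.0, 6.2, 5.8, 7.1, 6.9, 7.3, 7.0]
print(f"effect per allele: {per_allele_effect(genotypes, expression):.2f}")
```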

The challenge of large-scale studies that link a DNA variant to a disease
We predict that variants in regulatory regions make a greater contribution to complex disease than do variants that affect protein sequence. This is the first study on this scale, and the results confirm our intuition about the nature of natural variation in complex traits.
One of the challenges of large-scale studies that link a DNA variant to a disease is to determine how the variant causes the disease: our analysis will help to develop that understanding, a vital step on the path from genetics to improvements in healthcare.

What the HapMap does
Past studies of rare, monogenic disease, such as cystic fibrosis and sickle-cell anaemia, have focused on changes to the protein-coding regions of genes because they have been visible to the tools of human genetics. With the HapMap and large-scale research methods, researchers can inspect the role of regions that regulate activity of many thousands of genes.

The HapMap Project established cell cultures from participants from four populations as well as, for some samples, information from families, which can help to understand inheritance of genetic variation. The team used these resources to study gene activity in the cell cultures and tie it to DNA sequence variation.

Scientists found strong evidence that SNP variation close to genes - where most regulatory regions lie - could have a dramatic effect on gene activity. Although many effects were shared among all four HapMap populations, the researchers also showed that a significant number were restricted to one population.

What about the housekeeping genes?
They also showed that genes required for the basic functions of the cell - so-called housekeeping genes - were less likely to be subject to genetic variation. This was exactly as one would expect: you can't mess too much with the fundamental life processes and we predicted we would find reduced effects on these genes.

The study also detected SNP variants that affect the activity of genes located a great distance away. Genetic regulation in the human genome is complex and highly variable: a tool to detect such distant effects will expand the search for causative variants. The authors note, however, that a sample of 270 HapMap individuals is only large enough to detect the strongest effects.

Tuesday, June 17, 2008

Happenstance mutations matter


Finding: In experiments on bacteria grown in the lab, scientists found that evolving a new trait sometimes depended on previous, happenstance mutations. Without those earlier random mutations, the window of opportunity for the novel trait would never have opened. History might have been different.

Evolutionist Stephen Jay Gould once suggested that if the evolution of life were “wound back” and played again from the start, it could have turned out very differently.

Though not firmly conclusive, the new research adds a real-world case study of evolution in action to the decades-old debate stirred by Gould’s thought experiment. British paleontologist Simon Conway Morris and others argued that only a few optimal solutions exist for an organism to adapt to its environment, so even if the clock were wound back, environmental pressures would eventually steer evolution toward one of those solutions — regardless of the randomness along the way.

What the scientists did
Scientists obviously can’t turn back the hands of time, but Richard Lenski and his colleagues at Michigan State University in East Lansing did the next best thing. Lenski’s team watched 12 colonies of identical E. coli bacteria evolve under carefully controlled lab conditions for 20 years, which equates to more than 40,000 generations of bacteria. After every 500 generations, the researchers froze samples of bacteria. Those bacteria could later be thawed out to “replay” the evolutionary clock from that point in time.

The evolution of a nutrient absorption ability
After about 31,500 generations, one colony of bacteria evolved the novel ability to use a nutrient that E. coli normally can’t absorb from its environment. Thawed-out samples from after the 20,000-generation mark were much more likely to re-evolve this trait than earlier samples, the researchers report in the Proceedings of the National Academy of Sciences. This suggests that an unnoticed mutation occurring around the 20,000th generation enabled the microbes to later evolve the nutrient-absorption ability through a second mutation.
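
A toy simulation can make the replay logic concrete. In the Python sketch below, replays started from frozen samples taken after a (neutral) potentiating mutation frequently re-evolve the trait, while replays from earlier samples almost never do. All mutation rates and checkpoints are invented for illustration; this is not the actual E. coli data.

```python
# Toy "replay" simulation of historical contingency; all rates invented.
import random

random.seed(42)

def replay(from_generation, n_replays=200, potentiation_gen=20_000,
           p_potentiation=0.0005, p_second=0.02, forward_gens=10_000):
    """Fraction of replays (each run forward `forward_gens` generations,
    checked every 500) that re-evolve the nutrient-use trait."""
    successes = 0
    for _ in range(n_replays):
        potentiated = from_generation >= potentiation_gen
        for _step in range(forward_gens // 500):
            if not potentiated and random.random() < p_potentiation:
                potentiated = True  # happenstance potentiating mutation
            if potentiated and random.random() < p_second:
                successes += 1      # second mutation yields the new trait
                break
    return successes / n_replays

for start in (10_000, 15_000, 20_000, 25_000):
    print(f"replay from generation {start:>6}: "
          f"{replay(start):.2f} re-evolve the trait")
```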

By way of contrast with another control group
In the 11 other colonies, this earlier mutation didn’t occur, so the evolution of this novel ability never happened.

Put populations in the same environment and see what happens
This is a direct empirical demonstration of Gould-like contingency in evolution. You can’t do an exact replay in nature, but the scientists were able to put all these populations in virtually identical environments and show that contingency is really what occurred.

What was the mutation that occurred?
The next step will be to determine what that earlier mutation was and how it made the later change possible. If the first mutation didn’t offer any survival advantage to the microbes on its own, it would make the case airtight that Gould was right. That’s because a mutation that doesn’t improve an organism’s ability to survive and reproduce can’t be favored by selection, so whether the microbe happens to carry that necessary mutation when the second evolutionary change occurs becomes purely a matter of chance.

So far there is no evidence that the first mutation gave the microbes a survival advantage on its own: the growth rate and the density of bacteria in the colony jumped up after the second mutation, but not after the first one. The first mutation may simply have set the stage for what was to come; the second mutation took advantage of the change.

Monday, May 26, 2008

BAC: Super-Sized Inserts

Bacterial Artificial Chromosomes (BAC) have been developed to hold much larger pieces of DNA than a plasmid can. BAC vectors were originally created from part of an unusual plasmid present in some bacteria called the F’ plasmid.

The F’ plasmid allows bacteria to have “sex” (well, sort of: F’ helps a bacterium give its genome to another bacterium, but this happens only rarely, when the bacteria are under a lot of stress). F’ had been studied extensively, and it was found that it could hold up to a million base pairs of DNA from another bacterium. Also, F’ has origins of replication, and bacteria have a way to control how F’ is copied.

Friday, May 16, 2008

Mitochondrial Eve

'Mitochondrial Eve' Research: Humanity Was Genetically Divided For 100,000 Years
A Picture of the Ancient Past
Finding:
Based on anthropological genetic research, researchers believe that about 60,000 years ago, modern humans started the journey to populate the world. However, relatively little is known about the demographic history of our species over the previous 140,000 years in Africa.

The current study focuses on Africa and refines the understanding of early modern Homo sapiens history. These early human populations were small and isolated from each other for many tens of thousands of years.

The research was based on a survey of African mitochondrial DNA (mtDNA) and is the most extensive survey of its kind. It included over 600 complete mtDNA genomes from indigenous populations across the continent.

How Old Was “Mitochondrial Eve”?

MtDNA, inherited down the maternal line, was used in 1987 to discover the age of the “Mitochondrial Eve,” the most recent common female ancestor of everyone alive today. This work has since been extended to show unequivocally that “Mitochondrial Eve” was an African woman who lived sometime during the past 200,000 years.
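
The dating logic is essentially molecular-clock arithmetic: divergence accumulates along both lineages descending from a common ancestor, so its age is roughly the mean pairwise sequence divergence divided by twice the substitution rate. The Python sketch below uses round, assumed numbers, not the study's published values.

```python
# Back-of-the-envelope molecular-clock dating; figures are illustrative.
mean_pairwise_divergence = 0.0030  # substitutions per site between two lineages
substitution_rate = 1.0e-8         # substitutions per site per year (assumed)

# Divergence accrues along both lineages since the common ancestor.
tmrca_years = mean_pairwise_divergence / (2 * substitution_rate)
print(f"estimated age of common ancestor: {tmrca_years:,.0f} years")  # ~150,000
```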

Recent data suggests that Eastern Africa went through a series of massive droughts between 90,000 and 135,000 years ago. It is possible that this climate shift contributed to the population splits. What is surprising is the length of time the populations were separate — for as much as half of our entire history as a species.

The study paints a picture of tiny bands of early humans, forced apart by harsh environmental conditions, coming back from the brink to reunite and populate the world: truly an epic drama, written in our DNA.

Monday, April 28, 2008

Last Common Ancestor Of Neanderthals And Humans

Finding: Fossil Found In Europe, 1.2 Million Years Old

That's 500,000 years older than the previous oldest known humanlike fossils from the area. The new find bolsters the view that Homo reached Europe not long after leaving Africa almost 2 million years ago.

It seems probable that the first European population came from the region of the Near East, the true crossroads between Africa and Eurasia, and that it was related to the first demographic expansion out of Africa.

The researchers tentatively classified the new fossil as an earlier example of Homo antecessor (Pioneer Man), the species represented by the previous oldest fossils and thought to be the last common ancestor of Neanderthals and modern humans.

Sunday, April 6, 2008

Early Human Populations Evolved Separately For 100,000 Years

Finding:
A team of Genographic researchers and their collaborators have published the most extensive survey to date of African mitochondrial DNA (mtDNA). Over 600 complete mtDNA genomes from indigenous populations across the continent were analyzed by the scientists. Analyses of the extensive data provide surprising insights into the early demographic history of human populations before they moved out of Africa, illustrating that these early human populations were small and isolated from each other for many tens of thousands of years.

MtDNA, inherited down the maternal line, was used to discover the age of the famous 'mitochondrial Eve' in 1987. This work has since been extended to show unequivocally that the most recent common female ancestor of everyone alive today was an African woman who lived within the past 200,000 years. Paleontology provides corroborating evidence that our species originated in Africa approximately 200,000 years ago.

The migrations after 60,000 years ago that led modern humans on their epic journeys to populate the world have been the primary focus of anthropological genetic research, but relatively little is known about the demographic history of our species over the previous 140,000 years in Africa. The current study returns the focus to Africa and in doing so refines our understanding of early modern Homo sapiens history.

There is strong evidence of ancient population splits beginning as early as 150,000 years ago, probably giving rise to separate populations localized to Eastern and Southern Africa. It was only around 40,000 years ago that they became part of a single pan-African population, reunited after as much as 100,000 years apart.

Recent paleoclimatological data suggests that Eastern Africa went through a series of massive droughts between 135,000 and 90,000 years ago. It is possible that this climatological shift contributed to the population splits. What is surprising is the length of time the populations were separate - as much as half of our entire history as a species.

Friday, March 14, 2008

Two Explosive Evolutionary Events Shaped Early History Of Multicellular Life

Scientists have identified a second explosive evolutionary event, dubbed the Avalon Explosion, suggesting that more than one such event took place during the early evolution of animals. Using rigorous analytical methods, they showed that this event occurred about 33 million years before the Cambrian Explosion, among macroscopic life forms unrelated to the Cambrian animals.



The Cambrian explosion event refers to the sudden appearance of most animal groups in a geologically short time period between 542 and 520 million years ago, in the early Cambrian Period. Although there were not as many animal species as in modern oceans, most (if not all) living animal groups were represented in the Cambrian oceans.



Methodology


To test whether other major branches of life also evolved in an abrupt and explosive manner, Virginia Tech scientists analyzed the Ediacara fossils: the oldest complex, multicellular organisms, which lived in the oceans from 575 to 542 million years ago, before the Cambrian Explosion of animals. Notably, the Ediacara organisms do not have an ancestor-descendant relationship with the Cambrian animals, and most of them went extinct before the Cambrian Explosion.

This group of organisms -- most species -- seems to be distinct from the Cambrian animals. But how did those Ediacara organisms first evolve? Did they also appear in an explosive evolutionary event, or is the Cambrian Explosion a truly unparalleled event? The researchers identified 50 morphological characters and mapped their distribution across more than 200 Ediacara species. These species cover three evolutionary stages spanning the entire 33-million-year history of the Ediacara organisms. The three successive evolutionary stages are represented by the Avalon, White Sea, and Nama assemblages (all named after localities where representative fossils of each stage can be found).

The earliest Avalon stage was represented by relatively few species. Yet these earliest Ediacara life forms already occupied the full morphological range of body plans that would ever be realized through the entire history of Ediacara organisms. In other words, the major types of Ediacara organisms appeared at the dawn of their history, during the Avalon Explosion. Ediacara organisms then diversified in White Sea time and declined in Nama time. But despite this notable waxing and waning in the number of species, the morphological range of the Avalon organisms was never exceeded through the subsequent history of Ediacara.

The process involved adapting quantitative methods that had been used previously for studying the morphological evolution of animals but never applied to the enigmatic Ediacara organisms. "We think of diversity in terms of individual species. But species may be very similar in their overall body plan. For example, 50 species of fly may not differ much from one another in terms of their overall shape -- they all represent the same body plan. On the other hand, a set of just three species that includes a fly, a frog and an earthworm represents much more morphological variation. We can thus think of biodiversity not only in terms of how many different species there are, but also how many fundamentally distinct body plans are represented."

The new approach combines both measures of diversity. In addition, the method relies on converting different morphologies into numerical (binary) data. This strategy allows researchers to describe, more objectively and more consistently, enigmatic fossil life forms that are preserved mostly as two-dimensional impressions and are not well understood in terms of function, ecology, or physiology.
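
A minimal sketch of that strategy, assuming a simulated binary matrix rather than the study's real character codings, might score each species as a 0/1 vector and summarize morphospace occupation with pairwise distances:

```python
# Morphological disparity from a binary character matrix (simulated data).
import random
from itertools import combinations

random.seed(1)
n_species, n_chars = 200, 50
matrix = [[random.randint(0, 1) for _ in range(n_chars)]
          for _ in range(n_species)]

def hamming(a, b):
    """Number of characters in which two species differ."""
    return sum(x != y for x, y in zip(a, b))

distances = [hamming(a, b) for a, b in combinations(matrix, 2)]
print(f"mean pairwise distance: {sum(distances) / len(distances):.1f}")
print(f"maximum distance (morphospace range): {max(distances)}")
```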


Scientists are still unsure what the driving forces were behind the rapid morphological expansion during the Avalon Explosion, and why the morphological range did not expand, shrink, or shift during the subsequent White Sea and Nama stages.

The evolution of the earliest macroscopic and complex life thus also went through an explosive event, prior to the Cambrian Explosion. It now appears that at the dawn of macroscopic life, between 575 and 520 million years ago, there was not one but at least two major episodes of abrupt morphological expansion.

Saturday, February 23, 2008

New Route For Heredity Bypasses DNA

A group of scientists in Princeton's Department of Ecology and Evolutionary Biology has uncovered a new biological mechanism that could provide a clearer window into a cell's inner workings.


What's more, this mechanism could represent an "epigenetic" pathway -- a route that bypasses an organism's normal DNA genetic program -- for so-called Lamarckian evolution, enabling an organism to pass on to its offspring characteristics acquired during its lifetime to improve their chances for survival. Lamarckian evolution is the notion, for example, that the giraffe's long neck evolved by its continually stretching higher and higher in order to munch on the more plentiful top tree leaves and gain a better shot at surviving.

The research also could have implications as a new method for controlling cellular processes, such as the splicing order of DNA segments, and increasing the understanding of natural cellular regulatory processes, such as which segments of DNA are retained versus lost during development. The team's findings will be published Jan. 10 in the journal Nature.
Princeton biologists Laura Landweber, Mariusz Nowacki and Vikram Vijayan, together with other members of the lab, wanted to decipher how a cell can accomplish a remarkable feat: reorganizing its genome without resorting to its original genetic program. They chose the single-celled ciliate Oxytricha trifallax as their testbed.

Ciliates are pond-dwelling protozoa that are ideal model systems for studying epigenetic phenomena. While typical human cells each have one nucleus, serving as the control center for the cell, these ciliate cells have two. One, the somatic nucleus, contains the DNA needed to carry out all the non-reproductive functions of the cell, such as metabolism. The second, the germline nucleus, like humans' sperm and egg, is home to the DNA needed for sexual reproduction.
When two of these ciliate cells mate, the somatic nucleus gets destroyed, and must somehow be reconstituted in their offspring in order for them to survive. The germline nucleus contains abundant DNA, yet 95 percent of it is thrown away during regeneration of a new somatic nucleus, in a process that compresses a pretty big genome (one-third the size of the human genome) into a tiny fraction of the space. This leaves only 5 percent of the organism's DNA free for encoding functions. Yet this small hodgepodge of remaining DNA always gets correctly chosen and then descrambled by the cell to form a new, working genome in a process (described as "genome acrobatics") that is still not well understood, but extremely deliberate and precise.
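
The scale of that compression is easy to ballpark. Using the figures quoted above (a germline genome about one-third the size of the roughly 3.2-billion-base-pair human genome, with 5 percent retained), a rough Python calculation gives:

```python
# Rough arithmetic for the genome compression described above;
# figures are approximations taken from the text, not measured values.
human_genome_bp = 3.2e9
germline_bp = human_genome_bp / 3   # "one-third the size of the human genome"
somatic_bp = 0.05 * germline_bp     # only 5 percent is retained

print(f"germline: ~{germline_bp / 1e6:,.0f} Mb")  # ~1,067 Mb
print(f"somatic:  ~{somatic_bp / 1e6:,.0f} Mb")   # ~53 Mb
```
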
Landweber and her colleagues have postulated that this programmed rearrangement of DNA fragments is guided by an existing "cache" of information in the form of a DNA or RNA template derived from the parent's nucleus. In the computer realm, a cache is a temporary storage site for frequently used information to enable quick and easy access, rather than having to re-fetch or re-create the original information from scratch every time it's needed.




"The notion of an RNA cache has been around for a while, as the idea of solving a jigsaw puzzle by peeking at the cover of the box is always tempting," said Landweber, associate professor of ecology and evolutionary biology. "These cells have a genomic puzzle to solve that involves gathering little pieces of DNA and putting them back together in a specified order. The original idea of an RNA cache emerged in a study of plants, rather than protozoan cells, though, but the situation in plants turned out to be incorrect."




Through a series of experiments, the group tested their hypothesis that DNA or RNA molecules were providing the missing instruction booklet needed during development, and also tried to determine whether the putative template was made of RNA or DNA. DNA is the genetic material of most organisms; however, RNA is now known to play a diversity of important roles as well. RNA is DNA's chemical cousin, and it has a primary role in interpreting the genetic code during the construction of proteins.




First, the researchers tested whether the RNA cache idea was valid by directing RNA interference (RNAi), which destroys specific RNA molecules, at the cell before fertilization. This gave encouraging results, disrupting the process of development and even halting DNA rearrangement in some cases.




In a second experiment, Nowacki and Yi Zhou, both postdoctoral fellows, discovered that RNA templates did indeed exist early on in the cellular developmental process, and were just long-lived enough to lay out a pattern for reconstructing their main nucleus. This was soon followed by a third experiment that "… required real chutzpah," Landweber said, "because it meant reprogramming the cell to shuffle its own genetic material."




Nowacki, Zhou and Vijayan, a 2007 Princeton graduate in electrical engineering, constructed both artificial RNA and DNA templates that encoded a novel, pre-determined pattern: templates that would take a ciliate DNA molecule consisting of, for example, pieces 1-2-3-4-5 and transpose two of the segments to produce the fragment 1-2-3-5-4. Injecting their synthetic templates into the developing cell produced the anticipated results, showing that a specified RNA template could provide a new set of rules for unscrambling the nuclear fragments in such a way as to reconstitute a working nucleus.
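
The template experiment can be mimicked in a few lines of Python. In this hypothetical sketch, the segment names and sequences are invented; the point is only that whichever order the template specifies is the order in which the pieces are reassembled.

```python
# Template-guided unscrambling, as a toy model; all sequences invented.
segments = {  # scrambled germline pieces (hypothetical)
    "1": "ATGGCT", "2": "TACGGA", "3": "CCGTTA", "4": "GGATCC", "5": "TTAGCA",
}

def unscramble(template_order, pieces):
    """Concatenate pieces in the order specified by the template."""
    return "".join(pieces[name] for name in template_order)

natural = unscramble(["1", "2", "3", "4", "5"], segments)
# An injected synthetic template with two segments transposed reprograms
# the product, as in the 1-2-3-5-4 experiment described above.
reprogrammed = unscramble(["1", "2", "3", "5", "4"], segments)
print(natural)
print(reprogrammed)
```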




"This wonderful discovery showed for the first time that RNA can provide sequence information that guides accurate recombination of DNA, leading to reconstruction of genes and a genome that are necessary for the organism," said Meng-Chao Yao, director of the Institute of Molecular Biology at Taiwan's Academia Sinica. "It reveals that genetic information can be passed on to following generations via RNA, in addition to DNA."




The research team believes that if this mechanism extends to mammalian cells, it could suggest novel ways of manipulating genes beyond the standard methods of genetic engineering. This could lead to possible applications for creating new gene combinations or restoring aberrant cells to their original, healthy state.

Saturday, February 2, 2008

Oxygen: The Clue To First Appearance Of Large Animals

The sudden appearance of large animal fossils more than 500 million years ago – a problem that perplexed even Charles Darwin and is commonly known as “Darwin’s Dilemma” – may be due to a huge increase of oxygen in the world’s oceans.


In 2002 researchers found the world’s oldest complex life forms between layers of sandstone on the southeastern coast of Newfoundland. This pushed back the age of Earth’s earliest known complex life to more than 575 million years ago, soon after the melting of the massive “snowball” glaciers. New findings reported today shed light on why, after three billion years of mostly single-celled evolution, these large animals suddenly appeared in the fossil record.

A huge increase in oxygen following the Gaskiers Glaciation 580 million years ago corresponds with the first appearance of large animal fossils on the Avalon Peninsula in Newfoundland.
Now, for the first time, geochemical studies have determined the oxygen levels in the world’s oceans at the time these sediments accumulated in Avalon. Studies show that the oldest sediments on the Avalon Peninsula, which completely lack animal fossils, were deposited during a time when there was little or no free oxygen in the world’s oceans. Immediately after this ice age there is evidence for a huge increase in atmospheric oxygen, to at least 15 per cent of modern levels, and these sediments also contain evidence of the oldest large animal fossils.

The close connection between the first appearance of oxygenated conditions in the world’s oceans and the first appearance of large animal fossils confirms the importance of oxygen as a trigger for the early evolution of animals, the researchers say. They hypothesize that melting glaciers increased the amount of nutrients in the ocean and led to a proliferation of single-celled organisms that liberated oxygen through photosynthesis. This began an evolutionary radiation that led to complex communities of filter-feeding animals, then mobile bilateral animals, and ultimately to the Cambrian “explosion” of skeletal animals 542 million years ago.

Monday, January 21, 2008

Bones From French Cave Show Neanderthals, Cro-Magnon Hunted Same Prey

Finding: A 50,000-year record of mammals consumed by early humans in southwestern France indicates there was no major difference in the prey hunted by Neanderthal and Cro-Magnon.

Research findings counter the idea proposed by some scientists that Cro-Magnon, who were physically similar to modern man, supplanted Neanderthals because they were more skilled hunters as a result of some evolutionary physical or mental advantage.

The new study suggests Cro-Magnon were not superior at getting food from the landscape. Archaeologists could detect no difference in diet, the animals being hunted, or the way they were hunted across this period of time, aside from changes caused by climate.

The takeover by Cro-Magnon does not seem to be related to hunting capability. There is no significant difference in large mammal use from Neanderthals to Cro-Magnon in this part of the world. The idea that Neanderthals were big, dumb brutes is hard for some people to drop. Cro-Magnon created the first cave art, but late Neanderthals made body ornaments, so the depth of cognitive difference between the two just is not clear.

Bears, Caves, and Cro-magnon
The study also resurrects a nearly 50-year-old theory first proposed by Finnish paleontologist Björn Kurtén that modern humans played a role in the extinction of giant cave bears in Europe. Cro-Magnon may have been the original "apartment hunters" and displaced the bears by competing with them for the same caves the animals used for winter den sites.


The cave, Grotte XVI in southwestern France, has a rich, dated archaeological sequence that extends from about 65,000 to about 12,000 years ago, spanning the time when Neanderthals flourished and died off and when Cro-Magnon moved into the region. Neanderthals disappeared from southwestern France around 35,000 years ago, although they survived longer in southern Spain and central Europe.
The researchers were most interested in the transition from the Middle to Upper Paleolithic, or Middle to Late Stone Age.


Neanderthals occupied Grotte XVI as far back as 65,000 years ago, perhaps longer. Between 40,000 and 35,000 years ago, people began making stone tools in France, including at Grotte XVI, that were more like those later fashioned by Cro-Magnon. However, the human remains found with these tools at several sites were Neanderthal, not Cro-Magnon. Similar tools, but no human remains from this time period, were found in Grotte XVI, and people assumed to be Cro-Magnon did not occupy the cave until about 30,000 years ago.

The researchers examined more than 7,200 bones and teeth from large hoofed mammals recovered from the cave. The animals – ungulates such as reindeer, red deer, roe deer, horses and chamois were the most common prey – were the mainstay of humans in this part of the world, according to archaeologist Donald Grayson.

He and Delpech found a remarkable dietary similarity over time. Throughout the 50,000-year record, each bone and tooth assemblage, regardless of the time period or the size of the sample involved, contained eight or nine species of ungulates, indicating that Neanderthals and Cro-Magnon both hunted a wide variety of game.

The only difference the researchers found was in the relative abundance of species, particularly reindeer, uncovered at the various levels in Grotte XVI. At the oldest dated level in the cave, reindeer remains accounted for 26 percent of the total. Red deer were the most common prey at this time, accounting for nearly 34 percent of the bones and teeth. However, as summer temperatures began to drop in Southwestern France, the reindeer numbers increased and became the prey of choice. By around 30,000 years ago, when Cro-Magnon moved into the region, reindeer accounted for 52 percent of the bones and teeth. And by around 12,500 years ago, during the last ice age, reindeer remains accounted for 94 percent of bones and teeth found in Grotte XVI.
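
Percentages like these come from relative-abundance counts of identified specimens per level. A small Python sketch with invented counts shows the calculation:

```python
# Relative abundance (% of identified specimens) per level; counts invented.
levels = {
    "oldest level":   {"reindeer": 260, "red deer": 340, "other": 400},
    "~30,000 yr ago": {"reindeer": 520, "red deer": 180, "other": 300},
    "~12,500 yr ago": {"reindeer": 940, "red deer": 20,  "other": 40},
}

for level, counts in levels.items():
    total = sum(counts.values())
    pct = 100 * counts["reindeer"] / total
    print(f"{level}: reindeer = {pct:.0f}% of {total} specimens")
```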

Grayson and Delpech also looked at the cut marks left on bones to analyze how humans were butchering their food. They found little difference except, surprisingly, at the uppermost level, which corresponds to the last ice age.

It is possible that because it was so cold, people were hard up for food. The bones were very heavily butchered, which might be a sign of food stress. However, if this had occurred earlier during Neanderthal times, people would have said this is a sure sign that Neanderthals did not have the fine hand-eye coordination to do fine butchering.
In examining the Grotte XVI record, the researchers also found a sharp drop in the number of cave bears from Neanderthal to Cro-Magnon times.

Cave bears and humans may have been competing for the same living space, and this competition may have led to the bears' extinction. Grayson added that it is not clear whether the decline and eventual extinction of the bears was driven by an increase in the number of humans, by increased human residence times in caves, or by both.

If we can understand the extinction of any animal from the past, such as the cave bear, it gives us a piece of evidence showing the importance of habitat to animals. The cave bear is one of the icons of the late Pleistocene Epoch, similar to the saber-toothed cats and mammoths in North America. If further study supports the argument, we may finally be in a position to confirm a human role in the extinction of a large Pleistocene mammal on a Northern Hemisphere continent.

Monday, January 14, 2008

Biogeographic distributions

I. Three important principles:
How do these principles support descent with modification?
A. Environment cannot account for either similarity or dissimilarity, since similar environments can harbor entirely different species groups
B. "Affinity" (=similarity) of groups on the same continent (or sea) is closer than between continents (or seas)
C. Geographical barriers usually divide these different groups, and there is a correlation between degree of difference and rate of migration or ability to disperse across the barriers.

II. Disjoint locations for the same extant species: Is this evidence for creation? Note that Evolution proposes Single Centers for the origins of species, so Discontinuous Distributions need to be explained.
A. this means that a method of dispersal must be proposed.
1. Changes in climate or geology must have affected migration (i.e., by first allowing migration and then preventing migration)
2. Darwin designed tests of a priori assumptions
3. Although "accidental", dispersal is not really random (and thus allows very specific predictions about distributions in some cases)

B. Case study: Similarity of flora and fauna at mountain summits (is this evidence for independent creations or something else?)
1. Evidence is clear for recent glaciation
2. Migrations are easily visualized in the gradual advances and retreats of glaciers
3. Because mountain tops retain a colder climate, some cold-adapted, northern species would be retained on mountain tops (and thus isolated during glacial retreat)
4. Also explains why such mountain-top species are most closely related to species living due north
5. Isolation poses an opportunity for change, esp. if it means a change in its interspecific associations
6. Assumption of the scenario: Circumpolar distribution is uniform (presently the case)
7. Secondary assumption: Similar situation for subarctic species

C. Many difficulties remain to be solved, esp. the very distinct, but distantly related forms in the Southern hemisphere (e.g., marsupial versus placental mammals)
1. These species are too distinct to be explained by the recent glaciation
2. Darwin postulates an earlier glaciation, because he did not know about plate tectonics
3. With plate tectonics, many (if not all) of these kinds of problems are soluble.

III. Fresh water distributions
Because fresh water habitats are isolated, you might expect restricted ranges; however, just the opposite is the case: freshwater organisms often have distributions even broader than those of terrestrial species. How can this be explained? Three cases to consider:
A. Distribution of Fish
B. Distribution of Shells (molluscs)
C. Distribution of Plants (often very wide ranges)
In all cases, dispersal of freshwater organisms depends largely on animal (esp. bird) transport
IV. Distribution of species on oceanic islands
Darwin considered this evidence as especially strong in its support of descent with modification
A. The total number of species on oceanic islands is small compared to the number on an equal area of continent
B. Proportion of endemic species is very high
C. Oceanic islands are missing entire Classes
D. Endemic species often possess characters that are adaptive elsewhere, but are useless characters on the island
E. Endemic species often show (new) adaptive traits not possessed by any of their relatives
F. Batrachians are universally absent (except one frog in New Zealand)
G. Terrestrial mammals are not found on any island >300 miles from mainland
H. But aerial mammals are found on such islands, and many of these are endemic
I. Also a correlation between the depth of the sea separating islands inhabited by mammals and the degree of "affinity" (classification) between these species
J. "The most striking and important fact" (p. 397) is the affinity of these island species to those of the nearest mainland, without being actually the same species
K. Within an archipelago, species are more closely related to each other than to those on the mainland (but still distinct from each other)
L. The principle applies widely that island inhabitants are most closely related to the inhabitants of a region from which colonization is possible
M. According to this principle, it must be the case that at some former time, a single parental species covered both ranges (i.e., the migration event itself)
N. Darwin draws a parallel between Time and Space in the "Laws of Life"