Friday, August 31, 2007

Playing the Odds

Personally, I don't think there's intelligent life on other planets. Why should other planets be any different from this one? ~ Bob Monkhouse

When last we met, I was rambling on about how life on other worlds might come in unexpected forms. That got me to thinking about the odds of intelligent life occurring at all and Frank Drake's equation. Think about how we got here.

Planet Earth formed around 4.5 billion years ago. Somewhere along the line, after Earth had oceans and possible life, an object the size of Mars probably came along and smacked into the Earth, destroying any life that was around. On the other hand, the collision (according to the most popular theory these days) gave us our moon, which had a number of salutary effects, the most important of which was to stabilize Earth's wobbling about its axis. This stability lengthens climatic cycles, giving life forms a better chance to get going.

So, on Earth Mark II, life arose again. For an incredibly long time, all the planet had was microbial life and blue-green algae. This stuff may have had to survive a planet-wide ice age. Had any land-based life been around (and there's no evidence there was), it would have been wiped out by such an event, leaving the planet to the microbes again. Eventually, Earth thawed out, and life finally exploded (for reasons that no one has ever been able to explain) in an event called the Cambrian Explosion.

Things were pretty good until the Permian extinction. This event, possibly related to immense volcanic activity, wiped out 95% of the life around at the time. Life is tenacious, though, and gradually made a comeback. Mass extinctions came and went until about 220 million years ago (give or take) when the dinosaurs showed up and had a 165 million year run.

The dinosaurs went extinct thanks most probably to a combination of climatic change, possibly induced by another bout of volcanic activity, and a big honking meteor strike in Mexico about 65 million years ago. Now think about that for a minute. It took around 60 million years for our primate ancestors to show up, but once we did, we made pretty quick strides to getting smart. The dinosaurs were around almost three times as long and evidently never got beyond the clever carnivore stage.

It is, of course, possible that there were actually intelligent dinosaurs that built some sort of civilization. After 65 million years there isn't much chance that you're going to find signs of settlements or pottery fragments. It's hard enough to locate stuff like that from 5000 years ago. Not finding big-brained dinosaurs is no absolute proof either, considering how difficult it is to find ancient hominids. Let's face it: Intelligent beings are crunchy and taste good to carnivores (with or without ketchup).

So, even if you can liberate yourself from being a carbon-based chauvinist and imagine all sorts of exotic life forms, intelligent life seems to be a more difficult thing to come by, even if it breathes methane. So, how does this impact Drake's Equation?

Based on current information, it appears that planets are a pretty common event. While we haven't found many Earth-like planets, our method of search tends to find the big weird gas giants more easily than it finds good candidates for life. I know I'm saying we should accept the idea of strange life forms, but intelligent life appears to need some sort of planetary stability. Gas giants whirling around their stars close enough to be boiling or in eccentric orbits that send them from freezing voids to blistering passes by their alien suns are not going to be likely habitats for technological civilizations.

On the other hand, given the tenacity of living things like microbes and simple creatures, based solely on our sample of one planet (Earth), it's likely that life occurring is quite common. It may not be very common, though, that it ever gets intelligent enough to build radio telescopes.

If you refer back to the article linked above, then, it appears that the factors for the existence of planets and life arising at least once are probably pretty close to one, as shown in the second set of variables. The trouble occurs when you consider the proportion where intelligent life arises and builds a technological civilization. The number in both examples is 0.01, which leads to 1,000,000 (in the second example) as the number of planets in our galaxy with intelligent life.

Frankly, that sounds pretty high.

Given all the ways that life can be extinguished (not counting those that intelligent beings would inflict on themselves), it would seem that the intelligent life factor should be much smaller. How much? I have no idea, but reducing it from 10⁻² to 10⁻⁶ would not seem to be unreasonable. That would reduce the number of intelligent civilizations from potentially 1,000,000 to 100 in the entire Milky Way galaxy.

And it's a big galaxy.
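Because the Drake Equation is just a product of factors, shrinking any one factor shrinks the whole estimate by the same proportion. Here's a minimal Python sketch of that arithmetic; the individual factor values are my own illustrative assumptions (chosen only so the optimistic case lands on the 1,000,000 figure above), not the linked article's actual numbers:

```python
# Hedged sketch of the Drake Equation: N = R* · fp · ne · fl · fi · fc · L.
# Every value below is an illustrative assumption, picked so that the
# optimistic case matches the 1,000,000 figure discussed above.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Planets and life treated as near-certainties (factors close to 1),
# as argued above; only the intelligence factor f_i is varied.
base = dict(R_star=10.0, f_p=1.0, n_e=1.0, f_l=1.0, f_c=1.0, L=1e7)

n_optimistic = drake(f_i=1e-2, **base)   # the usual 0.01
n_pessimistic = drake(f_i=1e-6, **base)  # the smaller value argued for here

print(f"{n_optimistic:,.0f}")   # 1,000,000
print(f"{n_pessimistic:,.0f}")  # 100
```

The point isn't any particular set of numbers; it's that cutting the intelligence factor by four orders of magnitude cuts the final count by exactly the same four orders of magnitude, no matter what the other factors are.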

Now that doesn't mean that we should quit looking for life, because the odds favor finding some sort of living things out there. The odds might even favor finding creatures that walk around on land surfaces and eat each other, just like they have done here for eons. Just finding that sort of life would be an intellectually stunning event. People would have to come to grips with the fact that the universe is not here just for us.

It also doesn't mean that we should quit looking for intelligent life. Aside from the fact that my numbers could be so much smoke, even if I'm right we could still have someone in our neighborhood. Making contact with some alien civilization would send shockwaves throughout our immensely self-centered societies.

I look at it this way, though. It's difficult to know how many Earth-like planets there are out there, but it's likely that they won't be inhabited by anyone who is going to be put out by a bunch of strangers showing up from another planet. It means that humanity has a chance to survive future disasters like those that have wiped out life before by going to the stars and colonizing other systems. Assuming that we don't obliterate ourselves through our own collective stupidity (no sure thing), our descendants could even survive the death of our sun by going to another star system.

It's nice to know we have options.

Monday, August 27, 2007

Life As We Don't Know It

The fancy that extraterrestrial life is by definition of a higher order than our own is one that soothes all children, and many writers. ~ Joan Didion

If a group of reasonably science-aware folks were forced to list the Big Questions in science, we might come up with a disparate listing, but there are two questions that I suspect would make everyone's list. First is the origin of the universe. Probably every human being who ever lived has pondered this question because we all want to know where we came from. Even the most fundamentalist Jew, Muslim, and Christian have at some time in their lives considered alternative ideas for this greatest of all events.

The other question that would make everyone's list is whether life exists elsewhere in that same universe. Of course, the dream of many (and nightmare of some) is that we might someday find a species with which we can communicate (something we don't do so well with each other), but most of us would be perfectly happy to find some tiny microbes wriggling about in the sands of some distant world.

In recent years, the question has become less theoretical because, as we launch more and more satellites and probes, we find more and more tantalizing hints that life could exist on other worlds in our own solar system. The possibilities have generated a lot of discussion of late.

For instance, a German scientist by the name of Joop Houtkooper has announced that the Viking Landers, which arrived on Mars in 1976, may have found life after all. He says the lander data shows signs of hydrogen peroxide. His little bugs would have been filled with a combination of hydrogen peroxide and water, providing them with a natural anti-freeze. As much as 0.1% of Martian soil could be of biological origin, a ratio comparable to what one finds in Antarctica, where, last I checked, it's bloody cold.

Of course, other scientists disagree, but their arguments are based pretty much on the old "life on Earth doesn't work like that." They also base their opinions on the gas chromatograph results. The trouble with that is that some intrepid scientists have determined that the GCMS may not have been accurately set up for the conditions on Mars. They based their opinion on the fact that the instrument was unable to find life on another planet -- Earth.

As someone once put it (it may have been Carl Sagan), it would have been so much easier if a Martian giraffe had walked by the cameras.

Life is also more tenacious than we generally credit. One of the continuing objections to life on Mars is that it's bathed in radiation (besides being colder than an intergalactic welldigger's ankles). That sounds like a real killer until you remember that the Apollo 12 crew brought back a camera full of terrestrial microbes from an old Surveyor probe, microbes that were resuscitated. Some scientists think the contamination occurred after the camera was returned, but, considering that we keep finding life on Earth where we aren't supposed to find it (salt flats, suboceanic smokers, and so on), the possibility that microbes can survive the harshest conditions is certainly not far-fetched.

The problem is that most people tend to think of "life as we know it." That is, life should be carbon- and water-based. Intelligent life should be bipedal with binocular eyes. Well, that's life as we know it. Except that some very intelligent life on Earth (whales and dolphins) is not bipedal. If we can be that far off for life we see all the time, how wrong might we be about other life forms?

Recently it seems that scientists are trying to think outside the carbon-based box. Maybe DNA is not the be-all and end-all. Maybe Titan holds ammonia-based bugs. And maybe inorganic compounds can generate life. Perhaps Spock's green blood isn't so unlikely after all (well, it might be, since it would make for an inefficient transfer of oxygen).

The point is that scientists are beginning to speculate about life as we don't know it. That opens a major can of worms (or microbes) because it raises a thorny question. If we're looking for life as we don't know it, how will we know it when we find it? It means we need to reconsider the factors that make something "alive".
And, if we aren't sure of what's alive, can we be sure we would recognize intelligence if we saw it? (Insert sarcastic joke here.) Most people expect that intelligent beings will have artifacts, cities, spoken language, and so on. But what if we came across a highly intelligent group of ... well, whatever ... that communicated through color, perhaps in ranges like the ultraviolet that we can't even see?

The Discovery-Science axis did some imaginative programs a while back speculating on life on other worlds. They did seem to be fascinated by creatures that floated, but if you can get beyond that, the flights of fancy taken by the scientists involved showed that entirely alien ecosystems could operate logically. And they stayed within the carbon and water framework.

I suspect that we're going to find strong evidence of life within my lifetime (assuming I don't get run over by a truck tomorrow). It may be microbial, it may be swimming in the oceans of Europa, or it may be breathing methane and crawling around on Titan. But, I do think we'll find something.

I just hope we'll recognize it when we find it.

Wednesday, August 22, 2007

The Phlogiston Effect

All science is either physics or stamp collecting. ~ Ernest Rutherford

When I was in college, I wanted to be a physicist. There were two reasons for this. First, physics is the crown jewel of the sciences. If you don't think that's true, just ask a physicist. Second, physics is the science of everything. If you don't believe that, just think about how many disciplines have "physics" in their title, like geophysics, astrophysics, and biophysics. Modern chemistry is dependent on quantum theory, as is electronics, and who came up with quantum theory? Why, physicists, of course.


Regrettably, the physics thing didn't work out. As someone once said (or should have said), "Those who can, do. Those who can't do, teach. Those who can't teach, kibbitz."

Since physics is the study of nearly everything, it might seem natural that physicists would be obsessed with a theory that encompasses everything. In fact, until the early twentieth century, that didn't seem to be much of a concern. But, early in the century, Einstein came out of the woodwork with a flock of papers that culminated in his Special and General Theories of Relativity. In an incredible burst of genius, Einstein tied together electromagnetism, gravity, time, and mass-energy equivalence (among other things) into one unified whole. It was beautiful.

Then, along came the Copenhagen Gang, with guys like Wolfgang Pauli, Werner Heisenberg, and Paul Dirac, led by that Einsteinian arch-nemesis, Niels Bohr, dragging their quantum theory in with them. Okay, they weren't really a gang (nor did they all come from Copenhagen), but given Einstein's antipathy to quantum theory, one can imagine his having a view of them in black leather jackets. What upset Einstein was that while Relativity portrays a deterministic, predictable view of the universe, quantum theory depicted a world of uncertainties in which only probabilities could be assigned to any event.

(Ironically, the theory Einstein despised grew in part from his own work: it was Einstein who proposed that light comes in discrete packets, the quanta we now call photons.)

Now, no one disputed that Relativity worked extremely well on large scales, but even Einstein was forced to admit that it failed in the realm of the very small, at atomic levels. Quantum theory, on the other hand, while describing the atomic world well, collapses into ordinary Newtonian and Einsteinian behavior at large scales. Einstein spent much of his life searching for a theory to unify quantum-scale physics with Relativity, what has loosely been called the Grand Unified Theory (GUT) or, as it has become colloquially known, the Theory of Everything.

Einstein failed, but it hasn't stopped everyone in physics from trying ever since. There is a dream physicists have that there is a formula that somehow brings together Einstein's Field Equation (the famous one with the cosmological constant) and quantum theory. It would be short, succinct, and fit on a physicist's business card.

The dream is still unfulfilled. String theory has been touted until we're all tired of it as the road to GUT. The one or two of you who have read this blog before know how I feel about that. Briefly, string theory is an outrageously complicated way to explain things that have already been explained while not really predicting anything on its own. Despite periodic pronouncements over the last few years by string theory advocates, string theory hasn't succeeded in doing any better job of explaining what's going on than standard quantum theory does. And, despite our ability to use quantum mechanics to describe what goes on and make predictions of what will happen in the subatomic universe, there are too many uncertainties (not of the quantum type) for scientists to reconcile that universe with the big one all around us.

It turns out that we don't understand the big universe all that well, either. For example, over the years, we've come to realize that the observable universe doesn't have enough mass. At one time, it was thought that 75% of the universe was unaccounted for; the number is now around 95%. To try to explain this anomalous situation, physicists and astronomers came up with "dark matter", a sort of ... something ... that we can't see, can't describe, but can observe ... sort of ... indirectly.

When dark matter failed to account for all the observations, a new element, dark energy, was invoked. Now it's felt that most of what we can't find is actually dark energy. Except that we don't know what that is, either. It also turns out that what little we thought we knew about dark matter may be wrong now, too.

Which brings me to Phlogiston.

Back in the days of alchemy, one of the many things that puzzled those practitioners was the business of combustion. Take a piece of wood and set it on fire. After a while, you've got a pile of ashes that doesn't look anything like the wood you started out with. Obviously, something interesting was going on here. So, back in the 18th century or thereabouts, the alchemists decided that everything had something called Phlogiston in it. When you burned ("dephlogistonized") a thing, the Phlogiston was given off and what was left, the ashes, was the real substance of the burned object. As to Phlogiston itself, it was colorless, odorless, tasteless, and weightless. Note that if you left off "weightless", you'd have a pretty good definition of dark matter.

Matters stood this way until Lavoisier demonstrated the actual nature of combustion, demonstrating that it was, in fact, rapid oxidation. Yet, prominent thinkers like Priestley resisted the oxygen hypothesis in favor of Phlogiston. It took some years (and the passing of some of the more prominent doubters) for Lavoisier's work to become fully accepted.

Phlogiston was an attempt to explain something we didn't understand, but it was an improvement over invoking superstition and myth. It was an attempt to bring scientific methodology to bear, albeit in an early, halting way, in studying a phenomenon. Today we're much more knowledgeable and sophisticated. We've got geniuses like Hawking, Thorne, Guth, following in the paths of Einstein, Feynman and others. But, for all their brilliance, we're still peeking out into a vast universe (and into a vast quantum world) and trying to figure out what the Phlogiston is. That's not a bad thing. Knowing what you don't know is a big step toward knowing it.

You'll have to excuse me now. I think my dinner is beginning to dephlogistonize.

Postscript: After I thought about the cute Phlogiston angle, I did a little web searching and found someone else had this clever idea. I wasn't surprised; with a bazillion people blogging out there, it was inevitable that someone else would have heard of Phlogiston and linked it to dark matter. At least I was wordier about it. Whether that's a good thing is open to debate.

Friday, August 17, 2007

A Mathematical Certainty?

Why is there air? ... Any phys ed major knows why there's air. There's air to blow up volleyballs, to blow up basketballs. You guys call ME dumb ... ~ William H. Cosby Jr., Ed. D.

Sometimes I have trouble deciding where to start on a piece, and this is one of those times. I guess the thing to do is to quote from John Tierney's New York Times article:

"In fact, if you accept a pretty reasonable assumption of Dr. [Nick] Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation."

Take special note of the words "mathematical certainty". Later in the same article:

"Dr. Bostrom doesn't pretend to know which of these hypotheses is more likely, but he thinks none of them can be ruled out. 'My gut feeling, and it's nothing more than that,' he says, 'is that there's a 20 percent chance we're living in a computer simulation.' My gut feeling is that the odds are better than 20 percent, maybe better than even."

Dr. Nick Bostrom is a philosopher at Oxford University. He has written a paper speculating that there's some chance (his "gut feeling" of 20%) that our lives could, in fact, be a supercomputer simulation being played by some highly evolved "posthuman." Mr. Tierney is either the most brilliant satirist I've ever read or someone with a tenuous grip on reality. Taking a 20% "gut feeling" and turning it into "mathematical certainty" would suggest the latter.

I have nothing against philosophy; in fact, I'm a big fan of Aristotle, Kierkegaard, Kant, and Sartre (Plato can go suck eggs). When I attended Case Tech (no, Aristotle was not a Scholar-in-Residence), the school made us take a course in the humanities each semester in an attempt to keep us from turning into total geek vegetables. I took several philosophy courses over 4 1/2 years (don't ask), so, while I'm no licensed philosopher, I am familiar with the concepts.

Dr. Bostrom's argument sounds suspiciously like a combination of several questions that are put to freshman philosophy students.

1. Are there minds other than my own? Alternatively, is this reality actually someone else's dream? Discussing the nature of reality is an important philosophical concept, leading to questions on morality, the meaning of existence, and how we relate to others. However, the questions as phrased above are lightweight and lead to some pretty silly arguments. I always felt, for example, that if we were in someone else's dream, we'd keep showing up for important occasions (like final exams) totally unprepared and naked.

2. Does God exist? Back in the sixties, this was generally a civil but useless discussion; today, I'm sure that some philosophy classrooms have descended into warfare. The problem here is that there is no "proof" either way, because all arguments for God are based on faith (the Bible, miracles, and so on) or the "cosmic watchmaker" principle of "the world (universe) is too complicated to have come about by accident, so there must be a Creator." Attempting to disprove the existence of God is almost meaningless as well, since negative proofs are notoriously difficult to make into solid arguments. Note that considering the existence of God is not the same as contemplating the nature of God, which is a much more important field of discussion.

In one fell swoop, Dr. Bostrom has lumped these rather specious questions together, postulating a "designer" who is running ancestral simulations and presumably still living in his mother's basement. Mr. Tierney runs with this, contemplating nested simulations, the first of which is created by the "Prime Designer" or, as most of us call him, God.

Frankly, this stuff is pretty rank sophistry, probably more on Mr. Tierney's part than on Dr. Bostrom's. Aside from a silly restating of freshman-level philosophy questions, the whole discussion presupposes that behavior in the future will be the same as behavior now. A computer powerful enough to create an entire universe, complete with life-forms, buried fossils, climate effects, black holes, and on and on won't be here in fifty years, despite Mr. Tierney's vague citation of "some computer experts". Impressive 3-D graphics do not the creation of an entire simulated universe make. So we are looking at some dim and distant future (despite Mr. Tierney's statement that the time doesn't matter) and assuming that entities that are most likely very different from us are still into video games.

Even Dr. Bostrom hedges his bets, saying, "This kind of posthuman might have other ways of having fun, like stimulating their pleasure centers directly." Or perhaps they'll be fond of going outside and playing baseball. Either way, it's no sure thing that they'll amuse themselves creating simulations of their ancestors.

There are two things that are especially disturbing about this entire discussion. First, if this is the state of philosophical thought today, we're in trouble. Philosophers have become as vapid as reality TV. Second, Mr. Tierney's article appears in the Science section of the paper. That's frightening because the average web-dolt who stumbles across this is going to take it as science instead of sophistry. It'll turn up on Digg and Slashdot (it's already been on Fark), it'll be Wiki-ed, it may even end up on legitimate science sites (Scientific American bloggers dote on this sort of thing).

Before you know it, politicians will be saying that a 20% chance of something is "a mathematical certainty" because if it's in the New York Times, well, then it's a fact.

May the Prime Designer protect us from sophists and those who think they make sense.

Sunday, August 12, 2007

The Innovators

Anthropology is the science which tells us that people are the same the whole world over - except when they are different. ~ Nancy Banks Smith

The anthropologists certainly have been busy of late. For starters, Kenyan scientists recently unveiled some Homo Erectus skulls. What made them noteworthy is that they were the first female Erectus examples ever found. The other thing that made them interesting is how small they were. The implication is that Erectus may have been more ape-like than previously thought. The female skulls were also found in a lower (older) layer than a Homo Habilis jawbone, which would seem to indicate that rather than Erectus having been a descendant of Habilis, Erectus and Habilis coexisted, probably descended from a common ancestor.

A new DNA study gives strength to the long-standing theory that modern humans came out of Africa as a new species and are not related to Neanderthals or "hobbits". The skulls mentioned above do nothing to change that idea. The DNA analysis shows that differences between human populations can mostly be defined by distance from Africa.

So, "out of Africa" is confirmed, but Erectus falls in terms of modernity. The latter is interesting in that Erectus was considered to be an innovator in the realm of tools. Now, being more primitive doesn't lessen that possibility, but it may help to explain why Erectus made that one innovation then seemingly never made another.

The idea of innovation brings us to a new theory of Neanderthal intelligence. An archaeologist in England has decided that the archetypal caveman was a better innovator and more adaptive than previously thought. The article, perhaps unfairly to the scientist, doesn't really explain how he builds his case about more innovation. The data I've read seems to indicate that, yes, Neanderthal did come up with some new ways of doing things, but, after 300,000 years, he was still using the same methods. Like Erectus, Neanderthal came up with some good ideas early on, but he never came up with any more.

As to Neanderthal's adaptability, it is certainly correct that he was able to deal with the colder climates better than his neighbor Heidelbergensis, but it is quite possible that climatic changes around 40,000 years ago contributed to his decline.

Neanderthal certainly wasn't anywhere near as innovative as early Homo Sapiens. As Sapiens came out of Africa in that last wave, he demonstrated a near passion for invention. New spear points, improved hunting techniques, and farming all came from Homo Sapiens' drive to change the world around him. And don't forget art. It was Sapiens who started wearing decorative shells which they painted and modified.

In fact, there is one scientist, Nicholas Conard, who thinks he knows exactly where and when the explosion of art occurred. It was 40,000 years ago in Swabia. Swabia is located between France, Switzerland, and Bavaria, and Conard has found carvings, decorative objects, and even a flute, none of which, he claims, have been found anywhere else in layers as early as these.

Conard's theory (reported in the September/October issue of Archaeology Magazine; the article is not online at this writing) is controversial in that it tries to pinpoint a single locale as the spot where art suddenly bloomed in modern mankind. There are concerns that the layers in which the artifacts are found are not so clearly delineated. Some point to the fact that carnivores like to use the same caves humans use. A bear, digging a bedding area, would thoroughly confuse the layers in which artifacts and bones would be found. Conard claims that such mixing is not an issue in his finds.

All this business of innovation boils down to this: Until Homo Sapiens came along, innovation was a very occasional thing. It is not that Erectus, Neanderthal, and all the others didn't come up with new inventions and discoveries. It's that they didn't come up with many, and the ones they did create or find happened early in their career on the planet. Sapiens, on the other hand, has been constantly inventing and innovating virtually since he appeared.

It's not that Erectus and Neanderthal were stupid; they had to be more mentally advanced than their predecessors. It's just that they got to a certain point early on and stalled there. Eventually, conditions changed or competition came along, and the old-timers had shot their bolt.

Now before we get all smug about our cleverness, stop and think about some timeframes. Erectus was around for a million years; Neanderthal lasted over 250,000 years. We've only been on top of the pyramid for 40,000 years. There's no telling how long we'll last, assuming we don't just blow ourselves up and turn the planet back over to the insects. If we do last, our species may someday be looking at a new group that makes our innovativeness look like cream cheese.

Gives one pause for thought.

Sunday, August 05, 2007

Even More Lost In Space

Where there is no vision, the people perish. - Proverbs 29:18

I haven't written about manned spaceflight because, well, it's getting very depressing. Consider these recent developments.
  • Tommy Holloway, former manager of NASA's space station program, said to a Congressional committee, "I think depending totally on COTS would be a significant risk to the long-term viability of the station."

  • As NASA told us how wonderfully refurbished the shuttle Endeavour was (while misspelling the name on a welcome banner at the Cape), they also revealed it had a leak in the crew cabin. Fortunately, they seem to have found it.

  • Meanwhile, Energia was on the verge of bankruptcy, and Rocketplane Kistler was having continuing problems raising funding, preventing them from getting their mitts on all of that $207 million of taxpayer money NASA wants to give them.

  • Finally, there was the tragic blast during a test of a SpaceShipTwo engine, as the Burt Rutan-Richard Branson Virgin Galactic project continues to go nowhere.

This isn't good.

Ironically, while our manned and commercial projects are sinking in the sunset, science projects continue to amaze and astound. The Mars Rovers set survival records every minute they continue to operate. They were built so well, it appears they will survive a planet-wide dust storm. New Horizons, passing by Jupiter on its way to Pluto, returned exciting data. Cassini turns up new discoveries in the Saturn system with every orbit. The Phoenix lander launch was a thing of beauty.

So, of course, our geniuses in Washington think science funding should be cut back to ensure there's enough money for COTS. While the science folks find ingenious ways to maximize their payloads and return more data, manned space flight is still using old techniques that got us to the moon in 1969.

I can't help thinking that the recent dusting off of the Saturn launch vehicle that's been rusting away as a display piece may be a precursor to its being refitted to use for Orion.

Okay, that's a joke, but so is manned space flight, especially commercial space flight.

I've written over and over again about my concerns about so-called "free enterprise" "private sector" space projects. I've also worried over the "gee-whiz" promises of the Bush Administration to go to the moon and Mars (all nicely timed to occur long after Bush and friends are out of office) as pie-in-the-sky directionless objectives. I'm not going to flog that horse again. Besides, I don't have to.

"Successes" like those listed above speak for themselves.