Wednesday, January 31, 2007

Aussie Bones

From my limited perspective, real paleontology amounts to far more digging and head scratching than a rational person enjoys. It appears to help if you also know how to argue, cuss, and speak Latin. ~Robert Kirby

One rarely hears about fossils being found in Australia. Even the Sahara, long regarded as a poor source of finds, has become better known as a source for dinosaur and primate remains. And, of course, the Badlands of the United States and the Chinese deserts have long been known for their museum-filling capability. It's not that there aren't fossils down under, but they appear to be harder to find. One particularly good fossil bed, for example, is in tidal water, requiring a lot of slogging and draining to extract the bones. I imagine another reason may be that the continent lacked some of those convenient conditions (inland seas, flash floods) that cover bones and render them available to diggers eons later.

Of course, if they're going to take five years to report a find, that's not going to raise their profile, either.

I'm being facetious. Finding a bunch of fossils and making sense of them are two very different things. So, despite the fact that a huge cache of bones was found in limestone caves under the Nullarbor Plain in 2002, proper paleontology requires that people begin to figure out what's there before rushing out to shout about it. Very well, then, what's there?

What is there is a veritable "What's What" of Australia's prehistoric past. How about 23 extinct varieties of kangaroo? Or a marsupial lion? So far, 69 vertebrate species have been found, including mammals, reptiles, and birds. They got there because the Thylacoleo Caves (as they are now known) are natural traps. Tubes lead up to the Nullarbor Plain, and these are camouflaged by bushes or other growth. An animal comes along, pokes its nose into the brush looking for a morsel, and ends up tumbling into the cave below, from which there is no escape.

Sad for the ancient 10-foot-tall kangaroo but perfectly wonderful for the modern paleontologist.

The creatures date from anywhere between 400,000 and 800,000 years ago. All are now extinct, although some were around up to 11,500 years ago. Whenever such a large number of specimens turns up, it gives scientists a chance to speculate on extinction mechanisms. Climate change has long been a suspect in the demise of large Australian fauna, but the caves seem to indicate that the Nullarbor Plain has been arid throughout the 400,000 to 800,000 years that these animals wandered there. So, we look at the ever-popular culprit: Man.

Another article raises the specter of over-hunting, primarily on the logic that, since scientists hadn't dreamed so many species could be supported on the Plain, it must have been people who wiped them out. That's a bit of a leap of logic. The flora changed considerably over the time period; it hasn't been determined that all of these species existed at the same time; and (this is a big one to me) there's no evidence anywhere of hunting activity by humans.

As one of the scientists puts it, just because a species was arid-adapted 200,000 years ago, there's not enough information to extrapolate that to 40,000 years ago and announce that "Yep, them ornery humans did it." In fact, in places where bones of both megafauna and humans have been found, like Cuddie Springs, there's no evidence that the animals were hunted.

None of this is to say that humans didn't play some role in the demise of some of the large animals, perhaps through burning off brush or through some level of hunting. But, not enough information is available yet to draw those sorts of conclusions. A site like this has a lot to tell us, and it's just beginning to reveal its secrets. After all, it took around half a million years to accumulate all this information. It is not unreasonable to think it might take more than five years to figure it out.

I suspect the Nullarbor Plain will be throwing more curves at the scientists over the next few years.

Thursday, January 25, 2007

Streaking Past Jupiter

Equipped with his five senses, man explores the universe around him and calls the adventure Science. ~Edwin Powell Hubble

When everyone was going crazy about devaluing Pluto (about which I wrote at length, culminating here), one of the very stupid thoughts floated was that, if Pluto was no longer considered a planet, the New Horizons project should be scrapped.

New Horizons was, and is, a satellite on its way to investigate what we used to call the ninth planet (or the eighth planet, when it came inside the orbit of Neptune). By the time the debate had reached its height, with the IAU's decision to reduce Pluto to a dwarfy, icy, Kuiper-Beltish sort of thing, New Horizons was streaking through the inner Solar System like the proverbial bat out of hell, heading for a 2015 rendezvous with Pluto, Charon, two other little moons, and whatever else might be out there.

So, what did the nay-sayers want to do, call it back?

Fortunately, no one paid much attention to the non-debate concerning whether Pluto's demotion somehow denigrated the nature of the New Horizons probe. In fact, the probe's rapid progress has it in a position to start sending some science back, not about Pluto, but about Jupiter.

It is hard to grasp sometimes just how fast this satellite is moving. To put it into perspective, New Horizons launched January 19, 2006. It will make its closest approach to Jupiter on February 28 – of this year! Now if you aren't amazed by that, consider that Galileo, the little satellite that could, took six years to make the trip, needing two or three gravity boosts in the process. To put it simply, New Horizons is the fastest thing we've ever launched into space – or anywhere else for that matter.

Fast as it is, New Horizons is using Jupiter to give it a further speed boost, because it's still a long, long way to Pluto. Despite taking about 1/6 the time to get to the Jovian system that Galileo did, it will take another eight years to get to the depths of space where Pluto lurks. Pluto is awaaaaay out there.
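
Just to put rough numbers on that (strictly back-of-the-envelope; the path length is a round figure I'm assuming, not something from the mission team):

    # Rough comparison of New Horizons and Galileo trip times to Jupiter.
    # Dates are from the missions; the path length is an assumed round figure.
    SECONDS_PER_DAY = 86_400

    nh_days = 405            # 2006-01-19 launch to 2007-02-28 closest approach
    nh_path_km = 8.0e8       # assumed ~800 million km travelled

    galileo_days = 2242      # 1989-10-18 launch to 1995-12-07 arrival

    nh_speed_km_s = nh_path_km / (nh_days * SECONDS_PER_DAY)
    print(f"New Horizons average speed: ~{nh_speed_km_s:.0f} km/s")
    print(f"Trip-time ratio (Galileo / New Horizons): ~{galileo_days / nh_days:.1f}x")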

All this speed was accomplished by making the satellite as compact as possible (about the size of a piano, as opposed to Cassini, which was the size of a bus) and putting it on a large rocket. There is a trade-off for traveling at this incredible rate: you can't slow down. To put enough fuel and a big enough engine on New Horizons to enable it to slow down and possibly orbit the Pluto system would have made it so heavy that it would have taken many more years to get where it was going. Even a light-weight ion engine would have required so much deceleration time that nothing would be gained by the constant acceleration it affords (assuming we had sufficient experience with ion engines; so far, Deep Space One is the only mission to use one and, while it worked, it was a bit finicky in its behavior).
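
If you want a feel for why carrying the propellant to stop is such a penalty, the old rocket equation tells the story. Here is a rough sketch with assumed numbers; the braking requirement and engine performance are illustrative guesses, not anything from the actual mission design:

    import math

    # Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m_initial / m_final).
    # Rearranged to give the propellant needed to brake a given spacecraft mass.
    def propellant_needed(dry_mass_kg, delta_v_ms, exhaust_velocity_ms):
        mass_ratio = math.exp(delta_v_ms / exhaust_velocity_ms)
        return dry_mass_kg * (mass_ratio - 1.0)

    spacecraft_mass = 478.0     # kg, roughly New Horizons' mass at launch
    braking_dv = 10_000.0       # m/s of braking assumed in order to linger at Pluto
    chem_exhaust = 3_100.0      # m/s, typical bipropellant engine

    print(f"{propellant_needed(spacecraft_mass, braking_dv, chem_exhaust):,.0f} kg of propellant")
    # ~11,500 kg -- roughly 24 times the spacecraft's own mass, just to stop.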

The Jupiter encounter gives the New Horizons team a chance to test out systems, gather some science, and conduct a dress rehearsal of what an incredibly high-speed flyby is like. Already, early pictures are showing changes in Jupiter's Great Red Spot since Cassini took its flyby pictures. There will be pictures of Io's current volcanic state, which should prove interesting, as Io seems to be the most dynamic place in the Solar System short of the Sun.

But one of the most interesting aspects of this pass is that New Horizons will plunge through Jupiter's “tail”. As with any planet that has a magnetic field, the solar wind blows Jupiter's magnetosphere back away from the planet into a comet-like shape. In fact, New Horizons' path will take it down the length of that “cometary tail”, which will give tremendous amounts of data on the extent and strength of Jupiter's magnetic field and on the solar wind's effects.

Principal investigator Alan Stern is, as one might expect, very excited, but he also has something on his mind. After years of working to get NASA to send a probe to Pluto, he was saddened to see the old ice-ball get demoted to whatever it got demoted to.

Okay, okay, it's a dwarf planet, but I think the IAU's approach and definition is dumb, as I've said at length before.

At any rate, Mr. Stern is not giving up, using this flyby and any other opportunity to get Pluto back to planethood. Well, that's all well and good, but I think he's going to be disappointed. Once New Horizons gets out to the Kuiper Belt, it will give us, hopefully, a good look at Pluto and quite probably a look at some other ice balls floating around in the neighborhood. It's quite possible that Pluto will be found to be much closer kin to a comet than to anything we would normally recognize as a planet.

Knowing what Pluto and other Kuiper Belt objects are like will give us important clues about the formation of the solar system. If Pluto's status as a non-planet is confirmed as a result, that wouldn't be so awful.

Besides, “building block of the solar system” isn't a bad thing to be called.

Resources:
Pluto probe begins close-up study of Jupiter

NASA's Pluto Probe Prepares for Jupiter Flyby

Tuesday, January 23, 2007

Bonaparte's Gastric Distress

What a myth never contains is the critical power to separate its truths from its errors. ~ Walter Lippmann

History is confusing enough, what with trying to sort out the reality behind the various spins placed on it by the descendants of interested parties. It's even more difficult when people inject supposition and even mysticism. Nowadays, the CSI mentality has added a new twist, with “forensics” being applied to find conspiracies and treachery behind every historical death.

For example, there's the matter of King Tutankhamen's death. For years, an X-ray of Tut's skull was thought to imply a blow to the head might have been the cause of the boy king's death. Recently, Tut got the forensic treatment in a big way, with “evidence” gathered by some private detectives at the behest of the Discovery Channel. The “evidence” showed almost beyond a shadow of a doubt that Tut had been whacked by one of his close associates. Then came the CT scan that showed Tut's fractured leg, with clear evidence that he lived for a short time after the break. The conclusion of people who actually understand these things was that the broken leg almost surely caused Tut's demise.

Oh, shucks.

Of course, that didn't stop the private eyes from offering that he still could have been poisoned after his leg was broken. Why you would poison someone who is dying from a serious injury is problematic, but logic doesn't seem to figure into these things very strongly.

Then there was the Bonaparte Poisoning Case.

After defeating Napoleon at Waterloo, the Allies realized that they had to put the Emperor somewhere very far away. If he had any opportunity to raise a force, he would find his way back. After all, he had left Elba with a handful of men and subsequently regained the loyalty of the entire French Army. Definitely not a man to be trifled with. So this time, he was exiled to a small pile of rocks called St. Helena in the South Atlantic, where he lived for about six years, until his death in 1821.

An autopsy conducted immediately after his death determined that the cause of death was stomach cancer, a disease that is supposed to have claimed Napoleon's father. Since then, a popular myth has popped up asserting that Napoleon was poisoned with arsenic by someone in cahoots with the imprisoning powers, by someone who hated his guts, or by someone who just wanted the exile to end so he could get off that gawd-awful island. The fact that the autopsy found none of the signs associated with arsenic poisoning was not particularly important. Sometime last year, we were treated to another of those “forensic” evaluations that “clearly” indicated that poisoning could have occurred.

Sorry, people, it just ain't so.

A for-real pathologist has led a team that has determined that Napoleon died of gastric cancer, probably caused by the persistent gastritis that plagued Bonaparte through much of his adult life. Bang goes another illusion. But, why have the illusion if the evidence wasn't there to begin with?

I think part of it is that people are reluctant to imagine a titanic force like Napoleon succumbing to the weakness of his own body. Oh, dying of a heart attack during a critical moment would have been okay, but to simply pass on because of a disease probably brought on by heartburn seems so futile.

In the case of Tutankhamen, it's the youth of the victim that seems to fire our imaginations, plus the fact that, thanks to his intact tomb, we have such a vivid picture of a very human Tut, as opposed to gigantic figures carved on walls or in front of temples. Add to that the natural assumption of political intrigue in turbulent times, and a natural death seems unlikely. Unlikely or not, it appears to have been the case.

Or maybe it's just that most people would rather imagine some sort of sinister, conspiratorial scenario than attempt to examine the actual facts.

It's not that there haven't been conspiracies in history, but we normally have a pretty good idea what they were and how they worked. Occasionally, some new information comes to light that sheds a new perspective on some event, but most often the new information or analysis leads to a more prosaic interpretation. The sinking of the Maine, which was the excuse for the Spanish-American War, was supposed to have been the result of a Spanish mine. Instead, it appears to have been caused by a coal fire within the ship itself. The U.S. government was pleased to have the excuse, so they didn't look very deeply into any cause beyond that of a mine. I guess that would count as some sort of conspiracy, but, then, it was no secret that the U.S. was looking for an excuse to fight Spain.

And what fun is a conspiracy that everyone knows about?

Thursday, January 18, 2007

Martian Mistakes?

For man must strive, and striving he must err. ~ Goethe

There's something about Mars that seems to tie into strange things happening. So many probes have been lost that there are those who think that they are being shot down by ticked-off Martians who think we're spying on them. No such luck. Recently, two more instances of mistakes were raised, one of which could well be true, the other of which is more doubtful.

Not too long ago, I wrote a paean to Mars Global Surveyor, which ceased communications last November. MGS radioed a problem with one of its solar panels and went into safe mode. Repeated attempts to resurrect the onboard computer and to locate MGS (even using Mars Reconnaissance Orbiter to try to find it) were in vain. As is usually the case in such events, NASA initiates an investigation to try to figure out what went wrong.

Early indications are that it may have been a software glitch, introduced when an update was uploaded to MGS. Now, for those of us who use Microsoft products, this is not an unusual occurrence. But in the world of satellite operations, software is very single-purposed, so one would expect rigorous testing prior to an upload. It appears, in a preliminary finding, that the uploaded commands, sent up in June and intended to synchronize the two processors, had incorrect memory addresses. This caused the solar arrays to drive to a “hard stop”, which is why MGS went into safe mode.

Unfortunately, at that point the radiator for the battery was pointed at the Sun; the battery overheated and failed. As a result, insufficient power was available to the satellite, and communications could not be re-established.

Bummer.
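
For what it's worth, the kind of pre-upload sanity check that catches this class of error looks something like the hypothetical sketch below. None of the names, addresses, or ranges come from the real MGS flight software; they are invented for illustration.

    # Hypothetical sketch: verify that uploaded command parameters point into
    # the memory region they are allowed to touch before approving the upload.
    ALLOWED_REGIONS = {
        "sync_table": range(0x4000, 0x4400),   # invented address range
    }

    def validate_commands(commands):
        """Return a list of human-readable problems; an empty list means OK."""
        problems = []
        for cmd in commands:
            region = ALLOWED_REGIONS.get(cmd["target"])
            if region is None or cmd["address"] not in region:
                problems.append(
                    f"command {cmd['name']}: address {cmd['address']:#06x} "
                    f"is outside the allowed region for {cmd['target']}"
                )
        return problems

    upload = [
        {"name": "SYNC_PROC_A", "target": "sync_table", "address": 0x4010},
        {"name": "SYNC_PROC_B", "target": "sync_table", "address": 0x7F20},  # wrong
    ]
    for issue in validate_commands(upload):
        print(issue)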

It's difficult to understand, with the testing that normally occurs before any change is made to a system, how such an error could have gone undetected. But, that's why we have post-mortem investigations. And, it may yet turn out that something else was to blame, like a micrometeorite strike or component failure.

The other possible mistake involves the Viking landers. A geology professor at Washington State University, one Dirk Schulze-Makuch, thinks that the very tests used to discover signs of life on Mars may have killed it. According to his theory, based on data obtained from more recent satellites and landers, Martian life could have a chemistry based on a mix of water and hydrogen peroxide. The tests weren't designed to look for hydrogen-peroxide-based organisms, so the microbes could have been drowned or cooked.

Well, maybe.

Interestingly, not very long ago, another group had thought the landers may have found life after all. Looking at the Viking data, two of the three tests did, in fact, give evidence of life, but the third test, the gas chromatograph mass spectrometer (GCMS) that was supposed to be the most determinative of the tests, failed to show any evidence of microbial activity. In a recent re-evaluation of the GCMS, it was found that it didn't do such a good job of detecting life on Earth, much less on Mars. Lest we get hypercritical of the Viking experimenters, we should keep in mind that they wanted to be absolutely sure that what they found was conclusive.

Their methods were thoroughly tested based on assumptions about what Martian soil was like. Based on data returned from the Rovers, a group decided to use the GCMS on the Mars-like soils on Earth, where it failed to find any signs of life.

So we have one investigator telling us that data indicates Martian chemistry is different enough that the tests destroyed it, while another group tells us that it wouldn't have been detected by the key test anyway.

Ultimately, I have to lean toward the GCMS-doubters, because Mr. Schulze-Makuch does not mention the two positive tests, one of which included “drowning” the microbes in water. It's possible that he, like some, doesn't regard the first two positive tests as meaningful, or that, as a geologist, he's operating a bit outside his specialty and isn't familiar with the details of the tests.

Finding extraterrestrial life is no picnic. Many people think that life on other worlds could be based on silicon, copper, or some other mix of chemicals. But silicon bonds are not as sturdy as carbon bonds, and copper is not as efficient an oxygen carrier as iron (despite Mr. Spock's green blood). That is not to say that there may not be some exotic chemical makeup for life elsewhere, but within our own solar system it would seem less likely.

Everything in our solar system is made of essentially the same stuff. Even Titan has been seen to have a makeup that would seem to be very similar to the early Earth. It would seem that life in our neck of the woods would tend to be of stuff we can recognize, even though its exact form and behavior may be exotic to our way of thinking.

A mistake may have been made in the Viking search-for-life tests, but it's open to debate which, if either, of these explanations describes what it may have been.

Guess we'll just have to bring back some Mars dirt and find out.

Tuesday, January 16, 2007

Solutions

I have yet to see any problem, however complicated, which, when you looked at it in the right way, did not become still more complicated. ~ Poul Anderson

There are entire libraries full of books on the subject of problem-solving. Entire companies make copious amounts of money teaching people how to identify and solve problems. I have a fair amount of experience in the field of problem resolution, first as a quality professional and more recently as a system administrator (where problems are the order of the day).

Problem-solving methodologies borrow heavily from the scientific method. The scientific method is generally attributed to Sir Isaac Newton, although I've heard credit given to Galileo, who was certainly one of the earliest true experimenters. Even some early Greek scholars have been cited. The method is as follows:
  1. Observe some aspect of the universe.
  2. Develop a hypothesis.
  3. Use the hypothesis to make predictions.
  4. Test the predictions by way of experiments or further observations.
  5. If the predictions are borne out, publish, wait for the notification from the Nobel committee, then go on to the next problem.
  6. If it doesn't work, modify the hypothesis in light of the additional results. Lather, rinse, and repeat as necessary.
In the day-to-day world, substitute “Define the problem” for number 1, which includes gathering data to ascertain the real issue.
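
For the programmers in the audience, that loop looks something like the toy sketch below. The function names are placeholders, not a real library; it's just the shape of the process:

    # Toy outline of the observe / hypothesize / predict / test / revise loop.
    # The callables are placeholders for whatever your real problem demands;
    # "data" is assumed to be a simple list of observations.
    def solve(problem, observe, form_hypothesis, predict, run_test, max_rounds=10):
        data = observe(problem)                 # 1. define the problem, gather data
        hypothesis = form_hypothesis(data)      # 2. develop a hypothesis
        for _ in range(max_rounds):
            prediction = predict(hypothesis)    # 3. use it to make a prediction
            outcome = run_test(prediction)      # 4. test by experiment or observation
            if outcome == prediction:           # 5. borne out? publish and move on
                return hypothesis
            data = data + [outcome]             # 6. otherwise fold in the new result
            hypothesis = form_hypothesis(data)  #    and revise the hypothesis
        return None                             # ran out of patience, not problems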

Now this seemingly simple process is filled with pitfalls. For example, it is easy for a researcher to fall in love with a hypothesis. You have some observations, and you think about them. Then, in a bit of a flash, you have an explanation, based on inductive reasoning. When further observations are taken or experiments are conducted that don't support the hypothesis, the researcher sometimes finds fault with the deductive methods.

Or take the day-to-day term “Define the problem.” I would speculate that the single biggest difficulty in problem resolution is determining what you're really trying to resolve. A customer called one day to announce that a percentage of the parts we were making were out of tolerance, because they didn't fit a mating component made by the customer. Since the part was molded, we suspected that a core might have been damaged in a way that was not readily visible to the eye. So we asked which cavity number was bad. The customer said that it varied.

That simply doesn't happen. It turned out that the customer's own process was the problem. Our parts were fine, but his weren't.

Why had they accused our part? Ours was made of rubber; the mating part was steel. Everyone “knows” that rubber parts vary in dimension while steel parts are consistent. Except that in this case, the steel parts were machined after being cast, and it was the machining on the steel parts that was wrong, which is why our rubber parts wouldn't fit.

The customer went down the wrong path because a) he didn't bother to check his steel parts, and b) he made assumptions based on incomplete data.

Errors in calculation also lead problem solvers in the wrong direction. For example, based on a data set, a quality engineer made some recommendations that were going to cost a lot of money to implement. The boss knew that wasn't going to go over very well, particularly since the engineer admitted that his solution would not guarantee that the problem was corrected, so he asked me to look at the numbers. There was a data group that was prominently out of line with the others. This group was interpreted to show that the process was extremely unstable. In fact, the data group had been entered incorrectly. When fixed, the “instability” disappeared, and we were able to move to a good solution.
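
Here is a tiny illustration of how one mistyped data group can masquerade as an unstable process. The numbers are invented, not the actual data from that job:

    # Five subgroups of measurements; one group was entered ten times too large.
    # The "instability" in the group means vanishes once the typo is fixed.
    from statistics import mean, stdev

    groups_as_entered = [
        [10.1, 10.2, 9.9],
        [10.0, 10.1, 10.2],
        [101.0, 102.0, 99.0],   # typo: decimal point in the wrong place
        [9.9, 10.0, 10.1],
        [10.2, 10.0, 9.8],
    ]

    def spread_of_group_means(groups):
        return stdev([mean(g) for g in groups])

    print(f"as entered: {spread_of_group_means(groups_as_entered):.2f}")

    corrected = [[x / 10 if x > 50 else x for x in g] for g in groups_as_entered]
    print(f"corrected:  {spread_of_group_means(corrected):.2f}")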

Science is full of similar examples. Einstein's refusal to accept quantum theory led to years of futility on his part trying to make Relativity work down to the subatomic level. His “part” was right; if the other guys' (the quantum theorists') didn't fit, then they had to be wrong. Today we recognize that different rules apply to quantum particles than to macro particles. The attempt to reconcile them continues, but it's recognized that it's going to take some sort of new tools to do so.

Then there's the reality of errors. The history of science is filled with great scientists (including Einstein) making blunders that should never have happened but did, because scientists are human. The classic example involves DNA. When Watson and Crick were coming down to the wire on determining the structure of DNA, they were crushed to learn Linus Pauling was about to release his own results. Thanks to Pauling's son, Watson and Crick were able to get an advance copy of the elder Pauling's paper (a story in and of itself). They found a simple mathematical error that caused Pauling to decide that the structure was a triple helix.

Of course, the true structure was the famed double helix. Watson and Crick, who determined to publish the structure themselves, did not point out Pauling's error to his son. No, rather they toasted their own good fortune.

Not exactly the scientific method at work, but, then, I did say that scientists were human.

Thursday, January 11, 2007

Dark Stuff

In my youth, I regarded the universe as an open book, printed in the language of physical equations, whereas now it seems to me as a text written in invisible ink, of which in our rare moments of grace we are able to decipher a small fragment. ~ Arthur Koestler

It's a great big dark universe out there. It's not just dark because there's nothing to scatter the light of the stars and brighten things up. It's dark because the vast majority of what makes up the universe is invisible. The concept of “dark matter” has been around for some time now, and more recently, physicists have determined that something known as “dark energy” is an even greater contributor to the stuff of the universe.

Scientists began figuring out that something was missing when they began to try to determine whether the universe was going to expand forever or contract back into a “Big Crunch”. To do this, they began adding up the mass of the visible universe. I think we can safely assume that there were statistics involved here. At any rate, as they began to look at the mass figures and compare them to other things, like the Hubble constant and the apparent gravitational effects on galaxies, it became clear that something was seriously awry.

It appeared that a great deal of our universe wasn't visible at all. Over the years, the figure for the amount of missing mass has been juggled around, but it would be fair to say that 80-95% of the “stuff” out there can't be seen. But it got spookier.

When Einstein was developing his theories, he found that his theory of General Relativity had a problem: without some sort of repulsive force, all the planets, stars, and galaxies would clump together. So he added a fudge factor that he called the “cosmological constant,” evidently figuring that if he gave it a grandiose name, people would overlook its seemingly artificial nature. When Edwin Hubble discovered that galaxies were, in fact, flying apart from each other at high speed, Einstein gleefully removed the constant, thankfully eliminating what he called his “biggest blunder.”

Only it wasn't a mistake. For years, it had been assumed that the expansion of the universe was slowing down, as one would expect after an expansionary event like the Big Bang. Recent observations seem to indicate that the rate of expansion is actually increasing; in other words, there is some repulsive force at work. The cosmological constant is alive and well.

The cause of this repulsion was finally categorized as “dark energy.” Moreover, when looking at the missing mass problem, it appeared that there was as much as three times as much dark energy as dark matter. So let's summarize: Nine-tenths of the universe is made up of stuff we can't see, and we don't know what it is.

In scientific circles, this is called “a fine howdy-doo.”
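
To see where that nine-tenths figure comes from, plug in the rough proportions being quoted around this time (approximate, commonly cited numbers, not figures from any one paper):

    # Rough cosmic budget using approximate mid-2000s figures; exact values
    # vary from survey to survey.
    dark_energy = 0.72
    dark_matter = 0.23
    ordinary_stuff = 1.0 - dark_energy - dark_matter   # ~0.05

    print(f"dark energy / dark matter: ~{dark_energy / dark_matter:.1f}x")
    print(f"fraction we cannot see:    ~{dark_energy + dark_matter:.0%}")
    # ~3.1x and ~95% -- hence "nine-tenths of the universe is stuff we can't see."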

But, those daring men and women with the science machines are making some progress. On one front, what may be the greatest science machine of all, the Hubble Space Telescope, has been used to attempt to map the distribution of dark matter. Even though you can't see dark matter, you can see its gravitational effects on visible stuff, like galaxies. The Hubble was used in concert with ground-based telescopes to develop a three-dimensional picture of these effects.

The map used observations of half a million galaxies. Since we know through statistical methods what the galaxies should look like, if their shapes are deformed in some manner, it would mean that something exerted gravitational influence on the light from those distant galaxies before it got to our instruments. The map only covers a small fraction of the sky, but it's an impressive piece of work nonetheless.
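
The statistical idea deserves a small illustration: each galaxy's intrinsic shape is essentially random, but the distortion imposed by foreground matter is coherent, so averaging over many galaxies makes it stand out. A toy sketch with made-up numbers:

    # Toy model: random intrinsic ellipticities plus a small coherent shear.
    # The random part averages away; the coherent part survives.
    import random

    random.seed(1)
    true_shear = 0.02          # assumed coherent distortion from foreground mass
    n_galaxies = 500_000

    observed = [random.gauss(0.0, 0.3) + true_shear for _ in range(n_galaxies)]
    estimate = sum(observed) / n_galaxies

    print(f"recovered shear: {estimate:.4f} (true value {true_shear})")
    # Noise averages down as 0.3 / sqrt(500,000), about 0.0004.

That averaging, done patch by patch across the sky, is essentially how the three-dimensional picture was assembled.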

It also has some difficulties. Dark matter is normally associated with some sort of visible matter, most often as an invisible halo surrounding a galaxy. In some cases, the map shows dark matter with no associated visible material. It's not that there is some law that says dark matter can't be around with no visible stuff, but it's not expected. This could be due to some anomaly in some measurements, or it could indicate that there are problems with our models of the universe. At this point, it's too soon to start changing models, and the data that has been gathered will keep the cosmologists busy for some time.

On the dark energy front, a recent study of distant supernovae has shown that dark energy has been at work for at least 9 billion years. While this should have the cosmological constant crowd whooping it up, the data has one inconsistency. According to some current models, dark energy should have been an attractive force that far back in time. As matter density decreased, the force would have turned into a repulsive force. There was a point, therefore, where the dark energy flipped from attractive to repulsive. Unfortunately, the study doesn't indicate that to be the case.

The problem is that the data itself has a rather large margin of error, not unusual when looking at something this far away. About all that can be said for sure is that the data indicates the presence of dark energy in the young universe, but it doesn't necessarily depict its nature accurately.

Dark matter and dark energy are almost certainly part of our universe, but we are a very long way from understanding the properties of either one.

Well, no one said this was going to be easy.

Tuesday, January 09, 2007

Thinking About Craters

The world may turn topsy-turvy in an hour. ~ John Clarke

When people think of meteor impacts on Earth, they think of the crater in Arizona or the massive Chicxulub crater that probably finished off the dinosaurs. Whatever they may think of, they probably don't think about Wetumpka, Alabama. If we really get down to it, there's probably not much else that makes them think of Wetumpka. Heck, I live a half-hour away, and I never think of the place.

There's nothing wrong with Wetumpka (well, its local politics can get a little ridiculous, but that's not the focus of this blog), but there's nothing much about it to get anyone all that excited either, unless you're into meteor craters. I was reminded of that by an article in the January issue of “Alabama Living” magazine. Normally, this slim periodical is devoted to telling me how its publisher, the local electric co-op, is doing a great job (they're not, but that's not the point of this blog, either). But it also talks about interesting areas of the state, and this month it mentioned the ancient impact in Wetumpka.

Somewhere around 80 or 85 million years ago, 62 million tons of rock slammed into central Alabama. Of course, it wasn't Alabama back then, but we need to give you a geographical reference of some sort. The impact released energy equivalent to 2,300 megatons of TNT, no doubt ruining the day for a large number of southern dinosaurs.

(I wish I hadn't written that. Now I can't get rid of the image of a bunch of T-Rex-type dinosaurs sitting in rocking chairs, wearing white suits and sipping mint juleps.)
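
Back to the numbers: the mass and the megatonnage hang together reasonably well. A back-of-the-envelope check, with an impact speed that is purely my assumption since the article doesn't give one:

    # Does 62 million tons of rock at a typical impact speed come out near
    # 2,300 megatons of TNT? (The speed is an assumed, typical value.)
    mass_kg = 62e6 * 1000            # 62 million metric tons
    speed_ms = 17.5e3                # assumed impact speed, ~17.5 km/s
    JOULES_PER_MEGATON = 4.184e15

    kinetic_energy = 0.5 * mass_kg * speed_ms**2
    print(f"~{kinetic_energy / JOULES_PER_MEGATON:,.0f} megatons of TNT")
    # ~2,300 megatons -- consistent with the figure quoted in the article.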

Now, by all means come on down and take a look at Wetumpka's crater. The Chamber of Commerce will love you for it. But don't come down expecting to see a big hole in the ground. Oh, there's one there, all right, but if you look out over the area where it is, you'll see a pretty scenic view of trees and rolling hills. It takes careful examination of aerial photographs to really see that there is a crater.

That's not the fault of anyone in Wetumpka. With rare exception, it's hard to detect most of the visible craters on our planet from the ground. It's easier (and more impressive) to see something like the Terminal Moraine (the place where the glaciers stopped) in Thompson, Ohio, with its clear ledges and huge drop stones. But that's how the Earth is. We live on a dynamic planet, and, over the long haul, everything changes.

Craters fall prey to erosion, mountain-building, volcanic events, even colliding continents. For a crater to stay visible, it has to either be relatively recent (geologically speaking) or in a good place. The crater in Arizona is a good example of both conditions.

As a result of this smoothing out process, we get rather blasé about meteor impacts, as if they're something that only happened long ago. That's hardly the case. The same article contains a sidebar about Ann Hodges, a resident of Oak Grove, Alabama. Ms. Hodges, it seems, is the only recorded person who has actually been struck by a meteorite. In 1954, she was peacefully napping when a small rock from space crashed through her roof and hit her on the hip and arm. Beyond being startled and bruised, she suffered no significant injury.

Too long ago for you? On January 2, 2007, a golf-ball-sized lump of metallic rock came down in New Jersey and landed in someone's bathroom. Fortunately, the facility was unoccupied at the time, sparing some serious embarrassment.

Those are small events, but there are bigger ones possible. One need only look at the Moon to see a record of massive impacts over the eons. In fact, part of the reason we're here is because the Moon helped sweep up a lot of the detritus left over from the creation of the Solar System. Indeed, observations made during the recent Geminid meteor shower detected five or six impacts from Geminid meteors on the Moon. Telescopes were watching for these events to try to get a handle on what sort of risks a lunar base might face from such collisions.

That impacts were once much more common is seen on a planet like Mars, where a combination of a thin atmosphere and geological inactivity has resulted in numerous impacts that have stayed visible on the surface. In fact, one of the great disappointments in the early days of Mars exploration was the Mariner photographs showing a heavily cratered surface. We've since realized that there's a lot to learn on Mars, despite all those craters, so we've sent many messengers back there. In fact, there's at least one case, as I recall, of a recently discovered crater that was not in photographs taken just a few months earlier.

There's still a lot of stuff out there.

We understand why we don't see many craters on Earth, but there are places that should have a lot of craters that don't. Io, the volcanic moon of Jupiter, has practically no craters, but that's easy to understand, given the nearly constant resurfacing going on there. Europa is also rather shy of craters, probably because the surface is floating on an ocean and is being remade as well, through a kind of plate tectonics, except that the plates are made of ice.

Then there's Venus. Venus, which has been thoroughly mapped by radar, has craters, but they all seem to be about the same age. Really old craters don't seem to be there at all. It's certain that Venus has been geologically active during its history, but the implication is that the entire surface was overturned in some cataclysmic fashion, perhaps a massive lava outburst akin to the Siberian or Deccan Traps, only planet-wide. Alternatively, it could have something to do with the sulfurous atmosphere eroding the planet's features.

Then there's Titan, everyone's favorite satellite these days (including me). Scientists were disappointed not to find oceans of methane on Titan, but they have found what appear to be lakes in the polar latitudes, based on radar mapping conducted by the Cassini probe. Another thing they haven't found much of on Titan is cratering. Some have been located, but not the amount that might be expected on a body circling the junk-filled environs around Saturn.

One reason for the lack of craters might be Titan's reasonably thick atmosphere, but another might be some sort of geological activity. Something is replenishing the methane in the atmosphere, which is why Titan-watchers were expecting hydrocarbon seas. It's unclear whether the lakes, if that is what they are, contain enough liquid methane to make up for the methane that is broken up by solar radiation. It's possible that the methane is produced from within as part of water-ice volcanoes. Or there may be some mechanism for both the methane and the lack of craters that we haven't discovered yet.

It's amazing how interesting some holes in the ground can be.

Thursday, January 04, 2007

Neanderthals, Cosmic Rays, and Space Pens

There should be no articles of faith in science, unless it be the faith that no discovery, no law, is so absolute that it cannot be superseded. ~Anthony Storr

Time to tackle a few odds and ends once again.

British Neanderthals?

One of the intriguing things about anthropology, paleontology, and even archaeology is that discoveries that have been lying about in museums and university collections for years at a time can, when found or looked at a second time, yield interesting new findings. Part of the reason for this is that there are a lot of bones and artifacts, and priority is given to the most immediately promising. In the case of bones, there is often a great deal to be done just to prepare them for study. Sometimes the item just gets lost in the shuffle, cataloged away to be analyzed further when time permits. But along come other, newer finds, and the earlier ones get pushed back.

This is not because these scientists are somehow callous or forgetful. Newer finds generally have better provenance and more complete dating information than older ones. They also tend to be better handled in the field, taking advantage of the lessons learned as the various sciences have matured.

Take a bit of hominid upper jaw that was lying around in the Torquay Museum collection in England. The jaw, with its three teeth, was dug up in 1927 and sat in the collection until it was radio-carbon dated in the 1980's and found to be 31,000 years old. Recently, though, bones found in the same layer as the jaw were dated to between 37,000 and 40,000 years old.

That time frame raises the interesting proposition that the jaw could have belonged to a Neanderthal who had come to live in the British Isles. As I've said before, Neanderthals are a hot item these days, which is probably one reason for renewed interest in this particular jaw and its three teeth.

Current investigators are trying to determine if the teeth are in the right place and if they haven't been damaged, since teeth are normally an excellent source of DNA. But, as noted in the referenced article, even museums didn't take the care with bones in the 1930's that they do today. Contamination is a distinct possibility.

But, if DNA could be extracted, it would definitively establish whether the teeth belonged to a modern human or to Homo neanderthalensis. If the latter, it would be evidence that the Neanderthals got to Great Britain much earlier than previously thought.

You just never know what you might find in the back room.

Cosmic Rays Again?

It seems that when Neanderthals aren't in the news, cosmic rays are. I recently wrote about their possible impact on the weather. Although that link seemed rather tenuous to me, it seems that some scientists are taking it very seriously. Basically, the theory holds that cosmic rays are a factor in cloud formation, and the more cosmic rays you have, the more clouds you have. If clouds increase, the amount of sunlight reaching the surface of the planet is diminished. Simply put, it gets colder. In fact, it can get cold enough to create ice age conditions that inhibit the chances for life to flourish.

It turns out, through determinations of the amount of carbon-13 at various times during Earth's history, that a correlation has been found between periods of star formation and periods of cold climate. Heavy star formation generates large amounts of cosmic rays, supporting the theory of cosmic ray cooling.

The correlations are supposed to be so good that, according to the linked article, “the odds are 10000-1” against a coincidental relationship between star formation, the attendant cosmic ray increase, climatic cooling, and declines in living organisms. That a colder climate sustains less life is a fair statement, but I think the jury is still out on the star-formation-cosmic-ray-clouds connection.

For example, an alternative explanation might be that intense star formation (and subsequent star death) increases the amount of dust in our corner of the galactic neighborhood. Such an increase would also decrease the amount of sunlight hitting the surface of the planet and lead to planetary cooling.

I am not counting out the cosmic ray connection to climate, but I do think we're just beginning to understand this relationship.

It Was the Pencils That Were Expensive?

One of the enduring stories of early manned spaceflight concerns writing instruments. Astronauts needed to write things down, but a conventional pen uses gravity to deliver ink, so, according to the story, NASA had special pens made that would work in zero-g, each of which cost several thousand dollars. The Soviets solved the problem more simply: They used pencils.

According to the Scientific American article linked above, it just ain't so. It turned out the American astronauts started out using pencils, too. Eventually, though, both sides began to use pens that had the ink pressurized with nitrogen to force the ink out the right end. Said pens only cost $2.39 each. So the pens were cheap, but why did both the Americans and the Soviets switch in the first place?

It seems that pencils are not very safe for use in space. They break and flake tiny bits of graphite that would be no problem on Earth but would float around in a space capsule and possibly end up in an astronaut's eye or cause a short circuit somewhere. Oh, and graphite burns, too, which, after the Apollo fire, was something NASA was sensitive about. So pens made good sense.

But there's a little kicker to this story. The mechanical pencils that U.S. astronauts used were pretty special. They were so special that they ran $128.89. When this purchase became known, NASA got some serious knocks for blowing that kind of cash on a pencil, so they started looking very hard for a replacement. It turned out that Fisher Pen Company had spent $1 million of their own money to develop a “space pen.” Actually, it had been designed to write upside down, underwater, and in frigid conditions, making it ideal for use in space. And it sold for $3.98. The U.S. and the Soviets bought a case, got a quantity discount, and both paid $2.39 each.

So there was an element of truth to the myth; they just got it backwards.

Tuesday, January 02, 2007

Baffling Bursts

The important thing in science is not so much to obtain new facts as to discover new ways of thinking about them. ~William Lawrence Bragg

If I were starting out today in a career in astronomy, I would make Gamma Ray Bursters (GRB's) my specialty. These energetic events are causing no end of consternation in astrophysical circles, and when that happens, people start jumping at possible explanations. That's PhD material.

GRB's have been a mystery for some time now. Basically, a GRB is an object that very briefly emits a huge outburst of gamma rays. Because the bursts are brief, it was a long time before astronomers gained sufficient data to know that they were frequent events. Initially, it was thought that GRB's were reasonably local happenings, occurring within our own galaxy. But with satellites doing the observing, it was found that these things were all over the sky, not just in the plane of the galaxy. That left GRB-watchers with two discomfiting facts:
  • Most GRB's were very far away, which meant that
  • Their energy output was almost incomprehensible.
In fact, if one considered a GRB as if it were an ordinary explosion radiating in all directions, the amount of energy being released would have required a galaxy's worth of mass. Yet these things were happening all over the sky, and no galaxies were disappearing, which meant that one would be forced to conclude that E=mc² was being violated. It's one thing to mess with Mother Nature; it's another to screw around with one of the most fundamental equations of physics.

Fortunately, there was a way out if one got rid of the assumption of a traditional radiating-in-all-directions explosion. When astronomers looked at supernova remnants, they saw all sorts of evidence of very directional outbursts. They also saw the jets of energy from supermassive black holes. So GRB's must radiate their energy in the same way, meaning that the GRB's we observe happen to be pointed in our direction. It also made a fair case for GRB's being the result of supernova explosions, although there were still some questions to be answered (as I discussed here).
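
The arithmetic of beaming is what rescues the energy budget. A rough sketch, where both the jet angle and the isotropic-equivalent energy are representative values I've assumed for illustration:

    # If the burst is a narrow jet rather than a sphere, the true energy is the
    # isotropic-equivalent energy scaled by the fraction of the sky the jets cover.
    import math

    E_iso = 1e47                      # joules, assumed isotropic-equivalent energy
    theta_jet = math.radians(5.0)     # assumed jet half-opening angle

    beaming_fraction = 1.0 - math.cos(theta_jet)   # standard two-sided-jet beaming factor
    E_true = E_iso * beaming_fraction

    print(f"beaming fraction: {beaming_fraction:.4f}")
    print(f"true energy: ~{E_true:.1e} J, a few hundred times less than assumed E_iso")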

I should point out that supernovas have been primarily associated with long-duration GRB's. Brief ones are thought to be a consequence of collisions between neutron stars or black holes (or a neutron star and a black hole).

But Nature loves to tease scientists. In May and June of 2006, scientists saw two GRB's that were most unsettling. They were long-duration events; the June burst lasted 102 seconds, which is forever in GRB-speak (the May burst went for 4 seconds, which is still considered “long duration”). Since the launch of the Swift satellite, designed to find GRB's, astronomers have set up an alarm system to get telescopes looking quickly at the spot where a burst happened. When these two were spotted by Swift, telescopes around the planet (and, in space, Hubble) were turned to the coordinates, expecting to find supernovas, as they have often done in the recent past.

This time, they found nothing, which was a fine howdy-do. Well, technically, they did find something, but the observed behavior was definitely not like that of a supernova. The June burst, for example, was studied for a long period, with astronomers looking for a rebrightening object that would indicate a supernova had occurred, but despite looking for almost two months, nothing was seen after the first four days. An object was found at the site of the May burst, but it dimmed within a day. These would be vanishingly dim supernovas.

So, to add to the conundrum I discussed in my earlier article that not all supernovas create GRB's, we now had GRB's with no supernovas. It took a couple of months, but a new theory has emerged, one that has a new take on how some stars die. It seems that some massive stars, for reasons which are not clear, don't end in a massive supernova explosion; they simply collapse into a black hole, without so much as a whimper.

Well, that's not exactly correct, because the star's material falling into the black hole does let out a mighty shout, which is detected as the gamma ray outburst. The problem is that we don't understand what mechanism is at work here. All of the current theories describe the final fate of stars based on their mass. Stars like our Sun slough off outer layers, then collapse inward to become white dwarf stars. Massive stars go out in a blaze of glory and gamma rays. Except that maybe some just go out in a gamma ray burst.

It's an interesting idea, but no one understands what mechanism could be at work to cause the star to simply collapse on itself yet be massive enough to create a black hole. One possibility is that the star is collapsing into a companion black hole, but there would be difficulties with that scenario as well. There is considerable work to do based on these recent observations.

Fortunately for the astronomers, given the frequency of GRB's, they'll have no shortage of data with which to work.