Thursday, April 30, 2009

Of Mountains and Models

If you can't explain it simply, you don't understand it well enough. ~ Albert Einstein

"Using one unsolved mystery to solve another makes some people sceptical," says Marcus Chown in his article on the latest weird theory involving dark matter. To which I can only add, "Amen, brother." It seems that dark matter caused the re-ionization of matter when the universe was about a billion years old. Well, at least that's the what the gang over at Fermilab is postulating these days.

But that's not all. Somehow while dark matter was busily annihilating itself to ionize hydrogen, it was also responsible for the growth of the supermassive black holes at the centers of galaxies, in a process called, so help me, "dark gulping."

Oh lawsey, lawsey, lawsey, when will it stop?

It's not that I have anything against dark matter or its evil twin, dark energy. They've never done anything to me. No, my problem is with scientists who, in their rush to publish something, anything at all, will invoke those undetected and unmeasured phantoms at the drop of a tachyon. The dark things are the deus ex machina of modern physics.

I've been trying to figure out why there are so many half-baked, ill-formed, and generally unsatisfactory theories coming out of physics and cosmology these days. Certainly string theory can take its share of the blame. I've written a few thousand words on the subject (just search the blog on "string theory" and see), but string theory isn't to blame for dark matter/energy.

The need to "publish or perish" is, of course, always an issue. Scientists trying to get grants or tenured positions all always under pressure to publish theories as soon as possible, often without the requisite due diligence. And then there's just the issue of keeping your organization in the scientific limelight. Certainly, the advent of the Large Hadron Collider has had the folks at Fermilab trying to show how relevant they still are. But even those pressures can't explain the weird sorts of theories being published almost daily.

I've got a theory about theories.

First, there's the problem of data. It used to be that data about the workings of the universe at the scales of the very large or the very small were difficult, if not impossible, to come by. Nowadays, though, there is a veritable flood of information from satellites and sophisticated ground-based telescopes. Even without the LHC, there is a mountain of data created by existing particle colliders, and, thanks to our ability to actually "see" things happening at quantum levels, there is data on the behavior of atomic particles.

The trouble is that there is so much data, no one can stop long enough to analyze it. Consider the recent announcement of planets discovered in images from the Hubble Space Telescope. This is great news, except that the images were taken years ago; astronomers are just now reviewing them closely enough to tell what they show. Hubble has taken such a huge volume of measurements that no one can get their arms around it. Add Spitzer, Chandra, and a host of other devices, and it becomes obvious that we simply have more data than we can analyze in a human lifetime.

Now, in itself, such a huge volume of information is not a bad thing. The problem comes when scientists cannot impose enough discipline on themselves to actually sift through the mountain of data with some degree of focus. What we tend to see is a researcher taking a single observation and making a theoretical mountain out of a single-data-point molehill.

The other cause of the increase in cockamamie theories is the ease of constructing computer models. Now, I'm a networking specialist by profession. I understand a lot of the ins and outs of programming. Over the years, I've learned that the results a computer program produces are completely dependent on the conditions set in the program.

Now consider that no one has ever actually found any dark matter. There is some putative evidence of gravitational effects that would be explained by some unknown source, but we have no idea what it is and what its properties are. If one is to construct a program to model the interaction between dark matter and black holes, one is going to make a lot of assumptions. Just because that model turns out something called "dark gulping" (oh lawsey, lawsey, lawsey), that doesn't mean the universe engages in such astro-gastronomical adventures.

In other words, as one scientist once put it on some show or another, if you want to prove that pink elephants exist, you can create a computer model that will generate them given certain starting conditions.

This is not to say that computer models are all phony or useless. On the contrary, a model based on solid initial conditions can be very useful. But, those initial conditions must be well understood, and the laws of interaction that work on those conditions also need to be well understood.
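To make the point concrete, here's a toy sketch in Python. It's entirely made up, with no pretense of real physics: a "model" whose conclusion is dictated by a single assumed parameter.

```python
# A toy "dark gulping" model. The interaction_rate is pure invention;
# change it and the model "discovers" a different universe.

def run_toy_model(interaction_rate, timesteps=1000):
    """Grow a hypothetical black hole by feeding it 'dark matter'
    at an assumed rate. Returns the final mass (arbitrary units)."""
    mass = 1.0  # arbitrary starting mass
    for _ in range(timesteps):
        mass += interaction_rate * mass  # assumed growth law
    return mass

print(run_toy_model(interaction_rate=0.01))   # ~21,000: runaway growth!
print(run_toy_model(interaction_rate=1e-9))   # ~1.000001: nothing happens
```

Run it with a generous assumed rate and you get dark gulping; run it with a tiny one and you get nothing. Same code, opposite universes, and neither run tells you which assumption is true.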

Consider weather prediction. Around here, hurricane season makes us all weather researchers. Many of us will monitor predicted hurricane paths with a passion borne of a cross between pure fear and morbid curiosity (the latter if it looks like the hurricane is going somewhere else). One weather site used to publish a map showing the paths predicted by all the current computer weather models. I presume they did this to provide a little comic relief to lessen the tension, because some of the models would show a hurricane making it to Minnesota as a category 2 storm.

You might say I'm being unfair because the weather is a complex business, with many factors to consider, the interplay of which is still not fully understood. My answer is that you're right. The weather here on Earth, which we've been studying for centuries like our lives depend on it (because they do), is so difficult to model because we don't understand all the physics of what's going on in the atmosphere.

So how can we be confident that a model involving an unknown substance (dark matter) with unknown properties interacting with a known object (a black hole) with poorly understood properties is actually producing a meaningful result? We can't be.

Computer models can be useful (even the weather ones) as a guide, but we should be aware that they could also be leading us to Minnesota.

What physicists need to do is start focusing their efforts more on collating some of those mountains of data and interpreting what they tell us directly, rather than cherry-picking an interesting observation here or there and creating a model that mimics the behavior based on initial conditions that may or may not have anything to do with physical reality.

The various forms of science used to be called "disciplines." What's needed is to restore some discipline to the sciences.

Friday, April 24, 2009

Classical Gas

You can't possibly hear the last movement of Beethoven's Seventh and go slow. ~ Oscar Levant, explaining his way out of a speeding ticket

And now, for something completely different.

My mind works in mysterious ways, when it works at all. The other night, as usual, there was nothing worth watching on Discovery, Science Channel, or the History Channels. I mean, seriously now, gangs, explosions, Armageddon, and phony survivor shows are simply not science or history. So I sallied down to PBS and tried to watch an opera, Lucia di Lammermoor. Now, the thing is, I don't really care for opera or ballet, which is strange because I really enjoy serious music. The main reason I don't like opera is that I'm not particularly fond of solo voice. Even though I can admire the voices of Beverly Sills or Luciano Pavarotti, I really prefer orchestral pieces.

In case you're curious, my gripe with ballet is that a theme will get repeated endlessly. If you've ever heard Copland's Appalachian Spring, imagine it extended to three times its length. To me, it becomes too much of a good thing. At any rate, as I struggled through Lucia's anguish, it occurred to me, as it often does, that most people these days simply don't listen to serious music.

Part of the problem is that many pieces are long, and folks these days have grown up listening to three-minute pop tunes. Another thing is that you really need to listen to serious music, not boogie to the beat, go jogging, or carry on a conversation. Modern attention spans have grown so short that the contemplative nature of serious music seems to interfere with those "busy schedules" everyone seems to have.

The funny thing is that when I have serious music playing on my MP3 player at work (connected to some speakers), everyone who comes into the office comments on what beautiful music I always have on. People would listen to the stuff if it was actually available to them. But, aside from the very rare "classical" music radio station or the even rarer televised concert, people just aren't exposed to the music much any more.

Which is a shame. They just don't know what they're missing. They also don't know what "classical" music is. So I did a little Internet searching (some links are below).

The proper term for what most folks call "classical" music is "serious" music, as opposed to popular or "pop" music. "Classical" actually refers to a particular period of music. So, I'd like to take a moment to set the record straight.

There are several recognized eras or periods of serious music: Medieval, Renaissance, Baroque, Classical, and Romantic. Then there is 20th-century serious music, which is variously labeled as Modern, Neoclassical, and Postmodern, periods which overlap and are argued over by musicologists. Let's go with the easy ones first.

Medieval music is pre-15th century stuff. Gregorian chants are typical of the period. "Chant" is the operative word here. There is little rhythmic variation or harmony in this music; this was primarily sacred music, sung in unison. A lot of what you hear called Gregorian chant is actually a modernized version, updated to appeal to our more musically sophisticated ears.

Renaissance music is where instrumentation becomes more important, although the voice is still predominant. Madrigals and folk tunes appear in this era, which runs approximately from 1400 to 1600.

The Baroque period, from 1600 to 1750, introduces structured pieces of music that were much more elaborate and orchestral. Think Vivaldi's Four Seasons. Bach and Handel are the biggest and most prolific composers of the period, but Telemann, Purcell, and others are still staples of chamber orchestras today.

Now comes the Classical period (1750-1820). It's only 70 years, and today only three composers from this period would be considered well-known: Mozart, Haydn, and Beethoven. That's pretty impressive company, but a relative handful considering that the period has given its name to virtually all non-pop music. I really can't imagine how that has come to be, but there it is.

If you're looking for the period where all that music you've heard over the years happens, you've arrived. It's the Romantic period, running from 1820 to 1900. You've got Brahms, Berlioz, Debussy, Dukas, Dvorak, Grieg, Liszt, Tchaikovsky, and Wagner, and most everyone else who wrote what you've been calling "classical" music all your life. It was an incredible explosion of much of the greatest music of all time.

Then we get to the 20th century. It's not that there hasn't been great music; there's been plenty. In our time, you can find Gershwin, Stravinsky, Prokofiev, Copland, Shostakovich, Bartok, and bunches more. You can also find Arnold Schoenberg and John Cage.

The music of the last century started out as a continuation of the Romantic period, but you start to see complex rhythms and all manner of polytonality. Some of these composers were influenced by jazz, some by folk tunes, and some just went off the deep end. "Modern" was used to describe the rougher edge of early 20th-century music. This was music with attitude, but the influences of the masters were still audible. It was also called Neoclassical, depending on who you ask, although I recall a music professor separating the pre-World War I period as Modern and everything after that as Neoclassical.

Then there was Postmodern. Again, depending on who you ask, Modern and Postmodern get lumped together and Neoclassical gets lost, or they become fuzzily overlapping periods. All I know is that when someone says Postmodern, I think of Schoenberg and John Cage. Schoenberg invented 12-tone composition, with rigid rules forbidding the repetition of any of the 12 tones until all the others have sounded. Aside from being hard to write, it can be hard to listen to. John Cage just went weird. Among his pieces is 4'33", which is four and a half minutes of silence. At least it's easy to play.

The Postmoderns (or Moderns, if you don't like all the subdivisions) also introduced highly polytonal pieces that frequently sound like everyone in the orchestra is making it up as they go along. There are those who love this sort of thing; I can't say I've ever gotten into it, although I did hear a piece once that consisted of musical bird calls which actually sounded sort of nice.

So, there you have it: 600 years of music in a nutshell. With that much music to choose from, how can you limit yourself to the music of one generation? There's no reason to give up pop music, but there's a world of sounds that you're missing if you don't investigate serious music.

Heck, you can even call it "classical" if that's what it'll take to get you to listen to some.

Some resources:

Intro to Classical Music

20th Century Classical Music

Neoclassicism

Sunday, April 19, 2009

Dark Murmurings

It is a good thing to proceed in order and to establish propositions. This is the way to gain ground and to progress with certainty. ~ Gottfried Leibniz

I am so tired of dark energy.

It's not that I'm dead set against the concept. The idea that we haven't discovered everything resonates with me. Whether it's dark energy or dark matter, in a huge universe that's 13.7 billion years old (give or take), there are bound to be things we haven't found. It's just that I am so tired of dark stuff being invoked every time someone can't explain something.

Worse, dark energy seems now to be a variable entity, at least according to this study. This idea has come up before in a different context, but actual data didn't support the "flipping" of dark energy from attractive to repulsive.

Perhaps it's the theory that's repulsive.

At any rate, now it seems that the increasing acceleration of the universe is actually slowing down, which is what people used to think was happening. The culprit in this new plot twist is, once again, the measurement of distances to supernovae.

It is assumed, based on a lot of observations, that certain types of supernovae all have the same intrinsic brightness. Therefore, if you see a supernova in a galaxy far, far away, you can measure its brightness, compare that to what it would be at a known distance, and calculate the actual distance to the object. Then you can look at the red-shift of the spectrum of the supernova and determine how fast it's receding.
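For the arithmetic-minded, here's a minimal sketch of that calculation in Python. The magnitudes and wavelengths below are made-up illustrations, not real survey data; the peak brightness of a Type Ia supernova (absolute magnitude around -19.3) is the commonly quoted figure.

```python
# Standard-candle distance from the distance modulus: m - M = 5 * log10(d / 10 pc)
def luminosity_distance_parsecs(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Redshift z = (observed - emitted) / emitted; for small z, velocity ~ c * z
def recession_velocity_km_s(observed_wavelength, emitted_wavelength):
    c = 299_792.458  # speed of light, km/s
    z = (observed_wavelength - emitted_wavelength) / emitted_wavelength
    return c * z

d = luminosity_distance_parsecs(apparent_mag=24.0, absolute_mag=-19.3)
print(f"Distance: {d:.2e} parsecs")  # ~4.6e9 pc: a galaxy far, far away indeed

v = recession_velocity_km_s(observed_wavelength=7500.0, emitted_wavelength=6563.0)
print(f"Recession velocity: {v:.0f} km/s")  # an H-alpha line shifted from 6563 to 7500 angstroms
```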

In this case, that resulted in some numbers that didn't fit the current theoretical flavor of the month.

Now, it's possible that there could be problems with the data. For example, some of the supernovae could be partially obscured by clouds of gas and dust. In that case, the observers make corrections for the amount of dimming caused by the intervening stuff. Perhaps the adjustments weren't correct.

The thing is, the same arguments could be made for the original data that determined that the universe's expansion was accelerating. Or it could be possible that, in fact, supernova luminosities have changed over billions of years. It could be that the chemistry of stars has altered over time, with the result that older supernovae are brighter or dimmer than we would expect.

Then there's the problem of the voids.

One of the assumptions of physics and astronomy is that there's nothing special about our place in the universe, so that, as we look in different directions at different objects, nothing is affected by where we happen to be. This would be fine, except that the more we look at the universe, the less homogeneous it is. I first brought this up in a discussion of the Swiss Cheese theory of the universe. Some time later, this article popped up, raising the same sorts of questions.

Now, another survey of the sky has been taken that once again raises the issue of apparent huge voids in the universe. Keep in mind that this survey covers only six degrees of the southern sky, so stranger things could await us. If nothing else, it could be found that one of those voids is home for the Milky Way.

If that's the case, a lot of theories about dark matter, dark energy, and the expansion of the universe end up needing to be reworked. One nice thing is that non-homogeneity would pretty much get us out of this business of constants changing over time. If in fact our place is not exactly the neighborhood we thought it was, that would explain why things look different to us than we expected. It's sort of like those perspective tricks that are played on the eye, where the relative sizes of objects are not what they seem.

In the case of space, it's gravity that may not be affecting us or the objects we observe to the extent we believe it is.

The data collecting will continue, and it's quite possible that real life is going to turn some speculations about dark stuff on its ear. It's also possible that it could provide some details that could finally give us a handle on what the dark stuff might be, if it does exist. It wouldn't ruin my day to find that dark energy and dark matter were at least partially defined, because getting to that point would allow cosmologists to finally begin to hone some of these theories. I also suspect that a lot of inconsistencies would start to disappear.

And I'll bet through all of it that Einstein's theories still stand strong.

Tuesday, April 14, 2009

Survey Says ... Nothing!

I haven't trusted polls since I read that 62% of women had affairs during their lunch hour. I've never met a woman in my life who would give up lunch for sex. ~ Erma Bombeck

In my little treatise on statistics, I didn't spend much time on opinion polling. Primarily that's because, while I have a lot of experience in the realm of physical measurement statistics, I have a lot less in the area of opinion surveys. That being said, I do have a good bit of experience with one type of opinion poll, and that is the product evaluation.

Now there are differences between opinion polling and product evaluation, but, generally speaking, both types ask for subjective judgments about something, which could be how good a job the President is doing or how good a shave you got from an unidentified razor.

I use the razor as an analogy because that's the sort of product testing I did, so I can speak with experience.

Product testing and opinion polling both depend on two things: the population being polled and the nature of the poll questions. In addition, as we shall see, there is the matter of how the responses are grouped.

In our testing, we selected a random group of shavers. Sort of. We chose our mailing list from a list of members of the American Society for Quality Control (as the American Society for Quality was known then). We chose that group because we wanted people who would be more likely to seriously evaluate the product. In thinking back over the masses of data I saw, that probably was the case. Most tests were comparison tests, in which testers received our product and that of a competitor. They were asked to rate each after a set number of uses, then pick which they thought was better.

You know those commercials where they say, "Four out of five people loved our product"? This is just that sort of test, and believe me, if you're going to make that claim, you'd better have some good-looking data to support it, because the Feds just love to occasionally call your bluff. Avoiding doing time, or at least paying hefty fines, pretty much comes down to how you handle your data.

For example, "Four out of five people couldn't tell the difference" isn't the same as "Four out of five thought we were better." There are a lot of ways to ask the tester questions that can give you the answer you want. All of them may be used in a particular survey, but only the one that gives you the desired result is used. For example:
  1. Rate the product as follows: Poor, Fair, Good, Very Good, Excellent (this can be done for various characteristics)
  2. Rate product A against B: A is much better, A is somewhat better, No difference, B is somewhat better, B is much better
  3. Which product would you buy?
And endless variations thereof. Note question 1: I can ask about any number of characteristics, say seven of them. My product might lose on six but win on one. Guess which result goes into the commercial?

There's also the business of grouping results. It's typical to group Poor and Fair as well as Very Good and Excellent. So we really only have three categories, but I have seen situations where Good, Very Good, and Excellent were all combined. When comparing the results of two products, one of which is okay and the other of which performs very well, the aggregated data might show no statistical significance, because the "okay" product's "Good" results carry as much weight as the better product's "Very Good" and "Excellent" results.
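Here's a made-up illustration of the effect: two products with very different rating profiles that look identical once Good, Very Good, and Excellent get lumped together. All the counts are invented.

```python
# Invented counts for two products, each rated by 100 testers.
product_a = {"Poor": 5, "Fair": 10, "Good": 70, "Very Good": 10, "Excellent": 5}   # merely okay
product_b = {"Poor": 5, "Fair": 10, "Good": 10, "Very Good": 40, "Excellent": 35}  # genuinely good

def favorable(counts):
    """Lump Good, Very Good, and Excellent into one 'favorable' bucket."""
    return counts["Good"] + counts["Very Good"] + counts["Excellent"]

# After lumping, both products score 85 out of 100 "favorable." The real
# difference between a pile of Goods and a pile of Excellents disappears.
print(favorable(product_a), favorable(product_b))  # 85 85
```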

And that's where the problems come from.

When you hear about competing surveys for, say, political candidates where one says Smith is going to win and the other says Jones is a shoo-in, there are several things that can be happening. One survey may say, "Which candidate can do a better job?" while the other says "Which candidate will you vote for?" Those are two different questions, because factors beyond how good an alderman the candidate is come into play. A person might think a female candidate would do a better job than a man, but the gender angle would sway that person to vote for the man. Or someone might be a "straight ticket" voter, so it doesn't matter how good one candidate is relative to another. That person will always vote for a Democrat, a Republican, a conservative, or a liberal regardless of ability.

Or when you hear that people "found no difference" between low-priced product A and high-priced product B, you might be seeing grouping in action. In fact, you can see it in interactive surveys. I am frequently asked to complete surveys on the performance of technical support people by Dell, among others. What I've noticed is that if I rate something 7 or above (0 being bad and 10 being great), the survey goes on to the next question or set of questions. If I rate something 5 or lower, I'll be asked a question about what was wrong or what could be improved (the reaction to a six varies with the type of question). So basically, it doesn't matter whether I rate something 7, 8, 9, or 10, or if I rate it 1, 2, 3, 4, or 5. Those categories are essentially clumped together.

Then there's the whole business of populations. As noted, our sample was drawn from a large number of quality professionals. This means they would tend to be better educated and possibly higher wage earners than the average person. In our tests, though, all that mattered was that we had a variety of ages and hair types (we had male and female testers). We also kept track of test results by user to weed out those who never seemed to find differences. The good thing is that we could compare historic data because the nature of the test group was very consistent.

One of the big problems with polls is that you can't be sure of the nature of the sample. If the poll today is top-heavy with high-income conservatives compared to the last survey, which had a larger blue-collar component, then saying that a Democratic candidate is doing worse today may not mean anything.

So, if you're going to evaluate a product claim or a popularity poll, you won't be able to do it without understanding how the data was collected, how it was grouped, and what the makeup of the population was. And, most of the time, you can't get any of that information without going to a lot of trouble. That is why I say that I don't put much stock in polls. You're better off making up your own mind.

That way, at least, you know who's responsible for the decision.

Thursday, April 09, 2009

Statistics, Damn Statistics, and Lies

After all, facts are facts, and although we may quote one to another with a chuckle the words of the Wise Statesman, “Lies - damn lies - and statistics,” still there are some easy figures the simplest must understand, and the astutest cannot wriggle out of. ~ Leonard Henry Courtney

Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: "There are three kinds of lies: lies, damned lies, and statistics." ~Mark Twain


Mark Twain is said to have heard the quote and created his own version later, embellishing it with the attribution to Disraeli, who he evidently thought was the "Wise Statesman." I find it ironic that a quote about the science of statistics, maligned for being used to manipulate facts, has in fact been manipulated itself.

Most people will admit that statistics are useful, at least until the numbers don't go their way. Then statistics are an elegant way of telling half-truths (as someone else once said). As usual, it depends on whose ox is being gored.

To my mind there are two very distinct uses of statistics. One is the evaluation of actual physical data; that is, physical measurements of a sample of things are taken, and statistics are used to determine what this sample tells us about the population as a whole. More often, statistics are used in this context to compare two populations of things without having to measure all of them. Why not just measure all of them, you say? Well, there may be too many things in the populations, or the measurement could be destructive. If you were planning to sell the things, destroying all of them in the measurement process would not be a profit-making methodology.
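As a minimal sketch of that first use, here's the basic sample-to-population inference in Python. The "population" is simulated here; in a real test, only the sample would ever be measured (or destroyed).

```python
import random
import statistics

random.seed(42)  # for a repeatable illustration

# Pretend this is a production run of 100,000 parts with some dimension
# that averages 100.0 with a standard deviation of 2.0.
population = [random.gauss(100.0, 2.0) for _ in range(100_000)]

# Destructively measure just 50 of them...
sample = random.sample(population, 50)

# ...and estimate the whole run from the sample.
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / len(sample) ** 0.5  # standard error of the mean
print(f"Estimated mean: {mean:.2f} +/- {1.96 * sem:.2f}")  # roughly a 95% confidence interval
```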

The other use is polling. Polling is an entirely different animal because it is highly dependent on technique. For example, there were stories recently about why polls in New Hampshire picked Obama to beat Clinton (who actually won). The bottom line is that the polling technique was not good.

Opinion polling depends on getting a representative sample, asking the right questions or at least asking the same questions consistently, and hoping the respondents don't lie. When any of those things fail, the poll can look pretty stupid (as in "Dewey Defeats Truman").

Personally, I don't much care about opinion polls. I make my decisions on whatever facts I can obtain, whether it's about a political candidate, brands of cars, or what to watch on TV. What the crowd thinks isn't my only measure of what's good or right. It's merely another factor, and usually a small factor at that.

However, measurement statistics can be rendered unreliable as well. I was in quality control for a long time, and statistics is a big part of that game. I saw stats misused often, sometimes on purpose, more often because of errors in method or interpretation. Let me give you a classic example.

I was working at a factory that made blades for a well-known label maker. The tolerances on the height of the blade were tight because the blade had to just cut through the plastic label, but not through the paper backing. That way you could grab the little piece of plastic and pull the paper off. Those of you who got a label maker with a blade that wasn't right know how much fun that could be.

Well, we seemed to have a lot of trouble making those blades, which was odd because we held equally tight tolerances elsewhere. But there we were, throwing out a good chunk of product. Worse, the customer was sending back another good chunk of product because we weren't catching all of it.

Our industrial products quality engineer went out and ran a process control study, which consisted of measuring the height of five blades every 10 minutes and recording them over a number of hours. She came back and announced that the process was out of statistical control and that the only solution was to measure more parts more often. Remember that business about destructive measurements? Measuring those little blades wrecked their cutting edge, so measuring more of them meant more going into the trash. Moreover, she couldn't guarantee that would catch all the bad product.

My boss didn't think management was going to buy that, so he came to me, the consumer products quality engineer, and said, "HELP!"

In reviewing the data, I found a few things. First, there were a couple of arithmetic errors that made the data look stranger than it was. Second, the process was out of statistical control, but that wasn't bad. "Statistical control" means that data points scatter randomly around a stable average over the time frame measured. Few processes are actually in statistical control, because tools wear, which causes a dimension to change over time. This is a predictable condition and can be dealt with. In fact, the process would produce parts that would be well within tolerance for a period of about eight hours before the grinding wheels had to be adjusted.
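For anyone who's never seen the arithmetic, here's a rough sketch of the X-bar chart behind a study like that one: subgroup means and ranges, a grand mean, and control limits built from them. The blade heights are invented; the A2 constant is the standard one for subgroups of five.

```python
import statistics

A2 = 0.577  # standard X-bar chart constant for subgroups of size 5

# Invented blade heights, five per subgroup, one subgroup every 10 minutes.
subgroups = [
    [10.02, 10.01, 9.99, 10.00, 10.03],
    [10.04, 10.02, 10.03, 10.01, 10.05],  # drifting up as the wheel wears
    [10.06, 10.05, 10.04, 10.07, 10.05],
]

xbars = [statistics.mean(g) for g in subgroups]  # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]    # subgroup ranges
grand_mean = statistics.mean(xbars)
mean_range = statistics.mean(ranges)

ucl = grand_mean + A2 * mean_range  # upper control limit
lcl = grand_mean - A2 * mean_range  # lower control limit
print(f"Center: {grand_mean:.3f}  UCL: {ucl:.3f}  LCL: {lcl:.3f}")

# A steady drift in the subgroup means (tool wear) will eventually cross these
# limits -- "out of control" statistically, yet predictable and manageable.
```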

The third thing was the killer, though. At the time that the adjustment had to be made, the process would go nuts for about an hour, then level out and run smoothly again. So, something was going weird when the operator was making adjustments. I went to the manufacturing engineer (who was my legendary fishing buddy, Moon) and asked what could be going on. He said we should just go out and see what the operator was doing.

It turned out to be an issue of timing. The operator made adjustments but measured the product before the adjustment had taken effect, so he was always over- or under-adjusting until he got lucky. Once he was instructed to wait 10 seconds before measuring a part after an adjustment, the problem went away.

So the statistics didn't lie; they were screaming the problem at us, and no one had taken the time to listen. In other words, the statistics were being used in a vacuum. That was the mistake. The statistics were about the world, but no one went out into the world to see what was really happening.

This happens in science at times. People get enamored with a set of measurements and overlook the fact that they just might be in error or that the wrong method of interpretation has been applied. It's not the statistics that are bad, it's either the data or our methods. Either way, we should approach statistical summaries with caution. Don't be afraid to ask to look at the raw data because there might be some surprises there.

By the way, as a result of our little investigation, it was determined that we could significantly reduce inspection. Things worked great for two weeks, until the quality engineer came to me and said, "Your reduced inspection doesn't work. Last night's production was rejected this morning." I asked her what the inspection data during the shift showed. When she pulled it, it turned out that the entire run was bad. The machine had a mechanical problem, but the shift supervisor had told the operator to keep running anyway. The engineer would have known this if she had simply looked at the statistical charts.

Numbers don't lie; people, on the other hand ...

Saturday, April 04, 2009

Facing Risk

When everyone feels that risks are at their minimum, over-confidence can take over and elementary precautions start to get watered down. ~ Ian Macfarlane

It's an odd confluence of events that turned my mind to the concept of risk. First, I was the victim of a hit-and-run accident in which some loon drove through a stop sign and the rear of my car without ever so much as touching his/her brakes. I was not injured, save for a bit of soreness where the seatbelt kept me from flying across the car. Everything happened so quickly that I never even saw the other vehicle; the airbags deployed and blocked the view on the side where I was hit (those things are bloody fast). By the time I stopped spinning, the perp was headed for parts unknown.

The next evening, I was reading Smithsonian magazine and stumbled across this article, which discussed the proposition that increased safety measures (e.g., seat belts in cars, helmets and padding in sports) appear to increase risky behavior.

The next day I was sitting in the doctor's office waiting to get a once-over to be sure I really was uninjured (I was). For some reason, my doctor has a TV in the waiting room with Fox News on it. Frankly, any news station would be depressing, but Fox News with its endless "the sky is falling" reporting is an extreme downer. One of the day's depressing breaking stories involved yet another loon walking into a building and killing a bunch of people.

Finally, there was this little bit of news from The New Scientist informing us that a group of scientists have determined that poker is, in fact, a game of skill, not of chance.

Putting all of these things together has led me to some conclusions.
  • If you try to protect people from themselves, they will still find ways to harm themselves. Give them seat belts and air bags, and they will drive like idiots because the seat belts and air bags will save them. Give them football pads and helmets, and athletes will use them as weapons because they think they're so well protected (as, presumably, is the opponent currently being speared).

  • You cannot avoid all risk. A driver with no regard for others can be merrily yakking away on his/her cell phone, cut over two lanes, and send you over a cliff. A lunatic gunman could walk into the grocery store and blow you apart like a ripe watermelon. You could build a bunker with a lifetime supply of food, power, water, and video games, then trip over an ottoman and fracture your skull on a concrete wall.

  • There are different kinds of risk. There are the risks over which you have no control. Then there are risks which are voluntary, like climbing Mt. Everest or trying for the land speed record. There are risks which are discretionary, like how one invests. Then there are risks that are necessary to take, because without the risk there is no reward. The scientist who puts forth a controversial theory risks a career; a firefighter going into a burning building is risking a life. Yet without those sorts of risks being taken, great leaps aren't made, and other lives aren't saved.
Which brings us to poker.

One can argue the merits of the study, which seems to base its conclusion on the fact that, of 103 million hands of online Texas Hold 'em poker, 76% ended without hands being revealed. That is, the losers folded rather than calling, so that no hands were shown. To the researchers, this meant that poker was a game of skill, because the better hand did not always win.

Entire books have been written on the subject of playing Texas Hold 'em, which is, in my opinion, to five- or seven-card stud what 9-ball is to billiards. All these games require skill; some of them, though, have a lot more variables to track.

I have been known to watch the World Series of Poker, and an interesting trend has developed. The young Internet-trained poker players are beginning to dominate the professional gamblers. Are we to believe that they are somehow more skillful? Or is it because they play at a different risk level?

Tournament poker is not the poker the professional grew up playing. In a tournament, you put up an entry fee, which is all that is risked. For that you get a set amount of chips. When those are gone, you're done. You can't, in the parlance, put up a marker for the amount of the bet and continue. In other words, your risk is fixed (I think some tournaments now allow a buy-back-in option, but I'm not sure how that works).

Also, when a certain number of players have been eliminated from the tournament, everyone who is left is guaranteed a payoff, even if they lose all their chips. This is a virtually no-risk situation. Now imagine the Internet player playing with his own money against a professional. There is no guaranteed payoff. The only way you walk away with more money than you started with is to win it. I would suggest to you that, in that event, a lot more hands are going to be played out with cards being shown.

That doesn't mean poker isn't a game of skill. The skill is in knowing when to bet, how much to bet, and when not to bet. The skill is in keeping yourself in the game until the cards turn your way or the other guy makes a mistake. I would submit that many an aggressive Internet player would fare much more poorly in a money game against a professional. It's a different game when it's your money. The player who is willing to go all in on a bluff and then hits a flush on the community cards might be inclined to play a little differently.

In other words, the Internet player who won big because he hit a straight when the other guy had a pair of aces in the hole might not hang around to find out if he's going to hit that straight.

There is a point to all this rambling. There are risks that I can't control. All I can hope is that, next time, I'm not going through that intersection when a crazy person runs the stop sign at high speed. There are risks I need to take if I want to feel good about myself, but I can measure those. And there are risks I will never take because the rewards don't outweigh the potential losses as I measure them.

That's why I don't tailgate, relying on my ABS brakes to save me, or cut someone off, hoping they'll back off. It's also why I won't be climbing Mt. Everest any time soon.

And it's why you won't see me in Vegas unless I'm playing with someone else's money.