
MARRIAGE AND ITS DISCONTENTS

Of course, if a happy marriage were easy to achieve, we’d have a much lower divorce rate and far fewer affairs. Unfortunately, it’s much easier to fall in love than it is to stay in love. Just as we can use the body’s own chemistry to chart the effect of infatuation, we can also use it to understand that waning desire is a natural part of any long-term relationship. As Oscar Wilde said, “The essence of romance is uncertainty,” but uncertainty is precisely what you are giving up when you get married. Sadly, finding the right person can never be reduced simply to smelling sweaty T-shirts, appealing though that prospect might be. As studies have shown, this chemical element of romance lessens over time. One researcher has found that the altered brain chemistry of falling in love lasts roughly six to eight months. Others have found that it takes two to three years for the feelings of infatuation to fade to feelings of neutrality—not mild attraction but neutrality!
 
The problem with relying on our passion to guide us is that a marriage has to stand the test of time to be successful. Some people may feel relieved when they get divorced, but I don’t think anyone has ever counted it as a success. To base a long-term relationship on short-term chemistry alone is a little like buying a car based on how it’s going to run for the first one hundred miles.
 
This isn’t a marital problem. It’s a human problem. We all experience this waning of desire in countless ways. The excitement of anticipation gives way to the dullness of routine. If you have ever bought a new car or started a new job, you have experienced this sensation. This isn’t such a big deal when it comes to a car purchase. If you have the money, it’s a relatively simple matter to get a new car. But it is a huge deal when it comes to marriage. The funny thing about the waning of our desires is that even though all of us have gone through this multiple times, studies show that we forget about it each and every time. We also do a terrible job of predicting how we will feel in the future, always expecting that it will be more like the present than it is. You can imagine how potentially destructive these habits of mind are for a couple who marries while still infatuated with each other.
 

If you are one of those people who simply refuse to accept this and want your passion to burn as brightly after forty years as it does after one day, there is one possible solution—more sex. According to several experiments, animals show less habituation to positive feelings when given oxytocin, which is released during sex. It’s not clear how much sexual activity it will take to hold habituation at bay, but I invite any energetic readers to give it their best shot. For the rest of us, it’s time to come to terms once again with the cost of the romantic story line.

LEAVE THEM LAUGHING


 
Humor deserves its own special treatment. As anyone who skims through the personal ads knows, sense of humor is an absolutely essential quality. Everyone wants it in his or her partner, and no one will admit to lacking it. Any time I asked men or women what qualities they looked for, sense of humor was at or near the top of the list. Why should that be the case? Researchers did a very interesting study that helps provide some answers. They asked a group of women to read vignettes about various men. The key variable that changed from story to story was the man’s sense of humor. In some stories, the fictional man had an excellent sense of humor. In others, he had an average one. And in still others, he had a poor sense of humor. The study found that men with an excellent sense of humor were endowed by female readers with all sorts of other good qualities. Women saw them as more sensitive, more adaptable, happier, more intelligent, more masculine, and even taller. None of these additional attributes came from anything else in the vignette; they were entirely the result of the man’s sense of humor. In other words, women unconsciously use sense of humor as a proxy for many other traits, such as creativity and intelligence. This helps explain why humor is always high on the list of desired qualities. It is not just for that quality in and of itself but because it acts as a signal for so many other sought-after qualities as well.
 
To see how deeply this is woven into our psyches, you only need to look at the results when the researchers took into account a woman’s fertility. When a woman was at her peak fertility and looking for a short-term relationship, her attraction to the man with an excellent sense of humor spiked sharply. Men with average or below-average humor found their ratings unchanged, which confirms that humor acts as a proxy for good genes in general. Humor may even be worth the importance that so many of us place on it. One study revealed that women’s humor rating of their partners significantly predicted their general relationship satisfaction. But there remains a crucial difference between the sexes. Studies show that men tend to be the ones who make the jokes, and women tend to be the ones who laugh at them.
 

Unfortunately, none of this is the holy grail of dating. At best, it gets you only a small way toward figuring out what to look for in a partner. When it comes to personality, science still only has a rudimentary understanding of why one person is attracted to another.

Dating and Deception


 
Given this ceaseless battle on both the human and genetic level, relationships are rampant breeding grounds for deception. Part of the problem is that modern society provides far more opportunities to lie. In our ancestral environment, social groups were smaller, and those who lied would gain a reputation for dishonesty. But in today’s environment, especially in cities or other highly populated areas, there is a much smaller chance of being caught. Internet dating has only increased the problem. Not surprisingly, men and women in relationships lie to each other all the time. In a 1990 study of college students, researchers found that 85 percent of the participants had lied to a partner about a past relationship or an indiscretion. Another study revealed that dating couples lied to each other in about one-third of their interactions. The numbers do improve for married couples, who lied in only 10 percent of their conversations, although the survey found that married couples saved their biggest lies for their partners. Your spouse is probably telling the truth about whether or not she likes your new tie but is possibly lying about whether or not she slept with the mailman. While men are more dishonest than women, they are at least more honest about their dishonesty, giving more accurate estimates of how much they lie than women do. And those are just the lies that we openly acknowledge. The most successful lies are those that we do not even know we are telling, and studies have shown that we are quite good at lying to ourselves about many things having to do with mating, such as how committed we think we are to someone when we are trying to get him or her into bed.
 
The deception occurs along predictable lines. Given a culture that frowns on female promiscuity—a man’s greatest fear in any relationship revolves around questions of fidelity and paternity—women quite commonly lie about their sex lives. This helps explain why sexual surveys always show a gross disparity between the number of partners men and women have had (the other cause is that men exaggerate the number of their partners). Researchers found that if they hooked female college students up to a fake lie detector and then asked them about the number of sexual partners they had, women suddenly reported almost twice as many as the women who were not hooked up to the bogus lie detector. For men, there is a quick and easy way to try to get a sense of a woman’s fidelity—if they can get her to divulge her sexual fantasies. One researcher has found that women who have more sexual fantasies about other men are also more likely to be unfaithful. Women also tend to lie about their bodily appearance, although they probably prefer to consider things like padded bras and control-top panty hose enhancements, rather than outright deceptions.
 
Women I interviewed frequently admitted that they did not tell men the truth about their sexual pasts. To give one example, an attractive woman in her late twenties was incredibly self-confident about her sexuality and had no problem sleeping with a man for her own pleasure and then never seeing him again. She had already racked up more than thirty partners, and she used to proclaim that fact proudly to the men she was dating—until she realized that they couldn’t handle the information. Some immediately freaked out. Others went off to sulk. In almost every case, it hurt the relationship and sometimes even ended it. Now when she is asked, she always answers that she has slept with six men, which seems to strike the perfect balance between being a prude and being a slut. Another woman said that men she had dated were often afraid to ask because they didn’t want to break the illusion that she had never been with other men, which was an illusion she was happy to allow them to cling to.
 
While men’s greatest concerns center on a woman’s potential promiscuity, women get angrier when a man has lied about his income or status or when he has exaggerated his feelings in order to have sex. Studies confirm that men lie more about their resources and their level of commitment, as well as about how kind, sincere, and trustworthy they are. Needless to say, nearly every woman I interviewed had experienced some form of this. One woman later found that her boyfriend had lied to her about virtually every aspect of his life—his age, his family, his previous jobs. The only thing he didn’t lie about was his current job, and that was only because they worked together.
 
What makes deception an even bigger problem is that it turns out that, while seemingly all of us are reasonably adept at lying, we are terrible at telling when other people are lying to us. According to research, people can distinguish truth from lies only 54 percent of the time, which is not much better than random guessing. We’re even worse at picking out lies, which we manage to do only 47 percent of the time. Sometimes even the person who is lying isn’t aware that he or she is doing so, which makes detecting the lie nearly impossible.
 
Men are so quick to lie in order to have sex that evolutionary psychologist Glenn Geher advises women that if they can’t judge a man’s intention with at least 90 percent accuracy, they are better off being skeptical all the time. Women should also be more careful prior to entering a relationship. Once they are in a relationship, studies show that they tend to shut off their skepticism and become more vulnerable to deception. If you want to take a more active approach, you can try to train yourself to become better at figuring out when someone is lying, in which case you could turn to the work of Paul Ekman, an expert on facial expressions. He has devoted a substantial part of his professional life to figuring out how to “read” deception in the face of other people and has found that our faces are constantly leaking information about what we are feeling. For example, if your boss makes an annoying request, you might cover up your feelings with a polite smile and a nod of assent, but there was likely a split second (less than a fifth of a second, to be precise) when your face sent a very different message, albeit too fleeting for your boss or even you to notice. Ekman calls these brief moments microexpressions, and with training, you can become better at noticing these facial “leaks.”
 

Deception, genetic warfare, measures, and countermeasures—we are a long way from the romantic story line. Although evolutionary psychology offers a great deal of insight into human mating and dating, it is not a pretty picture. Luckily, that is not the end of the story.

BEWARE EXPECTATIONS


Our experience is not only hostage to our slippery memory—it’s also powerfully shaped by the expectations we bring with us. Even something as fundamental as how we taste food is remarkably susceptible to manipulation based on our expectations. You only need to look at Brian Wansink’s brilliant work as the director of the Cornell University Food and Brand Lab. He has done a number of clever studies at the Spice Box, a laboratory that masquerades as a restaurant. In one experiment, he offered diners a free glass of Cabernet Sauvignon—but with one devious alteration. Although all the diners were given a glass of a wine known as Two-Buck Chuck (the nickname tells you the price), half of them were told that they were being served wine from a new California label, while the other half were told that they were getting a glass of North Dakota’s finest. Even though they drank the same wine, their expectations radically shaped their experience. Not only did those diners who thought they were drinking North Dakotan wine rate their wine as tasting bad, they also rated their food as worse than the other group did. In fact, it altered their entire meal. They ended up eating less and leaving the restaurant sooner.
 
The power of expectations is so great that it has an almost preternatural ability to become a self-fulfilling prophecy. In one study, after a test was given to all the students in an elementary school, a few students were randomly selected, and the teachers were told that these students had scored so highly on the test that they were sure to excel in the coming year. The parents and the students weren’t told about this so that the only difference was in the minds of the teachers. But just that small intervention led to a major difference. By the end of the year, the falsely anointed “exceptional” students showed significantly higher gains in their IQ scores than the other students. In other words, simply leading the teachers to believe that these students were special led those teachers to treat them in a way that ended up making them special.
 
Experiments have demonstrated the same power of expectations for attraction. In one study, men and women were asked to talk on the phone and get acquainted with an unknown member of the opposite sex. Before the conversation, each man was given a photograph of his supposed partner. The actual photograph was randomly selected from a group that was either attractive or unattractive. The women were not given photographs. Then, the couples spoke on the phone for roughly ten minutes about anything they wanted. Men who had received photos of beautiful women spoke to the women in a way that caused the women to be friendlier and more flirtatious—acting for all intents and purposes as beautiful women, regardless of their actual appearance.
 
In The Psychology of Human Conflict, Edwin Guthrie tells a remarkable story of how one college woman was transformed in real life by a similar experiment. A group of college men chose a shy, socially inept student and decided to treat her as if she were one of the most popular girls at the school. They made sure she was invited to the right parties and always had men asking her to dance and generally acted as if they were lucky to be in her company. Before the school year had ended, her behavior had completely changed. She was more confident and came to believe that she was indeed popular. Even after the men ended their experiment (although without telling her anything about it), she continued to behave with self-assurance. But here is the really amazing part—even the men who “conducted” the experiment came to see her in the same way, so fully had her demeanor been transformed. If only someone would secretly hire the people around us to treat us not as we are but as we wish to be, we might all become the people we aspire to be.

 

HOW THINKING TOO MUCH IS BAD FOR YOUR DATING


 
Before you go off, confident that you will avoid falling into the traps of priming or framing by bringing ruthless rationality to all of your decisions, I have to warn you against turning to an overly cerebral approach to dating: consciously thinking about your decision making is perhaps even more dangerous than not thinking at all. There are probably some among us—I admit to being one—who, when faced with a tough decision, decide to sit down and write out a list of all the pros and cons so that we can make an informed choice. Well, I’m here to tell you that this is a disastrously bad idea and likely to lead to worse decisions, especially if the subject we are examining is difficult to articulate. Or, as I like to think of this section, the unexamined life is worth living!
 

Imagine that you are given a choice of five different posters to decorate your room. One of them is a van Gogh, another is a Monet. The other three are captioned cartoons or photos of animals. Which do you choose? Researchers ran precisely this study with college students, and, as you might expect, most people preferred the posters by van Gogh and Monet. No great surprise there. We probably didn’t need a study to find that the average college student prefers van Gogh to a kitten playing with a ball of yarn. But that was not the purpose of the study. Researchers were interested in how thinking about that decision might alter it, so they asked half of the people involved to write a short essay explaining what they liked or disliked about the five posters. Afterward, all of the students were allowed to choose one of the posters and then take it home.

Self-Fulfilling Beliefs and Data Mining

Taken to extremes, these cognitive illusions may give rise to closed systems of thought that are immune, at least for a while, to revision and refutation. (Austrian writer and satirist Karl Kraus once remarked, “Psychoanalysis is that mental illness for which it regards itself as therapy.”) This is especially true for the market, since investors’ beliefs about stocks or a method of picking them can become a self-fulfilling prophecy. The market sometimes acts like a strange beast with a will, if not a mind, of its own. Studying it is not like studying science and mathematics, whose postulates and laws are (in quite different senses) independent of us. If enough people suddenly wake up believing in a stock, it will, for that reason alone, go up in price and justify their beliefs.
A contrived but interesting illustration of a self-fulfilling belief involves a tiny investment club with only two investors and ten possible stocks to choose from each week. Let’s assume that each week chance smiles at random on one of the ten stocks the investment club is considering and it rises precipitously, while the week’s other nine stocks oscillate within a fairly narrow band.
George, who believes (correctly in this case) that the movements of stock prices are largely random, selects one of the ten stocks by rolling a die (say an icosahedron—a twenty-sided solid—with two sides for each number). Martha, let’s assume, fervently believes in some wacky theory, Q analysis. Her choices are therefore dictated by a weekly Q analysis newsletter that selects one stock of the ten as most likely to break out. Although George and Martha are equally likely to pick the lucky stock each week, the newsletter-selected stock will result in big investor gains more frequently than will any other stock.
The reason is simple but easy to miss. Two conditions must be met for a stock to result in big gains for an investor: It must be smiled upon by chance that week and it must be chosen by one of the two investors. Since Martha always picks the newsletter-selected stock, the second condition in her case is always met, so whenever chance happens to favor it, it results in big gains for her. This is not the case with the other stocks. Nine-tenths of the time, chance will smile on one of the stocks that is not newsletter-selected, but chances are George will not have picked that particular one, and so it will seldom result in big gains for him. One must be careful in interpreting this, however. George and Martha have equal chances of pulling down big gains (10 percent), and each stock of the ten has an equal chance of being smiled upon by chance (10 percent), but the newsletter-selected stock will achieve big gains much more often than the randomly selected ones.
Reiterated more numerically, the claim is that 10 percent of the time the newsletter-selected stock will achieve big gains for Martha, whereas each of the ten stocks has only a 1 percent chance of both achieving big gains and being chosen by George. Note again that two things must occur for the newsletter-selected stock to achieve big gains: Martha must choose it, which happens with probability 1, and it must be the stock that chance selects, which happens with probability 1/10th. Since one multiplies probabilities to determine the likelihood that several independent events occur, the probability of both these events occurring is 1 × 1/10, or 10 percent. Likewise, two things must occur for any particular stock to achieve big gains via George: George must choose it, which occurs with probability 1/10th, and it must be the stock that chance selects, which happens with probability 1/10th. The product of these two probabilities is 1/100th or 1 percent.
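For readers who would rather check such arithmetic by brute force, here is a minimal Monte Carlo sketch in Python (the trial count is arbitrary); it should show Martha's pick producing big gains in about 10 percent of weeks while any particular stock does so for George only about 1 percent of the time.

```python
import random

TRIALS = 100_000      # number of simulated weeks
NUM_STOCKS = 10

martha_wins = 0                       # Martha always buys the newsletter pick
george_wins_by_stock = [0] * NUM_STOCKS

for _ in range(TRIALS):
    lucky = random.randrange(NUM_STOCKS)        # chance smiles on one stock
    newsletter = random.randrange(NUM_STOCKS)   # the Q-analysis pick this week
    george = random.randrange(NUM_STOCKS)       # George rolls his icosahedron

    if newsletter == lucky:
        martha_wins += 1                        # big gains for Martha
    if george == lucky:
        george_wins_by_stock[george] += 1       # big gains for George

print(f"Martha wins {martha_wins / TRIALS:.1%} of weeks")                  # ~10%
print(f"George wins {sum(george_wins_by_stock) / TRIALS:.1%} of weeks")    # ~10%
print(f"stock #0 wins for George {george_wins_by_stock[0] / TRIALS:.1%}")  # ~1%
```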
Nothing in this thought experiment depends on there being only two investors. If there were one hundred investors, fifty of whom slavishly followed the advice of the newsletter and fifty of whom chose stocks at random, then the newsletter-selected stocks would achieve big gains for their investors eleven times as frequently as any particular stock did for its investors. When the newsletter-selected stock is chosen by chance and happens to achieve big gains, there are fifty-five winners, the fifty believers in the newsletter and five who picked the same stock at random. When any of the other nine stocks happens to achieve big gains, there are, on average, only five winners.
In this way a trading strategy, if looked at in a small population of investors and stocks, can give the strong illusion that it is effective when only chance is at work.
“Data mining,” the scouring of databases of investments, stock prices, and economic data for evidence of the effectiveness of this or that strategy, is another example of how an inquiry of limited scope can generate deceptive results. The problem is that if you look hard enough, you will always find some seemingly effective rule that resulted in large gains over a certain time span or within a certain sector. (In fact, inspired by the British economist Frank Ramsey, mathematicians over the last half century have proved a variety of theorems on the inevitability of some kind of order in large sets.) The promulgators of such rules are not unlike the believers in Bible codes. There, too, people searched for coded messages that seemed to be meaningful, not realizing that it’s nearly impossible for there not to be some such “messages.” (This is trivially so if you search in a book that has a chapter 11, conveniently foretelling many companies’ bankruptcies.)
People commonly pore over price and trade data attempting to discover investment schemes that have worked in the past. In a reductio ad absurdum of such unfocused fishing for associations, David Leinweber in the mid-90s exhaustively searched the economic data on a United Nations CD-ROM and found that the best predictor of the value of the S&P 500 stock index was—a drum roll here—butter production in Bangladesh. Needless to say, butter production in Bangladesh has probably not remained the best predictor of the S&P 500. Whatever rules and regularities are discovered within a sample must be applied to new data if they’re to be accorded any limited credibility. You can always arbitrarily define a class of stocks that in retrospect does extraordinarily well, but will it continue to do so?
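The butter-in-Bangladesh effect is easy to reproduce in miniature. In the hedged sketch below (Python; every series is pure noise, so any “predictor” it finds is spurious by construction), scanning a thousand random series reliably turns up one that fits the target impressively in-sample and then fails on fresh data.

```python
import random
import statistics

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

target = [random.gauss(0, 1) for _ in range(60)]         # stand-in for the S&P 500
candidates = [[random.gauss(0, 1) for _ in range(60)]    # 1,000 noise series
              for _ in range(1000)]                      # ("butter production," etc.)

# Data mining: keep the series best correlated with the first 30 observations.
best = max(candidates, key=lambda c: abs(corr(c[:30], target[:30])))
print(f"in-sample  |r| = {abs(corr(best[:30], target[:30])):.2f}")  # impressively high
print(f"out-sample |r| = {abs(corr(best[30:], target[30:])):.2f}")  # near zero
```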
I’m reminded of a well-known paradox devised (for a different purpose) by the philosopher Nelson Goodman. He selected an arbitrary future date, say January 1, 2020, and defined an object to be “grue” if it is green and the time is before January 1, 2020, or if it is blue and the time is after January 1, 2020. Something is “bleen,” on the other hand, if it is blue and the time is before that date or if it is green and the time is after that date. Now consider the color of emeralds. All emeralds examined up to now (2002) have been green. We therefore feel confident that all emeralds are green. But all emeralds so far examined are also grue. It seems that we should be just as confident that all emeralds are grue (and hence blue beginning in 2020). Are we?
A natural objection is that these color words grue and bleen are very odd, being defined in terms of the year 2020. But were there aliens who speak the grue-bleen language, they could make the same charge against us. “Green,” they might argue, is an arbitrary color word, being defined as grue before 2020 and bleen afterward. “Blue” is just as odd, being bleen before 2020 and grue from then on. Philosophers have not convincingly shown what exactly is wrong with the terms grue and bleen, but they demonstrate that even the abrupt failure of a regularity to hold can be accommodated by the introduction of new weasel words and ad hoc qualifications.
In their headlong efforts to discover associations, data miners are sometimes fooled by “survivorship bias.” In market usage this is the tendency for mutual funds that go out of business to be dropped from the average of all mutual funds. The average return of the surviving funds is higher than it would be if all funds were included. Some badly performing funds become defunct, while others are merged with better-performing cousins. In either case, this practice skews past returns upward and induces greater investor optimism about future returns. (Survivorship bias also applies to stocks, which come and go over time, only the surviving ones making the statistics on performance. WCOM, for example, was unceremoniously replaced on the S&P 500 after its steep decline in early 2002.)
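A small simulation makes the upward skew visible. This is only a sketch with invented numbers (Python; the return distribution and the cull rule are my own illustrative assumptions, not market data):

```python
import random

random.seed(0)

# Hypothetical universe of one-year fund returns.
funds = [random.gauss(0.08, 0.17) for _ in range(10_000)]

# Illustrative cull: funds losing more than 15 percent fold or are merged away
# and so vanish from the published average.
survivors = [r for r in funds if r > -0.15]

print(f"true average of all funds: {sum(funds) / len(funds):.2%}")
print(f"average of survivors only: {sum(survivors) / len(survivors):.2%}")
# The second number is reliably higher: the dead funds took their bad
# returns out of the statistics with them.
```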
The situation is rather like that of schools that allow students to drop courses they’re failing. The grade point averages of schools with such a policy are, on average, higher than those of schools that do not allow such withdrawals. But these inflated GPAs are no longer a reliable guide to students’ performance.
Finally, taking the meaning of the term literally, survivorship bias makes us all a bit more optimistic about facing crises. We tend to see only those people who survived similar crises. Those who haven’t are gone and therefore much less visible.

The Fundamentalists’ Creed: You Get What You Pay For

 

The notion of present value is crucial to understanding the fundamentalists’ approach to stock valuation. It should also be important to lottery players, mortgagors, and advertisers. That the present value of money in the future is less than its nominal value explains why a nominal $1,000,000 award for winning a lottery—say $50,000 per year at the end of each of the next twenty years—is worth considerably less than $1,000,000. If the interest rate is 10 percent annually, for example, the $1,000,000 has a present value of only about $426,000. You can obtain this value from tables, from financial calculators, or directly from the formulas above (supplemented by a formula for the sum of a so-called geometric series).
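The $426,000 figure can be checked directly. A minimal sketch in Python, discounting twenty year-end payments of $50,000 at 10 percent:

```python
def present_value(payment, rate, years):
    """Present value of equal payments made at the end of each year."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

print(f"${present_value(50_000, 0.10, 20):,.0f}")  # about $425,678, i.e. ~$426,000
```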
 
The process of determining the present value of future money is often referred to as “discounting.” Discounting is important because, once you assume an interest rate, it allows you to compare amounts of money received at different times. You can also use it to evaluate the present or future value of an income stream—different amounts of money coming into or going out of a bank or investment account on different dates. You simply “slide” the amounts forward or backward in time by multiplying or dividing by the appropriate power of (1 + r). This is done, for example, when you need to figure out a payment sufficient to pay off a mortgage in a specified amount of time or want to know how much to save each month to have sufficient funds for a child’s college education when he or she turns eighteen.
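The mortgage payment mentioned above comes from exactly this kind of sliding: set the present value of all the monthly payments equal to the loan amount and solve for the payment. A sketch (Python; the loan terms are hypothetical):

```python
def monthly_payment(principal, annual_rate, years):
    """Level monthly payment that retires `principal` over `years` years."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of payments
    # The PV of n payments of size p is p * (1 - (1 + r)**-n) / r; setting
    # this equal to the principal and solving for p gives:
    return principal * r / (1 - (1 + r) ** -n)

print(f"${monthly_payment(200_000, 0.06, 30):,.2f} per month")  # about $1,199.10
```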
 
Discounting is also essential to defining what is often called a stock’s fundamental value. The stock’s price, say investing fundamentalists (fortunately not the sort who wish to impose their moral certitudes on others), should be roughly equal to the discounted stream of dividends you can expect to receive from holding onto it indefinitely. If the stock does not pay dividends or if you plan on selling it and thereby realizing capital gains, its price should be roughly equal to the discounted value of the price you can reasonably expect to receive when you sell the stock plus the discounted value of any dividends. It’s probably safe to say that most stock prices are higher than this. During the boom years of the 1990s, investors were much more concerned with capital gains than they were with dividends. To reverse this trend, finance professor Jeremy Siegel, author of Stocks for the Long Run, and two of his colleagues recently proposed eliminating the corporate dividend tax and making dividends deductible.
 
The bottom line of bottom-line investing is that you should pay for a stock an amount equal to (or no more than) the present value of all future gains from it. Although this sounds very hard-headed and far removed from psychological considerations, it is not. The discounting of future dividends and the future stock price is dependent on your estimate of future interest rates, dividend policies, and a host of other uncertain quantities, and calling them fundamentals does not make them immune to emotional and cognitive distortion. The tango of exuberance and despair can and does affect estimates of a stock’s fundamental value. As the economist Robert Shiller has long argued quite persuasively, however, the fundamentals of a stock don’t change nearly as much or as rapidly as its price.
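To make the fundamentalists’ prescription concrete, here is a hedged sketch of the discounted-dividend calculation (Python; the dividend, growth rate, discount rate, and sale price are invented for illustration, and each is exactly the kind of uncertain estimate the paragraph above warns about):

```python
def fundamental_value(dividend, growth, discount, years, sale_price=0.0):
    """Discounted stream of growing dividends plus a discounted sale price."""
    value, d = 0.0, dividend
    for t in range(1, years + 1):
        value += d / (1 + discount) ** t   # discount each year's dividend
        d *= 1 + growth                    # dividends grow each year
    return value + sale_price / (1 + discount) ** years

# A stock paying $2 next year, dividends growing 3% annually, discounted
# at 8%, and sold for an assumed $60 after 20 years:
print(f"${fundamental_value(2.0, 0.03, 0.08, 20, sale_price=60.0):.2f}")
```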

 

Are Insider Trading and Stock Manipulation So Bad?


It’s natural to take a moralistic stance toward the corporate fraud and excess that have dominated business news the last couple of years. Certainly that attitude has not been completely absent from this book. An elementary probability puzzle and its extensions suggest, however, that some arguments against insider trading and stock manipulation are rather weak. Moral outrage, rather than actual harm to investors, seems to be the primary source of many people’s revulsion toward these practices.
 
Let me start with the original puzzle. Which of the following two situations would you prefer to be in? In the first one you’re given a fair coin to flip and are told that you will receive $1,000 if it lands heads and lose $1,000 if it lands tails. In the second you’re given a very biased coin to flip and must decide whether to bet on heads or tails. If it lands the way you predict you win $1,000 and, if not, you lose $1,000. Although most people prefer to flip the fair coin, your chances of winning are 1/2 in both situations, since you’re as likely to pick the biased coin’s good side as its bad side.
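If the equality seems suspicious, one line of expected-value arithmetic settles it: whatever the coin’s bias p, a side chosen at random wins with probability (1/2)p + (1/2)(1 - p) = 1/2. A quick simulation (Python; the 90 percent bias is an arbitrary choice) confirms this:

```python
import random

TRIALS = 100_000
BIAS = 0.9        # this coin lands heads 90% of the time; you don't know that

wins = 0
for _ in range(TRIALS):
    bet_heads = random.random() < 0.5     # not knowing the bias, your call is
                                          # in effect a fair coin flip itself
    lands_heads = random.random() < BIAS
    wins += bet_heads == lands_heads

print(wins / TRIALS)   # ~0.5 no matter what BIAS is set to
```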
 
Consider now a similar pair of situations. In the first one you are told you must pick a ball at random from an urn containing 10 green balls and 10 red balls. If you pick a green one, you win $1,000, whereas if you pick a red one, you lose $1,000. In the second, someone you thoroughly distrust places an indeterminate number of green and red balls in the urn. You must decide whether to bet on green or red and then choose a ball at random. If you choose the color you bet on, you win $1,000 and, if not, you lose $1,000. Again, your chances of winning are 1/2 in both situations.
 
Finally, consider a third pair of similar situations. In the first one you buy a stock that is being sold in a perfectly efficient market and your earnings are $1,000 if it rises the next day and -$1,000 if it falls. (Assume that in the short run it moves up with probability 1/2 and down with the same probability.) In the second there is insider trading and manipulation and the stock is very likely to rise or fall the next day as a result of these illegal actions. You must decide whether to buy or sell the stock. If you guess correctly, your earnings are $1,000 and, if not, -$1,000. Once again your chances of winning are 1/2 in both situations. (They may even be slightly higher in the second situation since you might have knowledge of the insiders’ motivations.)
 
In each of these pairs, the unfairness of the second situation is only apparent. You have the same chance of winning that you do in the first situation. I do not by any means defend insider trading and stock manipulation, which are wrong for many other reasons, but I do suggest that they are, in a sense, simply two among many unpredictable factors affecting the price of a stock.
 


 

The Paradoxical Efficient Market Hypothesis







If a large majority of investors believed in the hypothesis, they would all assume that new information about a stock would quickly be reflected in its price. Specifically, they would affirm that since news almost immediately moves the price up or down, and since news can’t be predicted, neither can changes in stock prices. Thus investors who subscribe to the Efficient Market Hypothesis would further believe that looking for trends and analyzing companies’ fundamentals is a waste of time. Believing this, they wouldn’t pay much attention to new developments. But if relatively few investors are looking for an edge, the market will not respond quickly to new information. In this way an overwhelming belief in the hypothesis ensures its falsity.
 
To continue with this cerebral somersault, recall now a rule of logic: Sentences of the form “H implies I” are equivalent to those of the form “not I implies not H.” For example, the sentence “heavy rain implies that the ground will be wet” is logically equivalent to “dry ground implies the absence of heavy rain.” Using this equivalence, we can restate the claim that overwhelming belief in the Efficient Market Hypothesis leads to (or implies) its falsity. Alternatively phrased, the claim is that if the Efficient Market Hypothesis is true, then it’s not the case that most investors believe it to be true. That is, if it’s true, most investors believe it to be false (assuming almost all investors have an opinion and each either believes it or disbelieves it).
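For the formally inclined, the whole somersault compresses to one application of contraposition. In the sketch below (LaTeX; B abbreviates “most investors believe the EMH” and E abbreviates “the EMH is true”):

```latex
% The rule: (H \Rightarrow I) is equivalent to (\neg I \Rightarrow \neg H).
% The text's claim is B \Rightarrow \neg E (overwhelming belief falsifies the EMH),
% whose contrapositive is E \Rightarrow \neg B (if the EMH holds, most disbelieve it).
\[
  (B \Rightarrow \neg E) \;\equiv\; (E \Rightarrow \neg B)
\]
```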
 
Consider now the inelegantly named Sluggish Market Hypothesis, the belief that the market is quite slow in responding to new information. If the vast majority of investors believe the Sluggish Market Hypothesis, then they all would believe that looking for trends and analyzing companies is well worth their time and, by so exercising themselves, they would bring about an efficient market. Thus, if most investors believe the Sluggish Market Hypothesis is true, they will by their actions make the Efficient Market Hypothesis true. We conclude that if the Efficient Market Hypothesis is false, then it’s not the case that most investors believe the Sluggish Market Hypothesis to be true. That is, if the Efficient Market Hypothesis is false, then most investors believe it (the EMH) to be true. (You may want to read over the last few sentences in a quiet corner.)
 
In summary, if the Efficient Market Hypothesis is true, most investors won’t believe it, and if it’s false, most investors will believe it. Alternatively stated, the Efficient Market Hypothesis is true if and only if a majority believes it to be false. (Note that the same holds for the Sluggish Market Hypothesis.) These are strange hypotheses indeed!
 
Of course, I’ve made some big assumptions that may not hold. One is that if an investor believes in one of the two hypotheses, then he disbelieves in the other, and almost all believe in one or the other. I’ve also assumed that it’s clear what “large majority” means, and I’ve ignored the fact that it sometimes requires very few investors to move the market. (The whole argument could be relativized to the set of knowledgeable traders only.)
 
Another gap in the argument is that any suspected deviations from the Efficient Market Hypothesis can always be attributed to mistakes in asset pricing models, and thus the hypothesis can’t be conclusively rejected for this reason either. Maybe some stocks or kinds of stock are riskier than our pricing models allow for and that’s why their returns are higher. Nevertheless, I think the point remains: The truth or falsity of the Efficient Market Hypothesis is not immutable but depends critically on the beliefs of investors. Furthermore, as the percentage of investors who believe in the hypothesis itself varies, the truth of the hypothesis varies inversely with it.
 

On the whole, most investors, professionals on Wall Street and amateurs everywhere alike, disbelieve in it, so for this reason I think it holds, but only approximately and only most of the time.

Pushing the Complexity Horizon


The complexity of trading rules admits of degrees. Most of the rules to which people subscribe are simple, involving support levels, P/E ratios, or hemlines and Super Bowls, for example. Others, however, are quite convoluted and conditional. Because of the variety of possible rules, I want to take an oblique and abstract approach here. The hope is that this approach will yield insights that a more pedestrian approach misses. Its key ingredient is the formal definition of (a type of) complexity. An intuitive understanding of this notion tells us that someone who remembers his eight-digit password by means of an elaborate, long-winded saga of friends’ addresses, children’s ages, and special anniversaries is doing something silly. Mnemonic rules make sense only when they’re shorter than what is to be remembered.
 
Let’s back up a bit and consider how we might describe the following sequences to an acquaintance who couldn’t see them. We may imagine the 1s to represent upticks in the price of a stock and the 0s downticks or perhaps up-and-down days.
1. 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 . . .
2. 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 0 1 1 . . .
3. 1 0 0 0 1 0 1 1 0 1 1 0 1 1 0 0 0 1 0 1 0 1 1 0 0 . . .
 
The first sequence is the simplest, an alternation of 0s and 1s. The second sequence has some regularity to it, a single 0 alternating sometimes with a 1, sometimes with two 1s, while the third sequence doesn’t seem to manifest any pattern at all. Observe that the precise meaning of “ . . . ” in the first sequence is clear; it is less so in the second sequence, and not at all clear in the third. Despite this, let’s assume that these sequences are each a trillion bits long (a bit is a 0 or a 1) and continue on “in the same way.”
 
Motivated by examples like this, the American computer scientist Gregory Chaitin and the Russian mathematician A. N. Kolmogorov defined the complexity of a sequence of 0s and 1s to be the length of the shortest computer program that will generate (that is, print out) the sequence in question.
 
A program that prints out the first sequence above can consist simply of the following recipe: print a 0, then a 1, and repeat a half trillion times. Such a program is quite short, especially compared to the long sequence it generates. The complexity of this first trillion-bit sequence may be only a few hundred bits, depending to some extent on the computer language used to write the program.
 
A program that generates the second sequence would be a translation of the following: Print a 0 followed by either a single 1 or two 1s, the pattern of the intervening 1s being one, two, one, one, one, two, one, one, and so on. Any program that prints out this trillion-bit sequence would have to be quite long so as to fully specify the “and so on” pattern of the intervening 1s. Nevertheless, because of the regular alternation of 0s and either one or two 1s, the shortest such program will be considerably shorter than the trillion-bit sequence it generates. Thus the complexity of this second sequence might be only, say, a quarter trillion bits.
 
With the third sequence (the commonest type) the situation is different. This sequence, let us assume, remains so disorderly throughout its trillion-bit length that no program we might use to generate it would be any shorter than the sequence itself. It never repeats, never exhibits a pattern. All any program can do in this case is dumbly list the bits in the sequence: print 1, then 0, then 0, then 0, then 1, then 0, then 1, . . . . There is no way the . . . can be compressed or the program shortened. Such a program will be as long as the sequence it’s supposed to print out, and thus the third sequence has a complexity of approximately a trillion.
 
A sequence like the third one, which requires a program as long as itself to be generated, is said to be random. Random sequences manifest no regularity or order, and the programs that print them out can do nothing more than direct that they be copied: print 1 0 0 0 1 0 1 1 0 1 1 . . . . These programs cannot be abbreviated; the complexity of the sequences they generate is equal to the length of these sequences. By contrast, ordered, regular sequences like the first can be generated by very short programs and have complexity much less than their length.
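Kolmogorov–Chaitin complexity itself is uncomputable, but an off-the-shelf compressor is a serviceable stand-in: the shorter a sequence compresses, the simpler it is. A Python sketch with the three kinds of sequence (scaled down, of course, from a trillion bits):

```python
import random
import zlib

N = 100_000  # characters per sequence; each is a '0' or a '1'

seq1 = "01" * (N // 2)                                     # strict alternation
seq2 = "".join("0" + "1" * random.choice([1, 2])           # 0s alternating with
               for _ in range(N))[:N]                      # one or two 1s
seq3 = "".join(random.choice("01") for _ in range(N))      # coin flips

for name, s in [("alternating", seq1), ("semi-regular", seq2), ("random", seq3)]:
    print(f"{name:12s} compresses to {len(zlib.compress(s.encode(), 9)):6d} bytes")
# The sizes rank just as the text's complexities do: the alternation shrinks to
# almost nothing, the semi-regular sequence partway, and the random one least.
# (Even the random string shrinks somewhat, since each 8-bit ASCII character
# here carries only one bit of information.)
```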
 
Returning to stocks, different market theorists will have different ideas about the likely pattern of 0s and 1s (downticks and upticks) that can be expected. Strict random walk theorists are likely to believe that sequences like the third characterize price movements and that the market’s movements are therefore beyond the “complexity horizon” of human forecasters (more complex than we, or our brains, are, were we expressed as sequences of 0s and 1s). Technical and fundamental analysts might be more inclined to believe that sequences like the second characterize the market and that there are pockets of order amidst the noise. It’s hard to imagine anyone believing that price movements follow sequences as regular as the first except, possibly, those who send away “only $99.95 for a complete set of tapes that explain this revolutionary system.”
 
I reiterate that this approach to stock price movements is rather stark, but it does nevertheless “locate” the debate. People who believe there is some pattern to the market, whether exploitable or not, will believe that its movements are characterized by sequences of complexity somewhere between those of type two and type three above.
 
A rough paraphrase of Kurt Gödel’s famous incompleteness theorem of mathematical logic, due to the aforementioned Gregory Chaitin, provides an interesting sidelight on this issue. It states that if the market were random, we might not be able to prove it. The reason: encoded as a sequence of 0s and 1s, a random market would, it seems plausible to assume, have complexity greater than that of our own were we also so encoded; it would be beyond our complexity horizon. From the definition of complexity it follows that a sequence can’t generate another sequence of greater complexity than itself. Thus if a person were to predict the random market’s exact gyrations, the market would have to be less complex than the person, contrary to assumption. Even if the market isn’t random, there remains the possibility that its regularities are so complex as to be beyond our complexity horizons.
 
In any case, there is no reason why the complexity of price movements as well as the complexity of investor/computer blends cannot change over time. The more inefficient the market is, the smaller the complexity of its price movements, and the more likely it is that tools from technical and fundamental analysis will prove useful. Conversely, the more efficient the market is, the greater the complexity of price movements, and the closer the approach to a completely random sequence of price changes.
 
Outperforming the market requires that one remain on the cusp of our collective complexity horizon. It requires faster machines, better data, improved models, and the smarter use of mathematical tools, from conventional statistics to neural nets (computerized learning networks, the connections between the various nodes of which are strengthened or weakened over a period of training). If this is possible for anyone or any group to achieve, it’s not likely to remain so for long.

 

Stocks: Chaos and Unpredictability



What is the relative importance of private information, investor trading strategies, and pure whim in predicting the market? What is the relative importance of conventional economic news (interest rates, budget deficits, accounting scandals, and trade balances), popular culture fads (in sports, movies, fashions), and germane political and military events (terrorism, elections, war) too disparate even to categorize? If we were to carefully define the problem, predicting the market with any precision is probably what mathematicians call a universal problem, meaning that a complete solution to it would lead immediately to solutions for a large class of other problems. It is, in other words, as hard a problem in social prediction as there is.
 
Certainly, too little notice is taken of the complicated connections among these variables, even the more clearly defined economic ones. Interest rates, for example, have an impact on unemployment rates, which in turn influence revenues; budget deficits affect trade deficits, which sway interest rates and exchange rates; corporate fraud influences consumer confidence, which may depress the stock market and alter other indices; natural business cycles of various periods are superimposed on one another; an increase in some quantity or index positively (or negatively) feeds back on another, reinforcing or weakening it and being reinforced or weakened in turn.
 
Few of these associations are accurately described by a straight-line graph and so they bring to a mathematician’s mind the subject of nonlinear dynamics, more popularly known as chaos theory. The subject doesn’t deal with anarchist treatises or surrealist manifestoes but with the behavior of so-called nonlinear systems. For our purposes these may be thought of as any collection of parts whose interactions and connections are described by nonlinear rules or equations. That is to say, the equations’ variables may be multiplied together, raised to powers, and so on. As a consequence the system’s parts are not necessarily linked in a proportional manner as they are, for example, in a bathroom scale or a thermometer; doubling the magnitude of one part will not double that of another—nor will outputs be proportional to inputs. Not surprisingly, trying to predict the precise long-term behavior of such systems is often futile.
 
Let me, in place of a technical definition of such nonlinear systems, describe instead a particular physical instance of one. Picture before you a billiards table. Imagine that approximately twenty-five round obstacles are securely fastened to its surface in some haphazard arrangement. You hire the best pool player you can find and ask him to place the ball at a particular spot on the table and take a shot toward one of the round obstacles. After he’s done so, his challenge is to make exactly the same shot from the same spot with another ball. Even if his angle on this second shot is off by the merest fraction of a degree, the trajectories of these two balls will very soon diverge considerably. An infinitesimal difference in the angle of impact will be magnified by successive hits of the obstacles. Soon one of the balls will hit an obstacle that the other misses entirely, at which point all similarity between the two trajectories ends.
 
The sensitivity of the billiard balls’ paths to minuscule variations in their initial angles is characteristic of nonlinear systems. The divergence of the billiard balls is not unlike the disproportionate effect of seemingly inconsequential events, the missed planes, serendipitous meetings, and odd mistakes and links that shape and reshape our lives.
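The billiard table is hard to reproduce in a few lines of code, but the same sensitive dependence shows up in the logistic map, a textbook nonlinear system (the substitution is mine, not the author’s). Two trajectories starting one part in a billion apart lose all resemblance within a few dozen steps:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1 - x)

x, y = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
# The gap roughly doubles each step; by step ~30 it is of order 1 and all
# similarity between the two trajectories ends, just as with the two shots.
```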
 
This sensitive dependence of nonlinear systems on even tiny differences in initial conditions is, I repeat, relevant to various aspects of the stock market in general, in particular its sometimes wildly disproportionate responses to seemingly small stimuli such as companies’ falling a penny short of earnings estimates. Sometimes, of course, the differences are more substantial. Witness the notoriously large discrepancies between government economic figures on the size of budget surpluses and corporate accounting statements of earnings and the “real” numbers.
 
Aspects of investor behavior too can no doubt be better modeled by a nonlinear system than a linear one. This is so despite the fact that linear systems and models are much more robust, with small differences in initial conditions leading only to small differences in final outcomes. They’re also easier to predict mathematically, and this is why they’re so often employed whether their application is appropriate or not. The chestnut about the economist looking for his lost car keys under the street lamp comes to mind. “You probably lost them near the car,” his companion remonstrates, to which the economist responds, “I know, but the light is better over here.”
 
The “butterfly effect” is the term often used for the sensitive dependence of nonlinear systems, a characteristic that has been noted in phenomena ranging from fluid flow and heart fibrillations to epilepsy and price fluctuations. The name comes from the idea that a butterfly flapping its wings someplace in South America might be sufficient to change future weather systems, helping to bring about, say, a tornado in Oklahoma that would otherwise not have occurred. It also explains why long-range precise prediction of nonlinear systems isn’t generally possible. This non-predictability is the result not of randomness but of complexity too great to fathom.
 
Yet another reason to suspect that parts of the market may be better modeled by nonlinear systems is that such systems’ “trajectories” often follow a fractal course. The trajectories of these systems, for which stock price movements may be considered a proxy, turn out to be aperiodic and unpredictable and, when examined closely, evince even more intricacy. Still closer inspection of the system’s trajectories reveals yet smaller vortices and complications of the same general kind.
 
In general, fractals are curves, surfaces, or higher dimensional objects that contain more, but similar, complexity the closer one looks. A shoreline, to cite a classic example, has a characteristic jagged shape at whatever scale we draw it; that is, whether we use satellite photos to sketch the whole coast, map it on a fine scale by walking along some small section of it, or examine a few inches of it through a magnifying glass. The surface of a mountain looks roughly the same whether seen from a height of 200 feet by a giant or close up by an insect. The branching of a tree appears the same to us as it does to birds, or even to worms or fungi in the idealized limiting case of infinite branching.
 
As the mathematician Benoit Mandelbrot, the discoverer of fractals, has famously written, “Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.” These and many other shapes in nature are near fractals, having characteristic zigzags, push-pulls, bump-dents at almost every size scale, greater magnification yielding similar but ever more complicated convolutions.
 
And the bottom line, or, in this case, the bottom fractal, for stocks? By starting with the basic up-down-up and down-up-down patterns of a stock’s possible movements, continually replacing each of these patterns’ three segments with smaller versions of one of the basic patterns chosen at random, and then altering the spikiness of the patterns to reflect changes in the stock’s volatility, Mandelbrot has constructed what he calls multifractal “forgeries.” The forgeries are patterns of price movement whose general look is indistinguishable from that of real stock price movements. In contrast, more conventional assumptions about price movements, say those of a strict random-walk theorist, lead to patterns that are noticeably different from real price movements.
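A bare-bones version of such a forgery can be generated recursively: keep replacing each straight segment of the price path with a three-piece up-down-up generator, shuffling the order of the pieces at each stage. The sketch below (Python) uses the break-point fractions (4/9, 2/3) and (5/9, 1/3) often quoted for Mandelbrot’s cartoons, but treat the details, and the omission of his volatility adjustment, as illustrative simplifications:

```python
import random

def cartoon(p0, p1, depth):
    """Recursively replace the segment p0 -> p1 with a three-piece generator."""
    if depth == 0:
        return [p0, p1]
    (t0, v0), (t1, v1) = p0, p1
    dt, dv = t1 - t0, v1 - v0
    # The up-down-up generator as (time-width, value-change) fractions;
    # shuffling the order of the pieces randomizes the forgery.
    pieces = [(4/9, 2/3), (1/9, -1/3), (4/9, 2/3)]
    random.shuffle(pieces)
    breaks, t, v = [p0], t0, v0
    for w, h in pieces:
        t, v = t + w * dt, v + h * dv
        breaks.append((t, v))
    points = []
    for q0, q1 in zip(breaks, breaks[1:]):
        points += cartoon(q0, q1, depth - 1)[:-1]   # drop duplicated endpoints
    return points + [p1]

path = cartoon((0.0, 100.0), (1.0, 120.0), depth=8)   # 3**8 = 6,561 segments
print(len(path), "points; first three:", path[:3])
```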
 
These multifractal patterns are so far merely descriptive, not predictive of specific price changes. In their modesty, as well as in their mathematical sophistication, they differ from the Elliott waves mentioned in chapter 3.
 

Even this does not prove that chaos (in the mathematical sense) reigns in (part of) the market, but it is clearly a bit more than suggestive. The occasional surges of extreme volatility that have always been a part of the market are not as nicely accounted for by traditional approaches to finance, approaches Mandelbrot compares to “theories of sea waves that forbid their swells to exceed six feet.”

Are Stocks Less Risky Than Bonds?




Perhaps because of Monopoly, certainly because of WorldCom, and for many other reasons, the focus of this book has been the stock market, not the bond market (or real estate, commodities, and other worthy investments). Stocks are, of course, shares of ownership in a company, whereas bonds are loans to a company or government, and “everybody knows” that bonds are generally safer and less volatile than stocks, although the latter have a higher rate of return. In fact, as Jeremy Siegel reports in Stocks for the Long Run, the average annual rate of return for stocks between 1802 and 1997 was 8.4 percent; the rate on treasury bills over the same period was between 4 percent and 5 percent. (The rates that follow are before inflation. It goes without saying, I hope, that an 8 percent rate of return in a year of 15 percent inflation is much worse than a 4 percent return in a year of 3 percent inflation.)
 
Despite what “everybody knows,” Siegel argues in his book that, as with Monopoly’s hotels and railroads, stocks are actually less risky than bonds because, over the long run, they have performed so much better than bonds or treasury bills. In fact, the longer the run, the more likely this has been the case. (Comments like “everybody knows” or “they’re all doing this” or “everyone’s buying that” usually make me itch. My background in mathematical logic has made it difficult for me to interpret “all” as signifying something other than all.) “Everybody” does have a point, however. How can we believe Siegel’s claims, given that the standard deviation for stocks’ annual rate of return has been 17.5 percent?
 
If we assume a normal distribution and allow ourselves to get numerical for a couple of paragraphs, we can see how stomach-churning this volatility is. It means that about two-thirds of the time, the rate of return will be between -9.1 percent and 25.9 percent (that is, 8.4 percent plus or minus 17.5 percent), and about 95 percent of the time the rate will be between -26.6 percent and 43.4 percent (that is, 8.4 percent plus or minus two times 17.5 percent). Although the precision of these figures is absurd, one consequence of the last assertion is that the returns will be worse than -26.6 percent about 2.5 percent of the time (and better than 43.4 percent with the same frequency). So about once every forty years (1/40 is 2.5 percent), you will lose more than a quarter of the value of your stock investments and much more frequently than that do considerably worse than treasury bills.
 
These numbers certainly don’t seem to indicate that stocks are less risky than bonds over the long term. The statistical warrant for Siegel’s contention, however, is that over time, the returns even out and the deviations shrink. Specifically, the annualized standard deviation for rates of return over a number N of years is the standard deviation divided by the square root of N. The larger N is, the smaller is the standard deviation. (The cumulative standard deviation is, however, greater.) Thus over any given four-year period the annualized standard deviation for stock returns is 17.5%/2, or 8.75%. Likewise, since the square root of 30 is about 5.5, the annualized standard deviation of stock returns over any given thirty-year period is only 17.5%/5.5, or 3.2%. (Note that this annualized thirty-year standard deviation is the same as the annual standard deviation for the conservative stock mentioned in the example at the end of chapter 6.)
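Both calculations take only a few lines to verify (Python; the normal-distribution assumption is the text’s and, as the absurd precision suggests, should not be taken too literally):

```python
mean, sd = 0.084, 0.175   # annual return 8.4%, standard deviation 17.5%

# About two-thirds of annual returns fall within one standard deviation...
print(f"1-sigma band: {mean - sd:+.1%} to {mean + sd:+.1%}")      # -9.1% to +25.9%
# ...and about 95 percent within two.
print(f"2-sigma band: {mean - 2*sd:+.1%} to {mean + 2*sd:+.1%}")  # -26.6% to +43.4%

# The annualized standard deviation over N years shrinks like 1/sqrt(N):
for n in (1, 4, 30):
    print(f"{n:2d}-year annualized sd: {sd / n ** 0.5:.2%}")      # 17.5%, 8.75%, 3.20%
```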
 
Despite the impressive historical evidence, there is no guarantee that stocks will continue to outperform bonds. If you look at the period from 1982 to 1997, the average annual rate of return for stocks was 16.7 percent with a standard deviation of 13.1 percent, while the returns for bonds were between 8 percent and 9 percent. But from 1966 to 1981, the average annual rate of return for stocks was 6.6 percent with a standard deviation of 19.5 percent, while the returns for bonds were about 7 percent.
 
So is it really the case that, despite the debacles, deadbeats, and doomsday equities like WCOM and Enron, the less risky long-term investment is in stocks? Not surprisingly, there is a counterargument. Despite their volatility, stocks as a whole have proven less risky than bonds over the long run because their average rates of return have been considerably higher. Their rates of return have been higher because their prices have been relatively low. And their prices have been relatively low because they’ve been viewed as risky and people need some inducement to make risky investments.
 
But what happens if investors believe Siegel and others, and no longer view stocks as risky? Then their prices will rise because risk-averse investors will need less inducement to buy them; the “equity-risk premium,” the amount by which stock returns must exceed bond returns to attract investors, will decline. And the rates of return will fall because prices will be higher. And stocks will therefore be riskier because of their lower returns.
 
Viewed as less risky, stocks become risky; viewed as risky, they become less risky. This is yet another instance of the skittish, self-reflective, self-corrective dynamic of the market. Interestingly, Robert Shiller, a personal friend of Siegel, looks at the data and sees considerably lower stock returns for the next ten years.
 
Market practitioners as well as academics disagree. In early October 2002, I attended a debate between Larry Kudlow, a CNBC commentator and Wall Street fixture, and Bob Prechter, a technical analyst and Elliott wave proponent. The audience at the CUNY Graduate Center in New York seemed affluent and well-educated, and the speakers both seemed very sure of themselves and their predictions. Neither seemed at all affected by the other’s diametrically opposed expectations. Prechter anticipated very steep declines in the market, while Kudlow was quite bullish. Unlike Siegel and Shiller, they didn’t engage on any particulars and generally talked past each other.
 
What I find odd about such encounters is how typical they are of market discussions. People with impressive credentials regularly expatiate upon stocks and bonds and come to conclusions contrary to those of other people with equally impressive credentials. An article in the New York Times in November 2002 is another case in point. It described three plausible prognoses for the market—bad, so-so, and good—put forth by economic analysts Steven H. East, Charles Pradilla, and Abby Joseph Cohen, respectively. Such stark disagreement happens very rarely in physics or mathematics. (I’m not counting crackpots who sometimes receive a lot of publicity but aren’t taken seriously by anybody knowledgeable.)
 

The market’s future course may lie beyond what, in chapter 9, I term the “complexity horizon.” Nevertheless, aside from some real estate, I remain fully invested in stocks, which may or may not result in my remaining fully shirted.

Expected Value, Not Value Expected



What can we anticipate? What should we expect? What’s the likely high, low, and average value? Whether the quantity in question is height, weather, or personal income, extremes are more likely to make it into the headlines than are more informative averages. “Who makes the most money,” for example, is generally more attention-grabbing than “what is the average income” (although both terms are always suspect because—surprise—like companies, people lie about how much money they make).
 
Even more informative than averages, however, are distributions. What, for example, is the distribution of all incomes and how spread out are they about the average? If the average income in a community is $100,000, this might reflect the fact that almost everyone makes somewhere between $80,000 and $120,000, or it might mean that a big majority earns less than $30,000 and shops at Kmart, whose spokesperson, the (too) maligned Martha Stewart, also lives in town and brings the average up to $100,000. “Expected value” and “standard deviation” are two mathematical notions that help clarify these issues.
 
An expected value is a special sort of average. Specifically, the expected value of a quantity is the average of its values, but weighted according to their probabilities. If, for example, based on analysts’ recommendations, our own assessment, a mathematical model, or some other source of information, we assume that 1/2 of the time a stock will have a 6 percent rate of return, that 1/3 of the time it will have a -2 percent rate of return, and that the remaining 1/6 of the time it will have a 28 percent rate of return, then, on average, the stock’s rate of return over any given six periods will be 6 percent three times, -2 percent twice, and 28 percent once. The expected value of its return is simply this probabilistically weighted average—(6% + 6% + 6% + (-2%) + (-2%) + 28%)/6, or 7%.
 
Rather than averaging directly, one generally obtains the expected value of a quantity by multiplying its possible values by their probabilities and then adding up these products. Thus .06 × 1/2 + (-.02) × 1/3 + .28 × 1/6 = .07, or 7%, the expected value of the above stock’s return. Note that the term “mean” and the Greek letter µ (mu) are used interchangeably with “expected value,” so 7% is also the mean return, µ.
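
In code, the definition is a one-line weighted sum. A minimal sketch, replaying the hypothetical stock above:

```python
# Expected value as a probability-weighted average, using the
# hypothetical stock above: 6% with probability 1/2, -2% with
# probability 1/3, and 28% with probability 1/6.
outcomes = [(0.06, 1 / 2), (-0.02, 1 / 3), (0.28, 1 / 6)]

mu = sum(value * prob for value, prob in outcomes)
print(mu)  # 0.07 (up to floating-point rounding): the 7% mean return
```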
 
The notion of expected value clarifies a minor investing mystery. An analyst may simultaneously and without contradiction believe that a stock is very likely to do well but that, on average, it’s a loser. Perhaps she estimates that the stock will rise 1 percent in the next month with probability 95 percent and that it will fall 60 percent in the same time period with probability 5 percent. (The probabilities might come, for example, from an appraisal of the likely outcome of an impending court decision.) The expected value of its price change is thus (.01 × .95) + ((-.60) × .05), which equals -.021, or an expected loss of 2.1%. The lesson is that the expected value, -2.1%, is not the value expected, which is 1%.
 
The same probabilities and price changes can also be used to illustrate two complementary trading strategies, one that usually results in small gains but sometimes in big losses, and one that usually results in small losses but sometimes in big gains. An investor who’s willing to take a risk to regularly make some “easy money” might sell puts on the above stock, puts that expire in a month and whose strike price is a little under the present price. In effect, he’s betting that the stock won’t decline in the next month. Ninety-five percent of the time he’ll be right, and he’ll keep the put premiums and make a little money. Correspondingly, the buyer of the puts will lose a little money (the put premiums) 95 percent of the time. Assuming the probabilities are accurate, however, when the stock declines, it declines by 60 percent, and so the puts (the right to sell the stock at a little under the original price) become very valuable 5 percent of the time. The buyer of the puts then makes a lot of money and the seller loses a lot.
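
To see the complementary payoff patterns, here is a rough Monte Carlo sketch. The probabilities are the ones above, but the 2 percent monthly premium is a made-up figure chosen purely for illustration (the text never prices the puts), and the trade is treated as a simple zero-sum transfer between seller and buyer.

```python
# A rough Monte Carlo of the two put strategies. The 2% monthly
# premium is a hypothetical figure, not a market quote.
import random

random.seed(1)
PREMIUM = 0.02  # assumed premium per month, per dollar of stock at risk

# First, the expected value computed in the text:
print(0.01 * 0.95 + (-0.60) * 0.05)  # about -0.021, though +1% is the value expected

seller = 0.0
for month in range(1200):
    seller += PREMIUM            # the seller pockets the premium...
    if random.random() < 0.05:   # ...except when the stock drops 60%
        seller -= 0.60
buyer = -seller                  # the buyer's results are the mirror image

print(round(seller, 2), round(buyer, 2))
# Most months: small gain for the seller, small loss for the buyer.
# Occasionally: a large loss for the seller, a large gain for the buyer.
```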
 
Investors can play the same game on a larger scale by buying and selling puts on the S&P 500, for example, rather than on any particular stock. The key to playing is coming up with reasonable probabilities for the possible returns, numbers about which people are as likely to differ as they are in their preferences for the above two strategies. Two exemplars of these two types of investor are Victor Niederhoffer, a well-known futures trader and author of The Education of a Speculator, who lost a fortune by selling puts a few years ago, and Nassim Taleb, another trader and the author of Fooled by Randomness, who makes his living by buying them.
 
For a more pedestrian illustration, consider an insurance company. From past experience, it has good reason to believe that each year, on average, one out of every 10,000 of its homeowners’ policies will result in a claim of $400,000, one out of 1,000 policies will result in a claim of $60,000, one out of 50 will result in a claim of $4,000, and the remainder will result in a claim of $0. The insurance company would like to know what its average payout will be per policy written. The answer is the expected value, which in this case is ($400,000 × 1/10,000) + ($60,000 × 1/1,000) + ($4,000 × 1/50) + ($0 × 9,979/10,000) = $40 + $60 + $80 + $0 = $180. The premium the insurance company charges the homeowners will no doubt be at least $181.
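
The same one-line weighted sum settles the insurance arithmetic. A minimal sketch:

```python
# Expected payout per homeowners' policy, from the claim table above.
claims = [(400_000, 1 / 10_000), (60_000, 1 / 1_000),
          (4_000, 1 / 50), (0, 9_979 / 10_000)]

expected_payout = sum(amount * prob for amount, prob in claims)
print(expected_payout)  # 180.0 dollars, so the premium had better exceed $180
```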
 
Combining the techniques of probability theory with the definition of expected value allows for the calculation of more interesting quantities. The rules for the World Series of baseball, for example, stipulate that the series ends when one team wins four games. The rules further stipulate that team A plays in its home stadium for games 1 and 2 and however many of games 6 and 7 are necessary, whereas team B plays in its home stadium for games 3, 4, and, if necessary, game 5. If the teams are evenly matched, you might be interested in the expected number of games that will be played in each team’s stadium. Skipping the calculation, I’ll simply note that team A can expect to play 2.9375 games and team B 2.875 games in their respective home stadiums.
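
In place of the skipped calculation, here is a brute-force check: it enumerates all 2^7 equally likely sequences of game winners and truncates each at the moment some team records its fourth win.

```python
# Expected home games for each team in a best-of-seven series between
# evenly matched teams. Games 1, 2, 6, 7 are at A's stadium; 3, 4, 5 at B's.
from itertools import product

HOME_A = {1, 2, 6, 7}
expected_a = expected_b = 0.0

for sequence in product("AB", repeat=7):
    wins = {"A": 0, "B": 0}
    for game, winner in enumerate(sequence, start=1):
        wins[winner] += 1
        if wins[winner] == 4:
            break  # the series ends here; later games are never played
    for g in range(1, game + 1):
        if g in HOME_A:
            expected_a += (1 / 2) ** 7
        else:
            expected_b += (1 / 2) ** 7

print(expected_a, expected_b)  # 2.9375 and 2.875, as claimed
```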
 
Almost any situation in which one can calculate (or reasonably estimate) the probabilities of the values of a quantity allows us to determine the expected value of that quantity. An example more tractable than the baseball problem concerns the decision whether to park in a lot or illegally on the street. If you park in a lot, the rate is $10 or $14, depending upon whether you stay for less than an hour, the probability of which you estimate to be 25 percent. You may, however, decide to park illegally on the street and have reason to believe that 20 percent of the time you will receive a simple parking ticket for $30, 5 percent of the time you will receive an obstruction of traffic citation for $100, and 75 percent of the time you will get off for free.
 
The expected value of parking in the lot is ($10 × .25) + ($14 × .75), which equals $13. The expected value of parking on the street is ($100 × .05) + ($30 × .20) + ($0 × .75), which equals $11. For those to whom this is not already Greek, we might say that µL, the mean cost of parking in the lot, and µS, the mean cost of parking on the street, are $13 and $11, respectively.
 

Even though parking on the street is cheaper on average (assuming money is your only consideration), the variability of what you’ll have to pay there is much greater than it is with the lot. This brings us to the notion of standard deviation and stock risk.
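
To put numbers on that variability before the formal definition arrives, a sketch computing both the means and the spreads of the two options:

```python
# Means and spreads for the two parking options. The standard deviation
# (square root of the probability-weighted squared deviations from the
# mean) quantifies how much more variable street parking is.
from math import sqrt

lot = [(10, 0.25), (14, 0.75)]
street = [(100, 0.05), (30, 0.20), (0, 0.75)]

def mean_and_sd(distribution):
    mu = sum(value * prob for value, prob in distribution)
    variance = sum(prob * (value - mu) ** 2 for value, prob in distribution)
    return mu, sqrt(variance)

print(mean_and_sd(lot))     # (13.0, about 1.7)
print(mean_and_sd(street))  # (11.0, about 23.6)
```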

A Stock-Newsletter Scam





The accounting scandals involving WorldCom, Enron, and others derived from the data being selected, spun, and filtered. A scam I first discussed in my book Innumeracy derives instead from the recipients of the data being selected, spun, and filtered. It goes like this. Someone claiming to be the publisher of a stock newsletter rents a mailbox in a fancy neighborhood, has expensive stationery made up, and sends out letters to potential subscribers boasting of his sophisticated stock-picking software, financial acumen, and Wall Street connections. He writes also of his amazing track record, but notes that the recipients of his letters needn’t take his word for it.
 
Assume you are one of these recipients and for the next six weeks you receive correct predictions about a certain common stock index. Would you subscribe to the newsletter? What if you received ten consecutive correct predictions?
 
Here’s the scam. The newsletter publisher sends out 64,000 letters to potential subscribers. (Using email would save postage, but might appear to be a “spam scam” and hence be less credible.) To 32,000 of the recipients, he predicts the index in question will rise the following week and to the other 32,000, he predicts it will decline. No matter what happens to the index the next week, he will have made a correct prediction to 32,000 people. To 16,000 of them he sends another letter predicting a rise in the index for the following week, and to the other 16,000 he predicts a decline. Again, no matter what happens to the index the next week, he will have made correct predictions for two consecutive weeks to 16,000 people. To 8,000 of them he sends a third letter predicting a rise for the third week and to the other 8,000 he predicts a decline.
 
Focusing at each stage on the people to whom he’s made only correct predictions and winnowing out the rest, he iterates this procedure a few more times until there are 1,000 people left to whom he’s made six straight correct “predictions.” To these he sends a different sort of follow-up letter, pointing out his successes and saying that they can continue to receive these oracular pronouncements if they pay the $1,000 subscription price to the newsletter. If they all pay, that’s a million dollars for someone who need know nothing about stocks, indices, trends, or dividends. If this is done knowingly, it is illegal. But what if it’s done unknowingly by earnest, confident, and ignorant newsletter publishers? (Compare the faith healer who takes credit for any accidental improvements.)
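
The winnowing arithmetic is easy to verify:

```python
# Each week's pair of opposite "predictions" halves the pool of
# recipients who have seen nothing but correct calls.
recipients = 64_000
for week in range(1, 7):
    recipients //= 2
    print(week, recipients)
# After six weeks, 1,000 recipients remain; at $1,000 apiece, a million
# dollars for the "publisher."
```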
 
There is so much complexity in the market, there are so many different measures of success and ways to spin a story, that most people can manage to convince themselves that they’ve been, or are about to be, inordinately successful. If people are desperate enough, they’ll manage to find some seeming order in random happenings.
 
Similar to the newsletter scam, but with a slightly different twist, is a story related to me by an acquaintance who described his father’s business and its sad demise. He claimed that his father, years before, had run a large college-preparation service in a South American country whose identity I’ve forgotten. My friend’s father advertised that he knew how to drastically improve applicants’ chances of getting into the elite national university. Hinting at inside contacts and claiming knowledge of the various forms, deadlines, and procedures, he charged an exorbitant fee for his service, which he justified by offering a money-back guarantee to students who were not accepted.
 
One day, the secret of his business model came to light. All the material that prospective students had sent him over the years was found unopened in a trash dump. Upon investigation it turned out that he had simply been collecting the students’ money (or rather their parents’ money) and doing nothing for it. The trick was that his fees were so high and his marketing so focused that only the children of affluent parents subscribed to his service, and almost all of them were admitted to the university anyway. He refunded the fees of those few who were not admitted. He was also sent to prison for his efforts.
 

Are stock brokers in the same business as my acquaintance’s father? Are stock analysts in the same business as the newsletter publisher? Not exactly, but there is scant evidence that they possess any unusual predictive powers. That’s why I thought news stories in November 2002 recounting New York Attorney General Eliot Spitzer’s criticism of Institutional Investor magazine’s analyst awards were a tad superfluous. Spitzer noted that the stock-picking performances of most of the winning analysts were, in fact, quite mediocre. Maybe Donald Trump will hold a press conference pointing out that the country’s top gamblers don’t do particularly well at roulette.

Moving Averages, Big Picture


 
People, myself included, sometimes ridicule technical analysis and the charts associated with it in one breath and then in the next reveal how much in (perhaps unconscious) thrall to these ideas they really are. They bring to mind the old joke about the man who complains to his doctor that his wife has for several years believed she’s a chicken. He would have sought help sooner, he says, “but we needed the eggs.” Without reading too much into this story except that we do sometimes seem to need the notions of technical analysis, let me finally proceed to examine some of these notions.
 
Investors naturally want to get a broad picture of the movement of the market and of particular stocks, and for this the simple technical notion of a moving average is helpful. When a quantity varies over time (such as the stock price of a company, the noontime temperature in Milwaukee, or the cost of cabbage in Kiev), one can, each day, average its values over, say, the previous 200 days. The averages in this sequence vary and hence the sequence is called a moving average, but the value of such a moving average is that it doesn’t move nearly as much as the stock price itself; it might be termed the phlegmatic average.
 
For illustration, consider the three-day moving average of a company whose stock is very volatile, its closing prices on successive days being: 8, 9, 10, 5, 6, 9. On the day the stock closed at 10, its three-day moving average was (8 + 9 + 10)/3 or 9. On the next day, when the stock closed at 5, its three-day moving average was (9 + 10 + 5)/3 or 8. When the stock closed at 6, its three-day moving average was (10 + 5 + 6)/3 or 7. And the next day, when it closed at 9, its three-day moving average was (5 + 6 + 9)/3 or 6.67.
 
If the stock oscillates in a very regular way and you are careful about the length of time you pick, the moving average may barely move at all. Consider an extreme case, the twenty-day moving average of a company whose closing stock prices oscillate with metronomic regularity. On successive days they are: 51, 52, 53, 54, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 46, 47, 48, 49, 50, 51, 52, 53, and so on, moving up and down around a price of 50. The twenty-day moving average on the twentieth day, when the stock is back at 50, is 50 (obtained by averaging the twenty numbers up to and including it). Likewise, the twenty-day moving average on the next day, when the stock is at 51, is also 50. It’s the same for the next day. In fact, if the stock price oscillates in this regular way and repeats itself every twenty days, the twenty-day moving average is always 50.
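
Both examples are easy to reproduce. The sketch below defines a plain, unweighted moving average and applies it to the two price series above:

```python
# A plain moving average: the mean of the last `window` closing prices.
def moving_average(prices, window):
    return [sum(prices[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(prices))]

volatile = [8, 9, 10, 5, 6, 9]
print([round(m, 2) for m in moving_average(volatile, 3)])
# [9.0, 8.0, 7.0, 6.67], the four averages computed above

# Three full cycles of the metronomically oscillating stock:
cycle = [51, 52, 53, 54, 55, 54, 53, 52, 51, 50,
         49, 48, 47, 46, 45, 46, 47, 48, 49, 50]
print(set(moving_average(cycle * 3, 20)))  # {50.0}: it never budges
```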
 
There are variations in the definition of moving averages (some weight recent days more heavily, others take account of the varying volatility of the stock), but they are all designed to smooth out the day-to-day fluctuations in a stock’s price in order to give the investor a look at broader trends. Software and online sites allow easy comparison of the stock’s daily movements with the slower-moving averages.
 
Technical analysts use the moving average to generate buy-sell rules. The most common such rule directs you to buy a stock when it exceeds its X-day moving average. Context determines the value of X, which is usually 10, 50, or 200 days. Conversely, the rule directs you to sell when the stock falls below its X-day moving average. With the regularly oscillating stock above, the rule would not lead to any gains or losses. It would call for you to buy the stock when it moves from 50, its moving average, to 51, and for you to sell it when it moves from 50 to 49. In the previous example of the three-day moving average, the rule would require that you buy the stock at the end of the third day and sell it at the end of the fourth, leading in this particular case to a loss.
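
As a sanity check on that last claim, here is the rule in its simplest form (no threshold, trading at the close) applied to the volatile stock from the earlier example:

```python
# Buy when the close moves above its moving average while out of the
# stock; sell when it falls below while holding. A bare-bones sketch.
def apply_rule(prices, window):
    trades, cash, bought_at = [], 0.0, None
    for i in range(window - 1, len(prices)):
        ma = sum(prices[i - window + 1 : i + 1]) / window
        price = prices[i]
        if price > ma and bought_at is None:
            bought_at = price
            trades.append(("buy", price))
        elif price < ma and bought_at is not None:
            cash += price - bought_at
            bought_at = None
            trades.append(("sell", price))
    return trades, cash

trades, profit = apply_rule([8, 9, 10, 5, 6, 9], 3)
print(trades)  # buy at 10 (day 3), sell at 5 (day 4), buy again at 9
print(profit)  # -5: the loss mentioned in the text
```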
 
The rule can work well when a stock fluctuates about a long-term upward- or downward-sloping course. The rationale for it is that trends should be followed, and that when a stock moves above its X-day moving average, this movement signals that a bullish trend has begun. Conversely, when a stock moves below its X-day moving average, the movement signals a bearish trend. I reiterate that mere upward (downward) movement of the stock is not enough to signal a buy (sell) order; a stock must move above (below) its moving average.
 
Alas, had I followed any sort of moving average rule, I would have been out of WCOM, which moved more or less steadily downhill for almost three years, long before I lost most of my investment in it. In fact, I never would have bought it in the first place. The security guard mentioned in chapter 1 did, in effect, use such a rule to justify the sale of the stocks in his pension plan.
 
There are a few studies, which I’ll get to later, suggesting that a moving average rule is sometimes moderately effective. Even so, however, there are several problems. One is that it can cost you a lot in commissions if the stock price hovers around the moving average and moves through it many times in both directions. Thus you have to modify the rule so that the price must move above or below its moving average by a non-trivial amount. You must also decide whether to buy at the end of the day the price exceeds the moving average or at the beginning of the next day or later still.
 
You can mine the voluminous time-series data on stock prices to find the X that has given the best returns for adhering to the X-day moving average buy-sell rule. Or you can complicate the rule by comparing moving averages over different intervals and buying or selling when these averages cross each other. You can even adapt the idea to day trading by using X-minute moving averages defined in terms of the mathematical notion of an integral. Optimal strategies can always be found after the fact. The trick is getting something that will work in the future; everyone’s very good at predicting the past. This brings us to the most trenchant criticism of the moving-average strategy. If the stock market is efficient, that is, if information about a stock is almost instantaneously incorporated into its price, then any stock’s future moves will be determined by random external events. Its past behavior, in particular its moving average, is irrelevant, and its future movement is unpredictable.
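
The data-mining pitfall is easy to demonstrate. In the sketch below (simulated prices, nothing more), the X that maximizes profit on one random walk is then let loose on a second, independent walk; the in-sample winner has no reason to repeat its performance out of sample.

```python
# Fit the moving-average window X to one random walk, then test it on a
# fresh, independent walk. Prices are simulated; this illustrates
# overfitting, not a real backtest.
import random

def random_walk(n, seed, start=100.0, daily_sd=0.01):
    rng, price, prices = random.Random(seed), start, []
    for _ in range(n):
        price *= 1 + rng.gauss(0, daily_sd)
        prices.append(price)
    return prices

def rule_profit(prices, x):
    cash, bought_at = 0.0, None
    for i in range(x - 1, len(prices)):
        ma = sum(prices[i - x + 1 : i + 1]) / x
        if prices[i] > ma and bought_at is None:
            bought_at = prices[i]
        elif prices[i] < ma and bought_at is not None:
            cash, bought_at = cash + prices[i] - bought_at, None
    return cash

past = random_walk(1000, seed=1)
best_x = max(range(5, 201, 5), key=lambda x: rule_profit(past, x))
future = random_walk(1000, seed=2)
print(best_x, rule_profit(past, best_x), rule_profit(future, best_x))
# The X that "predicted the past" best carries no guarantee forward.
```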
 


 

THE FLOOD THEORY



Most Young Earth Creationists appeal to one or more catastrophes to explain geological features—mountain ranges, sedimentary layers, and so on—that might otherwise seem far older. There's nothing wrong with catastrophe theories as such. Even orthodox scientists suppose catastrophes—comet strikes, volcanic eruptions, floods, and so on—have played an important role in shaping this planet and the life on it. According to most contemporary Young Earth Creationists, the key catastrophe involved in shaping our contemporary landscape was the biblical flood: the flood on which Noah famously floated his ark. They believe that Old Testament story is literally true: Noah really did build an ark onto which he was instructed by God to put seven mated pairs of every clean kind of animal and every kind of bird (Genesis 7:2). The waters then rose, drowning the rest. The current inhabitants of the land and sky are descendants of those who boarded the ark.
So how is the flood supposed to account for various geological features, such as the fossil record? It's claimed that, when the waters rose, they produced huge amounts of silt and mud. This material settled and solidified, eventually forming many of the sedimentary rock layers we find today. Many of the fossils we find within these layers are fossils of creatures drowned by the flood. The flood supposedly also explains other geological features, such as the Grand Canyon, which was carved out when the flood waters subsided.
Perhaps you are wondering why creatures are not buried randomly within the sedimentary layers but are arranged in a very specific order. Why, if the flood theory is true, do we never find the fossils of large mammals within the same layers as dinosaurs? Why do the lower layers contain fossils of only simple sea creatures? Why do humans appear in only the very topmost layers? Why, if they were all buried by the same catastrophic flood, aren't their remains jumbled up together?
Young Earth Creationists have their answers. They say we should expect the simple sea creatures living at the bottom of the ocean to have been buried first. Birds would be restricted to the higher layers, as they would be able to fly from the rising waters. Humankind, being the smartest, would probably have found ways to avoid being drowned until the last moment, so it is not surprising we find human remains only in the top layers. We should also expect to see some order in the fossil record due, for example, to the fact that different ecological zones were submerged at different times, and also because of the different rates at which the corpses of different species bloat and then sink. “So you see?” say Young Earth Creationists. “The fossil record is, after all, consistent with our theory! It all fits!”
We might say in reply, “But these moves made by Creationists only postpone their difficulties, as they generate a myriad of further puzzles. What about flightless birds, such as penguins and ostriches, which would not have been able to delay being drowned? Why do their fossils never show up in layers lower than other birds? Why do we find sharks but no dolphins in the lower sedimentary layers, given that they occupy similar ecological zones? Surely both would have been buried in the early stages of the flood? In fact we could go on and on and on, citing a mountain of fossil evidence that contradicts the flood theory.” Still, Young Earth Creationists continue to work on developing flood-friendly explanations for these observations.
Of course, it's not just the fossil record that generates puzzles for Young Earth Creationism. Let's think for a moment about the logistics of Noah's expedition. Genesis 6:15 says the ark was 300 × 50 × 30 cubits—that's about 460 × 75 × 44 feet. Not a particularly large vessel (a cross section of 75 by 44 feet is, coincidentally, not very much greater than that of my four-bedroom Victorian terraced house). How did at least two of every kind of animal fit aboard this comparatively small vessel? Remember, Noah didn't just need specimens of today's creatures such as African elephants, rhinos, and giraffes. If dinosaurs were drowned in the flood, then Noah must also have put dinosaurs on board his ark. Young Earth Creationists accept this. But then how did Noah get two T. rexes, two stegosauruses, two brontosauruses, and so on, safely aboard? These aren't even the very largest dinosaurs. What about, for example, two argentinosauruses, at 120 feet long and 100 tons each?
Other questions arise. What did Noah feed his creatures during their voyage? How did Noah round up the known 900,000 insect species from around the planet, and how did he ensure they weren't trodden on during the voyage? Also, how did Noah acquire polar bears from the Arctic and possums from Australia—how did they cross the vast oceans and continents to reach the ark?
But Young Earth Creationists don't give up easily. They have constructed answers to all these and other obvious questions about Noah's voyage. For example, the website of Christian Information Ministries suggests that Noah did not need at least two of every named species of dinosaur, merely two of every “kind” (whatever that is, exactly): “Some creationists believe there may have been far fewer animals if Noah only took on board pairs of ‘kinds’ as the word is used in Genesis 1. God created these ‘kinds’ with potential for rich genetic diversity.” Creation Ministries International endorses this explanation, adding, “Although there are about 668 names of dinosaurs, there are perhaps only 55 different ‘kinds’ of dinosaurs.”
The same source also suggests that Noah did not need full-sized adult specimens—young examples would do:
Furthermore, not all dinosaurs were huge like the Brachiosaurus, and even those dinosaurs on the Ark were probably “teenagers” or young adults. Indeed, dinosaurs were recently discovered to go through a growth spurt, so God could have brought dinosaurs of the right age to start this spurt as soon as they disembarked.
 
So how did Noah feed all his creatures while they were at sea? Christian Information Ministries suggests they hibernated:
How Noah and his small family could have cared for this large menagerie is unknown, not to mention the sanitation problem! What we must remember is that this event, i.e., the Flood, had supernatural elements. For instance, the animals came to the Ark against their natural instincts (Gen. 6:20). It is therefore reasonable to assume, as some creationists do, that the animals' metabolism may have been slowed down during their confinement, even to the point where some of the animals may have gone into a state of hibernation.
 
Of course, once we allow “supernatural elements” to play a role, we could just say that God shrank the dinosaurs to pocket size during their journey. That would deal with many of these problems.
How do Young Earth Creationists explain how polar bears and possums made it all the way to Noah's Ark across the great oceans? According to Ken Ham and Tim Lovett at Answers in Genesis, there were no separate continents at that time. There was a single continent that the flood subsequently broke apart, as they explain: “As even secular geologists observe, it does appear that the continents were at one time ‘together’ and not separated by the vast oceans of today. The forces involved in the Flood were certainly sufficient to change all of this.” Really? The forces were sufficient to push vast continents around the face of the planet, but not enough to sink a wooden vessel with a cross section of 75 by 44 feet? I guess God must have somehow protected the ark from these extraordinary forces.
Even setting aside ark logistics, the flood theory raises a host of other questions, such as, where did all the water sufficient to cover the earth's great mountain ranges go? Answer: there were no great ranges at that time—they were created by the flood. Because the surface of the earth was relatively flat, there was, and still is, more than enough water to cover the land, as Ham and Lovett also explain: “Simply put, the water from the Flood is in the oceans and seas we see today. Three-quarters of the earth's surface is covered with water.”
So how did creatures get back to their respective newly created continents after the ark was finally deposited on the mountains of Ararat (Genesis 8:4)? The marmosets could hardly have walked and swum halfway around the world, across the Atlantic Ocean, to the Amazonian rain forests where they now dwell. I guess Noah must have dropped off the marmosets in South America and the possums in Australia as the waters receded (but how, then, did the ark end up deposited high on the mountains of Ararat?). Or perhaps Noah built them rafts.
So you see: Young Earth Creationists insist they can deal with many of these questions! Admittedly, they don't have all the answers—and don't claim to. But, as they correctly point out, who does? Even orthodox science faces questions it is not currently able to answer, and perhaps never will.

Explanations such as those outlined above are continuously being developed and refined by people describing themselves as “scientists” in multimillion-dollar “research institutes” dedicated to the pursuit of something called “creation science.” These “scientists” insist that, far from falsifying Young Earth Creationism, the empirical evidence is broadly consistent with it. Young Earth Creationism, they maintain, fits the evidence at least as well as its orthodox scientific rivals. Surely, they add, good science is all about developing theories to fit the evidence. But then, because they are developing their theory to make it fit the evidence, what they are practicing is good science. Moreover, if theories are confirmed to the extent that they fit the evidence, then Young Earth Creationism, developed and refined in these ways, is as well confirmed as its rivals.