The complexity of trading rules admits
of degrees. Most of the rules to which people subscribe are simple, involving
support levels, P/E ratios, or hemlines and Super Bowls, for example. Others,
however, are quite convoluted and conditional. Because of the variety of
possible rules, I want to take an oblique and abstract approach here. The hope
is that this approach will yield insights that a more pedestrian approach
misses. Its key ingredient is the formal definition of (a type of) complexity.
An intuitive understanding of this notion tells us that someone who remembers
his eight-digit password by means of an elaborate, long-winded saga of friends’
addresses, children’s ages, and special anniversaries is doing something silly.
Mnemonic rules make sense only when they’re shorter than what is to be
remembered.
Let’s back up a bit and consider how
we might describe the following sequences to an acquaintance who couldn’t see
them. We may imagine the 1s to represent upticks in the price of a stock and the
0s downticks or perhaps up-and-down days.
1. 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 . . .
2. 0 1 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 0 1 1 . . .
3. 1 0 0 0 1 0 1 1 0 1 1 0 1 1 0 0 0 1 0 1 0 1 1 0 0 . . .
The first sequence is the simplest, an
alternation of 0s and 1s. The second sequence has some regularity to it, a
single 0 alternating sometimes with a 1, sometimes with two 1s, while the third
sequence doesn’t seem to manifest any pattern at all. Observe that the precise
meaning of “ . . . ” in the first sequence is clear; it is less so in the second
sequence, and not at all clear in the third. Despite this, let’s assume that
these sequences are each a trillion bits long (a bit is a 0 or a 1) and continue
on “in the same way.”
Motivated by examples like this, the
American computer scientist Gregory Chaitin and the Russian mathematician A. N.
Kolmogorov defined the complexity of a sequence of 0s and 1s to be the length of
the shortest computer program that will generate (that is, print out) the
sequence in question.
A program that prints out the first
sequence above can consist simply of the following recipe: print a 0, then a 1,
and repeat a half trillion times. Such a program is quite short, especially
compared to the long sequence it generates. The complexity of this first
trillion-bit sequence may be only a few hundred bits, depending to some extent
on the computer language used to write the program.
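In Python, say, the recipe is only a couple of lines, and, crucially, the program stays the same size however many repetitions we ask for (here a modest ten rather than half a trillion). This sketch is my illustration, not part of the original text:

```python
def alternating(n_pairs):
    """Generate the sequence 0 1 0 1 ... as a string of n_pairs '01' pairs."""
    return "01" * n_pairs

# Ten pairs instead of half a trillion; the program itself never grows.
print(alternating(10))  # -> 01010101010101010101
```

Asking for a half-trillion pairs changes only the number in the call, not the length of the program, which is the point: the sequence's complexity is bounded by this fixed program length.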
A program that generates the second
sequence would be a translation of the following: Print a 0 followed by either a
single 1 or two 1s, the pattern of the intervening 1s being one, two, one, one,
one, two, one, one, and so on. Any program that prints out this trillion-bit
sequence would have to be quite long so as to fully specify the “and so on”
pattern of the intervening 1s. Nevertheless, because of the regular alternation
of 0s and either one or two 1s, the shortest such program will be considerably
shorter than the trillion-bit sequence it generates. Thus the complexity of this
second sequence might be only, say, a quarter trillion bits.
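A sketch of such a program, again in Python and again my own illustration: the list of run lengths (the "one, two, one, one, . . ." pattern) must be written out explicitly, which is why the program grows with the sequence, though far more slowly than bit for bit:

```python
def blocks(run_lengths):
    """Print a 0 followed by one or two 1s, per the stored pattern.

    The run_lengths list must be spelled out in full, so the program
    grows with the sequence -- but by roughly one entry per block of
    two or three bits, not one entry per bit.
    """
    return "".join("0" + "1" * r for r in run_lengths)

# The pattern of intervening 1s given in the text: one, two, one, one, ...
print(blocks([1, 2, 1, 1, 1, 2, 1, 1]))  # -> 010110101010110101
```

The output matches the opening bits of the second sequence; specifying the whole trillion-bit version would require a run-length list roughly a third of a trillion entries long, which is why the complexity lands well below a trillion bits but well above a few hundred.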
With the third sequence (the commonest
type) the situation is different. This sequence, let us assume, remains so
disorderly throughout its trillion-bit length that no program we might use to
generate it would be any shorter than the sequence itself. It never repeats,
never exhibits a pattern. All any program can do in this case is dumbly list the
bits in the sequence: print 1, then 0, then 0, then 0, then 1, then 0, then 1, .
. . . There is no way the . . . can be compressed or the program shortened. Such
a program will be as long as the sequence it’s supposed to print out, and thus
the third sequence has a complexity of approximately a trillion bits.

A sequence like the third one, which
requires a program as long as itself to be generated, is said to be random.
Random sequences manifest no regularity or order, and the programs that print
them out can do nothing more than direct that they be copied: print 1 0 0 0 1 0
1 1 0 1 1 . . . . These programs cannot be abbreviated; the complexity of the
sequences they generate is equal to the length of these sequences. By contrast,
ordered, regular sequences like the first can be generated by very short
programs and have complexity much less than their length.
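Ordinary compression programs give a crude, computable stand-in for this notion: an orderly string compresses dramatically, a random one barely at all. A small experiment in Python, using zlib as the compressor (my illustration; true Kolmogorov complexity is uncomputable, and zlib is only a pragmatic proxy):

```python
import random
import zlib

# Each bit is stored here as a whole ASCII character, so even the
# "random" string shrinks somewhat from that byte-level redundancy;
# the contrast with the orderly string is what matters.
n = 100_000
ordered = "01" * (n // 2)                                    # like sequence 1
random.seed(0)                                               # reproducible "noise"
disordered = "".join(random.choice("01") for _ in range(n))  # like sequence 3

comp_ordered = len(zlib.compress(ordered.encode(), 9))
comp_disordered = len(zlib.compress(disordered.encode(), 9))
print(f"ordered:    {n} chars -> {comp_ordered} bytes compressed")
print(f"disordered: {n} chars -> {comp_disordered} bytes compressed")
```

The orderly string collapses to a few hundred bytes while the disorderly one stays stuck near its information content, mirroring the gap between the first and third sequences above.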
Returning to stocks, different market
theorists will have different ideas about the likely pattern of 0s and 1s (downticks
and upticks) that can be expected. Strict random walk theorists are likely to
believe that sequences like the third characterize price movements and that the
market’s movements are therefore beyond the “complexity horizon” of human
forecasters (more complex than we, or our brains, are, were we expressed as
sequences of 0s and 1s). Technical and fundamental analysts might be more
inclined to believe that sequences like the second characterize the market and
that there are pockets of order amidst the noise. It’s hard to imagine anyone
believing that price movements follow sequences as regular as the first except,
possibly, those who send away “only $99.95 for a complete set of tapes that
explain this revolutionary system.”
I reiterate that this approach to
stock price movements is rather stark, but it does nevertheless “locate” the
debate. People who believe there is some pattern to the market, whether
exploitable or not, will believe that its movements are characterized by
sequences of complexity somewhere between those of type two and type three
above.
A rough paraphrase of Kurt Gödel’s
famous incompleteness theorem of mathematical logic, due to the aforementioned
Gregory Chaitin, provides an interesting sidelight on this issue. It states that
if the market were random, we might not be able to prove it. The reason: encoded
as a sequence of 0s and 1s, a random market would, it seems plausible to assume,
have complexity greater than that of our own were we also so encoded; it would
be beyond our complexity horizon. From the definition of complexity it follows
that a sequence can’t generate another sequence of greater complexity than
itself. Thus if a person were to predict the random market’s exact gyrations,
the market would have to be less complex than the person, contrary to
assumption. Even if the market isn’t random, there remains the possibility that
its regularities are so complex as to be beyond our complexity
horizons.
In any case, there is no reason why
the complexity of price movements as well as the complexity of investor/computer
blends cannot change over time. The more inefficient the market is, the smaller
the complexity of its price movements, and the more likely it is that tools from
technical and fundamental analysis will prove useful. Conversely, the more
efficient the market is, the greater the complexity of price movements, and the
closer the approach to a completely random sequence of price
changes.
Outperforming the market requires that
one remain on the cusp of our collective complexity horizon. It requires faster
machines, better data, improved models, and the smarter use of mathematical
tools, from conventional statistics to neural nets (computerized learning
networks, the connections between the various nodes of which are strengthened or
weakened over a period of training). If this is possible for anyone or any group
to achieve, it’s not likely to remain so for long.