

Nonlinearities in Financial Data

THE ECONOMETRIC METHODS we discuss in this text are almost all designed to detect linear structure in financial data. In Chapter 2, for example, we develop time-series tests for predictability of asset returns that use weighted combinations of return autocorrelations; linear predictability is the focus. The event study of Chapter 4, and the CAPM and APT of Chapters 5 and 6, are based on linear models of expected returns. And even when we broaden our focus in later chapters to include other economic variables such as consumption, dividends, and interest rates, the models remain linear. This emphasis on linearity should not be too surprising, since many of the economic models that drive financial econometrics are linear models.

However, many aspects of economic behavior may not be linear. Experimental evidence and casual introspection suggest that investors' attitudes towards risk and expected return are nonlinear. The terms of many financial contracts such as options and other derivative securities are nonlinear. And the strategic interactions among market participants, the process by which information is incorporated into security prices, and the dynamics of economy-wide fluctuations are all inherently nonlinear. Therefore, a natural frontier for financial econometrics is the modeling of nonlinear phenomena.

This is quite a challenge, since the collection of nonlinear models is much larger than the collection of linear models; after all, everything which is not linear is nonlinear. Moreover, nonlinear models are generally more difficult to analyze than linear ones, rarely producing closed-form expressions that can be easily manipulated and empirically implemented. In some cases, the only mode of analysis is computational, and this is unfamiliar territory to those of us who are accustomed to thinking analytically, intuitively, and linearly.

But economists of a new generation are creating new models and tools that can capture nonlinearities in economic phenomena, and some of these models and tools are the focus of this chapter. Exciting advances in dynamical systems theory, nonlinear time-series analysis, stochastic-volatility models, nonparametric statistics, and artificial neural networks have fueled the recent interest in nonlinearities in financial data, and we shall explore each of these topics in the following sections.

Section 12.1 revisits some of the issues raised in Chapter 2 regarding predictability, but from a linear-versus-nonlinear perspective. We present a taxonomy of models that distinguishes between models that are nonlinear in mean and hence depart from the martingale hypothesis, and models that are nonlinear in variance and hence depart from independence but not from the martingale hypothesis.

Section 12.2 explores in greater detail models that are nonlinear in variance, including univariate and multivariate Generalized Autoregressive Conditionally Heteroskedastic (GARCH) and stochastic-volatility models.

In Sections 12.3 and 12.4 we move beyond parametric time-series models to explore nonparametric methods for fitting nonlinear relationships between variables, including smoothing techniques and artificial neural networks. Although these techniques are able to uncover a variety of nonlinearities, they are heavily data-dependent and computationally intensive. To illustrate the power of these techniques, we present an application to the pricing and hedging of derivative securities and to estimating state-price densities.

We also discuss some of the limitations of these techniques in Section 12.5. The most important limitations are the twin problems of overfitting and data-snooping, which plague linear models too, but not nearly to the same degree. Unfortunately, we have very little to say about how to deal with these issues except in very special cases; hence this is an area with many open research questions to be answered.

12.1 Nonlinear Structure in Univariate Time Series

A typical time-series model relates an observed time series x_t to an underlying sequence of shocks ε_t. In linear time-series analysis the shocks are assumed to be uncorrelated but are not necessarily assumed to be IID. By the Wold Representation Theorem, any time series can be written as an infinite-order linear moving average of such shocks, and this linear moving-average representation summarizes the unconditional variance and autocovariances of the series.

In nonlinear time-series analysis the underlying shocks are typically assumed to be IID, but we seek a possibly nonlinear function relating the series x_t to the history of the shocks. A general representation is

x_t = f(ε_t, ε_{t-1}, ε_{t-2}, ...),     (12.1.1)



where the shocks are assumed to have mean zero and unit variance, and f(·) is some unknown function. The generality of this representation makes it very hard to work with; most models used in practice fall into a somewhat more restricted class that can be written as

x_t = g(ε_{t-1}, ε_{t-2}, ...) + ε_t h(ε_{t-1}, ε_{t-2}, ...).     (12.1.2)

The function g(·) represents the mean of x_t conditional on past information, since E_{t-1}[x_t] = g(ε_{t-1}, ε_{t-2}, ...). The innovation in x_t is proportional to the shock ε_t, where the coefficient of proportionality is the function h(·). The square of this function is the variance of x_t conditional on past information, since E_{t-1}[(x_t - E_{t-1}[x_t])^2] = h(ε_{t-1}, ε_{t-2}, ...)^2. Models with nonlinear g(·) are said to be nonlinear in mean, whereas models with nonlinear h(·)^2 are said to be nonlinear in variance.
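In code, the decomposition (12.1.2) can be sketched as a short generic simulator that takes g(·) and h(·) as arguments. The function name simulate_xt, the one-lag truncation of the shock history, and the Gaussian shocks are our illustrative assumptions, not part of the text:

```python
import numpy as np

def simulate_xt(g, h, n, seed=0):
    """Simulate x_t = g(eps_{t-1}) + eps_t * h(eps_{t-1}) with IID N(0,1)
    shocks.  For simplicity g and h here depend only on the most recent
    shock; the general model (12.1.2) conditions on the whole history."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + 1)
    return g(eps[:-1]) + eps[1:] * h(eps[:-1])

# Nonlinear in mean: g nonlinear, h constant.
x_mean = simulate_xt(lambda e: 0.5 * e**2, lambda e: np.ones_like(e), 10_000)

# Nonlinear in variance: g zero, h nonlinear.
x_var = simulate_xt(lambda e: np.zeros_like(e),
                    lambda e: np.sqrt(0.5) * np.abs(e), 10_000)
```

Passing the two functions separately makes explicit which ingredient, the conditional mean or the conditional standard deviation, carries the nonlinearity.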

To understand the restrictions imposed by (12.1.2) on (12.1.1), consider expanding (12.1.1) in a Taylor series around ε_t = 0 for given ε_{t-1}, ε_{t-2}, ...:

x_t = f(0, ε_{t-1}, ...) + ε_t f_1(0, ε_{t-1}, ...) + (ε_t^2/2) f_{11}(0, ε_{t-1}, ...) + ...,     (12.1.3)

where f_1 is the derivative of f with respect to ε_t, its first argument; f_{11} is the second derivative of f with respect to ε_t; and so forth. To obtain (12.1.2), we drop the higher-order terms in the Taylor expansion and set g(ε_{t-1}, ...) = f(0, ε_{t-1}, ...) and h(ε_{t-1}, ...) = f_1(0, ε_{t-1}, ...). By dropping higher-order terms we link the time-variation in the higher conditional moments of x_t inflexibly with the time-variation in the second conditional moment of x_t, since for all powers p > 2, E_{t-1}[(x_t - E_{t-1}[x_t])^p] = h(·)^p E[ε_t^p]. Those who are interested primarily in the first two conditional moments of x_t regard this restriction as a price worth paying for the greater tractability of (12.1.2).

Equation (12.1.2) leads to a natural division in the nonlinear time-series literature between models of the conditional mean g(·) and models of the conditional variance h(·)^2. Most time-series models concentrate on one form of nonlinearity or the other. A simple nonlinear moving-average model, for example, takes the form

x_t = ε_t + α ε_{t-1}^2.     (12.1.4)

Here g(·) = α ε_{t-1}^2 and h(·) = 1. This model is nonlinear in mean but not in variance. The first-order Autoregressive Conditionally Heteroskedastic (ARCH) model of Engle (1982), on the other hand, takes the form

x_t = ε_t √(α ε_{t-1}^2).     (12.1.5)

Here g(·) = 0 and h(·) = √(α ε_{t-1}^2). This model is nonlinear in variance but not in mean.
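A short simulation makes the contrast concrete. The sketch below (our own illustration, with α = 0.5 and Gaussian shocks) generates both series and computes first-order sample autocorrelations: both are serially uncorrelated in levels, but the squares of the ARCH series are autocorrelated, the signature of volatility clustering:

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha = 200_000, 0.5
eps = rng.standard_normal(n + 1)

# Nonlinear moving average (12.1.4): nonlinear in mean.
x_nma = eps[1:] + alpha * eps[:-1]**2

# First-order ARCH (12.1.5): nonlinear in variance.
x_arch = eps[1:] * np.sqrt(alpha) * np.abs(eps[:-1])

def acf1(z):
    """First-order sample autocorrelation of a series."""
    d = z - z.mean()
    return (d[1:] * d[:-1]).mean() / d.var()

print(acf1(x_nma), acf1(x_arch))  # both near zero: uncorrelated levels
print(acf1(x_arch**2))            # positive: dependence shows up in squares
```

For this ARCH specification the population autocorrelation of x_t^2 at lag one works out to 1/4, so the last printed value should be well away from zero.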



One way to understand the distinction between nonlinearity in mean and nonlinearity in variance is to consider the moments of the x_t process. As we have emphasized, nonlinear models can be constructed so that the second moments (autocovariances) Cov[x_t, x_{t-i}] are all zero for i > 0. In the two examples above it is easy to confirm that this is the case provided that ε_t is symmetrically distributed, i.e., its third moment is zero. For the nonlinear moving average (12.1.4), for example, we have Cov[x_t, x_{t-1}] = E[(ε_t + α ε_{t-1}^2)(ε_{t-1} + α ε_{t-2}^2)] - E[x_t]E[x_{t-1}] = α E[ε_{t-1}^3] = 0 when E[ε_t^3] = 0.

Now consider the behavior of higher moments of the form

E[x_t x_{t-i} x_{t-j} ... x_{t-k}].

Models that are nonlinear in mean allow these higher moments to be nonzero when i, j, k, ... > 0. Models that are nonlinear in variance but obey the martingale property have E[x_t x_{t-i} x_{t-j} ...] = 0 whenever i, j, k, ... > 0, so their higher moments are zero in that case. These models can only have nonzero higher moments if at least one time lag index i, j, k, ... is zero. In the nonlinear-moving-average example (12.1.4), the third moment with i = j = 1 is

E[x_t x_{t-1}^2] = E[(ε_t + α ε_{t-1}^2)(ε_{t-1} + α ε_{t-2}^2)^2]
                 = α E[ε_{t-1}^4] + α^3 E[ε_{t-2}^4] ≠ 0.

In the first-order ARCH example (12.1.5), the same third moment is E[x_t x_{t-1}^2] = E[(ε_t √(α ε_{t-1}^2)) α ε_{t-1}^2 ε_{t-2}^2] = 0. But for this model the fourth moment with i = j = k = 1 is E[x_t^2 x_{t-1}^2] = α^2 E[ε_t^2 ε_{t-1}^4 ε_{t-2}^2] ≠ 0.
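These moment calculations can be checked by Monte Carlo. The sketch below uses Gaussian shocks (for which E[ε^4] = 3 and odd moments vanish) and an illustrative α = 0.5, both our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 500_000, 0.5
eps = rng.standard_normal(n + 2)

x_nma = eps[2:] + alpha * eps[1:-1]**2                  # (12.1.4)
x_arch = eps[2:] * np.sqrt(alpha) * np.abs(eps[1:-1])   # (12.1.5)

# Third moment E[x_t x_{t-1}^2]: for the nonlinear MA it equals
# alpha*E[eps^4] + alpha^3*E[eps^4] = 1.875 here; for ARCH it is zero.
m3_nma = (x_nma[1:] * x_nma[:-1]**2).mean()
m3_arch = (x_arch[1:] * x_arch[:-1]**2).mean()

# Fourth moment E[x_t^2 x_{t-1}^2] for ARCH: alpha^2*E[eps^4] = 0.75,
# nonzero even though all the moments above with positive lags vanish.
m4_arch = (x_arch[1:]**2 * x_arch[:-1]**2).mean()

print(m3_nma, m3_arch, m4_arch)
```

The sample moments should land close to the theoretical values stated in the comments, confirming that the nonlinear MA reveals itself in third moments while ARCH reveals itself only in fourth moments.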

We discuss ARCH and other models of changing variance in Section 12.2; for the remainder of this section we concentrate on nonlinear models of the conditional mean. In Section 12.1.1 we explore several alternative ways to parametrize nonlinear models, and in Section 12.1.2 we use these parametric models to motivate and explain some commonly used tests for nonlinearity in univariate time series, including the test of Brock, Dechert, and Scheinkman (1987).

12.1.1 Some Parametric Models

It is impossible to provide an exhaustive account of all nonlinear specifications, even when we restrict our attention to the subset of parametric models. Priestley (1988), Teräsvirta, Tjøstheim, and Granger (1994), and Tong (1990) provide excellent coverage of many of the most popular nonlinear time-series models, including more-specialized models with some very intriguing names, e.g., self-exciting threshold autoregression (SETAR), amplitude-dependent exponential autoregression (EXPAR), and state-dependent models (SDM). To provide a sense of the breadth of this area, we discuss four examples in this section: polynomial models, piecewise-linear models, Markov-switching models, and deterministic chaotic models.

Polynomial Models

One way to represent the function g(·) is to expand it in a Taylor series around ε_{t-1} = ε_{t-2} = ... = 0, which yields a discrete-time Volterra series (see Volterra (1959)):

g(ε_{t-1}, ε_{t-2}, ...) = Σ_{i=1}^∞ a_i ε_{t-i} + Σ_{i=1}^∞ Σ_{j=i}^∞ a_{ij} ε_{t-i} ε_{t-j}

                         + Σ_{i=1}^∞ Σ_{j=i}^∞ Σ_{k=j}^∞ a_{ijk} ε_{t-i} ε_{t-j} ε_{t-k} + ...     (12.1.6)

The single summation in (12.1.6) is a standard linear moving average, the double summation captures the effects of lagged cross-products of two innovations, the triple summation captures the effects of lagged cross-products of three innovations, and so on. The summations indexed by j start at i, the summations indexed by k start at j, and so on, to avoid counting a given cross-product of innovations more than once. The idea is to represent the true nonlinear function of past innovations as a weighted sum of polynomial functions of the innovations. Equation (12.1.4) is a simple example of a model of this form. Robinson (1979) and Priestley (1988) make extensive use of this specification.
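A finite truncation of (12.1.6) is straightforward to evaluate in code. The sketch below (the function name and the second-order, finite-lag truncation are our illustrative choices) makes the indexing convention explicit: the inner sum runs over j ≥ i, so each cross-product of innovations is counted exactly once:

```python
import numpy as np

def volterra2(eps_hist, a1, a2):
    """Evaluate a second-order, finite-lag truncation of (12.1.6).

    eps_hist -- shocks ordered [eps_{t-1}, eps_{t-2}, ..., eps_{t-p}]
    a1       -- length-p vector of linear coefficients a_i
    a2       -- p-by-p matrix of quadratic coefficients a_{ij}; only the
                upper triangle (j >= i) is used, so each cross-product
                eps_{t-i} * eps_{t-j} enters exactly once.
    """
    e = np.asarray(eps_hist, dtype=float)
    p = len(e)
    linear = float(np.dot(a1, e))
    quad = sum(a2[i, j] * e[i] * e[j]
               for i in range(p) for j in range(i, p))
    return linear + quad

# The nonlinear moving average (12.1.4) is the special case with
# a1 = 0 and a2[0, 0] = alpha:
a1 = np.zeros(2)
a2 = np.zeros((2, 2))
a2[0, 0] = 0.5
print(volterra2([2.0, -1.0], a1, a2))  # 0.5 * 2.0**2 = 2.0
```

Higher-order terms would add triple (and deeper) loops with the same start-at-the-previous-index convention.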

Polynomial models may also be written in autoregressive form. The function g(ε_{t-1}, ε_{t-2}, ...) relating the conditional mean to past shocks may be rewritten as a function g*(x_{t-1}, x_{t-2}, ...) relating the conditional mean to lags of x_t. The autoregressive version of (12.1.6) is then

g*(x_{t-1}, x_{t-2}, ...) = Σ_{i=1}^∞ b_i x_{t-i} + Σ_{i=1}^∞ Σ_{j=i}^∞ b_{ij} x_{t-i} x_{t-j}

                          + Σ_{i=1}^∞ Σ_{j=i}^∞ Σ_{k=j}^∞ b_{ijk} x_{t-i} x_{t-j} x_{t-k} + ...     (12.1.7)

It is also possible to obtain mixed autoregressive/moving-average representations, the nonlinear equivalent of ARMA models. The bilinear model, for example, uses lagged values of x_t, lagged values of ε_t, and cross-products of the two:

g*(x_{t-1}, x_{t-2}, ...) = Σ_{i=1}^∞ a_i ε_{t-i} + Σ_{i=1}^∞ b_i x_{t-i} + Σ_{i=1}^∞ Σ_{j=1}^∞ c_{ij} x_{t-i} ε_{t-j}.     (12.1.8)
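As a concrete special case of (12.1.8), the sketch below simulates a first-order bilinear model. The coefficient values b and c are our illustrative choices, kept small enough (b^2 + c^2 < 1) that the simulated series remains stable:

```python
import numpy as np

def simulate_bilinear(n, b=0.3, c=0.4, seed=1):
    """Simulate x_t = b*x_{t-1} + c*x_{t-1}*eps_{t-1} + eps_t, a
    first-order special case of the bilinear model (12.1.8).  The
    cross-product x_{t-1}*eps_{t-1} is the bilinear term; setting
    c = 0 recovers an ordinary linear AR(1)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = b * x[t - 1] + c * x[t - 1] * eps[t - 1] + eps[t]
    return x

x = simulate_bilinear(10_000)
```

Writing the recursion as x_t = (b + c ε_{t-1}) x_{t-1} + ε_t shows why the model is called bilinear: it is an AR(1) whose coefficient is itself driven by the lagged shock.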


