where $P_k \equiv P(t_k)$, $f_0(P_0)$ is the marginal density function of $P_0$, and $f(P_k, t_k \mid P_{k-1}, t_{k-1}; \theta)$ is the conditional density function of $P_k$ given $P_{k-1}$, also called the transition density function. For notational simplicity, we will write $f(P_k, t_k \mid P_{k-1}, t_{k-1}; \theta)$ simply as $f_k$. Given (9.3.4) and the observations $P_0, \ldots, P_n$, the parameter vector $\theta$ may be estimated by the method of maximum likelihood (see Section A.4 of the Appendix and Silvey [1975, Chapter 4]). To define the maximum likelihood estimator $\hat{\theta}$, let $\mathcal{L}(\theta)$ denote the log-likelihood function, the natural logarithm of the joint density function of $P_0, \ldots, P_n$ viewed as a function of $\theta$:

$$\mathcal{L}(\theta) \equiv \sum_{k=0}^{n} \log f_k. \qquad (9.3.5)$$

The maximum likelihood estimator is then given by

$$\hat{\theta} \equiv \mathop{\arg\max}_{\theta}\, \mathcal{L}(\theta). \qquad (9.3.6)$$

Under suitable regularity conditions, $\hat{\theta}$ is consistent and has the following normal limiting distribution:

$$\sqrt{n}\,(\hat{\theta} - \theta) \;\sim\; \mathcal{N}\bigl(0, \mathcal{I}^{-1}(\theta)\bigr), \qquad \mathcal{I}(\theta) \equiv \lim_{n \to \infty} -\frac{1}{n}\, E\!\left[\frac{\partial^2 \mathcal{L}(\theta)}{\partial\theta\,\partial\theta'}\right], \qquad (9.3.7)$$

where $\mathcal{I}(\theta)$ is called the information matrix. When $n$ is large, the asymptotic distribution in (9.3.7) allows us to approximate the variance of $\hat{\theta}$ as

$$\mathrm{Var}[\hat{\theta}] \;\approx\; \frac{1}{n}\,\mathcal{I}^{-1}(\theta), \qquad (9.3.8)$$

and the information matrix $\mathcal{I}(\theta)$ may also be estimated in the natural way:

$$\hat{\mathcal{I}} = -\frac{1}{n}\,\frac{\partial^2 \mathcal{L}(\hat{\theta})}{\partial\theta\,\partial\theta'}. \qquad (9.3.9)$$

Moreover, $\hat{\theta}$ has been shown to be asymptotically efficient in the class of all consistent and uniformly asymptotically normal (CUAN) estimators; that is, it has the smallest asymptotic variance of all estimators that are CUAN, hence it is the preferred method of estimation whenever feasible. Of course, maximum likelihood estimation is only feasible when the likelihood function can be obtained in closed form which, in our case, requires obtaining the transition density functions $f_k$ in closed form. Unfortunately, a closed-form expression for $f_k$ for arbitrary drift and diffusion functions cannot be obtained in general. However, it is possible to characterize $f_k$ implicitly as the solution to a PDE.
In particular, fix the conditioning variables $P_{k-1}$ and $t_{k-1}$ and let $f_k$ be a function of $P_k$ and $t_k$; to emphasize this, we drop the subscript $k$ from the arguments and write $f_k(P, t \mid P_{k-1}, t_{k-1})$. Then it follows from the Fokker-Planck or forward equation that $f_k$ must satisfy the following (see Lo [1988] for a derivation):

$$\frac{\partial f_k}{\partial t} = -\frac{\partial\,[a(P,t)\,f_k]}{\partial P} + \frac{1}{2}\,\frac{\partial^2\,[b^2(P,t)\,f_k]}{\partial P^2}, \qquad (9.3.10)$$

with initial condition

$$f_k(P, t_{k-1} \mid P_{k-1}, t_{k-1}) = \delta(P - P_{k-1}), \qquad (9.3.11)$$

where $\delta(P - P_{k-1})$ is the Dirac delta function centered at $P_{k-1}$. Maximum likelihood estimation is feasible whenever (9.3.10) can be solved explicitly, and this depends on how complicated the coefficient functions $a$ and $b$ are. Once $f_k$ is obtained, $\hat{\theta}$ can be computed numerically given the data $P_0, \ldots, P_n$. To interpret this estimator, we must check that the regularity conditions for the consistency and asymptotic normality of $\hat{\theta}$ are satisfied. In some cases of interest they are not. For example, a lognormal diffusion $dP = \mu P\,dt + \sigma P\,dB$ violates the stationarity requirement. But in this case, a simple log-transformation of the data does satisfy the regularity conditions: $r_1, \ldots, r_n$, where $r_k \equiv \log(P_k/P_{k-1})$, is a stationary sequence. Maximum likelihood estimation of $\theta$ may then be performed with $\{r_k\}$. We shall return to this example in Section 9.3.2.

GMM Estimation

For many combinations of coefficient functions $a$ and $b$, (9.3.10) cannot be solved explicitly, hence for these cases maximum likelihood estimation is infeasible. An alternative proposed by Hansen and Scheinkman (1995) is to apply Hansen's (1982) generalized method of moments (GMM) estimator, which they extend to the case of strictly stationary continuous-time Markov processes (see the Appendix for an exposition of GMM). The focus of any GMM procedure is, of course, the moment conditions in which the parameter vector $\theta$ is implicitly defined. The GMM estimator is that parameter vector $\hat{\theta}$ that minimizes the distance between the sample moment conditions and their population counterparts.
The properties of a GMM estimator depend critically on the choice of moment conditions and the distance metric, and for standard discrete-time GMM applications these two issues have been studied quite thoroughly. Moment conditions are typically suggested by the interaction of conditioning information and optimality or equilibrium conditions such as Euler equations, e.g., the orthogonality conditions of linear instrumental variables estimation (see the Appendix, Section A.1). In some cases, the optimal (in an asymptotic sense) distance metric can be deduced explicitly (see, for example, Hamilton [1994, Chapter 14]), and efficiency bounds can be obtained (see Hansen (1985) and Hansen, Heaton, and Ogaki (1988)). But for continuous-time applications in finance, especially those involving derivative securities, much less is known about the properties of GMM estimators. Indeed, one of Hansen and Scheinkman's (1995) main contributions is to show how to generate moment conditions for continuous-time Markov processes with discretely sampled data.

Although a complete exposition of GMM estimation for continuous-time processes is beyond the scope of this text, the central thrust of their approach can be illustrated through a simple example. Suppose we wish to estimate the parameters of the following stationary diffusion process:

$$dp = -\gamma(p - \mu)\,dt + \sigma\,dB, \qquad p(0) = p_0 > 0, \quad \gamma > 0. \qquad (9.3.12)$$

This is a continuous-time version of a stationary AR(1) process with unconditional mean $\mu$ (see Section 9.3.4 for further discussion), and hence it satisfies the hypotheses of Hansen and Scheinkman (1995). To generate moment conditions for $\{p(t)\}$, Hansen and Scheinkman (1995) use the infinitesimal generator $\mathcal{D}$, also known as the Dynkin operator, which is the time-derivative of a conditional expectation. Specifically,

$$\mathcal{D}[p] \equiv \frac{d\,E_0[p]}{dt}, \qquad (9.3.13)$$

where the expectations operator $E_0[\cdot]$ is a conditional expectation, conditioned on $p(0) = p_0$.
This operator has several important applications for deriving moment conditions of diffusions. For example, consider the following heuristic calculation of the expectation of $dp$:

$$E_0[dp] = E_0[-\gamma(p - \mu)\,dt + \sigma\,dB] \qquad (9.3.14)$$
$$\phantom{E_0[dp]} = -\gamma(E_0[p] - \mu)\,dt + \sigma\,E_0[dB] \qquad (9.3.15)$$
$$d\,E_0[p] = -\gamma(E_0[p] - \mu)\,dt \qquad (9.3.16)$$
$$\frac{d\,E_0[p]}{dt} = -\gamma(E_0[p] - \mu) \qquad (9.3.17)$$
$$\mathcal{D}[p] = -\gamma(E_0[p] - \mu), \qquad (9.3.18)$$

where (9.3.14) and (9.3.15) follow from the fact that the expectation of a linear function is the linear function of the expectation, (9.3.16) follows from the same property for the differential operator and from the fact that increments of Brownian motion have zero expectation, and (9.3.17) is another way of expressing (9.3.16). Before considering the importance of (9.3.18), observe that (9.3.17) is a first-order linear ordinary differential equation in $E_0[p]$ which can easily be solved to yield a closed-form expression for $E_0[p]$ (note the initial condition $E_0[p(0)] = p_0$):

$$E_0[p] = \mu + (p_0 - \mu)\,e^{-\gamma t}.$$

By applying similar arguments to the stochastic differential equations of $p^2$, $p^3$, and so on (which may be obtained explicitly via Itô's Lemma), all higher moments of $p$ may be computed in the same way.

Now consider the unconditional version of the infinitesimal generator, $\mathcal{D}[\cdot] \equiv d\,E[\cdot]/dt$. A series of calculations similar to (9.3.14)-(9.3.18) follows, but with one important difference: the time derivative of the unconditional expectation is zero, since $p$ is a strictly stationary process; hence we have the restriction

$$\mathcal{D}[p] = -\gamma(E[p] - \mu) = 0, \qquad (9.3.19)$$

which implies

$$E[p] = \mu, \qquad (9.3.20)$$

and this yields a first moment condition: the unconditional expectation of $p$ must equal the steady-state mean $\mu$. More importantly, we can apply the infinitesimal generator to any well-behaved transformation $f(\cdot)$ of $p$, and by Itô's Lemma we have

$$\mathcal{D}[f(p)] = E\!\left[-f'(p)\,\gamma(p - \mu) + \frac{\sigma^2}{2}\,f''(p)\right] = 0, \qquad (9.3.21)$$

which yields an infinite number of moment conditions (one for each $f$) related to the marginal distribution of $f(p)$. From these moment conditions, and under the regularity conditions specified by Hansen and Scheinkman (1995), GMM estimation may be performed in the usual way.
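The stationarity restrictions above are easy to check by simulation. The sketch below (illustrative only; the function name, parameter values, and sample size are my own choices) simulates the diffusion (9.3.12) using its exact Gaussian AR(1) transition, then verifies two consequences of (9.3.19)-(9.3.21): $f(p) = p$ gives $E[p] = \mu$, and $f(p) = p^2$ gives $-2\gamma(E[p^2] - \mu E[p]) + \sigma^2 = 0$, i.e. $\mathrm{Var}[p] = \sigma^2/(2\gamma)$.

```python
import math
import random

def simulate_ou(mu, gamma, sigma, p0, h, n, seed=0):
    """Simulate dp = -gamma*(p - mu) dt + sigma dB on a grid of step h,
    using the exact discrete-time transition (no discretization bias)."""
    rng = random.Random(seed)
    phi = math.exp(-gamma * h)                           # AR(1) coefficient
    sd = sigma * math.sqrt((1 - phi**2) / (2 * gamma))   # conditional std. dev.
    p, path = p0, []
    for _ in range(n):
        p = mu + phi * (p - mu) + sd * rng.gauss(0.0, 1.0)
        path.append(p)
    return path

mu, gamma, sigma = 0.5, 2.0, 0.3
path = simulate_ou(mu, gamma, sigma, p0=mu, h=0.1, n=200_000)

m1 = sum(path) / len(path)                       # moment condition (9.3.20)
var = sum((p - m1)**2 for p in path) / len(path)
# f(p) = p^2 in (9.3.21) implies the stationary variance sigma^2/(2*gamma)
print(m1, var, sigma**2 / (2 * gamma))
```

The sample mean and variance should land close to $\mu = 0.5$ and $\sigma^2/(2\gamma) = 0.0225$; note that single-point moment conditions like these only pin down the marginal distribution, which is one reason Hansen and Scheinkman also develop the multipoint conditions discussed next.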
Hansen and Scheinkman (1995) also generate multipoint moment conditions, conditions which exploit information contained in the conditional and joint distributions of $f(p)$, making creative use of the properties of time-reversed diffusions along the way, and they provide asymptotic approximations for statistical inference. Although it is too early to say how their approach will perform in practice, it seems quite promising and, for many Itô processes of practical interest, the GMM estimator is currently the only one that is both computationally feasible and consistent.

9.3.2 Estimating $\sigma$ in the Black-Scholes Model

To illustrate the techniques described in Section 9.3.1, we consider the implementation of the Black-Scholes model (9.2.16) in which the parameter $\sigma$ must be estimated. A common misunderstanding about $\sigma$ is that it is the standard deviation of simple returns $R_k$ of the stock. If, for example, the annual standard deviation of IBM's stock return is 30%, it is often assumed that $\sigma = 0.30$. To see why this is incorrect, let prices $P(t)$ follow a lognormal diffusion (9.2.2) as required by the Black-Scholes model (see Section 9.2.1) and assume, for expositional simplicity, that prices are sampled at equally spaced intervals of length $h$ in the interval $[0, T]$, hence $P_k \equiv P(kh)$, $k = 0, 1, \ldots, n$, and $T = nh$. Then simple returns $R_k(h) \equiv (P_k/P_{k-1}) - 1$ are lognormally distributed with mean and variance:

$$E[R_k(h)] = e^{\mu h} - 1 \qquad (9.3.22)$$
$$\mathrm{Var}[R_k(h)] = e^{2\mu h}\bigl[e^{\sigma^2 h} - 1\bigr]. \qquad (9.3.23)$$

Therefore, the magnitude of IBM's $\sigma$ cannot be gauged solely by the 30% estimate, since this is an estimate of $\sqrt{\mathrm{Var}[R_k(h)]}$ and not of $\sigma$. In particular, solving (9.3.22) and (9.3.23) for $\sigma$ yields the following:

$$\sigma = \sqrt{\frac{1}{h}\,\log\!\left[1 + \frac{\mathrm{Var}[R_k(h)]}{\bigl(1 + E[R_k(h)]\bigr)^2}\right]}. \qquad (9.3.24)$$

Therefore, a mean and standard deviation of 10% and 30%, respectively, for IBM's annual simple returns implies a value of 26.8% for $\sigma$.
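The conversion (9.3.24) is a one-line computation; a minimal sketch follows (the helper name is my own):

```python
import math

def sigma_from_simple_returns(mean_r, sd_r, h=1.0):
    """Back out the diffusion parameter sigma from the mean and standard
    deviation of simple returns over holding period h, via (9.3.24)."""
    return math.sqrt(math.log(1.0 + sd_r**2 / (1.0 + mean_r)**2) / h)

# The IBM example from the text: 10% mean, 30% std. dev. of annual simple returns
print(round(sigma_from_simple_returns(0.10, 0.30), 3))  # 0.268
```

This reproduces the 26.8% figure quoted above for annual data ($h = 1$).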
While 30% and 26.8% may seem almost identical for most practical purposes, the former value implies a Black-Scholes price of $8.48 for a one-year call option with a $35 strike price on a $40 stock, whereas the latter value implies a Black-Scholes price of $8.10 on the same option, an economically significant difference. Since most published statistics for equity returns are based on simple returns, (9.3.24) is a useful formula to obtain a quick ballpark estimate of $\sigma$ when historical data are not readily available.

If historical data are available, it is a simple matter to compute the maximum likelihood estimator of $\sigma$ using continuously compounded returns, as discussed in Sections 9.3.1 and 9.3.3. In particular, applying Itô's Lemma to $\log P(t)$ and substituting (9.2.2) for $dP$ yields

$$d\log P = \left(\mu - \tfrac{1}{2}\sigma^2\right)dt + \sigma\,dB = \alpha\,dt + \sigma\,dB, \qquad (9.3.25)$$

where $\alpha \equiv \mu - \tfrac{1}{2}\sigma^2$. Therefore, continuously compounded returns $r_k(h) \equiv \log(P_k/P_{k-1})$ are IID normal random variables with mean $\alpha h$ and variance $\sigma^2 h$; hence the sample variance of $r_k(h)/\sqrt{h}$ should be a good estimator of $\sigma^2$; in fact, the sample variance of $r_k(h)/\sqrt{h}$ is the maximum likelihood estimator of $\sigma^2$. More formally, under (9.3.25) the likelihood function of a sample of continuously compounded returns $r_1(h), \ldots, r_n(h)$ is

$$\mathcal{L}(\alpha, \sigma) = -\frac{n}{2}\log(2\pi\sigma^2 h) - \frac{1}{2\sigma^2 h}\sum_{k=1}^{n}\bigl(r_k(h) - \alpha h\bigr)^2, \qquad (9.3.26)$$

and in this case the maximum likelihood estimators for $\alpha$ and $\sigma^2$ can be obtained in closed form:

$$\hat{\alpha} = \frac{1}{nh}\sum_{k=1}^{n} r_k(h) \qquad (9.3.27)$$
$$\hat{\sigma}^2 = \frac{1}{nh}\sum_{k=1}^{n}\bigl(r_k(h) - \hat{\alpha}h\bigr)^2. \qquad (9.3.28)$$

Moreover, because the $r_k(h)$'s are IID normal random variables under the dynamics (9.2.2), the regularity conditions for the consistency and asymptotic normality of the estimators $\hat{\alpha}$ and $\hat{\sigma}^2$ are satisfied (see the Appendix, Section A.4). Therefore, we are assured that $\hat{\alpha}$ and $\hat{\sigma}^2$ are asymptotically efficient in the class of CUAN estimators, with asymptotic covariance matrix given by (9.3.8).
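The closed-form estimators (9.3.27)-(9.3.28) can be sketched directly in code. The snippet below simulates IID normal continuously compounded returns as implied by (9.3.25) and recovers $\alpha$ and $\sigma^2$; the function name, parameter values, and sample size are my own illustrative choices.

```python
import math
import random

def ml_estimates(r, h):
    """Closed-form MLEs (9.3.27)-(9.3.28) for alpha and sigma^2,
    given continuously compounded returns r_1(h), ..., r_n(h)."""
    n = len(r)
    alpha_hat = sum(r) / (n * h)
    sigma2_hat = sum((rk - alpha_hat * h) ** 2 for rk in r) / (n * h)
    return alpha_hat, sigma2_hat

# Under (9.2.2), r_k(h) ~ IID N(alpha*h, sigma^2*h): simulate, then re-estimate.
# With mu = 0.10 and sigma = 0.30, alpha = mu - sigma^2/2 = 0.055.
alpha, sigma, h = 0.055, 0.30, 1.0 / 12.0     # monthly sampling
rng = random.Random(1)
r = [rng.gauss(alpha * h, sigma * math.sqrt(h)) for _ in range(120_000)]
alpha_hat, sigma2_hat = ml_estimates(r, h)
print(alpha_hat, sigma2_hat)   # close to 0.055 and 0.09
```

Note that $\hat{\sigma}^2$ converges much faster than $\hat{\alpha}$ as the sampling interval shrinks, a point taken up in the discussion of irregularly sampled data below.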
Irregularly Sampled Data

To see why irregularly sampled data pose no problems for continuous-time processes, observe that the sampling interval $h$ is arbitrary and can change mid-sample without affecting the functional form of the likelihood function (9.3.26). Suppose, for example, we measure returns annually for the first $n_1$ observations and then monthly for the next $n_2$ observations. If $h$ is measured in units of one year, so that $h = 1$ indicates a one-year holding period, the maximum likelihood estimator of $\sigma^2$ for the $n_1 + n_2$ observations is given by

$$\hat{\sigma}^2 = \frac{1}{n_1 + n_2}\left[\sum_{k=1}^{n_1}\bigl(r_k(1) - \bar{r}(1)\bigr)^2 + \sum_{k=n_1+1}^{n_1+n_2}\frac{\bigl(r_k(1/12) - \bar{r}(1/12)\bigr)^2}{1/12}\right], \qquad (9.3.29)$$

where

$$\bar{r}(1) \equiv \frac{1}{n_1}\sum_{k=1}^{n_1} r_k(1), \qquad \bar{r}(1/12) \equiv \frac{1}{n_2}\sum_{k=n_1+1}^{n_1+n_2} r_k(1/12).$$

Observe that the second term of (9.3.29) may be rewritten as

$$\frac{n_2}{n_1 + n_2}\cdot 12\cdot\frac{1}{n_2}\sum_{k=n_1+1}^{n_1+n_2}\bigl(r_k(1/12) - \bar{r}(1/12)\bigr)^2,$$

which is simply the variance estimator of monthly continuously compounded returns, rescaled to an annual frequency. The ease with which irregularly sampled data can be accommodated is one of the greatest advantages of continuous-time stochastic processes.
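A sketch of (9.3.29) in code: each return is demeaned at its own sampling frequency, the squared deviations are divided by their holding period $h_k$, and everything is pooled. The function name and the simulated sample (annual then monthly returns with a common annualized $\sigma = 0.30$) are my own illustrative choices.

```python
import math
import random

def sigma2_mixed(r_annual, r_monthly):
    """MLE of sigma^2 (per year) from n1 annual returns followed by n2
    monthly returns, as in (9.3.29): pool (r_k - rbar)^2 / h_k over both."""
    n1, n2 = len(r_annual), len(r_monthly)
    rbar1 = sum(r_annual) / n1
    rbar12 = sum(r_monthly) / n2
    ss = sum((rk - rbar1) ** 2 for rk in r_annual)              # h = 1
    ss += sum((rk - rbar12) ** 2 for rk in r_monthly) / (1 / 12)  # h = 1/12
    return ss / (n1 + n2)

sigma = 0.30
rng = random.Random(2)
r_a = [rng.gauss(0.06, sigma) for _ in range(5_000)]                       # annual
r_m = [rng.gauss(0.005, sigma * math.sqrt(1 / 12)) for _ in range(60_000)]  # monthly
s2 = sigma2_mixed(r_a, r_m)
print(s2)   # close to sigma**2 = 0.09
```

Because every squared deviation is rescaled by its own $h_k$, both subsamples contribute to the same annualized $\sigma^2$, which is exactly the point of (9.3.29).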