

The implication of the Sharpe-Lintner version of the CAPM for (5.3.1) is that all of the elements of the vector $\alpha$ are zero. This implication follows from comparing the unconditional expectation of (5.3.1) to (5.1.3) and forms the principal hypothesis for tests of the model. If all elements of $\alpha$ are zero then $m$ is the tangency portfolio.

We use the maximum likelihood approach to develop estimators of the unconstrained model. Ordinary least squares (OLS) regressions asset by asset lead to the same estimators for $\alpha$ and $\beta$. To start, we consider the probability density function (pdf) of excess returns conditional on the excess return of the market. Given the assumed joint normality of excess returns, for the pdf of $\mathbf{Z}_t$ we have

$$ f(\mathbf{Z}_t \mid Z_{mt}) = (2\pi)^{-N/2}\,|\Sigma|^{-1/2} \exp\!\left[-\tfrac{1}{2}(\mathbf{Z}_t - \alpha - \beta Z_{mt})'\,\Sigma^{-1}(\mathbf{Z}_t - \alpha - \beta Z_{mt})\right], \qquad (5.3.6) $$

and since excess returns are temporally IID, given $T$ observations, the joint probability density function is

$$ f(\mathbf{Z}_1, \mathbf{Z}_2, \ldots, \mathbf{Z}_T \mid Z_{m1}, Z_{m2}, \ldots, Z_{mT}) = \prod_{t=1}^{T} f(\mathbf{Z}_t \mid Z_{mt}) \qquad (5.3.7) $$

$$ = \prod_{t=1}^{T} (2\pi)^{-N/2}\,|\Sigma|^{-1/2} \exp\!\left[-\tfrac{1}{2}(\mathbf{Z}_t - \alpha - \beta Z_{mt})'\,\Sigma^{-1}(\mathbf{Z}_t - \alpha - \beta Z_{mt})\right]. \qquad (5.3.8) $$

Given (5.3.8) and the excess-return observations, the parameters of the excess-return market model can be estimated using maximum likelihood. This approach is desirable because, given certain regularity conditions, maximum likelihood estimators are consistent, asymptotically efficient, and asymptotically normal. To define the maximum likelihood estimator, we form the log-likelihood function, that is, the logarithm of the joint probability density function viewed as a function of the unknown parameters $\alpha$, $\beta$, and $\Sigma$. Denoting $\mathcal{L}$ as the log-likelihood function we have:

$$ \mathcal{L}(\alpha, \beta, \Sigma) = -\frac{NT}{2}\log(2\pi) - \frac{T}{2}\log|\Sigma| - \frac{1}{2}\sum_{t=1}^{T}(\mathbf{Z}_t - \alpha - \beta Z_{mt})'\,\Sigma^{-1}(\mathbf{Z}_t - \alpha - \beta Z_{mt}). \qquad (5.3.9) $$

The maximum likelihood estimators are the values of the parameters which maximize $\mathcal{L}$. To find these estimators, we differentiate $\mathcal{L}$ with respect to $\alpha$, $\beta$, and $\Sigma$, and set the resulting equations to zero. The partial derivatives are

$$ \frac{\partial \mathcal{L}}{\partial \alpha} = \Sigma^{-1}\sum_{t=1}^{T}(\mathbf{Z}_t - \alpha - \beta Z_{mt}) \qquad (5.3.10) $$

$$ \frac{\partial \mathcal{L}}{\partial \beta} = \Sigma^{-1}\sum_{t=1}^{T}(\mathbf{Z}_t - \alpha - \beta Z_{mt})\,Z_{mt} \qquad (5.3.11) $$

$$ \frac{\partial \mathcal{L}}{\partial \Sigma} = -\frac{T}{2}\Sigma^{-1} + \frac{1}{2}\Sigma^{-1}\left[\sum_{t=1}^{T}(\mathbf{Z}_t - \alpha - \beta Z_{mt})(\mathbf{Z}_t - \alpha - \beta Z_{mt})'\right]\Sigma^{-1}. \qquad (5.3.12) $$

Setting (5.3.10), (5.3.11), and (5.3.12) to zero, we can solve for the maximum likelihood estimators. These are

$$ \hat{\alpha} = \hat{\mu} - \hat{\beta}\hat{\mu}_m \qquad (5.3.13) $$

$$ \hat{\beta} = \frac{\sum_{t=1}^{T}(\mathbf{Z}_t - \hat{\mu})(Z_{mt} - \hat{\mu}_m)}{\sum_{t=1}^{T}(Z_{mt} - \hat{\mu}_m)^2} \qquad (5.3.14) $$

$$ \hat{\Sigma} = \frac{1}{T}\sum_{t=1}^{T}(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})', \qquad (5.3.15) $$

where

$$ \hat{\mu} = \frac{1}{T}\sum_{t=1}^{T}\mathbf{Z}_t \quad\text{and}\quad \hat{\mu}_m = \frac{1}{T}\sum_{t=1}^{T}Z_{mt}. $$

As already noted, these are just the formulas for OLS estimators of the parameters.
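As a numerical illustration of this equivalence, the sketch below (the simulated data and all variable names are illustrative, not from the text) computes $\hat{\alpha}$, $\hat{\beta}$, and $\hat{\Sigma}$ from (5.3.13)-(5.3.15) and checks that asset-by-asset OLS recovers the same $\hat{\alpha}$ and $\hat{\beta}$:

```python
# Sketch: ML estimators (5.3.13)-(5.3.15) computed directly and checked
# against asset-by-asset OLS. Simulated data; names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 3                                   # sample size, number of assets
zm = 0.01 + 0.04 * rng.standard_normal(T)       # market excess returns
alpha = np.array([0.0, 0.002, -0.001])
beta = np.array([0.8, 1.0, 1.2])
Z = alpha + np.outer(zm, beta) + 0.02 * rng.standard_normal((T, N))

mu, mu_m = Z.mean(axis=0), zm.mean()
beta_hat = ((Z - mu) * (zm - mu_m)[:, None]).sum(axis=0) / ((zm - mu_m) ** 2).sum()
alpha_hat = mu - beta_hat * mu_m
resid = Z - alpha_hat - np.outer(zm, beta_hat)
Sigma_hat = resid.T @ resid / T                 # ML estimator divides by T

# Asset-by-asset OLS gives identical intercepts and slopes
X = np.column_stack([np.ones(T), zm])
ols = np.linalg.lstsq(X, Z, rcond=None)[0]      # row 0: alphas, row 1: betas
assert np.allclose(ols[0], alpha_hat) and np.allclose(ols[1], beta_hat)
```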

The distributions of the maximum likelihood estimators conditional on the excess return of the market, $Z_{m1}, Z_{m2}, \ldots, Z_{mT}$, follow from the assumed joint normality of excess returns and the IID assumption. The variances and covariances of the estimators can be derived using the inverse of the Fisher information matrix. As discussed in the Appendix, the Fisher information matrix is minus the expectation of the second-order derivative of the log-likelihood function with respect to the vector of the parameters.

5. The Capital Asset Pricing Model

The conditional distributions are

$$ \hat{\alpha} \sim N\!\left(\alpha,\; \frac{1}{T}\left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]\Sigma\right) \qquad (5.3.16) $$

$$ \hat{\beta} \sim N\!\left(\beta,\; \frac{1}{T}\,\frac{1}{\hat{\sigma}_m^2}\,\Sigma\right) \qquad (5.3.17) $$

$$ T\hat{\Sigma} \sim W_N(T-2, \Sigma), \qquad (5.3.18) $$

where $\hat{\sigma}_m^2 = \frac{1}{T}\sum_{t=1}^{T}(Z_{mt} - \hat{\mu}_m)^2$ is as previously defined.

The notation $W_N(T-2, \Sigma)$ indicates that the $(N \times N)$ matrix $T\hat{\Sigma}$ has a Wishart distribution with $(T-2)$ degrees of freedom and covariance matrix $\Sigma$. This distribution is a multivariate generalization of the chi-square distribution. Anderson (1984) and Muirhead (1983) provide discussions of its properties.

The covariance of $\hat{\alpha}$ and $\hat{\beta}$ is

$$ \mathrm{Cov}[\hat{\alpha}, \hat{\beta}] = -\frac{1}{T}\left[\frac{\hat{\mu}_m}{\hat{\sigma}_m^2}\right]\Sigma. \qquad (5.3.19) $$

$\hat{\Sigma}$ is independent of both $\hat{\alpha}$ and $\hat{\beta}$.

Using the unconstrained estimators, we can form a Wald test statistic of the null hypothesis,

$$ H_0: \alpha = 0 \qquad (5.3.20) $$

against the alternative hypothesis,

$$ H_A: \alpha \neq 0. \qquad (5.3.21) $$

The Wald test statistic is

$$ J_0 = \hat{\alpha}'\left[\mathrm{Var}[\hat{\alpha}]\right]^{-1}\hat{\alpha} = T\left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}\hat{\alpha}'\,\Sigma^{-1}\hat{\alpha}, \qquad (5.3.22) $$

where we have substituted from (5.3.16) for $\mathrm{Var}[\hat{\alpha}]$. Under the null hypothesis $J_0$ will have a chi-square distribution with $N$ degrees of freedom. Since $\Sigma$ is unknown, to use $J_0$ for testing $H_0$, we substitute a consistent estimator for $\Sigma$ in (5.3.22); then asymptotically the null distribution will be chi-square with $N$ degrees of freedom. The maximum likelihood estimator of $\Sigma$ can serve as a consistent estimator.
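The Wald statistic with the ML plug-in for $\Sigma$ can be sketched numerically as follows; the data are simulated under the null ($\alpha = 0$), and all names are illustrative:

```python
# Sketch of the Wald statistic J0 in (5.3.22), using the ML estimate of
# Sigma as the consistent plug-in. Simulated data under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, N = 120, 3
zm = 0.01 + 0.04 * rng.standard_normal(T)
Z = np.outer(zm, [0.8, 1.0, 1.2]) + 0.02 * rng.standard_normal((T, N))  # alpha = 0

X = np.column_stack([np.ones(T), zm])
coef = np.linalg.lstsq(X, Z, rcond=None)[0]
alpha_hat = coef[0]
resid = Z - X @ coef
Sigma_hat = resid.T @ resid / T
mu_m, sig2_m = zm.mean(), zm.var()               # 1/T conventions, as in the text

J0 = T * (1.0 + mu_m**2 / sig2_m) ** -1 * alpha_hat @ np.linalg.solve(Sigma_hat, alpha_hat)
p_value = stats.chi2.sf(J0, df=N)                # asymptotic null distribution
assert J0 >= 0.0 and 0.0 <= p_value <= 1.0
```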

5.3. Statistical Framework for Estimation and Testing

However, in this case we need not resort to large-sample distribution theory to draw inferences using a Wald-type test. The finite-sample distribution, which is developed in MacKinlay (1987) and Gibbons, Ross, and Shanken (1989), can be determined by applying the following theorem presented in Muirhead (1983):

Theorem. Let the $m$-vector $\mathbf{x}$ be distributed $N(0, \Omega)$, let the $(m \times m)$ matrix $\mathbf{A}$ be distributed $W_m(n, \Omega)$ with $n \ge m$, and let $\mathbf{x}$ and $\mathbf{A}$ be independent. Then:

$$ \frac{(n-m+1)}{m}\,\mathbf{x}'\mathbf{A}^{-1}\mathbf{x} \sim F_{m,\,n-m+1}. $$

To apply this theorem we set $\mathbf{x} = \sqrt{T}\,[1 + \hat{\mu}_m^2/\hat{\sigma}_m^2]^{-1/2}\hat{\alpha}$, $\mathbf{A} = T\hat{\Sigma}$, $m = N$, and $n = (T-2)$. Then defining $J_1$ as the test statistic we have:

$$ J_1 = \frac{(T-N-1)}{N}\left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}\hat{\alpha}'\,\hat{\Sigma}^{-1}\hat{\alpha}. \qquad (5.3.23) $$

Under the null hypothesis, $J_1$ is unconditionally distributed central $F$ with $N$ degrees of freedom in the numerator and $(T-N-1)$ degrees of freedom in the denominator.
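The finite-sample statistic in (5.3.23) can be sketched the same way; again the data are simulated under the null and the names are illustrative:

```python
# Sketch of the finite-sample F-statistic J1 in (5.3.23) on simulated data
# generated with alpha = 0, so the null holds by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
T, N = 120, 3
zm = 0.01 + 0.04 * rng.standard_normal(T)
Z = np.outer(zm, [0.8, 1.0, 1.2]) + 0.02 * rng.standard_normal((T, N))

X = np.column_stack([np.ones(T), zm])
coef = np.linalg.lstsq(X, Z, rcond=None)[0]
alpha_hat = coef[0]
resid = Z - X @ coef
Sigma_hat = resid.T @ resid / T
mu_m, sig2_m = zm.mean(), zm.var()               # 1/T conventions, as in the text

q = alpha_hat @ np.linalg.solve(Sigma_hat, alpha_hat)
J1 = (T - N - 1) / N * (1.0 + mu_m**2 / sig2_m) ** -1 * q
p_value = stats.f.sf(J1, N, T - N - 1)           # exact null distribution
assert J1 >= 0.0 and 0.0 <= p_value <= 1.0
```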

We can construct the Wald test $J_0$ and the finite-sample $F$-test $J_1$ using only the estimators from the unconstrained model, that is, the excess-return market model. To consider a third test, the likelihood ratio test, we need the estimators of the constrained model. For the constrained model, the Sharpe-Lintner CAPM, the estimators follow from solving for $\beta$ and $\Sigma$ from (5.3.11) and (5.3.12) with $\alpha$ constrained to be zero. The constrained estimators are

$$ \hat{\beta}^* = \frac{\sum_{t=1}^{T}\mathbf{Z}_t Z_{mt}}{\sum_{t=1}^{T}Z_{mt}^2} \qquad (5.3.24) $$

$$ \hat{\Sigma}^* = \frac{1}{T}\sum_{t=1}^{T}(\mathbf{Z}_t - \hat{\beta}^* Z_{mt})(\mathbf{Z}_t - \hat{\beta}^* Z_{mt})'. \qquad (5.3.25) $$

The distributions of the constrained estimators under the null hypothesis are

$$ \hat{\beta}^* \sim N\!\left(\beta,\; \frac{1}{\sum_{t=1}^{T}Z_{mt}^2}\,\Sigma\right) \qquad (5.3.26) $$

$$ T\hat{\Sigma}^* \sim W_N(T-1, \Sigma). \qquad (5.3.27) $$
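Computing the constrained estimators is straightforward: (5.3.24) is simply asset-by-asset OLS through the origin. A small sketch on simulated data (names illustrative):

```python
# Sketch: constrained estimators (5.3.24)-(5.3.25). beta* is a no-intercept
# regression of each asset's excess return on the market excess return.
import numpy as np

rng = np.random.default_rng(4)
T, N = 120, 3
zm = 0.01 + 0.04 * rng.standard_normal(T)
Z = np.outer(zm, [0.8, 1.0, 1.2]) + 0.02 * rng.standard_normal((T, N))

beta_star = (Z * zm[:, None]).sum(axis=0) / (zm ** 2).sum()   # (5.3.24)
resid_c = Z - np.outer(zm, beta_star)
Sigma_star = resid_c.T @ resid_c / T                           # (5.3.25)

# Same beta* via a no-intercept least-squares fit
ols0 = np.linalg.lstsq(zm[:, None], Z, rcond=None)[0][0]
assert np.allclose(beta_star, ols0)
```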

Given both the unconstrained and constrained maximum likelihood estimators, we can test the restrictions implied by the Sharpe-Lintner version


using the likelihood ratio test. This test is based on the logarithm of the likelihood ratio, which is the value of the constrained log-likelihood function minus the unconstrained log-likelihood function evaluated at the maximum likelihood estimators. Denoting $\mathcal{LR}$ as the log-likelihood ratio, we have

$$ \mathcal{LR} = \mathcal{L}^* - \mathcal{L} = -\frac{T}{2}\left[\log|\hat{\Sigma}^*| - \log|\hat{\Sigma}|\right], \qquad (5.3.28) $$

where $\mathcal{L}^*$ represents the constrained log-likelihood function. To derive (5.3.28) we have used the fact that the summation in the last term of both the unconstrained and constrained likelihood functions, evaluated at the maximum likelihood estimators, simplifies to $NT$. We now show this for the unconstrained function. For the summation of the last term in (5.3.9), evaluated at the maximum likelihood estimators, we have

$$ \sum_{t=1}^{T}(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})'\,\hat{\Sigma}^{-1}(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt}) \qquad (5.3.29) $$

$$ = \sum_{t=1}^{T}\mathrm{trace}\!\left[\hat{\Sigma}^{-1}(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})'\right] \qquad (5.3.30) $$

$$ = \mathrm{trace}\!\left[\hat{\Sigma}^{-1}\sum_{t=1}^{T}(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt})'\right] \qquad (5.3.31) $$

$$ = \mathrm{trace}\!\left[\hat{\Sigma}^{-1}(T\hat{\Sigma})\right] = T\,\mathrm{trace}[\mathbf{I}_N] = NT. \qquad (5.3.32) $$

The step from (5.3.29) to (5.3.30) uses the result that $\mathrm{trace}[\mathbf{AB}] = \mathrm{trace}[\mathbf{BA}]$, and the step to (5.3.31) uses the result that the trace of a sum is equal to the sum of the traces. In (5.3.32) we use the result that the trace of the identity matrix is equal to its dimension.
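This simplification is an exact finite-sample identity, so it can be checked numerically. The sketch below (simulated data, illustrative names) evaluates the quadratic-form sum in (5.3.29) at the ML estimators and confirms it equals $NT$:

```python
# Numerical check of the trace argument (5.3.29)-(5.3.32): the sum of
# quadratic forms in the ML residuals equals exactly N*T.
import numpy as np

rng = np.random.default_rng(2)
T, N = 60, 4
zm = rng.standard_normal(T)
Z = np.outer(zm, [0.9, 1.0, 1.1, 1.2]) + rng.standard_normal((T, N))

X = np.column_stack([np.ones(T), zm])
resid = Z - X @ np.linalg.lstsq(X, Z, rcond=None)[0]   # ML residuals
Sigma_hat = resid.T @ resid / T
Sinv = np.linalg.inv(Sigma_hat)
total = sum(e @ Sinv @ e for e in resid)               # left-hand side of (5.3.29)
assert np.isclose(total, N * T)
```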

The test is based on the asymptotic result that, under the null hypothesis, $-2$ times the logarithm of the likelihood ratio is distributed chi-square with degrees of freedom equal to the number of restrictions under $H_0$. That is, we can test $H_0$ using

$$ J_2 = -2\mathcal{LR} = T\left[\log|\hat{\Sigma}^*| - \log|\hat{\Sigma}|\right] \sim \chi_N^2. \qquad (5.3.33) $$
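The statistic in (5.3.33) requires only the two covariance matrix estimates. A sketch on simulated data (null imposed by construction; names illustrative):

```python
# Sketch of the likelihood ratio statistic J2 in (5.3.33): T times the
# difference of log-determinants of the constrained and unconstrained
# covariance estimates. Simulated data with alpha = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
T, N = 120, 3
zm = 0.01 + 0.04 * rng.standard_normal(T)
Z = np.outer(zm, [0.8, 1.0, 1.2]) + 0.02 * rng.standard_normal((T, N))

X = np.column_stack([np.ones(T), zm])
resid = Z - X @ np.linalg.lstsq(X, Z, rcond=None)[0]
Sigma_hat = resid.T @ resid / T                        # unconstrained (5.3.15)

beta_star = (Z * zm[:, None]).sum(axis=0) / (zm ** 2).sum()
resid_c = Z - np.outer(zm, beta_star)
Sigma_star = resid_c.T @ resid_c / T                   # constrained (5.3.25)

J2 = T * (np.linalg.slogdet(Sigma_star)[1] - np.linalg.slogdet(Sigma_hat)[1])
p_value = stats.chi2.sf(J2, df=N)                      # asymptotic null distribution
assert J2 >= 0.0 and 0.0 <= p_value <= 1.0
```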

Interestingly, here we need not resort to large-sample theory to conduct a likelihood ratio test. $J_1$ in (5.3.23) is itself a likelihood ratio test statistic. This result, which we next develop, follows from the fact that $J_1$ is a monotonic transformation of $J_2$. The constrained maximum likelihood

estimators can be expressed in terms of the unconstrained estimators. For $\hat{\beta}^*$ we have

$$ \hat{\beta}^* = \hat{\beta} + \frac{\hat{\mu}_m}{\hat{\mu}_m^2 + \hat{\sigma}_m^2}\,\hat{\alpha}, \qquad (5.3.34) $$

and for $\hat{\Sigma}^*$ we have

$$ \hat{\Sigma}^* = \frac{1}{T}\sum_{t=1}^{T}(\mathbf{Z}_t - \hat{\beta}^* Z_{mt})(\mathbf{Z}_t - \hat{\beta}^* Z_{mt})' $$

$$ = \frac{1}{T}\sum_{t=1}^{T}\left[(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt}) + \left(1 - \frac{\hat{\mu}_m Z_{mt}}{\hat{\mu}_m^2 + \hat{\sigma}_m^2}\right)\hat{\alpha}\right]\left[(\mathbf{Z}_t - \hat{\alpha} - \hat{\beta} Z_{mt}) + \left(1 - \frac{\hat{\mu}_m Z_{mt}}{\hat{\mu}_m^2 + \hat{\sigma}_m^2}\right)\hat{\alpha}\right]'. \qquad (5.3.35) $$

Noting that

$$ \frac{1}{T}\sum_{t=1}^{T}\left(1 - \frac{\hat{\mu}_m Z_{mt}}{\hat{\mu}_m^2 + \hat{\sigma}_m^2}\right)^2 = \frac{\hat{\sigma}_m^2}{\hat{\mu}_m^2 + \hat{\sigma}_m^2} = \left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}, \qquad (5.3.36) $$

we have

$$ \hat{\Sigma}^* = \hat{\Sigma} + \left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}\hat{\alpha}\hat{\alpha}'. \qquad (5.3.37) $$

Taking the determinant of both sides we have

$$ |\hat{\Sigma}^*| = |\hat{\Sigma}|\left[1 + \left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}\hat{\alpha}'\,\hat{\Sigma}^{-1}\hat{\alpha}\right], \qquad (5.3.38) $$

where to go from (5.3.37) to (5.3.38) we factor out $\hat{\Sigma}$ and use the result that $|\mathbf{I} + \mathbf{x}\mathbf{x}'| = (1 + \mathbf{x}'\mathbf{x})$ for the identity matrix $\mathbf{I}$ and a vector $\mathbf{x}$. Substituting (5.3.38) into (5.3.28) gives

$$ \mathcal{LR} = -\frac{T}{2}\log\!\left[1 + \left[1 + \frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}\hat{\alpha}'\,\hat{\Sigma}^{-1}\hat{\alpha}\right], \qquad (5.3.39) $$

and for $J_1$ we have

$$ J_1 = \frac{(T-N-1)}{N}\left[\exp\!\left(\frac{J_2}{T}\right) - 1\right], \qquad (5.3.40) $$

which is a monotonic transformation of $J_2$. This shows that $J_1$ can be interpreted as a likelihood ratio test.
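Because (5.3.34)-(5.3.38) are exact algebraic identities, the relation (5.3.40) holds exactly in any sample, not just asymptotically. The sketch below (simulated data, illustrative names) computes $J_1$ directly from (5.3.23) and recovers the same value from $J_2$:

```python
# Numerical check of (5.3.40): J1 computed directly equals the monotonic
# transformation of J2 = T[log|Sigma*| - log|Sigma_hat|]. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
T, N = 120, 3
zm = 0.01 + 0.04 * rng.standard_normal(T)
Z = np.outer(zm, [0.8, 1.0, 1.2]) + 0.02 * rng.standard_normal((T, N))

X = np.column_stack([np.ones(T), zm])
coef = np.linalg.lstsq(X, Z, rcond=None)[0]
alpha_hat = coef[0]
resid = Z - X @ coef
Sigma_hat = resid.T @ resid / T
mu_m, sig2_m = zm.mean(), zm.var()

beta_star = (Z * zm[:, None]).sum(axis=0) / (zm ** 2).sum()
resid_c = Z - np.outer(zm, beta_star)
Sigma_star = resid_c.T @ resid_c / T

q = (1.0 + mu_m**2 / sig2_m) ** -1 * alpha_hat @ np.linalg.solve(Sigma_hat, alpha_hat)
J1 = (T - N - 1) / N * q                               # direct, as in (5.3.23)
J2 = T * (np.linalg.slogdet(Sigma_star)[1] - np.linalg.slogdet(Sigma_hat)[1])
J1_from_J2 = (T - N - 1) / N * np.expm1(J2 / T)        # via (5.3.40)
assert np.isclose(J1, J1_from_J2)
```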

Since the finite-sample distribution of $J_1$ is known, equation (5.3.40) can also be used to derive the finite-sample distribution of $J_2$. As we shall
