1. Consider the following model:
\[
y_i = \alpha + \beta x_i + \varepsilon_i; \qquad i = 1, 2, \cdots, n \tag{1}
\]
where $x_i$ is fixed in repeated sampling, and the random disturbance term $\varepsilon_i$ satisfies the usual assumptions:
\[
E(\varepsilon_i) = 0 \;\; \forall i, \qquad E(\varepsilon_i^2) = \sigma^2 \;\; \forall i, \qquad E(\varepsilon_i \varepsilon_j) = 0 \;\; \forall i \neq j.
\]
Let $\hat{\alpha}$ and $\hat{\beta}$ denote the ordinary least squares (OLS) estimators of $\alpha$ and $\beta$, respectively. There is no need to derive the least squares estimators for this question. Also, let $\hat{y}_i$ denote the fitted value of $y_i$ obtained by the least squares estimation.

a) Show that
\[
\sum_{i=1}^{n} \hat{y}_i = \sum_{i=1}^{n} y_i. \tag{2}
\]
[30%]

b) Show that
\[
E\big[(\hat{\beta} - \beta)\,\bar{\varepsilon}\big] = 0, \qquad \text{where } \bar{\varepsilon} = \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i. \tag{3}
\]
[30%]

c) Find the covariance between $\hat{\alpha}$ and $\hat{\beta}$. [40%]

2. Consider the following model:
\[
y_i = \alpha + \beta x_i + \varepsilon_i; \qquad i = 1, 2, \cdots, n \tag{4}
\]
where $x_i$ is fixed in repeated sampling, and the random disturbance term $\varepsilon_i$ satisfies the usual assumptions:
\[
E(\varepsilon_i) = 0, \qquad V(\varepsilon_i) = \sigma^2 \;\; \forall i, \qquad E(\varepsilon_i \varepsilon_j) = 0 \;\; \forall i \neq j. \tag{5}
\]
Let $\hat{\varepsilon}_i$, $\hat{\alpha}$ and $\hat{\beta}$ denote the OLS residuals and the parameter estimators, respectively. The regression model of (4) is fitted to the data $(x_1, y_1), \cdots, (x_n, y_n)$, giving least squares estimates $\hat{\alpha}$ and $\hat{\beta}$. Given a new value $x_{n+1}$ of the explanatory variable, we wish to predict $y_{n+1}$. It can be assumed that $y_{n+1} = \alpha + \beta x_{n+1} + \varepsilon_{n+1}$, where $\varepsilon_{n+1}$ satisfies the usual assumptions of a white noise, so that:
\[
E(\varepsilon_{n+1}) = 0, \qquad V(\varepsilon_{n+1}) = \sigma^2, \qquad E(\varepsilon_i \varepsilon_{n+1}) = 0 \;\; \forall i = 1, 2, \cdots, n. \tag{6}
\]
The natural predictor for $y_{n+1}$ is $\hat{y}_{n+1} = \hat{\alpha} + \hat{\beta} x_{n+1}$. Let the prediction error be denoted by $e_{n+1}$.
In Class 3, we saw that this prediction error could be expressed as
\[
e_{n+1} = (\alpha - \hat{\alpha}) + (\beta - \hat{\beta}) x_{n+1} + \varepsilon_{n+1} \tag{7}
\]
and that the mean of the distribution of $e_{n+1}$ is 0. Answer the following questions.

a) Show that this prediction error can also be expressed as
\[
e_{n+1} = \sum_{i=1}^{n} d_i \varepsilon_i + \varepsilon_{n+1}, \qquad \text{where } d_i = -\left[\frac{1}{n} + \frac{(x_{n+1} - \bar{x})(x_i - \bar{x})}{\sum_{j=1}^{n} (x_j - \bar{x})^2}\right], \qquad \text{with } \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i. \tag{8}
\]
[50%]

b) Using the result in a), show that the variance of the prediction error is given by:
\[
V(e_{n+1}) = \sigma^2 \left[1 + \frac{1}{n} + \frac{(x_{n+1} - \bar{x})^2}{\sum_{j=1}^{n} (x_j - \bar{x})^2}\right]. \tag{9}
\]
Discuss the practical relevance of this result. [50%]

3. Consider the following model:
\[
y = X\beta + \varepsilon; \qquad \varepsilon \sim (0, \sigma^2 \Omega) \tag{10}
\]
where $X$ is an $n \times k$ matrix of regressors of full column rank that are fixed in repeated sampling, $\beta$ a $k \times 1$ vector of unknown parameters, $\varepsilon$ an $n \times 1$ vector error term, and $\Omega$ an $n \times n$ positive definite matrix.

a) Let $\Omega = I_n$, where $I_n$ is an identity matrix of order $n$. Let $\hat{\beta}$ denote the ordinary least squares estimator of $\beta$. Consider any arbitrary linear estimator $\tilde{\beta}^*$. Show that $\hat{\beta}$ indeed attains the minimum sum of squares of residuals compared to any other $\tilde{\beta}^*$. [40%]

b) Let $\tilde{\beta}$ denote the generalised least squares (GLS) estimator of $\beta$. There is no need to derive $\tilde{\beta}$. Let $\hat{\beta}$ denote the OLS estimator of $\beta$ in (10). Show that the covariance between $\tilde{\beta}$ and $\hat{\beta} - \tilde{\beta}$ is $\mathbf{0}$.
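The identities in Questions 1a, 2a and 2b can be checked numerically before attempting the algebraic proofs. The sketch below (not part of the questions) uses NumPy with an arbitrary hypothetical dataset: the choices of $n$, $\alpha$, $\beta$, the regressor values, and $x_{n+1} = 12$ are all illustrative assumptions, not given in the problem.

```python
import numpy as np

# Hypothetical dataset; any x with non-zero variance works.
rng = np.random.default_rng(42)
n = 10
alpha, beta, sigma = 2.0, 0.5, 1.0
x = np.arange(1.0, n + 1)           # regressor, fixed in repeated sampling
eps = rng.normal(0.0, sigma, n)     # one draw of the disturbances
y = alpha + beta * x + eps

# OLS estimates for the simple regression
xbar, ybar = x.mean(), y.mean()
S = np.sum((x - xbar) ** 2)
beta_hat = np.sum((x - xbar) * (y - ybar)) / S
alpha_hat = ybar - beta_hat * xbar
fitted = alpha_hat + beta_hat * x

# Question 1a: fitted values and observations have the same sum
assert np.isclose(fitted.sum(), y.sum())

# Question 2a: e_{n+1} = sum_i d_i eps_i + eps_{n+1}, with d_i as in (8)
x_new = 12.0                        # hypothetical new regressor value
eps_new = rng.normal(0.0, sigma)
y_new = alpha + beta * x_new + eps_new
e_direct = y_new - (alpha_hat + beta_hat * x_new)
d = -(1.0 / n + (x_new - xbar) * (x - xbar) / S)
e_decomp = np.sum(d * eps) + eps_new
assert np.isclose(e_direct, e_decomp)

# Question 2b: sum_i d_i^2 = 1/n + (x_new - xbar)^2 / S, which combined
# with V(eps_{n+1}) = sigma^2 yields the variance formula in (9)
assert np.isclose(np.sum(d ** 2), 1.0 / n + (x_new - xbar) ** 2 / S)
print("all identities verified")
```

Note that the first two checks hold for every draw of the disturbances, while the last is a purely algebraic identity in the $x$ values, which is why no Monte Carlo averaging is needed.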
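The result in Question 3b can likewise be checked numerically. Writing $\hat{\beta} - \beta = B\varepsilon$ with $B = (X'X)^{-1}X'$ and $\tilde{\beta} - \beta = A\varepsilon$ with $A = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}$, the covariance in question is $\sigma^2(A\Omega B' - A\Omega A')$, and both products reduce to $(X'\Omega^{-1}X)^{-1}$. The sketch below verifies this with an arbitrary hypothetical design matrix and a positive definite $\Omega$ built for the example; these are illustrative assumptions, not part of the question.

```python
import numpy as np

# Hypothetical design and covariance structure; any full-column-rank X
# and positive definite Omega will do.
rng = np.random.default_rng(0)
n, k = 8, 3
X = rng.normal(size=(n, k))
L = rng.normal(size=(n, n))
Omega = L @ L.T + n * np.eye(n)     # positive definite by construction

Oi = np.linalg.inv(Omega)
A = np.linalg.inv(X.T @ Oi @ X) @ X.T @ Oi   # GLS: beta_tilde - beta = A eps
B = np.linalg.inv(X.T @ X) @ X.T             # OLS: beta_hat  - beta = B eps

# Cov(beta_tilde, beta_hat - beta_tilde) = sigma^2 (A Omega B' - A Omega A');
# both terms equal (X' Omega^{-1} X)^{-1}, so the difference vanishes.
C = A @ Omega @ B.T - A @ Omega @ A.T
assert np.allclose(C, np.zeros((k, k)))
print("covariance matrix is (numerically) zero")
```

Because the cancellation is exact in the algebra, the check holds for any admissible $X$ and $\Omega$, up to floating-point round-off.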