
Chapter 6 Further Inference in the Multiple Regression Model




Introduction

Theory and Hypothesis Testing

Economists develop and evaluate theories about economic behavior. Hypothesis testing procedures are used to test these theories. In Chapter 5, we developed t-tests for null hypotheses consisting of a single restriction on one parameter βk from the multiple regression model, and null hypotheses consisting of a single restriction that involves more than one parameter. In this chapter we extend our earlier analysis to testing a null hypothesis with two or more restrictions on two or more parameters. An important new development for such tests is the F-test. A large sample alternative that can be used under weaker assumptions is the χ2-test.
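The restricted/unrestricted comparison behind the F-test can be sketched in a few lines of plain Python. Everything below (the toy data and the regressor names x2 and x3) is invented for illustration; the statistic follows the standard form F = [(SSE_R − SSE_U)/J] / [SSE_U/(N − K)], where J is the number of restrictions.

```python
# Minimal F-test sketch for H0: beta2 = beta3 = 0 (J = 2 joint restrictions)
# in the model y = beta1 + beta2*x2 + beta3*x3 + e. Toy data, illustrative only.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sse(X, y):
    """OLS sum of squared errors via the normal equations X'X b = X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    b = solve(XtX, Xty)
    return sum((yi - sum(bi * xi for bi, xi in zip(b, row))) ** 2
               for row, yi in zip(X, y))

x2 = [1, 2, 3, 4, 5, 6, 7, 8]
x3 = [2, 1, 4, 3, 6, 5, 8, 7]
y = [3.1, 3.9, 6.2, 6.8, 9.1, 9.7, 12.2, 12.8]

N, K, J = len(y), 3, 2
sse_u = sse([[1.0, a, c] for a, c in zip(x2, x3)], y)  # unrestricted model
sse_r = sse([[1.0] for _ in y], y)                     # restricted: intercept only
F = ((sse_r - sse_u) / J) / (sse_u / (N - K))
print(round(F, 2))  # compare with the F(J, N-K) critical value
```

Imposing the restrictions always raises the sum of squared errors, so F is nonnegative; a large value is evidence against H0.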

Restricted Least Squares

The theories that economists develop sometimes provide nonsample information that can be used along with the information in a sample of data to estimate the parameters of a regression model. A procedure that combines these two types of information is called restricted least squares. It can be a useful technique when the data are not information-rich—a condition called collinearity—and the theoretical information is good. The restricted least squares procedure also plays a useful practical role when testing hypotheses. In addition to these topics, we discuss model specification for the multiple regression model, prediction, and the construction of prediction intervals. Model specification involves choosing a functional form and choosing a set of explanatory variables.
- Economic theories sometimes provide nonsample information that can be combined with the information in a sample of data to estimate the parameters of a regression model.
- Restricted least squares is a procedure that combines these two kinds of information; it is especially useful when the data are not information-rich (as under collinearity) and the theoretical information is good.
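As a sketch of how nonsample information can be imposed by substitution: suppose, purely as an assumed example, that theory says beta2 + beta3 = 1 in y = beta1 + beta2*x2 + beta3*x3 + e. Substituting beta3 = 1 − beta2 gives the reparameterized model (y − x3) = beta1 + beta2*(x2 − x3) + e, a simple regression whose estimates satisfy the restriction by construction. The data below are made up for illustration.

```python
# Restricted least squares via substitution (assumed restriction: beta2 + beta3 = 1).

def simple_ols(x, y):
    """Closed-form OLS for y = b1 + b2*x + e."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b1 = my - b2 * mx
    return b1, b2

x2 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x3 = [2.0, 1.0, 3.0, 2.0, 4.0, 3.0]
y = [2.5, 2.1, 4.0, 3.6, 5.5, 5.1]

# Transform variables so the restricted model is a simple regression.
ystar = [yi - c for yi, c in zip(y, x3)]   # y - x3
xstar = [a - c for a, c in zip(x2, x3)]    # x2 - x3
b1, b2 = simple_ols(xstar, ystar)
b3 = 1.0 - b2                              # recovered from the restriction
print(b2 + b3)                             # satisfies the restriction exactly
```

If the restriction is true, the restricted estimator has a smaller variance than unrestricted OLS; if it is false, the estimator is biased.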

Model Specification and Prediction

Data Problems and Nonlinear Least Squares

Critical to the choice of a set of explanatory variables is whether a model is to be used for prediction or causal analysis. For causal analysis, omitted variable bias and the selection of control variables are important. For prediction, selecting variables that are highly correlated with the dependent variable is more relevant. We also discuss the problems that arise if our data are not sufficiently rich because the variables are collinear or lack adequate variation, and we summarize concepts for detecting influential observations. Nonlinear least squares is introduced for models that are nonlinear in the parameters.
- Problems can arise when the data are not sufficiently rich, for example because variables are collinear or lack adequate variation; concepts for detecting influential observations are also summarized.
- Nonlinear least squares is introduced for models that are nonlinear in the parameters.
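One quick collinearity diagnostic, in line with the correlation matrix mentioned in the chapter outline, is the pairwise sample correlation between regressors; a value near ±1 signals poor, collinear data. The numbers here are invented so that x3 is nearly proportional to x2.

```python
# Pairwise sample correlation as a simple collinearity check (toy data).

def corr(x, y):
    """Sample correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

x2 = [1, 2, 3, 4, 5, 6]
x3 = [2.1, 3.9, 6.1, 8.0, 9.9, 12.1]   # nearly 2*x2: close to collinear
r = corr(x2, x3)
print(round(r, 3))  # near 1: least squares will struggle to separate the effects
```

With such regressors, OLS remains unbiased but the coefficient standard errors become large, so individual coefficients may appear insignificant even when the variables matter jointly.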

In short, this chapter emphasizes hypothesis testing, model specification, and their application to prediction and causal analysis in economic research.
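For nonlinear least squares, the OLS formulas no longer apply because a parameter enters nonlinearly; a crude but transparent sketch is to minimize the sum of squared errors numerically. The exponential model and grid-search approach below are illustrative assumptions, not the chapter's own example.

```python
import math

# NLS sketch: y = exp(beta * x) + e is nonlinear in beta, so minimize
# SSE(beta) numerically (here with a crude grid search over [0, 1]).

def sse(beta, xs, ys):
    return sum((y - math.exp(beta * x)) ** 2 for x, y in zip(xs, ys))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.00, 1.30, 1.62, 2.15, 2.70]   # made-up data, roughly exp(0.5*x)

best_sse, best_beta = min((sse(b / 1000, xs, ys), b / 1000) for b in range(1001))
print(round(best_beta, 3))
```

Real software replaces the grid search with an iterative algorithm such as Gauss-Newton, but the objective being minimized is the same.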


Sections

p.261 6.1 Testing Joint Hypotheses: The F-test
p.271 6.2 The Use of Nonsample Information
p.273 6.3 Model Specification
p.282 6.4 Prediction
p.288 6.5 Poor Data, Collinearity, and Insignificance
p.294 6.6 Nonlinear Least Squares
p.297 6.7 Exercises
p.311 Appendix 6A The Statistical Power of F-Tests
p.315 Appendix 6B Further Results from the FWL Theorem

Studying Chapter 6 will help you learn to:
1. Explain the concepts of restricted and unrestricted sums of squared errors and how they are used to test hypotheses.
2. Use the F-test to test single null hypotheses or joint null hypotheses.
3. Use your computer software to perform an F-test.
4. Test the overall significance of a regression model and identify the components of this test from your computer output.
5. From output of your computer software, locate
(a) the sum of squared errors,
(b) the F-value for the overall significance of a regression model,
(c) the estimated covariance matrix for the least squares estimates, and
(d) the correlation matrix for the explanatory variables.
6. Explain the relationship between the finite sample F-test and the large sample χ2-test, and the assumptions under which each is suitable.
7. Obtain restricted least squares estimates that include nonsample information in the estimation procedure.
8. Explain the properties of the restricted least squares estimator. In particular, how do its bias and variance compare with those of the unrestricted, ordinary, least squares estimator?
9. Explain the differences between models designed for prediction and models designed to estimate a causal effect.
10. Explain what is meant by (a) an omitted variable and (b) an irrelevant variable. Explain the consequences of omitted and irrelevant variables for the properties of the least squares estimator.
11. Explain the concept of a control variable and the assumption necessary for a control variable to be effective.
12. Explain the issues that need to be considered when choosing a regression model.
13. Test for misspecification using RESET.
14. Compute forecasts, standard errors of forecast errors, and interval forecasts from a multiple regression model.
15. Use the Akaike information criterion or Schwarz criterion to select variables for a predictive model.

16. Identify collinearity and explain its consequences for least squares estimation.
17. Identify influential observations in a multiple regression model.
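Objective 15's model-selection criteria can be sketched with the textbook-style forms AIC = ln(SSE/N) + 2K/N and SC = ln(SSE/N) + K·ln(N)/N (smaller is better); check your text's exact definitions, since variants exist. The toy data below are made up so that the model including x clearly wins.

```python
import math

# Comparing an intercept-only model against y = b1 + b2*x + e with AIC and SC.
# Formulas are the textbook-style versions noted above; data are illustrative.

def sse_intercept_only(y):
    m = sum(y) / len(y)
    return sum((yi - m) ** 2 for yi in y)

def sse_simple(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b2 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b1 = my - b2 * mx
    return sum((yi - b1 - b2 * xi) ** 2 for xi, yi in zip(x, y))

def aic(s, n, k):
    return math.log(s / n) + 2 * k / n

def sc(s, n, k):
    return math.log(s / n) + k * math.log(n) / n

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2, 15.8]   # roughly 2*x

n = len(y)
s0, s1 = sse_intercept_only(y), sse_simple(x, y)
print(aic(s0, n, 1) > aic(s1, n, 2))  # True: the model with x has lower AIC
```

Both criteria trade fit (the SSE term) against parsimony (the penalty in K); SC penalizes extra parameters more heavily once N > 7 or so, and may therefore select a smaller model than AIC.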



https://ppt.cc/fJiQcx