Friday, June 29, 2012

SURE Models

In recent weeks I've had several people email to ask if I can recommend a book that goes into all of the details about the "Seemingly Unrelated Regression Equations" (SURE, or just SUR) model.

Any decent econometrics text discusses this model, of course. However, the treatment usually focuses on the asymptotic properties of the standard estimators - iterated feasible GLS, or MLE.
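For concreteness, here's a bare-bones NumPy sketch of the iterated feasible GLS estimator for a SUR system. The function name and the stacked-system implementation are my own illustration (and the Kronecker-product construction is wasteful for large T), but the algebra is the standard Zellner recipe:

```python
import numpy as np

def sur_fgls(ys, Xs, max_iter=100, tol=1e-8):
    """Iterated feasible GLS for a SUR system (illustrative sketch).

    ys: list of M dependent-variable vectors, each of length T
    Xs: list of M regressor matrices, each T x k_i
    """
    M, T = len(ys), ys[0].shape[0]
    y = np.concatenate(ys)                       # stacked (M*T,) vector
    ks = [Xi.shape[1] for Xi in Xs]
    X = np.zeros((M * T, sum(ks)))               # block-diagonal regressor matrix
    col = 0
    for i, Xi in enumerate(Xs):
        X[i * T:(i + 1) * T, col:col + ks[i]] = Xi
        col += ks[i]

    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # equation-by-equation OLS start
    for _ in range(max_iter):
        E = (y - X @ beta).reshape(M, T)         # residuals, one row per equation
        Sigma = E @ E.T / T                      # contemporaneous error covariance
        Oi = np.kron(np.linalg.inv(Sigma), np.eye(T))  # Omega^{-1} = Sigma^{-1} (x) I_T
        beta_new = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, Sigma
```

Stopping after a single pass through the loop gives the usual two-step FGLS estimator; iterating to convergence yields the MLE when the errors are normally distributed.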

Attention, Stata Users

I've mentioned the Econometrics by Simulation blog before. Although it's still relatively new, it's had some great posts, and Francis Smart is doing a terrific job, as the steadily growing page-view numbers reflect.

Definitely worth a look, especially (but not only) if you're a Stata user.

© 2012, David E. Giles

Friday, June 15, 2012

F-tests Based on the HC or HAC Covariance Matrix Estimators

We all do it - we compute "robust" standard errors when estimating a regression model in any context where we suspect that the model's errors may be heteroskedastic and/or autocorrelated.

More correctly, we select the option in our favourite econometrics package so that the (asymptotic) covariance matrix for our estimated coefficients is estimated, using either White's heteroskedasticity-consistent (HC) estimator, or the Newey-West heteroskedasticity & autocorrelation-consistent (HAC) estimator.

The square roots of the diagonal elements of the estimated covariance matrix then provide us with the robust standard errors that we want. These standard errors are consistent estimates of the true standard deviations of the estimated coefficients, even if the errors are heteroskedastic (in White's case) or heteroskedastic and/or autocorrelated (in the Newey-West case).
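To make the mechanics explicit, here's a minimal sketch of both estimators computed directly from the OLS design matrix and residuals. The function names are mine, and HC0 and the Bartlett kernel are just the simplest variants of each family:

```python
import numpy as np

def hc0_cov(X, e):
    """White's HC0 covariance: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (e[:, None] ** 2 * X)
    return XtX_inv @ meat @ XtX_inv

def hac_cov(X, e, L):
    """Newey-West HAC covariance with Bartlett weights and L lags."""
    XtX_inv = np.linalg.inv(X.T @ X)
    u = e[:, None] * X                      # score contributions x_t * e_t
    S = u.T @ u                             # lag-0 term
    for j in range(1, L + 1):
        w = 1.0 - j / (L + 1.0)             # Bartlett kernel weight
        G = u[j:].T @ u[:-j]                # lag-j autocovariance of the scores
        S += w * (G + G.T)
    return XtX_inv @ S @ XtX_inv

# the robust standard errors are the square roots of the diagonal, e.g.:
# se = np.sqrt(np.diag(hac_cov(X, y - X @ b, L=4)))
```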

That's fine, as long as we keep in mind that this is just an asymptotic result.

Then we use a robust standard error to construct a "t-test", or the estimated covariance matrix to construct an "F-test" or a Wald test.

And that's when the trouble starts!
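To fix ideas about what's being computed: the Wald statistic for q linear restrictions R&beta; = r plugs the robust covariance estimate V into the usual quadratic form, and is referred to a chi-square(q) distribution (or, after dividing by q, to an F distribution). A sketch, with wald_test my own hypothetical helper:

```python
import numpy as np
from scipy import stats

def wald_test(beta, V, R, r):
    """Wald statistic for H0: R beta = r, given a covariance estimate V."""
    q = R.shape[0]
    d = R @ beta - r
    W = d @ np.linalg.solve(R @ V @ R.T, d)
    return W, stats.chi2.sf(W, q)   # asymptotic chi-square(q) p-value
```

The chi-square (or F) distribution is only an asymptotic approximation here, and with HC or HAC covariance estimates it can be a poor one at the sample sizes we actually work with.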

Tuesday, June 12, 2012

Highly Cited Statistical Papers for Econometricians

There are "classic" research papers in all disciplines. As econometricians we frequently find ourselves making reference to publications by authors who are statisticians. Have you ever wondered how the statistical papers that are important to us actually "stack up" when it comes to a more general audience?

Specifically, how widely cited are these statistical papers?

Fixed-Effects Vector Decomposition

Warning! Avoid the so-called "Fixed-Effects Vector Decomposition" (FEVD) estimator, introduced by Plümper and Troeger in a 2007 issue of Political Analysis.

A recent "Symposium on Fixed-Effects Vector Decomposition" in the 2011 volume of that journal, which included critiques by William Greene and by Trevor Breusch et al., reveals just what this estimator is ... and isn't!

Tuesday, June 5, 2012

Integrated & Cointegrated Data

Last week I had a post titled More About Spurious Regressions. Implicitly, in that post, I assumed that readers would be familiar with terms such as "integrated data", "cointegration", "differencing", and "error correction model".

It turns out that my assumption was wrong, as was apparent from the comment/request left on that post by one of my favourite readers (Anonymous), who wrote:
"The headlined subject of this post is of great interest to me -- a non-specialist. But this communication suffers greatly from the absence of a single real-world example of, e.g. "integrated" or "co-integrated" data, "differencing" (?), "error-correction model," etc. etc. 
I'm not trying to be querulous. It's just that not all your interested readers are specialists. And the extra intellectual effort required to provide examples would help us..."
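For a non-specialist, a tiny simulated example (the seed and sample size are arbitrary, and statsmodels' adfuller and coint functions are used purely for convenience) shows what "integrated", "differencing", and "cointegrated" mean in practice:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(123)
rw = np.cumsum(rng.standard_normal(500))   # random walk: integrated of order 1, I(1)
d_rw = np.diff(rw)                         # first difference: stationary, I(0)

print(adfuller(rw)[1])     # large p-value: a unit root can't be rejected
print(adfuller(d_rw)[1])   # tiny p-value: differencing removes the unit root

# two I(1) series that share the same stochastic trend are cointegrated
y = 2.0 * rw + rng.standard_normal(500)
print(coint(y, rw)[1])     # small p-value: cointegration detected
```

An error-correction model would then relate the change in y to lagged deviations from the long-run relationship between y and rw.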

Sunday, June 3, 2012

Monte Carlo Experiments With gretl

I keep saying that I must make more use of the gretl econometrics package. It's great software, and it's free! So, shame on me for not putting my effort where my mouth is.

Fortunately, Riccardo (Jack) Lucchetti keeps a bit of an eye on me in this regard!

Saturday, June 2, 2012

Panel Unit Root Tests

Testing for unit roots in panel data is pretty standard stuff these days. Any decent econometrics package has everything set up to make life easy for practitioners who want to apply such tests. As usual, though, ease of application doesn't guarantee correct application of the tests, or the interpretation of the associated results.

Friday, June 1, 2012

Yet Another Reason for Avoiding the Linear Probability Model

Oh dear, here we go again. Hard on the heels of this post, as well as an earlier one here, I'm moved to share even more misgivings about the Linear Probability Model (LPM). That's just a fancy name for a linear regression model in which the dependent variable is a binary dummy variable, and the coefficients are estimated by OLS.
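To see the most familiar objection in miniature, here's a small simulation (the data-generating process is purely illustrative) in which the LPM's fitted "probabilities" wander outside the unit interval, something a Logit fit can't do:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = (x + rng.standard_normal(500) > 0).astype(float)   # binary outcome

X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit()              # the Linear Probability Model
logit = sm.Logit(y, X).fit(disp=0)

p_lpm = lpm.predict(X)
p_logit = logit.predict(X)
print(((p_lpm < 0) | (p_lpm > 1)).sum())      # typically several values outside [0, 1]
print(((p_logit < 0) | (p_logit > 1)).sum())  # always zero, by construction
```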

Another Gripe About the Linear Probability Model

NOTE: This post was revised significantly on 15 February 2019, as a result of correcting an error in my original EViews code. The code file and the EViews workfile that are available elsewhere on separate pages of this blog were also revised. I would like to thank Frederico Belotti for drawing my attention to the earlier coding error.

So you're still thinking of using a Linear Probability Model (LPM) - also known in the business as good old OLS - to estimate a binary dependent variable model?

Well, I'm stunned!

Yes, yes, I've heard all of the "justifications" (excuses) for using the LPM, as opposed to using a Logit or Probit model. Here are a few of them: