Recent and ongoing advances in econometric methodology for applied research.

Author: Masih, Mansur

    The following were the major alternative approaches to modeling up until, say, 1990:

    1.1 Large scale macro models that typically involve

    i. distinguishing exogenous and endogenous variables

    ii. imposing restrictions on the short-run dynamics to achieve identification

    iii. estimation, usually by OLS or IV; the estimated models are used for simulations

    Examples are the Federal Reserve Bank model, the Reserve Bank of Australia model, Chris Murphy's model of the Australian economy, the London Business School model and the Fair model of the US economy.

    1.2 Non-Cointegrated VAR Models: Unrestricted, Bayesian and Structural VAR forms

    i. Unrestricted VAR is frequently used in short-run forecasting and hypothesis testing for Granger-causality but it has limited use in policy simulations.
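    To make this concrete, here is a minimal sketch of an unrestricted VAR used for short-run forecasting and Granger-causality testing. It is written in Python with statsmodels (the text does not name a package, so this is only one possible implementation), and the series y1/y2 are simulated purely for illustration:

    ```python
    # Illustrative unrestricted VAR: short-run forecasting and a Granger-causality
    # test. Data and variable names are simulated, not from the text.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    n = 200
    y2 = np.cumsum(rng.normal(size=n))               # illustrative driver series
    y1 = 0.5 * np.roll(y2, 1) + rng.normal(size=n)   # y1 responds to lagged y2
    # Work in first differences so the system is stationary:
    data = pd.DataFrame({"y1": np.diff(y1), "y2": np.diff(y2)})

    res = VAR(data).fit(maxlags=4, ic="aic")         # lag order chosen by AIC
    fc = res.forecast(data.values[-res.k_ar:], steps=8)   # short-run forecast
    gc = res.test_causality("y1", ["y2"], kind="f")  # H0: y2 does not Granger-cause y1
    print(gc.summary())
    ```

    As the text notes, such a system is convenient for forecasting and causality testing, but it carries no structural interpretation, which is what limits its use in policy simulations.
    
    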

    ii. A Bayesian VAR by definition predetermines the structure through a prior ordering of the variables, ranking them from most endogenous to most exogenous--by design this ordering is fixed by the researcher.

    The Bayesian approach is becoming important nowadays, particularly in the field of forecasting. Forecasting invariably involves some degree of model uncertainty: there is little certainty about what the true underlying model is and what the true factors are that should be included in a forecasting model. Hence, both model and parameter uncertainty may be of paramount importance for the performance of the forecasts generated from such models. Bayesian methods provide an intuitive framework in which to tackle the problem of model and parameter uncertainty. At the heart of this framework is Bayesian model averaging, or BMA. This procedure attempts to extract the maximum forecasting power from the full information set by averaging over the predictions of all possible models that can be specified from that set, with the weights given by the models' posterior probabilities.

    The concept is well established in the forecast combination literature, and this approach belongs to the same neighbourhood as combining, or pooling, models. The novelty of BMA lies in how the predictions generated from each of the models are combined, and in the fact that a Bayesian approach allows the researcher to impose his or her priors through the weights. For example, a fundamentally-driven fund manager would place more weight on models that incorporate fundamental variables, at the expense of placing limited weight on models that include non-fundamental, technical or qualitative variables.
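    The mechanics of BMA can be sketched in a few lines: enumerate the models implied by every subset of candidate predictors, fit each, approximate the posterior model probabilities (here with the standard BIC approximation and equal model priors), and average the forecasts with those weights. The data and predictor names below are simulated for illustration only:

    ```python
    # Stylised BMA sketch: posterior-probability-weighted forecast combination
    # over all subsets of three candidate predictors. Data are simulated.
    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(1)
    n = 120
    X = rng.normal(size=(n, 3))                       # three candidate predictors
    y = 1.0 + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=n)  # only x0 matters
    x_new = np.array([0.5, -1.0, 2.0])                # point at which to forecast

    models, bics, preds = [], [], []
    for k in range(4):                                # all 2^3 subsets, incl. empty model
        for subset in combinations(range(3), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            bic = n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)
            z_new = np.concatenate(([1.0], x_new[list(subset)]))
            models.append(subset); bics.append(bic); preds.append(z_new @ beta)

    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))            # weights proportional to exp(-BIC/2)
    w /= w.sum()                                      # approximate posterior probabilities
    bma_forecast = float(np.sum(w * np.array(preds)))
    best = models[int(np.argmax(w))]                  # most probable model a posteriori
    ```

    With informative priors, a researcher would replace the equal model priors implicit in the weights above, which is exactly the channel the fund-manager example describes.
    
    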

    Avramov (2002) and Cremers (2002) were among the first papers to examine the role of model uncertainty in a predictive linear framework. Both papers show that BMA provides better in-sample and out-of-sample performance than single-equation models. Avramov applied BMA to portfolio selection, while Cremers demonstrated that the presence of predictability is reconfirmed for both the risk-averse and the confident investor.

    Bayesian methods do, however, have some drawbacks. First, assigning the prior distributions and estimating all possible combinations of models involves a considerable computational burden. Second, there is an implicit assumption, though not a very restrictive one, that the true model is included in the set of all models estimated. Third, the literature on BMA in the context of asset-price forecasting is fairly new: past studies do not consider latent interactions between the factors and the dependent variable, and hence more work needs to be done to scrutinize the out-of-sample forecasting power of BMA compared with linear and nonlinear reduced-form models.

    iii. A structural VAR provides structure through the imposition of restrictions on the covariance structure of the various shocks; this is carried out through impulse response function (IRF) analysis in a structurally meaningful way, and it does not aim to model the structure of the economy in a behavioural/theoretical way. The main purpose of structural VAR (SVAR) estimation is to obtain a non-recursive orthogonalisation of the error terms for impulse response analysis. This is an alternative to the recursive Cholesky orthogonalisation, and SVAR requires the user to impose enough restrictions to identify the orthogonal (structural) components of the error terms.

    As regards the differences between the structural VAR and the unrestricted VAR, the former imposes restrictions on the covariance matrix of the structural errors and/or on the long-run impulse responses themselves. In contrast to the unrestricted VAR, the structural VAR tries to provide an explicit economic rationale behind the covariance restrictions used, and hence aims to avoid arbitrary or implicit identifying restrictions. However, these restrictions still do not identify the long-run relationships themselves (as is now done in cointegrated long-run structural modelling), and the covariance restrictions are not always easy to interpret or motivate from an economic perspective, especially in systems involving many variables.

    Structural VARs are less in fashion nowadays because of the attention given to cointegration and non-stationarity (i.e. VECMs, LRSMs, etc.), but for a time they were a good alternative to unrestricted VARs, at which the "atheoretical" criticism was often levelled.
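    The recursive (Cholesky) orthogonalisation that SVAR restrictions generalise can be made concrete in a few lines: given a reduced-form residual covariance matrix, the lower-triangular Cholesky factor P maps orthogonal structural shocks into the correlated reduced-form errors, u_t = P e_t, and the ordering of the variables fixes the recursive structure. The covariance matrix below is purely illustrative:

    ```python
    # Recursive (Cholesky) orthogonalisation of reduced-form VAR errors.
    # sigma_u is an illustrative 2x2 residual covariance matrix.
    import numpy as np

    sigma_u = np.array([[1.0, 0.4],
                        [0.4, 0.5]])
    P = np.linalg.cholesky(sigma_u)          # lower-triangular: recursive ordering

    # Recover structural shocks from simulated reduced-form errors: e_t = P^{-1} u_t
    u = np.random.default_rng(2).multivariate_normal([0, 0], sigma_u, size=5000)
    e = u @ np.linalg.inv(P).T
    print(np.cov(e.T).round(2))              # close to the identity: orthogonal shocks
    ```

    A non-recursive SVAR replaces the triangular P with a matrix whose zero restrictions come from economic theory rather than from an ordering, which is exactly the distinction the text draws.
    
    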

    1.3 CGE (computable general equilibrium) models are variants of 1.1

    They belong to the Cowles Foundation tradition of simultaneous equation models (SEMs) and were in fashion in the 1960s and early 1970s. However, given that they could not handle stagflation, and in response to the Lucas rational expectations critique of the 1970s, they underwent changes, some of which are incorporated in the SEMs one sees published now--for example, Warwick McKibbin's model and Chris Murphy's model now incorporate rational expectations, productivity shocks, etc.


    2. Univariate Time Series Models

    The aim here is to forecast a time series variable on the basis of its own past pattern (rather than on theory-based explanatory variables, as in conventional regression). Both seasonal and non-seasonal univariate models can be built. The variable should first be made stationary in both the variance and the mean. The process then involves (i) identification, (ii) estimation and (iii) forecasting of the variable.

    Since the variable has been made stationary, the forecasts are valid only for the short run.
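    The identification-estimation-forecasting sequence above can be sketched with an ARIMA model, here in Python with statsmodels (the text does not prescribe a package); the simulated series and the (1,1,0) order are illustrative choices:

    ```python
    # Illustrative univariate modelling: a series that is stationary after one
    # difference, fitted with ARIMA(1,1,0) and used for short-run forecasting.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    shocks = rng.normal(size=300)
    u = np.zeros(300)
    for t in range(1, 300):                  # AR(1) process for the differences
        u[t] = 0.6 * u[t - 1] + shocks[t]
    y = np.cumsum(u)                         # integrated series: needs d = 1

    # Identification suggested d = 1 and an AR(1) in differences; estimation:
    res = ARIMA(y, order=(1, 1, 0)).fit()
    # Short-run forecasting from the fitted univariate model:
    fc = res.forecast(steps=8)
    ```

    Because the model is built on the differenced (stationary) series, the forecasts carry only short-run information, as the text emphasises.
    
    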


    3.1 Major Limitations of the Traditional Regression Techniques

    As to the time-series part of the exercise, it has by now become almost standard procedure (helped by easy access to standard software packages) that any regression analysis should begin, not mechanically, but by testing the stationarity and cointegration properties of the time series involved. It is by now well established that most economic time series are non-stationary in their original 'level' form. If the variables are non-stationary, the conventional statistical tests (such as R2, t, etc.) are not valid.

    If the variables are non-stationary but cointegrated, an ordinary regression without the error-correction term(s) derived from the cointegrating equation is mis-specified. If, however, the variables are non-stationary and not cointegrated, an ordinary regression with 'differenced' variables (which will be stationary) can be estimated, but the conclusions drawn from such an analysis will be valid only for the short run; no conclusions can be drawn about the (long run) theoretical relationship among the variables, since theory typically has nothing to say about the short run relationship. This is because the 'differenced' time-series variables carry no information about the long run relationship between the trend components of the original series, these having, by definition, been removed. The long run co-movement between the variables cannot be captured by 'differenced' variables.
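    The error-correction idea behind the cointegrated case can be sketched with the Engle-Granger two-step procedure: test the levels for cointegration, then include the lagged residual from the static levels regression as the error-correction term in a differenced regression. The data are simulated for illustration, using statsmodels only for the cointegration test:

    ```python
    # Engle-Granger two-step sketch on simulated cointegrated series.
    import numpy as np
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(4)
    n = 300
    x = np.cumsum(rng.normal(size=n))                    # I(1) driver
    y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=n)    # cointegrated with x

    # Step 1: Engle-Granger cointegration test (H0: no cointegration)
    tstat, pvalue, _ = coint(y, x)
    print(f"Engle-Granger p-value: {pvalue:.4f}")        # small => cointegrated

    # Step 2: ECM in differences with the lagged equilibrium error included
    beta = np.polyfit(x, y, 1)                           # static levels regression
    ect = (y - np.polyval(beta, x))[:-1]                 # lagged error-correction term
    Z = np.column_stack([np.ones(n - 1), np.diff(x), ect])
    gamma, *_ = np.linalg.lstsq(Z, np.diff(y), rcond=None)
    print(f"speed of adjustment: {gamma[2]:.2f}")        # negative: pulls y back
    ```

    Omitting the `ect` column from Z is exactly the mis-specification the paragraph above describes.
    
    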

    Hence, on the one hand, if the variables are taken in their 'non-stationary' original 'level' forms, the conventional statistical tests are not valid because the variances of these variables are changing, and the estimated relationship will be 'spurious'. On the other hand, if the variables are made 'stationary' by 'first-differencing', the long-term information contained in the trend element of each variable has, by definition, been removed; the estimated relationship then captures only the short run relationship between the variables, and hence the regression does not test any theory.
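    The 'spurious regression' half of this dilemma is easy to demonstrate by simulation: two independent random walks regressed on each other in levels routinely produce a substantial R-squared despite having no relationship, while the same regression in first differences does not. A minimal numpy-only sketch (simulated data, averaging over replications):

    ```python
    # Spurious regression demonstration: independent random walks in levels
    # versus first differences, averaged over many replications.
    import numpy as np

    def r_squared(x, y):
        """R-squared from a simple OLS regression of y on x (with intercept)."""
        b, a = np.polyfit(x, y, 1)
        resid = y - (a + b * x)
        return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    levels, diffs = [], []
    for s in range(200):
        rng = np.random.default_rng(s)
        x = np.cumsum(rng.normal(size=300))              # independent random walk
        y = np.cumsum(rng.normal(size=300))              # independent random walk
        levels.append(r_squared(x, y))
        diffs.append(r_squared(np.diff(x), np.diff(y)))

    print(f"mean R^2, levels:      {np.mean(levels):.3f}")   # far from zero
    print(f"mean R^2, differences: {np.mean(diffs):.3f}")    # essentially zero
    ```

    The levels regressions look 'significant' only because both series trend; differencing removes the trends, and with them both the spurious fit and, as the text notes, any long run information.
    
    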

    This damaging limitation of the traditional regression analysis (i.e., being either spurious or not testing theory) has been addressed by the recent and ongoing cointegration time series techniques. The significant contributions made by these techniques, starting with the publication of the seminal paper by Engle and Granger (1987), have been recognized through the award of the 2003 Nobel Prize in Economic Science to Engle and Granger [for further details see Diebold (2004, 2003) and Granger (2003)].

    3.2 The steps required for the application of cointegrating time series techniques to real world raw data:

    Step 1: Testing the non-stationarity/stationarity of each variable

    (i) apply the 'ADF' command to each log variable and then to each log 'differenced' variable. In each case check the 'calculated' statistic against the 'critical' value (ignoring the minus signs) and draw conclusions as to whether 'the null of non-stationarity' is accepted or rejected.

    Ideally, the 'level' form of the variable should be non-stationary but the 'differenced' variable should be stationary.
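    Step 1 can be sketched with the ADF test from statsmodels (the 'ADF' command in the text refers to the reader's own econometrics package; the series below is simulated):

    ```python
    # ADF test on the level and the first difference of a simulated random walk:
    # ideally the level fails to reject the null of non-stationarity while the
    # differenced series rejects it.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(6)
    level = np.cumsum(rng.normal(size=250))      # stand-in for a log 'level' series

    for name, series in [("level", level), ("differenced", np.diff(level))]:
        stat, pvalue, *_ = adfuller(series)
        verdict = "reject" if pvalue < 0.05 else "cannot reject"
        print(f"{name:11s}: ADF = {stat:6.2f}, p = {pvalue:.3f} "
              f"-> {verdict} the null of non-stationarity")
    ```

    Note that statsmodels reports p-values directly, whereas the text's comparison of 'calculated' against 'critical' statistics (ignoring minus signs) is the equivalent manual check.
    
    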

    (ii) interpretations/implications: a stationary series has a mean (to which it tends to return) and a finite variance, its shocks are transitory, and its autocorrelation coefficients die out as the number of lags grows; a non-stationary series has an infinite variance (it grows over time), its shocks are permanent, and its autocorrelations tend to unity. If the series is 'stationary', the demand-side short run...
