Date | Room | Session Name | Paper Title | Author(s) | Abstract |
2019/6/1 |
Room #1 | Welcome Address
(8:55-9:00) Hisashi Tanizaki; Osaka University (Chair of SETA2019 Program Committee and Local Organizing Committee) |
|||
Room #1 | Keynote Speaker
(SETA Lecture) (9:00-10:00) Chair: Hisashi Tanizaki; Osaka University |
Understanding Regressions with Observations Collected at High Frequency over Long Span | Joon Y. Park; Indiana University | In this paper, we analyze regressions with observations collected at small time intervals over a long period of time. For the formal asymptotic analysis, we assume that samples are obtained from continuous time stochastic processes, and let the sampling interval d shrink down to zero and the sample span T increase up to infinity. In this setup, we show that the standard Wald statistic diverges to infinity and the regression becomes spurious as long as d goes to zero sufficiently fast relative to the rate at which T goes to infinity. Such a phenomenon is indeed what is frequently observed in practice for the type of regressions considered in the paper. In contrast, our asymptotic theory predicts that the spuriousness disappears if we use the robust version of the Wald test with an appropriate long-run variance estimate. This is supported, strongly and unambiguously, by our empirical illustration. | |
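For orientation, the contrast drawn in the abstract can be written in standard notation (an editor's sketch using textbook definitions, not the paper's own formulas): for a fitted regression y_t = x_t'β + u_t and hypothesis H_0: Rβ = r,

```latex
% Standard Wald statistic (the one that diverges under the d -> 0, T -> infinity asymptotics above):
W = (R\hat\beta - r)' \Big[ \hat\sigma^2\, R \big(\textstyle\sum_t x_t x_t'\big)^{-1} R' \Big]^{-1} (R\hat\beta - r)
% Robust Wald statistic, with \hat\Omega a long-run (HAC) variance estimate for x_t u_t,
% which restores standard chi-square inference:
W_{\mathrm{rob}} = (R\hat\beta - r)' \Big[ R \big(\textstyle\sum_t x_t x_t'\big)^{-1} \hat\Omega\, \big(\textstyle\sum_t x_t x_t'\big)^{-1} R' \Big]^{-1} (R\hat\beta - r)
```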
Room #1 | Empirical Finance I (10:10-11:40) | Model Specification and Time-Varying Risk Premia: Evidence from Spot and Option Markets | Chang Shu Chung; National Chengchi University* Ting Fu Chen; Feng Chia University Shih Kuei Lin; National Chengchi University |
In this paper, we attempt to answer four questions: (i) On average, what proportions of the total return variation in the S&P 500 index do stochastic volatility and return jumps account for? In particular, which of the two has more influence on the total return variation? (ii) Is the fitting performance of infinite-activity jump models better than that of finite-activity jump models in both the spot and option markets? (iii) When do investors require significantly higher risk premiums? Specifically, were there significant changes in volatility and jump risk premiums during financial shocks? (iv) Do variance risk premiums have predictive power for S&P 500 index returns? In particular, can a portfolio based on the diffusive variance risk premiums (DVRPs) earn excess returns? For the first question, we find that most of the return variation is explained by stochastic volatility, and that return jumps account for a higher percentage than stochastic volatility at the beginning of financial crises. For the second question, using dynamic joint estimation we find that the stochastic volatility model with double-exponential jumps and correlated jumps in volatility and the stochastic volatility model with normal-inverse Gaussian jumps fit S&P 500 index returns and options well under different criteria. For the third question, the time-varying risk premiums show that jump risk premiums increase after the shock of the recent financial crisis, which implies that the panic of bearing jump risk in the post-crisis period raises expected returns. For the fourth question, we find that DVRPs have predictive power for S&P 500 index returns both in-sample and out-of-sample, with R-squared statistics of 5.40% and 3.46%, respectively. Finally, we further investigate the economic significance of the out-of-sample predictability on the basis of asset allocations with DVRPs, and the mean-variance portfolio generates substantial economic gains of over 166 basis points per annum. | |
Chair: Tomoo Inoue; Seikei University | Forecasting the Volatility of Asset Returns: The Informational Gains from Option Prices | Vance Martin; University of Melbourne Chrismin Tang; University of Melbourne Wenying Yao; Deakin University* |
The Realized GARCH class of models is extended to include option prices to forecast the volatility of asset returns. As analytical expressions are not available to evaluate option prices in the presence of GARCH volatility specifications, the VIX is used to approximate option prices. This is formally achieved by deriving an expression relating the integrated measure of volatility and the VIX where the parameters of the equation are subject to a set of cross-equation restrictions. The full model is characterized by a nonlinear system of three equations containing asset returns, RV and the VIX, which is estimated by maximum likelihood methods. The forecasting properties of the joint model are investigated by forecasting daily volatility on the S&P500 using daily and intra-day data from July 2001 to November 2017. For comparison a number of special cases are also considered including the GARCH and RV-GARCH models as well as the GARCH model augmented by the VIX but not RV. The forecasting results show that augmenting the GARCH model with the VIX generates superior forecasts for a broad range of forecast horizons. The inclusion of RV as well as the VIX, or RV without the VIX does not improve the forecasts. | ||
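A stylized version of such a three-equation system (illustrative functional forms in the spirit of Hansen, Huang and Shek's (2012) Realized GARCH plus a VIX measurement equation; not the paper's exact specification):

```latex
% Return and log-GARCH equations:
r_t = \sqrt{h_t}\, z_t, \qquad \log h_t = \omega + \beta \log h_{t-1} + \gamma \log RV_{t-1},
% Measurement equation linking realized variance to conditional variance:
\log RV_t = \xi + \varphi \log h_t + \tau(z_t) + u_t,
% Illustrative VIX equation: the squared VIX as expected average variance over the
% 22-day option horizon, with (a, b) tied to the volatility dynamics by cross-equation restrictions:
VIX_t^2 = a + b\, \mathbb{E}_t\Big[ \tfrac{1}{22} \textstyle\sum_{j=1}^{22} h_{t+j} \Big] + v_t.
```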
How Does Unconventional Monetary Policy Affect the Global Financial Markets?: Evaluating Policy Effects by Global VAR Models | Tomoo Inoue; Seikei University* Tatsuyoshi Okimoto; Australian National University |
This paper examines the effects of unconventional monetary policies (UMPs) by the Bank of Japan (BOJ) and the Federal Reserve (Fed) on the financial markets, taking international spillovers and a possible regime change into account. To this end, we apply the smooth-transition global VAR model to a sample of 10 countries and the euro area. Our results suggest that the BOJ's and the Fed's expansionary UMPs have significant positive effects on the domestic financial markets, particularly in more recent years. Our results also indicate that the BOJ's UMPs have rather limited effects on the international financial markets, while the effects of the Fed's UMPs are approximately ten times larger. | | |
2019/6/1 |
Room #2 | Econometric Method (10:10-11:40) | Parametric Inference on the Mean of Functional Data | Jin Seo Cho; Yonsei University* Juwon Seo; National University of Singapore |
We consider estimating the population mean of functional data by minimizing the functional mean squared error. We assume a possibly misspecified parametric model for the population mean function and present an appropriate set of regularity conditions under which the consistency and asymptotic normality of our estimator are achieved. We also discuss the situations where the asymptotic properties of our estimator are influenced by the estimation errors of nuisance parameters. Based on the results, we further study statistical inferences by extending the standard Wald, Lagrange multiplier, and quasi-likelihood ratio tests to the framework of our functional data analysis. The asymptotic behaviors of the tests are derived under the null and alternative hypotheses. To illustrate the use of our methodology, we apply our tests to a certain class of distribution specification tests and random coefficient tests. |
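One natural way to formalize the criterion, sketched in standard notation rather than the paper's exact setup: with functional observations X_1, ..., X_n on a domain 𝒯 and parametric mean model m(t, θ),

```latex
\hat\theta_n = \arg\min_\theta \frac{1}{n} \sum_{i=1}^n \int_{\mathcal{T}} \big( X_i(t) - m(t,\theta) \big)^2\, dt
             = \arg\min_\theta \int_{\mathcal{T}} \big( \bar X_n(t) - m(t,\theta) \big)^2\, dt,
```

since the two objectives differ only by a term not involving θ; under misspecification, θ̂_n targets the pseudo-true value minimizing ∫(μ(t) − m(t,θ))² dt with μ = E[X_i].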
Chair: Masayuki Hirukawa; Ryukoku University | A Doubly Corrected Robust Variance Estimator for Linear GMM | Jungbin Hwang; University of Connecticut Byunghoon Kang; Lancaster University Seojeong Lee; University of New South Wales* |
We propose a new finite sample corrected variance estimator for the linear generalized method of moments (GMM) including the one-step, two-step, and iterated estimators. Our formula additionally corrects for the over-identification bias on top of the commonly used finite sample correction of Windmeijer (2005), which corrects for the bias from estimating the efficient weight matrix, and is thus doubly corrected. The over-identification bias arises from the fact that the over-identified sample moment condition is nonzero while it converges in probability to zero under correct specification. Compared with the Windmeijer correction, the extra correction terms for the over-identification bias are higher-order, so the order of finite sample correction is unchanged. However, when the moment condition is misspecified, these extra terms become first-order as the over-identification bias does not vanish asymptotically. The proposed variance estimator corrects for the over-identification bias in finite samples and is thus consistent under misspecification. In contrast, the conventional variance estimator and the one with the Windmeijer correction are inconsistent under misspecification. Thus, our proposed doubly corrected variance estimator provides improved inference under correct specification and robustness under misspecification. | |
Uniform inference in GMM with locally-misspecified moment conditions | Wenjie Wang; Nanyang Technological University* Firmin Doko Tchatoka; University of Adelaide |
Testing and confidence set construction after pre-testing, selection or averaging of the moment conditions is very challenging because in general the null limiting distributions of interest are discontinuous in certain nuisance parameters. In particular, confidence intervals constructed from the conventional normal asymptotic approximation will under-cover the true parameter, while testing based on the normal approximation will lead to serious over-rejection. To propose a valid inference procedure under possible local misspecification of moment conditions, we consider the Bonferroni-based size-correction method. Within a general GMM framework, we develop size-corrected critical values and show that these critical values are uniformly valid in the sense that they yield tests with correct asymptotic size. Monte Carlo simulations confirm that the proposed methods have reliable finite-sample performance with much better size control than those based on conventional normal approximations. | | |
Yet another look at the omitted variable bias: Two-sample alternatives to using instruments | Masayuki Hirukawa; Ryukoku University* Irina Murtazashvili; Drexel University Artem Prokhorov; University of Sydney |
When conducting regression analysis, econometricians often face the situation where some relevant regressors are unavailable in the data set at hand. One common solution is to look for valid instruments for the regressors that are suspected to be endogenous due to their possible correlation with the omitted variables. Another solution is to look for proxies. However, in many cases no such variables are available in the same data set. This paper shows how to combine the original data set with one containing the "missing" regressors even when the two data sets do not have common observations. The use of additional data improves estimation efficiency, and we propose a consistent semiparametric two-sample estimator of the parameters of interest. We explore the asymptotic properties of the estimator and show, using Monte Carlo simulations, that it dominates the solution involving instrumental variables, both in terms of bias and efficiency. An application to the PSID and NLS data illustrates the importance of our estimation approach for empirical research. | | |
2019/6/1 |
Room #4 | DSGE and RBC models (10:10-11:40) | Measuring Consumer Confidence Using Aggregate Expenditure Data | Robert Waldmann; University of Rome Donghoon Yoo; Korea Labor Institute* |
We propose to measure consumer confidence by exploiting a discrepancy between observed productivity and information available to consumers. Using a standard signal-extraction framework with noisy information, we estimate model-based filtered consumer confidence and compare it to survey-based confidence indices for the U.S. and fifteen European countries. The results show that our model-based consumer confidence measures are positively correlated with their survey-based counterparts, potentially providing a structural interpretation of the survey-based indices widely discussed in the literature. |
Chair: Yasuo Hirose; Keio University | Investigating the Role of Money in the Identification of Monetary Policy Behavior: A Bayesian DSGE Perspective | Qing Liu; Tsinghua University | This paper estimates an enriched version of the mainstream medium-scale DSGE model, which features non-separability between consumption and real money balances in the household's utility and a systematic response of the policy rate to money growth. The estimation results show that money is a significant factor in the monetary policy rule, without which estimates of the model may be biased. In contrast to earlier studies that rely on small-scale models, the paper stresses the merits of using a sufficiently rich model. First, it delivers different results, such as the role of non-separability between consumption and real money balances in preferences. Second, the rich dynamics embedded in the model allow us to explore the responses of a larger set of macroeconomic variables, so the model is more informative on the effects of the shocks. Third, and most importantly, it avoids the possible pitfalls of small-scale models, which assures more reliable inferences on the role of money over business cycles. | |
Expectation Effects of Switching Financial Frictions | Yoosoon Chang; Indiana University* Shi Qiu; Indiana University Bloomington |
This paper investigates the effects of financial market uncertainty on the macroeconomy by extending a standard dynamic stochastic general equilibrium (DSGE) model to allow for switching in financial frictions specified as the financial accelerator. We allow for regime switching in the uncertainty process using a novel approach that systematically introduces a feedback channel from past fundamental shocks to regime switching through time-varying transition probabilities given explicitly as functions of past shocks driving the economy. Time-varying transition probabilities influence agents' choices through an expectation effect: a bleak outlook for the financial market causes slow future growth of investment. Estimation with U.S. data uncovers evidence of such time-varying transitions and identifies the sign and relative size of the contribution of each fundamental shock to regime changes in financial market uncertainty and its subsequent effect on agents' expectation formation. | | |
Monetary Policy and Macroeconomic Stability Revisited | Yasuo Hirose* Takushi Kurozumi Willem Van Zandweghe |
A large literature has established that the Fed's change from a passive to an active policy response to inflation led to U.S. macroeconomic stability after the Great Inflation of the 1970s. This paper revisits the literature's view by estimating a generalized New Keynesian (NK) model using a full-information Bayesian method that allows for equilibrium indeterminacy and adopts a sequential Monte Carlo algorithm. The estimated model shows an active policy response to inflation even during the Great Inflation. Moreover, a more active policy response to inflation alone does not suffice for explaining the macroeconomic stability, unless it is accompanied by a change in either trend inflation or policy responses to the output gap and output growth. Our model empirically outperforms canonical NK models used in the literature, thus giving strong support to our view. | |||
Room #1 | Invited Speaker
(11:50-12:35) Chair: Kosuke Oya; Osaka University |
Volatility model specification and long-run forecasting | Kevin Sheppard; University of Oxford | Volatility forecasts are often required across a range of horizons to manage risk. This paper studies forecast performance over horizons out to one month. Particular attention is paid to the choice between iterating a daily model and estimating a horizon-specific model. Forecasts from the latter are often referred to as direct forecasts. Direct forecasts may be preferable if the model used to produce iterative forecasts is meaningfully misspecified. Both forecasting methods are compared using a panel of 25 financial asset return series covering the major asset classes. Iterative models are found to outperform direct forecasting methods across a wide range of horizons and assets. Direct forecasts are only found to perform better than iterative forecasts for a small subset of models when the estimation window is long. Extensions to asymmetric models show that adding conditional asymmetries improves out-of-sample performance, although the ranking between iterative and direct forecasts is unaltered. | |
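A minimal sketch of the iterated-versus-direct distinction, using a simple AR(1) for realized variance in place of the paper's volatility models (simulated data; all names hypothetical):

```python
# Contrast iterated vs. direct multi-step forecasts on a persistent series.
import numpy as np

rng = np.random.default_rng(0)
T, h = 1000, 22                      # sample size; monthly horizon in trading days
rv = np.empty(T)                     # simulated daily (log) realized variance
rv[0] = 0.0
for t in range(1, T):                # persistent AR(1) with Gaussian noise
    rv[t] = 0.95 * rv[t - 1] + 0.1 * rng.standard_normal()

def ols(y, x):
    """Intercept and slope from regressing y on x with a constant."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Iterated forecast: estimate the one-step model, then iterate it h steps ahead.
a1, b1 = ols(rv[1:], rv[:-1])
iter_fc = rv[-1]
for _ in range(h):
    iter_fc = a1 + b1 * iter_fc

# Direct forecast: regress rv_{t+h} on rv_t, so one-step misspecification
# is not compounded by iteration.
ah, bh = ols(rv[h:], rv[:-h])
direct_fc = ah + bh * rv[-1]

print(f"iterated {h}-step forecast: {iter_fc:.4f}")
print(f"direct   {h}-step forecast: {direct_fc:.4f}")
```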
lunch (12:35-13:40) | |||||
2019/6/1 |
Room #1 | Financial Econometrics (13:40-15:10) | Estimating the Persistency Matrix of Multivariate Diffusion Process | Xiaohu Wang; The Chinese University of Hong Kong | The linear multivariate diffusion process is widely used in economics and finance, especially in the literature on affine term structure models. The traditional maximum likelihood (ML) estimator of the persistency matrix based on discrete-time observations has two major disadvantages: (i) it covers a narrow domain, which limits its practical application, and (ii) it involves an infinite summation whose truncation rule is unknown in practice. This paper proposes an alternative ML estimator of the persistency matrix that does not have these two disadvantages. The new estimator is easy to calculate, especially for low-dimensional models. A more efficient reduced-rank estimator of the persistency matrix is proposed for when there are cointegrating relationships. The large-sample theories of the proposed estimators are developed for a wide range of cases, including stationary, pure unit root, and cointegrated processes. Simulation studies and an empirical study of an affine term structure model are conducted to illustrate the advantages of the proposed estimators. |
Chair: Yohei Yamamoto; Hitotsubashi University | Two-step Inference In Insurance Ratemaking | Seul ki Kang; Georgia State University | Recently, Heras, Moreno and Vilar-Zanon (2018) proposed a two-step inference procedure in insurance ratemaking for forecasting the Value-at-Risk of the total amount of claims, modeling the probability of nonzero claims by a logistic regression at the first stage and the total amount of claims given nonzero claims by a quantile regression at the second stage. Because of the different quantile levels in each group due to categorical predicting variables, one may doubt the ability of the second stage to pool information across all groups. To assess this conjecture, this paper proposes an empirical likelihood method to construct a confidence interval for the Value-at-Risk measure without the second stage, and then applies it to test whether the estimates in Heras, Moreno and Vilar-Zanon (2018) are significantly different from the risk measure without the second stage. A simulation study confirms the good finite-sample performance of the new method. | |
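A minimal sketch of the two-step structure described above (simulated data and hypothetical variable names; not the authors' code): step one fits the claim probability, step two fits a quantile regression at the level implied by the Value-at-Risk target.

```python
# Two-step ratemaking sketch: logistic regression, then quantile regression.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.integers(0, 2, size=(n, 2)).astype(float)    # two binary rating factors
p_claim = 1 / (1 + np.exp(-(-1.0 + 0.8 * x[:, 0] + 0.5 * x[:, 1])))
has_claim = rng.random(n) < p_claim
amount = np.where(has_claim, np.exp(0.5 * x[:, 0] + rng.standard_normal(n)), 0.0)

alpha = 0.95                                          # VaR level for total claims

# Step 1: logistic regression for the probability of a nonzero claim.
logit = LogisticRegression().fit(x, has_claim)
profile = np.array([[1.0, 1.0]])                      # one illustrative risk profile
p_hat = logit.predict_proba(profile)[0, 1]

# Step 2: quantile regression for the amount given a nonzero claim, at the
# level tau solving P(S <= q) >= alpha when P(S > 0) = p_hat.
tau = 1.0 - (1.0 - alpha) / p_hat
pos = amount > 0
qr = sm.QuantReg(amount[pos], sm.add_constant(x[pos])).fit(q=tau)
var_estimate = qr.predict(sm.add_constant(profile, has_constant="add"))[0]
print(f"claim prob {p_hat:.3f}, quantile level {tau:.3f}, VaR {var_estimate:.3f}")
```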
Detecting the Number of Factors of Quadratic Variation in the Presence of Microstructure Noise | Naoto Kunitomo; Meiji University Daisuke Kurisu; Tokyo Institute of Technology* |
We develop a new method of detecting hidden factors of the Quadratic Variation (QV) of Itô semimartingales from a set of discrete observations when market microstructure noise is present. We propose a statistical way to determine the number of factors of quadratic co-variations of asset prices based on the SIML (separating information maximum likelihood) method developed by Kunitomo, Sato and Kurisu (2018). In high-frequency financial data, it is important to disentangle the effects of possible jumps and the market microstructure noise present in financial markets. We explore the variance-covariance matrix of hidden returns of the underlying Itô semimartingales and investigate the characteristic roots and vectors of the estimated quadratic variation. We also give some simulation results to examine finite-sample properties of the proposed method. | | |
Testing for Speculative Bubbles in Large-Dimensional Financial Panel Data Sets | Yohei Yamamoto; Hitotsubashi University* Tetsushi Horie; Hitotsubashi University |
Before the 2007-2008 financial crisis, speculative bubbles prevailed in various financial assets. Whether these bubbles are an economy-wide phenomenon or market-specific events is an important question. This study develops a testing approach to investigate whether the bubbles lie in the common or idiosyncratic components of large-dimensional financial panel data sets. To this end, we extend right-tailed unit root tests to common factor models, benchmarking the panel analysis of nonstationarity in the idiosyncratic and common components (PANIC) approach proposed by Bai and Ng (2004). We find that when the PANIC test is applied to the explosive alternative hypothesis as opposed to the stationary alternative hypothesis, the test for the idiosyncratic component may suffer from the nonmonotonic power problem. This study then proposes a new cross-sectional (CS) approach to disentangle the common and idiosyncratic components in a relatively short explosive window. This method first estimates the factor loadings in the training sample and then uses them in cross-sectional regressions to extract the common factors in the explosive window. Our Monte Carlo simulations show that the CS approach is robust to the nonmonotonic power problem. We finally provide an empirical example using the house price indexes of the 50 largest metropolitan areas in the United States. | |||
2019/6/1 |
Room #2 | Identification I (13:40-15:10) | Identifying modern macro equations with old shocks | Regis Barnichon; San Francisco Fed Geert Mesters; Universitat Pompeu Fabra* |
Despite decades of research, the consistent estimation of structural macroeconomic equations remains a formidable empirical challenge because of pervasive endogeneity issues. Prominent cases with wide ranges of estimates include the Phillips curve, the Euler equation and the interest rate rule. In this work, we show how sequences of independently identified structural shocks can be used as instrumental variables to consistently estimate the coefficients of macroeconomic equations. The method is robust to weak instruments and is valid regardless of the variance contribution of the structural shocks used as instruments. We show that after instrumenting inflation expectations and the output gap with monetary shocks, the estimated slope of the Phillips curve is more than twice as large as with conventional methods. |
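A stylized illustration of using independently identified shocks as instruments: a two-stage least squares sketch for a Phillips-curve slope on simulated data (hypothetical parameter values; not the authors' procedure).

```python
# 2SLS with a monetary shock series instrumenting the endogenous output gap.
import numpy as np

rng = np.random.default_rng(2)
T = 400
shock = rng.standard_normal(T)                  # identified monetary policy shocks
u = rng.standard_normal(T)                      # cost-push disturbance
gap = 0.6 * shock - 0.4 * u + rng.standard_normal(T)   # endogenous output gap
kappa = 0.3
infl = kappa * gap + u                          # inflation: slope kappa plus cost-push term

def two_sls(y, endog, instr):
    """2SLS with a constant: beta = (X' Pz X)^{-1} X' Pz y."""
    Z = np.column_stack([np.ones_like(instr), instr])
    X = np.column_stack([np.ones_like(endog), endog])
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto instrument space
    Xhat = Pz @ X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)

b_ols = np.linalg.lstsq(np.column_stack([np.ones(T), gap]), infl, rcond=None)[0]
b_iv = two_sls(infl, gap, shock)
print(f"true slope {kappa}, OLS {b_ols[1]:.3f} (biased), 2SLS {b_iv[1]:.3f}")
```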
Chair: Kyoo il Kim; Michigan State University | Relevant moment selection under mixed identification strength | Firmin Doko Tchatoka; University of Adelaide* Prosper Dovonon; Concordia University Michael Aguessy; Concordia University |
This paper proposes a moment selection method in the presence of moment condition models with mixed identification strength. We show that the relevant moment selection procedure of Hall et al. (2007) is inconsistent in this setting as it does not explicitly account for the rate of convergence of parameter estimation of the candidate models which may vary. We introduce a new moment selection procedure based on a criterion that sequentially evaluates the rate of convergence of the candidate model's parameter estimate as well as the entropy of the estimator's asymptotic distribution. The benchmark estimator that we consider is the two-step efficient generalized method of moments (GMM) estimator which is known to be efficient in this framework as well. A family of penalization functions is introduced that guarantees the consistency of the selection procedure. The finite sample performance of the proposed method is assessed through Monte Carlo simulations. | ||
Comment on Identification Properties of Recent Production Function Estimators | Kyoo il Kim; Michigan State University | Control function approaches to production function estimation rely on a proxy that is monotone in a scalar unobserved productivity conditional on other state variables. Ackerberg, Caves, and Frazer (2015) point out a potential functional dependence problem of the approach using capital as the only state variable, besides its restrictive implicit assumption on the timing of labor input. They provide a simple solution by conditioning on both capital and labor. Moreover, ACF's approach allows flexible timing of labor input when the firm learns all or part of the productivity. However, we demonstrate that ACF's moment condition may suffer from weak identification, whose significance depends on the timing of labor input. We propose easy-to-implement modifications that remedy these issues and provide Monte Carlo evidence. Moreover, the exact timing of input choices is unknown and may differ across firms in practice. Therefore, our proposal is valuable as it fully incorporates the flexible ACF framework but avoids weak identification. | | |
2019/6/1 |
Room #4 | Time Series I (13:40-15:10) | Robust Inference with Stochastic Local Unit Root Regressors in Predictive Regressions | Yanbo Liu; Singapore Management University* Peter Phillips; Yale University |
This paper explores predictive regressions with various stochastic unit root components, for which the IVX inference procedure enables robust chi-square testing for a class of persistent and time-varying stochastic nonstationary regressors. The paper extends the mechanism of self-generated instruments called IVX. We show that in both the short-horizon and long-horizon predictive regression cases, IVX-type methods remain valid for regressors characterized by the stochastic unit root model of Lieberman and Phillips (2016) and the stochastic local unit root model of Lieberman and Phillips (2017). The asymptotic distributions of the IVX estimators here are new relative to previous derivations in the IVX literature. Strikingly, the pivotal asymptotic distributions of the Wald testing procedures remain robust both for a single nonstationary regressor, as in Lieberman and Phillips (2016, 2017), and for the multiple-regressor case with various degrees of persistence and stochastic departures from unit roots. Numerical experiments support the asymptotic theory and demonstrate the good power and desirable size properties of the proposed tests. Moreover, a Bonferroni-type randomness detection procedure is justified. Both the IVX methods and the Bonferroni-type randomness detection procedure are applied in the empirical illustration. |
Chair: Jianning Kong; Shandong University | Tests of the Null of Cointegration Using IM-OLS Residuals | Cheol-Keun Cho; Korea Energy Economics Institute | This paper addresses tests of the null of cointegration using test statistics of KPSS type. Two different IM-OLS residuals are considered to construct the KPSS test statistics. Their limiting distributions are derived under the null of cointegration and under the alternative of no cointegration. Both tests, labeled KPSS-C and KPSS-Fb respectively, are consistent under the standard asymptotics, but only the KPSS-Fb statistic has a pivotal fixed-b null limiting distribution, rendering it amenable to fixed-b inference. Simulation experiments show the KPSS-Fb test delivers mild size distortion even for null DGPs with relatively high persistence when Andrews' AR(1) plug-in data-dependent bandwidth (DDB) is employed and fixed-b critical values are used. The simulation experiments also indicate that the power of the test is reasonably good even with the DDB being used, unlike existing tests. This property is further investigated by deriving the limit of the AR(1) coefficient estimator, which is a key component of the DDB formula. An extension of the test to the case of trending regressors is also provided. | |
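For reference, the generic KPSS-type form that such statistics adapt (the paper's exact construction from IM-OLS residuals differs in detail):

```latex
\eta = \frac{1}{T^2 \hat\omega^2} \sum_{t=1}^{T} S_t^2, \qquad S_t = \sum_{s=1}^{t} \hat u_s,
```

where û_t are the (here, IM-OLS) residuals and ω̂² is a kernel long-run variance estimate; fixed-b inference treats the kernel bandwidth as a fixed fraction b of T rather than a slowly growing sequence.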
Testing weak sigma-convergence using HAR inference | Jianning Kong; Shandong University | Measurement of diminishing or divergent cross section dispersion in a panel plays an important role in the assessment of convergence or divergence over time in key economic indicators. Econometric methods, known as weak sigma-convergence tests, have recently been developed (Kong, Phillips, Sul, 2019) to evaluate such trends in dispersion in panel data. To achieve generality, these tests rely on heteroskedastic and autocorrelation consistent (HAC) covariance matrix estimates. This paper examines the behavior of these convergence tests when heteroskedastic and autocorrelation robust (HAR) covariance matrix estimates using fixed-b methods are employed instead of HAC estimates. Asymptotic theory for the corresponding HAR convergence test is derived and numerical simulations are used to assess performance. Unlike other applications of HAR estimation in trend regression, we find that HAR estimation does not generally improve finite sample performance of these convergence tests either in size or power. Any improvements that are found are very limited. The explanation is that weak sigma-convergence tests rely on intentionally misspecified linear trend regression formulations of unknown trend decay functions that model convergence behavior rather than regressions with correctly specified trend decay functions. | | |
2019/6/1 |
Room #1 | Empirical Finance II (15:20-16:50) | Investigating the interaction between returns and order flow imbalances: Endogeneity, intraday variations, and macroeconomic news announcements | Makoto Takahashi; Hosei University | The study examines the interaction between returns and order flow imbalances (differences between buy and sell orders), constructed from the best bid and offer files of the S&P 500 E-mini futures contract, using a structural vector autoregressive (SVAR) model. The well-known intraday variation in market activity is accommodated by applying the SVAR model to each short interval of each day, whereas the endogeneity due to time aggregation is handled by estimating the structural parameters via identification through heteroskedasticity. The estimation results show that significant endogeneity exists and that the estimated parameters and associated quantities, such as the return variance driven by order flow imbalances, vary over time, reflecting intense or mild order submission activities. Further, order flow imbalances are shown to be more informative several minutes away from macroeconomic news announcements, and inactive order-submission periods exist when announcements occur. |
Chair: Masahito Kobayashi; Yokohama National University | Reconsidering the volatility of gold: Is gold a hedge or a safe haven? | Zhaoying Lu; Osaka University* Hisashi Tanizaki; Osaka University |
In this study, we examine the volatility in gold markets using daily data for the last 18 years. We find that asymmetry, volatility transmission effects from the US dollar exchange rate and from stock prices, and an asymmetric effect of the US dollar influence the volatility in gold markets. We also investigate the role of gold as a hedge or a safe haven. Taking these effects on the volatility of gold into account, we find that gold serves as a hedge and a safe haven against US dollar exchange markets. | |
A New Copula Analysis of the EU Sovereign Debt Crisis | Masahito Kobayashi; Yokohama National University | The aim of this paper is to propose a new method to measure asymmetric correlation between the stock and government bond price returns of the five peripheral EU countries during the EU sovereign debt crisis. We find a strong dependence in the lower tail of the stock-bond correlation in the early stage of the crisis, which can be interpreted as panic capital flight. The correlation between the stock and bond price returns of the peripheral EU countries changed in the EU sovereign debt crisis. We consider the stock index and 10-year government bond price returns of the five EU countries after removing the effect of volatility change and serial correlation. In this paper we quantify the correlation asymmetry of the stock and bond returns of the five peripheral euro countries (Greece, Ireland, Italy, Portugal, Spain) using a new time-varying asymmetric copula. In the pre-crisis period (2006-2009) the sign of the correlation was negative. It changed to positive, and the lower-left tail area had higher "correlation" than the upper-right tail area in the mid-crisis period (2010-2013). In the post-crisis period (2014-2015) the asymmetry disappeared and the positive stock-bond correlation remained, suggesting that the government bonds were still unsafe assets. In order to analyze the time-varying correlations of the stock and government bond price returns in the crisis, we construct a new asymmetric copula from a split normal distribution, namely a bivariate distribution consisting of two halved bivariate normal density functions with different correlation coefficients connected on the negative 45-degree line. The correlation coefficients of the underlying distribution are estimated by the particle filter method in a state space framework under the assumption that they independently follow random walk processes. The merit of using a copula is that we can construct a joint distribution function with an arbitrary marginal distribution function and an arbitrary quantile dependence, which is an alternative concept to correlation. To ease the computational burden, we estimate the standard deviation of the transition equation, which is essentially a smoothness parameter, by maximizing the likelihood function approximated and interpolated by a thin-plate spline regression method. | | |
2019/6/1 |
Room #2 | Forecasting (15:20-16:50) | Optimal Multi-step VAR Forecast Averaging | Jen-Che Liao; Academia Sinica* Wen-Jen Tsay; Academia Sinica |
This paper proposes frequentist multiple-equation least squares averaging approaches for multi-step forecasting with vector autoregressive (VAR) models. The proposed VAR forecasting averaging methods are based on the multivariate Mallows model averaging (MMMA) and multivariate leave-h-out cross-validation averaging (MCVAh) criteria (with h denoting the forecast horizon), which are valid for iterative and direct multi-step forecasting averaging, respectively. Under the framework of stationary VAR processes of infinite order, we provide theoretical justifications by establishing asymptotic unbiasedness and asymptotic optimality of the proposed forecasting averaging approaches. Specifically, MMMA exhibits asymptotic optimality for one-step ahead forecast averaging, whereas for direct multi-step forecasting averaging the asymptotically optimal combination weights are determined separately for each forecast horizon based on the MCVAh procedure. The finite-sample behaviour of the proposed averaging procedures under misspecification is investigated via simulation experiments. An empirical application to a three-variable monetary VAR, based on the U.S. data, is also provided to present our methodology. |
Chair: Jing Tian; University of Tasmania | Handling Class Imbalance in Predicting Fraud of US Listed Firms’ Financial Statement using Resampling Method and Machine Learning Approach | Jerome Impas; Tsukuba University* Tadashi Ono Mina Ryoke |
Class imbalance occurs predominantly in the real world, and the challenges are compounded by the fact that the minority class is the primary interest and typically carries the higher misclassification cost. Class imbalance refers to the scenario where the distribution of the majority class greatly exceeds that of the minority class, i.e., the dataset is heavily skewed towards the majority class. For the two-class case (e.g., Fraud and Non-Fraud), we can take the minority or rare class to be the Fraud class and the majority the Non-Fraud class. Predictive models developed using conventional classifiers can be biased towards the majority class and tend to produce unsatisfactory classifiers when faced with imbalanced datasets like the financial fraud dataset. According to the literature, the frequency of fraud detected based on the US SEC AAER list is so low that even the best-performing models used in prior research resulted in high false positives. The aim of this research is to find a practical method to improve the classification of financial statement fraud detection under a high-imbalance scenario using the US firms' financial dataset. First, this research investigates the classification models' ability to classify fraud and non-fraud using different resampling methods and determines whether this improves the performance of the models. Second, this work explores whether the bagging learning technique helps overcome the class imbalance problem of the fraud dataset. Ten-fold cross-validation was done to examine the performance of the training, and the results obtained from the experiments showed that the Random Over-Sampling (ROS) method provides the best performance when used with the employed classifiers in terms of balanced AUC and sensitivity results. Furthermore, the combination of the ROS resampling method and Random Forest (RF) performs much better than Support Vector Machine (SVM) and Logistic Regression (LR), which agrees with previous studies. This indicates that it is feasible to detect fraud from financial statements using the important indicators identified in this research, combined with a proper resampling method and machine learning classifiers. However, the results indicate that the bagging method does not help improve the performance of the classifiers. Bagging is more useful when the resampled training datasets are very different, which can lead to very different sets of predictions: the higher the variability of the dataset, the stronger the effect of bagging. This means that the predictor data in the selected dataset are very similar, making bagging superfluous; hence it does not yield any significant improvements. This study contributes to auditing and accounting research by examining the proper resampling method to use on a highly imbalanced fraud dataset and combining it with machine learning algorithms to identify and discriminate potential misstatements or fraud events. This methodological framework could be of assistance to auditors, both internal and external, to regulators, and to the investing public in general. | |
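A minimal sketch of the ROS-plus-random-forest recipe with ten-fold cross-validation (synthetic data; illustrative, not the authors' dataset, features, or tuning):

```python
# Random over-sampling (ROS) of the minority class before fitting a random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97],
                           random_state=0)             # ~3% minority ("fraud") class

def ros(X, y, rng):
    """Duplicate minority-class rows at random until both classes are balanced."""
    minority = np.flatnonzero(y == 1)
    extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
    idx = np.concatenate([np.arange(y.size), extra])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

auc_raw = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
aucs = []
for train, test in cv.split(X, y):                     # resample training folds only
    Xb, yb = ros(X[train], y[train], rng)
    proba = clf.fit(Xb, yb).predict_proba(X[test])[:, 1]
    aucs.append(roc_auc_score(y[test], proba))
print(f"10-fold AUC without resampling: {auc_raw:.3f}, with ROS: {np.mean(aucs):.3f}")
```

Note the resampling is applied inside each training fold only, so the held-out fold keeps the original imbalance.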
Forecast comparison tests under fat-tails | Jihyun Kim; Toulouse School of Economics Nour Meddahi; Toulouse School of Economics Mamiko Yamashita; Toulouse School of Economics* |
Forecast comparison tests are widely implemented under the assumption that the second moment of the loss difference sequence is bounded. However, we show that the heavy-tailed nature of financial variables can cause this moment condition to be violated. If the second moment of the loss difference is unbounded and the first moment is bounded (the "moderate" case), the asymptotic distribution of the test statistic under the null hypothesis is not the standard normal, and therefore the level of the test will be distorted if a researcher blindly assumes that the moment condition is satisfied. If the first moment is unbounded (the "severe" case), the null hypothesis is not well defined, and therefore the test outcome under the classical test procedure is irrelevant to forecast performance and will be misleading. In the empirical study, we compute the Hill estimator using volatility forecasting data and show that the existence of the second moment of the loss difference is questionable when the MSE and QLIKE loss functions are used. We further study specific applications and show that the moderate and severe cases may occur depending on the parameter values. Simulation results are also provided. This paper is preliminary; we plan to propose an alternative test procedure that is robust to the heavy-tailedness of the loss difference sequence. | | |
Forecast revisions under a multiple error structure | Jing Tian; University of Tasmania* Firmin Doko Tchatoka; University of Adelaide Thomas Goodwin; TasFoods Ltd. |
Economic forecasts, such as the surveys of professional forecasters, are revised multiple times before realization of the target. This paper studies the sources of forecast revisions. In particular, by decomposing fixed-event forecast errors into rational errors that occur due to unanticipated future shocks and irrational errors that may occur due to measurement error in acquired information or forecasters' over- and under-reactions to information, this paper derives the conditions under which forecasts containing irrational forecast errors can still possess the second-moment properties of rational forecast revisions. The results provide an explanation of why empirically fixed-event forecasts often present a subset, instead of the full set, of second-moment properties featured by rational forecast revisions, and suggest that retaining the null hypothesis of rationality tests based on these properties may not guarantee rational forecast revisions. | | |
2019/6/1 |
Room #3 | Empirical Trade I (15:20-16:50) | Importing Inputs for Climate Change Mitigation: The Case of Agricultural Productivity | Akira Sasahara; University of Idaho | This paper estimates agricultural total factor productivity (TFP) in 162 countries between 1991 and 2015 and aims to understand sources of cross-country variation in agricultural TFP levels and growth rates. Two factors affecting agricultural TFP are analyzed in detail: imported intermediate inputs and climate. We first show that these two factors are independently important in explaining agricultural TFP: imported inputs raise agricultural TFP, while higher temperatures and rainfall shortages impede TFP growth, particularly in low-income countries (LICs). We also provide new evidence that, within LICs, those with a higher import component of intermediate inputs seem to be more shielded from the negative impacts of weather shocks. |
Chair: Naoto Jinji; Kyoto University | IPR Policies and Membership in Standard Setting Organizations: A two-mode Network Analysis | Jiaming Jiang; Okayama University | In this paper, we empirically analyze the behavior of market participants in standards and standard setting organizations (SSOs). In recent years, the SSOs in some sectors, particularly in the information and communications technology (ICT) industries, have paid increased attention to IPR policies. Referring to the sample overview of Chiao and Lerner (2005), we concentrate our study on 30 SSOs' features and their IPR policies, paying attention to the features of the standards decision process, the licensing rules, and disclosure requirements, among others. We also employ a social network analysis technique, namely two-mode network analysis, to collect relations between the participants (companies) and these SSOs. Our study is based on a database recently developed for the analysis of SSO activities. We collect the two-mode relations from the database, where we identify more than 2500 companies, and extract the information on the IPR policies of the SSOs in which these companies participate. At the same time, we characterize the membership of companies and SSOs and highlight some important network statistics, e.g., betweenness, degree, and modularity, so as to cluster these participants; we find four clusters in the two-mode relation network. Based on these features of the companies and the SSOs, we propose an empirical analysis of whether these features encourage or deter a company's participation in SSO activities. | |
Does Deep Economic Integration Facilitate International Research Collaboration? | Naoto Jinji; Kyoto University* Xingyuan ZHANG; Okayama University Shoji Haruna; Fukuyama University |
We examine whether regional trade agreements (RTAs) facilitate international research collaboration. Using a simple duopoly model with process research and development (R&D) investment and spillovers, we first analyze whether trade liberalization through a trade agreement with deep economic integration increases firms' incentive to engage in research collaboration. We then empirically investigate the effects of deep RTAs by using data on patents with multiple inventors from different countries at the United States Patent and Trademark Office (USPTO) for 113 countries/regions for the period 1990-2011. We interpret co-inventions by inventors who are resident in different countries as evidence of international research collaboration. We use dummy variables and indexes to measure how deep the economic integration achieved by RTAs is. We find that deeper integration is indeed associated with more active international co-invention. We check the robustness of our findings by employing various specifications and by addressing endogeneity issues. | | |
2019/6/1 |
Room #4 | Big data (15:20-16:50) | Double Machine Learning with Gradient Boosting and Its Application to the Big N Audit Quality Effect | Jui-Chung Yang; National Tsing Hua University* Hui-Ching Chuang; Yuan Ze University Chung-Ming Kuan; National Taiwan University |
In this paper, we study the double machine learning (DML) approach of Chernozhukov et al. (2018) for estimating the average treatment effect and apply this approach to examine the Big N audit quality effect in the accounting literature. This approach relies on machine learning methods and is suitable when a high-dimensional nuisance function with many covariates is present in the model. The approach does not suffer from "regularization bias" if a learning method with a proper convergence rate is used. We demonstrate by simulations that, for the DML approach, gradient boosting is to be preferred to other learning methods, such as the regression tree and random forest. We then apply this approach with gradient boosting to estimate the Big N effect. It is found that Big N auditors have a positive effect on audit quality and that this effect is not only statistically significant but also economically important. We also show that, in contrast with the results of propensity score matching, our estimates of this effect are quite robust to the hyper-parameters in the gradient boosting algorithm. |
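A sketch of the cross-fitted, partialling-out form of DML with gradient boosting learners, in the spirit of Chernozhukov et al. (2018) (synthetic data standing in for the audit-quality application; not the authors' code):

```python
# Cross-fitted DML: residualize outcome and treatment on covariates, then
# regress outcome residuals on treatment residuals.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 2000, 10, 0.5
X = rng.standard_normal((n, p))                       # covariates (client characteristics)
d = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) > 0).astype(float)  # treatment
y = theta * d + X[:, 0] ** 2 + X[:, 1] + rng.standard_normal(n)           # outcome

res_y = np.empty(n)
res_d = np.empty(n)
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    my = GradientBoostingRegressor().fit(X[tr], y[tr])    # nuisance: E[y | X]
    md = GradientBoostingClassifier().fit(X[tr], d[tr])   # nuisance: E[d | X]
    res_y[te] = y[te] - my.predict(X[te])                 # cross-fitted residuals
    res_d[te] = d[te] - md.predict_proba(X[te])[:, 1]

theta_hat = (res_d @ res_y) / (res_d @ res_d)             # final-stage regression
eps = res_y - theta_hat * res_d
se = np.sqrt(np.sum(res_d ** 2 * eps ** 2)) / np.sum(res_d ** 2)
print(f"DML estimate {theta_hat:.3f} (true {theta}), se {se:.3f}")
```

Cross-fitting (nuisances estimated on complementary folds) is what removes the own-observation overfitting bias the abstract alludes to.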
Chair: Kengo Kato; Cornell University | Statistical Analysis of Sparse Factor Models | Benjamin POIGNARD; Osaka University* Yoshikazu Terada; Osaka University |
We consider the problem of estimating a sparse approximate factor model. In a first step, we jointly estimate the factor loading parameters and the error (or idiosyncratic) covariance matrix by the Gaussian quasi-maximum likelihood method. Conditionally on these first-step estimators, using the SCAD, MCP and Lasso regularizers, we obtain a sparse error covariance matrix based on a Gaussian QML and, as an alternative criterion, a least squares loss function. Under suitable regularity conditions, we derive error bounds for the regularized idiosyncratic covariance matrix for both the Gaussian QML and least squares losses. Moreover, we establish variable selection consistency, including the case when the regularizer is non-convex. These theoretical results are supported by empirical studies. | |
Skewness Tests for the Common Factor Model | Tetsushi Horie; Hitotsubashi University | This paper proposes a testing procedure to identify whether an asymmetric property of time series data is caused by the common factors or not. To this end, we apply the tests of skewness proposed by Bai and Ng (2005) to the common factor model. We show that the statistics have normal limiting distributions as N and T go to infinity under √(T/N)→0, where T is the time-series dimension and N is the cross-sectional dimension. Simulations show that the finite sample properties of the proposed tests are almost the same as in Bai and Ng (2005). We apply the proposed tests to U.S. macroeconomic time series and find that there are skewed common factors. However, these cancel each other out, and the observed macroeconomic time series behave symmetrically. | | |
Room #1 | Keynote Speaker
(Hatanaka Lecture) (17:00-18:00) Chair: Mototsugu Shintani; University of Tokyo |
The Origins and Effects of Macroeconomic Uncertainty | Francesco Bianchi; Duke University* Howard Kung; London Business School Mikhail Tirskikh; London Business School |
We construct and estimate a dynamic stochastic general equilibrium model that features demand- and supply-side uncertainty. Using term structure and macroeconomic data, we find sizable effects of uncertainty on risk premia and business cycle fluctuations. Both demand-side and supply-side uncertainty imply large contractions in real activity and an increase in term premia, but supply-side uncertainty has larger effects on inflation and investment. We introduce a novel analytical decomposition to illustrate how multiple distinct risk propagation channels account for these differences. Supply and demand uncertainty are strongly correlated in the beginning of our sample, but decouple in the aftermath of the Great Recession. | |
Reception (18:20-20:30) | |||||
2019/6/2 |
Room #1 | Keynote Speaker
(ET Lecture) (9:00-10:00) Chair: Yoosoon Chang; Indiana University |
Spatial Dependence in Option Observation Errors | Torben Andersen; Northwestern University* Nicola Fusari; Northwestern University Viktor Todorov; Northwestern University Rasmus Varneskov; Northwestern University |
In the empirical option pricing literature, the treatment of observation errors in option prices is largely ad hoc, and no formal test exists for assessing the hypothesis of cross-sectional dependence in such errors. In this paper, we develop a nonparametric test for deciding whether the observation error in option panels exhibits spatial dependence. The option panel consists of options written on an underlying asset with different strikes and times to maturity. The asymptotic setup is of infill type: the mesh of the strike grid of the observed options shrinks asymptotically to zero while the set of observation times and maturities is kept fixed. We propose a Ljung-Box type test for the null hypothesis of no spatial dependence in the observation error. The test makes use of the smoothness of the true (unobserved) option price as a function of its strike and is robust to the presence of heteroskedasticity of unknown form in the observation error. A Monte Carlo study shows good finite sample properties of the developed testing procedure, and an empirical application to S&P 500 index option data reveals mild spatial dependence in the observation error, which has declined over time. |
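For reference, the classical Ljung-Box statistic that the proposed test adapts to the strike dimension (the paper's spatial, heteroskedasticity-robust version differs in detail):

```latex
Q_m = n(n+2) \sum_{k=1}^{m} \frac{\hat\rho_k^2}{n-k} \;\xrightarrow{d}\; \chi^2_m
\qquad \text{under } H_0:\ \rho_1 = \cdots = \rho_m = 0,
```

with ρ̂_k the lag-k sample autocorrelation, here taken across strike-ordered observation errors rather than across time.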
Room #1 | Empirical Finance III (10:10-11:40) | The day-of-the-week effect on Bitcoin return and volatility | DONGLIAN MA; Osaka University* Hisashi Tanizaki; Osaka University |
This study investigates the day-of-the-week effect on both the return and the volatility of Bitcoin (BTC) from January 2013 to December 2018 using daily data obtained from the CoinDesk Bitcoin Price Index. Estimation results suggest that the day-of-the-week effect in the return equation varies with sample periods, while significantly high volatilities are observed on Monday and Thursday. Hence, the significantly high mean return of Bitcoin on Monday appears to be a response to higher volatility. Moreover, the day-of-the-week effect on both return and volatility remains robust after accounting for stock market returns (S&P 500; SSEC; Nikkei 225) and foreign exchange market returns (USD/CNY; USD/JPY; EURO/USD). Finally, no asymmetric effect on volatility is found. | |
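A crude two-pass illustration of testing for day-of-the-week effects with dummy regressions (simulated placeholder returns; the study itself estimates a GARCH-type model, not this OLS shortcut):

```python
# Day-of-week dummies in a mean equation, then in squared residuals as a
# rough volatility proxy.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
dates = pd.date_range("2013-01-01", periods=1500, freq="D")  # BTC trades every day
ret = rng.standard_normal(len(dates)) * 0.04                 # placeholder daily returns

dummies = pd.get_dummies(dates.dayofweek).iloc[:, 1:]        # Monday (0) is the base
X = sm.add_constant(dummies.to_numpy().astype(float))

mean_eq = sm.OLS(ret, X).fit()                 # day-of-week effect in the mean
vol_eq = sm.OLS(mean_eq.resid ** 2, X).fit()   # ...and in squared residuals
print(np.round(mean_eq.params, 4))
print(np.round(vol_eq.params, 6))
```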
Chair: Pei Kuang; University of Birmingham | Changing Vulnerability in Asia: Contagion and Systemic Risk | Moses Kangogo; University of Tasmania Vladimir Volkov; University of Tasmania* |
This paper investigates the changing network of financial markets between Asian markets and those of the rest of the world during 2003-2017 to capture both the direction and strength of the links between them. Because each market chooses whether to connect with emerging markets as a bridge to the wider network, there are advantages for emerging markets in having access to this bridge for protection during periods of financial stress, and both parties gain by overcoming the information asymmetry between emerging and global markets. We analyze networks for four key periods, capturing networks in financial markets before and after the Asian financial crisis and the global financial crisis. Increased connections during crisis periods are evident, as well as a general deepening of the global network. The evidence on Asian market developments suggests caution is needed on regulations proposing methods to create stable networks, because they may result in reduced opportunities for emerging markets. | |
New Tests of Expectation Formation with Applications to Asset Pricing Models | Pei Kuang; University of Birmingham | The paper develops new tests of expectation formation which are generally applicable in financial and macroeconomic models. The tests utilize cointegration restrictions among forecasts of model variables. Survey data suggests forecasts of stock prices are not cointegrated with forecasts of consumption and rejects this aspect of the formation of stock price expectations in a wide range of asset pricing models, including rational expectations and various learning or sentiment-based models. We show adding sentiment (or judgment) directly to subjective stock price forecasts can reconcile equity pricing models with the new survey evidence. | |||
2019/6/2 |
Room #2 | Identification II (10:10-11:40) | A Flexible Parametric Method and Identification for Nonlinear Models with Endogeneity | Myoung-Jin Keay; South Dakota State University | I present a flexible parametric approach to models with multiple discrete endogenous explanatory variables (EEVs). A likelihood function for the dependent variable and the discrete EEVs can be constructed using copulae. Various copulae enable us to approximate the population with more flexibility than the usual parametric models do. Under the assumption of independence among the EEVs conditional on the dependent variable, the joint distribution can be written without specifying the distributions among the EEVs. There are two advantages to this approach: it helps us focus on the causal effects of each EEV on the dependent variable, and identification is achieved even if there are multiple EEVs but only a single available instrumental variable. This makes multiple-EEV analysis feasible for randomized experiments, which typically provide a single instrumental variable by design. Monte Carlo simulations show that the MLEs with the true copula function give the highest likelihood values, which facilitates the choice of copula. |
Chair: Hiroyuki Kasahara; University of British Columbia and Hitotsubashi University | Identification and Estimation of Sequential Games of Incomplete Information with Multiple Equilibria | Jangsu Yoon; University of Wisconsin-Milwaukee | This paper discusses identification and estimation of game theoretic models mainly focusing on sequential games of incomplete information. In the current work, I specify a sequential game allowing for multiple players in each stage and multiple Perfect Bayesian Nash Equilibria, showing that the structural parameters including the payoff functions, the order of actions, and equilibrium selection mechanism are nonparametrically identified. Next, I consider a sequential game version of Hotz and Miller (1993)’s two step estimator and verify its asymptotic properties. Compared with models of simultaneous games, my structural modeling can be applied to a broader set of economic settings such as sequential entry games or bargaining games among groups of players. Finally, I propose a specification test as a complementary step of the literature in simultaneous games to justify the order of actions being correctly specified. A concise Monte Carlo simulation result is provided to evaluate the performance of the estimator and the specification test. | ||
Treatment Effect Models with Strategic Interaction in Treatment Decisions | Tadao Hoshino; Waseda University* Takahide Yanagi; Kyoto University |
This study develops identification and estimation methods for treatment effect models with strategic interaction in treatment decisions. We consider models where one's treatment choice and outcome can be endogenously affected by others' treatment choices. We formulate the interaction of treatment decisions as a two-player complete information game with potential multiple equilibria. For this model, under the assumption of a stochastic equilibrium selection rule, we prove that the marginal treatment effect (MTE) from one's own treatment and that from his/her partner's can be separately point-identified using a latent index framework. Based on our constructive identification results, we propose a two-step semiparametric procedure for estimating the MTE parameters using series approximation. We show that the proposed estimator is uniformly consistent with the optimal convergence rate and has asymptotic normality. | |||
Identification of Regression Models with a Misclassified and Endogenous Binary Regressor | Hiroyuki
Kasahara; University of British Columbia and Hitotsubashi University* Katsumi Shimotsu; University of Tokyo |
We study identification in nonparametric regression models with a misclassified and endogenous binary regressor when an instrument is correlated with misclassification. We show that the regression function is nonparametrically identified if one binary instrumental variable and one binary covariate that satisfy the following conditions are present. The instrumental variable (IV) corrects endogeneity: the IV must be correlated with the unobserved true underlying binary variable and uncorrelated with the error in the outcome equation, but is allowed to be correlated with the misclassification error. The covariate corrects misclassification: this variable can be one of the regressors in the outcome equation, must be correlated with the unobserved true underlying binary variable, but must be uncorrelated with the misclassification error. We also propose a mixture-based framework for modeling unobserved heterogeneous treatment effects with a misclassified and endogenous binary regressor, and show that the distribution of treatment effects can be identified if the true treatment effect is related to an observed regressor and another observable variable. |||
2019/6/2 |
Room #4 | Time Series II (10:10-11:40) | A max-correlation white noise test for weakly dependent time series | Jonathan
Hill; University of North Carolina Kaiji Motegi; Kobe University* |
This paper presents a bootstrapped p-value white noise test based on the maximum correlation, for a time series that may be weakly dependent under the null hypothesis. The time series may be prefiltered residuals. The test statistic is a normalized weighted maximum sample correlation coefficient, where the maximum lag increases at a rate slower than the sample size. We only require uncorrelatedness under the null hypothesis, along with a moment contraction dependence property that includes mixing and non-mixing sequences. We show that Shao's (2011) dependent wild bootstrap is valid for a much larger class of processes than originally considered. It is also valid for residuals from a general class of parametric models, as long as the bootstrap is applied to a first-order expansion of the sample correlation. We prove that the bootstrap is asymptotically valid without exploiting extreme value theory (standard in the literature) or recent Gaussian approximation theory. Finally, we extend Escanciano and Lobato's (2009) automatic maximum lag selection to our setting with an unbounded lag set that ensures a consistent white noise test, and find that it works extremely well in controlled experiments. |
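To fix ideas, here is a minimal sketch of the two ingredients named above: a (here unweighted) normalized maximum sample correlation over a lag set growing slower than the sample size, and a dependent wild bootstrap with block-constant Gaussian multipliers. The multiplier design, lag rule, and tuning constants are illustrative choices, not the authors' exact recipe.

```python
# Minimal sketch of a max-correlation white noise test with a dependent wild
# bootstrap; block length, lag rule, and multiplier design are illustrative.
import numpy as np

def max_corr_stat(x, max_lag):
    n = len(x)
    xc = x - x.mean()
    denom = (xc ** 2).sum()
    rho = np.array([(xc[:n - h] * xc[h:]).sum() / denom for h in range(1, max_lag + 1)])
    return np.sqrt(n) * np.abs(rho).max()

def dwb_pvalue(x, max_lag, block=10, reps=999, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    stat = max_corr_stat(x, max_lag)
    boot = np.empty(reps)
    for b in range(reps):
        # dependent multipliers: constant within blocks, iid N(0,1) across blocks
        w = np.repeat(rng.normal(size=n // block + 1), block)[:n]
        xs = x.mean() + (x - x.mean()) * w
        boot[b] = max_corr_stat(xs, max_lag)
    return stat, (boot >= stat).mean()

rng = np.random.default_rng(1)
x = rng.normal(size=500)                  # white noise under the null
L = int(np.sqrt(len(x)))                  # lag set growing slower than n
print(dwb_pvalue(x, L))
```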
Chair: Muneya Matsui; Nanzan University | Frequency-wise causality analysis in infinite order vector autoregressive processes | Ryo
Kinoshita; Tokyo Keizai University* Kosuke Oya; Osaka University Mototsugu Shintani; University of Tokyo |
Since the influential work of Granger (1969), his definition of causality has been one of the most popular concepts in the analysis of multiple economic time series. While various statistical procedures have been proposed to conduct inference regarding Granger causality, they often rely on the assumptions of parametric time series models, such as a vector autoregressive (VAR) model or a vector autoregressive moving-average (VARMA) model of some fixed order. In this paper, we investigate the statistical properties of frequency-domain causality measures and their testing procedures for more general multiple time series described by infinite-order VAR models. Thus, our approach is less subject to problems caused by possible misspecification of VAR and VARMA models. In the estimation of frequency-domain causality measures, a VAR model of infinite order is approximated by letting its order increase with the sample size. This idea of using a sieve approximation in estimating a general linear process has long been used in the time series literature, including the test of no Granger causality considered by Lutkepohl and Poskitt (1996). To the best of our knowledge, however, it has not been used in the inference of frequency-domain causality measures. Causality measures at a particular frequency have been proposed by Geweke (1982) and Hosoya (1991). Their measures are defined as a nonlinear transformation of a multiple spectral density function. Yao and Hosoya (2000) developed statistical inference for the frequency-domain causality measure of Hosoya (1991). Breitung and Candelon (2006) propose a simple test for a zero restriction on the frequency-domain causality measure at a particular frequency using a bivariate VAR model of finite order. Statistical inference on the frequency-domain causality measure has also been considered using a VARMA model with known order by Hosoya et al. (2017). Unlike the previous studies listed above, we adopt the asymptotic framework developed by Berk (1974) and its multivariate extension considered by Lewis and Reinsel (1985) to establish the asymptotic properties of the frequency-domain causality measures computed from an infinite-order VAR model. Since the VAR sieve estimator of the spectral density matrix at a particular frequency converges at the rate $\sqrt{T/p}$, which is slower than $\sqrt{T}$, the asymptotic properties of our statistics differ from those based on a VAR model of finite order. We begin our analysis by developing a simple test statistic for a zero causality measure, analogous to the one proposed by Breitung and Candelon (2006). We then use our asymptotic results to construct confidence intervals for the causality measure at a particular frequency. Finally, we consider testing procedures to detect possible structural breaks in the causality measure at some frequencies. We investigate the finite-sample properties of the statistical inference procedures by Monte Carlo simulations, and an empirical example with financial data is presented. | ||
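The sieve idea above can be sketched as follows: fit a VAR whose order grows slowly with the sample size, then read frequency-domain objects off the fitted model; the causality measures of Geweke (1982) and Hosoya (1991) are nonlinear transforms of the spectral density matrix computed here. The order rule and the simulated data are illustrative assumptions.

```python
# Minimal sketch of the VAR sieve idea: fit a VAR whose order grows slowly
# with T, then compute the implied spectral density matrix at frequency w.
# The order rule T**(1/3) and the simulated VAR(1) are illustrative.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 800
e = rng.normal(size=(T, 2))
y = np.zeros((T, 2))
for t in range(1, T):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.3 * y[t - 1, 1] + e[t, 0]
    y[t, 1] = 0.4 * y[t - 1, 1] + e[t, 1]

p = max(1, int(T ** (1 / 3)))          # sieve order increasing with sample size
res = VAR(y).fit(p)                    # with ic=None, fits exactly p lags

def spectral_density(res, w):
    """f(w) = (1/2pi) A(e^{-iw})^{-1} Sigma_u A(e^{-iw})^{-H} for a fitted VAR."""
    k = res.neqs
    A = np.eye(k, dtype=complex)
    for j, Aj in enumerate(res.coefs, start=1):
        A -= Aj * np.exp(-1j * w * j)
    Ainv = np.linalg.inv(A)
    return Ainv @ res.sigma_u @ Ainv.conj().T / (2 * np.pi)

print(spectral_density(res, w=np.pi / 4).round(3))
```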
Econometric Analysis of Functional Dynamics in the Presence of Persistence | Yoosoon
Chang; Indiana University Bo Hu; Peking University* Joon Park; Indiana University |
We introduce an autoregressive model for functional time series with unit roots. The autoregressive operator can be consistently estimated, but its convergence rate and limit distribution are different in different subspaces. In the unit root subspace, the convergence rate is fast and given by $T$. Outside the unit root subspace, however, the limit distribution is Gaussian, although the convergence rate varies and is given by $\sqrt{T}$ or a slower rate. The predictor based on the estimated autoregressive operator has a normal limit distribution with a reduced rate of convergence. We also provide the Beveridge-Nelson decomposition, which identifies the permanent and transitory components of functional time series with unit roots, representing persistent stochastic trends and stationary cyclical movements, respectively. Using our methodology and theory, we analyze the time series of yield curves and study the dynamics of the term structure of interest rates. | |||
Characterization of the tail behavior of a class of BEKK processes: A stochastic recurrence equation approach | Muneya Matsui; Nanzan University | We provide new, mild conditions for strict stationarity and ergodicity of a class of BEKK processes. By exploiting the fact that the processes can be represented as multivariate stochastic recurrence equations, we characterize the tail behavior of the associated stationary laws. Specifically, we show that each component of the BEKK processes is regularly varying with some tail index. In general, the tail index differs across components, which contrasts with most of the existing literature on the tail behavior of multivariate GARCH processes. |||
2019/6/2 |
Room #1 | Invited Speaker
(11:50-12:35) Chair: Yoshihiko Nishiyama; Kyoto University |
Long-Range Dependent Curve Time Series | Degui
Li; University of York Peter M. Robinson; London School of Economics* Han Lin Shang; Australian National University |
We introduce methods and theory for functional or curve time series with long-range dependence. The temporal sum of the curve process is shown to be asymptotically normally distributed, the conditions for this covering a functional version of fractionally integrated autoregressive moving averages. We also construct an estimate of the long-run covariance function, which we use, via functional principal component analysis, in estimating the orthonormal functions spanning the dominant sub-space of the curves. In a semiparametric context, we propose an estimate of the memory parameter, and establish its consistency. A Monte-Carlo study of finite-sample performance is included, along with two empirical applications. The first of these finds a degree of stability and persistence in intra-day stock returns. The second finds similarity in the extent of long memory in age-specific fertility rates across some developed nations. |
Lunch (12:35-13:40) | |||||
Room #1 | News Shocks (13:40-15:10) | A "Bad Beta, Good Beta" Anatomy of Currency Risk Premiums and Trading Strategies | I-Hsuan
Ethan Chiang; University of North Carolina Xi Mo; University of North Carolina* |
We test a two-beta currency pricing model that features betas with risk-premium news and real-rate news of the currency market. Unconditionally, beta with risk-premium news is "bad" because of its significantly positive price of risk (2.52% per year); beta with real-rate news is "good" due to its nearly zero or negative price of risk. The price of risk-premium beta risk is counter-cyclical, while the price of real-rate beta risk is pro-cyclical. Most prevailing currency trading strategies either have excessive "bad beta" or too little "good beta," failing to deliver abnormal performance. | |
Chair: Etsuro Shioji; Hitotsubashi University | The BOJ’s ETF Purchases and Its Effects on Nikkei 225 Stocks? | Kimie
Harada; Chuo University Tatsuyoshi Okimoto; Australian National University* |
This paper examines the impacts of the Bank of Japan’s (BOJ) exchange-traded fund (ETF) purchasing program, which has been conducted since December 2010; this program is a part of the BOJ’s unconventional monetary policy and has accelerated after the introduction of Quantitative and Qualitative Easing in April 2013. In this study, the influence on underlying stocks is assessed by comparing the performance of stocks included in the Nikkei 225 with that of other stocks using a difference-in-difference analysis. We also separate morning and afternoon returns to control for the fact that the BOJ tends to purchase ETFs when the stock market performs poorly in the morning session. We find that the Nikkei 225 component stocks’ afternoon returns are significantly higher than those of non-Nikkei 225 stocks when the BOJ purchases ETFs. In addition, the subsample analysis demonstrates that the impact on Nikkei 225 stock returns becomes smaller over time despite the growing purchase amounts. Overall, our results indicate that the cumulative treatment effects on the Nikkei 225 are around 20% as of October 2017. | ||
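A schematic version of the difference-in-difference comparison described above, regressing afternoon returns on Nikkei 225 membership, a BOJ purchase-day indicator, and their interaction; the data frame and column names are hypothetical placeholders, and the interaction coefficient plays the role of the treatment effect.

```python
# Schematic diff-in-diff regression for the BOJ ETF study: afternoon returns
# on Nikkei-225 membership, a BOJ purchase-day indicator, and the interaction.
# All data are simulated and the column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "nikkei225": rng.integers(0, 2, n),    # 1 if the stock is a Nikkei 225 member
    "boj_buy": rng.integers(0, 2, n),      # 1 if the BOJ purchased ETFs that day
})
# Simulated afternoon returns with a positive treatment interaction
df["ret_pm"] = (0.02 * df["nikkei225"] * df["boj_buy"]
                + 0.01 * df["boj_buy"] + rng.normal(scale=0.5, size=n))

# The interaction coefficient is the diff-in-diff treatment effect
fit = smf.ols("ret_pm ~ nikkei225 * boj_buy", data=df).fit(cov_type="HC1")
print(fit.params["nikkei225:boj_buy"])
```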
Identifying Monetary Policy and Central Bank Information Shocks Using High-frequency Exchange Rates | Oliver
Holtemoeller; Martin-Luther University and Halle Institute for Economic
Research Alexander Kriwoluzky; Freie University Berlin and German Institute for Economic Research Boreum Kwak; Martin-Luther University and Halle Institute for Economic Research* |
This work investigates the effects of conventional monetary policy and central bank information shocks from monetary policy announcements on the U.S. economy. We identify the surprises caused by changes in the target rate and by the central bank’s private information, as embedded in high-frequency exchange rate responses around policy announcements. Our identification strategy is based on the observation that the effect of conventional monetary policy is restricted during the zero lower bound (ZLB) period, whereas the central bank information effect becomes quantitatively more significant. We investigate the impact of the identified monetary policy and central bank information shocks on macro variables using a proxy SVAR. A contractionary monetary policy shock clearly decreases output and the price level. A positive information shock, which also induces an increase in the interest rate, is perceived by private agents as a positive signal about future economic conditions, and induces an increase in output and an easing of financial conditions. |||
Pass-through of oil supply shocks to domestic gasoline prices: evidence from daily data | Etsuro Shioji; Hitotsubashi University | Oil prices may have different effects on domestic prices depending on the nature of the source of their changes. This paper develops a new approach based on the Structural VAR with External Instruments (SVAR-IV or proxy-VAR) coupled with High Frequency Identification (HFI) to identify shocks to expected future supply of crude oil, and to estimate their impacts on an importer country’s prices. This methodology is applied to daily data on gasoline taken from a Japanese price comparison web site. The result indicates a rather fast pass-through, suggesting a high value of utilizing daily observations. | |||
2019/6/2 |
Room #2 | Semi- and Non-parametric Method (13:40-15:10) | Simple Semiparametric Estimation of Ordered Response Models | Ruixuan
Liu; Emory University Zhengfei Yu; University of Tsukuba* |
We propose two simple semiparametric estimation methods for ordered response models with an unknown error distribution. The proposed methods do not require users to choose any tuning parameter and they automatically incorporate the monotonicity restriction of the unknown distribution function. Fixing finite dimensional parameters in the model, we construct nonparametric maximum likelihood estimates (NPMLE) for the error distribution based on the related binary choice data or the entire ordered response data. We then obtain estimates for finite dimensional parameters based on moment conditions given the estimated distribution function. Our semiparametric approaches deliver root-n consistent and asymptotically normal estimators of the regression coefficients and threshold parameter. We also develop valid bootstrap procedures for inference. We apply our methods to the interdependent durations model in Honore and de Paula (2010), where the social interaction effect is directly related to the threshold parameter in the corresponding ordered response model. The advantages of our methods are borne out in simulation studies and a real data application to the joint retirement decision of married couples. |
Chair: Sung-Jin Cho; Seoul National University | Semi-parametric Single-index Predictive Regression | Weilun
Zhou; Monash University Jiti Gao; Monash University David Harris; University of Melbourne Hsein Kew; Monash University* |
This paper studies a semi-parametric single-index predictive model with multiple integrated predictors that exhibit a cointegrating relationship. An orthogonal series expansion is employed to approximate the unknown link function in the predictive model, and the estimator is derived from an optimization under the constraint of the identification condition for the index parameter. Our main findings include two types of super-consistency rates for the estimators of the index parameter along two orthogonal directions in a new coordinate system. A central limit theorem is established for a plug-in estimator of the unknown link function. In an empirical application, we apply our single-index predictive model to re-examine stock return predictability in the United States. We present new evidence that quarterly U.S. stock market returns are nonlinearly predictable when we account for cointegration among integrated predictors over the 1927-2017 period and the post-1952 period. | ||
Semi-parametric instrument-free demand estimation: relaxing optimality and equilibrium assumptions | Sung-Jin
Cho; Seoul National University* John Rust; Georgetown University |
We analyze the problem of demand estimation when consumer demand is characterized as a stochastic process that results from a compound arrival/choice process: consumers arrive at a market according to a stochastic arrival process and make independent discrete choices of which of several alternatives to purchase. Overall demand is derived from microaggregation of individual consumer choices, and thus does not lead to the simple static linear aggregate “demand curve” that has traditionally been used in the literature on demand estimation. We are interested in estimating more realistic stochastic nonlinear models of the demand for hotels in order to study the pricing decisions of a particular luxury hotel, “hotel 0”, located in a major US city. There is substantial weekly and seasonal variation in arrival rates of customers wishing to book rooms at one of the seven hotels in the local market in which this hotel operates. Given limited capacity, the variation in customer demand leads to strong positive correlation between hotel prices and occupancy, since the hotels raise prices substantially on days they expect to sell out and set much lower prices on days when they expect to have unsold rooms. The endogeneity of pricing decisions results in upward-sloping demand curves when ordinary least squares is used to estimate a traditional linear model of demand. We show there are no obvious instrumental variables that successfully deal with the endogeneity problem using the standard instrumental variables estimation approach. We introduce a semi-parametric two-step method of simulated moments estimator that can consistently estimate the parameters of the stochastic process for consumer demand, that is “instrument-free”, and that does not rely on the maintained assumption that hotel prices are in equilibrium, or even the assumption that individual hotels set their prices optimally. We use our estimator to test the hypothesis that hotel 0 is setting its prices optimally. We reject this hypothesis and show how dynamic programming can be used to set optimal dynamic prices that significantly increase the hotel’s profits. |||
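A minimal simulator for the compound arrival/choice process described above, assuming Poisson arrivals and logit choice probabilities; the arrival rate, price sensitivity, and price vector are made-up values, and the paper's estimator targets the parameters of such a process rather than a static demand curve.

```python
# Minimal simulator for a compound arrival/choice demand process: Poisson
# customer arrivals, then an independent logit choice among hotels. The
# arrival rate, price sensitivity, and prices are assumed values.
import numpy as np

rng = np.random.default_rng(0)

def daily_demand(prices, lam=50.0, beta=0.02, outside=0.0):
    """Bookings per hotel on one day, given that day's price vector."""
    n_arrivals = rng.poisson(lam)                       # stochastic arrivals
    v = np.append(-beta * np.asarray(prices), outside)  # utilities incl. no-purchase
    p = np.exp(v) / np.exp(v).sum()                     # logit choice probabilities
    counts = rng.multinomial(n_arrivals, p)
    return counts[:-1]                                  # drop the outside option

prices = [220, 180, 190, 240, 200, 210, 230]            # 7 hotels, hotel 0 first
print(daily_demand(prices))
```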
2019/6/2 |
Room #3 | Empirical Trade II (13:40-15:10) | Can RTA Labor Provisions Prevent the Deterioration of Domestic Labor Standards? | Isao Kamata; University of Niigata Prefecture | This study investigates whether labor clauses in regional trade agreements (RTAs) are effective in maintaining or improving domestic labor standards in the signatory countries. The effects of RTA labor clauses on two measures of labor standards, statutory minimum wages and the strictness of employment protection, are empirically analyzed using a unique dataset that classifies the population of effective RTAs into those with and without labor clauses, together with multi-year data on minimum wages and an indicator of employment-protection strictness for a wide variety of countries. The results show that having labor-clause-free RTAs with more or larger trading partners is associated with lower statutory minimum wages, although this negative association is not found for labor-clause-inclusive RTAs. Separate estimation for countries in different income groups further demonstrates that this result is chiefly driven by middle-income countries that sign RTAs with high-income partners. This implies that signing RTAs with more or larger high-income trading partners creates downward policy pressure on statutory minimum wages for the government of a middle-income country, which has a comparative advantage over its high-income partners in labor-intensive sectors, whereas labor clauses could alleviate such a negative policy effect of RTAs on minimum wages. This finding contrasts with the case of actual wages, for which no evidence of an impact of RTAs with or without labor clauses is found, reaffirming that labor-clause-free RTAs could create downward policy pressure on statutory minimum wages while RTAs might not bring market pressure on actual wages regardless of whether or not they include labor clauses. Finally, unlike the case of statutory minimum wages, the empirical analysis finds no clear evidence of potential impacts of RTAs, either with or without labor clauses, on the strictness of employment protection in the signatory countries. |
Chair: ByeongHwa Choi; National Taiwan University | Effects of Trade Liberalization on the Gender Wage Gap: Evidence from Panel Data of the Indian Manufacturing Sector | Manabu
Furuta; Aichi Gakuin University* Prabir Bhattacharya; Heriot-Watt University Takahiro Sato; Kobe University |
This paper examines the effects of trade liberalization on the gender wage gap in the Indian manufacturing sector during the period 2000 to 2007. We find that trade liberalization has had the effect of widening the gender wage gap in the labour-intensive, but not in the capital-intensive, industries. The explanations offered for the widening gender wage gap are in terms of the Stolper-Samuelson effect and trade-induced skill biased technical change. Policy implications of the findings are noted. | ||
Does Population Aging Affect Trade Policy Preferences? Evidence from a Survey Experiment in Japan | ByeongHwa Choi; National Taiwan University | The recent backlash against globalization in many countries raises questions about the source of this protectionist sentiment. We argue that information interacts with individuals' characteristics to play a role in individuals' assessments of the benefits of trade. Population aging has created a more challenging environment for assessing the effects of trade. We find that knowledge about population aging issues can mitigate the impact of age on trade preferences. We construct original Japanese survey experiment data that contain measures of individuals' attitudes towards population aging and trade policy preferences. We find that elderly people are less likely to support import restrictions. However, in prefectures where the population is aging rapidly, elderly people are more likely to support import restrictions, and more so when they are exposed to the population aging issue from the perspective of producers. The results indicate that, in the face of a rapidly aging society, the elderly are pushed back into the labor market to compete for low-paid, low-skill jobs and are thus more concerned than ever about the negative impacts of trade liberalization on the labor market. This study has important implications for trade policy choices in an aging society by offering new predictors of individuals' preferences regarding trade protection. |||
2019/6/2 |
Room #4 | Panel Data (13:40-15:10) | Latent Group Structures with Heterogeneous Distributions: Identification and Estimation | Heng
Chen; Bank of Canada* Xuan Leng; Erasmus University Rotterdam Wendun Wang; Erasmus University Rotterdam |
Panel data are often characterized by cross-sectional heterogeneity, and a flexible yet parsimonious way of modeling heterogeneity is to cluster units into groups. A group pattern of heterogeneity may exist not only in the mean but also in the other characteristics of the distribution. To identify latent groups and recover the heterogeneous distribution, we propose a clustering method based on composite quantile regressions. We show that combining the strength across multiple panel quantile regression models improves the precision of the group membership estimates if the group structure is common across quantiles. Asymptotic theories for the proposed estimators are established, while their finite-sample performance is demonstrated by simulations. We finally apply the proposed methods to analyze the cross-country output effect of infrastructure capital. |
Chair: Cindy S.H. Wang; National Tsing Hua University | Identification and Estimation of Time-Varying Nonseparable Panel Data Models without Stayers | Takuya Ishihara | This paper explores the identification and estimation of nonseparable panel data models. We show that the structural function is nonparametrically identified when it is strictly increasing in a scalar unobservable variable, the conditional distributions of unobservable variables do not change over time, and the joint support of explanatory variables satisfies some weak assumptions. To identify the target parameters, existing studies assume that the structural function does not change over time and that there are "stayers", namely individuals with the same regressor values in two time periods. Our approach, by contrast, allows the structural function to depend on the time period in an arbitrary manner and does not require the existence of stayers. In the estimation part of the paper, we consider parametric models and develop an estimator that implements our identification results. We then show the consistency and asymptotic normality of our estimator. Monte Carlo studies indicate that our estimator performs well in finite samples. Finally, we extend our identification results to models with discrete outcomes, and show that the structural function is partially identified. | ||
Market Integration, Systemic Risk and Diagnostic Tests in Large Mixed Panels | Cindy S.H. Wang; National Tsing Hua University | This study investigates an AR (autoregressive)-filtered version of several conventional diagnostic tests for cross-sectional dependence in large mixed panels, including the adjusted LM test, the CD test, and the Schott test. We show that the modified tests asymptotically follow the standard normal distribution. The distinctive feature of these new tests is their simple implementation, even though the exact time series properties of each component of a mixed panel are unknown or unobservable in practice. Simulations show that the AR-filtered version of the CD test ($CD_{AR}$) performs best among the considered testing procedures in finite samples and computation time, especially in cases with a large cross-sectional dimension. We also provide a new perspective on the role of the $CD_{AR}$ statistic as an early warning indicator of market risk or crisis. |||
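A minimal sketch of the AR-filtering idea: fit a univariate autoregression to each unit and apply Pesaran's CD statistic to the residual cross-correlations; the fixed AR order here is an illustrative choice, not the paper's order-selection rule.

```python
# Minimal sketch of an AR-filtered CD test: fit a univariate AR to each unit,
# then apply Pesaran's CD statistic to the residuals. The fixed AR order is
# an illustrative choice.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def cd_ar(panel, lags=1):
    """panel: T x N array. Returns the CD statistic computed on AR residuals."""
    T, N = panel.shape
    resid = np.column_stack([AutoReg(panel[:, i], lags=lags).fit().resid
                             for i in range(N)])
    R = np.corrcoef(resid, rowvar=False)
    iu = np.triu_indices(N, k=1)
    Te = resid.shape[0]
    return np.sqrt(2.0 * Te / (N * (N - 1))) * R[iu].sum()  # approx N(0,1) under H0

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 20))             # cross-sectionally independent panel
print(cd_ar(x))
```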
2019/6/2 |
Room #1 | Empirical Macroeconomics (15:20-16:50) | Non-linear Effects of Fiscal Adjustments on Output Growth: Does Uncertain Environment Matter? | Siew-Voon Soon; University of Malaya | This paper proposes a Markov-switching model to assess fiscal adjustments over the period 1990Q1-2018Q3. Our results indicate that output growth responds asymmetrically to changes in the budget position, but only during the stable regime: a positive change in the budget position dampens output growth, and the effect switches from negative to positive as the budget position improves. The role of fiscal adjustment is only a short-run phenomenon and is state dependent. Uncertainty is found to raise output growth temporarily, but it is a threat to the country in the long term. |
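One way to cast a regime-switching specification like the one above in code is statsmodels' MarkovRegression, sketched below on simulated data; the two-regime design, the single fiscal regressor, and all numbers are illustrative assumptions, not the paper's model.

```python
# Sketch: a two-regime Markov-switching regression of output growth on a
# fiscal-balance change. Specification and simulated data are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 300
budget = rng.normal(size=T)                           # change in budget position
coef = np.where(np.arange(T) < T // 2, -0.3, 0.1)     # effect differs across spans
growth = 0.5 + coef * budget + rng.normal(scale=0.5, size=T)

mod = sm.tsa.MarkovRegression(growth, k_regimes=2, exog=budget,
                              switching_variance=True)
res = mod.fit()
print(res.summary())
```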
Chair: Travis Berge; Federal Reserve Board | Handling unobserved innovation | Ping-Sheng
Koh; HKUST David Reeb; National University of Singapore Elvira Sojli; University of New South Wales* Wing Wah Tham; University of New South Wales Wendun Wang; Erasmus University Rotterdam |
Most US-listed firms neither report R&D spending nor seek patents, even though many of them actively engage in research activity. We investigate the reliability of different methods for handling unreported innovation and consider the econometric implications of these common approaches. Using reconciled R&D data, we document that well-known innovation covariates are correlated with the failure to report R&D and that deleting firms without reported innovation leads to biased parameter estimates. A series of simulations, based on both the empirical distribution of Compustat data and on simulated data, provides a rank ordering of different methods of handling unreported R&D, ranging from the worst (deletion) to the best (multiple imputation). We also replicate an influential study, demonstrating how different approaches to unreported innovation affect empirical inferences. Finally, we provide guidance on handling unobserved innovation in studies using R&D spending, patent counts, and patent citations. | ||
Are all output gap estimates unstable in real time? | Alessandro
Barbarino; Federal Reserve Board Travis Berge; Federal Reserve Board* Han Chen; Board of Governors of the Federal Reserve Andrea Stella; Federal Reserve Board |
The output gap estimate produced by the Federal Reserve Staff is known to be more reliably estimated in real time than univariate de-trending models used in the literature (Edge and Rudd, 2016). The purpose of this paper is to understand why. We estimate several multivariate unobserved-component (UC) models of the economy and show that the real-time stability of the Federal Reserve estimates is likely due to the use of labor market data to inform the estimation of the cyclical state of the economy. We find that a simple two equation UC model that estimates the output gap using output and the unemployment rate produces real-time output gap estimates with time-series and revision properties very similar to the Federal Reserve staff's judgmental estimate. We also investigate the usefulness of the output gap estimates when forecasting inflation. | |||
2019/6/2 |
Room #2 | Structural Change (15:20-16:50) | Testing for observation-dependent regime switching in mixture autoregressive models | Mika
Meitz; University of Helsinki* Pentti Saikkonen; University of Helsinki |
Testing for regime switching when the regime switching probabilities are specified either as constants ("mixture models") or are governed by a finite-state Markov chain ("Markov switching models") is a long-standing problem that has also attracted recent interest. This paper considers testing for regime switching when the regime switching probabilities are time-varying and depend on observed data ("observation-dependent regime switching"). Specifically, we consider the likelihood ratio test for observation-dependent regime switching in mixture autoregressive models. The testing problem is highly nonstandard, involving unidentified nuisance parameters under the null, parameters on the boundary, singular information matrices, and higher-order approximations of the log-likelihood. We derive the asymptotic null distribution of the likelihood ratio test statistic in a general mixture autoregressive setting using high-level conditions that allow for various forms of dependence of the regime switching probabilities on past observations, and we illustrate the theory using two particular mixture autoregressive models. The likelihood ratio test has a nonstandard asymptotic distribution that can easily be simulated, and Monte Carlo studies show the test to have satisfactory finite sample size and power properties. |
Chair: Eiji Kurozumi; Hitotsubashi University | Regime switches and permanent changes in impacts of housing risk factors on MSA-level housing returns | Meichi Huang; National Taipei University | This study assesses the impacts of housing risk factors on metropolitan housing excess returns in an augmented housing pricing model. Credit and liquidity factors represent the demand and supply sides, respectively, of housing markets. All MSAs exhibit significant regime switches, and booms and busts are turbulent and influenced by the two factors. The explanatory power of the liquidity factor diminishes for the boom-bust and regime-switching volatility specifications, and excess housing returns are less sensitive to nationwide housing dynamics after 2008. This study lends support to time-varying exposures of housing excess returns to risk factors in terms of temporary switches and permanent changes. | ||
Monitoring Parameter Changes in Models with a Trend | Peiyun
Jiang; Hitotsubashi University* Eiji Kurozumi; Hitotsubashi University |
In this paper, we develop a monitoring procedure for detecting structural changes in models with a trend. The procedure is based on the cumulative sum (CUSUM) of the ordinary least squares residuals, and a proper boundary function is designed to control the size. We derive the asymptotic distribution of the detection statistic under the null hypothesis, while proving the consistency of the procedure under the alternative. In addition, we derive the asymptotic distribution of the delay time for the CUSUM procedure as well as for the fluctuation procedure proposed by Qi et al. (2016). We then compare these two monitoring procedures in a small simulation study, and the results indicate that although neither procedure is uniformly superior to the other, the CUSUM test is more suitable for an early break. An empirical example is provided to support the theoretical analyses. |||
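The monitoring logic can be sketched as follows: estimate the trend model on a training sample, then track the CUSUM of post-sample residuals against a boundary function; the linear boundary and critical constant below are common textbook choices rather than the authors' trend-adjusted design.

```python
# Sketch of CUSUM monitoring on OLS residuals: estimate a trend model on a
# training sample, then compare the cumulative sum of monitoring-period
# residuals with a boundary. Boundary shape and constant are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 300                                  # training and monitoring spans
t = np.arange(m + n)
y = 1.0 + 0.05 * t + rng.normal(size=m + n)
y[m + 150:] += 2.0                               # a level break during monitoring

X = np.column_stack([np.ones(m + n), t])         # model with a linear trend
beta = np.linalg.lstsq(X[:m], y[:m], rcond=None)[0]
e = y - X @ beta
sig = e[:m].std(ddof=2)

for k in range(1, n + 1):
    detector = abs(e[m:m + k].sum()) / (sig * np.sqrt(m))
    boundary = 2.0 * (1.0 + k / m)               # illustrative critical constant
    if detector > boundary:
        print("break signalled at monitoring period", k)
        break
```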
2019/6/2 |
Room #3 | Other Empirical Studies (15:20-16:50) | Does the country-general heterogeneity in the hedonic price exist in Japan?: Penalised quantile regression analysis using GIS data | Yuya
Katafuchi; Graduate School of Economics, Kyushu University* Augusto Delgado Narro; Graduate School of Economics, Kyushu University |
Land price analysis remains an active research field in which new methods for quantifying the effects of economic characteristics (e.g. growth rate, inflation, interest rate) and non-economic characteristics (e.g. location, building characteristics) continually push the knowledge frontier. Nevertheless, most research so far focuses on measuring the causal effect on the mean value of land prices by the ordinary least squares (OLS) method, despite the possibility that covariates might affect the land price differently at each quantile, that is, causal effects might depend on the quantile of the land price distribution. Furthermore, most of the literature highlights the effects of a few accessibility, building-characteristic, and amenity variables on the land price using limited survey data, even though the development of geographic information systems (GIS) improves accessibility information to various facilities by positioning properties on the map in terms of their geographic coordinates and provides larger datasets; that is, GIS offers better quantity and quality of data. To identify heterogeneous causal effects on the land price, this paper applies the quantile regression (QR) method to the land price function, using GIS data for Japan. Our dataset includes micro-level characteristics in 2017, for example, land information, land usage, supply facilities, building characteristics, distances from basic facilities, and transportation variables. As the number of covariates is large, the penalised QR method with regularisation (the least absolute shrinkage and selection operator, LASSO, and the elastic net, EN) helps us obtain more accurate results in variable selection by adding penalty terms to the objective function. We find that QR with GIS data is crucial for obtaining detailed relationships between micro-level covariates and land prices, since the GIS data show that non-macroeconomic variables affect the land price heterogeneously at each quantile. Finally, there is evidence that heterogeneity in causal effects must be considered in hedonic price analysis, as QR, LASSO, and EN show that estimated causal effects depend significantly on the quantile of the land price distribution. Among these methods, the EN achieves the best goodness-of-fit because of its robust variable selection procedure in the case of a large sample with highly multicollinear covariates. |
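As a sketch of the penalised QR step, scikit-learn's QuantileRegressor fits the pinball loss with an L1 (LASSO-type) penalty; the elastic-net variant and the actual GIS covariates are not shown, and all data below are placeholders. The simulated design makes the first covariate's effect vary across quantiles.

```python
# Minimal sketch of L1-penalised quantile regression for a hedonic land-price
# equation; features and data are placeholders for the GIS covariates.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 500, 30
X = rng.normal(size=(n, p))
X[:, 0] = rng.uniform(0.0, 2.0, size=n)          # keep the scale shifter positive
eps = rng.normal(size=n)
# Location-scale design: the slope on X[:, 0] differs across quantiles
y = 1.0 + X[:, 0] + 0.5 * X[:, 1] + (1.0 + X[:, 0]) * eps

for tau in (0.1, 0.5, 0.9):
    fit = QuantileRegressor(quantile=tau, alpha=0.01, solver="highs").fit(X, y)
    n_kept = int((np.abs(fit.coef_) > 1e-8).sum())
    print(f"tau={tau}: beta_1={fit.coef_[0]:.2f}, {n_kept} covariates selected")
```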
Chair: Christian Otchia; Nagoya University | Business Cycle Spatial Synchronization: Measuring a Common Indicator | Shinya Fukui; Graduate School of Economics, Kobe University | With overall transportation costs decreasing, business cycle synchronization appears to be increasing. In this context, we estimate business cycle synchronization in the Asia-Pacific and European regions. We apply the spatial generalized autoregressive score (spatial GAS) method to measure time-varying business cycle synchronization. The estimated business cycle synchronization indicators show high positive values in periods of economic turmoil, such as the collapse of Lehman Brothers in 2008. With the recent increase in economic integration, exogenous shocks have increased business cycle synchronization among geographically and economically closer countries, and such shocks cause economic instability. | ||
Industrial growth with poverty reduction and equity? Evidence from nighttime lights data in Vietnam | Takahiro
Yamada; Ministry of Finance, Japan Christian Otchia; Nagoya University* |
Vietnam’s development after Doi Moi has been characterized by triple successes: a high economic growth rate, significant poverty reduction, and relatively low inequality. Employing provincial panel data from Vietnam for 2000-2010, this study examines the relationship between industrial growth and poverty reduction, taking initial conditions into consideration. To identify the effect of industrial growth on poverty reduction, we exploit nighttime lights satellite imagery as a proxy for industrial growth. We find that industrial sector outputs are a strong driver of poverty reduction compared to agricultural sector outputs. These results are robust across various poverty indicators. We further show that industrial growth is effective in reducing poverty in provinces with initially lower inequality, higher educational investment, and higher access to social allowances. |||
2019/6/2 |
Room #4 | Time Series III and Other (15:20-16:50) | Harmonically Weighted Processes | Uwe
Hassler; Goethe University Frankfurt* Mehdi Hosseinkouchack; Goethe University Frankfurt |
We discuss a model for long memory and persistence in time series that amounts to harmonically weighting short memory processes, $\sum_j x_{t-j}/(j+1)$. A nonstandard rate of convergence is required to establish a Gaussian functional central limit theorem. Theoretically, the harmonically weighted [HW] process displays less persistence and weaker memory than the classical competitor, fractional integration [FI] of order $d$. Still, we establish that a test rejects the null hypothesis of $d=0$ if the process is HW. Similarly, a bias approximation shows that estimators of $d$ will fail to distinguish between HW and FI given realistic sample sizes. The difficulties to disentangle HW and FI are illustrated experimentally and with U.S. inflation data. |
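The HW construction is easy to simulate directly from its definition $y_t=\sum_j x_{t-j}/(j+1)$; the truncation lag and the iid short-memory input below are assumptions for illustration, and the slowly decaying sample autocorrelations hint at the persistence the abstract describes.

```python
# Simulating a harmonically weighted [HW] process y_t = sum_j x_{t-j}/(j+1)
# from short-memory input; truncation length and input process are assumed.
import numpy as np

rng = np.random.default_rng(0)
T, J = 2000, 500                                # sample size and truncation lag
x = rng.normal(size=T + J)                      # short-memory (here iid) input
w = 1.0 / (np.arange(J + 1) + 1.0)              # harmonic weights 1/(j+1)
y = np.convolve(x, w, mode="valid")             # y_t = sum_{j<=J} x_{t-j}/(j+1)

def acf(z, h):
    zc = z - z.mean()
    return (zc[:-h] * zc[h:]).sum() / (zc ** 2).sum()

print([round(acf(y, h), 2) for h in (1, 5, 25, 100)])   # slowly decaying ACF
```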
Chair: Shingo NAKANISHI; Osaka Institute of Technology | Long monthly temperature series and the Vector Seasonal Shifting Mean and Covariance Autoregressive model | Changli
He; Tianjin University of Finance and Economics Jian Kang; Tianjin University of Finance and Economics Timo Terasvirta; Aarhus University* |
We consider a vector version of the Shifting Seasonal Mean Autoregressive model with changing error covariances. The model is used for describing dynamic behaviour of and contemporaneous dependence between a number of long monthly European temperature series extending from the second half of the 19th century until (practically) today. The results indicate strong warming in the winter months, February excluded, and cooling followed by warming during the summer months. Error variances show some interesting regularities. No clear pattern for changing correlations can be detected. | ||
Volatility Regressions with Fat Tails | Jihyun
Kim; Toulouse School of Economics* Nour Meddahi; Toulouse School of Economics |
Nowadays, a common practice to forecast integrated variance is to run simple OLS autoregressions on observed realized variance data. However, non-parametric estimates of the tail index of the realized variance process reveal that its second moment is possibly unbounded. In this case, the behavior of the OLS estimators and the corresponding statistics is unclear. We prove that when the second moment of the spot variance is unbounded, the slope of the spot variance's autoregression converges to a random variable as the sample size diverges. The same result holds when one considers either the integrated variance's autoregression or the realized variance's. We then consider a class of variance models based on diffusion processes with an affine drift, a class that includes GARCH and CEV processes, and we prove that IV estimation with adequate instruments provides consistent estimators of the drift parameters as long as the variance process has a finite first moment, regardless of the existence of a finite second moment. In particular, for the GARCH diffusion model with fat tails, an IV estimation where the instrument equals the sign of the (demeaned) lagged value of the variable of interest provides consistent estimators. Simulation results corroborate the theoretical findings of the paper. |||
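The sign-instrument idea is simple to state in code: instrument the lagged realized variance with the sign of its demeaned value, which stays bounded however heavy the tails are. The toy AR data below illustrate only the construction of the estimator; they do not reproduce the continuous-time setting in which the paper establishes the failure of OLS.

```python
# Sketch of the sign-instrument IV estimator for a realized-variance
# autoregression; the fat-tailed AR(1) is a stylized stand-in, not the
# paper's continuous-time model.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
rv = np.empty(T)
rv[0] = 1.0
for t in range(1, T):
    # AR(1) with fat-tailed positive innovations (t with df < 2: infinite variance)
    rv[t] = 0.2 + 0.7 * rv[t - 1] + np.abs(rng.standard_t(df=1.9))

x, y = rv[:-1], rv[1:]
z = np.sign(x - x.mean())                        # bounded sign instrument
b_iv = (z * (y - y.mean())).sum() / (z * (x - x.mean())).sum()
b_ols = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
print("OLS:", round(b_ols, 3), "sign-IV:", round(b_iv, 3))   # true slope is 0.7
```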
Symmetric Relations and Geometric Characterizations about Standard Normal Distribution by Circle and Square | Shingo
NAKANISHI; Osaka Institute of Technology* Masamitsu OHNISHI; Graduate School of Economics, Osaka University |
We are interested in the equilibrium relation in which the sum total of maximal profits for winners is equal to the cost borne by their banker, which occurs with probability of about 27 percent under the standard normal distribution. We investigate these characterizations by circle and square at the probability point found by Pearson: 0.612003. One of our approaches concerns integral forms of the cumulative distribution function of the standard normal distribution; the other is related to both the Mill's ratio and the inverse Mill's ratio. First, we clarify that the general solutions of three types of differential equations are geometrically related to a common mathematical formulation. Second, we show that ancient Egyptian drawing styles and the Pythagorean theorem are useful for illustrating them symmetrically and geometrically. Third, this method plays an important role not only in their symmetric relations but also in the intercept formulations for winners, losers, and the banker. |||
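One numerical reading of the 27-percent relation, offered only as an assumption consistent with the abstract's Mill's-ratio connection: the point $x \approx 0.612003$ solves $\varphi(x)=2x(1-\Phi(x))$, i.e., $x$ equals half the inverse Mill's ratio at $x$, and the probability $1-\Phi(x)$ then comes out near 27 percent.

```python
# Numerical check, assuming the point solves phi(x) = 2x(1 - Phi(x)); this
# reading of the abstract's relation is our assumption, not the paper's text.
from scipy.optimize import brentq
from scipy.stats import norm

f = lambda x: norm.pdf(x) - 2.0 * x * norm.sf(x)
x_star = brentq(f, 0.1, 2.0)
print(x_star)           # approx 0.612003
print(norm.sf(x_star))  # approx 0.2703, i.e. about 27 percent
```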
Room #1 | Invited Speaker
(17:00-17:45) Chair: Naoya Sueishi; Kobe University |
Jackknife multiplier bootstrap: finite sample approximations to the U-process supremum with applications | Kengo Kato; Cornell University | In this talk, I will discuss finite sample approximations to the supremum of a non-degenerate U-process of a general order indexed by a function class. We are primarily interested in situations where the function class as well as the underlying distribution change with the sample size, and the U-process itself is not weakly convergent as a process. We first consider Gaussian approximations and derive coupling and Kolmogorov distance bounds. Such Gaussian approximations are, however, not often directly usable in statistical problems since the covariance function of the approximating Gaussian process is unknown. This motivates us to study bootstrap-type approximations to the U-process supremum. We propose a novel jackknife multiplier bootstrap (JMB) tailored to the U-process, and derive coupling and Kolmogorov distance bounds for the proposed JMB method. We also discuss applications of the general approximation results to testing for qualitative features of nonparametric functions based on generalized local U-processes. This talk is based on joint work with Xiaohui Chen (UIUC). | |
Room #1 | Closing
(17:45-17:50) Chung-Ming Kuan; National Taiwan University (Chair of SETA Advisory Committee) |