James Nesbit

I am a fourth-year Ph.D. candidate in Economics at NYU. My research interests are in econometrics and machine learning, with a focus on incorporating machine learning techniques into structural econometric modelling and time-series analysis.


Research

Working Papers

"(Machine) Learning Parameter Regions", (with José Luis Montiel Olea)
Taking random draws from a parameter region in order to approximate its shape is a supervised learning problem (analogous to sampling pixels of an image to recognize it). Misclassification error, a common criterion in machine learning, provides an off-the-shelf tool to assess the quality of a given approximation. We say a parameter region can be learned if there is an algorithm that yields a misclassification error of at most \(\epsilon\) with probability at least \(1-\delta\), regardless of the sampling distribution. We show that learning a parameter region is possible if and only if it is not too complex. Moreover, the tightest band that contains a \(d\)-dimensional parameter region is always learnable from the inside (in a sense we make precise), with at least \((1-\epsilon)/\epsilon \ln(1/\delta)\) draws, but at most \((2d/\epsilon) \ln(2d/\delta)\) draws. We illustrate the usefulness of our results using structural vector autoregressions. We show how many orthogonal matrices are necessary or sufficient to evaluate the impulse responses' identified set, and how many 'shotgun plots' to report when conducting joint inference on impulse responses.
October 2018
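
The sufficient-draws bound above is easy to evaluate numerically. Below is a minimal Python sketch, not the paper's implementation: the values of \(d\), \(\epsilon\), and \(\delta\) and the box-shaped region are illustrative assumptions.

```python
import numpy as np

# Sufficient number of draws to learn the tightest band containing a
# d-dimensional region, with misclassification error <= eps and
# probability >= 1 - delta: N = (2d / eps) * ln(2d / delta).
# The values of d, eps, delta below are illustrative, not from the paper.
d, eps, delta = 4, 0.05, 0.05
n_draws = int(np.ceil((2 * d / eps) * np.log(2 * d / delta)))
print(f"sufficient draws: {n_draws}")  # 813 for these values

# "Learning from the inside": sample the region (here a hypothetical box
# [-1, 1]^d) and take coordinate-wise min/max of the draws as the band.
rng = np.random.default_rng(0)
draws = rng.uniform(-1.0, 1.0, size=(n_draws, d))
band_lo, band_hi = draws.min(axis=0), draws.max(axis=0)
print("estimated band:\n", np.c_[band_lo, band_hi].round(2))
```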
"A Robust Machine Learning Algorithm for Text Analysis", (with Shikun Ke and José Luis Montiel Olea)
Text is an increasingly popular (high-dimensional) input in empirical economics research. This paper studies the Latent Dirichlet Allocation model, a popular machine learning tool that reduces the dimension of text data via the action of a parametric likelihood and a prior. The parameters over which the priors are imposed are shown to be set-identified: hence, the choice of prior matters. The paper characterizes, theoretically and algorithmically, how much a given functional of the model's parameters varies in response to a change in the prior. In particular, we approximate the lower and upper bounds for the posterior mean of any continuous functional as the number of words per document becomes large. The approximation is given by the smallest and largest value that the functional of interest attains over the set of all possible (column-stochastic) Non-negative Matrix Factorizations of the corpus's term-document frequency matrix. Thus, reporting this range provides a simple, prior-robust algorithm for text analysis. We revisit recent work on the effects of increased 'transparency' on discussions regarding monetary policy decisions in the United States, and show how to implement our algorithm.
May 2019
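
The range-reporting idea lends itself to a quick numerical illustration. The following is a minimal Python sketch, not the paper's algorithm: the toy term-document matrix and the choice of functional are assumptions, and random restarts of scikit-learn's NMF are a crude stand-in for optimizing over all column-stochastic factorizations.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy term-document frequency matrix (columns sum to one); purely illustrative.
rng = np.random.default_rng(0)
A = rng.random((8, 5))                      # 8 terms, 5 documents
A /= A.sum(axis=0, keepdims=True)

def topic_share(A, k, seed):
    """One approximate column-stochastic NMF, A ~ B @ Theta; returns the
    share of topic 0 in document 0 (an example functional)."""
    model = NMF(n_components=k, init="random", random_state=seed, max_iter=500)
    W = model.fit_transform(A)              # terms x topics
    H = model.components_                   # topics x docs
    scale = W.sum(axis=0)
    B = W / scale                           # column-stochastic term-topic matrix
    Theta = scale[:, None] * H              # topic weights per document
    Theta /= Theta.sum(axis=0, keepdims=True)
    return Theta[0, 0]

# Crude stand-in for the min/max over all factorizations: random restarts.
values = [topic_share(A, k=2, seed=s) for s in range(20)]
print(f"functional range across restarts: [{min(values):.3f}, {max(values):.3f}]")
```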

Work in Progress

"Machine Learning Demand Systems", (with Ryan Stevens)
To come!
January 2018