Text is an increasingly popular (high-dimensional) input in empirical economics research. This paper studies the Latent Dirichlet Allocation model, a popular machine-learning tool that reduces the dimension of text data via the action of a parametric likelihood and a prior. The parameters over which the priors are imposed are shown to be set-identified; hence, the choice of prior matters. The paper characterizes, theoretically and algorithmically, how much a given functional of the model's parameters varies in response to a change in the prior. In particular, we approximate the upper and lower bounds on the posterior mean of any continuous functional as the number of words per document becomes large. The approximation is given by the smallest and largest values that the functional of interest attains over the set of all (column-stochastic) Non-negative Matrix Factorizations of the corpus's term-document frequency matrix. Reporting this range thus provides a simple, prior-robust algorithm for text analysis. We revisit recent work on the effects of increased `transparency' on discussions regarding monetary policy decisions in the United States, and show how to implement our algorithm.
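The bound-scanning idea described above can be sketched numerically. The snippet below is a heuristic illustration only, not the paper's exact algorithm: it approximates the range of one functional (the average share of a given topic, an assumed example) by evaluating it over several column-stochastic NMF solutions of a toy term-document frequency matrix, obtained from random restarts of sklearn's NMF solver. The matrix sizes, the functional, and the restart-based exploration of the factorization set are all illustrative assumptions.

```python
# Heuristic sketch (illustrative assumptions throughout): scan several
# column-stochastic NMFs of a term-document frequency matrix and report
# the min/max of a functional across them as a crude range.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy term-document count matrix: 30 terms x 20 documents.
counts = rng.poisson(lam=2.0, size=(30, 20)).astype(float) + 1e-3
V = counts / counts.sum(axis=0, keepdims=True)  # column-stochastic frequencies


def column_stochastic_factors(V, k, seed):
    """One NMF of V, rescaled so both factors are column stochastic."""
    model = NMF(n_components=k, init="random", random_state=seed, max_iter=500)
    W = model.fit_transform(V)  # terms x topics
    H = model.components_       # topics x documents
    s = W.sum(axis=0) + 1e-12
    B = W / s                                     # topic-word distributions
    Theta = s[:, None] * H                        # absorb the rescaling
    Theta = Theta / (Theta.sum(axis=0, keepdims=True) + 1e-12)  # topic shares
    return B, Theta


# Functional of interest (an assumed example): average share of topic 0.
# Topic labels are not aligned across restarts, which is fine here: each
# restart is just another point in the set of factorizations being scanned.
values = []
for seed in range(10):
    B, Theta = column_stochastic_factors(V, k=3, seed=seed)
    values.append(Theta[0].mean())

print(f"heuristic range: [{min(values):.3f}, {max(values):.3f}]")
```

In this sketch the reported interval only covers the factorizations visited by the random restarts, so it is an inner approximation of the full range over all column-stochastic NMFs.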