Speaker: Jun Xu, Ball State University
Abstract: Once dismissed as a heretical paradigm and a subjective doctrine, Bayesian inference has emerged from near oblivion to sweep through the world of statistics and data science. This talk begins with the origin of Bayesian statistics, Bayes' theorem, and recounts how and (possibly) why the framework was created. Formerly called the inverse probability approach, and perhaps more appropriately Laplacian statistics, Bayesian statistics has seen both the nadir and the zenith of its practice, due in part to its computational inconvenience and the subjective assignment of priors. With the computational breakthroughs of the 1980s and early 1990s, several seemingly unrelated ideas were connected to create Markov chain Monte Carlo (MCMC) methods, which completely changed the landscape of the field and revolutionized estimation for Bayesian statistics. Unlike classical frequentist statistics, with its null hypothesis significance testing (NHST), Bayesian statistics typically makes inferences with Bayes factors, posterior probabilities (rather than the often-misinterpreted p-values), and credible intervals (rather than confidence intervals). Because prior information is integrated into each round of estimation, the Bayesian approach dovetails with how knowledge is processed and updated epistemologically. This talk is based on the introductory sections of this recently published book.