Contributed Session: Recent Developments in Bayesian Inference (3 Presentations)
Bayesian crossover trial with binary data and extension to Latin-square design
Speaker: Mingan Yang / Saint Louis University
In-person
Abstract:
In clinical trials, crossover designs are widely used to assess treatment effects of drugs. Due to many practical issues, each patient in the study may receive only a subset of the treatments under comparison, which is called an incomplete block crossover design. Correspondingly, the associated challenges are limited information and small sample size. In addition, the outcome is binary instead of continuous. In this article, we propose a Bayesian approach to analyze the crossover design with binary data. A Markov chain sampling method is used to analyze the model and explore the extension to the Latin-square design. We use several approaches such as data augmentation, a scaled mixture of normals representation, and parameter expansion to improve efficiency. The approach is illustrated using a simulation study and a real data example.
Keywords:
crossover trial, binary data, Bayesian analysis, markov chain, incomplete block
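The data-augmentation strategy the abstract mentions is, for binary outcomes, typically built on the Albert–Chib probit scheme: introduce a latent Gaussian variable per observation so that the regression coefficients have a conjugate update. The sketch below shows that core idea for a plain probit model; it is an illustration only, not the authors' full crossover/Latin-square model, which would add period, treatment, and subject effects.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=1000, rng=None):
    """Albert-Chib Gibbs sampler for probit regression (flat prior on beta)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    XtX_inv = np.linalg.inv(X.T @ X)
    for it in range(n_iter):
        # 1. Augment: latent z_i ~ N(x_i' beta, 1), truncated to z_i > 0
        #    if y_i = 1 and z_i <= 0 if y_i = 0 (bounds standardized).
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # 2. Conjugate update: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
        m = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(m, XtX_inv)
        draws[it] = beta
    return draws

# Tiny demo on simulated probit data with true beta = (0.5, 1.0)
rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ np.array([0.5, 1.0]) + rng.standard_normal(n) > 0).astype(int)
draws = probit_gibbs(X, y, n_iter=1000, rng=2)
print("posterior mean:", draws[200:].mean(axis=0))
```

The latent-variable step is what turns an intractable binary-likelihood update into a standard Gaussian one; parameter expansion and scale mixtures (also named in the abstract) are further refinements of this same construction.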
Eigenstructure inference for high-dimensional covariance with generalized shrinkage inverse-Wishart prior
Speaker: Seongmin Kim / Seoul National University
In-person
Abstract:
The eigenstructure of covariances is essential in understanding the covariance of multivariate observations. The inverse-Wishart prior is commonly used for covariance estimation in Bayesian inference. However, in the high-dimensional setting where the number of covariates increases with the sample size, it is well known that the estimates of eigenvalues tend to spread out. Surprisingly, there has been little prior study in Bayesian statistics on eigenstructures of unconstrained covariances. Recently, the shrinkage inverse-Wishart (SIW) prior has been proposed to mitigate this limitation of the inverse-Wishart. In this study, we propose the generalized SIW (gSIW) prior and investigate the asymptotic properties of the posterior of both the eigenvalues and eigenvectors. We discuss the pros and cons of the gSIW in the high-dimensional setting. Specifically, we examine the behavior of the eigenvalues estimated using the shrinkage inverse-Wishart prior in comparison to the inverse-Wishart prior.
Keywords:
covariance, eigenstructure, shrinkage inverse-Wishart prior
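The eigenvalue spreading the abstract refers to is easy to see numerically: when the dimension is a nontrivial fraction of the sample size, sample-covariance eigenvalues scatter far from the true ones even though the matrix estimate is unbiased. The snippet below (an illustration, not from the talk) uses an identity true covariance, so every true eigenvalue is 1.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 400                      # dimension comparable to sample size
X = rng.standard_normal((n, p))      # rows ~ N(0, I_p), true eigenvalues all 1
S = X.T @ X / n                      # sample covariance
eigvals = np.linalg.eigvalsh(S)

print(f"sample eigenvalue range: [{eigvals.min():.2f}, {eigvals.max():.2f}]")
```

With p/n = 1/2, the eigenvalues spread roughly over the Marchenko–Pastur support, from about 0.09 to about 2.9, despite all true eigenvalues being exactly 1. Shrinkage priors such as SIW/gSIW aim to pull these extremes back toward the bulk.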
Posterior Concentration for Lévy Adaptive B-Spline Regression in Besov Spaces
Speaker: Jeunghun Oh / Seoul National University
In-person
Abstract:
We study the Lévy Adaptive B-Spline (LABS) regression model—an implementation of LARK employing mixtures of B-spline kernels of varying polynomial degrees. Since the model’s mean function is a linear combination of B-spline kernels, LABS flexibly captures local spatial features, including jump discontinuities and sharp peaks, and can therefore represent functions lying in Besov spaces. Focusing on one-dimensional regression with homoskedastic noise, we establish asymptotic guarantees when the true function lies in a Besov space of smoothness s > 0. Specifically, we prove that the LABS posterior contracts around the truth at a nearly optimal rate under the L2 loss—optimal up to logarithmic factors—while automatically adapting to unknown smoothness. This fills a gap in the literature, where rigorous posterior rates for fully Bayesian spline-kernel methods on Besov classes have been scarce. Numerical experiments—comprising simulations on Besov-space test functions and an application to real data—corroborate the theory and illustrate the practical utility of LABS.
Keywords:
Bayesian nonparametrics, B-splines, Posterior concentration, Besov spaces, LARK
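The representation at the heart of the abstract, a mean function written as a linear combination of B-spline kernels, can be illustrated with a fixed-knot least-squares fit. This is only a sketch of the representation: LABS itself places a Lévy-process prior over the number, locations, and degrees of the kernels rather than fixing them in advance.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, knots, degree):
    """Design matrix whose columns are B-spline basis functions."""
    # Repeat the boundary knots so the basis is clamped at the endpoints.
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    n_basis = len(t) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        B[:, j] = BSpline(t, c, degree)(x)
    return B

# Test function with a jump discontinuity, the kind of local feature
# the abstract says LABS captures.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
f = np.where(x < 0.5, np.sin(4 * np.pi * x), 2.0)
y = f + 0.1 * rng.standard_normal(200)

B = bspline_design(x, np.linspace(0, 1, 20), degree=3)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # mean function = B @ coef
fhat = B @ coef
print(f"RMSE: {np.sqrt(np.mean((fhat - f) ** 2)):.3f}")
```

A fixed smooth basis like this one struggles exactly at the jump, which motivates the adaptive kernel placement and the Besov-space analysis the talk develops.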