Robert B. Gramacy, Professor of Statistics
Particle learning of Gaussian processes
plgp is an R package implementing sequential Monte Carlo inference for fully Bayesian Gaussian process (GP) models by particle learning (PL). The sequential nature of the inference and the active learning (AL) hooks provided facilitate thrifty sequential design and optimization.
This software is licensed under the GNU Lesser General Public License (LGPL), version 2 or later. See the change log and an archive of previous versions.
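At each step, PL represents the GP posterior by a collection of particles and updates them with a resample-propagate recursion as each new observation arrives. The following is a minimal conceptual sketch of that recursion in R, written independently of the package; the particle representation and the lpredprob and propagate helpers are hypothetical placeholders for the model-specific calculations that plgp supplies, not its actual interface.

## one PL update: resample particles in proportion to the predictive
## probability of the new observation znew, then propagate each
## resampled particle so it absorbs znew
pl.update <- function(particles, znew, lpredprob, propagate)
{
  N <- length(particles)
  lw <- sapply(particles, lpredprob, znew)        ## log predictive weights
  w <- exp(lw - max(lw))                          ## stabilize before normalizing
  idx <- sample(1:N, N, replace=TRUE, prob=w/sum(w))
  lapply(particles[idx], propagate, znew)         ## updated particle set
}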
The current version provides:
- a generic PL interface
- static classification and regression GP models with three types of correlation functions: isotropic, separable and single-index Gaussian
- sequential design for optimization of (noisy) real-valued functions via the expected improvement (EI) statistic using a regression GP model (see the sketch after this list)
- sequential design for exploring classification boundaries via the predictive entropy statistic using a classification GP model
- sequential design for optimization under known and unknown constraints by an integrated expected conditional improvement (IECI) statistic using a hybrid regression-classification GP model
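For reference, the EI statistic mentioned above has a simple closed form given a GP's predictive mean and standard deviation at a candidate input. The snippet below codes that textbook formula (for minimization) directly, independently of plgp's internals; a particle-averaged EI would simply average this quantity over the predictive moments supplied by each particle.

## expected improvement over the current best (minimum) value fmin,
## given predictive mean mu and standard deviation sigma
ei <- function(mu, sigma, fmin)
{
  d <- fmin - mu
  z <- d/sigma
  ifelse(sigma > 0, d*pnorm(z) + sigma*dnorm(z), pmax(d, 0))
}

ei(mu=0.2, sigma=0.1, fmin=0.25)   ## about 0.07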
Obtaining the package
- Download R from cran.r-project.org by selecting the version for your operating system.
- Install the plgp, mvtnorm and tgp packages from within R.
R> install.packages(c("plgp", "mvtnorm", "tgp"))
- Optionally, install the akima, ellipse and splancs packages.
R> install.packages(c("akima", "ellipse", "splancs"))
- Load the library as you would for any R library.
R> library(plgp)
Documentation
- See the package documentation. A PDF version of the reference manual, or help pages, is also available. The help pages can be accessed from within R.
- The best way to acquaint yourself with the functionality of this package is to run the demos, which illustrate the examples contained in the papers referenced below. Try starting with:
R> help(package=plgp)
R> ?plgp # follow the examples which point to the demos
R> demo(package="plgp") # for a direct listing of the demos
References
- Particle learning of Gaussian process models for sequential design and optimization (2011) with Nicholas Polson. Journal of Computational and Graphical Statistics, 20(1), pp. 109-118; preprint on arXiv:0909.5262
- Gaussian process single-index models as emulators for computer experiments (2012) with Heng Lian; Technometrics, 54(1), pp. 30-41; preprint on arXiv:1009.4241
- Optimization under unknown constraints (2010) with H.K.H. Lee. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press; preprint on arXiv:1004.4027
- Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). Particle Learning and Smoothing. Discussion Paper 2008-32, Duke University Department of Statistical Science.