multifi_cokriging.py

This module integrates the Multi-Fidelity Co-Kriging method described in [LeGratiet2013]. (Author: Remi Vauclin <vauclin.remi@gmail.com>)

This code was implemented using the scikit-learn package as a basis. (Author: Vincent Dubourg <vincent.dubourg@gmail.com>)

OpenMDAO adaptation. Regression and correlation functions were copied directly from the scikit-learn package here to avoid a scikit-learn dependency. (Author: Remi Lafage <remi.lafage@onera.fr>)

ISAE/DMSM - ONERA/DCPS

class openmdao.surrogate_models.multifi_cokriging.FloatMultiFiCoKrigingSurrogate(regr='constant', rho_regr='constant', theta=None, theta0=None, thetaL=None, thetaU=None, tolerance=1e-06, initial_range=0.3)[source]

Bases: openmdao.surrogate_models.multifi_cokriging.MultiFiCoKrigingSurrogate

Predictions are returned as floats, which are the mean of the NormalDistribution predicted by the base class model.

predict(new_x)[source]

Calculates a predicted value of the response based on the current trained model for the supplied input point. The prediction is returned as a float: the mean of the NormalDistribution predicted by the base class model.
class openmdao.surrogate_models.multifi_cokriging.MultiFiCoKriging(regr='constant', rho_regr='constant', theta=None, theta0=None, thetaL=None, thetaU=None)[source]

Bases: object

This class integrates the Multi-Fidelity Co-Kriging method described in [LeGratiet2013].

Args:

regr: string or callable, optional

A regression function returning an array of outputs of the linear regression functional basis, for Universal Kriging purposes. regr is assumed to be the same for all levels of code. Default assumes a simple constant regression trend. Available built-in regression models are: ‘constant’ and ‘linear’.

rho_regr: string or callable, optional

A regression function returning an array of outputs of the linear regression functional basis. Defines the regression function for the autoregressive parameter rho. rho_regr is assumed to be the same for all levels of code. Default assumes a simple constant regression trend. Available built-in regression models are: ‘constant’ and ‘linear’.

theta: double, array_like or list, optional

Value of the correlation parameters if they are known; in that case no optimization is run. Default is None, so that optimization is run. If double: the value is replicated for all features and all levels. If array_like: an array with shape (n_features, ) for isotropic calculation, replicated for all levels. If list: a list of nlevel arrays specifying the value for each level. The accepted forms are illustrated in the sketch after this argument list.

theta0: double, array_like or list, optional

Starting point for the maximum likelihood estimation of the best set of parameters. Default is None, meaning the default 0.5*np.ones(n_features) is used. If double: the value is replicated for all features and all levels. If array_like: an array with shape (n_features, ) for isotropic calculation, replicated for all levels. If list: a list of nlevel arrays specifying the value for each level.

thetaL: double, array_like or list, optional

Lower bound on the autocorrelation parameters for maximum likelihood estimation. Default is None, meaning the default 1e-5*np.ones(n_features) is used. If double: the value is replicated for all features and all levels. If array_like: an array with shape matching theta0's, replicated for all levels of code. If list: a list of nlevel arrays specifying the value for each level.

thetaU: double, array_like or list, optional

Upper bound on the autocorrelation parameters for maximum likelihood estimation. Default is None, meaning the default 50*np.ones(n_features) is used. If double: the value is replicated for all features and all levels. If array_like: an array with shape matching theta0's, replicated for all levels of code. If list: a list of nlevel arrays specifying the value for each level.
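As a quick illustration of the accepted forms, here is a minimal sketch; the two-feature shapes, the two-level setup, and the variable names m1 through m4 are hypothetical and only show how the arguments are interpreted:

>>> import numpy as np
>>> from openmdao.surrogate_models.multifi_cokriging import MultiFiCoKriging
>>> # double: replicated for all features and all levels
>>> m1 = MultiFiCoKriging(theta0=0.5, thetaL=1e-5, thetaU=50.)
>>> # array_like with shape (n_features, ): replicated for all levels
>>> m2 = MultiFiCoKriging(theta0=np.array([0.5, 1.0]))
>>> # list of nlevel arrays: one entry per fidelity level
>>> m3 = MultiFiCoKriging(theta0=[np.array([0.5, 1.0]), np.array([0.2, 0.8])])
>>> # fixed theta: no maximum likelihood optimization is run
>>> m4 = MultiFiCoKriging(theta=[np.array([1.0, 2.0]), np.array([0.5, 0.5])])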

Notes

The implementation is based on the scikit-learn package (Author: Vincent Dubourg <vincent.dubourg@gmail.com>), which translates the DACE MATLAB toolbox; see [NLNS2002].

References

[NLNS2002] H. B. Nielsen, S. N. Lophaven, and J. Sondergaard (2002). "DACE - A MATLAB Kriging Toolbox." http://www2.imm.dtu.dk/~hbn/dace/dace.pdf
[WBSWM1992] W. J. Welch, R. J. Buck, J. Sacks, H. P. Wynn, T. J. Mitchell, and M. D. Morris (1992). "Screening, predicting, and computer experiments." Technometrics, 34(1), 15-25. http://www.jstor.org/pss/1269548
[LeGratiet2013] L. Le Gratiet (2013). "Multi-fidelity Gaussian process regression for computer experiments." PhD thesis, Universite Paris-Diderot - Paris VII.
[TBKH2011] D. J. Toal, N. W. Bressloff, A. J. Keane, and C. M. E. Holden (2011). "The development of a hybridized particle swarm for kriging hyperparameter tuning." Engineering Optimization, 43(6), 675-699.

Examples

>>> from openmdao.surrogate_models.multifi_cokriging import MultiFiCoKriging
>>> import numpy as np
>>> # Xe: DOE for expensive code (nested in Xc)
>>> # Xc: DOE for cheap code
>>> # ye: expensive response
>>> # yc: cheap response
>>> Xe = np.array([[0],[0.4],[1]])
>>> Xc = np.vstack((np.array([[0.1],[0.2],[0.3],[0.5],[0.6],[0.7],[0.8],[0.9]]),Xe))
>>> ye = ((Xe*6-2)**2)*np.sin((Xe*6-2)*2)
>>> yc = 0.5*((Xc*6-2)**2)*np.sin((Xc*6-2)*2)+(Xc-0.5)*10. - 5
>>> model = MultiFiCoKriging(theta0=1, thetaL=1e-5, thetaU=50.)
>>> model.fit([Xc, Xe], [yc, ye])
>>> # Prediction on x=0.05
>>> np.abs(float(model.predict([0.05])[0])- ((0.05*6-2)**2)*np.sin((0.05*6-2)*2)) < 0.05
True

Attributes

theta: list

Specified theta for each level, or the best set of autocorrelation parameters (the sought maximizer of the reduced likelihood function).

rlf_value: list

The optimal negative concentrated reduced likelihood function value for each level.
fit(X, y, initial_range=0.3, tol=1e-06)[source]

The Multi-Fidelity co-kriging model fitting method.

Args:

X: list of double array_like elements

A list of arrays with the inputs at which observations were made, from lowest fidelity to highest fidelity. Designs must be nested, with X[i] = np.vstack([..., X[i+1]]), as illustrated in the sketch after this argument list.

y: list of double array_like elements

A list of arrays with the observations of the scalar output to be predicted, from lowest fidelity to highest fidelity.

initial_range: float

Initial range for the optimizer.

tol: float

Optimizer terminates when the tolerance tol is reached.
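A minimal sketch of a three-level fit; the response expressions y0, y1, and y2 are hypothetical and chosen only to show how nested designs are assembled, from lowest fidelity to highest:

>>> import numpy as np
>>> from openmdao.surrogate_models.multifi_cokriging import MultiFiCoKriging
>>> X2 = np.array([[0.0], [0.4], [1.0]])                                 # highest-fidelity DOE
>>> X1 = np.vstack((np.array([[0.2], [0.6], [0.8]]), X2))                # nested: contains X2
>>> X0 = np.vstack((np.array([[0.1], [0.3], [0.5], [0.7], [0.9]]), X1))  # nested: contains X1
>>> y2 = ((X2*6-2)**2)*np.sin((X2*6-2)*2)
>>> y1 = 0.75*((X1*6-2)**2)*np.sin((X1*6-2)*2) + (X1-0.5)*5.
>>> y0 = 0.5*((X0*6-2)**2)*np.sin((X0*6-2)*2) + (X0-0.5)*10. - 5
>>> model = MultiFiCoKriging(theta0=1, thetaL=1e-5, thetaU=50.)
>>> model.fit([X0, X1, X2], [y0, y1, y2], initial_range=0.3, tol=1e-6)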

predict(X, eval_MSE=True)[source]

This function performs the predictions of the kriging model on X.

Args:

X: array_like

An array with shape (n_eval, n_features) giving the point(s) at which the prediction(s) should be made.

eval_MSE: boolean, optional

A boolean specifying whether the Mean Squared Error should be evaluated or not. Default assumes eval_MSE is True; see the usage sketch after the Returns section.

Returns:

y: array_like

An array with shape (n_eval, ) with the Best Linear Unbiased Prediction at X. If all_levels is set to True, an array with shape (n_eval, nlevel) giving the BLUP for all levels.

MSE: array_like, optional (if eval_MSE is True)

An array with shape (n_eval, ) with the Mean Squared Error at X. If all_levels is set to True, an array with shape (n_eval, nlevel) giving the MSE for all levels.
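A usage sketch, repeating the two-level fit from the class-level example; the assumption that only y is returned when eval_MSE is False follows from the Returns description above and is marked in the comments:

>>> import numpy as np
>>> from openmdao.surrogate_models.multifi_cokriging import MultiFiCoKriging
>>> Xe = np.array([[0],[0.4],[1]])
>>> Xc = np.vstack((np.array([[0.1],[0.2],[0.3],[0.5],[0.6],[0.7],[0.8],[0.9]]),Xe))
>>> ye = ((Xe*6-2)**2)*np.sin((Xe*6-2)*2)
>>> yc = 0.5*((Xc*6-2)**2)*np.sin((Xc*6-2)*2)+(Xc-0.5)*10. - 5
>>> model = MultiFiCoKriging(theta0=1, thetaL=1e-5, thetaU=50.)
>>> model.fit([Xc, Xe], [yc, ye])
>>> x = np.linspace(0., 1., 101).reshape(-1, 1)           # (n_eval, n_features)
>>> y_pred, mse = model.predict(x, eval_MSE=True)         # BLUP and MSE at the prediction points
>>> sigma = np.sqrt(mse)                                  # pointwise predictive standard deviation
>>> lower, upper = y_pred - 2.*sigma, y_pred + 2.*sigma   # rough 95% band from the kriging MSE
>>> y_only = model.predict(x, eval_MSE=False)             # assumed: mean prediction only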

rlf(lvl, theta=None)[source]

This function determines the BLUP parameters and evaluates the negative reduced likelihood function for the given autocorrelation parameters theta.

Maximizing this function with respect to the autocorrelation parameters theta is equivalent to maximizing the likelihood of the assumed joint Gaussian distribution of the observations y evaluated on the design of experiments X.

Args:

self: Multi-Fidelity Co-Kriging object

lvl: Integer

Level of fidelity

theta: array_like, optional

An array containing the autocorrelation parameters at which the Gaussian Process model parameters should be determined. Default uses the built-in autocorrelation parameters (i.e., theta = self.theta).

Returns:

rlf_value: double

The value of the negative concentrated reduced likelihood function associated to the given autocorrelation parameters theta.

class openmdao.surrogate_models.multifi_cokriging.MultiFiCoKrigingSurrogate(regr='constant', rho_regr='constant', theta=None, theta0=None, thetaL=None, thetaU=None, tolerance=1e-06, initial_range=0.3)[source]

Bases: openmdao.surrogate_models.surrogate_model.MultiFiSurrogateModel

OpenMDAO adapter of the multi-fidelity recursive co-kriging method described in [LeGratiet2013]. See the MultiFiCoKriging class.

predict(new_x)[source]

Calculates a predicted value of the response based on the current trained model for the supplied list of inputs.

train_multifi(X, Y)[source]

Train the surrogate model with the given set of inputs and outputs.
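A minimal usage sketch of the adapter, reusing the data from the MultiFiCoKriging example. FloatMultiFiCoKrigingSurrogate is used the same way but returns plain floats; the single-point input shape passed to predict mirrors the predict([0.05]) call in that example and should be treated as an assumption:

>>> import numpy as np
>>> from openmdao.surrogate_models.multifi_cokriging import MultiFiCoKrigingSurrogate
>>> Xe = np.array([[0],[0.4],[1]])
>>> Xc = np.vstack((np.array([[0.1],[0.2],[0.3],[0.5],[0.6],[0.7],[0.8],[0.9]]),Xe))
>>> ye = ((Xe*6-2)**2)*np.sin((Xe*6-2)*2)
>>> yc = 0.5*((Xc*6-2)**2)*np.sin((Xc*6-2)*2)+(Xc-0.5)*10. - 5
>>> surrogate = MultiFiCoKrigingSurrogate(theta0=1, thetaL=1e-5, thetaU=50.)
>>> surrogate.train_multifi([Xc, Xe], [yc, ye])   # inputs/outputs from lowest to highest fidelity
>>> pred = surrogate.predict(np.array([0.05]))    # prediction for one new point (a NormalDistribution
>>> #                                               per the FloatMultiFiCoKrigingSurrogate description)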

openmdao.surrogate_models.multifi_cokriging.constant_regression(x)[source]

Zero-order polynomial (constant, p = 1) regression model: x --> f(x) = 1
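A short sketch of the constant basis, assuming (as in the scikit-learn/DACE code this was copied from) that one column per basis term is returned, i.e. shape (n_eval, p) with p = 1; the input values are arbitrary:

>>> import numpy as np
>>> from openmdao.surrogate_models.multifi_cokriging import constant_regression
>>> x = np.array([[0.1, 0.2], [0.4, 0.8], [0.9, 0.3]])   # (n_eval, n_features)
>>> f = constant_regression(x)                            # expected shape: (n_eval, 1), all ones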

openmdao.surrogate_models.multifi_cokriging.l1_cross_distances(X, Y=None)[source]

Computes the nonzero componentwise L1 cross-distances between the vectors in X and Y.

Args:

X: array_like

An array with shape (n_samples_X, n_features)

Y: array_like

An array with shape (n_samples_Y, n_features)

Returns:

D: array with shape (n_samples * (n_samples - 1) / 2, n_features)

The array of componentwise L1 cross-distances.
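To illustrate the quantity being computed, here is a plain-NumPy sketch of the documented output for the Y=None case; it does not call the library function itself:

>>> import numpy as np
>>> X = np.array([[0., 1.], [0.5, 0.5], [1., 0.]])                        # (n_samples, n_features)
>>> pairs = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X))]  # unordered pairs i < j
>>> D = np.array([np.abs(X[i] - X[j]) for i, j in pairs])                  # componentwise L1 distances
>>> D.shape                                                # (n_samples*(n_samples-1)/2, n_features)
(3, 2)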

openmdao.surrogate_models.multifi_cokriging.linear_regression(x)[source]

First-order polynomial (linear, p = n+1) regression model: x --> f(x) = [1, x_1, ..., x_n].T
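And the linear basis, under the same shape assumption, with p = n + 1 columns (a leading column of ones followed by the input features); the input values are arbitrary:

>>> import numpy as np
>>> from openmdao.surrogate_models.multifi_cokriging import linear_regression
>>> x = np.array([[0.1, 0.2], [0.4, 0.8]])   # (n_eval, n_features) with n_features = 2
>>> f = linear_regression(x)                  # expected shape: (n_eval, 3), rows [1, x_1, x_2]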

openmdao.surrogate_models.multifi_cokriging.squared_exponential_correlation(theta, d)[source]

Squared exponential correlation model (Radial Basis Function), corresponding to an infinitely differentiable, very smooth stochastic process:

theta, dx --> r(theta, dx) = exp( - sum_{i=1..n} theta_i * (dx_i)^2 )
Args:

theta: array_like

An array with shape (1, ) (isotropic) or (n_features, ) (anisotropic) giving the autocorrelation parameter(s).

dx: array_like

An array with shape (n_eval, n_features) giving the componentwise distances between locations x and x’ at which the correlation model should be evaluated.

Returns:

r: array_like

An array with shape (n_eval, ) containing the values of the autocorrelation model.
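The formula can be checked directly with NumPy; this sketch evaluates r(theta, dx) exactly as written above (the call to the library function itself is omitted, and the theta and dx values are arbitrary):

>>> import numpy as np
>>> theta = np.array([2.0, 0.5])                  # anisotropic: one parameter per feature
>>> dx = np.array([[0.1, 0.2], [0.0, 0.5]])       # componentwise distances, (n_eval, n_features)
>>> r = np.exp(-np.sum(theta * dx**2, axis=1))    # r(theta, dx)
>>> r.shape                                       # one correlation value per evaluation point
(2,)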