Code Documentation
This page shows the documentation that Sphinx generates by automatically scanning the source code.
Bayesian Optimization
Main module.
Holds the BayesianOptimization class, which handles the maximization of a function over a specific target space.
- class bayes_opt.bayesian_optimization.BayesianOptimization(f, pbounds, constraint=None, random_state=None, verbose=2, bounds_transformer=None, allow_duplicate_points=False)
Handle optimization of a target function over a specific target space.
This class takes the function to optimize as well as the parameter bounds, and uses Bayesian optimization to find the parameter values that yield the maximum value. A minimal construction sketch follows the parameter list below.
Parameters
- f: function
Function to be maximized.
- pbounds: dict
Dictionary with parameter names as keys and tuples of minimum and maximum values as values.
- constraint: ConstraintModel, optional(default=None)
A ConstraintModel describing the constraint to enforce. Note that the argument names of the constraint function and of f must be the same.
- random_state: int or numpy.random.RandomState, optional(default=None)
If the value is an integer, it is used as the seed for creating a numpy.random.RandomState. Otherwise the random state provided is used. When set to None, an unseeded random state is generated.
- verbose: int, optional(default=2)
The level of verbosity.
- bounds_transformer: DomainTransformer, optional(default=None)
If provided, the transformation is applied to the bounds.
- allow_duplicate_points: bool, optional (default=False)
If True, the optimizer will allow duplicate points to be registered. This behavior may be desired in high noise situations where repeatedly probing the same point will give different answers. In other situations, the acquisition may occasionally generate a duplicate point.
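A minimal construction sketch; the black-box function and its bounds below are illustrative placeholders, not part of the library:

from bayes_opt import BayesianOptimization

def black_box_function(x, y):
    # Illustrative target: any function whose argument names match the pbounds keys.
    return -x ** 2 - (y - 1) ** 2 + 1

pbounds = {'x': (2, 4), 'y': (-3, 3)}

optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds=pbounds,
    random_state=1,
    verbose=2,
)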
Methods
- probe()
Evaluates the function on the given points. Can be used to guide the optimizer.
- maximize()
Tries to find the parameters that yield the maximum value for the given function.
- set_bounds()
Allows changing the lower and upper search bounds.
- property constraint
Return the constraint associated with the optimizer, if any.
- property max
Get maximum target value found and corresponding parameters.
See TargetSpace.max for more information.
- maximize(init_points=5, n_iter=25, acquisition_function=None, acq=None, kappa=None, kappa_decay=None, kappa_decay_delay=None, xi=None, **gp_params)
Maximize the given function over the target space.
Parameters
- init_points: int, optional(default=5)
Number of random exploration steps performed before the optimization starts searching for the maximum.
- n_iter: int, optional(default=25)
Number of iterations where the method attempts to find the maximum value.
- acquisition_function: object, optional
An instance of bayes_opt.util.UtilityFunction. If none is passed, a default UtilityFunction using UCB is used (see the sketch after this parameter list).
- acq:
Deprecated, unused and slated for deletion.
- kappa:
Deprecated, unused and slated for deletion.
- kappa_decay:
Deprecated, unused and slated for deletion.
- kappa_decay_delay:
Deprecated, unused and slated for deletion.
- xi:
Deprecated, unused and slated for deletion.
- **gp_params:
Deprecated, unused and slated for deletion.
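A sketch of a typical maximize() call, reusing the illustrative optimizer constructed above:

optimizer.maximize(
    init_points=5,   # random exploration steps before the GP-guided search
    n_iter=25,       # Bayesian optimization steps
)

print(optimizer.max)          # best target value found and its parameters
for i, res in enumerate(optimizer.res):
    print(i, res)             # every evaluated point with its target value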
- probe(params, lazy=True)
Evaluate the function at the given points.
Useful to guide the optimizer.
Parameters
- params: dict or list
The parameters where the optimizer will evaluate the function.
- lazy: bool, optional(default=True)
If True, the optimizer evaluates the points only when maximize() is called. Otherwise, the points are evaluated immediately.
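A sketch of queueing an explicit probe point; the parameter values are illustrative and assume the bounds from the construction example above:

optimizer.probe(params={'x': 2.5, 'y': 0.0}, lazy=True)   # queued, not evaluated yet
optimizer.maximize(init_points=0, n_iter=0)                # queued probes are evaluated here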
- register(params, target, constraint_value=None)
Register an observation with known target.
Parameters
- params: dict or list
The parameters associated with the observation.
- target: float
Value of the target function at the observation.
- constraint_value: float or None
Value of the constraint function at the observation, if any.
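A sketch of registering an observation evaluated outside the optimizer; the point and value are illustrative and consistent with the black-box function above:

optimizer.register(params={'x': 2.0, 'y': 1.0}, target=-3.0)   # -2**2 - (1 - 1)**2 + 1 == -3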
- property res
Get all target values and constraint fulfillment for all parameters.
See TargetSpace.res for more information.
- set_bounds(new_bounds)
Modify the bounds of the search space.
Parameters
- new_bounds: dict
A dictionary with parameter names as keys and their new bounds as values.
- set_gp_params(**params)
Set parameters of the internal Gaussian Process Regressor.
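A sketch of narrowing the search space and adjusting the internal Gaussian process; the new bounds and GP settings below are illustrative, and set_gp_params forwards keyword arguments to the underlying sklearn regressor:

optimizer.set_bounds(new_bounds={'x': (2, 3)})                 # only the listed parameters change
optimizer.set_gp_params(alpha=1e-3, n_restarts_optimizer=5)    # sklearn GaussianProcessRegressor parameters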
- property space
Return the target space associated with the optimizer.
- class bayes_opt.bayesian_optimization.Observable(events)
Inspired by https://www.protechtraining.com/blog/post/879#simple-observer.
- dispatch(event)
Trigger callbacks for subscribers of an event.
- get_subscribers(event)
Return the subscribers of an event.
- subscribe(event, subscriber, callback=None)
Add subscriber to an event.
- unsubscribe(event, subscriber)
Remove a subscriber for a particular event.
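A hedged sketch of subscribing to optimizer events; it assumes the Events enum from bayes_opt.event and a callback with the signature callback(event, instance):

from bayes_opt.event import Events

def log_step(event, instance):
    # instance is the optimizer that dispatched the event
    print(event, instance.max)

optimizer.subscribe(Events.OPTIMIZATION_STEP, subscriber="step-logger", callback=log_step)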
Acquisition function
- class bayes_opt.util.UtilityFunction(kind='ucb', kappa=2.576, xi=0, kappa_decay=1, kappa_decay_delay=0)
An object to compute the acquisition functions.
Parameters
- kind: {‘ucb’, ‘ei’, ‘poi’}
‘ucb’ stands for the Upper Confidence Bounds method
‘ei’ is the Expected Improvement method
‘poi’ is the Probability Of Improvement criterion.
- kappa: float, optional(default=2.576)
Parameter controlling the exploration/exploitation balance of the next sampled points. Higher values favor regions that are least explored; lower values favor regions where the surrogate model predicts the highest values.
- kappa_decay: float, optional(default=1)
kappa is multiplied by this factor every iteration.
- kappa_decay_delay: int, optional(default=0)
Number of iterations that must have passed before applying the decay to kappa.
- xi: float, optional(default=0.0)
Governs the exploration/exploitation tradeoff for the 'ei' and 'poi' acquisition functions; higher values prefer exploration.
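A sketch of passing a custom acquisition function to maximize(); the kind and xi values are illustrative:

from bayes_opt.util import UtilityFunction

acq = UtilityFunction(kind='ei', xi=0.01)
optimizer.maximize(init_points=5, n_iter=10, acquisition_function=acq)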
- static ei(x, gp, y_max, xi)
Calculate the Expected Improvement acquisition function.
Similar to Probability of Improvement (UtilityFunction.poi), but also considers the magnitude of improvement. Calculated as
\[\text{EI}(x) = (\mu(x)-y_{\text{max}} - \xi) \Phi\left( \frac{\mu(x)-y_{\text{max}} - \xi }{\sigma(x)} \right) + \sigma(x) \phi\left( \frac{\mu(x)-y_{\text{max}} - \xi }{\sigma(x)} \right)\]where \(\Phi\) is the CDF and \(\phi\) the PDF of the normal distribution.
Parameters
- x: np.ndarray
Parameters to evaluate the function at.
- gp: sklearn.gaussian_process.GaussianProcessRegressor
A Gaussian process regressor modelling the target function based on previous observations.
- y_max: number
Highest found value of the target function.
- xi: float, positive
Governs the exploration/exploitation tradeoff. Lower prefers exploitation, higher prefers exploration.
Returns
Values of the acquisition function
- static poi(x, gp, y_max, xi)
Calculate the Probability of Improvement acquisition function.
Calculated as
\[\text{POI}(x) = \Phi\left( \frac{\mu(x)-y_{\text{max}} - \xi }{\sigma(x)} \right)\]where \(\Phi\) is the CDF of the normal distribution.
Parameters
- x: np.ndarray
Parameters to evaluate the function at.
- gp: sklearn.gaussian_process.GaussianProcessRegressor
A Gaussian process regressor modelling the target function based on previous observations.
- y_max: number
Highest found value of the target function.
- xi: float, positive
Governs the exploration/exploitation tradeoff. Lower prefers exploitation, higher prefers exploration.
Returns
Values of the acquisition function
- static ucb(x, gp, kappa)
Calculate Upper Confidence Bound acquisition function.
Calculated as
\[\text{UCB}(x) = \mu(x) + \kappa \sigma(x)\]where \(\mu(x)\) and \(\sigma(x)\) are the mean and standard deviation predicted by the Gaussian process.
Parameters
- x: np.ndarray
Parameters to evaluate the function at.
- gp: sklearn.gaussian_process.GaussianProcessRegressor
A Gaussian process regressor modelling the target function based on previous observations.
- kappa: float, positive
Governs the exploration/exploitation tradeoff. Lower prefers exploitation, higher prefers exploration.
Returns
Values of the acquisition function
- update_params()
Update internal parameters.
- utility(x, gp, y_max)
Calculate acquisition function.
Parameters
- x: np.ndarray
Parameters to evaluate the function at.
- gp: sklearn.gaussian_process.GaussianProcessRegressor
A Gaussian process regressor modelling the target function based on previous observations.
- y_max: number
Highest found value of the target function.
Returns
Values of the acquisition function
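A sketch of evaluating the acquisition values directly against a standalone fitted Gaussian process regressor; the training data below is illustrative:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from bayes_opt.util import UtilityFunction

X_train = np.array([[0.0], [0.5], [1.0]])
y_train = np.array([0.0, 0.8, 0.3])
gp = GaussianProcessRegressor().fit(X_train, y_train)

acq = UtilityFunction(kind='ucb', kappa=2.576)
x_grid = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
values = acq.utility(x_grid, gp, y_max=y_train.max())   # one acquisition value per row of x_grid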
Target Space
- class bayes_opt.target_space.TargetSpace(target_func, pbounds, constraint=None, random_state=None, allow_duplicate_points=False)
Holds the param-space coordinates (X) and target values (Y).
Allows for constant-time appends.
Parameters
- target_func: function
Function to be maximized.
- pbounds: dict
Dictionary with parameter names as keys and tuples of minimum and maximum values as values.
- random_state: int, RandomState, or None
Optionally specify a seed for the random number generator.
- allow_duplicate_points: bool, optional (default=False)
If True, the optimizer will allow duplicate points to be registered. This behavior may be desired in high noise situations where repeatedly probing the same point will give different answers. In other situations, the acquisition may occasionally generate a duplicate point.
Examples
>>> def target_func(p1, p2):
...     return p1 + p2
>>> pbounds = {'p1': (0, 1), 'p2': (1, 100)}
>>> space = TargetSpace(target_func, pbounds, random_state=0)
>>> x = np.array([1, 5])
>>> y = target_func(*x)
>>> space.register(x, y)
>>> assert space.max()['target'] == 6
>>> assert space.max()['params'] == {'p1': 1.0, 'p2': 5.0}
- array_to_params(x)
Convert an array representation of parameters into a dict version.
Parameters
- x: np.ndarray
a single point, with len(x) == self.dim.
Returns
- dict
Representation of the parameters as dictionary.
- property constraint_values
Get the constraint values registered to this TargetSpace.
Returns
np.ndarray
- property mask
Return a boolean array of valid points.
Points are valid if they satisfy both the constraint and boundary conditions.
Returns
np.ndarray
- max()
Get maximum target value found and corresponding parameters.
If there is a constraint present, the maximum value that fulfills the constraint within the parameter bounds is returned.
Returns
- res: dict
A dictionary with the keys ‘target’ and ‘params’. The value of ‘target’ is the maximum target value, and the value of ‘params’ is a dictionary with the parameter names as keys and the parameter values as values.
- params_to_array(params)
Convert a dict representation of parameters into an array version.
Parameters
- params: dict
a single point, with len(x) == self.dim.
Returns
- np.ndarray
Representation of the parameters as an array.
- probe(params)
Evaluate the target function on a point and register the result.
Notes
If params has been previously seen and duplicate points are not allowed, returns a cached value of result.
Parameters
- params: np.ndarray
a single point, with len(x) == self.dim
Returns
- result: float | Tuple(float, float)
target function value, or Tuple(target function value, constraint value)
Example
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = {'p1': (0, 1), 'p2': (1, 100)}
>>> space = TargetSpace(target_func, pbounds)
>>> space.probe([1, 5])
>>> assert space.max()['target'] == 6
>>> assert space.max()['params'] == {'p1': 1.0, 'p2': 5.0}
- random_sample()
Sample a random point from within the bounds of the space.
Returns
- data: ndarray
[1 x dim] array with dimensions corresponding to self._keys
Examples
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = {'p1': (0, 1), 'p2': (1, 100)}
>>> space = TargetSpace(target_func, pbounds, random_state=0)
>>> space.random_sample()
array([[ 55.33253689,   0.54488318]])
- register(params, target, constraint_value=None)
Append a point and its target value to the known data.
Parameters
- params: np.ndarray
a single point, with len(x) == self.dim.
- target: float
target function value
- constraint_value: float or None
Constraint function value
Raises
- NotUniqueError:
if the point is not unique
Notes
runs in amortized constant time
Examples
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = {'p1': (0, 1), 'p2': (1, 100)}
>>> space = TargetSpace(target_func, pbounds)
>>> len(space)
0
>>> x = np.array([0, 0])
>>> y = 1
>>> space.register(x, y)
>>> len(space)
1
- res()
Get all target values and constraint fulfillment for all parameters.
Returns
- res: list
A list of dictionaries with the keys ‘target’, ‘params’, and ‘constraint’. The value of ‘target’ is the target value, the value of ‘params’ is a dictionary with the parameter names as keys and the parameter values as values, and the value of ‘constraint’ is the constraint fulfillment.
Notes
Does not report if points are within the bounds of the parameter space.
Domain reduction
- class bayes_opt.domain_reduction.SequentialDomainReductionTransformer(gamma_osc: float = 0.7, gamma_pan: float = 1.0, eta: float = 0.9, minimum_window: List[float] | float | Dict[str, float] | None = 0.0)
Reduce the searchable space.
A sequential domain reduction transformer based on the work by Stander, N. and Craig, K: “On the robustness of a simple domain reduction scheme for simulation-based optimization”
Parameters
- gamma_osc: float, default=0.7
Parameter used to scale (typically dampen) oscillations.
- gamma_pan: float, default=1.0
Parameter used to scale (typically unitary) panning.
- eta: float, default=0.9
Zooming parameter used to shrink the region of interest.
- minimum_window: float or np.ndarray or dict, default=0.0
Minimum window size for each parameter. If a float is provided, the same value is used for all parameters.
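A sketch of attaching the transformer to an optimizer via the bounds_transformer argument; the minimum_window value, target function, and bounds below are illustrative:

from bayes_opt import BayesianOptimization
from bayes_opt.domain_reduction import SequentialDomainReductionTransformer

bounds_transformer = SequentialDomainReductionTransformer(minimum_window=0.5)

mutating_optimizer = BayesianOptimization(
    f=black_box_function,                  # illustrative target from the earlier sketch
    pbounds={'x': (2, 4), 'y': (-3, 3)},
    bounds_transformer=bounds_transformer,
    random_state=1,
)
mutating_optimizer.maximize(init_points=2, n_iter=10)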
- initialize(target_space: TargetSpace) → None
Initialize all of the parameters.
Parameters
- target_space: TargetSpace
TargetSpace this DomainTransformer operates on.
- transform(target_space: TargetSpace) → dict
Transform the bounds of the target space.
Parameters
- target_space: TargetSpace
TargetSpace this DomainTransformer operates on.
Returns
- dict
The new bounds of each parameter.
Constraints
- class bayes_opt.constraint.ConstraintModel(fun, lb, ub, random_state=None)
Model constraints using GP regressors.
This class models the constraint function(s) with Gaussian process regressors and provides the estimated probability that a set of parameters fulfills the constraints.
Parameters
- fun: None or Callable -> float or np.ndarray
The constraint function. Should be float-valued or array-valued (if multiple constraints are present). Needs to take the same parameters as the optimization target with the same argument names.
- lb: float or np.ndarray
The lower bound on the constraints. Should have the same dimensionality as the return value of the constraint function.
- ub: float or np.ndarray
The upper bound on the constraints. Should have the same dimensionality as the return value of the constraint function.
- random_state: np.random.RandomState or int or None, default=None
Random state to use.
Notes
In case of multiple constraints, this model assumes conditional independence. This means that for each constraint, the probability of fulfillment is the CDF of a univariate Gaussian. The overall probability is simply the product of the individual probabilities.
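A hedged sketch of constrained optimization, passing a ConstraintModel as the constraint argument of BayesianOptimization; the constraint function, its bounds, and the target function below are illustrative:

import numpy as np
from bayes_opt import BayesianOptimization
from bayes_opt.constraint import ConstraintModel

def constraint_function(x, y):
    # Must share argument names with the target function.
    return np.cos(x) * np.cos(y) - np.sin(x) * np.sin(y)

constraint = ConstraintModel(fun=constraint_function, lb=-np.inf, ub=0.5)

constrained_optimizer = BayesianOptimization(
    f=black_box_function,                  # illustrative target from the earlier sketch
    pbounds={'x': (2, 4), 'y': (-3, 3)},
    constraint=constraint,
    random_state=1,
)
constrained_optimizer.maximize(init_points=2, n_iter=10)
print(constrained_optimizer.max)           # best point that fulfills the constraint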
- allowed(constraint_values)
Check whether constraint_values fulfills the specified limits.
Parameters
- constraint_values: np.ndarray of shape (n_samples, n_constraints)
The values of the constraint function.
Returns
- np.ndarray of shape (n_samples,)
Specifying whether the constraints are fulfilled.
- approx(X)
Approximate the constraint function using the internal GPR model.
Parameters
- X: np.ndarray of shape (n_samples, n_features)
Parameters for which to estimate the constraint function value.
Returns
- np.ndarray of shape (n_samples, n_constraints)
Constraint function value estimates.
- eval(**kwargs: dict)
Evaluate the constraint function.
Parameters
- **kwargs :
Function arguments to evaluate the constraint function on.
Returns
Value of the constraint function.
Raises
- TypeError
If the kwargs’ keys don’t match the function argument names.
- fit(X, Y)
Fit internal GPRs to the data.
Parameters
- X :
Parameters of the constraint function.
- Y :
Values of the constraint function.
Returns
None
- property lb
Return lower bounds.
- property model
Return GP regressors of the constraint function.
- predict(X)
Calculate the probability that the constraint is fulfilled at X.
Note that this does not approximate the values of the constraint function (for that, see ConstraintModel.approx()), but rather the probability that the constraint is fulfilled. That is, this function calculates
\[p = \text{Pr}\left\{c^{\text{low}} \leq \tilde{c}(x) \leq c^{\text{up}} \right\} = \int_{c^{\text{low}}}^{c^{\text{up}}} \mathcal{N}(c, \mu(x), \sigma^2(x)) \, dc.\]with \(\mu(x)\), \(\sigma^2(x)\) the mean and variance at \(x\) as given by the GP and \(c^{\text{low}}\), \(c^{\text{up}}\) the lower and upper bounds of the constraint respectively.
In case of multiple constraints, we assume conditional independence. This means we calculate the probability of constraint fulfilment individually, with the joint probability given as their product.
Parameters
- X: np.ndarray of shape (n_samples, n_features)
Parameters for which to predict the probability of constraint fulfilment.
Returns
- np.ndarray of shape (n_samples,)
Probability of constraint fulfilment.
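A worked sketch of the probability above for a single scalar constraint, using an assumed GP posterior mean and standard deviation at one point:

import numpy as np
from scipy.stats import norm

mu, sigma = 0.2, 0.5            # assumed GP posterior mean and std at x
c_low, c_up = -np.inf, 0.5      # constraint bounds lb and ub

p = norm.cdf(c_up, loc=mu, scale=sigma) - norm.cdf(c_low, loc=mu, scale=sigma)
print(p)                        # probability that the constraint is fulfilled at x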
- property ub
Return upper bounds.