Title: Flexible Bayesian Optimization
Description: A modern and flexible approach to Bayesian Optimization / Model Based Optimization building on the 'bbotk' package. 'mlr3mbo' is a toolbox providing both ready-to-use optimization algorithms as well as their fundamental building blocks allowing for straightforward implementation of custom algorithms. Single- and multi-objective optimization is supported as well as mixed continuous, categorical and conditional search spaces. Moreover, using 'mlr3mbo' for hyperparameter optimization of machine learning models within the 'mlr3' ecosystem is straightforward via 'mlr3tuning'. Examples of ready-to-use optimization algorithms include Efficient Global Optimization by Jones et al. (1998) <doi:10.1023/A:1008306431147>, ParEGO by Knowles (2006) <doi:10.1109/TEVC.2005.851274> and SMS-EGO by Ponweiser et al. (2008) <doi:10.1007/978-3-540-87700-4_78>.
Authors: Lennart Schneider [cre, aut], Jakob Richter [aut], Marc Becker [aut], Michel Lang [aut], Bernd Bischl [aut], Florian Pfisterer [aut], Martin Binder [aut], Sebastian Fischer [aut], Michael H. Buselli [cph], Wessel Dankers [cph], Carlos Fonseca [cph], Manuel Lopez-Ibanez [cph], Luis Paquete [cph]
Maintainer: Lennart Schneider <[email protected]>
License: LGPL-3
Version: 0.2.8
Built: 2024-11-22 09:23:03 UTC
Source: https://github.com/mlr-org/mlr3mbo
Maintainer: Lennart Schneider [email protected]
Authors: Jakob Richter [email protected], Marc Becker [email protected], Michel Lang [email protected], Bernd Bischl [email protected], Florian Pfisterer [email protected], Martin Binder [email protected], Sebastian Fischer [email protected]
Other contributors: Michael H. Buselli [copyright holder], Wessel Dankers [copyright holder], Carlos Fonseca [copyright holder], Manuel Lopez-Ibanez [copyright holder], Luis Paquete [copyright holder]
Useful links: Report bugs at https://github.com/mlr-org/mlr3mbo/issues
This function complements mlr_acqfunctions with functions in the spirit of mlr_sugar from mlr3.
acqf(.key, ...)
.key (character(1))
Key passed to the respective dictionary to retrieve the object.
... (named list())
Named arguments passed to the constructor.
Returns an AcqFunction.
acqf("ei")
This function complements mlr_acqfunctions with functions in the spirit of mlr_sugar from mlr3.
acqfs(.keys, ...)
.keys (character())
Keys passed to the respective dictionary to retrieve multiple objects.
... (named list())
Named arguments passed to the constructors.
Returns a list of AcqFunctions.
acqfs(c("ei", "pi", "cb"))
Abstract acquisition function class.
Based on the predictions of a Surrogate, the acquisition function encodes the preference to evaluate a new point.
bbotk::Objective -> AcqFunction
direction ("same" | "minimize" | "maximize")
Optimization direction of the acquisition function relative to the direction of the objective function of the bbotk::OptimInstance. Must be "same", "minimize", or "maximize".
surrogate_max_to_min (-1 | 1)
Multiplicative factor to correct for minimization or maximization of the acquisition function.
label (character(1))
Label for this object.
man (character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
archive (bbotk::Archive)
Points to the bbotk::Archive of the surrogate.
fun (function)
Points to the private acquisition function to be implemented by subclasses.
surrogate (Surrogate)
Surrogate.
requires_predict_type_se (logical(1))
Whether the acquisition function requires the surrogate to have "se" as $predict_type.
packages (character())
Set of required packages.
new()
Creates a new instance of this R6 class. Note that the surrogate can be initialized lazily and can later be set via the active binding $surrogate.
AcqFunction$new(id, constants = ParamSet$new(), surrogate, requires_predict_type_se, direction, packages = NULL, label = NA_character_, man = NA_character_)
id (character(1)).
constants (paradox::ParamSet)
Changeable constants or parameters.
surrogate (NULL | Surrogate)
Surrogate whose predictions are used in the acquisition function.
requires_predict_type_se (logical(1))
Whether the acquisition function requires the surrogate to have "se" as $predict_type.
direction ("same" | "minimize" | "maximize")
Optimization direction of the acquisition function relative to the direction of the objective function of the bbotk::OptimInstance. Must be "same", "minimize", or "maximize".
packages (character())
Set of required packages. A warning is signaled prior to construction if at least one of the packages is not installed; packages are loaded (not attached) later on demand via requireNamespace().
label (character(1))
Label for this object.
man (character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
update()
Update the acquisition function. Can be implemented by subclasses.
AcqFunction$update()
reset()
Reset the acquisition function. Can be implemented by subclasses.
AcqFunction$reset()
eval_many()
Evaluates multiple input values on the objective function.
AcqFunction$eval_many(xss)
xss (list())
A list of lists that contains multiple x values, e.g. list(list(x1 = 1, x2 = 2), list(x1 = 3, x2 = 4)).
Returns a data.table::data.table() that contains one y-column for single-objective functions and multiple y-columns for multi-objective functions, e.g. data.table(y = 1:2) or data.table(y1 = 1:2, y2 = 3:4).
eval_dt()
Evaluates multiple input values on the objective function.
AcqFunction$eval_dt(xdt)
xdt (data.table::data.table())
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4)).
Returns a data.table::data.table() that contains one y-column for single-objective functions and multiple y-columns for multi-objective functions, e.g. data.table(y = 1:2) or data.table(y1 = 1:2, y2 = 3:4).
clone()
The objects of this class are cloneable with this method.
AcqFunction$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Acquisition Function: mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
This function allows constructing an AcqOptimizer in the spirit of mlr_sugar from mlr3.
acqo(optimizer, terminator, acq_function = NULL, callbacks = NULL, ...)
optimizer (bbotk::OptimizerBatch).
terminator (bbotk::Terminator).
acq_function (NULL | AcqFunction).
callbacks (NULL | list of mlr3misc::Callback).
... (named list())
Named arguments set as parameters in the param_set of the AcqOptimizer, e.g., catch_errors = FALSE.
Returns an AcqOptimizer.
library(bbotk)
acqo(opt("random_search"), trm("evals"), catch_errors = FALSE)
Optimizer for AcqFunctions which performs the acquisition function optimization. Wraps a bbotk::OptimizerBatch and a bbotk::Terminator.
n_candidates (integer(1))
Number of candidate points to propose. Note that this does not affect how the acquisition function itself is calculated (e.g., setting n_candidates > 1 will not result in computing the q- or multi-Expected Improvement); rather, the top n_candidates are selected from the bbotk::ArchiveBatch of the acquisition function bbotk::OptimInstanceBatch. Note that setting n_candidates > 1 is usually not sensible but is still supported for experimental reasons. Note that in the case of the acquisition function bbotk::OptimInstanceBatch being multi-criteria, due to using an AcqFunctionMulti, selection of the best candidates is performed via non-dominated sorting. Default is 1.
logging_level (character(1))
Logging level during the acquisition function optimization. Can be "fatal", "error", "warn", "info", "debug" or "trace". Default is "warn", i.e., only warnings are logged.
warmstart (logical(1))
Should the acquisition function optimization be warm-started by evaluating the best point(s) present in the bbotk::Archive of the actual bbotk::OptimInstance (which is contained in the archive of the AcqFunction)? This is sensible when using a population-based acquisition function optimizer, e.g., local search or mutation. Default is FALSE. Note that in the case of the bbotk::OptimInstance being multi-criteria, selection of the best point(s) is performed via non-dominated sorting.
warmstart_size (integer(1) | "all")
Number of best points selected from the bbotk::Archive of the actual bbotk::OptimInstance that are used for warm starting. Can either be an integer or "all" to use all available points. Only relevant if warmstart = TRUE. Default is 1.
skip_already_evaluated (logical(1))
It can happen that the candidate(s) resulting from the acquisition function optimization were already evaluated on the actual bbotk::OptimInstance. Should such candidate proposals be ignored and only candidates that have not yet been evaluated be considered? Default is TRUE.
catch_errors (logical(1))
Should errors during the acquisition function optimization be caught and propagated to the loop_function, which can then handle the failed acquisition function optimization appropriately by, e.g., proposing a randomly sampled point for evaluation? Setting this to FALSE can be helpful for debugging. Default is TRUE.
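The parameters above live in the optimizer's param_set and can be adjusted after construction. A minimal sketch (the concrete values are illustrative):

library(bbotk)

acq_optimizer = acqo(
  optimizer = opt("random_search", batch_size = 1000),
  terminator = trm("evals", n_evals = 1000))

# adjust the optimization behavior via the param_set
acq_optimizer$param_set$values$warmstart = TRUE
acq_optimizer$param_set$values$warmstart_size = 5
acq_optimizer$param_set$values$logging_level = "info"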
optimizer (bbotk::OptimizerBatch).
terminator (bbotk::Terminator).
acq_function (AcqFunction).
callbacks (NULL | list of mlr3misc::Callback).
print_id (character)
Id used when printing.
param_set (paradox::ParamSet)
Set of hyperparameters.
new()
Creates a new instance of this R6 class.
AcqOptimizer$new(optimizer, terminator, acq_function = NULL, callbacks = NULL)
optimizer (bbotk::OptimizerBatch).
terminator (bbotk::Terminator).
acq_function (NULL | AcqFunction).
callbacks (NULL | list of mlr3misc::Callback).
format()
Helper for print outputs.
AcqOptimizer$format()
Returns (character(1)).
print()
Print method.
AcqOptimizer$print()
Returns (character()).
optimize()
Optimize the acquisition function.
AcqOptimizer$optimize()
Returns a data.table::data.table() with 1 row per candidate.
reset()
Reset the acquisition function optimizer. Currently not used.
AcqOptimizer$reset()
clone()
The objects of this class are cloneable with this method.
AcqOptimizer$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("ei", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()

  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 1000),
    terminator = trm("evals", n_evals = 1000),
    acq_function = acq_function)
  acq_optimizer$optimize()
}
Chooses a default acquisition function, i.e., the criterion used to propose future points. For synchronous single-objective optimization, defaults to mlr_acqfunctions_ei. For synchronous multi-objective optimization, defaults to mlr_acqfunctions_smsego. For asynchronous single-objective optimization, defaults to mlr_acqfunctions_stochastic_cb.
default_acqfunction(instance)
instance (bbotk::OptimInstance)
An object that inherits from bbotk::OptimInstance.
Other mbo_defaults: default_acqoptimizer(), default_gp(), default_loop_function(), default_result_assigner(), default_rf(), default_surrogate(), mbo_defaults
Chooses a default acquisition function optimizer. Defaults to wrapping bbotk::OptimizerBatchRandomSearch allowing 10000 function evaluations (with a batch size of 1000) via a bbotk::TerminatorEvals.
default_acqoptimizer(acq_function)
acq_function (AcqFunction).
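A minimal usage sketch (the $optimizer and $terminator fields are documented on the AcqOptimizer page):

acq_optimizer = default_acqoptimizer(acqf("ei"))
acq_optimizer$optimizer   # batch random search with batch_size = 1000
acq_optimizer$terminator  # terminator allowing 10000 evaluations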
Other mbo_defaults: default_acqfunction(), default_gp(), default_loop_function(), default_result_assigner(), default_rf(), default_surrogate(), mbo_defaults
This is a helper function that constructs a default Gaussian Process mlr3::LearnerRegr, which is, for example, used in default_surrogate.
Constructs a Kriging learner "regr.km" with kernel "matern5_2".
If noisy = FALSE (default), a small nugget effect (nugget.stability = 10^-8) is added to increase numerical stability and hopefully prevent crashes of DiceKriging.
If noisy = TRUE, the nugget effect will be estimated with nugget.estim = TRUE. Additionally, jitter is set to TRUE to circumvent a problem with DiceKriging where already trained input values produce the exact trained output.
In general, instead of the default "BFGS" optimization method, we use rgenoud ("gen"), a hybrid algorithm that combines global search based on genetic algorithms and local search based on gradients. This may improve the model fit and will less frequently produce a constant model prediction.
default_gp(noisy = FALSE)
noisy (logical(1))
Whether the learner should be configured for a noisy objective function.
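A minimal sketch inspecting the constructed learner via the standard mlr3 param_set (requires mlr3learners and DiceKriging):

if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging")) {
  library(mlr3learners)

  learner = default_gp(noisy = TRUE)
  # kernel, optimizer, and the nugget settings described above
  learner$param_set$values[c("covtype", "optim.method", "nugget.estim", "jitter")]
}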
Other mbo_defaults: default_acqfunction(), default_acqoptimizer(), default_loop_function(), default_result_assigner(), default_rf(), default_surrogate(), mbo_defaults
Chooses a default loop_function, i.e., the Bayesian Optimization flavor to be used for optimization. For single-objective optimization, defaults to bayesopt_ego. For multi-objective optimization, defaults to bayesopt_smsego.
default_loop_function(instance)
instance (bbotk::OptimInstance).
Other mbo_defaults: default_acqfunction(), default_acqoptimizer(), default_gp(), default_result_assigner(), default_rf(), default_surrogate(), mbo_defaults
Chooses a default result assigner. Defaults to ResultAssignerArchive.
default_result_assigner(instance)
instance (bbotk::OptimInstance).
Other mbo_defaults: default_acqfunction(), default_acqoptimizer(), default_gp(), default_loop_function(), default_rf(), default_surrogate(), mbo_defaults
This is a helper function that constructs a default random forest mlr3::LearnerRegr, which is, for example, used in default_surrogate.
Constructs a ranger learner "regr.ranger" with num.trees = 100, keep.inbag = TRUE and se.method = "jack".
default_rf(noisy = FALSE)
noisy (logical(1)).
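A minimal sketch inspecting the constructed learner (requires mlr3learners and ranger):

if (requireNamespace("mlr3learners") & requireNamespace("ranger")) {
  library(mlr3learners)

  learner = default_rf()
  # num.trees = 100, keep.inbag = TRUE, se.method = "jack"
  learner$param_set$values
}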
Other mbo_defaults: default_acqfunction(), default_acqoptimizer(), default_gp(), default_loop_function(), default_result_assigner(), default_surrogate(), mbo_defaults
This is a helper function that constructs a default Surrogate based on properties of the bbotk::OptimInstance.
For numeric-only (including integers) parameter spaces without any dependencies, a Gaussian Process is constructed via default_gp(). For mixed numeric-categorical parameter spaces, or spaces with conditional parameters, a random forest is constructed via default_rf().
In any case, learners are encapsulated using "evaluate", and a fallback learner is set for cases in which the surrogate learner errors. Currently, the following learner is used as a fallback: lrn("regr.ranger", num.trees = 10L, keep.inbag = TRUE, se.method = "jack").
If dependencies are present in the parameter space, inactive conditional parameters are represented by missing NA values in the training design data. We handle those with an imputation method added to the random forest; more concretely, we use po("imputesample") (for logicals) and po("imputeoor") (for anything else) from package mlr3pipelines. Characters are always encoded as factors via po("colapply"). Out-of-range imputation makes sense for tree-based methods and is usually hard to beat; see Ding et al. (2010). In the case of dependencies, the following learner is used as a fallback: lrn("regr.featureless").
If n_learner is 1, the learner is wrapped as a SurrogateLearner. Otherwise, if n_learner is larger than 1, multiple deep clones of the learner are wrapped as a SurrogateLearnerCollection.
default_surrogate(instance, learner = NULL, n_learner = NULL, force_random_forest = FALSE)
instance (bbotk::OptimInstance).
learner (NULL | mlr3::Learner).
n_learner (NULL | integer(1)).
force_random_forest (logical(1)).
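A minimal usage sketch under the rules above: a numeric-only search space without dependencies yields a Gaussian Process surrogate (instance construction follows the other examples in this manual):

if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging")) {
  library(bbotk)
  library(paradox)
  library(mlr3learners)

  objective = ObjectiveRFun$new(
    fun = function(xs) list(y = xs$x ^ 2),
    domain = ps(x = p_dbl(lower = -10, upper = 10)),
    codomain = ps(y = p_dbl(tags = "minimize")))

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  # numeric-only space without dependencies: wraps default_gp()
  surrogate = default_surrogate(instance)
  surrogate$learner
}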
Ding, Yufeng, Simonoff, Jeffrey S. (2010). “An Investigation of Missing Data Methods for Classification Trees Applied to Binary Response Data.” Journal of Machine Learning Research, 11(1), 131–170.
Other mbo_defaults: default_acqfunction(), default_acqoptimizer(), default_gp(), default_loop_function(), default_result_assigner(), default_rf(), mbo_defaults
Loop functions determine the behavior of the Bayesian Optimization algorithm on a global level. For an overview of readily available loop functions, see as.data.table(mlr_loop_functions).
In general, a loop function is simply a decorated member of the S3 class loop_function. Attributes must include: id (id of the loop function), label (brief description), instance ("single-crit" and/or "multi-crit"), and man (link to the manual page).
As an example, see, e.g., bayesopt_ego.
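A minimal sketch of such a decorated function, assuming the attribute conventions above; the function name, its body, and the man link are hypothetical, and the body is a bare-bones loop in the spirit of bayesopt_ego (the real loop functions additionally evaluate an initial design and handle errors):

bayesopt_custom = function(instance, surrogate, acq_function, acq_optimizer) {
  # wire the components together
  surrogate$archive = instance$archive
  acq_function$surrogate = surrogate
  acq_optimizer$acq_function = acq_function
  # repeatedly update surrogate and acquisition function, then propose and
  # evaluate one candidate; the instance's terminator stops the loop
  repeat {
    acq_function$surrogate$update()
    acq_function$update()
    candidate = acq_optimizer$optimize()
    instance$eval_batch(candidate)
  }
}
class(bayesopt_custom) = "loop_function"
attr(bayesopt_custom, "id") = "bayesopt_custom"
attr(bayesopt_custom, "label") = "Custom BO Loop"
attr(bayesopt_custom, "instance") = "single-crit"
attr(bayesopt_custom, "man") = "mypkg::bayesopt_custom"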
Other Loop Function: mlr_loop_functions, mlr_loop_functions_ego, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_parego, mlr_loop_functions_smsego
The following defaults are set for OptimizerMbo during optimization if the respective fields are not set during initialization.
Optimization Loop: default_loop_function
Surrogate: default_surrogate
Acquisition Function: default_acqfunction
Acquisition Function Optimizer: default_acqoptimizer
Result Assigner: default_result_assigner
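For instance, an OptimizerMbo constructed without any of these fields set picks all of the above defaults when optimization starts. A minimal sketch (opt("mbo") retrieves OptimizerMbo from the bbotk optimizer dictionary):

if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)

  objective = ObjectiveRFun$new(
    fun = function(xs) list(y = xs$x ^ 2),
    domain = ps(x = p_dbl(lower = -10, upper = 10)),
    codomain = ps(y = p_dbl(tags = "minimize")))

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 10))

  # no loop_function, surrogate, acq_function, acq_optimizer or result
  # assigner set: the defaults listed above are chosen during optimize()
  optimizer = opt("mbo")
  optimizer$optimize(instance)
}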
Other mbo_defaults: default_acqfunction(), default_acqoptimizer(), default_gp(), default_loop_function(), default_result_assigner(), default_rf(), default_surrogate()
A simple mlr3misc::Dictionary storing objects of class AcqFunction. Each acquisition function has an associated help page, see mlr_acqfunctions_[id].
For a more convenient way to retrieve and construct an acquisition function, see acqf() and acqfs().
Returns an R6::R6Class object inheriting from mlr3misc::Dictionary; for the available methods, see mlr3misc::Dictionary.
Sugar functions: acqf(), acqfs()
Other Dictionary: mlr_loop_functions, mlr_result_assigners
Other Acquisition Function: AcqFunction, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
library(data.table)
as.data.table(mlr_acqfunctions)
acqf("ei")
Augmented Expected Improvement. Useful when working with noisy objectives. Currently only works correctly with "regr.km" as surrogate model and nugget.estim = TRUE or given.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("aei")
acqf("aei")
"c" (numeric(1))
Constant as used in Formula (14) of Huang (2012) to reflect the degree of risk aversion. Defaults to 1.
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionAEI
y_effective_best (numeric(1))
Best effective objective value observed so far. In the case of maximization, this already includes the necessary change of sign.
noise_var (numeric(1))
Estimate of the variance of the noise. This corresponds to the nugget estimate when using "regr.km" from mlr3learners as the surrogate model.
new()
Creates a new instance of this R6 class.
AcqFunctionAEI$new(surrogate = NULL, c = 1)
surrogate (NULL | SurrogateLearner).
c (numeric(1)).
update()
Update the acquisition function and set y_effective_best and noise_var.
AcqFunctionAEI$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionAEI$clone(deep = FALSE)
deep
Whether to make a deep clone.
Huang D, Allen TT, Notz WI, Zheng N (2012). “Erratum To: Global Optimization of Stochastic Black-box Systems via Sequential Kriging Meta-Models.” Journal of Global Optimization, 54(2), 431–431.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  set.seed(2906)
  fun = function(xs) {
    list(y = xs$x ^ 2 + rnorm(length(xs$x), mean = 0, sd = 1))
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain,
    properties = "noisy")

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = lrn("regr.km",
    covtype = "matern5_2",
    optim.method = "gen",
    nugget.estim = TRUE,
    jitter = 1e-12,
    control = list(trace = FALSE))
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("aei", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Lower / Upper Confidence Bound.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("cb")
acqf("cb")
"lambda" (numeric(1))
Value used for the confidence bound. Defaults to 2.
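For intuition, a minimal stand-alone sketch of the criterion for a minimization problem, assuming the usual lower confidence bound form mean - lambda * se (not mlr3mbo's implementation, which additionally corrects for maximization):

cb = function(mean, se, lambda = 2) {
  # a low posterior mean or high uncertainty both make a point attractive
  mean - lambda * se
}
cb(mean = c(1.0, 1.2), se = c(0.1, 0.8))  # the more uncertain point scores lower (better)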
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionCB
new()
Creates a new instance of this R6 class.
AcqFunctionCB$new(surrogate = NULL, lambda = 2)
surrogate (NULL | SurrogateLearner).
lambda (numeric(1)).
clone()
The objects of this class are cloneable with this method.
AcqFunctionCB$clone(deep = FALSE)
deep
Whether to make a deep clone.
Snoek, Jasper, Larochelle, Hugo, Adams, Ryan P. (2012). “Practical Bayesian Optimization of Machine Learning Algorithms.” In Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds.), Advances in Neural Information Processing Systems, volume 25, 2951–2959.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("cb", surrogate = surrogate, lambda = 3)
  acq_function$surrogate$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Exact Expected Hypervolume Improvement. Calculates the exact expected hypervolume improvement in the case of two objectives. In the case of optimizing more than two objective functions, AcqFunctionEHVIGH can be used. See Emmerich et al. (2016) for details.
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionEHVI
ys_front (matrix())
Approximated Pareto front. Sorted by the first objective. Signs are corrected with respect to assuming minimization of objectives.
ref_point (numeric())
Reference point. Signs are corrected with respect to assuming minimization of objectives.
ys_front_augmented (matrix())
Augmented approximated Pareto front. Sorted by the first objective. Signs are corrected with respect to assuming minimization of objectives.
new()
Creates a new instance of this R6 class.
AcqFunctionEHVI$new(surrogate = NULL)
surrogate (NULL | SurrogateLearnerCollection).
update()
Update the acquisition function and set ys_front and ref_point.
AcqFunctionEHVI$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionEHVI$clone(deep = FALSE)
deep
Whether to make a deep clone.
Emmerich, Michael, Yang, Kaifeng, Deutz, André, Wang, Hao, Fonseca, Carlos M. (2016). “A Multicriteria Generalization of Bayesian Global Optimization.” In Pardalos, Panos M., Zhigljavsky, Anatoly, Žilinskas, Julius (eds.), Advances in Stochastic and Deterministic Global Optimization, 229–242. Springer International Publishing, Cham.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y1 = xs$x ^ 2, y2 = (xs$x - 2) ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchMultiCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(list(learner, learner$clone(deep = TRUE)), archive = instance$archive)

  acq_function = acqf("ehvi", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Expected Hypervolume Improvement. Computed via Gauss-Hermite quadrature.
In the case of optimizing only two objective functions, AcqFunctionEHVI is to be preferred.
"k" (integer(1))
Number of nodes per objective used for the numerical integration via Gauss-Hermite quadrature. Defaults to 15. For example, if two objectives are to be optimized, the total number of nodes will therefore be 225 per default. Changing this value after construction requires a call to $update() to update the $gh_data field.
"r" (numeric(1))
Pruning rate between 0 and 1 that determines the fraction of nodes of the Gauss-Hermite quadrature rule that are ignored based on their weight value (the nodes with the lowest weights being ignored). Default is 0.2. Changing this value after construction does not require a call to $update().
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionEHVIGH
ys_front (matrix())
Approximated Pareto front. Signs are corrected with respect to assuming minimization of objectives.
ref_point (numeric())
Reference point. Signs are corrected with respect to assuming minimization of objectives.
hypervolume (numeric(1))
Current hypervolume of the approximated Pareto front with respect to the reference point.
gh_data (matrix())
Data required for the Gauss-Hermite quadrature rule in the form of a matrix of dimension (k x 2). Each row corresponds to one Gauss-Hermite node (column "x") and corresponding weight (column "w"). Computed via fastGHQuad::gaussHermiteData. Nodes are scaled by a factor of sqrt(2) and weights are normalized under a sum-to-one constraint.
new()
Creates a new instance of this R6 class.
AcqFunctionEHVIGH$new(surrogate = NULL, k = 15L, r = 0.2)
surrogate (NULL | SurrogateLearnerCollection).
k (integer(1)).
r (numeric(1)).
update()
Update the acquisition function and set ys_front, ref_point, hypervolume and gh_data.
AcqFunctionEHVIGH$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionEHVIGH$clone(deep = FALSE)
deep
Whether to make a deep clone.
Rahat, Alma, Chugh, Tinkle, Fieldsend, Jonathan, Allmendinger, Richard, Miettinen, Kaisa (2022). “Efficient Approximation of Expected Hypervolume Improvement using Gauss-Hermite Quadrature.” In Rudolph, Günter, Kononova, Anna V., Aguirre, Hernán, Kerschke, Pascal, Ochoa, Gabriela, Tušar, Tea (eds.), Parallel Problem Solving from Nature – PPSN XVII, 90–103.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y1 = xs$x ^ 2, y2 = (xs$x - 2) ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchMultiCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(list(learner, learner$clone(deep = TRUE)), archive = instance$archive)

  acq_function = acqf("ehvigh", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Expected Improvement.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("ei")
acqf("ei")
"epsilon" (numeric(1))
Value used to determine the amount of exploration. Higher values result in the importance of improvements predicted by the posterior mean decreasing relative to the importance of potential improvements in regions of high predictive uncertainty. Defaults to 0 (standard Expected Improvement).
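For intuition, a minimal stand-alone sketch of the criterion for a minimization problem, following the closed form in Jones et al. (1998) with the epsilon shift described above (not mlr3mbo's implementation, which additionally corrects for maximization and numerical edge cases):

ei = function(mean, se, y_best, epsilon = 0) {
  d = y_best - epsilon - mean  # predicted improvement over the shifted best
  z = d / se
  ifelse(se < 1e-20, 0, d * pnorm(z) + se * dnorm(z))
}
ei(mean = c(1.0, 1.2), se = c(0.1, 0.8), y_best = 1.1)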
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionEI
y_best (numeric(1))
Best objective function value observed so far. In the case of maximization, this already includes the necessary change of sign.
new()
Creates a new instance of this R6 class.
AcqFunctionEI$new(surrogate = NULL, epsilon = 0)
surrogate (NULL | SurrogateLearner).
epsilon (numeric(1)).
update()
Update the acquisition function and set y_best.
AcqFunctionEI$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionEI$clone(deep = FALSE)
deep
Whether to make a deep clone.
Jones, Donald R., Schonlau, Matthias, Welch, William J. (1998). “Efficient Global Optimization of Expensive Black-Box Functions.” Journal of Global Optimization, 13(4), 455–492.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("ei", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Expected Improvement per Second.
It is assumed that calculations are performed on a bbotk::OptimInstanceBatchSingleCrit. In addition to the target values of the codomain that should be minimized or maximized, the bbotk::Objective of the bbotk::OptimInstanceBatchSingleCrit should return time values. The column names of the target variable and the time variable must be passed as cols_y in the order (target, time) when constructing the SurrogateLearnerCollection that is used as the surrogate.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("eips")
acqf("eips")
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionEIPS
y_best (numeric(1))
Best objective function value observed so far. In the case of maximization, this already includes the necessary change of sign.
col_y (character(1)).
col_time (character(1)).
new()
Creates a new instance of this R6 class.
AcqFunctionEIPS$new(surrogate = NULL)
surrogate (NULL | SurrogateLearnerCollection).
update()
Update the acquisition function and set y_best.
AcqFunctionEIPS$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionEIPS$clone(deep = FALSE)
deep
Whether to make a deep clone.
Snoek, Jasper, Larochelle, Hugo, Adams, Ryan P. (2012). “Practical Bayesian Optimization of Machine Learning Algorithms.” In Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds.), Advances in Neural Information Processing Systems, volume 25, 2951–2959.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2, time = abs(xs$x))
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"), time = p_dbl(tags = "time"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(list(learner, learner$clone(deep = TRUE)), archive = instance$archive)
  surrogate$cols_y = c("y", "time")

  acq_function = acqf("eips", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Posterior Mean.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("mean")
acqf("mean")
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionMean
new()
Creates a new instance of this R6 class.
AcqFunctionMean$new(surrogate = NULL)
surrogate (NULL | SurrogateLearner).
clone()
The objects of this class are cloneable with this method.
AcqFunctionMean$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("mean", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Wrapping multiple AcqFunctions results in a multi-objective acquisition function composed of the individual ones. Note that the optimization direction of each wrapped acquisition function is corrected for maximization.
For each acquisition function, the same Surrogate must be used. If acquisition functions passed during construction have already been initialized with a surrogate, it is checked whether the surrogate is the same for all of them. If acquisition functions have not been initialized with a surrogate, the surrogate passed during construction or lazy initialization will be used for all acquisition functions.
For optimization, AcqOptimizer can be used as for any other AcqFunction; however, the bbotk::OptimizerBatch wrapped within the AcqOptimizer must support multi-objective optimization as indicated via the multi-crit property.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("multi")
acqf("multi")
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionMulti
surrogate (Surrogate)
Surrogate.
acq_functions (list of AcqFunction)
Points to the list of the individual acquisition functions.
acq_function_ids (character())
Points to the ids of the individual acquisition functions.
new()
Creates a new instance of this R6 class.
AcqFunctionMulti$new(acq_functions, surrogate = NULL)
acq_functions (list of AcqFunctions).
surrogate (NULL | Surrogate).
update()
Update each of the wrapped acquisition functions.
AcqFunctionMulti$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionMulti$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("multi",
    acq_functions = acqfs(c("ei", "pi", "cb")),
    surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Probability of Improvement.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("pi")
acqf("pi")
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionPI
y_best (numeric(1))
Best objective function value observed so far. In the case of maximization, this already includes the necessary change of sign.
new()
Creates a new instance of this R6 class.
AcqFunctionPI$new(surrogate = NULL)
surrogate (NULL | SurrogateLearner).
update()
Update the acquisition function and set y_best.
AcqFunctionPI$update()
clone()
The objects of this class are cloneable with this method.
AcqFunctionPI$clone(deep = FALSE)
deep
Whether to make a deep clone.
Kushner, Harold J. (1964). “A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise.” Journal of Basic Engineering, 86(1), 97–106.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("pi", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
Posterior Standard Deviation.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("sd")
acqf("sd")
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionSD
new()
Creates a new instance of this R6 class.
AcqFunctionSD$new(surrogate = NULL)
surrogate (NULL | SurrogateLearner).
clone()
The objects of this class are cloneable with this method.
AcqFunctionSD$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") &
  requireNamespace("DiceKriging") &
  requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("sd", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()
  acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
S-Metric Selection Evolutionary Multi-Objective Optimization Algorithm Acquisition Function.
"lambda"
(numeric(1)
) value used for the confidence bound.
Defaults to
1
.
Based on confidence = (1 - 2 * dnorm(lambda)) ^ m
you can calculate a
lambda for a given confidence level, see Ponweiser et al. (2008).
"epsilon"
(numeric(1)
) used for the additive epsilon dominance.
Can either be a single numeric value > 0 or
NULL
(default).
In the case of being NULL
, an epsilon vector is maintained dynamically as
described in Horn et al. (2015).
This acquisition function always also returns its current epsilon values in a list column (acq_epsilon
).
These values will be logged into the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatch of the AcqOptimizer and
therefore also in the bbotk::Archive of the actual bbotk::OptimInstance that is to be optimized.
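A minimal sketch that numerically inverts the confidence formula above to obtain a lambda for a desired confidence level and number of objectives m (the helper name is illustrative):

lambda_for_confidence = function(confidence, m) {
  # solve (1 - 2 * dnorm(lambda)) ^ m == confidence for lambda
  uniroot(
    function(lambda) (1 - 2 * dnorm(lambda)) ^ m - confidence,
    interval = c(0, 10))$root
}
lambda_for_confidence(confidence = 0.95, m = 2)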
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionSmsEgo
ys_front (matrix())
Approximated Pareto front. Signs are corrected with respect to assuming minimization of objectives.
ref_point (numeric())
Reference point. Signs are corrected with respect to assuming minimization of objectives.
epsilon (numeric())
Epsilon used for the additive epsilon dominance.
progress (numeric(1))
Optimization progress (typically, the number of function evaluations left). Note that this requires the bbotk::OptimInstanceBatch to be terminated via a bbotk::TerminatorEvals.
new()
Creates a new instance of this R6 class.
AcqFunctionSmsEgo$new(surrogate = NULL, lambda = 1, epsilon = NULL)
surrogate (NULL | SurrogateLearnerCollection).
lambda (numeric(1)).
epsilon (NULL | numeric(1)).
update()
Update the acquisition function and set ys_front, ref_point and epsilon.
AcqFunctionSmsEgo$update()
reset()
Reset the acquisition function. Resets epsilon.
AcqFunctionSmsEgo$reset()
clone()
The objects of this class are cloneable with this method.
AcqFunctionSmsEgo$clone(deep = FALSE)
deep
Whether to make a deep clone.
Ponweiser, Wolfgang, Wagner, Tobias, Biermann, Dirk, Vincze, Markus (2008). “Multiobjective Optimization on a Limited Budget of Evaluations Using Model-Assisted S-Metric Selection.” In Proceedings of the 10th International Conference on Parallel Problem Solving from Nature, 784–794.
Horn, Daniel, Wagner, Tobias, Biermann, Dirk, Weihs, Claus, Bischl, Bernd (2015). “Model-Based Multi-objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark.” In International Conference on Evolutionary Multi-Criterion Optimization, 64–78.
Other Acquisition Function: AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_stochastic_cb, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) library(data.table) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) instance$eval_batch(data.table(x = c(-6, -5, 3, 9))) learner = default_gp() surrogate = srlrn(list(learner, learner$clone(deep = TRUE)), archive = instance$archive) acq_function = acqf("smsego", surrogate = surrogate) acq_function$surrogate$update() acq_function$progress = 5 - 4 # n_evals = 5 and 4 points already evaluated acq_function$update() acq_function$eval_dt(data.table(x = c(-1, 0, 1))) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) library(data.table) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) instance$eval_batch(data.table(x = c(-6, -5, 3, 9))) learner = default_gp() surrogate = srlrn(list(learner, learner$clone(deep = TRUE)), archive = instance$archive) acq_function = acqf("smsego", surrogate = surrogate) acq_function$surrogate$update() acq_function$progress = 5 - 4 # n_evals = 5 and 4 points already evaluated acq_function$update() acq_function$eval_dt(data.table(x = c(-1, 0, 1))) }
Lower / Upper Confidence Bound with lambda sampling and decay.
The initial lambda is drawn from a uniform distribution between min_lambda and max_lambda or from an exponential distribution with rate 1 / lambda.
lambda is updated after each update by the formula lambda * exp(-rate * (t %% period)), where t is the number of times the acquisition function has been updated.
While this acquisition function usually would be used within an asynchronous optimizer, e.g., OptimizerAsyncMbo, it can in principle also be used in synchronous optimizers, e.g., OptimizerMbo.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("stochastic_cb")
acqf("stochastic_cb")
"lambda"
(numeric(1)
) value for sampling from the exponential distribution.
Defaults to
1.96
.
"min_lambda"
(numeric(1)
)
Minimum value of for sampling from the uniform distribution.
Defaults to
0.01
.
"max_lambda"
(numeric(1)
)
Maximum value of for sampling from the uniform distribution.
Defaults to
10
.
"distribution"
(character(1)
)
Distribution to sample from.
One of
c("uniform", "exponential")
.
Defaults to uniform
.
"rate"
(numeric(1)
)
Rate of the exponential decay.
Defaults to 0
i.e. no decay.
"period"
(integer(1)
)
Period of the exponential decay.
Defaults to NULL
, i.e., the decay has no period.
This acquisition function always also returns its current (acq_lambda
) and original (acq_lambda_0
) .
These values will be logged into the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatch of the AcqOptimizer and
therefore also in the bbotk::Archive of the actual bbotk::OptimInstance that is to be optimized.
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionStochasticCB
new()
Creates a new instance of this R6 class.
AcqFunctionStochasticCB$new(
  surrogate = NULL,
  lambda = 1.96,
  min_lambda = 0.01,
  max_lambda = 10,
  distribution = "uniform",
  rate = 0,
  period = NULL
)
surrogate (NULL | SurrogateLearner).
lambda (numeric(1)).
min_lambda (numeric(1)).
max_lambda (numeric(1)).
distribution (character(1)).
rate (numeric(1)).
period (NULL | integer(1)).
update()
Update the acquisition function. Samples and decays lambda.
AcqFunctionStochasticCB$update()
reset()
Reset the acquisition function. Resets the private update counter .t used within the lambda decay.
AcqFunctionStochasticCB$reset()
clone()
The objects of this class are cloneable with this method.
AcqFunctionStochasticCB$clone(deep = FALSE)
deep
Whether to make a deep clone.
Snoek, Jasper, Larochelle, Hugo, Adams, Ryan P. (2012). “Practical Bayesian Optimization of Machine Learning Algorithms.” In Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds.), Advances in Neural Information Processing Systems, volume 25, 2951–2959.
Egelé, Romain, Guyon, Isabelle, Vishwanath, Venkatram, Balaprakash, Prasanna (2023). “Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization.” In 2023 IEEE 19th International Conference on e-Science (e-Science), 1–10.
Other Acquisition Function:
AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_ei
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) library(data.table) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) instance$eval_batch(data.table(x = c(-6, -5, 3, 9))) learner = default_gp() surrogate = srlrn(learner, archive = instance$archive) acq_function = acqf("stochastic_cb", surrogate = surrogate, lambda = 3) acq_function$surrogate$update() acq_function$update() acq_function$eval_dt(data.table(x = c(-1, 0, 1))) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) library(data.table) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) instance$eval_batch(data.table(x = c(-6, -5, 3, 9))) learner = default_gp() surrogate = srlrn(learner, archive = instance$archive) acq_function = acqf("stochastic_cb", surrogate = surrogate, lambda = 3) acq_function$surrogate$update() acq_function$update() acq_function$eval_dt(data.table(x = c(-1, 0, 1))) }
Expected Improvement with epsilon decay.
epsilon is updated after each update by the formula epsilon * exp(-rate * (t %% period)), where t is the number of times the acquisition function has been updated.
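To illustrate, the following self-contained sketch traces the decay using the default epsilon = 0.1 and rate = 0.05; with period = NULL the update counter t never wraps:

epsilon0 = 0.1; rate = 0.05
t = 0:50
epsilon_t = epsilon0 * exp(-rate * t)  # t %% period reduces to t when there is no period
head(epsilon_t)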
While this acquisition function usually would be used within an asynchronous optimizer, e.g., OptimizerAsyncMbo, it can in principle also be used in synchronous optimizers, e.g., OptimizerMbo.
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
mlr_acqfunctions$get("stochastic_ei")
acqf("stochastic_ei")
"epsilon"
(numeric(1)
) value used to determine the amount of exploration.
Higher values result in the importance of improvements predicted by the posterior mean
decreasing relative to the importance of potential improvements in regions of high predictive uncertainty.
Defaults to
0.1
.
"rate"
(numeric(1)
)
Defaults to 0.05
.
"period"
(integer(1)
)
Period of the exponential decay.
Defaults to NULL
, i.e., the decay has no period.
This acquisition function always also returns its current (acq_epsilon
) and original (acq_epsilon_0
) .
These values will be logged into the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatch of the AcqOptimizer and
therefore also in the bbotk::Archive of the actual bbotk::OptimInstance that is to be optimized.
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionStochasticEI
y_best (numeric(1))
Best objective function value observed so far. In the case of maximization, this already includes the necessary change of sign.
new()
Creates a new instance of this R6 class.
AcqFunctionStochasticEI$new(
  surrogate = NULL,
  epsilon = 0.1,
  rate = 0.05,
  period = NULL
)
surrogate (NULL | SurrogateLearner).
epsilon (numeric(1)).
rate (numeric(1)).
period (NULL | integer(1)).
update()
Update the acquisition function. Sets y_best to the best observed objective function value and decays epsilon.
AcqFunctionStochasticEI$update()
reset()
Reset the acquisition function. Resets the private update counter .t used within the epsilon decay.
AcqFunctionStochasticEI$reset()
clone()
The objects of this class are cloneable with this method.
AcqFunctionStochasticEI$clone(deep = FALSE)
deep
Whether to make a deep clone.
Jones, Donald R., Schonlau, Matthias, Welch, William J. (1998). “Efficient Global Optimization of Expensive Black-Box Functions.” Journal of Global Optimization, 13(4), 455–492.
Other Acquisition Function:
AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) library(data.table) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) instance$eval_batch(data.table(x = c(-6, -5, 3, 9))) learner = default_gp() surrogate = srlrn(learner, archive = instance$archive) acq_function = acqf("stochastic_ei", surrogate = surrogate) acq_function$surrogate$update() acq_function$update() acq_function$eval_dt(data.table(x = c(-1, 0, 1))) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) library(data.table) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) instance$eval_batch(data.table(x = c(-6, -5, 3, 9))) learner = default_gp() surrogate = srlrn(learner, archive = instance$archive) acq_function = acqf("stochastic_ei", surrogate = surrogate) acq_function$surrogate$update() acq_function$update() acq_function$eval_dt(data.table(x = c(-1, 0, 1))) }
A simple mlr3misc::Dictionary storing objects of class loop_function.
Each loop function has an associated help page, see mlr_loop_functions_[id].
Retrieves object with key key from the dictionary. Additional arguments must be named and are passed to the constructor of the stored object.
key (character(1))
Key of the object to retrieve.
... (any)
Passed down to the constructor of the stored object.
R6::R6Class object inheriting from mlr3misc::Dictionary.
Object with corresponding key.
See mlr3misc::Dictionary.
Other Dictionary:
mlr_acqfunctions, mlr_result_assigners
Other Loop Function:
loop_function, mlr_loop_functions_ego, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_parego, mlr_loop_functions_smsego
library(data.table)
as.data.table(mlr_loop_functions)
Loop function for sequential single-objective Bayesian Optimization. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the surrogate and acquisition function are updated and the next candidate is chosen based on optimizing the acquisition function.
bayesopt_ego(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  random_interleave_iter = 0L
)
instance (bbotk::OptimInstanceBatchSingleCrit)
The instance to be optimized.
surrogate (Surrogate)
Surrogate to be used.
acq_function (AcqFunction)
Acquisition function to be used.
acq_optimizer (AcqOptimizer)
Acquisition function optimizer.
init_design_size (NULL | integer(1))
Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used with d being the dimensionality of the search space. Points are drawn uniformly at random.
random_interleave_iter (integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random instead of being obtained from optimizing the acquisition function. Default is 0, i.e., no random interleaving.
invisible(instance)
The original instance is modified in-place and returned invisibly.
The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
The acq_optimizer$acq_function, even if already populated, will always be overwritten by acq_function.
The surrogate$archive, even if already populated, will always be overwritten by the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchSingleCrit.
Jones, Donald R., Schonlau, Matthias, Welch, William J. (1998). “Efficient Global Optimization of Expensive Black-Box Functions.” Journal of Global Optimization, 13(4), 455–492.
Snoek, Jasper, Larochelle, Hugo, Adams, Ryan P. (2012). “Practical Bayesian Optimization of Machine Learning Algorithms.” In Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds.), Advances in Neural Information Processing Systems, volume 25, 2951–2959.
Other Loop Function:
loop_function, mlr_loop_functions, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_parego, mlr_loop_functions_smsego
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_ego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) # expected improvement per second example fun = function(xs) { list(y = xs$x ^ 2, time = abs(xs$x)) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize"), time = p_dbl(tags = "time")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance, n_learner = 2) surrogate$cols_y = c("y", "time") optimizer = opt("mbo", loop_function = bayesopt_ego, surrogate = surrogate, acq_function = acqf("eips"), acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_ego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) # expected improvement per second example fun = function(xs) { list(y = xs$x ^ 2, time = abs(xs$x)) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize"), time = p_dbl(tags = "time")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance, n_learner = 2) surrogate$cols_y = c("y", "time") optimizer = opt("mbo", loop_function = bayesopt_ego, surrogate = surrogate, acq_function = acqf("eips"), acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
Loop function for sequential multi-objective Bayesian Optimization. Normally used inside an OptimizerMbo. The conceptual counterpart to mlr_loop_functions_ego.
In each iteration after the initial design, the surrogate and acquisition function are updated and the next candidate is chosen based on optimizing the acquisition function.
bayesopt_emo(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  random_interleave_iter = 0L
)
instance (bbotk::OptimInstanceBatchMultiCrit)
The instance to be optimized.
surrogate (SurrogateLearnerCollection)
Surrogate to be used.
acq_function (AcqFunction)
Acquisition function to be used.
acq_optimizer (AcqOptimizer)
Acquisition function optimizer.
init_design_size (NULL | integer(1))
Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used with d being the dimensionality of the search space. Points are drawn uniformly at random.
random_interleave_iter (integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random instead of being obtained from optimizing the acquisition function. Default is 0, i.e., no random interleaving.
invisible(instance)
The original instance is modified in-place and returned invisibly.
The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
The acq_optimizer$acq_function, even if already populated, will always be overwritten by acq_function.
The surrogate$archive, even if already populated, will always be overwritten by the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchMultiCrit.
Other Loop Function:
loop_function, mlr_loop_functions, mlr_loop_functions_ego, mlr_loop_functions_mpcl, mlr_loop_functions_parego, mlr_loop_functions_smsego
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("ehvi") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_emo, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("ehvi") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_emo, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
Loop function for single-objective Bayesian Optimization via multipoint constant liar. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the surrogate and acquisition function are updated.
The acquisition function is then optimized to find a candidate, but instead of evaluating this candidate, the objective function value is obtained by applying the liar function to all previously obtained objective function values.
This is repeated q - 1 times to obtain a total of q candidates that are then evaluated in a single batch, as illustrated by the sketch below.
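The following conceptual sketch (simplified plain R, not the actual bayesopt_mpcl internals) illustrates the constant liar mechanism:

ys = c(36, 25, 9, 81)  # objective values observed so far
liar = mean            # any function mapping numeric() -> numeric(1)
q = 3
for (i in seq_len(q - 1)) {
  # a candidate would be proposed by optimizing the acquisition function here;
  # its objective value is then faked by applying the liar to all known values
  ys = c(ys, liar(ys))
}
ys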
bayesopt_mpcl(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  q = 2L,
  liar = mean,
  random_interleave_iter = 0L
)
instance (bbotk::OptimInstanceBatchSingleCrit)
The instance to be optimized.
surrogate (Surrogate)
Surrogate to be used.
acq_function (AcqFunction)
Acquisition function to be used.
acq_optimizer (AcqOptimizer)
Acquisition function optimizer.
init_design_size (NULL | integer(1))
Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used with d being the dimensionality of the search space. Points are drawn uniformly at random.
q (integer(1))
Batch size > 1. Default is 2.
liar (function)
Any function accepting a numeric vector as input and returning a single numeric output, e.g., mean, min or max. Default is mean.
random_interleave_iter (integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random instead of being obtained from optimizing the acquisition function. Default is 0, i.e., no random interleaving.
invisible(instance)
The original instance is modified in-place and returned invisibly.
The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
The acq_optimizer$acq_function, even if already populated, will always be overwritten by acq_function.
The surrogate$archive, even if already populated, will always be overwritten by the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchSingleCrit.
To make use of parallel evaluations in the case of q > 1, the objective function of the bbotk::OptimInstanceBatchSingleCrit must be implemented accordingly.
Ginsbourger, David, Le Riche, Rodolphe, Carraro, Laurent (2008). “A Multi-Points Criterion for Deterministic Parallel Global Optimization Based on Gaussian Processes.”
Wang, Jialei, Clark, Scott C., Liu, Eric, Frazier, Peter I. (2020). “Parallel Bayesian Global Optimization of Expensive Functions.” Operations Research, 68(6), 1850–1865.
Other Loop Function:
loop_function, mlr_loop_functions, mlr_loop_functions_ego, mlr_loop_functions_emo, mlr_loop_functions_parego, mlr_loop_functions_smsego
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 7)) surrogate = default_surrogate(instance) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_mpcl, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer, args = list(q = 3)) optimizer$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 7)) surrogate = default_surrogate(instance) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_mpcl, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer, args = list(q = 3)) optimizer$optimize(instance) }
Loop function for multi-objective Bayesian Optimization via ParEGO. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the observed objective function values are normalized and q candidates are obtained by scalarizing these values via the augmented Tchebycheff function, updating the surrogate with respect to these scalarized values and optimizing the acquisition function.
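A minimal sketch of the augmented Tchebycheff scalarization (assuming already normalized objective values ys and a random weight vector w summing to 1; rho scales the linear part):

augmented_tchebycheff = function(ys, w, rho = 0.05) {
  max(w * ys) + rho * sum(w * ys)
}
augmented_tchebycheff(ys = c(0.2, 0.7), w = c(0.5, 0.5))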
bayesopt_parego(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  q = 1L,
  s = 100L,
  rho = 0.05,
  random_interleave_iter = 0L
)
instance (bbotk::OptimInstanceBatchMultiCrit)
The instance to be optimized.
surrogate (SurrogateLearner)
Surrogate to be used.
acq_function (AcqFunction)
Acquisition function to be used.
acq_optimizer (AcqOptimizer)
Acquisition function optimizer.
init_design_size (NULL | integer(1))
Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used with d being the dimensionality of the search space. Points are drawn uniformly at random.
q (integer(1))
Batch size. Default is 1.
s (integer(1))
s in Equation 1 in Knowles (2006). Determines the total number of possible random weight vectors. Default is 100.
rho (numeric(1))
rho in Equation 2 in Knowles (2006) scaling the linear part of the augmented Tchebycheff function. Default is 0.05.
random_interleave_iter (integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random instead of being obtained from optimizing the acquisition function. Default is 0, i.e., no random interleaving.
invisible(instance)
The original instance is modified in-place and returned invisibly.
The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
The acq_optimizer$acq_function, even if already populated, will always be overwritten by acq_function.
The surrogate$archive, even if already populated, will always be overwritten by the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchMultiCrit.
The scalarizations of the objective function values are stored as the y_scal column in the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchMultiCrit.
To make use of parallel evaluations in the case of q > 1, the objective function of the bbotk::OptimInstanceBatchMultiCrit must be implemented accordingly.
Knowles, Joshua (2006). “ParEGO: A Hybrid Algorithm With On-Line Landscape Approximation for Expensive Multiobjective Optimization Problems.” IEEE Transactions on Evolutionary Computation, 10(1), 50–66.
Other Loop Function:
loop_function, mlr_loop_functions, mlr_loop_functions_ego, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_smsego
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance, n_learner = 1) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_parego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance, n_learner = 1) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_parego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
Loop function for sequential multi-objective Bayesian Optimization via SMS-EGO. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the surrogate and acquisition function (mlr_acqfunctions_smsego) are updated and the next candidate is chosen based on optimizing the acquisition function.
bayesopt_smsego(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  random_interleave_iter = 0L
)
instance (bbotk::OptimInstanceBatchMultiCrit)
The instance to be optimized.
surrogate (SurrogateLearnerCollection)
Surrogate to be used.
acq_function (mlr_acqfunctions_smsego)
Acquisition function to be used.
acq_optimizer (AcqOptimizer)
Acquisition function optimizer.
init_design_size (NULL | integer(1))
Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used with d being the dimensionality of the search space. Points are drawn uniformly at random.
random_interleave_iter (integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random instead of being obtained from optimizing the acquisition function. Default is 0, i.e., no random interleaving.
invisible(instance)
The original instance is modified in-place and returned invisibly.
The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
The acq_optimizer$acq_function, even if already populated, will always be overwritten by acq_function.
The surrogate$archive, even if already populated, will always be overwritten by the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchMultiCrit.
Due to the iterative computation of the epsilon within mlr_acqfunctions_smsego, the bbotk::Terminator of the bbotk::OptimInstanceBatchMultiCrit must be a bbotk::TerminatorEvals.
Beume N, Naujoks B, Emmerich M (2007). “SMS-EMOA: Multiobjective selection based on dominated hypervolume.” European Journal of Operational Research, 181(3), 1653–1669.
Ponweiser, Wolfgang, Wagner, Tobias, Biermann, Dirk, Vincze, Markus (2008). “Multiobjective Optimization on a Limited Budget of Evaluations Using Model-Assisted S-Metric Selection.” In Proceedings of the 10th International Conference on Parallel Problem Solving from Nature, 784–794.
Other Loop Function:
loop_function, mlr_loop_functions, mlr_loop_functions_ego, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_parego
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("smsego") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_smsego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("smsego") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_smsego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
OptimizerADBO class that implements Asynchronous Decentralized Bayesian Optimization (ADBO).
ADBO is a variant of Asynchronous Model Based Optimization (AMBO) that uses AcqFunctionStochasticCB with exponential lambda decay.
Currently, only single-objective optimization is supported and OptimizerADBO is considered an experimental feature; the API might be subject to change.
lambda (numeric(1))
Value used for sampling the lambda for each worker from an exponential distribution.
rate (numeric(1))
Rate of the exponential decay.
period (integer(1))
Period of the exponential decay.
initial_design (data.table::data.table())
Initial design of the optimization. If NULL, a design of size design_size is generated with the specified design_function. Default is NULL.
design_size (integer(1))
Size of the initial design if it is to be generated. Default is 100.
design_function (character(1))
Sampling function to generate the initial design. Can be random (paradox::generate_design_random), lhs (paradox::generate_design_lhs), or sobol (paradox::generate_design_sobol). Default is sobol.
n_workers (integer(1))
Number of parallel workers. If NULL, all rush workers specified via rush::rush_plan() are used. Default is NULL.
bbotk::Optimizer -> bbotk::OptimizerAsync -> mlr3mbo::OptimizerAsyncMbo -> OptimizerADBO
new()
Creates a new instance of this R6 class.
OptimizerADBO$new()
optimize()
Performs the optimization on an bbotk::OptimInstanceAsyncSingleCrit until termination. The single evaluations will be written into the bbotk::ArchiveAsync. The result will be written into the instance object.
OptimizerADBO$optimize(inst)
clone()
The objects of this class are cloneable with this method.
OptimizerADBO$clone(deep = FALSE)
deep
Whether to make a deep clone.
The lambda parameter of the confidence bound acquisition function controls the trade-off between exploration and exploitation.
A large lambda value leads to more exploration, while a small lambda value leads to more exploitation.
The initial lambda value of the acquisition function used on each worker is drawn from an exponential distribution with rate 1 / lambda.
ADBO can use periodic exponential decay to reduce lambda periodically for a given time step t with the formula lambda * exp(-rate * (t %% period)).
The SurrogateLearner is configured to use a random forest and the AcqOptimizer is a random search with a batch size of 1000 and a budget of 10000 evaluations.
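A minimal construction sketch with the ADBO-specific control parameters set explicitly (illustrative values, not recommendations; construction only, no optimization is run):

library(bbotk)
# lambda, rate and period correspond to the control parameters described above
optimizer = opt("adbo", lambda = 1, rate = 0.1, period = 25L)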
Egelé, Romain, Guyon, Isabelle, Vishwanath, Venkatram, Balaprakash, Prasanna (2023). “Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization.” In 2023 IEEE 19th International Conference on e-Science (e-Science), 1–10.
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceAsyncSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) optimizer = opt("adbo", design_size = 4, n_workers = 2) optimizer$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceAsyncSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) optimizer = opt("adbo", design_size = 4, n_workers = 2) optimizer$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
OptimizerAsyncMbo class that implements Asynchronous Model Based Optimization (AMBO).
AMBO starts multiple sequential MBO runs on different workers.
The workers communicate asynchronously through a shared archive relying on the rush package.
The optimizer follows a modular layout in which the surrogate model, acquisition function, and acquisition optimizer can be changed.
The SurrogateLearner will impute missing values due to pending evaluations.
A stochastic AcqFunction, e.g., AcqFunctionStochasticEI or AcqFunctionStochasticCB is used to create varying versions of the acquisition
function on each worker, promoting different exploration-exploitation trade-offs.
The AcqOptimizer class remains consistent with the one used in synchronous MBO.
In contrast to OptimizerMbo, no loop_function can be specified that determines the AMBO flavor, as OptimizerAsyncMbo simply relies on a surrogate update, acquisition function update and acquisition function optimization step as its internal loop.
Currently, only single-objective optimization is supported and OptimizerAsyncMbo is considered an experimental feature; the API might be subject to change.
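Conceptually, each worker repeats the following internal loop (a simplified sketch in comment form, not the actual OptimizerAsyncMbo implementation):

# repeat {
#   surrogate$update()      # refit the surrogate on the shared archive
#   acq_function$update()   # e.g., sample or decay lambda / epsilon
#   candidate = acq_optimizer$optimize()
#   # evaluate the candidate and push the result to the shared archive
# }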
Note that in general the SurrogateLearner is updated one final time on all available data after the optimization process has terminated. However, in certain scenarios this is not always possible or meaningful. It is therefore recommended to manually inspect the SurrogateLearner after optimization if it is to be used, e.g., for visualization purposes to make sure that it has been properly updated on all available data. If this final update of the SurrogateLearner could not be performed successfully, a warning will be logged.
By specifying a ResultAssigner, one can alter how the final result is determined after optimization, e.g., simply based on the evaluations logged in the archive ResultAssignerArchive or based on the Surrogate via ResultAssignerSurrogate.
The bbotk::ArchiveAsync holds the following additional columns that are specific to AMBO algorithms:
acq_function$id (numeric(1))
The value of the acquisition function.
".already_evaluated" (logical(1))
Whether this point was already evaluated. Depends on the skip_already_evaluated parameter of the AcqOptimizer.
If the bbotk::ArchiveAsync does not contain any evaluations prior to optimization, an initial design is needed.
If the initial_design parameter is specified to be a data.table, this data will be used.
Otherwise, if it is NULL, an initial design of size design_size will be generated with the specified design_function sampling function.
See also the parameters below.
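A minimal sketch of supplying a custom initial design (assuming a search space with a single numeric parameter named x; construction only, no optimization is run):

library(bbotk)
library(data.table)
optimizer = opt("async_mbo", initial_design = data.table(x = c(-5, 0, 5)))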
initial_design (data.table::data.table())
Initial design of the optimization. If NULL, a design of size design_size is generated with the specified design_function. Default is NULL.
design_size (integer(1))
Size of the initial design if it is to be generated. Default is 100.
design_function (character(1))
Sampling function to generate the initial design. Can be random (paradox::generate_design_random), lhs (paradox::generate_design_lhs), or sobol (paradox::generate_design_sobol). Default is sobol.
n_workers (integer(1))
Number of parallel workers. If NULL, all rush workers specified via rush::rush_plan() are used. Default is NULL.
bbotk::Optimizer -> bbotk::OptimizerAsync -> OptimizerAsyncMbo
surrogate (Surrogate | NULL)
The surrogate.
acq_function (AcqFunction | NULL)
The acquisition function.
acq_optimizer (AcqOptimizer | NULL)
The acquisition function optimizer.
result_assigner (ResultAssigner | NULL)
The result assigner.
param_classes (character())
Supported parameter classes that the optimizer can optimize. Determined based on the surrogate and the acq_optimizer. This corresponds to the values given by a paradox::ParamSet's $class field.
properties (character())
Set of properties of the optimizer. Must be a subset of bbotk_reflections$optimizer_properties. MBO in principle is very flexible and by default we assume that the optimizer has all properties. When fully initialized, properties are determined based on the loop, e.g., the loop_function, and surrogate.
packages (character())
Set of required packages. A warning is signaled prior to optimization if at least one of the packages is not installed, but loaded (not attached) later on-demand via requireNamespace(). Required packages are determined based on the acq_function, surrogate and the acq_optimizer.
new()
Creates a new instance of this R6 class.
If surrogate is NULL and the acq_function$surrogate field is populated, this SurrogateLearner is used. Otherwise, default_surrogate(instance) is used.
If acq_function is NULL and the acq_optimizer$acq_function field is populated, this AcqFunction is used (and therefore its $surrogate if populated; see above). Otherwise default_acqfunction(instance) is used.
If acq_optimizer is NULL, default_acqoptimizer(instance) is used.
Even if already initialized, the surrogate$archive field will always be overwritten by the bbotk::ArchiveAsync of the current bbotk::OptimInstanceAsyncSingleCrit to be optimized.
For more information on default values for surrogate, acq_function, acq_optimizer and result_assigner, see ?mbo_defaults.
OptimizerAsyncMbo$new(
  id = "async_mbo",
  surrogate = NULL,
  acq_function = NULL,
  acq_optimizer = NULL,
  result_assigner = NULL,
  param_set = NULL,
  label = "Asynchronous Model Based Optimization",
  man = "mlr3mbo::OptimizerAsyncMbo"
)
id (character(1))
Identifier for the new instance.
surrogate (Surrogate | NULL)
The surrogate.
acq_function (AcqFunction | NULL)
The acquisition function.
acq_optimizer (AcqOptimizer | NULL)
The acquisition function optimizer.
result_assigner (ResultAssigner | NULL)
The result assigner.
param_set (paradox::ParamSet)
Set of control parameters.
label (character(1))
Label for this object. Can be used in tables, plot and text output instead of the ID.
man (character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object. The referenced help package can be opened via method $help().
print()
Print method.
OptimizerAsyncMbo$print()
Returns (character()).
reset()
Reset the optimizer.
Sets the following fields to NULL: surrogate, acq_function, acq_optimizer, result_assigner.
Resets parameter values design_size and design_function to their defaults.
OptimizerAsyncMbo$reset()
optimize()
Performs the optimization on an bbotk::OptimInstanceAsyncSingleCrit until termination. The single evaluations will be written into the bbotk::ArchiveAsync. The result will be written into the instance object.
OptimizerAsyncMbo$optimize(inst)
clone()
The objects of this class are cloneable with this method.
OptimizerAsyncMbo$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceAsyncSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) optimizer = opt("async_mbo", design_size = 4, n_workers = 2) optimizer$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceAsyncSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) optimizer = opt("async_mbo", design_size = 4, n_workers = 2) optimizer$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
OptimizerMbo class that implements Model Based Optimization (MBO).
The implementation follows a modular layout relying on a loop_function determining the MBO flavor to be used, e.g., bayesopt_ego for sequential single-objective Bayesian Optimization, a Surrogate, an AcqFunction, e.g., mlr_acqfunctions_ei for Expected Improvement, and an AcqOptimizer.
MBO algorithms are iterative optimization algorithms that make use of a continuously updated surrogate model built for the objective function. By optimizing a comparably cheap to evaluate acquisition function defined on the surrogate prediction, the next candidate is chosen for evaluation.
Detailed descriptions of different MBO flavors are provided in the documentation of the respective loop_function.
Termination is handled via a bbotk::Terminator part of the bbotk::OptimInstanceBatch to be optimized.
Note that in general the Surrogate is updated one final time on all available data after the optimization process has terminated.
However, in certain scenarios this is not always possible or meaningful, e.g., when using bayesopt_parego() for multi-objective optimization, which uses a surrogate that relies on a scalarization of the objectives.
It is therefore recommended to manually inspect the Surrogate after optimization if it is to be used, e.g., for visualization purposes, to make sure that it has been properly updated on all available data.
If this final update of the Surrogate could not be performed successfully, a warning will be logged.
By specifying a ResultAssigner, one can alter how the final result is determined after optimization, e.g., simply based on the evaluations logged in the archive ResultAssignerArchive or based on the Surrogate via ResultAssignerSurrogate.
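A minimal sketch of swapping the result assigner, here choosing the final result via the surrogate mean prediction instead of the raw archive (construction only, no optimization is run):

library(bbotk)
optimizer = opt("mbo", result_assigner = ras("surrogate"))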
The bbotk::ArchiveBatch holds the following additional columns that are specific to MBO algorithms:
acq_function$id (numeric(1))
The value of the acquisition function.
".already_evaluated" (logical(1))
Whether this point was already evaluated. Depends on the skip_already_evaluated parameter of the AcqOptimizer.
bbotk::Optimizer -> bbotk::OptimizerBatch -> OptimizerMbo
loop_function (loop_function | NULL)
Loop function determining the MBO flavor.
surrogate (Surrogate | NULL)
The surrogate.
acq_function (AcqFunction | NULL)
The acquisition function.
acq_optimizer (AcqOptimizer | NULL)
The acquisition function optimizer.
args (named list())
Further arguments passed to the loop_function. For example, random_interleave_iter.
result_assigner (ResultAssigner | NULL)
The result assigner.
param_classes (character())
Supported parameter classes that the optimizer can optimize. Determined based on the surrogate and the acq_optimizer. This corresponds to the values given by a paradox::ParamSet's $class field.
properties (character())
Set of properties of the optimizer. Must be a subset of bbotk_reflections$optimizer_properties. MBO in principle is very flexible and by default we assume that the optimizer has all properties. When fully initialized, properties are determined based on the loop, e.g., the loop_function, and surrogate.
packages (character())
Set of required packages. A warning is signaled prior to optimization if at least one of the packages is not installed, but loaded (not attached) later on-demand via requireNamespace(). Required packages are determined based on the acq_function, surrogate and the acq_optimizer.
new()
Creates a new instance of this R6 class.
If surrogate is NULL and the acq_function$surrogate field is populated, this Surrogate is used. Otherwise, default_surrogate(instance) is used.
If acq_function is NULL and the acq_optimizer$acq_function field is populated, this AcqFunction is used (and therefore its $surrogate if populated; see above). Otherwise default_acqfunction(instance) is used.
If acq_optimizer is NULL, default_acqoptimizer(instance) is used.
Even if already initialized, the surrogate$archive field will always be overwritten by the bbotk::ArchiveBatch of the current bbotk::OptimInstanceBatch to be optimized.
For more information on default values for loop_function, surrogate, acq_function, acq_optimizer and result_assigner, see ?mbo_defaults.
OptimizerMbo$new(
  loop_function = NULL,
  surrogate = NULL,
  acq_function = NULL,
  acq_optimizer = NULL,
  args = NULL,
  result_assigner = NULL
)
loop_function (loop_function | NULL)
Loop function determining the MBO flavor.
surrogate (Surrogate | NULL)
The surrogate.
acq_function (AcqFunction | NULL)
The acquisition function.
acq_optimizer (AcqOptimizer | NULL)
The acquisition function optimizer.
args (named list())
Further arguments passed to the loop_function. For example, random_interleave_iter.
result_assigner (ResultAssigner | NULL)
The result assigner.
print()
Print method.
OptimizerMbo$print()
Returns (character()).
reset()
Reset the optimizer.
Sets the following fields to NULL: loop_function, surrogate, acq_function, acq_optimizer, args, result_assigner.
OptimizerMbo$reset()
optimize()
Performs the optimization and writes optimization result into bbotk::OptimInstanceBatch. The optimization result is returned but the complete optimization path is stored in bbotk::ArchiveBatch of bbotk::OptimInstanceBatch.
OptimizerMbo$optimize(inst)
inst (bbotk::OptimInstanceBatch)
The instance to be optimized.
clone()
The objects of this class are cloneable with this method.
OptimizerMbo$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) # single-objective EGO fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_ego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) # multi-objective ParEGO fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) optimizer = opt("mbo", loop_function = bayesopt_parego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) # single-objective EGO fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) surrogate = default_surrogate(instance) acq_function = acqf("ei") acq_optimizer = acqo( optimizer = opt("random_search", batch_size = 100), terminator = trm("evals", n_evals = 100)) optimizer = opt("mbo", loop_function = bayesopt_ego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) # multi-objective ParEGO fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) optimizer = opt("mbo", loop_function = bayesopt_parego, surrogate = surrogate, acq_function = acq_function, acq_optimizer = acq_optimizer) optimizer$optimize(instance) }
A simple mlr3misc::Dictionary storing objects of class ResultAssigner.
Each result assigner has an associated help page, see mlr_result_assigners_[id].
For a more convenient way to retrieve and construct a result assigner, see ras().
R6::R6Class object inheriting from mlr3misc::Dictionary.
See mlr3misc::Dictionary.
Sugar function: ras()
Other Dictionary:
mlr_acqfunctions, mlr_loop_functions
Other Result Assigner:
ResultAssigner, mlr_result_assigners_archive, mlr_result_assigners_surrogate
library(data.table)
as.data.table(mlr_result_assigners)
ras("archive")
Result assigner that chooses the final point(s) based on all evaluations in the bbotk::Archive. This mimics the default behavior of any bbotk::Optimizer.
mlr3mbo::ResultAssigner -> ResultAssignerArchive
packages (character())
Set of required packages. A warning is signaled if at least one of the packages is not installed, but loaded (not attached) later on-demand via requireNamespace().
new()
Creates a new instance of this R6 class.
ResultAssignerArchive$new()
assign_result()
Assigns the result, i.e., the final point(s) to the instance.
ResultAssignerArchive$assign_result(instance)
instance
(bbotk::OptimInstanceBatchSingleCrit | bbotk::OptimInstanceBatchMultiCrit | bbotk::OptimInstanceAsyncSingleCrit | bbotk::OptimInstanceAsyncMultiCrit)
The bbotk::OptimInstance the final result should be assigned to.
clone()
The objects of this class are cloneable with this method.
ResultAssignerArchive$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Result Assigner:
ResultAssigner, mlr_result_assigners, mlr_result_assigners_surrogate
result_assigner = ras("archive")
result_assigner = ras("archive")
Result assigner that chooses the final point(s) based on a surrogate mean prediction of all evaluated points in the bbotk::Archive. This is especially useful in the case of noisy objective functions.
In the case of operating on an bbotk::OptimInstanceBatchMultiCrit or bbotk::OptimInstanceAsyncMultiCrit the SurrogateLearnerCollection must use as many learners as there are objective functions.
mlr3mbo::ResultAssigner -> ResultAssignerSurrogate
surrogate (Surrogate | NULL)
The surrogate.
packages (character())
Set of required packages. A warning is signaled if at least one of the packages is not installed, but loaded (not attached) later on-demand via requireNamespace().
new()
Creates a new instance of this R6 class.
ResultAssignerSurrogate$new(surrogate = NULL)
surrogate (Surrogate | NULL)
The surrogate that is used to predict the mean of all evaluated points.
assign_result()
Assigns the result, i.e., the final point(s) to the instance.
If $surrogate is NULL, default_surrogate(instance) is used and also assigned to $surrogate.
ResultAssignerSurrogate$assign_result(instance)
instance
(bbotk::OptimInstanceBatchSingleCrit | bbotk::OptimInstanceBatchMultiCrit | bbotk::OptimInstanceAsyncSingleCrit | bbotk::OptimInstanceAsyncMultiCrit)
The bbotk::OptimInstance the final result should be assigned to.
clone()
The objects of this class are cloneable with this method.
ResultAssignerSurrogate$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Result Assigner:
ResultAssigner, mlr_result_assigners, mlr_result_assigners_archive
result_assigner = ras("surrogate")
result_assigner = ras("surrogate")
TunerADBO class that implements Asynchronous Decentralized Bayesian Optimization (ADBO).
ADBO is a variant of Asynchronous Model Based Optimization (AMBO) that uses AcqFunctionStochasticCB with exponential lambda decay.
This is a minimal interface internally passing on to OptimizerAsyncMbo. For additional information and documentation see OptimizerAsyncMbo.
Currently, only single-objective optimization is supported and TunerADBO is considered an experimental feature; the API might be subject to change.
initial_design (data.table::data.table())
Initial design of the optimization. If NULL, a design of size design_size is generated with the specified design_function. Default is NULL.
design_size (integer(1))
Size of the initial design if it is to be generated. Default is 100.
design_function (character(1))
Sampling function to generate the initial design. Can be random (paradox::generate_design_random), lhs (paradox::generate_design_lhs), or sobol (paradox::generate_design_sobol). Default is sobol.
n_workers (integer(1))
Number of parallel workers. If NULL, all rush workers specified via rush::rush_plan() are used. Default is NULL.
mlr3tuning::Tuner -> mlr3tuning::TunerAsync -> mlr3tuning::TunerAsyncFromOptimizerAsync -> TunerADBO
surrogate (Surrogate | NULL)
The surrogate.
acq_function (AcqFunction | NULL)
The acquisition function.
acq_optimizer (AcqOptimizer | NULL)
The acquisition function optimizer.
result_assigner (ResultAssigner | NULL)
The result assigner.
param_classes (character())
Supported parameter classes that the optimizer can optimize. Determined based on the surrogate and the acq_optimizer. This corresponds to the values given by a paradox::ParamSet's $class field.
properties (character())
Set of properties of the optimizer. Must be a subset of bbotk_reflections$optimizer_properties. MBO in principle is very flexible and by default we assume that the optimizer has all properties. When fully initialized, properties are determined based on the loop, e.g., the loop_function, and surrogate.
packages (character())
Set of required packages. A warning is signaled prior to optimization if at least one of the packages is not installed, but loaded (not attached) later on-demand via requireNamespace(). Required packages are determined based on the acq_function, surrogate and the acq_optimizer.
new()
Creates a new instance of this R6 class.
TunerADBO$new()
print()
Print method.
TunerADBO$print()
Returns (character()).
reset()
Reset the tuner.
Sets the following fields to NULL: surrogate, acq_function, acq_optimizer, result_assigner.
Resets parameter values design_size and design_function to their defaults.
TunerADBO$reset()
clone()
The objects of this class are cloneable with this method.
TunerADBO$clone(deep = FALSE)
deep
Whether to make a deep clone.
Egelé, Romain, Guyon, Isabelle, Vishwanath, Venkatram, Balaprakash, Prasanna (2023). “Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization.” In 2023 IEEE 19th International Conference on e-Science (e-Science), 1–10.
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(mlr3) library(mlr3tuning) # single-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measure = msr("classif.acc") instance = TuningInstanceAsyncSingleCrit$new( task = task, learner = learner, resampling = resampling, measure = measure, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) tnr("adbo", design_size = 4, n_workers = 2)$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(mlr3) library(mlr3tuning) # single-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measure = msr("classif.acc") instance = TuningInstanceAsyncSingleCrit$new( task = task, learner = learner, resampling = resampling, measure = measure, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) tnr("adbo", design_size = 4, n_workers = 2)$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
TunerAsyncMbo
class that implements Asynchronous Model Based Optimization (AMBO).
This is a minimal interface internally passing on to OptimizerAsyncMbo.
For additional information and documentation see OptimizerAsyncMbo.
Currently, only single-objective optimization is supported, and TunerAsyncMbo
is considered an experimental feature whose API might be subject to change.
initial_design
data.table::data.table()
Initial design of the optimization.
If NULL
, a design of size design_size
is generated with the specified design_function
.
Default is NULL
.
design_size
integer(1)
Size of the initial design if it is to be generated.
Default is 100
.
design_function
character(1)
Sampling function to generate the initial design.
Can be "random" (paradox::generate_design_random), "lhs" (paradox::generate_design_lhs), or "sobol" (paradox::generate_design_sobol).
Default is sobol
.
n_workers
integer(1)
Number of parallel workers.
If NULL
, all rush workers specified via rush::rush_plan()
are used.
Default is NULL
.
mlr3tuning::Tuner
-> mlr3tuning::TunerAsync
-> mlr3tuning::TunerAsyncFromOptimizerAsync
-> TunerAsyncMbo
surrogate
(Surrogate | NULL
)
The surrogate.
acq_function
(AcqFunction | NULL
)
The acquisition function.
acq_optimizer
(AcqOptimizer | NULL
)
The acquisition function optimizer.
result_assigner
(ResultAssigner | NULL
)
The result assigner.
param_classes
(character()
)
Supported parameter classes that the optimizer can optimize.
Determined based on the surrogate
and the acq_optimizer
.
This corresponds to the values given by a paradox::ParamSet's
$class
field.
properties
(character()
)
Set of properties of the optimizer.
Must be a subset of bbotk_reflections$optimizer_properties
.
In principle, MBO is very flexible, and by default the optimizer is assumed to have all properties.
When fully initialized, properties are determined based on the loop, e.g., the loop_function
, and surrogate
.
packages
(character()
)
Set of required packages.
A warning is signaled prior to optimization if at least one of the packages is not installed; packages are loaded (but not attached) on demand via requireNamespace().
Required packages are determined based on the acq_function
, surrogate
and the acq_optimizer
.
new()
Creates a new instance of this R6 class.
For more information on default values for surrogate
, acq_function
, acq_optimizer
, and result_assigner
, see ?mbo_defaults
.
Note that all the parameters below are simply passed to the OptimizerAsyncMbo and the respective fields are simply (settable) active bindings to the fields of the OptimizerAsyncMbo.
TunerAsyncMbo$new(surrogate = NULL, acq_function = NULL, acq_optimizer = NULL, param_set = NULL)
surrogate
(Surrogate | NULL
)
The surrogate.
acq_function
(AcqFunction | NULL
)
The acquisition function.
acq_optimizer
(AcqOptimizer | NULL
)
The acquisition function optimizer.
param_set
(paradox::ParamSet)
Set of control parameters.
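Because these fields are settable active bindings that pass through to the underlying OptimizerAsyncMbo, components can also be exchanged after construction; a minimal sketch (assuming mlr3learners and DiceKriging are installed so that default_gp() is available):

library(mlr3tuning)
library(mlr3mbo)
tuner = tnr("async_mbo")
# replace the default surrogate after construction via the active binding
tuner$surrogate = srlrn(default_gp())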
print()
Print method.
TunerAsyncMbo$print()
(character()
).
reset()
Reset the tuner.
Sets the following fields to NULL
:
surrogate
, acq_function
, acq_optimizer
, result_assigner
Resets parameter values design_size
and design_function
to their defaults.
TunerAsyncMbo$reset()
clone()
The objects of this class are cloneable with this method.
TunerAsyncMbo$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(mlr3) library(mlr3tuning) # single-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measure = msr("classif.acc") instance = TuningInstanceAsyncSingleCrit$new( task = task, learner = learner, resampling = resampling, measure = measure, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) tnr("async_mbo", design_size = 4, n_workers = 2)$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
if (requireNamespace("rush") & requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { if (redis_available()) { library(mlr3) library(mlr3tuning) # single-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measure = msr("classif.acc") instance = TuningInstanceAsyncSingleCrit$new( task = task, learner = learner, resampling = resampling, measure = measure, terminator = trm("evals", n_evals = 10)) rush::rush_plan(n_workers=2) tnr("async_mbo", design_size = 4, n_workers = 2)$optimize(instance) } else { message("Redis server is not available.\nPlease set up Redis prior to running the example.") } }
TunerMbo
class that implements Model Based Optimization (MBO).
This is a minimal interface internally passing on to OptimizerMbo.
For additional information and documentation see OptimizerMbo.
mlr3tuning::Tuner
-> mlr3tuning::TunerBatch
-> mlr3tuning::TunerBatchFromOptimizerBatch
-> TunerMbo
loop_function
(loop_function | NULL
)
Loop function determining the MBO flavor.
surrogate
(Surrogate | NULL
)
The surrogate.
acq_function
(AcqFunction | NULL
)
The acquisition function.
acq_optimizer
(AcqOptimizer | NULL
)
The acquisition function optimizer.
args
(named list()
)
Further arguments passed to the loop_function
.
For example, random_interleave_iter
.
result_assigner
(ResultAssigner | NULL
)
The result assigner.
param_classes
(character()
)
Supported parameter classes that the optimizer can optimize.
Determined based on the surrogate
and the acq_optimizer
.
This corresponds to the values given by a paradox::ParamSet's
$class
field.
properties
(character()
)
Set of properties of the optimizer.
Must be a subset of bbotk_reflections$optimizer_properties
.
In principle, MBO is very flexible, and by default the optimizer is assumed to have all properties.
When fully initialized, properties are determined based on the loop, e.g., the loop_function
, and surrogate
.
packages
(character()
)
Set of required packages.
A warning is signaled prior to optimization if at least one of the packages is not installed; packages are loaded (but not attached) on demand via requireNamespace().
Required packages are determined based on the acq_function
, surrogate
and the acq_optimizer
.
new()
Creates a new instance of this R6 class.
For more information on default values for loop_function
, surrogate
, acq_function
, acq_optimizer
, and result_assigner
, see ?mbo_defaults
.
Note that all the parameters below are simply passed to the OptimizerMbo and the respective fields are simply (settable) active bindings to the fields of the OptimizerMbo.
TunerMbo$new(loop_function = NULL, surrogate = NULL, acq_function = NULL, acq_optimizer = NULL, args = NULL, result_assigner = NULL)
loop_function
(loop_function | NULL
)
Loop function determining the MBO flavor.
surrogate
(Surrogate | NULL
)
The surrogate.
acq_function
(AcqFunction | NULL
)
The acquisition function.
acq_optimizer
(AcqOptimizer | NULL
)
The acquisition function optimizer.
args
(named list()
)
Further arguments passed to the loop_function
.
For example, random_interleave_iter
.
result_assigner
(ResultAssigner | NULL
)
The result assigner.
print()
Print method.
TunerMbo$print()
(character()
).
reset()
Reset the tuner.
Sets the following fields to NULL
:
loop_function
, surrogate
, acq_function
, acq_optimizer
, args
, result_assigner
TunerMbo$reset()
clone()
The objects of this class are cloneable with this method.
TunerMbo$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(mlr3) library(mlr3tuning) # single-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measure = msr("classif.acc") instance = TuningInstanceBatchSingleCrit$new( task = task, learner = learner, resampling = resampling, measure = measure, terminator = trm("evals", n_evals = 5)) tnr("mbo")$optimize(instance) # multi-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measures = msrs(c("classif.acc", "selected_features")) instance = TuningInstanceBatchMultiCrit$new( task = task, learner = learner, resampling = resampling, measures = measures, terminator = trm("evals", n_evals = 5), store_models = TRUE) # required due to selected features tnr("mbo")$optimize(instance) }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(mlr3) library(mlr3tuning) # single-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measure = msr("classif.acc") instance = TuningInstanceBatchSingleCrit$new( task = task, learner = learner, resampling = resampling, measure = measure, terminator = trm("evals", n_evals = 5)) tnr("mbo")$optimize(instance) # multi-objective task = tsk("wine") learner = lrn("classif.rpart", cp = to_tune(lower = 1e-4, upper = 1, logscale = TRUE)) resampling = rsmp("cv", folds = 3) measures = msrs(c("classif.acc", "selected_features")) instance = TuningInstanceBatchMultiCrit$new( task = task, learner = learner, resampling = resampling, measures = measures, terminator = trm("evals", n_evals = 5), store_models = TRUE) # required due to selected features tnr("mbo")$optimize(instance) }
This function complements mlr_result_assigners with functions in the spirit
of mlr_sugar
from mlr3.
ras(.key, ...)
.key
(character(1))
...
(named list())
ras("archive")
ras("archive")
Attempts to establish a connection to a Redis server using the redux package
and sends a PING
command. Returns TRUE
if the server is available and
responds appropriately, FALSE
otherwise.
redis_available()
(logical(1)
)
if (redis_available()) {
  # proceed with code that requires Redis
  message("Redis server is available.")
} else {
  message("Redis server is not available.")
}
Abstract result assigner class.
A result assigner is responsible for assigning the final optimization result to the bbotk::OptimInstance. Normally, it is only used within an OptimizerMbo.
label
(character(1)
)
Label for this object.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
packages
(character()
)
Set of required packages.
A warning is signaled if at least one of the packages is not installed; packages are loaded (but not attached) on demand via requireNamespace().
new()
Creates a new instance of this R6 class.
ResultAssigner$new(label = NA_character_, man = NA_character_)
label
(character(1)
)
Label for this object.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
assign_result()
Assigns the result, i.e., the final point(s) to the instance.
ResultAssigner$assign_result(instance)
instance
(bbotk::OptimInstanceBatchSingleCrit | bbotk::OptimInstanceBatchMultiCrit | bbotk::OptimInstanceAsyncSingleCrit | bbotk::OptimInstanceAsyncMultiCrit)
The bbotk::OptimInstance the final result should be assigned to.
format()
Helper for print outputs.
ResultAssigner$format()
(character(1)
).
print()
Print method.
ResultAssigner$print()
(character()
).
clone()
The objects of this class are cloneable with this method.
ResultAssigner$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Result Assigner:
mlr_result_assigners
,
mlr_result_assigners_archive
,
mlr_result_assigners_surrogate
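To illustrate the interface described above, the following is a purely hypothetical sketch of a custom result assigner that assigns the most recently evaluated point of a single-criterion instance; it is not part of mlr3mbo, and the archive accessors and assign_result() call follow the bbotk conventions under the stated assumptions:

library(R6)
library(mlr3mbo)

ResultAssignerLast = R6Class("ResultAssignerLast",
  inherit = ResultAssigner,
  public = list(
    initialize = function() {
      super$initialize(label = "Last Evaluated", man = NA_character_)
    },
    assign_result = function(instance) {
      # naive choice: take the row of the archive that was evaluated last
      data = instance$archive$data
      last = data[nrow(data), ]
      xdt = last[, instance$archive$cols_x, with = FALSE]
      y = unlist(last[, instance$archive$cols_y, with = FALSE])
      instance$assign_result(xdt = xdt, y = y)
    }
  )
)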
This function allows constructing a SurrogateLearner or SurrogateLearnerCollection in the spirit
of mlr_sugar
from mlr3.
If the archive
references more than one target variable or cols_y
contains more than one
target variable but only a single learner
is specified, this learner is replicated as many
times as needed to build the SurrogateLearnerCollection.
srlrn(learner, archive = NULL, cols_x = NULL, cols_y = NULL, ...)
learner
(mlr3::LearnerRegr | List of mlr3::LearnerRegr)
archive
(bbotk::Archive | NULL)
cols_x
(character() | NULL)
cols_y
(character() | NULL)
...
(named list())
SurrogateLearner | SurrogateLearnerCollection
library(mlr3)
srlrn(lrn("regr.featureless"), catch_errors = FALSE)
srlrn(list(lrn("regr.featureless"), lrn("regr.featureless")))
Abstract surrogate model class.
A surrogate model is used to model the unknown objective function(s) based on all points evaluated so far.
learner
(learner)
Arbitrary learner object depending on the subclass.
print_id
(character
)
Id used when printing.
archive
(bbotk::Archive | NULL
)
bbotk::Archive of the bbotk::OptimInstance.
archive_is_async
(logical(1))
Whether the bbotk::Archive is an asynchronous one.
n_learner
(integer(1)
)
Returns the number of surrogate models.
cols_x
(character()
| NULL
)
Column ids of variables that should be used as features.
By default, automatically inferred based on the archive.
cols_y
(character()
| NULL
)
Column ids of variables that should be used as targets.
By default, automatically inferred based on the archive.
insample_perf
(numeric()
)
Surrogate model's current insample performance.
param_set
(paradox::ParamSet)
Set of hyperparameters.
assert_insample_perf
(numeric()
)
Asserts whether the current insample performance meets the performance threshold.
packages
(character()
)
Set of required packages.
A warning is signaled if at least one of the packages is not installed; packages are loaded (but not attached) on demand via requireNamespace().
feature_types
(character()
)
Stores the feature types the surrogate can handle, e.g. "logical"
, "numeric"
, or "factor"
.
A complete list of candidate feature types, grouped by task type, is stored in mlr_reflections$task_feature_types
.
properties
(character()
)
Stores a set of properties/capabilities the surrogate has.
A complete list of candidate properties, grouped by task type, is stored in mlr_reflections$learner_properties
.
predict_type
(character(1)
)
Retrieves the currently active predict type, e.g. "response"
.
new()
Creates a new instance of this R6 class.
Surrogate$new(learner, archive, cols_x, cols_y, param_set)
learner
(learner)
Arbitrary learner object depending on the subclass.
archive
(bbotk::Archive | NULL
)
bbotk::Archive of the bbotk::OptimInstance.
cols_x
(character()
| NULL
)
Column ids of variables that should be used as features.
By default, automatically inferred based on the archive.
cols_y
(character()
| NULL
)
Column ids of variables that should be used as targets.
By default, automatically inferred based on the archive.
param_set
(paradox::ParamSet)
Parameter space description depending on the subclass.
update()
Train learner with new data.
Subclasses must implement private$.update() and private$.update_async().
Surrogate$update()
NULL
.
reset()
Reset the surrogate model.
Subclasses must implement private$.reset()
.
Surrogate$reset()
NULL
predict()
Predict mean response and standard error. Must be implemented by subclasses.
Surrogate$predict(xdt)
xdt
(data.table::data.table()
)
New data. One row per observation.
Arbitrary prediction object.
format()
Helper for print outputs.
Surrogate$format()
(character(1)
).
print()
Print method.
Surrogate$print()
(character()
).
clone()
The objects of this class are cloneable with this method.
Surrogate$clone(deep = FALSE)
deep
Whether to make a deep clone.
Surrogate model containing a single mlr3::LearnerRegr.
assert_insample_perf
logical(1)
Should the insample performance of the mlr3::LearnerRegr be asserted after updating the surrogate?
If the assertion fails (i.e., the insample performance based on the perf_measure
does not meet the
perf_threshold
), an error is thrown.
Default is FALSE
.
perf_measure
mlr3::MeasureRegr
Performance measure used to assert the insample performance of the mlr3::LearnerRegr.
Only relevant if assert_insample_perf = TRUE
.
Default is mlr3::mlr_measures_regr.rsq.
perf_threshold
numeric(1)
Threshold the insample performance of the mlr3::LearnerRegr should be asserted against.
Only relevant if assert_insample_perf = TRUE
.
Default is 0
.
catch_errors
logical(1)
Should errors during updating the surrogate be caught and propagated to the loop_function
which can then handle
the failed acquisition function optimization (as a result of the failed surrogate) appropriately by, e.g., proposing a randomly sampled point for evaluation?
Default is TRUE
.
impute_method
character(1)
Method to impute missing values in the case of updating on an asynchronous bbotk::ArchiveAsync with pending evaluations.
Can be "mean"
to use mean imputation or "random"
to sample values uniformly at random between the empirical minimum and maximum.
Default is "random"
.
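The settings above are hyperparameters of the surrogate; a short sketch of adjusting them via the surrogate's paradox::ParamSet (parameter ids as documented above, values illustrative):

library(mlr3)
library(mlr3mbo)
surrogate = srlrn(lrn("regr.featureless"))
surrogate$param_set$values$assert_insample_perf = TRUE
surrogate$param_set$values$perf_threshold = 0.5  # default perf_measure is R^2
surrogate$param_set$values$impute_method = "mean"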
mlr3mbo::Surrogate
-> SurrogateLearner
print_id
(character
)
Id used when printing.
n_learner
(integer(1)
)
Returns the number of surrogate models.
assert_insample_perf
(numeric()
)
Asserts whether the current insample performance meets the performance threshold.
packages
(character()
)
Set of required packages.
A warning is signaled if at least one of the packages is not installed; packages are loaded (but not attached) on demand via requireNamespace().
feature_types
(character()
)
Stores the feature types the surrogate can handle, e.g. "logical"
, "numeric"
, or "factor"
.
A complete list of candidate feature types, grouped by task type, is stored in mlr_reflections$task_feature_types
.
properties
(character()
)
Stores a set of properties/capabilities the surrogate has.
A complete list of candidate properties, grouped by task type, is stored in mlr_reflections$learner_properties
.
predict_type
(character(1)
)
Retrieves the currently active predict type, e.g. "response"
.
new()
Creates a new instance of this R6 class.
SurrogateLearner$new(learner, archive = NULL, cols_x = NULL, col_y = NULL)
learner
(mlr3::LearnerRegr)
archive
(bbotk::Archive | NULL
)
bbotk::Archive of the bbotk::OptimInstance.
cols_x
(character()
| NULL
)
Column ids of variables that should be used as features.
By default, automatically inferred based on the archive.
col_y
(character(1)
| NULL
)
Column id of variable that should be used as a target.
By default, automatically inferred based on the archive.
predict()
Predict mean response and standard error.
SurrogateLearner$predict(xdt)
xdt
(data.table::data.table()
)
New data. One row per observation.
data.table::data.table()
with the columns mean
and se
.
clone()
The objects of this class are cloneable with this method.
SurrogateLearner$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) xdt = generate_design_random(instance$search_space, n = 4)$data instance$eval_batch(xdt) learner = default_gp() surrogate = srlrn(learner, archive = instance$archive) surrogate$update() surrogate$learner$model }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y = xs$x ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchSingleCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) xdt = generate_design_random(instance$search_space, n = 4)$data instance$eval_batch(xdt) learner = default_gp() surrogate = srlrn(learner, archive = instance$archive) surrogate$update() surrogate$learner$model }
Surrogate model containing multiple mlr3::LearnerRegr.
The mlr3::LearnerRegr are fit on the target variables as indicated via cols_y
.
Note that if the same learner is to be used for more than one target variable, it must be supplied as deep clones, as sketched below.
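A minimal sketch of replicating a single base learner with deep clones so that the individual surrogate models do not share state:

library(mlr3)
library(mlr3mbo)
base = lrn("regr.featureless")
# one deep clone per additional target variable
learners = list(base, base$clone(deep = TRUE))
surrogate = srlrn(learners)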
assert_insample_perf
logical(1)
Should the insample performance of the mlr3::LearnerRegr be asserted after updating the surrogate?
If the assertion fails (i.e., the insample performance based on the perf_measure
does not meet the
perf_threshold
), an error is thrown.
Default is FALSE
.
perf_measure
List of mlr3::MeasureRegr
Performance measures used to assert the insample performance of the mlr3::LearnerRegr.
Only relevant if assert_insample_perf = TRUE
.
Default is mlr3::mlr_measures_regr.rsq for each learner.
perf_threshold
List of numeric(1)
Thresholds the insample performance of the mlr3::LearnerRegr should be asserted against.
Only relevant if assert_insample_perf = TRUE
.
Default is 0
for each learner.
catch_errors
logical(1)
Should errors during updating the surrogate be caught and propagated to the loop_function
which can then handle
the failed acquisition function optimization (as a result of the failed surrogate) appropriately by, e.g., proposing a randomly sampled point for evaluation?
Default is TRUE
.
impute_method
character(1)
Method to impute missing values in the case of updating on an asynchronous bbotk::ArchiveAsync with pending evaluations.
Can be "mean"
to use mean imputation or "random"
to sample values uniformly at random between the empirical minimum and maximum.
Default is "random"
.
mlr3mbo::Surrogate
-> SurrogateLearnerCollection
print_id
(character
)
Id used when printing.
n_learner
(integer(1)
)
Returns the number of surrogate models.
assert_insample_perf
(numeric()
)
Asserts whether the current insample performance meets the performance threshold.
packages
(character()
)
Set of required packages.
A warning is signaled if at least one of the packages is not installed; packages are loaded (but not attached) on demand via requireNamespace().
feature_types
(character()
)
Stores the feature types the surrogate can handle, e.g. "logical"
, "numeric"
, or "factor"
.
A complete list of candidate feature types, grouped by task type, is stored in mlr_reflections$task_feature_types
.
properties
(character()
)
Stores a set of properties/capabilities the surrogate has.
A complete list of candidate properties, grouped by task type, is stored in mlr_reflections$learner_properties
.
predict_type
(character(1)
)
Retrieves the currently active predict type, e.g. "response"
.
new()
Creates a new instance of this R6 class.
SurrogateLearnerCollection$new(learners, archive = NULL, cols_x = NULL, cols_y = NULL)
learners
(list of mlr3::LearnerRegr).
archive
(bbotk::Archive | NULL
)
bbotk::Archive of the bbotk::OptimInstance.
cols_x
(character()
| NULL
)
Column ids of variables that should be used as features.
By default, automatically inferred based on the archive.
cols_y
(character()
| NULL
)
Column ids of variables that should be used as targets.
By default, automatically inferred based on the archive.
predict()
Predict mean response and standard error.
Returns a named list of data.tables.
Each contains the mean response and standard error for one col_y
.
SurrogateLearnerCollection$predict(xdt)
xdt
(data.table::data.table()
)
New data. One row per observation.
list of data.table::data.table()
s with the columns mean
and se
.
clone()
The objects of this class are cloneable with this method.
SurrogateLearnerCollection$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud") & requireNamespace("ranger")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) xdt = generate_design_random(instance$search_space, n = 4)$data instance$eval_batch(xdt) learner1 = default_gp() learner2 = default_rf() surrogate = srlrn(list(learner1, learner2), archive = instance$archive) surrogate$update() surrogate$learner surrogate$learner[["y1"]]$model surrogate$learner[["y2"]]$model }
if (requireNamespace("mlr3learners") & requireNamespace("DiceKriging") & requireNamespace("rgenoud") & requireNamespace("ranger")) { library(bbotk) library(paradox) library(mlr3learners) fun = function(xs) { list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2) } domain = ps(x = p_dbl(lower = -10, upper = 10)) codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize")) objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain) instance = OptimInstanceBatchMultiCrit$new( objective = objective, terminator = trm("evals", n_evals = 5)) xdt = generate_design_random(instance$search_space, n = 4)$data instance$eval_batch(xdt) learner1 = default_gp() learner2 = default_rf() surrogate = srlrn(list(learner1, learner2), archive = instance$archive) surrogate$update() surrogate$learner surrogate$learner[["y1"]]$model surrogate$learner[["y2"]]$model }