| Title | Hyperband for 'mlr3' |
|---|---|
| Description | Successive Halving (Jamieson and Talwalkar (2016) <doi:10.48550/arXiv.1502.07943>) and Hyperband (Li et al. 2018 <doi:10.48550/arXiv.1603.06560>) optimization algorithms for the mlr3 ecosystem. The implementation in mlr3hyperband features improved scheduling and parallelizes the evaluation of configurations. The package includes tuners for hyperparameter optimization in mlr3tuning and optimizers for black-box optimization in bbotk. |
| Authors | Marc Becker [aut, cre], Sebastian Gruber [aut], Jakob Richter [aut], Julia Moosbauer [aut], Bernd Bischl [aut] |
| Maintainer | Marc Becker <[email protected]> |
| License | LGPL-3 |
| Version | 0.6.0 |
| Built | 2024-10-28 05:05:32 UTC |
| Source | https://github.com/mlr-org/mlr3hyperband |
Useful links:

- Report bugs at https://github.com/mlr-org/mlr3hyperband/issues
Calculates the total budget used by Hyperband.

```r
hyperband_budget(r_min, r_max, eta, integer_budget = FALSE)
```

Arguments:

- `r_min` (numeric(1)): Lower bound of the budget parameter.
- `r_max` (numeric(1)): Upper bound of the budget parameter.
- `eta` (numeric(1)): Fraction parameter of the successive halving algorithm.
- `integer_budget` (logical(1)): Determines whether the budget is an integer.

Value: integer(1).
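For example, with the settings used in the schedule tables further below:

```r
library(mlr3hyperband)

# total budget consumed by a full Hyperband run with eta = 2, r_min = 1 and
# r_max = 8; each bracket consumes approximately the same budget
hyperband_budget(r_min = 1, r_max = 8, eta = 2)
```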
Calculates how many different configurations are sampled.

```r
hyperband_n_configs(r_min, r_max, eta)
```

Arguments:

- `r_min` (numeric(1)): Lower bound of the budget parameter.
- `r_max` (numeric(1)): Upper bound of the budget parameter.
- `eta` (numeric(1)): Fraction parameter of the successive halving algorithm.

Value: integer(1).
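For example:

```r
library(mlr3hyperband)

# number of sampled configurations across all brackets
# for eta = 2, r_min = 1 and r_max = 8
hyperband_n_configs(r_min = 1, r_max = 8, eta = 2)
```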
Returns the Hyperband schedule.

```r
hyperband_schedule(r_min, r_max, eta, integer_budget = FALSE)
```

Arguments:

- `r_min` (numeric(1)): Lower bound of the budget parameter.
- `r_max` (numeric(1)): Upper bound of the budget parameter.
- `eta` (numeric(1)): Fraction parameter of the successive halving algorithm.
- `integer_budget` (logical(1)): Determines whether the budget is an integer.
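A quick usage sketch; the returned object is the bracket/stage layout, which for these settings matches the Hyperband table shown further below:

```r
library(mlr3hyperband)

# schedule for eta = 2, r_min = 1 and r_max = 8: one row per bracket/stage
# with the number of configurations and the budget of that stage
hyperband_schedule(r_min = 1, r_max = 8, eta = 2)
```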
Optimizer using the Hyperband (HB) algorithm.

HB runs the Successive Halving Algorithm (SHA) with different numbers of starting configurations. The algorithm is initialized with the same parameters as Successive Halving, but without `n`. Each run of Successive Halving is called a bracket and starts with a different budget `r_0`. A smaller starting budget means that more configurations can be tried out. The most explorative bracket allocates the minimum budget `r_min`. The next bracket increases the starting budget by a factor of `eta`, and the starting budget keeps increasing from bracket to bracket until the last bracket `s = 0` essentially performs a random search with the full budget `r_max`. The number of brackets, `s_max + 1`, is calculated as `s_max = log(r_max / r_min) / log(eta)`, i.e. the logarithm of `r_max / r_min` to base `eta`. Under the condition that `r_0` increases by a factor of `eta` with each bracket, `r_min` sometimes has to be adjusted slightly in order not to use more than `r_max` resources in the last bracket. The number of configurations in the base stages is calculated so that each bracket uses approximately the same amount of budget. The following table shows a full run of HB with `eta = 2`, `r_min = 1` and `r_max = 8`.
| i | s = 3: n_i | r_i | s = 2: n_i | r_i | s = 1: n_i | r_i | s = 0: n_i | r_i |
|---|---|---|---|---|---|---|---|---|
| 0 | 8 | 1 | 6 | 2 | 4 | 4 | 4 | 8 |
| 1 | 4 | 2 | 3 | 4 | 2 | 8 |   |   |
| 2 | 2 | 4 | 1 | 8 |   |   |   |   |
| 3 | 1 | 8 |   |   |   |   |   |   |
`s` is the bracket number, `i` is the stage number, `n_i` is the number of configurations and `r_i` is the budget allocated to a single configuration.
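The bracket count follows directly from the formula above; a small base R sketch for these settings:

```r
r_min = 1; r_max = 8; eta = 2

# number of brackets: s_max + 1
s_max = floor(log(r_max / r_min) / log(eta))  # log base eta of r_max / r_min
s_max + 1  # 4 brackets for these settings

# hyperband_schedule(r_min, r_max, eta) reproduces the full table above
```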
The budget hyperparameter must be tagged with `"budget"` in the search space. The minimum budget (`r_min`), which is allocated in the base stage of the most explorative bracket, is set by the lower bound of the budget parameter. The upper bound defines the maximum budget (`r_max`), which is allocated to the candidates in the last stages.
The gallery features a collection of case studies and demos about optimization:

- Tune the hyperparameters of XGBoost with Hyperband.
- Use data subsampling and Hyperband to optimize a support vector machine.
This bbotk::Optimizer can be instantiated via the dictionary bbotk::mlr_optimizers or with the associated sugar function bbotk::opt():

```r
mlr_optimizers$get("hyperband")
opt("hyperband")
```
Parameters:

- `eta` (numeric(1)): With every stage, the budget is increased by a factor of `eta` and only the best `1 / eta` points are promoted to the next stage. Non-integer values are supported, but `eta` must not be less than or equal to 1.
- `sampler` (paradox::Sampler): Object defining how the samples of the parameter space should be drawn in the base stage of each bracket. The default is uniform sampling.
- `repetitions` (integer(1)): If 1 (default), optimization is stopped once all brackets are evaluated. Otherwise, optimization is stopped after `repetitions` runs of HB. The bbotk::Terminator might stop the optimization before all repetitions are executed.
The bbotk::Archive holds the following additional columns that are specific to HB:

- `bracket` (integer(1)): The bracket index. Counts down to 0.
- `stage` (integer(1)): The stage index within each bracket. Starts counting at 0.
- `repetition` (integer(1)): The repetition index. Starts counting at 1.
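After a run, these columns can be inspected alongside the regular archive columns, e.g. (assuming `instance` is the optimized instance from the example at the end of this page):

```r
library(data.table)

# bracket, stage and repetition are added by Hyperband
as.data.table(instance$archive)[, .(bracket, stage, repetition)]
```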
Hyperband supports custom paradox::Sampler objects for the initial configurations in each bracket. A custom sampler may look like this (the full example is given in the examples section):
```r
# - beta distribution with alpha = 2 and beta = 5
# - categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))
```
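A fuller, self-contained sketch of this pattern; the search space, parameter names and distributions are illustrative, and `$params` is assumed to return the list of parameter objects that the snippet above indexes with `params[[2]]` and `params[[3]]` (this depends on the paradox version):

```r
library(bbotk)
library(paradox)
library(mlr3hyperband)

# hypothetical search space: a budget parameter plus two tuned parameters
search_space = ps(
  fidelity = p_dbl(1e-2, 1, tags = "budget"),
  x1 = p_dbl(0, 1),
  x2 = p_fct(levels = c("a", "b", "c"))
)
params = search_space$params

# independent joint sampler over the two non-budget parameters
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))

# hand the custom sampler to Hyperband
optimizer = opt("hyperband", sampler = sampler)
```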
`$optimize()` supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in `progressr::with_progress()` to enable them. We recommend using the progress package as backend; enable it with `progressr::handlers("progress")`.
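For example, assuming `optimizer` and `instance` are set up as in the example at the end of this page:

```r
library(progressr)

# use the progress package as backend and enable progress bars
handlers("progress")
with_progress(optimizer$optimize(instance))
```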
Hyperband uses a logger (as implemented in lgr) from package bbotk. Use `lgr::get_logger("bbotk")` to access and control the logger.
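For example, to reduce the verbosity of the optimization:

```r
# raise the threshold so only warnings and errors are printed
lgr::get_logger("bbotk")$set_threshold("warn")
```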
Super classes: bbotk::Optimizer -> bbotk::OptimizerBatch -> OptimizerBatchHyperband
Method `new()`: Creates a new instance of this R6 class.

```r
OptimizerBatchHyperband$new()
```

Method `clone()`: The objects of this class are cloneable with this method.

```r
OptimizerBatchHyperband$clone(deep = FALSE)
```

Argument `deep`: whether to make a deep clone.
Li L, Jamieson K, DeSalvo G, Rostamizadeh A, Talwalkar A (2018). “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization.” Journal of Machine Learning Research, 18(185), 1-52. https://jmlr.org/papers/v18/16-558.html.
```r
library(bbotk)
library(data.table)

# set search space
search_space = domain = ps(
  x1 = p_dbl(-5, 10),
  x2 = p_dbl(0, 15),
  fidelity = p_dbl(1e-2, 1, tags = "budget")
)

# Branin function with fidelity, see `bbotk::branin()`
fun = function(xs) branin_wu(xs[["x1"]], xs[["x2"]], xs[["fidelity"]])

# create objective
objective = ObjectiveRFun$new(
  fun = fun,
  domain = domain,
  codomain = ps(y = p_dbl(tags = "minimize"))
)

# initialize instance and optimizer
instance = OptimInstanceSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 50)
)

optimizer = opt("hyperband")

# optimize branin function
optimizer$optimize(instance)

# best scoring evaluation
instance$result

# all evaluations
as.data.table(instance$archive)
```
Optimizer using the Successive Halving Algorithm (SHA).

SHA is initialized with the number of starting configurations `n`, the halving rate `eta` (which controls the proportion of configurations discarded in each stage), and the minimum budget `r_min` and maximum budget `r_max` of a single evaluation. The algorithm starts by sampling `n` random configurations and allocating the minimum budget `r_min` to them. The configurations are evaluated, and all but the best `1 / eta` fraction of them are discarded. The remaining configurations are promoted to the next stage and evaluated on a budget that is `eta` times larger. This repeats until the maximum budget `r_max` is reached. The following table shows the stage layout for `eta = 2`, `r_min = 1` and `r_max = 8`.
| i | n_i | r_i |
|---|-----|-----|
| 0 | 8   | 1   |
| 1 | 4   | 2   |
| 2 | 2   | 4   |
| 3 | 1   | 8   |
`i` is the stage number, `n_i` is the number of configurations and `r_i` is the budget allocated to a single configuration.

The number of stages is calculated so that each stage consumes approximately the same budget. This sometimes results in the minimum budget being adjusted slightly by the algorithm.
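The layout follows directly from the definitions above; a small base R sketch that recomputes the table (not a package API):

```r
r_min = 1; r_max = 8; eta = 2
n = 8  # configurations in the base stage

# number of stages such that the final stage runs at r_max
num_stages = floor(log(r_max / r_min) / log(eta)) + 1
stages = seq_len(num_stages) - 1

# n_i and r_i per stage; matches the table above
data.frame(i = stages, n_i = floor(n / eta^stages), r_i = r_min * eta^stages)
```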
The gallery features a collection of case studies and demos about optimization:

- Tune the hyperparameters of XGBoost with Hyperband (Hyperband can be easily swapped with SHA).
- Use data subsampling and Hyperband to optimize a support vector machine.
This bbotk::Optimizer can be instantiated via the dictionary bbotk::mlr_optimizers or with the associated sugar function bbotk::opt():

```r
mlr_optimizers$get("successive_halving")
opt("successive_halving")
```
Parameters:

- `n` (integer(1)): Number of configurations in the base stage.
- `eta` (numeric(1)): With every stage, the budget is increased by a factor of `eta` and only the best `1 / eta` configurations are promoted to the next stage. Non-integer values are supported, but `eta` must not be less than or equal to 1.
- `sampler` (paradox::Sampler): Object defining how the samples of the parameter space should be drawn. The default is uniform sampling.
- `repetitions` (integer(1)): If 1 (default), optimization is stopped once all stages are evaluated. Otherwise, optimization is stopped after `repetitions` runs of SHA. The bbotk::Terminator might stop the optimization before all repetitions are executed.
- `adjust_minimum_budget` (logical(1)): If TRUE, the minimum budget is increased so that the last stage uses the maximum budget defined in the search space.
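A construction sketch that sets these control parameters explicitly (the values are illustrative):

```r
library(bbotk)
library(mlr3hyperband)

optimizer = opt("successive_halving",
  n = 16,                       # configurations sampled in the base stage
  eta = 2,                      # promote the best 1 / eta per stage
  repetitions = 1,              # a single SHA run
  adjust_minimum_budget = TRUE  # raise r_min so the last stage uses r_max
)
```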
The bbotk::Archive holds the following additional columns that are specific to SHA:

- `stage` (integer(1)): The stage index. Starts counting at 0.
- `repetition` (integer(1)): The repetition index. Starts counting at 1.
Successive Halving supports custom paradox::Sampler objects for the initial configurations in the base stage. A custom sampler may look like this (the full example is given in the examples section):
```r
# - beta distribution with alpha = 2 and beta = 5
# - categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))
```
`$optimize()` supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in `progressr::with_progress()` to enable them. We recommend using the progress package as backend; enable it with `progressr::handlers("progress")`.
Successive Halving uses a logger (as implemented in lgr) from package bbotk. Use `lgr::get_logger("bbotk")` to access and control the logger.
Super classes: bbotk::Optimizer -> bbotk::OptimizerBatch -> OptimizerBatchSuccessiveHalving
Method `new()`: Creates a new instance of this R6 class.

```r
OptimizerBatchSuccessiveHalving$new()
```

Method `clone()`: The objects of this class are cloneable with this method.

```r
OptimizerBatchSuccessiveHalving$clone(deep = FALSE)
```

Argument `deep`: whether to make a deep clone.
Jamieson K, Talwalkar A (2016). “Non-stochastic Best Arm Identification and Hyperparameter Optimization.” In Gretton A, Robert CC (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, 240-248. http://proceedings.mlr.press/v51/jamieson16.html.
```r
library(bbotk)
library(data.table)

# set search space
search_space = domain = ps(
  x1 = p_dbl(-5, 10),
  x2 = p_dbl(0, 15),
  fidelity = p_dbl(1e-2, 1, tags = "budget")
)

# Branin function with fidelity, see `bbotk::branin()`
fun = function(xs) branin_wu(xs[["x1"]], xs[["x2"]], xs[["fidelity"]])

# create objective
objective = ObjectiveRFun$new(
  fun = fun,
  domain = domain,
  codomain = ps(y = p_dbl(tags = "minimize"))
)

# initialize instance and optimizer
instance = OptimInstanceSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 50)
)

optimizer = opt("successive_halving")

# optimize branin function
optimizer$optimize(instance)

# best scoring evaluation
instance$result

# all evaluations
as.data.table(instance$archive)
```
Tuner using the Hyperband (HB) algorithm.

HB runs the Successive Halving Algorithm (SHA) with different numbers of starting configurations. The algorithm is initialized with the same parameters as Successive Halving, but without `n`. Each run of Successive Halving is called a bracket and starts with a different budget `r_0`. A smaller starting budget means that more configurations can be tried out. The most explorative bracket allocates the minimum budget `r_min`. The next bracket increases the starting budget by a factor of `eta`, and the starting budget keeps increasing from bracket to bracket until the last bracket `s = 0` essentially performs a random search with the full budget `r_max`. The number of brackets, `s_max + 1`, is calculated as `s_max = log(r_max / r_min) / log(eta)`, i.e. the logarithm of `r_max / r_min` to base `eta`. Under the condition that `r_0` increases by a factor of `eta` with each bracket, `r_min` sometimes has to be adjusted slightly in order not to use more than `r_max` resources in the last bracket. The number of configurations in the base stages is calculated so that each bracket uses approximately the same amount of budget. The following table shows a full run of HB with `eta = 2`, `r_min = 1` and `r_max = 8`.
| i | s = 3: n_i | r_i | s = 2: n_i | r_i | s = 1: n_i | r_i | s = 0: n_i | r_i |
|---|---|---|---|---|---|---|---|---|
| 0 | 8 | 1 | 6 | 2 | 4 | 4 | 4 | 8 |
| 1 | 4 | 2 | 3 | 4 | 2 | 8 |   |   |
| 2 | 2 | 4 | 1 | 8 |   |   |   |   |
| 3 | 1 | 8 |   |   |   |   |   |   |
`s` is the bracket number, `i` is the stage number, `n_i` is the number of configurations and `r_i` is the budget allocated to a single configuration.
The budget hyperparameter must be tagged with `"budget"` in the search space. The minimum budget (`r_min`), which is allocated in the base stage of the most explorative bracket, is set by the lower bound of the budget parameter. The upper bound defines the maximum budget (`r_max`), which is allocated to the candidates in the last stages.
This mlr3tuning::Tuner can be instantiated via the dictionary mlr3tuning::mlr_tuners or with the associated sugar function mlr3tuning::tnr():

```r
TunerBatchHyperband$new()
mlr_tuners$get("hyperband")
tnr("hyperband")
```
If the learner lacks a natural budget parameter, mlr3pipelines::PipeOpSubsample can be applied to use the subsampling rate as the budget parameter. The resulting mlr3pipelines::GraphLearner is fitted on small proportions of the mlr3::Task in the first stage, and on the complete task in the last stage, as sketched below.
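A minimal sketch of this setup, assuming the usual mlr3pipelines parameter identifiers on the GraphLearner (`subsample.frac` for the sampling fraction of `po("subsample")` and `classif.rpart.cp` for the learner parameter); adapt the names to your pipeline:

```r
library(mlr3tuning)
library(mlr3pipelines)
library(mlr3hyperband)

# wrap a learner without a natural budget parameter in a subsampling pipeline
graph_learner = as_learner(po("subsample") %>>% lrn("classif.rpart"))

# use the subsampling fraction as the budget parameter
search_space = ps(
  classif.rpart.cp = p_dbl(1e-4, 1e-1, logscale = TRUE),
  subsample.frac = p_dbl(0.1, 1, tags = "budget")
)

# Hyperband fits on small fractions first and on the full task in the last stage
instance = tune(
  tnr("hyperband"),
  task = tsk("pima"),
  learner = graph_learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  search_space = search_space
)
```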
Hyperband supports custom paradox::Sampler objects for the initial configurations in each bracket. A custom sampler may look like this (the full example is given in the examples section):
```r
# - beta distribution with alpha = 2 and beta = 5
# - categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))
```
`$optimize()` supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in `progressr::with_progress()` to enable them. We recommend using the progress package as backend; enable it with `progressr::handlers("progress")`.
This Hyperband implementation evaluates hyperparameter configurations of equal budget across brackets in one batch. For example, all configurations in stage 1 of bracket 3 and all configurations in stage 0 of bracket 2 are evaluated in one batch. To select a parallel backend, use the `plan()` function of the future package.
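For example, to evaluate the configurations of a batch on four local worker processes:

```r
library(future)

# all configurations of equal budget are evaluated in parallel
plan("multisession", workers = 4)
```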
Hyperband uses a logger (as implemented in lgr) from package bbotk. Use `lgr::get_logger("bbotk")` to access and control the logger.
The gallery features a collection of case studies and demos about optimization:

- Tune the hyperparameters of XGBoost with Hyperband.
- Use data subsampling and Hyperband to optimize a support vector machine.
Parameters:

- `eta` (numeric(1)): With every stage, the budget is increased by a factor of `eta` and only the best `1 / eta` points are promoted to the next stage. Non-integer values are supported, but `eta` must not be less than or equal to 1.
- `sampler` (paradox::Sampler): Object defining how the samples of the parameter space should be drawn in the base stage of each bracket. The default is uniform sampling.
- `repetitions` (integer(1)): If 1 (default), optimization is stopped once all brackets are evaluated. Otherwise, optimization is stopped after `repetitions` runs of HB. The bbotk::Terminator might stop the optimization before all repetitions are executed.
The bbotk::Archive holds the following additional columns that are specific to HB:

- `bracket` (integer(1)): The bracket index. Counts down to 0.
- `stage` (integer(1)): The stage index within each bracket. Starts counting at 0.
- `repetition` (integer(1)): The repetition index. Starts counting at 1.
Super classes: mlr3tuning::Tuner -> mlr3tuning::TunerBatch -> mlr3tuning::TunerBatchFromOptimizerBatch -> TunerBatchHyperband
Method `new()`: Creates a new instance of this R6 class.

```r
TunerBatchHyperband$new()
```

Method `clone()`: The objects of this class are cloneable with this method.

```r
TunerBatchHyperband$clone(deep = FALSE)
```

Argument `deep`: whether to make a deep clone.
Li L, Jamieson K, DeSalvo G, Rostamizadeh A, Talwalkar A (2018). “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization.” Journal of Machine Learning Research, 18(185), 1-52. https://jmlr.org/papers/v18/16-558.html.
```r
if (requireNamespace("xgboost")) {
  library(mlr3learners)

  # define hyperparameter and budget parameter
  search_space = ps(
    nrounds = p_int(lower = 1, upper = 16, tags = "budget"),
    eta = p_dbl(lower = 0, upper = 1),
    booster = p_fct(levels = c("gbtree", "gblinear", "dart"))
  )

  # hyperparameter tuning on the Pima Indians Diabetes data set
  instance = tune(
    tnr("hyperband"),
    task = tsk("pima"),
    learner = lrn("classif.xgboost", eval_metric = "logloss"),
    resampling = rsmp("cv", folds = 3),
    measures = msr("classif.ce"),
    search_space = search_space,
    term_evals = 100
  )

  # best performing hyperparameter configuration
  instance$result
}
```
Tuner using the Successive Halving Algorithm (SHA).

SHA is initialized with the number of starting configurations `n`, the halving rate `eta` (which controls the proportion of configurations discarded in each stage), and the minimum budget `r_min` and maximum budget `r_max` of a single evaluation. The algorithm starts by sampling `n` random configurations and allocating the minimum budget `r_min` to them. The configurations are evaluated, and all but the best `1 / eta` fraction of them are discarded. The remaining configurations are promoted to the next stage and evaluated on a budget that is `eta` times larger. This repeats until the maximum budget `r_max` is reached. The following table shows the stage layout for `eta = 2`, `r_min = 1` and `r_max = 8`.
| i | n_i | r_i |
|---|-----|-----|
| 0 | 8   | 1   |
| 1 | 4   | 2   |
| 2 | 2   | 4   |
| 3 | 1   | 8   |
`i` is the stage number, `n_i` is the number of configurations and `r_i` is the budget allocated to a single configuration.

The number of stages is calculated so that each stage consumes approximately the same budget. This sometimes results in the minimum budget being adjusted slightly by the algorithm.
This mlr3tuning::Tuner can be instantiated via the dictionary mlr3tuning::mlr_tuners or with the associated sugar function mlr3tuning::tnr():

```r
TunerBatchSuccessiveHalving$new()
mlr_tuners$get("successive_halving")
tnr("successive_halving")
```
If the learner lacks a natural budget parameter, mlr3pipelines::PipeOpSubsample can be applied to use the subsampling rate as the budget parameter. The resulting mlr3pipelines::GraphLearner is fitted on small proportions of the mlr3::Task in the first stage, and on the complete task in the last stage (see the subsampling sketch in the Hyperband tuner section above).
Successive Halving supports custom paradox::Sampler objects for the initial configurations in the base stage. A custom sampler may look like this (the full example is given in the examples section):
```r
# - beta distribution with alpha = 2 and beta = 5
# - categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))
```
`$optimize()` supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in `progressr::with_progress()` to enable them. We recommend using the progress package as backend; enable it with `progressr::handlers("progress")`.
The hyperparameter configurations of one stage are evaluated in parallel with the future package. To select a parallel backend, use its `plan()` function.
Successive Halving uses a logger (as implemented in lgr) from package bbotk. Use `lgr::get_logger("bbotk")` to access and control the logger.
The gallery features a collection of case studies and demos about optimization:

- Tune the hyperparameters of XGBoost with Hyperband (Hyperband can be easily swapped with SHA).
- Use data subsampling and Hyperband to optimize a support vector machine.
Parameters:

- `n` (integer(1)): Number of configurations in the base stage.
- `eta` (numeric(1)): With every stage, the budget is increased by a factor of `eta` and only the best `1 / eta` configurations are promoted to the next stage. Non-integer values are supported, but `eta` must not be less than or equal to 1.
- `sampler` (paradox::Sampler): Object defining how the samples of the parameter space should be drawn. The default is uniform sampling.
- `repetitions` (integer(1)): If 1 (default), optimization is stopped once all stages are evaluated. Otherwise, optimization is stopped after `repetitions` runs of SHA. The bbotk::Terminator might stop the optimization before all repetitions are executed.
- `adjust_minimum_budget` (logical(1)): If TRUE, the minimum budget is increased so that the last stage uses the maximum budget defined in the search space.
The bbotk::Archive holds the following additional columns that are specific to SHA:

- `stage` (integer(1)): The stage index. Starts counting at 0.
- `repetition` (integer(1)): The repetition index. Starts counting at 1.
Super classes: mlr3tuning::Tuner -> mlr3tuning::TunerBatch -> mlr3tuning::TunerBatchFromOptimizerBatch -> TunerBatchSuccessiveHalving
Method `new()`: Creates a new instance of this R6 class.

```r
TunerBatchSuccessiveHalving$new()
```

Method `clone()`: The objects of this class are cloneable with this method.

```r
TunerBatchSuccessiveHalving$clone(deep = FALSE)
```

Argument `deep`: whether to make a deep clone.
Jamieson K, Talwalkar A (2016). “Non-stochastic Best Arm Identification and Hyperparameter Optimization.” In Gretton A, Robert CC (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, 240-248. http://proceedings.mlr.press/v51/jamieson16.html.
```r
if (requireNamespace("xgboost")) {
  library(mlr3learners)

  # define hyperparameter and budget parameter
  search_space = ps(
    nrounds = p_int(lower = 1, upper = 16, tags = "budget"),
    eta = p_dbl(lower = 0, upper = 1),
    booster = p_fct(levels = c("gbtree", "gblinear", "dart"))
  )

  # hyperparameter tuning on the Pima Indians Diabetes data set
  instance = tune(
    tnr("successive_halving"),
    task = tsk("pima"),
    learner = lrn("classif.xgboost", eval_metric = "logloss"),
    resampling = rsmp("cv", folds = 3),
    measures = msr("classif.ce"),
    search_space = search_space,
    term_evals = 100
  )

  # best performing hyperparameter configuration
  instance$result
}
```