Title: Deep Learning with 'mlr3'
Description: Deep learning library that extends the mlr3 framework by building upon the 'torch' package. It allows you to conveniently build, train, and evaluate deep learning models without having to worry about low-level details. Custom architectures can be created using the graph language defined in 'mlr3pipelines'.
Authors: Sebastian Fischer [cre, aut], Bernd Bischl [ctb], Lukas Burk [ctb], Martin Binder [aut], Florian Pfisterer [ctb]
Maintainer: Sebastian Fischer <[email protected]>
License: LGPL (>= 3)
Version: 0.1.2
Built: 2024-11-21 05:36:42 UTC
Source: https://github.com/mlr-org/mlr3torch
mlr3torch.cache:
Whether to cache the downloaded data (TRUE) or not (FALSE, default).
This can also be set to a specific folder on the file system to be used as the cache directory.
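For instance, the caching behavior can be configured via options() before any data is downloaded (a minimal sketch; the folder path below is a hypothetical example):

```r
# enable caching in the default cache directory
options(mlr3torch.cache = TRUE)

# or cache into a specific (hypothetical) folder on disk
options(mlr3torch.cache = "~/.cache/mlr3torch-data")
```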
Maintainer: Sebastian Fischer [email protected] (ORCID)
Authors:
Martin Binder [email protected]
Other contributors:
Bernd Bischl [email protected] (ORCID) [contributor]
Lukas Burk [email protected] (ORCID) [contributor]
Florian Pfisterer [email protected] (ORCID) [contributor]
Useful links:
Report bugs at https://github.com/mlr-org/mlr3torch/issues
Converts the input to a DataDescriptor.

as_data_descriptor(x, dataset_shapes, ...)

x: (any)
dataset_shapes: (named list() of (integer() or NULL))
...: (any)

ds = dataset("example",
  initialize = function() self$iris = iris[, -5],
  .getitem = function(i) list(x = torch_tensor(as.numeric(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
as_data_descriptor(ds, list(x = c(NA, 4L)))

# if the dataset has a .getbatch method, the shapes are inferred
ds2 = dataset("example",
  initialize = function() self$iris = iris[, -5],
  .getbatch = function(i) list(x = torch_tensor(as.matrix(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
as_data_descriptor(ds2)
Converts an object to a lazy_tensor.

as_lazy_tensor(x, ...)

## S3 method for class 'dataset'
as_lazy_tensor(x, dataset_shapes = NULL, ids = NULL, ...)

x: (any)
...: (any)
dataset_shapes: (named list() of (integer() or NULL))
ids: (integer() or NULL)

iris_ds = dataset("iris",
  initialize = function() {
    self$iris = iris[, -5]
  },
  .getbatch = function(i) {
    list(x = torch_tensor(as.matrix(self$iris[i, ])))
  },
  .length = function() nrow(self$iris)
)()
# no need to specify the dataset shapes as they can be inferred from the .getbatch method
# only first 5 observations
as_lazy_tensor(iris_ds, ids = 1:5)
# all observations
head(as_lazy_tensor(iris_ds))

iris_ds2 = dataset("iris",
  initialize = function() self$iris = iris[, -5],
  .getitem = function(i) list(x = torch_tensor(as.numeric(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
# if .getitem is implemented we cannot infer the shapes as they might vary,
# so we have to annotate them explicitly
as_lazy_tensor(iris_ds2, dataset_shapes = list(x = c(NA, 4L)))[1:5]

# Convert a matrix
lt = as_lazy_tensor(matrix(rnorm(100), nrow = 20))
materialize(lt[1:5], rbind = TRUE)
Converts an object to a TorchCallback.

as_torch_callback(x, clone = FALSE, ...)

x: (any)
clone: (logical(1))
...: (any)
Other Callback: TorchCallback, as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
Converts an object to a list of TorchCallback objects.

as_torch_callbacks(x, clone, ...)

x: (any)
clone: (logical(1))
...: (any)

Returns a list() of TorchCallbacks.
Other Callback: TorchCallback, as_torch_callback(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
Converts an object to a TorchLoss.

as_torch_loss(x, clone = FALSE, ...)

x: (any)
clone: (logical(1))
...: (any)
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
Converts an object to a TorchOptimizer.

as_torch_optimizer(x, clone = FALSE, ...)

x: (any)
clone: (logical(1))
...: (any)
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
Asserts whether something is a lazy tensor.

assert_lazy_tensor(x)

x: (any)
First tries cuda, then cpu.

auto_device(device = NULL)

device: (character(1) or NULL)
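The fallback logic can be sketched in plain R. Here, pick_device is a hypothetical stand-in written for illustration; the exported auto_device() returns a torch device object rather than a device name:

```r
# Hypothetical sketch of auto_device()'s fallback logic.
# The real function returns a torch device; this sketch returns its name.
pick_device = function(device = NULL) {
  if (!is.null(device)) {
    return(device)  # an explicitly requested device wins
  }
  cuda_ok = requireNamespace("torch", quietly = TRUE) &&
    torch::cuda_is_available()
  if (cuda_ok) "cuda" else "cpu"
}

pick_device("cpu")  # an explicit choice is respected
pick_device()       # "cuda" if available, otherwise "cpu"
```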
Converts a data frame of categorical data into a long tensor by converting the data to integers. No input checks are performed.

batchgetter_categ(data, device, ...)

data: (data.table())
device: (character(1))
...: (any)
Converts a data frame of numeric data into a float tensor by calling as.matrix(). No input checks are performed.

batchgetter_num(data, device, ...)

data: (data.table())
device: (character(1))
...: (any)
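The core of this conversion is the as.matrix() call; a pure-R sketch of that first step (omitting the torch tensor creation, which would require the torch package) looks like this:

```r
# Sketch of the numeric batchgetter's first step: a data frame of
# numeric columns is flattened into a plain numeric matrix, which
# torch_tensor() would then turn into a float tensor on the device.
batch_df = iris[1:4, -5]        # four rows, numeric columns only
batch_mat = as.matrix(batch_df) # 4 x 4 numeric matrix

dim(batch_mat)
```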
Creates an R6ClassGenerator inheriting from CallbackSet.
Additionally performs checks, such as that the stages are not accidentally misspelled.
To create a TorchCallback, use torch_callback().

In order for the resulting class to be cloneable, the private method $deep_clone() must be provided.

callback_set(
  classname,
  on_begin = NULL,
  on_end = NULL,
  on_exit = NULL,
  on_epoch_begin = NULL,
  on_before_valid = NULL,
  on_epoch_end = NULL,
  on_batch_begin = NULL,
  on_batch_end = NULL,
  on_after_backward = NULL,
  on_batch_valid_begin = NULL,
  on_batch_valid_end = NULL,
  on_valid_end = NULL,
  state_dict = NULL,
  load_state_dict = NULL,
  initialize = NULL,
  public = NULL,
  private = NULL,
  active = NULL,
  parent_env = parent.frame(),
  inherit = CallbackSet,
  lock_objects = FALSE
)

classname: (character(1))
on_begin, on_end, on_epoch_begin, on_before_valid, on_epoch_end, on_batch_begin, on_batch_end, on_after_backward, on_batch_valid_begin, on_batch_valid_end, on_valid_end, on_exit: (function)
state_dict: (function)
load_state_dict: (function)
initialize: (function)
public, private, active: (list())
parent_env: (environment())
inherit: (R6ClassGenerator)
lock_objects: (logical(1))
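As a sketch (assuming mlr3torch is attached; the class name and behavior below are made up for illustration), a minimal callback set that logs the epoch counter could look like:

```r
library(mlr3torch)

# Hypothetical example: a callback set that logs the epoch counter.
# self$ctx is the ContextTorch assigned during training.
CallbackSetEpochLogger = callback_set("CallbackSetEpochLogger",
  on_epoch_end = function() {
    cat(sprintf("finished epoch %i\n", self$ctx$epoch))
  }
)
```

To use it in a torch learner, the generator would still need to be wrapped in a TorchCallback, which torch_callback() does in one step.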
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
A data descriptor is a rather internal data structure used in the lazy_tensor data type.
In essence, it is an annotated torch::dataset and a preprocessing graph (consisting mostly of PipeOpModule operators).
The additional metadata (e.g. pointer, shapes) allows preprocessing of lazy_tensors in an mlr3pipelines::Graph just like any other (non-lazy) data type.
The preprocessing is applied when materialize() is called on the lazy_tensor.
To create a data descriptor, you can also use the as_data_descriptor() function.

While it would be more natural to define this as an S3 class, we opted for an R6 class to avoid the usual trouble of serializing S3 objects: if each row contained a DataDescriptor as an S3 class, the object would be copied when serializing.
dataset (torch::dataset): The dataset.
graph (Graph): The preprocessing graph.
dataset_shapes (named list() of (integer() or NULL)): The shapes of the output.
input_map (character()): The input map from the dataset to the preprocessing graph.
pointer (character(2)): The output pointer.
pointer_shape (integer() | NULL): The shape of the output indicated by pointer.
dataset_hash (character(1)): Hash for the wrapped dataset.
hash (character(1)): Hash for the data descriptor.
graph_input (character()): The input channels of the preprocessing graph (cached to save time).
pointer_shape_predict (integer() or NULL): Internal use only.
Method new(): Creates a new instance of this R6 class.

DataDescriptor$new(
  dataset,
  dataset_shapes = NULL,
  graph = NULL,
  input_map = NULL,
  pointer = NULL,
  pointer_shape = NULL,
  pointer_shape_predict = NULL,
  clone_graph = TRUE
)

dataset (torch::dataset): The torch dataset. It should return a named list() of torch_tensor objects.
dataset_shapes (named list() of (integer() or NULL)): The shapes of the output. Names are the elements of the list returned by the dataset. If the shape is not NULL (unknown, e.g. for images of different sizes), the first dimension must be NA to indicate the batch dimension.
graph (Graph): The preprocessing graph. If left NULL, no preprocessing is applied to the data, and input_map, pointer, pointer_shape, and pointer_shape_predict are inferred in case the dataset returns only one element.
input_map (character()): Character vector that must have the same length as the input of the graph. Specifies how the data from the dataset is fed into the preprocessing graph.
pointer (character(2) | NULL): Points to an output channel within graph: element 1 is the PipeOp's id and element 2 is that PipeOp's output channel.
pointer_shape (integer() | NULL): Shape of the output indicated by pointer.
pointer_shape_predict (integer() or NULL): Internal use only. Used in a Graph to anticipate possible mismatches between train and predict shapes.
clone_graph (logical(1)): Whether to clone the preprocessing graph.

Method print(): Prints the object.

DataDescriptor$print(...)

... (any): Unused.

Method clone(): The objects of this class are cloneable with this method.

DataDescriptor$clone(deep = FALSE)

deep: Whether to make a deep clone.
ModelDescriptor, lazy_tensor
# Create a dataset
ds = dataset(
  initialize = function() self$x = torch_randn(10, 3, 3),
  .getitem = function(i) list(x = self$x[i, ]),
  .length = function() nrow(self$x)
)()
dd = DataDescriptor$new(ds, list(x = c(NA, 3, 3)))
dd

# is the same as using the converter:
as_data_descriptor(ds, list(x = c(NA, 3, 3)))
Checks whether an object is a lazy tensor.

is_lazy_tensor(x)

x: (any)
Create a lazy tensor.

lazy_tensor(data_descriptor = NULL, ids = NULL)

data_descriptor: (DataDescriptor or NULL)
ids: (integer() or NULL)

ds = dataset("example",
  initialize = function() self$iris = iris[, -5],
  .getitem = function(i) list(x = torch_tensor(as.numeric(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
dd = as_data_descriptor(ds, list(x = c(NA, 4L)))
lt = as_lazy_tensor(dd)
This will materialize a lazy_tensor() or a data.frame() / list() containing, among other things, lazy_tensor() columns.
That is, the data described in the underlying DataDescriptors is loaded for the indices in the lazy_tensor(), preprocessed, and then moved onto the specified device.
Because not all elements in a lazy tensor must have the same shape, a list of tensors is returned by default.
If all elements have the same shape, these tensors can also be rbinded into a single tensor (parameter rbind).

materialize(x, device = "cpu", rbind = FALSE, ...)

## S3 method for class 'list'
materialize(x, device = "cpu", rbind = FALSE, cache = "auto", ...)
x: (any)
device: (character(1))
rbind: (logical(1))
...: (any)
cache: (character(1) or environment())
Materializing a lazy tensor consists of:
1. Loading the data from the internal dataset of the DataDescriptor.
2. Processing these batches in the preprocessing Graphs.
3. Returning the result of the PipeOp pointed to by the DataDescriptor (pointer).

With multiple lazy_tensor columns we can benefit from caching because:
a) Output(s) from the dataset might be input to multiple graphs.
b) Different lazy tensors might be outputs from the same graph.

For this reason it is possible to provide a cache environment.
The hash key for a) is the hash of the indices and the dataset.
The hash key for b) is the hash of the indices, dataset, and preprocessing graph.

Returns a list() of lazy_tensors or a lazy_tensor.
lt1 = as_lazy_tensor(torch_randn(10, 3))
materialize(lt1, rbind = TRUE)
materialize(lt1, rbind = FALSE)

lt2 = as_lazy_tensor(torch_randn(10, 4))
d = data.table::data.table(lt1 = lt1, lt2 = lt2)
materialize(d, rbind = TRUE)
materialize(d, rbind = FALSE)
This lazy data backend wraps a constructor that lazily creates another backend, e.g. by downloading (and caching) some data from the internet.
This backend should be used when some metadata of the backend is known in advance and should be accessible before downloading the actual data.
When the backend is first constructed, it is verified that the provided metadata was correct; otherwise an informative error message is thrown.
After the construction of the lazily constructed backend, calls like $data(), $missings(), $distinct(), or $hash() are redirected to it.

Information that is available before the backend is constructed is:
nrow - The number of rows (set as the length of the rownames).
ncol - The number of columns (provided via the id column of col_info).
colnames - The column names.
rownames - The row names.
col_info - The column information, which can be obtained via mlr3::col_info().

Beware that accessing the backend's hash also constructs the backend.

Note that while in most cases the data contains lazy_tensor columns, this is not necessary and the naming of this class has nothing to do with the lazy_tensor data type.

Important:
When the constructor generates factor() variables, it is important that the ordering of the levels in data corresponds to the ordering of the levels in the col_info argument.

mlr3::DataBackend -> DataBackendLazy
backend (DataBackend): The wrapped backend that is lazily constructed when first accessed.
nrow (integer(1)): Number of rows (observations).
ncol (integer(1)): Number of columns (variables), including the primary key column.
rownames (integer()): Returns a vector of all distinct row identifiers, i.e. the contents of the primary key column.
colnames (character()): Returns a vector of all column names, including the primary key column.
is_constructed (logical(1)): Whether the backend has already been constructed.
Method new(): Creates a new instance of this R6 class.

DataBackendLazy$new(constructor, rownames, col_info, primary_key)

constructor (function): A function with argument backend (the lazy backend), whose return value must be the actual backend. This function is called the first time the field $backend is accessed.
rownames (integer()): The row names. Must be a permutation of the rownames of the lazily constructed backend.
col_info (data.table::data.table()): A data.table with columns id, type and levels containing the column id, type and levels. Note that the levels must be provided in the correct order.
primary_key (character(1)): Name of the primary key column.
Method data(): Returns a slice of the data in the specified format.
The rows must be addressed as a vector of primary key values; columns must be referred to via column names.
Queries for rows with no matching row id and queries for columns with no matching column name are silently ignored.
Rows are guaranteed to be returned in the same order as rows; columns may be returned in an arbitrary order.
Duplicated row ids result in duplicated rows; duplicated column names lead to an exception.
Accessing the data triggers the construction of the backend.

DataBackendLazy$data(rows, cols)

rows (integer()): Row indices.
cols (character()): Column names.
Method head(): Retrieves the first n rows. This triggers the construction of the backend.

DataBackendLazy$head(n = 6L)

n (integer(1)): Number of rows.

Returns a data.table::data.table() of the first n rows.
Method distinct(): Returns a named list of vectors of distinct values for each column specified.
If na_rm is TRUE, missing values are removed from the returned vectors of distinct values.
Non-existing rows and columns are silently ignored.
This triggers the construction of the backend.

DataBackendLazy$distinct(rows, cols, na_rm = TRUE)

rows (integer()): Row indices.
cols (character()): Column names.
na_rm (logical(1)): Whether to remove NAs or not.

Returns a named list() of distinct values.
Method missings(): Returns the number of missing values per column in the specified slice of data. Non-existing rows and columns are silently ignored.
This triggers the construction of the backend.

DataBackendLazy$missings(rows, cols)

rows (integer()): Row indices.
cols (character()): Column names.

Returns the total of missing values per column (named numeric()).

Method print(): Printer.

DataBackendLazy$print()
# We first define a backend constructor
constructor = function(backend) {
  cat("Data is constructed!\n")
  DataBackendDataTable$new(
    data.table(x = rnorm(10), y = rnorm(10), row_id = 1:10),
    primary_key = "row_id"
  )
}

# to wrap this backend constructor in a lazy backend,
# we need to provide the correct metadata for it
column_info = data.table(
  id = c("x", "y", "row_id"),
  type = c("numeric", "numeric", "integer"),
  levels = list(NULL, NULL, NULL)
)

backend_lazy = DataBackendLazy$new(
  constructor = constructor,
  rownames = 1:10,
  col_info = column_info,
  primary_key = "row_id"
)

# Note that the constructor is not called for the calls below
# as they can be read from the metadata
backend_lazy$nrow
backend_lazy$rownames
backend_lazy$ncol
backend_lazy$colnames
col_info(backend_lazy)

# Only now the backend is constructed
backend_lazy$data(1, "x")
# Is the same as:
backend_lazy$backend$data(1, "x")
Base class from which callbacks should inherit (see section Inheriting).
A callback set is a collection of functions that are executed at different stages of the training loop.
They can be used to gain more control over the training process of a neural network without having to write everything from scratch.

When used in a torch learner, the CallbackSet is wrapped in a TorchCallback.
The latter's parameter set represents the arguments of the CallbackSet's $initialize() method.
For each available stage (see section Stages), a public method $on_<stage>() can be defined.
The evaluation context (a ContextTorch) can be accessed via self$ctx, which contains the current state of the training loop.
This context is assigned at the beginning of the training loop and removed afterwards.
Different stages of a callback can communicate with each other by assigning values to self.
State:
To be able to store information in the $model slot of a LearnerTorch, callbacks support a state API.
You can overload the $state_dict() public method to define what will be stored in learner$model$callbacks$<id> after training finishes.
This then also requires implementing a $load_state_dict(state_dict) method that defines how to load a previously saved callback state into a different callback.
Note that the $state_dict() should not include the parameter values that were used to initialize the callback.

For creating custom callbacks, the function torch_callback() is recommended, which creates a CallbackSet and then wraps it in a TorchCallback.
To create a CallbackSet, the convenience function callback_set() can be used.
These functions perform checks such as that the stages are not accidentally misspelled.
begin :: Run before the training loop begins.
epoch_begin :: Run at the beginning of each epoch.
batch_begin :: Run before the forward call.
after_backward :: Run after the backward call.
batch_end :: Run after the optimizer step.
batch_valid_begin :: Run before the forward call in the validation loop.
batch_valid_end :: Run after the forward call in the validation loop.
valid_end :: Run at the end of validation.
epoch_end :: Run at the end of each epoch.
end :: Run after the last epoch.
exit :: Run at last, using on.exit().
If training is to be stopped, it is possible to set the field $terminate of ContextTorch.
At the end of every epoch this field is checked, and if it is TRUE, training stops.
This can, for example, be used to implement custom early stopping.
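A minimal sketch of this mechanism (assuming mlr3torch is attached; the callback id and the fixed epoch budget are made up for illustration):

```r
library(mlr3torch)

# Hypothetical sketch: stop training once a fixed epoch budget is reached,
# by setting the terminate flag that is checked at the end of every epoch.
cb_stop_early = torch_callback("stop_at_five",
  on_epoch_end = function() {
    if (self$ctx$epoch >= 5) {
      self$ctx$terminate = TRUE
    }
  }
)
```

A real early-stopping callback would compare validation scores across epochs instead of counting epochs.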
ctx (ContextTorch or NULL): The evaluation context for the callback. This field should always be NULL except during the $train() call of the torch learner.
stages (character()): The active stages of this callback set.
Method print(): Prints the object.

CallbackSet$print(...)

... (any): Currently unused.

Method state_dict(): Returns information that is kept in the LearnerTorch's state after training.
This information should be loadable into the callback using $load_state_dict() to be able to continue training.
This returns NULL by default.

CallbackSet$state_dict()

Method load_state_dict(): Loads the state dict into the callback to continue training.

CallbackSet$load_state_dict(state_dict)

state_dict (any): The state dict as retrieved via $state_dict().

Method clone(): The objects of this class are cloneable with this method.

CallbackSet$clone(deep = FALSE)

deep: Whether to make a deep clone.
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
Saves the optimizer and network states during training. The final network and optimizer are always stored.
Saving the learner itself in the callback with a trained model is impossible, as the model slot is set after the last callback step is executed.
mlr3torch::CallbackSet -> CallbackSetCheckpoint
Method new(): Creates a new instance of this R6 class.

CallbackSetCheckpoint$new(path, freq, freq_type = "epoch")

path (character(1)): The path to a folder where the models are saved.
freq (integer(1)): How often the model is saved. The frequency is either per step or per epoch, which can be configured through the freq_type parameter.
freq_type (character(1)): Can be either "epoch" (default) or "step".

Method on_epoch_end(): Saves the network and optimizer state dict. Does nothing if freq_type or freq are not met.

CallbackSetCheckpoint$on_epoch_end()

Method on_batch_end(): Saves the selected objects defined in save. Does nothing if freq_type or freq are not met.

CallbackSetCheckpoint$on_batch_end()

Method on_exit(): Saves the learner.

CallbackSetCheckpoint$on_exit()

Method clone(): The objects of this class are cloneable with this method.

CallbackSetCheckpoint$clone(deep = FALSE)

deep: Whether to make a deep clone.
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
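In practice, the checkpoint callback is usually attached via its id. A usage sketch (assuming mlr3torch is attached; the learner id, folder, and parameter values are illustrative):

```r
library(mlr3torch)

# Hypothetical usage sketch: save the model every epoch into a temporary folder.
ckpt = t_clbk("checkpoint", path = tempfile("checkpoints_"), freq = 1)

# The callback would then be passed to a torch learner, e.g.:
# learner = lrn("classif.mlp", epochs = 10, batch_size = 32, callbacks = ckpt)
```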
Saves the training and validation history during training.
The history is saved as a data.table in the $train and $valid slots.
The first column is always epoch.

mlr3torch::CallbackSet -> CallbackSetHistory
Method on_begin(): Initializes lists where the train and validation metrics are stored.

CallbackSetHistory$on_begin()

Method state_dict(): Converts the lists to data.tables.

CallbackSetHistory$state_dict()

Method load_state_dict(): Sets the fields $train and $valid to those contained in the state dict.

CallbackSetHistory$load_state_dict(state_dict)

state_dict (callback_state_history): The state dict as retrieved via $state_dict().

Method on_before_valid(): Adds the latest training scores to the history.

CallbackSetHistory$on_before_valid()

Method on_epoch_end(): Adds the latest validation scores to the history.

CallbackSetHistory$on_epoch_end()

Method clone(): The objects of this class are cloneable with this method.

CallbackSetHistory$clone(deep = FALSE)

deep: Whether to make a deep clone.
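After training with this callback, the recorded scores end up in the learner's model. A hypothetical access pattern (assuming mlr3torch is attached; learner id, measures, and parameter values are illustrative):

```r
library(mlr3torch)

# Hypothetical sketch: attach the history callback to a torch learner
# and inspect the recorded scores after training.
learner = lrn("classif.mlp",
  epochs = 5, batch_size = 32,
  callbacks = t_clbk("history")
)
# learner$train(tsk("iris"))
# learner$model$callbacks$history$train  # data.table; first column is epoch
```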
Prints a progress bar and the metrics for training and validation.

mlr3torch::CallbackSet -> CallbackSetProgress
Method on_epoch_begin(): Initializes the progress bar for training.

CallbackSetProgress$on_epoch_begin()

Method on_batch_end(): Increments the training progress bar.

CallbackSetProgress$on_batch_end()

Method on_before_valid(): Creates the progress bar for validation.

CallbackSetProgress$on_before_valid()

Method on_batch_valid_end(): Increments the validation progress bar.

CallbackSetProgress$on_batch_valid_end()

Method on_epoch_end(): Prints a summary of the training and validation process.

CallbackSetProgress$on_epoch_end()

Method clone(): The objects of this class are cloneable with this method.

CallbackSetProgress$clone(deep = FALSE)

deep: Whether to make a deep clone.
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_context_torch, t_clbk(), torch_callback()
Context for training a torch learner.
This is the (mostly read-only) information callbacks have access to through the argument ctx.
For more information on callbacks, see CallbackSet.
learner (Learner): The torch learner.
task_train (Task): The training task.
task_valid (Task or NULL): The validation task.
loader_train (torch::dataloader): The data loader for training.
loader_valid (torch::dataloader): The data loader for validation.
measures_train (list() of Measures): Measures used for training.
measures_valid (list() of Measures): Measures used for validation.
network (torch::nn_module): The torch network.
optimizer (torch::optimizer): The optimizer.
loss_fn (torch::nn_module): The loss function.
total_epochs (integer(1)): The total number of epochs the learner is trained for.
last_scores_train (named list() or NULL): The scores from the last training batch. Names are the ids of the training measures. If LearnerTorch sets eval_freq different from 1, this is NULL in all epochs that don't evaluate the model.
last_scores_valid (list()): The scores from the last validation batch. Names are the ids of the validation measures. If LearnerTorch sets eval_freq different from 1, this is NULL in all epochs that don't evaluate the model.
epoch (integer(1)): The current epoch.
step (integer(1)): The current iteration.
prediction_encoder (function()): The learner's prediction encoder.
batch (named list() of torch_tensors): The current batch.
terminate (logical(1)): If this field is set to TRUE at the end of an epoch, training stops.
Method new(): Creates a new instance of this R6 class.

ContextTorch$new(
  learner,
  task_train,
  task_valid = NULL,
  loader_train,
  loader_valid = NULL,
  measures_train = NULL,
  measures_valid = NULL,
  network,
  optimizer,
  loss_fn,
  total_epochs,
  prediction_encoder,
  eval_freq = 1L
)

learner (Learner): The torch learner.
task_train (Task): The training task.
task_valid (Task or NULL): The validation task.
loader_train (torch::dataloader): The data loader for training.
loader_valid (torch::dataloader or NULL): The data loader for validation.
measures_train (list() of Measures or NULL): Measures used for training. Default is NULL.
measures_valid (list() of Measures or NULL): Measures used for validation.
network (torch::nn_module): The torch network.
optimizer (torch::optimizer): The optimizer.
loss_fn (torch::nn_module): The loss function.
total_epochs (integer(1)): The total number of epochs the learner is trained for.
prediction_encoder (function()): The learner's prediction encoder.
eval_freq (integer(1)): The evaluation frequency.

Method clone(): The objects of this class are cloneable with this method.

ContextTorch$clone(deep = FALSE)

deep: Whether to make a deep clone.
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, t_clbk(), torch_callback()
This base class provides the basic functionality for training and prediction of a neural network. All torch learners should inherit from this class.
To specify the validation data, you can set the $validate
field of the Learner, which can be set to:
NULL
: no validation
ratio
: only proportion 1 - ratio
of the task is used for training and ratio
is used for validation.
"test"
means that the "test"
task of a resampling is used and is not possible when calling $train()
manually.
"predefined"
: This will use the predefined $internal_valid_task
of a mlr3::Task
.
This validation data can also be used for early stopping, see the description of the Learner
's parameters.
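For illustration, internal validation might be configured as follows (a sketch, assuming the mlr3 and mlr3torch packages and a working torch installation; the concrete values are arbitrary):

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp", epochs = 10, batch_size = 16, device = "cpu")

# Use 30% of the training data for validation
learner$validate = 0.3

# Track a measure on the validation data
learner$param_set$set_values(measures_valid = msr("classif.acc"))

learner$train(tsk("iris"))
learner$internal_valid_scores
```

After training, the validation scores are available as a named list via `$internal_valid_scores`.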
In order to save a LearnerTorch
for later usage, it is necessary to call the $marshal()
method on the Learner
before writing it to disk, as the object will otherwise not be saved correctly.
After loading a marshaled LearnerTorch
into R again, you then need to call $unmarshal()
to transform it
into a usable state.
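To illustrate the marshaling workflow (a sketch, assuming a working mlr3torch and torch setup):

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp", epochs = 1, batch_size = 16, device = "cpu")
learner$train(tsk("iris"))

# Marshal before writing the learner to disk
learner$marshal()
path = tempfile(fileext = ".rds")
saveRDS(learner, path)

# After reading it back, unmarshal to restore a usable state
learner2 = readRDS(path)
learner2$unmarshal()
learner2$predict(tsk("iris"))
```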
In order to prevent overfitting, the LearnerTorch
class supports early stopping via the patience
and min_delta
parameters, see the Learner
's parameters.
When tuning a LearnerTorch
it is also possible to combine the explicit tuning via mlr3tuning
and the LearnerTorch
's internal tuning of the epochs via early stopping.
To do so, you just need to include epochs = to_tune(upper = <upper>, internal = TRUE)
in the search space,
where <upper>
is the maximally allowed number of epochs, and configure the early stopping.
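Such a search space might be specified as follows (a sketch, assuming mlr3tuning is loaded; the upper bound of 100 epochs and the other values are arbitrary):

```r
library(mlr3)
library(mlr3torch)
library(mlr3tuning)

learner = lrn("classif.mlp",
  batch_size = 16,
  # epochs are tuned internally via early stopping
  epochs = to_tune(upper = 100L, internal = TRUE),
  # early stopping configuration, which determines the tuned epoch count
  patience = 5,
  validate = 0.3,
  measures_valid = msr("classif.ce")
)
```

Other hyperparameters can then be tuned explicitly in the same search space, while the number of epochs is determined by early stopping.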
The Model is a list of class "learner_torch_model"
with the following elements:
network
:: The trained network.
optimizer
:: The $state_dict()
of the optimizer used to train the network.
loss_fn
:: The $state_dict()
of the loss used to train the network.
callbacks
:: The callbacks used to train the network.
seed
:: The seed that was / is used for training and prediction.
epochs
:: How many epochs the model was trained for (early stopping).
task_col_info
:: A data.table()
containing information about the training task.
General:
The parameters of the optimizer, loss and callbacks,
prefixed with "opt."
, "loss."
and "cb.<callback id>."
respectively, as well as:
epochs
:: integer(1)
The number of epochs.
device
:: character(1)
The device. One of "auto"
, "cpu"
, or "cuda"
or other values defined in mlr_reflections$torch$devices
.
The value is initialized to "auto"
, which will select "cuda"
if possible, then try "mps"
and otherwise
fall back to "cpu"
.
num_threads
:: integer(1)
The number of threads for intra-op parallelization (if device
is "cpu"
).
This value is initialized to 1.
seed
:: integer(1)
or "random"
or NULL
The torch seed that is used during training and prediction.
This value is initialized to "random"
, which means that a random seed will be sampled at the beginning of the
training phase. This seed (either set or randomly sampled) is available via $model$seed
after training
and used during prediction.
Note that because the seed is set during the training phase, by default (i.e. when seed
is
"random"
), clones of the learner will use a different seed.
If set to NULL
, no seeding will be done.
Evaluation:
measures_train
:: Measure
or list()
of Measure
s.
Measures to be evaluated during training.
measures_valid
:: Measure
or list()
of Measure
s.
Measures to be evaluated during validation.
eval_freq
:: integer(1)
How often the train / validation predictions are evaluated using measures_train
/ measures_valid
.
This is initialized to 1
.
Note that the final model is always evaluated.
Early Stopping:
patience
:: integer(1)
This activates early stopping using the validation scores.
If the performance of a model does not improve for patience
evaluation steps, training is ended.
Note that the final model is stored in the learner, not the best model.
This is initialized to 0
, which means no early stopping.
The first entry from measures_valid
is used as the metric.
This also requires specifying the $validate
field of the Learner, as well as measures_valid
.
min_delta
:: double(1)
The minimum improvement threshold (>
) for early stopping.
Is initialized to 0.
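Putting this together, early stopping might be configured like so (a sketch; the concrete values are arbitrary):

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp", epochs = 100, batch_size = 16, device = "cpu")

# Validation data is required for early stopping
learner$validate = 0.3
learner$param_set$set_values(
  measures_valid = msr("classif.ce"),
  patience = 10,   # stop after 10 evaluations without improvement
  min_delta = 0.01 # an improvement must exceed this threshold to count
)
```

After training, the epoch at which training stopped is available via `$internal_tuned_values`.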
Dataloader:
batch_size
:: integer(1)
The batch size (required).
shuffle
:: logical(1)
Whether to shuffle the instances in the dataset. Default is FALSE
.
This does not impact validation.
sampler
:: torch::sampler
Object that defines how the dataloader draws samples.
batch_sampler
:: torch::sampler
Object that defines how the dataloader draws batches.
num_workers
:: integer(1)
The number of workers for data loading (batches are loaded in parallel).
The default is 0
, which means that data will be loaded in the main process.
collate_fn
:: function
How to merge a list of samples to form a batch.
pin_memory
:: logical(1)
Whether the dataloader copies tensors into CUDA pinned memory before returning them.
drop_last
:: logical(1)
Whether to drop the last training batch in each epoch during training. Default is FALSE
.
timeout
:: numeric(1)
The timeout value for collecting a batch from workers.
Negative values mean no timeout and the default is -1
.
worker_init_fn
:: function(id)
A function that receives the worker id (in [1, num_workers]
) and is executed after seeding
on the worker but before data loading.
worker_globals
:: list()
| character()
When loading data in parallel, this allows exporting globals to the workers.
If this is a character vector, the objects in the global environment with those names
are copied to the workers.
worker_packages
:: character()
Which packages to load on the workers.
Also see torch::dataloader
for more information.
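For instance, parallel data loading might be configured like this (a sketch; whether multiple workers pay off depends on the dataset and hardware):

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 1,
  batch_size = 32,
  device = "cpu",
  shuffle = TRUE,  # shuffle training instances each epoch
  num_workers = 2  # load batches in two background processes
)
```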
There are no separate classes for classification and regression to inherit from.
Instead, the task_type
must be specified as a construction argument.
Currently, only classification and regression are supported.
When inheriting from this class, one should overload two private methods:
.network(task, param_vals)
(Task
, list()
) -> nn_module
Construct a torch::nn_module
object for the given task and parameter values, i.e. the neural network that
is trained by the learner.
For classification, the output of this network are expected to be the scores before the application of the
final softmax layer.
.dataset(task, param_vals)
(Task
, list()
) -> torch::dataset
Create the dataset for the task.
Must respect the parameter value of the device.
Moreover, one needs to respect the row ids of the provided task.
It is also possible to overwrite the private .dataloader()
method instead of the .dataset()
method.
By default, a dataloader is constructed using the output of the .dataset()
method.
When overwriting .dataloader(), one must respect the dataloader parameters from the ParamSet
.
.dataloader(task, param_vals)
(Task
, list()
) -> torch::dataloader
Create a dataloader from the task.
Needs to respect at least batch_size
and shuffle
(otherwise predictions can be permuted).
To change the predict types, the private .encode_prediction()
method can be overwritten:
.encode_prediction(predict_tensor, task, param_vals)
(torch_tensor
, Task
, list()
) -> list()
Take in the raw predictions from self$network
(predict_tensor
) and encode them into a
format that can be converted to valid mlr3
predictions using mlr3::as_prediction_data()
.
This method must take self$predict_type
into account.
While it is possible to add parameters by specifying the param_set
construction argument, it is currently
not possible to remove existing parameters, i.e. those listed in section Parameters.
None of the parameters provided in param_set
can have an id that starts with "loss."
, "opt."
, or "cb."
, as these are reserved for the dynamically constructed parameters of the optimizer, the loss function,
and the callbacks.
To perform additional input checks on the task, the private .verify_train_task(task, param_vals)
and
.verify_predict_task(task, param_vals)
can be overwritten.
For learners that have other construction arguments that should change the hash of a learner, it is required
to implement the private $.additional_phash_input()
.
mlr3::Learner
-> LearnerTorch
validate
How to construct the internal validation data. This parameter can be either NULL
,
a ratio in $(0, 1)$, "test"
, or "predefined"
.
loss
(TorchLoss
)
The torch loss.
optimizer
(TorchOptimizer
)
The torch optimizer.
callbacks
(list()
of TorchCallback
s)
List of torch callbacks.
The ids will be set as the names.
internal_valid_scores
Retrieves the internal validation scores as a named list()
.
Specify the $validate
field and the measures_valid
parameter to configure this.
Returns NULL
if learner is not trained yet.
internal_tuned_values
When early stopping is activated, this returns a named list with the early-stopped epochs,
otherwise an empty list is returned.
Returns NULL
if learner is not trained yet.
marshaled
(logical(1)
)
Whether the learner is marshaled.
network
(nn_module()
)
Shortcut for learner$model$network
.
param_set
(ParamSet
)
The parameter set.
hash
(character(1)
)
Hash (unique identifier) for this object.
phash
(character(1)
)
Hash (unique identifier) for this partial object, excluding some components
which are varied systematically during tuning (parameter values).
new()
Creates a new instance of this R6 class.
LearnerTorch$new(
  id,
  task_type,
  param_set,
  properties,
  man,
  label,
  feature_types,
  optimizer = NULL,
  loss = NULL,
  packages = character(),
  predict_types = NULL,
  callbacks = list()
)
id
(character(1)
)
The id of the new object.
task_type
(character(1)
)
The task type.
param_set
(ParamSet
or alist()
)
Either a parameter set, or an alist()
containing different values of self,
e.g. alist(private$.param_set1, private$.param_set2)
, from which a ParamSet
collection
should be created.
properties
(character()
)
The properties of the object.
See mlr_reflections$learner_properties
for available values.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
label
(character(1)
)
Label for the new instance.
feature_types
(character()
)
The feature types.
See mlr_reflections$task_feature_types
for available values.
Additionally, "lazy_tensor"
is supported.
optimizer
(NULL
or TorchOptimizer
)
The optimizer to use for training.
Defaults to adam.
loss
(NULL
or TorchLoss
)
The loss to use for training.
Defaults to MSE for regression and cross entropy for classification.
packages
(character()
)
The R packages this object depends on.
predict_types
(character()
)
The predict types.
See mlr_reflections$learner_predict_types
for available values.
For regression, the default is "response"
.
For classification, this defaults to "response"
and "prob"
.
To deviate from the defaults, it is necessary to overwrite the private $.encode_prediction()
method, see section Inheriting.
callbacks
(list()
of TorchCallback
s)
The callbacks to use for training.
Defaults to an empty list()
, i.e. no callbacks.
format()
Helper for print outputs.
LearnerTorch$format(...)
...
(ignored).
print()
Prints the object.
LearnerTorch$print(...)
...
(any)
Currently unused.
marshal()
Marshal the learner.
LearnerTorch$marshal(...)
...
(any)
Additional parameters.
self
unmarshal()
Unmarshal the learner.
LearnerTorch$unmarshal(...)
...
(any)
Additional parameters.
self
dataset()
Create the dataset for a task.
LearnerTorch$dataset(task)
task
Task
The task
clone()
The objects of this class are cloneable with this method.
LearnerTorch$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Learner:
mlr_learners.mlp
,
mlr_learners.tab_resnet
,
mlr_learners.torch_featureless
,
mlr_learners_torch_image
,
mlr_learners_torch_model
Base Class for Image Learners.
The features are assumed to be a single lazy_tensor
column in RGB format.
Parameters include those inherited from LearnerTorch
and the param_set
construction argument.
mlr3::Learner
-> mlr3torch::LearnerTorch
-> LearnerTorchImage
mlr3::Learner$base_learner()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchImage$new(
  id,
  task_type,
  param_set = ps(),
  label,
  optimizer = NULL,
  loss = NULL,
  callbacks = list(),
  packages = c("torchvision", "magick"),
  man,
  properties = NULL,
  predict_types = NULL
)
id
(character(1)
)
The id of the new object.
task_type
(character(1)
)
The task type.
param_set
(ParamSet
)
The parameter set.
label
(character(1)
)
Label for the new instance.
optimizer
(TorchOptimizer
)
The torch optimizer.
loss
(TorchLoss
)
The loss to use for training.
callbacks
(list()
of TorchCallback
s)
The callbacks used during training.
Must have unique ids.
They are executed in the order in which they are provided.
packages
(character()
)
The R packages this object depends on.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
properties
(character()
)
The properties of the object.
See mlr_reflections$learner_properties
for available values.
predict_types
(character()
)
The predict types.
See mlr_reflections$learner_predict_types
for available values.
clone()
The objects of this class are cloneable with this method.
LearnerTorchImage$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Learner:
mlr_learners.mlp
,
mlr_learners.tab_resnet
,
mlr_learners.torch_featureless
,
mlr_learners_torch
,
mlr_learners_torch_model
Create a torch learner from an instantiated nn_module()
.
For classification, the output of the network must be the scores (before the softmax).
See LearnerTorch
mlr3::Learner
-> mlr3torch::LearnerTorch
-> LearnerTorchModel
network_stored
(nn_module
or NULL
)
The network that will be trained.
After calling $train()
, this is NULL
.
ingress_tokens
(named list()
with TorchIngressToken
or NULL
)
The ingress tokens. Must be non-NULL
when calling $train()
.
mlr3::Learner$base_learner()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchModel$new(
  network = NULL,
  ingress_tokens = NULL,
  task_type,
  properties = NULL,
  optimizer = NULL,
  loss = NULL,
  callbacks = list(),
  packages = character(0),
  feature_types = NULL
)
network
(nn_module
)
An instantiated nn_module
. Is not cloned during construction.
For classification, outputs must be the scores (before the softmax).
ingress_tokens
(list
of TorchIngressToken()
)
A list of ingress tokens that determines how the dataloader is constructed.
task_type
(character(1)
)
The task type.
properties
(NULL
or character()
)
The properties of the learner.
Defaults to all available properties for the given task type.
optimizer
(TorchOptimizer
)
The torch optimizer.
loss
(TorchLoss
)
The loss to use for training.
callbacks
(list()
of TorchCallback
s)
The callbacks used during training.
Must have unique ids.
They are executed in the order in which they are provided.
packages
(character()
)
The R packages this object depends on.
feature_types
(NULL
or character()
)
The feature types. Defaults to all available feature types.
clone()
The objects of this class are cloneable with this method.
LearnerTorchModel$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Learner:
mlr_learners.mlp
,
mlr_learners.tab_resnet
,
mlr_learners.torch_featureless
,
mlr_learners_torch
,
mlr_learners_torch_image
Other Graph Network:
ModelDescriptor()
,
TorchIngressToken()
,
mlr_pipeops_module
,
mlr_pipeops_torch
,
mlr_pipeops_torch_ingress
,
mlr_pipeops_torch_ingress_categ
,
mlr_pipeops_torch_ingress_ltnsr
,
mlr_pipeops_torch_ingress_num
,
model_descriptor_to_learner()
,
model_descriptor_to_module()
,
model_descriptor_union()
,
nn_graph()
# We show the learner using a classification task
# The iris task has 4 features and 3 classes
network = nn_linear(4, 3)
task = tsk("iris")

# This defines the dataloader.
# It loads all 4 features, which are also numeric.
# The shape is (NA, 4) because the batch dimension is generally NA
ingress_tokens = list(
  input = TorchIngressToken(task$feature_names, batchgetter_num, c(NA, 4))
)

# Creating the learner and setting required parameters
learner = lrn("classif.torch_model",
  network = network,
  ingress_tokens = ingress_tokens,
  batch_size = 16,
  epochs = 1,
  device = "cpu"
)

# A simple train-predict
ids = partition(task)
learner$train(task, ids$train)
learner$predict(task, ids$test)
Fully connected feed forward network with dropout after each activation function.
The features can either be a single lazy_tensor
or one or more numeric columns (but not both).
This Learner can be instantiated using the sugar function lrn()
:
lrn("classif.mlp", ...)
lrn("regr.mlp", ...)
Supported task types: 'classif', 'regr'
Predict Types:
classif: 'response', 'prob'
regr: 'response'
Feature Types: “integer”, “numeric”, “lazy_tensor”
Parameters from LearnerTorch
, as well as:
activation
:: [nn_module]
The activation function. Is initialized to nn_relu
.
activation_args
:: named list()
A named list with initialization arguments for the activation function.
This is initialized to an empty list.
neurons
:: integer()
The number of neurons per hidden layer. By default there is no hidden layer.
Setting this to c(10, 20)
would create a first hidden layer with 10 neurons and a second with 20.
p
:: numeric(1)
The dropout probability. Is initialized to 0.5
.
shape
:: integer()
or NULL
The input shape of length 2, e.g. c(NA, 5)
.
Only needs to be present when there is a lazy tensor input with unknown shape (NULL
).
Otherwise the input shape is inferred from the number of numeric features.
mlr3::Learner
-> mlr3torch::LearnerTorch
-> LearnerTorchMLP
mlr3::Learner$base_learner()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchMLP$new(task_type, optimizer = NULL, loss = NULL, callbacks = list())
task_type
(character(1)
)
The task type, either "classif"
or "regr"
.
optimizer
(TorchOptimizer
)
The optimizer to use for training.
Per default, adam is used.
loss
(TorchLoss
)
The loss used to train the network.
Per default, mse is used for regression and cross_entropy for classification.
callbacks
(list()
of TorchCallback
s)
The callbacks. Must have unique ids.
clone()
The objects of this class are cloneable with this method.
LearnerTorchMLP$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Learner:
mlr_learners.tab_resnet
,
mlr_learners.torch_featureless
,
mlr_learners_torch
,
mlr_learners_torch_image
,
mlr_learners_torch_model
# Define the Learner and set parameter values
learner = lrn("classif.mlp")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu",
  neurons = 10
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
Tabular resnet.
This Learner can be instantiated using the sugar function lrn()
:
lrn("classif.tab_resnet", ...)
lrn("regr.tab_resnet", ...)
Supported task types: 'classif', 'regr'
Predict Types:
classif: 'response', 'prob'
regr: 'response'
Feature Types: “integer”, “numeric”
Parameters from LearnerTorch
, as well as:
n_blocks
:: integer(1)
The number of blocks.
d_block
:: integer(1)
The input and output dimension of a block.
d_hidden
:: integer(1)
The latent dimension of a block.
d_hidden_multiplier
:: integer(1)
Alternative way to specify the latent dimension as d_block * d_hidden_multiplier
.
dropout1
:: numeric(1)
First dropout ratio.
dropout2
:: numeric(1)
Second dropout ratio.
mlr3::Learner
-> mlr3torch::LearnerTorch
-> LearnerTorchTabResNet
mlr3::Learner$base_learner()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchTabResNet$new(task_type, optimizer = NULL, loss = NULL, callbacks = list())
task_type
(character(1)
)
The task type, either "classif"
or "regr"
.
optimizer
(TorchOptimizer
)
The optimizer to use for training.
Per default, adam is used.
loss
(TorchLoss
)
The loss used to train the network.
Per default, mse is used for regression and cross_entropy for classification.
callbacks
(list()
of TorchCallback
s)
The callbacks. Must have unique ids.
clone()
The objects of this class are cloneable with this method.
LearnerTorchTabResNet$clone(deep = FALSE)
deep
Whether to make a deep clone.
Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.
Other Learner:
mlr_learners.mlp
,
mlr_learners.torch_featureless
,
mlr_learners_torch
,
mlr_learners_torch_image
,
mlr_learners_torch_model
# Define the Learner and set parameter values
learner = lrn("classif.tab_resnet")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu",
  n_blocks = 2, d_block = 10, d_hidden = 20,
  dropout1 = 0.3, dropout2 = 0.3
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
Featureless torch learner. Output is a constant weight that is learned during training. For classification, this should (asymptotically) result in a majority class prediction when using the standard cross-entropy loss. For regression, this should result in the median for L1 loss and in the mean for L2 loss.
This Learner can be instantiated using the sugar function lrn()
:
lrn("classif.torch_featureless", ...)
lrn("regr.torch_featureless", ...)
Supported task types: 'classif', 'regr'
Predict Types:
classif: 'response', 'prob'
regr: 'response'
Feature Types: “logical”, “integer”, “numeric”, “character”, “factor”, “ordered”, “POSIXct”, “lazy_tensor”
Only those from LearnerTorch
.
mlr3::Learner
-> mlr3torch::LearnerTorch
-> LearnerTorchFeatureless
mlr3::Learner$base_learner()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchFeatureless$new(task_type, optimizer = NULL, loss = NULL, callbacks = list())
task_type
(character(1)
)
The task type, either "classif"
or "regr"
.
optimizer
(TorchOptimizer
)
The optimizer to use for training.
Per default, adam is used.
loss
(TorchLoss
)
The loss used to train the network.
Per default, mse is used for regression and cross_entropy for classification.
callbacks
(list()
of TorchCallback
s)
The callbacks. Must have unique ids.
clone()
The objects of this class are cloneable with this method.
LearnerTorchFeatureless$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Learner:
mlr_learners.mlp
,
mlr_learners.tab_resnet
,
mlr_learners_torch
,
mlr_learners_torch_image
,
mlr_learners_torch_model
# Define the Learner and set parameter values
learner = lrn("classif.torch_featureless")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu"
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
Classic image classification networks from torchvision
.
Parameters from LearnerTorchImage
and
pretrained
:: logical(1)
Whether to use the pretrained model.
The final linear layer will be replaced with a new nn_linear
with the
number of classes inferred from the Task
.
Supported task types: "classif"
Predict Types: "response"
and "prob"
Feature Types: "lazy_tensor"
Required packages: "mlr3torch"
, "torch"
, "torchvision"
mlr3::Learner
-> mlr3torch::LearnerTorch
-> mlr3torch::LearnerTorchImage
-> LearnerTorchVision
mlr3::Learner$base_learner()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchVision$new(
  name,
  module_generator,
  label,
  optimizer = NULL,
  loss = NULL,
  callbacks = list()
)
name
(character(1)
)
The name of the network.
module_generator
(function(pretrained, num_classes)
)
Function that generates the network.
label
(character(1)
)
The label of the network.
Krizhevsky, Alex, Sutskever, Ilya, Hinton, E. G (2017).
“Imagenet classification with deep convolutional neural networks.”
Communications of the ACM, 60(6), 84–90.
Sandler, Mark, Howard, Andrew, Zhu, Menglong, Zhmoginov, Andrey, Chen, Liang-Chieh (2018).
“Mobilenetv2: Inverted residuals and linear bottlenecks.”
In Proceedings of the IEEE conference on computer vision and pattern recognition, 4510–4520.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, Sun, Jian (2016).
“Deep residual learning for image recognition.”
In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Simonyan, Karen, Zisserman, Andrew (2014).
“Very deep convolutional networks for large-scale image recognition.”
arXiv preprint arXiv:1409.1556.
optimizer
(TorchOptimizer
)
The optimizer to use for training.
Per default, adam is used.
loss
(TorchLoss
)
The loss used to train the network.
Per default, mse is used for regression and cross_entropy for classification.
callbacks
(list()
of TorchCallback
s)
The callbacks. Must have unique ids.
clone()
The objects of this class are cloneable with this method.
LearnerTorchVision$clone(deep = FALSE)
deep
Whether to make a deep clone.
Calls torchvision::transform_center_crop
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels |
size | untyped | - | |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
Calls torchvision::transform_color_jitter
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
brightness | numeric | 0 | | |
contrast | numeric | 0 | | |
saturation | numeric | 0 | | |
hue | numeric | 0 | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_crop
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
top | integer | - | | |
left | integer | - | | |
height | integer | - | | |
width | integer | - | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_hflip
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
Calls torchvision::transform_random_affine
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
degrees | untyped | - | - | |
translate | untyped | NULL | - | |
scale | untyped | NULL | - | |
resample | integer | 0 | | |
fillcolor | untyped | 0 | - | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_random_choice
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels |
transforms | untyped | - | |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
Calls torchvision::transform_random_crop
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels |
size | untyped | - | |
padding | untyped | NULL | |
pad_if_needed | logical | FALSE | TRUE, FALSE |
fill | untyped | 0L | |
padding_mode | character | constant | constant, edge, reflect, symmetric |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
Calls torchvision::transform_random_horizontal_flip
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
p | numeric | 0.5 | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_random_order, see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels |
transforms | untyped | - | |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
Calls torchvision::transform_random_resized_crop, see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels | Range |
size | untyped | - | - | - |
scale | untyped | c(0.08, 1) | - | - |
ratio | untyped | c(3/4, 4/3) | - | - |
interpolation | integer | 2 | - | - |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | - |
Calls torchvision::transform_random_vertical_flip, see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels | Range |
p | numeric | 0.5 | - | [0, 1] |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | - |
Calls torchvision::transform_resized_crop, see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels | Range |
top | integer | - | - | - |
left | integer | - | - | - |
height | integer | - | - | - |
width | integer | - | - | - |
size | untyped | - | - | - |
interpolation | integer | 2 | - | - |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | - |
Calls torchvision::transform_rotate, see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels | Range |
angle | untyped | - | - | - |
resample | integer | 0 | - | - |
expand | logical | FALSE | TRUE, FALSE | - |
center | untyped | NULL | - | - |
fill | untyped | NULL | - | - |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | - |
Calls torchvision::transform_vflip, see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
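These preprocessing PipeOps can be chained like any other mlr3pipelines operators. A minimal sketch, assuming the augmentation operators are registered under ids of the form "augment_*" (as in current mlr3torch versions); the angle value is purely illustrative:

```r
library(mlr3torch)

# Chain two row-wise augmentations; by default they are applied
# during the "train" stage, leaving prediction data untouched.
augmenter = po("augment_vflip") %>>%
  po("augment_rotate", angle = 45)

# The result is a Graph that can be prepended to a torch learner pipeline.
augmenter
```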
PipeOpModule wraps an nn_module or function that is called during the train phase of this mlr3pipelines::PipeOp. This makes it possible to assemble PipeOpModules in a computational mlr3pipelines::Graph that represents either a neural network or a preprocessing graph of a lazy_tensor.
In most cases it is easier to create such a network by creating a structurally related graph consisting of nodes of class PipeOpTorchIngress and PipeOpTorch. This graph will then generate the graph consisting of PipeOpModules as part of the ModelDescriptor.
The number and names of the input and output channels can be set during construction. The channels input and output "torch_tensor" objects during training, and NULL during prediction, as the prediction phase currently serves no meaningful purpose.
The state is the value calculated by the public method shapes_out().
No parameters.
During training, the wrapped nn_module / function is called with the provided inputs in the order in which the channels are defined. Arguments are not matched by name.
mlr3pipelines::PipeOp -> PipeOpModule
module (nn_module)
The torch module that is called during the training phase.
new()
Creates a new instance of this R6 class.
```r
PipeOpModule$new(
  id = "module",
  module = nn_identity(),
  inname = "input",
  outname = "output",
  param_vals = list(),
  packages = character(0)
)
```
id (character(1))
The id of the new object.
module (nn_module or function())
The torch module or function that is being wrapped.
inname (character())
The names of the input channels.
outname (character())
The names of the output channels. If this parameter has length 1, the parameter module must return a tensor. Otherwise it must return a list() of tensors of corresponding length.
param_vals (named list())
Parameter values to be set after construction.
packages (character())
The R packages this object depends on.
clone()
The objects of this class are cloneable with this method.
PipeOpModule$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
Other PipeOp: mlr_pipeops_torch_callbacks, mlr_pipeops_torch_optimizer
```r
## creating a PipeOpModule manually

# one input and output channel
po_module = po("module",
  id = "linear",
  module = torch::nn_linear(10, 20),
  inname = "input",
  outname = "output"
)
x = torch::torch_randn(16, 10)
# This calls the forward function of the wrapped module.
y = po_module$train(list(input = x))
str(y)

# multiple input and output channels
nn_custom = torch::nn_module("nn_custom",
  initialize = function(in_features, out_features) {
    self$lin1 = torch::nn_linear(in_features, out_features)
    self$lin2 = torch::nn_linear(in_features, out_features)
  },
  forward = function(x, z) {
    list(out1 = self$lin1(x), out2 = torch::nnf_relu(self$lin2(z)))
  }
)
module = nn_custom(3, 2)
po_module = po("module",
  id = "custom",
  module = module,
  inname = c("x", "z"),
  outname = c("out1", "out2")
)
x = torch::torch_randn(1, 3)
z = torch::torch_randn(1, 3)
out = po_module$train(list(x = x, z = z))
str(out)

# How such a PipeOpModule is usually generated
graph = po("torch_ingress_num") %>>% po("nn_linear", out_features = 10L)
result = graph$train(tsk("iris"))
# The PipeOpTorchLinear generates a PipeOpModule and adds it to a new (module) graph
result[[1]]$graph
linear_module = result[[1L]]$graph$pipeops$nn_linear
linear_module
formalArgs(linear_module$module)
linear_module$input$name

# Constructing a PipeOpModule using a simple function
po_add1 = po("module",
  id = "add_one",
  module = function(x) x + 1
)
input = list(torch_tensor(1))
po_add1$train(input)$output
```
Applies a 1D average pooling over an input signal composed of several input planes.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
kernel_size :: integer()
The size of the window. Can be a single number or a vector.
stride :: integer()
The stride of the window. Can be a single number or a vector. Default: kernel_size.
padding :: integer()
Implicit zero paddings on both sides of the input. Can be a single number or a vector. Default: 0.
ceil_mode :: logical(1)
When TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE.
count_include_pad :: logical(1)
When TRUE, will include the zero-padding in the averaging calculation. Default: TRUE.
divisor_override :: integer(1)
If specified, it will be used as the divisor; otherwise the size of the pooling region will be used. Default: NULL. Only available for dimensions greater than 1.
Calls nn_avg_pool1d() during training.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAvgPool -> PipeOpTorchAvgPool1D
new()
Creates a new instance of this R6 class.
PipeOpTorchAvgPool1D$new(id = "nn_avg_pool1d", param_vals = list())
id (character(1))
Identifier of the resulting object.
param_vals (list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchAvgPool1D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
# Construct the PipeOp
pipeop = po("nn_avg_pool1d")
pipeop
# The available parameters
pipeop$param_set
```
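Since the state is computed by $shapes_out(), the output shape of the pooling operation can also be inspected without building a full network. A small sketch, assuming $shapes_out() accepts a list of input shapes with NA marking the batch dimension:

```r
library(mlr3torch)

# Average pooling over 1D signals of shape (batch, channels, length).
po_pool = po("nn_avg_pool1d", kernel_size = 2L, stride = 2L)

# NA marks the unknown batch dimension; the length axis shrinks from 10 to 5.
po_pool$shapes_out(list(c(NA, 3, 10)))
```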
Applies a 2D average pooling over an input signal composed of several input planes.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls nn_avg_pool2d() during training.
kernel_size :: integer()
The size of the window. Can be a single number or a vector.
stride :: integer()
The stride of the window. Can be a single number or a vector. Default: kernel_size.
padding :: integer()
Implicit zero paddings on both sides of the input. Can be a single number or a vector. Default: 0.
ceil_mode :: logical(1)
When TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE.
count_include_pad :: logical(1)
When TRUE, will include the zero-padding in the averaging calculation. Default: TRUE.
divisor_override :: integer(1)
If specified, it will be used as the divisor; otherwise the size of the pooling region will be used. Default: NULL. Only available for dimensions greater than 1.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAvgPool -> PipeOpTorchAvgPool2D
new()
Creates a new instance of this R6 class.
PipeOpTorchAvgPool2D$new(id = "nn_avg_pool2d", param_vals = list())
id (character(1))
Identifier of the resulting object.
param_vals (list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchAvgPool2D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
# Construct the PipeOp
pipeop = po("nn_avg_pool2d")
pipeop
# The available parameters
pipeop$param_set
```
Applies a 3D average pooling over an input signal composed of several input planes.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls nn_avg_pool3d() during training.
kernel_size :: integer()
The size of the window. Can be a single number or a vector.
stride :: integer()
The stride of the window. Can be a single number or a vector. Default: kernel_size.
padding :: integer()
Implicit zero paddings on both sides of the input. Can be a single number or a vector. Default: 0.
ceil_mode :: logical(1)
When TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE.
count_include_pad :: logical(1)
When TRUE, will include the zero-padding in the averaging calculation. Default: TRUE.
divisor_override :: integer(1)
If specified, it will be used as the divisor; otherwise the size of the pooling region will be used. Default: NULL. Only available for dimensions greater than 1.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAvgPool -> PipeOpTorchAvgPool3D
new()
Creates a new instance of this R6 class.
PipeOpTorchAvgPool3D$new(id = "nn_avg_pool3d", param_vals = list())
id (character(1))
Identifier of the resulting object.
param_vals (list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchAvgPool3D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
# Construct the PipeOp
pipeop = po("nn_avg_pool3d")
pipeop
# The available parameters
pipeop$param_set
```
Applies Batch Normalization for each channel across a batch of data.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
eps :: numeric(1)
A value added to the denominator for numerical stability. Default: 1e-5.
momentum :: numeric(1)
The value used for the running_mean and running_var computation. Can be set to NULL for a cumulative moving average (i.e. simple average). Default: 0.1.
affine :: logical(1)
When TRUE, this module has learnable affine parameters. Default: TRUE.
track_running_stats :: logical(1)
When TRUE, this module tracks the running mean and variance; when FALSE, it does not track such statistics and always uses batch statistics in both training and eval modes. Default: TRUE.
Calls torch::nn_batch_norm1d(). The parameter num_features is inferred as the second dimension of the input shape.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchBatchNorm -> PipeOpTorchBatchNorm1D
new()
Creates a new instance of this R6 class.
PipeOpTorchBatchNorm1D$new(id = "nn_batch_norm1d", param_vals = list())
id (character(1))
Identifier of the resulting object.
param_vals (list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchBatchNorm1D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
# Construct the PipeOp
pipeop = po("nn_batch_norm1d")
pipeop
# The available parameters
pipeop$param_set
```
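Because num_features is inferred from the second dimension of the input shape, no size argument needs to be configured by hand. A sketch of this inference, assuming $shapes_out() accepts a list of input shapes with NA marking the batch dimension:

```r
library(mlr3torch)

# num_features is not set by the user; it is inferred as the second
# dimension (here 8) of the input shape when the network is built.
po_bn = po("nn_batch_norm1d")

# Batch normalization preserves the input shape.
po_bn$shapes_out(list(c(NA, 8)))
```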
Applies Batch Normalization for each channel across a batch of data.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_batch_norm2d(). The parameter num_features is inferred as the second dimension of the input shape.
eps :: numeric(1)
A value added to the denominator for numerical stability. Default: 1e-5.
momentum :: numeric(1)
The value used for the running_mean and running_var computation. Can be set to NULL for a cumulative moving average (i.e. simple average). Default: 0.1.
affine :: logical(1)
When TRUE, this module has learnable affine parameters. Default: TRUE.
track_running_stats :: logical(1)
When TRUE, this module tracks the running mean and variance; when FALSE, it does not track such statistics and always uses batch statistics in both training and eval modes. Default: TRUE.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchBatchNorm -> PipeOpTorchBatchNorm2D
new()
Creates a new instance of this R6 class.
PipeOpTorchBatchNorm2D$new(id = "nn_batch_norm2d", param_vals = list())
id (character(1))
Identifier of the resulting object.
param_vals (list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchBatchNorm2D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
# Construct the PipeOp
pipeop = po("nn_batch_norm2d")
pipeop
# The available parameters
pipeop$param_set
```
Applies Batch Normalization for each channel across a batch of data.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_batch_norm3d(). The parameter num_features is inferred as the second dimension of the input shape.
eps :: numeric(1)
A value added to the denominator for numerical stability. Default: 1e-5.
momentum :: numeric(1)
The value used for the running_mean and running_var computation. Can be set to NULL for a cumulative moving average (i.e. simple average). Default: 0.1.
affine :: logical(1)
When TRUE, this module has learnable affine parameters. Default: TRUE.
track_running_stats :: logical(1)
When TRUE, this module tracks the running mean and variance; when FALSE, it does not track such statistics and always uses batch statistics in both training and eval modes. Default: TRUE.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchBatchNorm -> PipeOpTorchBatchNorm3D
new()
Creates a new instance of this R6 class.
PipeOpTorchBatchNorm3D$new(id = "nn_batch_norm3d", param_vals = list())
id (character(1))
Identifier of the resulting object.
param_vals (list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchBatchNorm3D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
# Construct the PipeOp
pipeop = po("nn_batch_norm3d")
pipeop
# The available parameters
pipeop$param_set
```
Repeat a block n_blocks times.
The parameters available for the block itself, as well as:
n_blocks :: integer(1)
How often to repeat the block.
The PipeOp sets its input and output channels to those from the block (Graph) it received during construction.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchBlock
block (Graph)
The neural network segment that is repeated by this PipeOp.
new()
Creates a new instance of this R6 class.
PipeOpTorchBlock$new(block, id = "nn_block", param_vals = list())
block (Graph)
A graph consisting primarily of PipeOpTorch objects that is to be repeated.
id (character(1))
The id of the new object.
param_vals (named list())
Parameter values to be set after construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchBlock$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
```r
block = po("nn_linear") %>>% po("nn_relu")
po_block = po("nn_block", block, nn_linear.out_features = 10L, n_blocks = 3)
network = po("torch_ingress_num") %>>%
  po_block %>>%
  po("nn_head") %>>%
  po("torch_loss", t_loss("cross_entropy")) %>>%
  po("torch_optimizer", t_opt("adam")) %>>%
  po("torch_model_classif", batch_size = 50, epochs = 3)
task = tsk("iris")
network$train(task)
```
Applies element-wise, CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
alpha
:: numeric(1)
The alpha value for the ELU formulation. Default: 1.0
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE
.
Calls torch::nn_celu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchCELU
new()
Creates a new instance of this R6 class.
PipeOpTorchCELU$new(id = "nn_celu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchCELU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_celu")
pipeop
# The available parameters
pipeop$param_set
Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
out_channels
:: integer(1)
Number of output channels produced by the convolution.
kernel_size
:: integer()
Size of the convolving kernel.
stride
:: integer()
Stride of the convolution. Default: 1.
padding
:: integer()
dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0.
output_padding
:: integer()
Additional size added to one side of the output shape. Default: 0.
groups
:: integer()
Number of blocked connections from input channels to output channels. Default: 1
bias
:: logical(1)
If TRUE, adds a learnable bias to the output. Default: TRUE.
dilation
:: integer()
Spacing between kernel elements. Default: 1.
padding_mode
:: character(1)
The padding mode. One of "zeros"
, "reflect"
, "replicate"
, or "circular"
. Default is "zeros"
.
Calls torch::nn_conv_transpose1d() when trained.
The parameter in_channels
is inferred as the second dimension of the input tensor.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchConvTranspose
-> PipeOpTorchConvTranspose1D
new()
Creates a new instance of this R6 class.
PipeOpTorchConvTranspose1D$new(id = "nn_conv_transpose1d", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchConvTranspose1D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_conv_transpose1d", kernel_size = 3, out_channels = 2)
pipeop
# The available parameters
pipeop$param_set
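To see how these parameters interact, the output length of a 1D transposed convolution can be computed directly from the usual shape rule. A minimal base-R sketch (the helper name is illustrative and not part of mlr3torch):

```r
# Output length of a 1D transposed convolution, following the shape rule:
# (l_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1)
#   + output_padding + 1
conv_transpose1d_out_len = function(l_in, kernel_size, stride = 1,
                                    padding = 0, output_padding = 0,
                                    dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

conv_transpose1d_out_len(10, kernel_size = 3, stride = 2)  # 21
```

Note that, unlike a regular convolution, increasing stride here enlarges the output.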
Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_conv_transpose2d() when trained.
The parameter in_channels
is inferred as the second dimension of the input tensor.
out_channels
:: integer(1)
Number of output channels produced by the convolution.
kernel_size
:: integer()
Size of the convolving kernel.
stride
:: integer()
Stride of the convolution. Default: 1.
padding
:: integer()
dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0.
output_padding
:: integer()
Additional size added to one side of the output shape. Default: 0.
groups
:: integer()
Number of blocked connections from input channels to output channels. Default: 1
bias
:: logical(1)
If TRUE, adds a learnable bias to the output. Default: TRUE.
dilation
:: integer()
Spacing between kernel elements. Default: 1.
padding_mode
:: character(1)
The padding mode. One of "zeros"
, "reflect"
, "replicate"
, or "circular"
. Default is "zeros"
.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchConvTranspose
-> PipeOpTorchConvTranspose2D
new()
Creates a new instance of this R6 class.
PipeOpTorchConvTranspose2D$new(id = "nn_conv_transpose2d", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchConvTranspose2D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_conv_transpose2d", kernel_size = 3, out_channels = 2)
pipeop
# The available parameters
pipeop$param_set
Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_conv_transpose3d() when trained.
The parameter in_channels
is inferred as the second dimension of the input tensor.
out_channels
:: integer(1)
Number of output channels produced by the convolution.
kernel_size
:: integer()
Size of the convolving kernel.
stride
:: integer()
Stride of the convolution. Default: 1.
padding
:: integer()
dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0.
output_padding
:: integer()
Additional size added to one side of the output shape. Default: 0.
groups
:: integer()
Number of blocked connections from input channels to output channels. Default: 1
bias
:: logical(1)
If TRUE, adds a learnable bias to the output. Default: TRUE.
dilation
:: integer()
Spacing between kernel elements. Default: 1.
padding_mode
:: character(1)
The padding mode. One of "zeros"
, "reflect"
, "replicate"
, or "circular"
. Default is "zeros"
.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchConvTranspose
-> PipeOpTorchConvTranspose3D
new()
Creates a new instance of this R6 class.
PipeOpTorchConvTranspose3D$new(id = "nn_conv_transpose3d", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchConvTranspose3D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_conv_transpose3d", kernel_size = 3, out_channels = 2)
pipeop
# The available parameters
pipeop$param_set
Applies a 1D convolution over an input signal composed of several input planes.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
out_channels
:: integer(1)
Number of channels produced by the convolution.
kernel_size
:: integer()
Size of the convolving kernel.
stride
:: integer()
Stride of the convolution. The default is 1.
padding
:: integer()
Zero-padding added to both sides of the input. Default: 0.
groups
:: integer()
Number of blocked connections from input channels to output channels. Default: 1
bias
:: logical(1)
If TRUE, adds a learnable bias to the output. Default: TRUE.
dilation
:: integer()
Spacing between kernel elements. Default: 1.
padding_mode
:: character(1)
The padding mode. One of "zeros"
, "reflect"
, "replicate"
, or "circular"
. Default is "zeros"
.
Calls torch::nn_conv1d()
when trained.
The parameter in_channels
is inferred from the second dimension of the input tensor.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchConv
-> PipeOpTorchConv1D
new()
Creates a new instance of this R6 class.
PipeOpTorchConv1D$new(id = "nn_conv1d", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchConv1D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_conv1d", kernel_size = 10, out_channels = 1)
pipeop
# The available parameters
pipeop$param_set
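The output length implied by these parameters follows the standard convolution shape rule; a minimal base-R sketch (the helper name is illustrative, not part of the package):

```r
# Output length of a 1D convolution:
# floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1
conv1d_out_len = function(l_in, kernel_size, stride = 1,
                          padding = 0, dilation = 1) {
  floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1
}

conv1d_out_len(100, kernel_size = 10)  # 91
```

The same rule applies per spatial dimension for the 2D and 3D variants.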
Applies a 2D convolution over an input image composed of several input planes.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_conv2d()
when trained.
The parameter in_channels
is inferred from the second dimension of the input tensor.
out_channels
:: integer(1)
Number of channels produced by the convolution.
kernel_size
:: integer()
Size of the convolving kernel.
stride
:: integer()
Stride of the convolution. The default is 1.
padding
:: integer()
Zero-padding added to both sides of the input. Default: 0.
groups
:: integer()
Number of blocked connections from input channels to output channels. Default: 1
bias
:: logical(1)
If TRUE, adds a learnable bias to the output. Default: TRUE.
dilation
:: integer()
Spacing between kernel elements. Default: 1.
padding_mode
:: character(1)
The padding mode. One of "zeros"
, "reflect"
, "replicate"
, or "circular"
. Default is "zeros"
.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchConv
-> PipeOpTorchConv2D
new()
Creates a new instance of this R6 class.
PipeOpTorchConv2D$new(id = "nn_conv2d", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchConv2D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_conv2d", kernel_size = 10, out_channels = 1)
pipeop
# The available parameters
pipeop$param_set
Applies a 3D convolution over an input image composed of several input planes.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_conv3d()
when trained.
The parameter in_channels
is inferred from the second dimension of the input tensor.
out_channels
:: integer(1)
Number of channels produced by the convolution.
kernel_size
:: integer()
Size of the convolving kernel.
stride
:: integer()
Stride of the convolution. The default is 1.
padding
:: integer()
Zero-padding added to both sides of the input. Default: 0.
groups
:: integer()
Number of blocked connections from input channels to output channels. Default: 1
bias
:: logical(1)
If TRUE, adds a learnable bias to the output. Default: TRUE.
dilation
:: integer()
Spacing between kernel elements. Default: 1.
padding_mode
:: character(1)
The padding mode. One of "zeros"
, "reflect"
, "replicate"
, or "circular"
. Default is "zeros"
.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchConv
-> PipeOpTorchConv3D
new()
Creates a new instance of this R6 class.
PipeOpTorchConv3D$new(id = "nn_conv3d", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchConv3D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_conv3d", kernel_size = 10, out_channels = 1)
pipeop
# The available parameters
pipeop$param_set
During training, randomly zeroes some of the elements of the input
tensor with probability p
using samples from a Bernoulli
distribution.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
p
:: numeric(1)
Probability of an element to be zeroed. Default: 0.5.
inplace
:: logical(1)
If set to TRUE
, will do this operation in-place. Default: FALSE
.
Calls torch::nn_dropout()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchDropout
new()
Creates a new instance of this R6 class.
PipeOpTorchDropout$new(id = "nn_dropout", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchDropout$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_dropout")
pipeop
# The available parameters
pipeop$param_set
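The meaning of p can be illustrated with inverted dropout: during training each element is zeroed with probability p and the survivors are rescaled by 1 / (1 - p), so the expected activation is unchanged; at prediction time dropout is the identity. A minimal base-R sketch (illustrative only, not the torch implementation):

```r
# Inverted dropout: zero each element with probability p and rescale
# the remaining ones so that the expected output equals the input.
dropout_train = function(x, p = 0.5) {
  keep = runif(length(x)) >= p
  x * keep / (1 - p)
}

set.seed(1)
mean(dropout_train(rep(1, 1e5), p = 0.3))  # close to 1
```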
Applies element-wise, ELU(x) = max(0, x) + min(0, alpha * (exp(x) - 1)).
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
alpha
:: numeric(1)
The alpha value for the ELU formulation. Default: 1.0
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE
.
Calls torch::nn_elu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchELU
new()
Creates a new instance of this R6 class.
PipeOpTorchELU$new(id = "nn_elu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchELU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_elu")
pipeop
# The available parameters
pipeop$param_set
Flattens a contiguous range of dimensions of the input tensor. For use with nn_sequential.
One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.
The state is the value calculated by the public method $shapes_out().
Parts of this documentation have been copied or adapted from the documentation of torch.
start_dim
:: integer(1)
At which dimension to start flattening. Default is 2.
end_dim
:: integer(1)
At which dimension to stop flattening. Default is -1.
Calls torch::nn_flatten()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchFlatten
new()
Creates a new instance of this R6 class.
PipeOpTorchFlatten$new(id = "nn_flatten", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchFlatten$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_flatten")
pipeop
# The available parameters
pipeop$param_set
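The effect of start_dim and end_dim can be illustrated directly with the underlying torch module (a minimal sketch; assumes the torch package is installed):

```r
library(torch)

# a batch of 2 observations, each of shape 3 x 4
x = torch_randn(2, 3, 4)

# default: flatten everything from dimension 2 onwards, keeping the batch dimension
flat = nn_flatten(start_dim = 2, end_dim = -1)
y = flat(x)
dim(y)  # 2 12
```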
Applies the GELU (Gaussian Error Linear Unit) function element-wise.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
approximate
:: character(1)
Whether to use an approximation algorithm. Default is "none"
.
Calls torch::nn_gelu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchGELU
new()
Creates a new instance of this R6 class.
PipeOpTorchGELU$new(id = "nn_gelu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchGELU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_gelu")
pipeop
# The available parameters
pipeop$param_set
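The exact GELU is x * Φ(x), where Φ is the standard normal CDF. A quick check against pnorm() with torch directly (a sketch; assumes the torch package is installed):

```r
library(torch)

gelu = nn_gelu()
x = c(-1, 0, 1)
y = as.numeric(gelu(torch_tensor(x)))

# matches x * pnorm(x) elementwise
reference = x * pnorm(x)
max(abs(y - reference))  # close to 0
```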
The gated linear unit. Computes GLU(a, b) = a * sigmoid(b), where the input is split into halves a and b along the dimension dim.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
dim
:: integer(1)
Dimension on which to split the input. Default: -1
Calls torch::nn_glu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchGLU
new()
Creates a new instance of this R6 class.
PipeOpTorchGLU$new(id = "nn_glu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchGLU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_glu")
pipeop
# The available parameters
pipeop$param_set
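GLU splits the input in half along dim and gates one half with the sigmoid of the other, so that dimension shrinks by a factor of two. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

glu = nn_glu(dim = -1)
x = torch_randn(2, 6)
y = glu(x)
dim(y)  # 2 3

# equivalent to a * sigmoid(b) with a = x[, 1:3] and b = x[, 4:6]
manual = x[, 1:3] * torch_sigmoid(x[, 4:6])
```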
Applies the hard shrinkage function element-wise
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
lambd
:: numeric(1)
The lambda value for the Hardshrink formulation. Default 0.5.
Calls torch::nn_hardshrink()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchHardShrink
new()
Creates a new instance of this R6 class.
PipeOpTorchHardShrink$new(id = "nn_hardshrink", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchHardShrink$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_hardshrink")
pipeop
# The available parameters
pipeop$param_set
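Hardshrink zeroes out all values in [-lambd, lambd] and passes everything else through unchanged. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

hs = nn_hardshrink(lambd = 0.5)
y = as.numeric(hs(torch_tensor(c(-1, -0.3, 0, 0.4, 2))))
y  # -1  0  0  0  2
```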
Applies the element-wise Hardsigmoid function: 0 for x &lt;= -3, 1 for x &gt;= 3, and x/6 + 1/2 otherwise.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls torch::nn_hardsigmoid()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchHardSigmoid
new()
Creates a new instance of this R6 class.
PipeOpTorchHardSigmoid$new(id = "nn_hardsigmoid", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchHardSigmoid$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_hardsigmoid")
pipeop
# The available parameters
pipeop$param_set
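Hardsigmoid is the piecewise-linear approximation x/6 + 1/2, saturating at x = -3 and x = 3. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

hsig = nn_hardsigmoid()
y = as.numeric(hsig(torch_tensor(c(-4, -3, 0, 3, 4))))
y  # 0.0 0.0 0.5 1.0 1.0
```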
Applies the HardTanh function element-wise.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
min_val
:: numeric(1)
Minimum value of the linear region range. Default: -1.
max_val
:: numeric(1)
Maximum value of the linear region range. Default: 1.
inplace
:: logical(1)
Can optionally do the operation in-place. Default: FALSE
.
Calls torch::nn_hardtanh()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchHardTanh
new()
Creates a new instance of this R6 class.
PipeOpTorchHardTanh$new(id = "nn_hardtanh", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchHardTanh$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_hardtanh")
pipeop
# The available parameters
pipeop$param_set
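HardTanh clamps its input to the linear region [min_val, max_val]. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

ht = nn_hardtanh(min_val = -1, max_val = 1)
y = as.numeric(ht(torch_tensor(c(-2, -0.5, 0.5, 2))))
y  # -1.0 -0.5 0.5 1.0
```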
Output head for classification and regression.
NOTE
Because the method $shapes_out()
does not have access to the task, it returns c(NA, NA)
.
When this PipeOp
is trained, however, the model descriptor has the correct output shape.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
bias
:: logical(1)
Whether to use a bias. Default is TRUE
.
Calls torch::nn_linear()
with the input and output features inferred from the input shape / task.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchHead
new()
Creates a new instance of this R6 class.
PipeOpTorchHead$new(id = "nn_head", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchHead$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_head")
pipeop
# The available parameters
pipeop$param_set
Applies Layer Normalization for last certain number of dimensions.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
dims
:: integer(1)
The number of dimensions over which will be normalized (starting from the last dimension).
elementwise_affine
:: logical(1)
Whether to learn affine-linear parameters initialized to 1
for weights and to 0
for biases.
The default is TRUE
.
eps
:: numeric(1)
A value added to the denominator for numerical stability.
Calls torch::nn_layer_norm()
when trained.
The parameter normalized_shape
is inferred as the last dims
dimensions of the input shape.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchLayerNorm
new()
Creates a new instance of this R6 class.
PipeOpTorchLayerNorm$new(id = "nn_layer_norm", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchLayerNorm$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_layer_norm", dims = 1)
pipeop
# The available parameters
pipeop$param_set
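With dims = 1, normalization runs over the last dimension only. With torch directly, the normalized shape must be given explicitly, which the PipeOp infers for you (a sketch; assumes the torch package is installed):

```r
library(torch)

x = torch_randn(2, 4, 5)
# normalize over the last dimension (length 5), as dims = 1 would infer
ln = nn_layer_norm(normalized_shape = 5)
y = ln(x)

# each length-5 slice now has (approximately) zero mean
m = as.numeric(torch_mean(y, dim = -1))
max(abs(m))  # close to 0
```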
Applies element-wise: LeakyReLU(x) = max(0, x) + negative_slope * min(0, x).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
negative_slope
:: numeric(1)
Controls the angle of the negative slope. Default: 1e-2.
inplace
:: logical(1)
Can optionally do the operation in-place. Default: FALSE.
Calls torch::nn_leaky_relu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchLeakyReLU
new()
Creates a new instance of this R6 class.
PipeOpTorchLeakyReLU$new(id = "nn_leaky_relu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchLeakyReLU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_leaky_relu")
pipeop
# The available parameters
pipeop$param_set
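For negative inputs the output is scaled by negative_slope instead of being zeroed. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

lrelu = nn_leaky_relu(negative_slope = 0.01)
y = as.numeric(lrelu(torch_tensor(c(-2, 0, 3))))
y  # -0.02 0.00 3.00
```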
Applies a linear transformation to the incoming data: y = xA^T + b.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
out_features
:: integer(1)
The output features of the linear layer.
bias
:: logical(1)
Whether to use a bias.
Default is TRUE
.
Calls torch::nn_linear()
when trained where the parameter in_features
is inferred as the second
to last dimension of the input tensor.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchLinear
new()
Creates a new instance of this R6 class.
PipeOpTorchLinear$new(id = "nn_linear", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchLinear$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_linear", out_features = 10)
pipeop
# The available parameters
pipeop$param_set
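The PipeOp only asks for out_features because in_features is read off the incoming shape. With torch directly, both must be given (a sketch; assumes the torch package is installed):

```r
library(torch)

# 4 input features (as the PipeOp would infer from the incoming shape), 10 outputs
lin = nn_linear(in_features = 4, out_features = 10)
x = torch_randn(2, 4)
dim(lin(x))  # 2 10
```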
Applies element-wise: LogSigmoid(x) = log(1 / (1 + exp(-x))).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls torch::nn_log_sigmoid()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchLogSigmoid
new()
Creates a new instance of this R6 class.
PipeOpTorchLogSigmoid$new(id = "nn_log_sigmoid", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchLogSigmoid$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_log_sigmoid")
pipeop
# The available parameters
pipeop$param_set
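At x = 0, LogSigmoid gives log(0.5), approximately -0.693. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

lsig = nn_log_sigmoid()
y = as.numeric(lsig(torch_tensor(0)))
y  # about -0.6931
```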
Applies a 1D max pooling over an input signal composed of several input planes.
If return_indices
is FALSE
during construction, there is one input channel 'input' and one output channel 'output'.
If return_indices
is TRUE
, there are two output channels 'output' and 'indices'.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
kernel_size
:: integer()
The size of the window. Can be a single number or a vector.
stride
:: integer()
The stride of the window. Can be a single number or a vector. Default: kernel_size.
padding
:: integer()
Implicit zero padding on both sides of the input. Can be a single number or a vector. Default: 0.
dilation
:: integer()
Controls the spacing between the kernel points; also known as the à trous algorithm. Default: 1
ceil_mode
:: logical(1)
If TRUE, uses ceiling instead of floor to compute the output shape. Default: FALSE.
Calls torch::nn_max_pool1d()
during training.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchMaxPool
-> PipeOpTorchMaxPool1D
new()
Creates a new instance of this R6 class.
PipeOpTorchMaxPool1D$new( id = "nn_max_pool1d", return_indices = FALSE, param_vals = list() )
id
(character(1)
)
Identifier of the resulting object.
return_indices
(logical(1)
)
Whether to return the indices.
If this is TRUE
, there are two output channels "output"
and "indices"
.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMaxPool1D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_max_pool1d")
pipeop
# The available parameters
pipeop$param_set
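The output length follows the usual pooling formula floor((L + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1), with ceiling instead of floor when ceil_mode is TRUE. A sketch with torch directly (assumes the torch package is installed):

```r
library(torch)

x = torch_randn(1, 1, 5)  # (batch, channels, length)

floor_pool = nn_max_pool1d(kernel_size = 2)                   # stride defaults to kernel_size
ceil_pool  = nn_max_pool1d(kernel_size = 2, ceil_mode = TRUE)

dim(floor_pool(x))  # 1 1 2
dim(ceil_pool(x))   # 1 1 3
```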
Applies a 2D max pooling over an input signal composed of several input planes.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_max_pool2d()
during training.
If return_indices
is FALSE
during construction, there is one input channel 'input' and one output channel 'output'.
If return_indices
is TRUE
, there are two output channels 'output' and 'indices'.
For an explanation see PipeOpTorch
.
kernel_size
:: integer()
The size of the window. Can be a single number or a vector.
stride
:: integer()
The stride of the window. Can be a single number or a vector. Default: kernel_size
padding
:: integer()
Implicit zero padding on both sides of the input. Can be a single number or a vector. Default: 0
dilation
:: integer()
Controls the spacing between the kernel points; also known as the à trous algorithm. Default: 1
ceil_mode
:: logical(1)
When TRUE, ceiling instead of floor is used to compute the output shape. Default: FALSE
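To make the interplay of these parameters concrete, the output size of one spatial dimension can be computed with the standard pooling formula from the torch documentation. The helper pool_out_dim() below is purely illustrative and not part of mlr3torch:

```r
# Output size of one spatial dimension of a max pooling layer,
# following the formula from the torch documentation.
pool_out_dim = function(input, kernel_size, stride = kernel_size,
                        padding = 0, dilation = 1, ceil_mode = FALSE) {
  x = (input + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
  if (ceil_mode) ceiling(x) else floor(x)
}

pool_out_dim(32, kernel_size = 2)                               # 16
pool_out_dim(8, kernel_size = 3, stride = 2)                    # 3
pool_out_dim(8, kernel_size = 3, stride = 2, ceil_mode = TRUE)  # 4
```

With ceil_mode = TRUE a partially covered window at the border still produces an output element, which is why the last call yields one extra position.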
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchMaxPool
-> PipeOpTorchMaxPool2D
new()
Creates a new instance of this R6 class.
PipeOpTorchMaxPool2D$new( id = "nn_max_pool2d", return_indices = FALSE, param_vals = list() )
id
(character(1)
)
Identifier of the resulting object.
return_indices
(logical(1)
)
Whether to return the indices.
If this is TRUE
, there are two output channels "output"
and "indices"
.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMaxPool2D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_max_pool2d")
pipeop
# The available parameters
pipeop$param_set
Applies a 3D max pooling over an input signal composed of several input planes.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
Calls torch::nn_max_pool3d()
during training.
If return_indices
is FALSE
during construction, there is one input channel 'input' and one output channel 'output'.
If return_indices
is TRUE
, there are two output channels 'output' and 'indices'.
For an explanation see PipeOpTorch
.
kernel_size
:: integer()
The size of the window. Can be a single number or a vector.
stride
:: integer()
The stride of the window. Can be a single number or a vector. Default: kernel_size
padding
:: integer()
Implicit zero padding on both sides of the input. Can be a single number or a vector. Default: 0
dilation
:: integer()
Controls the spacing between the kernel points; also known as the à trous algorithm. Default: 1
ceil_mode
:: logical(1)
When TRUE, ceiling instead of floor is used to compute the output shape. Default: FALSE
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchMaxPool
-> PipeOpTorchMaxPool3D
new()
Creates a new instance of this R6 class.
PipeOpTorchMaxPool3D$new( id = "nn_max_pool3d", return_indices = FALSE, param_vals = list() )
id
(character(1)
)
Identifier of the resulting object.
return_indices
(logical(1)
)
Whether to return the indices.
If this is TRUE
, there are two output channels "output"
and "indices"
.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMaxPool3D$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_max_pool3d")
pipeop
# The available parameters
pipeop$param_set
Base class for merge operations such as addition (PipeOpTorchMergeSum
), multiplication
(PipeOpTorchMergeProd
), or concatenation (PipeOpTorchMergeCat
).
The state is the value calculated by the public method shapes_out()
.
PipeOpTorchMerge
has either a vararg input channel if the constructor argument innum
is not set, or
input channels "input1"
, ..., "input<innum>"
. There is one output channel "output"
.
For an explanation see PipeOpTorch
.
See the respective child class.
By default, the private$.shapes_out()
method outputs the broadcasted tensors. There are two things to be aware of:
NA
s are assumed to match (this should almost always be the batch size in the first dimension).
Tensors are expected to have the same number of dimensions, i.e. missing dimensions are not filled with 1s.
The reason is again that the first dimension should be the batch dimension.
This private method can be overwritten by PipeOpTorch
s inheriting from this class.
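A minimal base-R sketch of this broadcasting rule, assuming shapes are given as integer vectors with NA marking the (batch) dimension; broadcast_shapes() is a hypothetical helper for illustration, not an mlr3torch function:

```r
# Sketch of the shape broadcasting described above: NAs are treated as
# matching, and the remaining dimensions broadcast if they are equal or 1.
# Dimensions are not padded with 1s, so the lengths must agree.
broadcast_shapes = function(s1, s2) {
  stopifnot(length(s1) == length(s2))
  ok = is.na(s1) | is.na(s2) | s1 == s2 | s1 == 1L | s2 == 1L
  stopifnot(all(ok))
  pmax(s1, s2)  # NA stays NA, otherwise the larger extent wins
}

broadcast_shapes(c(NA, 3, 1), c(NA, 1, 4))  # NA 3 4
```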
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchMerge
new()
Creates a new instance of this R6 class.
PipeOpTorchMerge$new( id, module_generator, param_set = ps(), innum = 0, param_vals = list() )
id
(character(1)
)
Identifier of the resulting object.
module_generator
(nn_module_generator
)
The torch module generator.
param_set
(ParamSet
)
The parameter set.
innum
(integer(1)
)
The number of inputs. Default is 0, which means there is one vararg input channel.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMerge$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
Concatenates multiple tensors along a given dimension. No broadcasting rules are applied here; you must reshape the tensors beforehand so that they have the same shape (except in the concatenation dimension).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
PipeOpTorchMerge
has either a vararg input channel if the constructor argument innum
is not set, or
input channels "input1"
, ..., "input<innum>"
. There is one output channel "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
dim
:: integer(1)
The dimension along which to concatenate the tensors.
Calls nn_merge_cat()
when trained.
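As an illustration of the resulting shape, concatenation sums the sizes along dim while all other dimensions must agree. The helper cat_shape() is hypothetical and only sketches this rule:

```r
# Shape of the result of concatenating several tensors along `dim`:
# all non-concatenated dimensions must match (NA batch dims are skipped),
# and the sizes along `dim` are summed.
cat_shape = function(shapes, dim) {
  out = shapes[[1]]
  for (s in shapes[-1]) {
    stopifnot(all(s[-dim] == out[-dim], na.rm = TRUE))
    out[dim] = out[dim] + s[dim]
  }
  out
}

cat_shape(list(c(16, 10), c(16, 5)), dim = 2)  # 16 15
```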
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchMerge
-> PipeOpTorchMergeCat
new()
Creates a new instance of this R6 class.
PipeOpTorchMergeCat$new(id = "nn_merge_cat", innum = 0, param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
innum
(integer(1)
)
The number of inputs. Default is 0, which means there is one vararg input channel.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
speak()
What does the cat say?
PipeOpTorchMergeCat$speak()
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMergeCat$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_merge_cat")
pipeop
# The available parameters
pipeop$param_set
Calculates the product of all input tensors.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
PipeOpTorchMerge
has either a vararg input channel if the constructor argument innum
is not set, or
input channels "input1"
, ..., "input<innum>"
. There is one output channel "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls nn_merge_prod()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchMerge
-> PipeOpTorchMergeProd
new()
Creates a new instance of this R6 class.
PipeOpTorchMergeProd$new(id = "nn_merge_prod", innum = 0, param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
innum
(integer(1)
)
The number of inputs. Default is 0, which means there is one vararg input channel.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMergeProd$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_merge_prod")
pipeop
# The available parameters
pipeop$param_set
Calculates the sum of all input tensors.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
PipeOpTorchMerge
has either a vararg input channel if the constructor argument innum
is not set, or
input channels "input1"
, ..., "input<innum>"
. There is one output channel "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls nn_merge_sum()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> mlr3torch::PipeOpTorchMerge
-> PipeOpTorchMergeSum
new()
Creates a new instance of this R6 class.
PipeOpTorchMergeSum$new(id = "nn_merge_sum", innum = 0, param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
innum
(integer(1)
)
The number of inputs. Default is 0, which means there is one vararg input channel.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchMergeSum$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_merge_sum")
pipeop
# The available parameters
pipeop$param_set
Applies element-wise the function PReLU(x) = max(0, x) + weight * min(0, x),
where weight is a learnable parameter.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
num_parameters
:: integer(1)
:
Number of parameters a to learn. Although it takes an integer as input, only two values are legitimate: 1, or the
number of channels at input. Default: 1.
init
:: numeric(1)
The initial value of a. Default: 0.25.
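A plain base-R sketch of the PReLU function itself, with a fixed scalar weight rather than a learnable parameter (prelu() here is illustrative only, not the mlr3torch API):

```r
# PReLU as a plain R function: prelu(x) = max(0, x) + weight * min(0, x).
# The default weight matches the documented init value of 0.25.
prelu = function(x, weight = 0.25) pmax(0, x) + weight * pmin(0, x)

prelu(c(-2, 0, 3))  # -0.5 0.0 3.0 (negative inputs are scaled, not clipped)
```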
Calls torch::nn_prelu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchPReLU
new()
Creates a new instance of this R6 class.
PipeOpTorchPReLU$new(id = "nn_prelu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchPReLU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_prelu")
pipeop
# The available parameters
pipeop$param_set
Applies the rectified linear unit function element-wise.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE
.
Calls torch::nn_relu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchReLU
new()
Creates a new instance of this R6 class.
PipeOpTorchReLU$new(id = "nn_relu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchReLU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_relu")
pipeop
# The available parameters
pipeop$param_set
Applies the element-wise function ReLU6(x) = min(max(0, x), 6).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
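The clamping behaviour can be sketched in base R (relu6() below is illustrative only, not the mlr3torch API):

```r
# ReLU6 as a plain R function: relu6(x) = min(max(0, x), 6),
# i.e. a ReLU whose output is additionally capped at 6.
relu6 = function(x) pmin(pmax(0, x), 6)

relu6(c(-3, 2, 9))  # 0 2 6
```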
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE
.
Calls torch::nn_relu6()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchReLU6
new()
Creates a new instance of this R6 class.
PipeOpTorchReLU6$new(id = "nn_relu6", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchReLU6$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_relu6")
pipeop
# The available parameters
pipeop$param_set
Reshape a tensor to the given shape.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
shape
:: integer()
The desired output shape. At most one unknown dimension can be specified as -1
or NA
.
Calls nn_reshape()
when trained.
This internally calls torch::torch_reshape()
with the given shape
.
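How the single unknown dimension is inferred can be sketched in base R (infer_shape() is a hypothetical helper mirroring what torch_reshape() does with -1, not an mlr3torch function):

```r
# Infer the single unknown dimension (-1 or NA) of a target shape
# from the total number of elements.
infer_shape = function(shape, n_elements) {
  unknown = which(is.na(shape) | shape == -1)
  stopifnot(length(unknown) <= 1)  # at most one unknown dimension
  if (length(unknown) == 1) {
    shape[unknown] = n_elements / prod(shape[-unknown])
  }
  shape
}

infer_shape(c(-1, 4), n_elements = 12)  # 3 4
```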
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchReshape
new()
Creates a new instance of this R6 class.
PipeOpTorchReshape$new(id = "nn_reshape", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchReshape$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

# Construct the PipeOp
pipeop = po("nn_reshape")
pipeop
# The available parameters
pipeop$param_set
Randomized leaky ReLU.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
lower
:: numeric(1)
Lower bound of the uniform distribution. Default: 1/8.
upper
:: numeric(1)
Upper bound of the uniform distribution. Default: 1/3.
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE
.
Calls torch::nn_rrelu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchRReLU
new()
Creates a new instance of this R6 class.
PipeOpTorchRReLU$new(id = "nn_rrelu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchRReLU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid,
mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink,
mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_rrelu")
pipeop
# The available parameters
pipeop$param_set
Applies element-wise,
SELU(x) = scale * (max(0, x) + min(0, alpha * (exp(x) - 1))),
with alpha = 1.6732632423543772 and scale = 1.0507009873554805.
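As a quick numeric sanity check of these constants, a minimal Python sketch (illustrative only, independent of the R package):

```python
import math

# SELU constants as documented for torch's nn_selu().
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # SELU(x) = scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
    return SCALE * (max(0.0, x) + min(0.0, ALPHA * (math.exp(x) - 1.0)))

# Positive inputs are simply scaled linearly ...
assert abs(selu(2.0) - SCALE * 2.0) < 1e-12
# ... while large negative inputs saturate towards -scale * alpha.
assert abs(selu(-50.0) + SCALE * ALPHA) < 1e-9
```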
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE
.
Calls torch::nn_selu()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSELU
new()
Creates a new instance of this R6 class.
PipeOpTorchSELU$new(id = "nn_selu", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSELU$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_sigmoid,
mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink,
mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_selu")
pipeop
# The available parameters
pipeop$param_set
Applies element-wise the function Sigmoid(x) = 1 / (1 + exp(-x)).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls torch::nn_sigmoid()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSigmoid
new()
Creates a new instance of this R6 class.
PipeOpTorchSigmoid$new(id = "nn_sigmoid", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSigmoid$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink,
mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_sigmoid")
pipeop
# The available parameters
pipeop$param_set
Applies a softmax function, Softmax(x_i) = exp(x_i) / sum_j exp(x_j), rescaling the elements along the given dimension so that they lie in the range [0, 1] and sum to 1.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
dim
:: integer(1)
A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
Calls torch::nn_softmax()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSoftmax
new()
Creates a new instance of this R6 class.
PipeOpTorchSoftmax$new(id = "nn_softmax", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSoftmax$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink,
mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_softmax")
pipeop
# The available parameters
pipeop$param_set
Applies element-wise the function Softplus(x) = (1 / beta) * log(1 + exp(beta * x)).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
beta
:: numeric(1)
The beta value for the Softplus formulation. Default: 1
threshold
:: numeric(1)
Values above this revert to a linear function. Default: 20
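The interplay of beta and threshold can be sketched numerically; a minimal Python illustration (not part of the package):

```python
import math

def softplus(x, beta = 1.0, threshold = 20.0):
    # Softplus(x) = (1 / beta) * log(1 + exp(beta * x)); above the threshold
    # the function reverts to the identity for numerical stability.
    if beta * x > threshold:
        return x
    return math.log1p(math.exp(beta * x)) / beta

# Smooth approximation of ReLU: strictly positive, close to x for large x.
assert abs(softplus(0.0) - math.log(2.0)) < 1e-12
assert softplus(30.0) == 30.0  # linear regime above the threshold
```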
Calls torch::nn_softplus()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSoftPlus
new()
Creates a new instance of this R6 class.
PipeOpTorchSoftPlus$new(id = "nn_softplus", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSoftPlus$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softshrink,
mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_softplus")
pipeop
# The available parameters
pipeop$param_set
Applies the soft shrinkage function elementwise: SoftShrink(x) = x - lambd if x > lambd, x + lambd if x < -lambd, and 0 otherwise.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
lambd
:: numeric(1)
The lambda (must be no less than zero) value for the Softshrink formulation. Default: 0.5
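The piecewise definition can be checked with a small Python sketch (illustrative only, independent of the R package):

```python
def softshrink(x, lambd = 0.5):
    # SoftShrink(x) = x - lambd if x > lambd; x + lambd if x < -lambd; else 0.
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0

assert softshrink(2.0) == 1.5    # shrunk towards zero
assert softshrink(-2.0) == -1.5
assert softshrink(0.3) == 0.0    # inside [-lambd, lambd] everything maps to 0
```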
Calls torch::nn_softshrink()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSoftShrink
new()
Creates a new instance of this R6 class.
PipeOpTorchSoftShrink$new(id = "nn_softshrink", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSoftShrink$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_softshrink")
pipeop
# The available parameters
pipeop$param_set
Applies element-wise the function SoftSign(x) = x / (1 + |x|).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls torch::nn_softsign()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSoftSign
new()
Creates a new instance of this R6 class.
PipeOpTorchSoftSign$new(id = "nn_softsign", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSoftSign$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softshrink, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_softsign")
pipeop
# The available parameters
pipeop$param_set
Squeezes a tensor by calling torch::torch_squeeze()
with the given dimension dim
.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
dim
:: integer(1)
The dimension to squeeze. If NULL
, all dimensions of size 1 will be squeezed.
Negative values are interpreted downwards from the last dimension.
Calls nn_squeeze()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchSqueeze
new()
Creates a new instance of this R6 class.
PipeOpTorchSqueeze$new(id = "nn_squeeze", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchSqueeze$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_squeeze")
pipeop
# The available parameters
pipeop$param_set
Applies the element-wise function Tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls torch::nn_tanh()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchTanh
new()
Creates a new instance of this R6 class.
PipeOpTorchTanh$new(id = "nn_tanh", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchTanh$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_tanh")
pipeop
# The available parameters
pipeop$param_set
Applies element-wise Tanhshrink(x) = x - tanh(x).
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
No parameters.
Calls torch::nn_tanhshrink()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchTanhShrink
new()
Creates a new instance of this R6 class.
PipeOpTorchTanhShrink$new(id = "nn_tanhshrink", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchTanhShrink$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_tanhshrink")
pipeop
# The available parameters
pipeop$param_set
Thresholds each element of the input Tensor.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
threshold
:: numeric(1)
The value to threshold at.
value
:: numeric(1)
The value to replace with.
inplace
:: logical(1)
Whether to do the operation in-place. Default: FALSE.
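The thresholding rule (y = x if x > threshold, else value) can be sketched as follows in Python (illustrative only, independent of the R package):

```python
def threshold_fn(x, threshold, value):
    # Elements above the threshold pass through; the rest are replaced.
    return x if x > threshold else value

assert threshold_fn(3.0, threshold = 1.0, value = 2.0) == 3.0
assert threshold_fn(0.5, threshold = 1.0, value = 2.0) == 2.0
```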
Calls torch::nn_threshold()
when trained.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchThreshold
new()
Creates a new instance of this R6 class.
PipeOpTorchThreshold$new(id = "nn_threshold", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchThreshold$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_unsqueeze,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp
pipeop = po("nn_threshold", threshold = 1, value = 2)
pipeop
# The available parameters
pipeop$param_set
Unsqueeze a Tensor
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method $shapes_out()
.
Parts of this documentation have been copied or adapted from the documentation of torch.
dim
:: integer(1)
The dimension which to unsqueeze. Negative values are interpreted downwards from the last dimension.
Calls nn_unsqueeze()
when trained.
This internally calls torch::torch_unsqueeze()
.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorch
-> PipeOpTorchUnsqueeze
new()
Creates a new instance of this R6 class.
PipeOpTorchUnsqueeze$new(id = "nn_unsqueeze", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchUnsqueeze$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d,
mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d,
mlr_pipeops_nn_block, mlr_pipeops_nn_celu,
mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d,
mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d,
mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten,
mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu,
mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh,
mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu,
mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid,
mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d,
mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum,
mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6,
mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu,
mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus,
mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze,
mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold,
mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ,
mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num,
mlr_pipeops_torch_loss, mlr_pipeops_torch_model,
mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
# Construct the PipeOp pipeop = po("nn_unsqueeze") pipeop # The available parameters pipeop$param_set
This PipeOp
can be used to preprocess (one or more) lazy_tensor
columns contained in an mlr3::Task
.
The preprocessing function is specified as construction argument fn
and additional arguments to this
function can be defined through the PipeOp
's parameter set.
The preprocessing is done per column, i.e. the number of lazy tensor output columns is equal
to the number of lazy tensor input columns.
To create custom preprocessing PipeOp
s you can use pipeop_preproc_torch
.
In addition to specifying the construction arguments, you can overwrite the private .shapes_out()
method.
If you don't overwrite it, the output shapes are assumed to be unknown (NULL
).
.shapes_out(shapes_in, param_vals, task)
(list(), list(), Task or NULL) -> list()
This private method calculates the output shapes of the lazy tensor columns that are created from applying
the preprocessing function with the provided parameter values (param_vals).
The task is very rarely needed, but if it is used, it should be checked that it is not NULL.
This private method only has the responsibility to calculate the output shapes for one input column, i.e. the
input shapes_in
can be assumed to have exactly one shape vector for which it must calculate the output shapes
and return it as a list()
of length 1.
It can also be assumed that the shape is not NULL
(i.e. not unknown).
However, the first dimension can be NA
, i.e. unknown, as is the case for the batch dimension.
See PipeOpTaskPreproc
.
In addition to state elements from PipeOpTaskPreprocSimple
,
the state also contains the $param_vals
that were set during training.
In addition to the parameters inherited from PipeOpTaskPreproc
as well as those specified during construction
as the argument param_set
there are the following parameters:
stages
:: character(1)
The stages during which to apply the preprocessing.
Can be one of "train"
, "predict"
or "both"
.
The initial value of this parameter is set to "train"
when the PipeOp
's id starts with "augment_"
and
to "both"
otherwise.
Note that the preprocessing that is applied during $predict()
uses the parameters that were set during
$train()
and not those that are set when performing the prediction.
During $train()
/ $predict()
, a PipeOpModule
with one input and one output channel is created.
The pipeop applies the function fn
to the input tensor while additionally
passing the parameter values (minus stages
and affect_columns
) to fn
.
The preprocessing graph of the lazy tensor columns is shallowly cloned and the PipeOpModule
is added.
This is done to avoid modifying user input and means that identical PipeOpModule
s can be part of different
preprocessing graphs. This is only possible because the created PipeOpModule
is stateless.
At a later point in the graph, preprocessing graphs will be merged if possible to avoid unnecessary computation.
This is best illustrated by example:
One lazy tensor column's preprocessing graph is A -> B
.
Then, two branches are created B -> C
and B -> D
, creating two preprocessing graphs
A -> B -> C
and A -> B -> D
. When loading the data, we want to run the preprocessing only once, i.e. we don't
want to run the A -> B
part twice. For this reason, task_dataset()
will try to merge graphs and cache
results from graphs. However, only graphs using the same dataset can currently be merged.
Also, the shapes created during $train()
and $predict()
might differ.
To avoid the creation of graphs where the predict shapes are incompatible with the train shapes,
the hypothetical predict shapes are already calculated during $train()
(this is why the parameters that are set
during train are also used during predict) and the PipeOpTorchModel
will check the train and predict shapes for
compatibility before starting the training.
Otherwise, this mechanism is very similar to the ModelDescriptor
construct.
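The stage-dependent shapes can be inspected via the public shapes_out() method. A minimal sketch, assuming the "trafo_resize" preprocessing pipeop shipped with mlr3torch and an image-like input of shape (NA, 3, 64, 64):

```r
library(mlr3torch)
# trafo_resize resizes the trailing (spatial) dimensions of a tensor
po_resize = po("trafo_resize", size = c(10, 10))
# query the hypothetical output shapes for each stage separately
po_resize$shapes_out(list(c(NA, 3, 64, 64)), stage = "train")
po_resize$shapes_out(list(c(NA, 3, 64, 64)), stage = "predict")
```

Because "trafo_resize" applies during both stages by default, both calls should report the same output shape; for an "augment_"-prefixed pipeop with stages = "train", the predict-stage shapes would simply equal the input shapes.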
mlr3pipelines::PipeOp
-> mlr3pipelines::PipeOpTaskPreproc
-> PipeOpTaskPreprocTorch
fn
The preprocessing function.
rowwise
Whether the preprocessing is applied rowwise.
new()
Creates a new instance of this R6
class.
PipeOpTaskPreprocTorch$new( fn, id = "preproc_torch", param_vals = list(), param_set = ps(), packages = character(0), rowwise = FALSE, stages_init = NULL, tags = NULL )
fn
(function
or character(2)
)
The preprocessing function. Must not modify its input in-place.
If it is a character(2)
, the first element should be the namespace and the second element the name.
When the preprocessing function is applied to the tensor, the tensor will be passed by position as the first argument.
If the param_set
is inferred (left as NULL
) it is assumed that the first argument is the torch_tensor
.
id
(character(1)
)
The id of the new object.
param_vals
(named list()
)
Parameter values to be set after construction.
param_set
(ParamSet
)
In case the function fn
takes additional parameters besides a torch_tensor
they can be
specified as parameters. None of the parameters can have the "predict"
tag.
All tags should include "train"
.
packages
(character()
)
The packages the preprocessing function depends on.
rowwise
(logical(1)
)
Whether the preprocessing function is applied rowwise (and then concatenated by row) or directly to the whole
tensor. In the first case there is no batch dimension.
stages_init
(character(1)
)
Initial value for the stages
parameter.
tags
(character()
)
Tags for the pipeop.
shapes_out()
Calculates the output shapes that would result in applying the preprocessing to one or more lazy tensor columns with the provided shape. Names are ignored and only order matters. It uses the parameter values that are currently set.
PipeOpTaskPreprocTorch$shapes_out(shapes_in, stage = NULL, task = NULL)
shapes_in
(list()
of (integer()
or NULL
))
The input shapes of the lazy tensors.
NULL
indicates that the shape is unknown.
First dimension must be NA
(if it is not NULL
).
stage
(character(1)
)
The stage: either "train"
or "predict"
.
task
(Task
or NULL
)
The task, which is very rarely needed.
list()
of (integer()
or NULL
)
clone()
The objects of this class are cloneable with this method.
PipeOpTaskPreprocTorch$clone(deep = FALSE)
deep
Whether to make a deep clone.
# Creating a simple task
d = data.table(
  x1 = as_lazy_tensor(rnorm(10)),
  x2 = as_lazy_tensor(rnorm(10)),
  x3 = as_lazy_tensor(as.double(1:10)),
  y = rnorm(10)
)
taskin = as_task_regr(d, target = "y")

# Creating a simple preprocessing pipeop
po_simple = po("preproc_torch",
  # get rid of environment baggage
  fn = mlr3misc::crate(function(x, a) x + a),
  param_set = paradox::ps(a = paradox::p_int(tags = c("train", "required")))
)
po_simple$param_set$set_values(
  a = 100,
  affect_columns = selector_name(c("x1", "x2")),
  stages = "both" # use during train and predict
)
taskout_train = po_simple$train(list(taskin))[[1L]]
materialize(taskout_train$data(cols = c("x1", "x2")), rbind = TRUE)

taskout_predict_noaug = po_simple$predict(list(taskin))[[1L]]
materialize(taskout_predict_noaug$data(cols = c("x1", "x2")), rbind = TRUE)

po_simple$param_set$set_values(
  stages = "train"
)
# transformation is not applied
taskout_predict_aug = po_simple$predict(list(taskin))[[1L]]
materialize(taskout_predict_aug$data(cols = c("x1", "x2")), rbind = TRUE)

# Creating a more complex preprocessing PipeOp
PipeOpPreprocTorchPoly = R6::R6Class("PipeOpPreprocTorchPoly",
  inherit = PipeOpTaskPreprocTorch,
  public = list(
    initialize = function(id = "preproc_poly", param_vals = list()) {
      param_set = paradox::ps(
        n_degree = paradox::p_int(lower = 1L, tags = c("train", "required"))
      )
      param_set$set_values(
        n_degree = 1L
      )
      fn = mlr3misc::crate(function(x, n_degree) {
        torch::torch_cat(
          lapply(seq_len(n_degree), function(d) torch::torch_pow(x, d)),
          dim = 2L
        )
      })
      super$initialize(
        fn = fn,
        id = id,
        packages = character(0),
        param_vals = param_vals,
        param_set = param_set,
        stages_init = "both"
      )
    }
  ),
  private = list(
    .shapes_out = function(shapes_in, param_vals, task) {
      # shapes_in is a list of length 1 containing the shapes
      checkmate::assert_true(length(shapes_in[[1L]]) == 2L)
      if (shapes_in[[1L]][2L] != 1L) {
        stop("Input shape must be (NA, 1)")
      }
      list(c(NA, param_vals$n_degree))
    }
  )
)
po_poly = PipeOpPreprocTorchPoly$new(
  param_vals = list(n_degree = 3L, affect_columns = selector_name("x3"))
)
po_poly$shapes_out(list(c(NA, 1L)), stage = "train")
taskout = po_poly$train(list(taskin))[[1L]]
materialize(taskout$data(cols = "x3"), rbind = TRUE)
PipeOpTorch
is the base class for all PipeOp
s that represent
neural network layers in a Graph
.
During training, it generates a PipeOpModule
that wraps an nn_module
and attaches it
to the architecture, which is also represented as a Graph
consisting mostly of PipeOpModule
s
and PipeOpNOP
s.
While the former Graph
operates on ModelDescriptor
s, the latter operates on tensors.
The relationship between a PipeOpTorch
and a PipeOpModule
is similar to the
relationship between a nn_module_generator
(like nn_linear
) and a
nn_module
(like the output of nn_linear(...)
).
A crucial difference is that the PipeOpTorch
infers auxiliary parameters (like in_features
for
nn_linear
) automatically from the intermediate tensor shapes that are being communicated through the
ModelDescriptor
.
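This automatic inference can be observed through the public shapes_out() method. A minimal sketch using the "nn_linear" pipeop (exact print format of the result may vary):

```r
library(mlr3torch)
# only out_features must be specified; in_features is inferred from the input shape
po_linear = po("nn_linear", out_features = 8L)
# for inputs of shape (NA, 4), i.e. unknown batch size and 4 features:
po_linear$shapes_out(list(c(NA, 4L)))
# should yield a named list with output shape c(NA, 8)
```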
During prediction, PipeOpTorch
takes in a Task
in each channel and outputs the same new
Task
resulting from their feature union in each channel.
If there is only one input and output channel, the task is simply piped through.
When inheriting from this class, one should overload either the private$.shapes_out()
and the
private$.shape_dependent_params()
methods, or overload private$.make_module()
.
.make_module(shapes_in, param_vals, task)
(list()
, list()
) -> nn_module
This private method is called to generate the nn_module
that is passed as argument module
to
PipeOpModule
. It must be overwritten when no module_generator
is provided.
If left as is, it calls the provided module_generator
with the arguments obtained by
the private method .shape_dependent_params()
.
.shapes_out(shapes_in, param_vals, task)
(list()
, list()
, Task
or NULL
) -> named list()
This private method gets a list of numeric
vectors (shapes_in
), the parameter values (param_vals
),
as well as an (optional) Task
.
The shapes_in
can be assumed to be in the same order as the input names of the PipeOp
.
The output shapes must be in the same order as the output names of the PipeOp
.
In case the output shapes depend on the task (as is the case for PipeOpTorchHead
), the function should return
valid output shapes (possibly containing NA
s) whether or not the task
argument is provided.
.shape_dependent_params(shapes_in, param_vals, task)
(list()
, list()
) -> named list()
This private method has the same inputs as .shapes_out
.
If .make_module()
is not overwritten, it constructs the arguments passed to module_generator
.
Usually this means that it must infer the auxiliary parameters that can be inferred from the input shapes
and add them to the user-supplied parameter values (param_vals
).
During training, all inputs and outputs are of class ModelDescriptor
.
During prediction, all input and output channels are of class Task
.
The state is the value calculated by the public method shapes_out()
.
The ParamSet
is specified by the child class inheriting from PipeOpTorch
.
Usually the parameters are the arguments of the wrapped nn_module
minus the auxiliary parameter that can
be automatically inferred from the shapes of the input tensors.
During training, the PipeOpTorch
creates a PipeOpModule
for the given parameter specification and the
input shapes from the incoming ModelDescriptor
s using the private method .make_module()
.
The input shapes are provided by the slot pointer_shape
of the incoming ModelDescriptor
s.
The channel names of this PipeOpModule
are identical to the channel names of the generating PipeOpTorch
.
A model descriptor union of all incoming ModelDescriptor
s is then created.
Note that this modifies the graph
of the first ModelDescriptor
in place for efficiency.
The PipeOpModule
is added to the graph
slot of this union and the edges that connect the
sending PipeOpModule
s to the input channel of this PipeOpModule
are added to the graph.
This is possible because every incoming ModelDescriptor
contains the information about the
id
and the channel
name of the sending PipeOp
in the slot pointer
.
The new graph in the model_descriptor_union
represents the current state of the neural network
architecture. It is structurally similar to the subgraph that consists of all pipeops of class PipeOpTorch
and
PipeOpTorchIngress
that are ancestors of this PipeOpTorch
.
For the output, a shallow copy of the ModelDescriptor
is created and the pointer
and
pointer_shape
are updated accordingly. The shallow copy means that all ModelDescriptor
s point to the same
Graph
which allows the graph to be modified by-reference in different parts of the code.
mlr3pipelines::PipeOp
-> PipeOpTorch
module_generator
(nn_module_generator
or NULL
)
The module generator wrapped by this PipeOpTorch
. If NULL
, the private method
private$.make_module(shapes_in, param_vals)
must be overwritten; see section 'Inheriting'.
Do not change this after construction.
new()
Creates a new instance of this R6 class.
PipeOpTorch$new( id, module_generator, param_set = ps(), param_vals = list(), inname = "input", outname = "output", packages = "torch", tags = NULL )
id
(character(1)
)
Identifier of the resulting object.
module_generator
(nn_module_generator
)
The torch module generator.
param_set
(ParamSet
)
The parameter set.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
inname
(character()
)
The names of the PipeOp
's input channels. These will be the input channels of the generated PipeOpModule
.
Unless the wrapped module_generator
's forward method (if present) has the argument ...
, inname
must be
identical to those argument names in order to avoid any ambiguity.
If the forward method has the argument ...
, the order of the input channels determines how the tensors
will be passed to the wrapped nn_module
.
If left as NULL
(default), the argument module_generator
must be given and the argument names of the
module_generator
's forward function are set as inname
.
outname
(character()
)
The names of the output channels. These will be the output channels of the generated PipeOpModule
and therefore also the names of the list returned by its $train()
.
In case there is more than one output channel, the nn_module
that is constructed by this
PipeOp
during training must return a named list()
, where the names of the list are the
names of the output channels. The default is "output"
.
packages
(character()
)
The R packages this object depends on.
tags
(character()
)
The tags of the PipeOp
. The tag "torch"
is always added.
shapes_out()
Calculates the output shapes for the given input shapes, parameters and task.
PipeOpTorch$shapes_out(shapes_in, task = NULL)
shapes_in
(list()
of integer()
)
The input shapes, which must be in the same order as the input channel names of the PipeOp
.
task
(Task
or NULL
)
The task, which is very rarely used (default is NULL
). An exception is PipeOpTorchHead
.
A named list()
containing the output shapes. The names are the names of the output channels of
the PipeOp
.
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
## Creating a neural network
# In torch
task = tsk("iris")
network_generator = torch::nn_module(
  initialize = function(task, d_hidden) {
    d_in = length(task$feature_names)
    self$linear = torch::nn_linear(d_in, d_hidden)
    self$output = if (task$task_type == "regr") {
      torch::nn_linear(d_hidden, 1)
    } else if (task$task_type == "classif") {
      torch::nn_linear(d_hidden, length(task$class_names))
    }
  },
  forward = function(x) {
    x = self$linear(x)
    x = torch::nnf_relu(x)
    self$output(x)
  }
)

network = network_generator(task, d_hidden = 50)
x = torch::torch_tensor(as.matrix(task$data(1, task$feature_names)))
y = torch::with_no_grad(network(x))

# In mlr3torch
network_generator = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 50) %>>%
  po("nn_head")
md = network_generator$train(task)[[1L]]
network = model_descriptor_to_module(md)
y = torch::with_no_grad(network(torch_ingress_num.input = x))

## Implementing a custom PipeOpTorch
# defining a custom module
nn_custom = nn_module("nn_custom",
  initialize = function(d_in1, d_in2, d_out1, d_out2, bias = TRUE) {
    self$linear1 = nn_linear(d_in1, d_out1, bias)
    self$linear2 = nn_linear(d_in2, d_out2, bias)
  },
  forward = function(input1, input2) {
    output1 = self$linear1(input1)
    output2 = self$linear2(input2)
    list(output1 = output1, output2 = output2)
  }
)

# wrapping the module into a custom PipeOpTorch
library(paradox)
PipeOpTorchCustom = R6::R6Class("PipeOpTorchCustom",
  inherit = PipeOpTorch,
  public = list(
    initialize = function(id = "nn_custom", param_vals = list()) {
      param_set = ps(
        d_out1 = p_int(lower = 1, tags = c("required", "train")),
        d_out2 = p_int(lower = 1, tags = c("required", "train")),
        bias = p_lgl(default = TRUE, tags = "train")
      )
      super$initialize(
        id = id,
        param_vals = param_vals,
        param_set = param_set,
        inname = c("input1", "input2"),
        outname = c("output1", "output2"),
        module_generator = nn_custom
      )
    }
  ),
  private = list(
    .shape_dependent_params = function(shapes_in, param_vals, task) {
      c(param_vals,
        list(d_in1 = tail(shapes_in[["input1"]], 1)),
        d_in2 = tail(shapes_in[["input2"]], 1)
      )
    },
    .shapes_out = function(shapes_in, param_vals, task) {
      list(
        input1 = c(head(shapes_in[["input1"]], -1), param_vals$d_out1),
        input2 = c(head(shapes_in[["input2"]], -1), param_vals$d_out2)
      )
    }
  )
)

## Training
# generate input
task = tsk("iris")
task1 = task$clone()$select(paste0("Sepal.", c("Length", "Width")))
task2 = task$clone()$select(paste0("Petal.", c("Length", "Width")))
graph = gunion(list(po("torch_ingress_num_1"), po("torch_ingress_num_2")))
mds_in = graph$train(list(task1, task2), single_input = FALSE)

mds_in[[1L]][c("graph", "task", "ingress", "pointer", "pointer_shape")]
mds_in[[2L]][c("graph", "task", "ingress", "pointer", "pointer_shape")]

# creating the PipeOpTorch and training it
po_torch = PipeOpTorchCustom$new()
po_torch$param_set$values = list(d_out1 = 10, d_out2 = 20)
train_input = list(input1 = mds_in[[1L]], input2 = mds_in[[2L]])
mds_out = do.call(po_torch$train, args = list(input = train_input))
po_torch$state

# the new model descriptors
# the resulting graphs are identical
identical(mds_out[[1L]]$graph, mds_out[[2L]]$graph)
# note that as a side-effect, one of the input graphs is also modified in-place for efficiency
mds_in[[1L]]$graph$edges

# The new task has both Sepal and Petal features
identical(mds_out[[1L]]$task, mds_out[[2L]]$task)
mds_out[[2L]]$task

# The new ingress slot contains all ingressors
identical(mds_out[[1L]]$ingress, mds_out[[2L]]$ingress)
mds_out[[1L]]$ingress

# The pointer and pointer_shape slots are different
mds_out[[1L]]$pointer
mds_out[[2L]]$pointer
mds_out[[1L]]$pointer_shape
mds_out[[2L]]$pointer_shape

## Prediction
predict_input = list(input1 = task1, input2 = task2)
tasks_out = do.call(po_torch$predict, args = list(input = predict_input))
identical(tasks_out[[1L]], tasks_out[[2L]])
Configures the callbacks of a deep learning model.
There is one input channel "input"
and one output channel "output"
.
During training, the channels are of class ModelDescriptor
.
During prediction, the channels are of class Task
.
The state is the value calculated by the public method shapes_out()
.
The parameters are defined dynamically from the callbacks, where the id of the respective callbacks is the respective set id.
During training the callbacks are cloned and added to the ModelDescriptor
.
mlr3pipelines::PipeOp
-> PipeOpTorchCallbacks
new()
Creates a new instance of this R6 class.
PipeOpTorchCallbacks$new( callbacks = list(), id = "torch_callbacks", param_vals = list() )
callbacks
(list
of TorchCallback
s)
The callbacks (or something convertible via as_torch_callbacks()
).
Must have unique ids.
All callbacks are cloned during construction.
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchCallbacks$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_loss, mlr_pipeops_torch_optimizer, model_descriptor_union()
Other PipeOp: mlr_pipeops_module, mlr_pipeops_torch_optimizer
po_cb = po("torch_callbacks", "checkpoint")
po_cb$param_set
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$callbacks
mdout = po_cb$train(mdin)[[1L]]
mdout$callbacks
# Can be called again
po_cb1 = po("torch_callbacks", t_clbk("progress"))
mdout1 = po_cb1$train(list(mdout))[[1L]]
mdout1$callbacks
Use this as entry-point to mlr3torch-networks.
Unless you are an advanced user, you should not need to use this directly but PipeOpTorchIngressNumeric
,
PipeOpTorchIngressCategorical
or PipeOpTorchIngressLazyTensor
.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is set to the input shape.
Defined by the construction argument param_set
.
Creates an object of class TorchIngressToken
for the given task.
The purpose of this is to store the information on how to construct the torch dataloader from the task for this
entry point of the network.
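A sketch of how this looks in practice (hedged: the exact slots of the resulting TorchIngressToken may vary slightly across versions):

```r
library(mlr3torch)
task = tsk("mtcars")
# training the ingress pipeop yields a ModelDescriptor; its $ingress slot
# stores the TorchIngressToken created for this entry point
md = po("torch_ingress_num")$train(list(task))[[1L]]
ingress = md$ingress[[1L]]
ingress$features     # the feature names consumed by this entry point
ingress$batchgetter  # the function used to assemble tensor batches
```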
mlr3pipelines::PipeOp
-> PipeOpTorchIngress
feature_types
(character(1)
)
The features types that can be consumed by this PipeOpTorchIngress
.
new()
Creates a new instance of this R6 class.
PipeOpTorchIngress$new( id, param_set = ps(), param_vals = list(), packages = character(0), feature_types )
id
(character(1)
)
Identifier of the resulting object.
param_set
(ParamSet
)
The parameter set.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
packages
(character()
)
The R packages this object depends on.
feature_types
(character()
)
The feature types.
See mlr_reflections$task_feature_types
for available values.
Additionally, "lazy_tensor"
is supported.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchIngress$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps: mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
Ingress PipeOp that represents a categorical (factor()
, ordered()
and logical()
) entry point to a torch network.
select
:: logical(1)
Whether PipeOp
should select the supported feature types. Otherwise it will err on receiving tasks
with unsupported feature types.
Uses batchgetter_categ()
.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is set to the input shape.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorchIngress
-> PipeOpTorchIngressCategorical
new()
Creates a new instance of this R6 class.
PipeOpTorchIngressCategorical$new( id = "torch_ingress_categ", param_vals = list() )
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchIngressCategorical$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
Other Graph Network:
ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
graph = po("select", selector = selector_type("factor")) %>>%
  po("torch_ingress_categ")
task = tsk("german_credit")
# The output is a model descriptor
md = graph$train(task)[[1L]]
ingress = md$ingress[[1L]]
ingress$batchgetter(task$data(1, ingress$features), "cpu")
Ingress for a single lazy_tensor
column.
shape
:: integer()
The shape of the tensor, where the first dimension (batch) must be NA
.
When it is not specified, the lazy tensor input column needs to have a known shape.
The returned batchgetter materializes the lazy tensor column to a tensor.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is set to the input shape.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorchIngress
-> PipeOpTorchIngressLazyTensor
new()
Creates a new instance of this R6 class.
PipeOpTorchIngressLazyTensor$new( id = "torch_ingress_ltnsr", param_vals = list() )
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchIngressLazyTensor$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
Other Graph Network:
ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
po_ingress = po("torch_ingress_ltnsr")
task = tsk("lazy_iris")
md = po_ingress$train(list(task))[[1L]]
ingress = md$ingress
x_batch = ingress[[1L]]$batchgetter(data = task$data(1, "x"), device = "cpu", cache = NULL)
x_batch
# Now we try a lazy tensor with unknown shape, i.e. the shapes between the rows can differ
ds = dataset(
  initialize = function() self$x = list(torch_randn(3, 10, 10), torch_randn(3, 8, 8)),
  .getitem = function(i) list(x = self$x[[i]]),
  .length = function() 2)()
task_unknown = as_task_regr(data.table(
  x = as_lazy_tensor(ds, dataset_shapes = list(x = NULL)),
  y = rnorm(2)
), target = "y", id = "example2")
# This task (as it is) can NOT be processed by PipeOpTorchIngressLazyTensor,
# so it first needs to be preprocessed
po_resize = po("trafo_resize", size = c(6, 6))
task_unknown_resize = po_resize$train(list(task_unknown))[[1L]]
# Printing the transformed column still shows unknown shapes,
# because the preprocessing pipeop cannot infer them;
# however, we know that the shape is now (3, 6, 6) for all rows
task_unknown_resize$data(1:2, "x")
po_ingress$param_set$set_values(shape = c(NA, 3, 6, 6))
md2 = po_ingress$train(list(task_unknown_resize))[[1L]]
ingress2 = md2$ingress
x_batch2 = ingress2[[1L]]$batchgetter(
  data = task_unknown_resize$data(1:2, "x"),
  device = "cpu",
  cache = NULL
)
x_batch2
Ingress PipeOp that represents a numeric (integer()
and numeric()
) entry point to a torch network.
Uses batchgetter_num()
.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is set to the input shape.
mlr3pipelines::PipeOp
-> mlr3torch::PipeOpTorchIngress
-> PipeOpTorchIngressNumeric
new()
Creates a new instance of this R6 class.
PipeOpTorchIngressNumeric$new(id = "torch_ingress_num", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchIngressNumeric$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Graph Network:
ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
graph = po("select", selector = selector_type(c("numeric", "integer"))) %>>%
  po("torch_ingress_num")
task = tsk("german_credit")
# The output is a model descriptor
md = graph$train(task)[[1L]]
ingress = md$ingress[[1L]]
ingress$batchgetter(task$data(1:5, ingress$features), "cpu")
Configures the loss of a deep learning model.
One input channel called "input"
and one output channel called "output"
.
For an explanation see PipeOpTorch
.
The state is the value calculated by the public method shapes_out()
.
The parameters are defined dynamically from the loss set during construction.
During training the loss is cloned and added to the ModelDescriptor
.
mlr3pipelines::PipeOp
-> PipeOpTorchLoss
new()
Creates a new instance of this R6 class.
PipeOpTorchLoss$new(loss, id = "torch_loss", param_vals = list())
loss
(TorchLoss
or character(1)
or nn_loss
)
The loss (or something convertible via as_torch_loss()
).
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchLoss$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
Other Model Configuration:
ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_optimizer, model_descriptor_union()
po_loss = po("torch_loss", loss = t_loss("cross_entropy"))
po_loss$param_set
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$loss
mdout = po_loss$train(mdin)[[1L]]
mdout$loss
Builds a Torch Learner from a ModelDescriptor
and trains it with the given parameter specification.
The task type must be specified during construction.
There is one input channel "input" that takes in a ModelDescriptor during training and a Task of the specified task_type during prediction.
The output is NULL
during training and a Prediction
of given task_type
during prediction.
A trained LearnerTorchModel
.
General:
The parameters of the optimizer, loss and callbacks,
prefixed with "opt."
, "loss."
and "cb.<callback id>."
respectively, as well as:
epochs
:: integer(1)
The number of epochs.
device
:: character(1)
The device. One of "auto"
, "cpu"
, or "cuda"
or other values defined in mlr_reflections$torch$devices
.
The value is initialized to "auto"
, which will select "cuda"
if possible, then try "mps"
and otherwise
fall back to "cpu"
.
num_threads
:: integer(1)
The number of threads for intraop parallelization (if device is "cpu").
This value is initialized to 1.
seed
:: integer(1)
or "random"
or NULL
The torch seed that is used during training and prediction.
This value is initialized to "random"
, which means that a random seed will be sampled at the beginning of the
training phase. This seed (either set or randomly sampled) is available via $model$seed
after training
and used during prediction.
Note that by setting the seed during the training phase this will mean that by default (i.e. when seed
is
"random"
), clones of the learner will use a different seed.
If set to NULL
, no seeding will be done.
Evaluation:
measures_train
:: Measure
or list()
of Measure
s.
Measures to be evaluated during training.
measures_valid
:: Measure
or list()
of Measure
s.
Measures to be evaluated during validation.
eval_freq
:: integer(1)
How often the train / validation predictions are evaluated using measures_train
/ measures_valid
.
This is initialized to 1
.
Note that the final model is always evaluated.
Early Stopping:
patience
:: integer(1)
This activates early stopping using the validation scores.
If the performance of a model does not improve for patience
evaluation steps, training is ended.
Note that the final model is stored in the learner, not the best model.
This is initialized to 0
, which means no early stopping.
The first entry from measures_valid
is used as the metric.
This also requires specifying the $validate field of the Learner, as well as measures_valid.
min_delta
:: double(1)
The minimum improvement threshold (>
) for early stopping.
Is initialized to 0.
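Taken together, the early-stopping parameters can be combined as in the following sketch. This is only an illustration: the learner "classif.mlp" is one of the torch learners shipped with mlr3torch, and the concrete parameter values are not recommendations.

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp",
  batch_size = 32,
  epochs = 100,                        # upper bound; early stopping may end training sooner
  measures_valid = msr("classif.ce"),  # first entry is used as the early-stopping metric
  patience = 5,                        # stop after 5 evaluations without improvement
  min_delta = 0.01                     # an improvement must exceed 0.01 to count
)
# early stopping additionally requires validation data via the $validate field
learner$validate = 0.3                 # hold out 30% of the training data
learner$train(tsk("iris"))
```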
Dataloader:
batch_size
:: integer(1)
The batch size (required).
shuffle
:: logical(1)
Whether to shuffle the instances in the dataset. Default is FALSE
.
This does not impact validation.
sampler
:: torch::sampler
Object that defines how the dataloader draws samples.
batch_sampler
:: torch::sampler
Object that defines how the dataloader draws batches.
num_workers
:: integer(1)
The number of workers for data loading (batches are loaded in parallel).
The default is 0
, which means that data will be loaded in the main process.
collate_fn
:: function
How to merge a list of samples to form a batch.
pin_memory
:: logical(1)
Whether the dataloader copies tensors into CUDA pinned memory before returning them.
drop_last
:: logical(1)
Whether to drop the last training batch in each epoch during training. Default is FALSE
.
timeout
:: numeric(1)
The timeout value for collecting a batch from workers.
Negative values mean no timeout and the default is -1
.
worker_init_fn
:: function(id)
A function that receives the worker id (in [1, num_workers]) and is executed after seeding on the worker, but before data loading.
worker_globals
:: list()
| character()
When loading data in parallel, this allows to export globals to the workers.
If this is a character vector, the objects in the global environment with those names
are copied to the workers.
worker_packages
:: character()
Which packages to load on the workers.
Also see torch::dataloader for more information.
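As a sketch of how the dataloader parameters above can be configured, they are set through the learner's parameter set like any other hyperparameter (the learner id and values are illustrative):

```r
library(mlr3torch)

learner = lrn("regr.mlp", epochs = 10)
learner$param_set$set_values(
  batch_size  = 64,    # required
  shuffle     = TRUE,  # reshuffle training instances each epoch
  num_workers = 2,     # load batches in two background workers
  drop_last   = TRUE   # drop an incomplete final batch during training
)
```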
A LearnerTorchModel
is created by calling model_descriptor_to_learner()
on the
provided ModelDescriptor
that is received through the input channel.
Then the parameters are set according to the parameters specified in PipeOpTorchModel
and
its '$train() method is called on the [
Task][mlr3::Task] stored in the [
ModelDescriptor'].
mlr3pipelines::PipeOp
-> mlr3pipelines::PipeOpLearner
-> PipeOpTorchModel
new()
Creates a new instance of this R6 class.
PipeOpTorchModel$new(task_type, id = "torch_model", param_vals = list())
task_type
(character(1)
)
The task type of the model.
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchModel$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
Builds a torch classifier and trains it.
See LearnerTorch.
There is one input channel "input" that takes in a ModelDescriptor during training and a Task of the specified task_type during prediction.
The output is NULL during training and a Prediction of the given task_type during prediction.
A trained LearnerTorchModel
.
A LearnerTorchModel is created by calling model_descriptor_to_learner() on the provided ModelDescriptor that is received through the input channel.
Then the parameters are set according to the parameters specified in PipeOpTorchModel and its $train() method is called on the Task stored in the ModelDescriptor.
mlr3pipelines::PipeOp
-> mlr3pipelines::PipeOpLearner
-> mlr3torch::PipeOpTorchModel
-> PipeOpTorchModelClassif
new()
Creates a new instance of this R6 class.
PipeOpTorchModelClassif$new(id = "torch_model_classif", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchModelClassif$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_regr
# simple logistic regression
# configure the model descriptor
md = as_graph(po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  po("torch_loss", "cross_entropy") %>>%
  po("torch_optimizer", "adam"))$train(tsk("iris"))[[1L]]
print(md)
# build the learner from the model descriptor and train it
po_model = po("torch_model_classif", batch_size = 50, epochs = 1)
po_model$train(list(md))
po_model$state
Builds a torch regression model and trains it.
See LearnerTorch.
There is one input channel "input" that takes in a ModelDescriptor during training and a Task of the specified task_type during prediction.
The output is NULL during training and a Prediction of the given task_type during prediction.
A trained LearnerTorchModel
.
A LearnerTorchModel is created by calling model_descriptor_to_learner() on the provided ModelDescriptor that is received through the input channel.
Then the parameters are set according to the parameters specified in PipeOpTorchModel and its $train() method is called on the Task stored in the ModelDescriptor.
mlr3pipelines::PipeOp
-> mlr3pipelines::PipeOpLearner
-> mlr3torch::PipeOpTorchModel
-> PipeOpTorchModelRegr
new()
Creates a new instance of this R6 class.
PipeOpTorchModelRegr$new(id = "torch_model_regr", param_vals = list())
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchModelRegr$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOps:
mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif
# simple linear regression
# build the model descriptor
md = as_graph(po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  po("torch_loss", "mse") %>>%
  po("torch_optimizer", "adam"))$train(tsk("mtcars"))[[1L]]
print(md)
# build the learner from the model descriptor and train it
po_model = po("torch_model_regr", batch_size = 20, epochs = 1)
po_model$train(list(md))
po_model$state
Configures the optimizer of a deep learning model.
There is one input channel "input"
and one output channel "output"
.
During training, the channels are of class ModelDescriptor
.
During prediction, the channels are of class Task
.
The state is the value calculated by the public method shapes_out()
.
The parameters are defined dynamically from the optimizer that is set during construction.
During training, the optimizer is cloned and added to the ModelDescriptor
.
Note that the parameter set of the stored TorchOptimizer
is reference-identical to the parameter set of the
pipeop itself.
mlr3pipelines::PipeOp
-> PipeOpTorchOptimizer
new()
Creates a new instance of this R6 class.
PipeOpTorchOptimizer$new( optimizer = t_opt("adam"), id = "torch_optimizer", param_vals = list() )
optimizer
(TorchOptimizer
or character(1)
or torch_optimizer_generator
)
The optimizer (or something convertible via as_torch_optimizer()
).
id
(character(1)
)
Identifier of the resulting object.
param_vals
(list()
)
List of hyperparameter settings, overwriting the hyperparameter settings that would
otherwise be set during construction.
clone()
The objects of this class are cloneable with this method.
PipeOpTorchOptimizer$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other PipeOp:
mlr_pipeops_module, mlr_pipeops_torch_callbacks
Other Model Configuration:
ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, model_descriptor_union()
po_opt = po("torch_optimizer", "sgd", lr = 0.01)
po_opt$param_set
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$optimizer
mdout = po_opt$train(mdin)
mdout[[1L]]$optimizer
Calls torchvision::transform_adjust_brightness
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
brightness_factor | numeric | - | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_adjust_gamma
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
gamma | numeric | - | | |
gain | numeric | 1 | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_adjust_hue
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
hue_factor | numeric | - | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_adjust_saturation
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
saturation_factor | numeric | - | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Calls torchvision::transform_grayscale
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels | Range |
num_output_channels | integer | - | | |
stages | character | - | train, predict, both | - |
affect_columns | untyped | selector_all() | - | |
Does nothing.
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Calls torchvision::transform_normalize
,
see there for more information on the parameters.
The preprocessing is applied row wise (no batch dimension).
R6Class
inheriting from PipeOpTaskPreprocTorch
.
Id | Type | Default | Levels |
mean | untyped | - | |
std | untyped | - | |
stages | character | - | train, predict, both |
affect_columns | untyped | selector_all() | |
Calls torchvision::transform_pad
,
see there for more information on the parameters.
The preprocessing is applied row-wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels
padding | untyped | - | -
fill | untyped | 0 | -
padding_mode | character | constant | constant, edge, reflect, symmetric
stages | character | - | train, predict, both
affect_columns | untyped | selector_all() | -
Reshapes the tensor according to the parameter shape, by calling torch_reshape().
This preprocessing function is applied batch-wise.
R6Class inheriting from PipeOpTaskPreprocTorch.
shape :: integer()
The desired output shape. The first dimension is the batch dimension and should usually be -1.
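As an illustrative sketch, this operator might be applied to the "lazy_iris" task, whose feature column holds tensors of shape (4,); the pipeop id "trafo_reshape" and the exact materialized output are assumptions here, not guaranteed by this page:

```r
library(mlr3torch)
# Reshape the 4-element feature vectors to shape (2, 2);
# -1 keeps the batch dimension flexible.
po_reshape = po("trafo_reshape", shape = c(-1, 2, 2))
task = tsk("lazy_iris")
task_reshaped = po_reshape$train(list(task))[[1L]]
# materialize the first two rows to inspect the new shape
materialize(task_reshaped$data()$x[1:2], rbind = TRUE)$shape
```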
Calls torchvision::transform_resize, see there for more information on the parameters.
The preprocessing is applied to the whole batch.
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels
size | untyped | - | -
interpolation | character | 2 | Undefined, Bartlett, Blackman, Bohman, Box, Catrom, Cosine, Cubic, Gaussian, Hamming, ...
stages | character | - | train, predict, both
affect_columns | untyped | selector_all() | -
Calls torchvision::transform_rgb_to_grayscale, see there for more information on the parameters.
The preprocessing is applied row-wise (no batch dimension).
R6Class inheriting from PipeOpTaskPreprocTorch.
Id | Type | Default | Levels
stages | character | - | train, predict, both
affect_columns | untyped | selector_all() | -
A classification task for the popular datasets::iris data set. Just like the iris task, but the features are represented as one lazy tensor column.
R6::R6Class inheriting from mlr3::TaskClassif.
tsk("lazy_iris")
Task type: “classif”
Properties: “multiclass”
Has Missings: no
Target: “Species”
Features: “x”
Data Dimension: 150x3
https://en.wikipedia.org/wiki/Iris_flower_data_set
Anderson E (1936). “The Species Problem in Iris.” Annals of the Missouri Botanical Garden, 23(3), 457. doi:10.2307/2394164.
task = tsk("lazy_iris")
task
df = task$data()
materialize(df$x[1:6], rbind = TRUE)
Classic MNIST image classification.
The underlying DataBackend
contains columns "label"
, "image"
, "row_id"
, "split"
, where the last column
indicates whether the row belongs to the train or test set.
The first 60000 rows belong to the training set, the last 10000 rows to the test set.
tsk("mnist")
The task's backend is a DataBackendLazy
which will download the data once it is requested.
Other meta-data is already available before that.
You can cache these datasets by setting the mlr3torch.cache
option to TRUE
or to a specific path to be used
as the cache directory.
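For example, caching could be enabled before the task's data is first requested (the directory path below is purely illustrative):

```r
# cache downloaded task data in the default cache directory ...
options(mlr3torch.cache = TRUE)
# ... or in a specific folder of your choice
options(mlr3torch.cache = "/path/to/cache")
task = tsk("mnist")
```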
Task type: “classif”
Properties: “multiclass”
Has Missings: no
Target: “label”
Features: “image”
Data Dimension: 70000x4
https://torchvision.mlverse.org/reference/mnist_dataset.html
Lecun, Y., Bottou, L., Bengio, Y., Haffner, P. (1998). “Gradient-based learning applied to document recognition.” Proceedings of the IEEE, 86(11), 2278-2324. doi:10.1109/5.726791.
task = tsk("mnist")
task
Subset of the famous ImageNet dataset.
The data is obtained from torchvision::tiny_imagenet_dataset()
.
The underlying DataBackend
contains columns "class"
, "image"
, "..row_id"
, "split"
, where the last column
indicates whether the row belongs to the train, validation, or test set as provided by torchvision.
There are no labels for the test rows, so by default, these observations are inactive, which means that the task uses only 110000 of the 120000 observations that are defined in the underlying data backend.
tsk("tiny_imagenet")
The task's backend is a DataBackendLazy
which will download the data once it is requested.
Other meta-data is already available before that.
You can cache these datasets by setting the mlr3torch.cache
option to TRUE
or to a specific path to be used
as the cache directory.
Task type: “classif”
Properties: “multiclass”
Has Missings: no
Target: “class”
Features: “image”
Data Dimension: 120000x4
Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, Fei-Fei, Li (2009). “Imagenet: A large-scale hierarchical image database.” In 2009 IEEE conference on computer vision and pattern recognition, 248–255. IEEE.
task = tsk("tiny_imagenet")
task
A mlr3misc::Dictionary of torch callbacks.
Use t_clbk() to conveniently retrieve callbacks.
Can be converted to a data.table using as.data.table.
mlr3torch_callbacks
An object of class DictionaryMlr3torchCallbacks (inherits from Dictionary, R6) of length 13.
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
Other Dictionary: mlr3torch_losses, mlr3torch_optimizers, t_opt()
mlr3torch_callbacks$get("checkpoint")
# is the same as
t_clbk("checkpoint")
# convert to a data.table
as.data.table(mlr3torch_callbacks)
Dictionary of torch loss descriptors.
See t_loss() for conveniently retrieving a loss function.
Can be converted to a data.table using as.data.table.
mlr3torch_losses
An object of class DictionaryMlr3torchLosses (inherits from Dictionary, R6) of length 3.
cross_entropy, l1, mse
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
Other Dictionary: mlr3torch_callbacks, mlr3torch_optimizers, t_opt()
mlr3torch_losses$get("mse")
# is equivalent to
t_loss("mse")
# convert to a data.table
as.data.table(mlr3torch_losses)
Dictionary of torch optimizers.
Use t_opt() for conveniently retrieving optimizers.
Can be converted to a data.table using as.data.table.
mlr3torch_optimizers
An object of class DictionaryMlr3torchOptimizers (inherits from Dictionary, R6) of length 7.
adadelta, adagrad, adam, asgd, rmsprop, rprop, sgd
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, t_clbk(), t_loss(), t_opt()
Other Dictionary: mlr3torch_callbacks, mlr3torch_losses, t_opt()
mlr3torch_optimizers$get("adam")
# is equivalent to
t_opt("adam")
# convert to a data.table
as.data.table(mlr3torch_optimizers)
First a nn_graph
is created using model_descriptor_to_module
and then a learner is created from this
module and the remaining information from the model descriptor, which must include the optimizer and loss function
and optionally callbacks.
model_descriptor_to_learner(model_descriptor)
model_descriptor |
( |
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_module(), model_descriptor_union(), nn_graph()
Creates the nn_graph from a ModelDescriptor.
Mostly for internal use, since the ModelDescriptor is in most circumstances harder to use than just creating nn_graph directly.
model_descriptor_to_module(
  model_descriptor,
  output_pointers = NULL,
  list_output = FALSE
)
model_descriptor |
( |
output_pointers |
( |
list_output |
( |
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_union(), nn_graph()
This is a mostly internal function that is used in PipeOpTorchs with multiple input channels.
It creates the union of multiple ModelDescriptors:

graphs are combined (if they are not identical to begin with). The first entry's graph is modified by reference.
PipeOps with the same ID must be identical. No new input edges may be added to PipeOps.
Drops pointer / pointer_shape entries.
The new task is the feature union of the two incoming tasks.
The optimizer and loss of both ModelDescriptors must be identical.
Ingress tokens and callbacks are merged, where objects with the same "id" must be identical.
model_descriptor_union(md1, md2)
md1 |
( |
md2 |
( |
The requirement that no new input edges may be added to PipeOps is not theoretically necessary, but since we assume that the ModelDescriptor is being built from beginning to end (i.e. PipeOps never get new ancestors) we can make this assumption and simplify things. Otherwise we'd need to treat "..."-inputs specially.
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), nn_graph()
Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, mlr_pipeops_torch_optimizer
Represents a model; possibly a complete model, possibly one in the process of being built up.
This model takes input tensors of shapes shapes_in and pipes them through graph. Input shapes get mapped to input channels of graph.
Output shapes are named by the output channels of graph; it is also possible to represent no-ops on tensors, in which case names of input and output should be identical.
ModelDescriptor objects typically represent partial models being built up, in which case the pointer slot indicates a specific point in the graph that produces a tensor of shape pointer_shape, on which the graph should be extended.
It is allowed for the graph in this structure to be modified by-reference in different parts of the code. However, these modifications may never add edges with elements of the Graph as destination. In particular, no element of graph$input may be removed by reference, e.g. by adding an edge to the Graph that has the input channel of a PipeOp that was previously without parent as its destination.
In most cases it is better to create a specific ModelDescriptor by training a Graph consisting (mostly) of operators PipeOpTorchIngress, PipeOpTorch, PipeOpTorchLoss, PipeOpTorchOptimizer, and PipeOpTorchCallbacks.
A ModelDescriptor can be converted to a nn_graph via model_descriptor_to_module.
ModelDescriptor(
  graph,
  ingress,
  task,
  optimizer = NULL,
  loss = NULL,
  callbacks = NULL,
  pointer = NULL,
  pointer_shape = NULL
)
graph |
( |
ingress |
(uniquely named |
task |
( |
optimizer |
( |
loss |
( |
callbacks |
(A |
pointer |
( |
pointer_shape |
( |
(ModelDescriptor
)
Other Model Configuration: mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, mlr_pipeops_torch_optimizer, model_descriptor_union()
Other Graph Network: TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
Retrieve a neural network layer from the mlr_pipeops dictionary.
nn(.key, ...)
.key |
( |
... |
(any) |
po1 = po("nn_linear", id = "linear")
# is the same as:
po2 = nn("linear")
Represents a neural network using a Graph that usually contains mostly PipeOpModules.
nn_graph(graph, shapes_in, output_map = graph$output$name, list_output = FALSE)
graph |
|
shapes_in |
(named |
output_map |
( |
list_output |
( |
Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union()
graph = mlr3pipelines::Graph$new()
graph$add_pipeop(po("module_1", module = nn_linear(10, 20)), clone = FALSE)
graph$add_pipeop(po("module_2", module = nn_relu()), clone = FALSE)
graph$add_pipeop(po("module_3", module = nn_linear(20, 1)), clone = FALSE)
graph$add_edge("module_1", "module_2")
graph$add_edge("module_2", "module_3")
network = nn_graph(graph, shapes_in = list(module_1.input = c(NA, 10)))
x = torch_randn(16, 10)
network(module_1.input = x)
Concatenates multiple tensors on a given dimension. No broadcasting rules are applied here; you must reshape the tensors beforehand so that they have compatible shapes.
nn_merge_cat(dim = -1)
dim |
( |
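A minimal behavioral sketch (it is assumed here that the module's forward method accepts any number of tensors and concatenates them along dim):

```r
library(torch)
library(mlr3torch)
m = nn_merge_cat(dim = -1)
x = torch_randn(16, 3)
y = torch_randn(16, 5)
# concatenation along the last dimension should yield shape (16, 8)
m(x, y)$shape
```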
Calculates the product of all input tensors.
nn_merge_prod()
Calculates the sum of all input tensors.
nn_merge_sum()
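A small sketch contrasting the sum and product merge modules (it is assumed that both accept any number of tensors of identical shape):

```r
library(torch)
library(mlr3torch)
x = torch_ones(2, 3)
y = torch_full(c(2, 3), 2)
nn_merge_sum()(x, y)   # element-wise sum of the inputs
nn_merge_prod()(x, y)  # element-wise product of the inputs
```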
Reshape a tensor to the given shape.
nn_reshape(shape)
shape |
( |
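A minimal usage sketch (assuming shape is passed on to torch_reshape(), with -1 leaving the batch dimension flexible):

```r
library(torch)
library(mlr3torch)
m = nn_reshape(shape = c(-1, 4))
x = torch_randn(8, 2, 2)
# flattening the trailing dimensions should yield shape (8, 4)
m(x)$shape
```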
Squeezes a tensor by calling torch::torch_squeeze() with the given dimension dim.
nn_squeeze(dim)
dim |
( |
Unsqueezes a tensor by calling torch::torch_unsqueeze() with the given dimension dim.
nn_unsqueeze(dim)
dim |
( |
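A small sketch contrasting nn_squeeze() and nn_unsqueeze() (dim is assumed to be passed on to the corresponding torch function):

```r
library(torch)
library(mlr3torch)
x = torch_randn(16, 10)
x1 = nn_unsqueeze(dim = 2)(x)  # inserts a dimension: shape (16, 1, 10)
nn_squeeze(dim = 2)(x1)$shape  # removes it again: back to (16, 10)
```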
Function to create objects of class PipeOpTaskPreprocTorch in a more convenient way.
Start by reading the documentation of PipeOpTaskPreprocTorch.
pipeop_preproc_torch(
  id,
  fn,
  shapes_out = NULL,
  param_set = NULL,
  packages = character(0),
  rowwise = FALSE,
  parent_env = parent.frame(),
  stages_init = NULL,
  tags = NULL
)
id |
( |
fn |
( |
shapes_out |
( |
param_set |
( |
packages |
( |
rowwise |
( |
parent_env |
( |
stages_init |
( |
tags |
( |
An R6Class instance inheriting from PipeOpTaskPreprocTorch.
PipeOpPreprocExample = pipeop_preproc_torch("preproc_example", function(x, a) x + a)
po_example = PipeOpPreprocExample$new()
po_example$param_set
Retrieves one or more TorchCallbacks from mlr3torch_callbacks.
Works like mlr3::lrn() and mlr3::lrns().
t_clbk(.key, ...)
t_clbks(.keys)
.key |
( |
... |
(any) |
.keys |
( |
list()
of TorchCallback
s
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, torch_callback()
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_loss(), t_opt()
t_clbk("progress")
Retrieve one or more TorchLosses from mlr3torch_losses.
Works like mlr3::lrn() and mlr3::lrns().
t_loss(.key, ...)
t_losses(.keys, ...)
.key |
( |
... |
(any) |
.keys |
( |
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_opt()
t_loss("mse", reduction = "mean")
# get the dictionary
t_loss()
t_losses(c("mse", "l1"))
# get the dictionary
t_losses()
Retrieves one or more TorchOptimizers from mlr3torch_optimizers.
Works like mlr3::lrn() and mlr3::lrns().
t_opt(.key, ...)
t_opts(.keys, ...)
.key |
( |
... |
(any) |
.keys |
( |
Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss()
Other Dictionary: mlr3torch_callbacks, mlr3torch_losses, mlr3torch_optimizers
t_opt("adam", lr = 0.1)
# get the dictionary
t_opt()
t_opts(c("adam", "sgd"))
# get the dictionary
t_opts()
Creates a torch dataset from an mlr3 Task.
The resulting dataset's $.getbatch() method returns a list with elements x, y and .index:

x is a list of tensors, whose content is defined by the parameter feature_ingress_tokens.
y is the target variable and its content is defined by the parameter target_batchgetter.
.index is the index of the batch in the task's data.

The data is returned on the device specified by the parameter device.
task_dataset(task, feature_ingress_tokens, target_batchgetter = NULL, device)
task |
|
feature_ingress_tokens |
(named |
target_batchgetter |
( |
device |
( |
task = tsk("iris")
sepal_ingress = TorchIngressToken(
  features = c("Sepal.Length", "Sepal.Width"),
  batchgetter = batchgetter_num,
  shape = c(NA, 2)
)
petal_ingress = TorchIngressToken(
  features = c("Petal.Length", "Petal.Width"),
  batchgetter = batchgetter_num,
  shape = c(NA, 2)
)
ingress_tokens = list(sepal = sepal_ingress, petal = petal_ingress)
target_batchgetter = function(data, device) {
  torch_tensor(data = data[[1L]], dtype = torch_float32(), device)$unsqueeze(2)
}
dataset = task_dataset(task, ingress_tokens, target_batchgetter, "cpu")
batch = dataset$.getbatch(1:10)
batch
Convenience function to create a custom TorchCallback.
All arguments that are available in callback_set() are also available here.
For more information on how to correctly implement a new callback, see CallbackSet.
torch_callback(
  id,
  classname = paste0("CallbackSet", capitalize(id)),
  param_set = NULL,
  packages = NULL,
  label = capitalize(id),
  man = NULL,
  on_begin = NULL,
  on_end = NULL,
  on_exit = NULL,
  on_epoch_begin = NULL,
  on_before_valid = NULL,
  on_epoch_end = NULL,
  on_batch_begin = NULL,
  on_batch_end = NULL,
  on_after_backward = NULL,
  on_batch_valid_begin = NULL,
  on_batch_valid_end = NULL,
  on_valid_end = NULL,
  state_dict = NULL,
  load_state_dict = NULL,
  initialize = NULL,
  public = NULL,
  private = NULL,
  active = NULL,
  parent_env = parent.frame(),
  inherit = CallbackSet,
  lock_objects = FALSE
)
id |
( |
classname |
( |
param_set |
( |
packages |
( |
label |
( |
man |
( |
on_begin , on_end , on_epoch_begin , on_before_valid , on_epoch_end , on_batch_begin , on_batch_end , on_after_backward , on_batch_valid_begin , on_batch_valid_end , on_valid_end , on_exit
|
( |
state_dict |
( |
load_state_dict |
( |
initialize |
( |
public , private , active
|
( |
parent_env |
( |
inherit |
( |
lock_objects |
( |
It first creates an R6 class inheriting from CallbackSet (using callback_set()) and then wraps this generator in a TorchCallback that can be passed to a torch learner.
begin :: Run before the training loop begins.
epoch_begin :: Run at the beginning of each epoch.
batch_begin :: Run before the forward call.
after_backward :: Run after the backward call.
batch_end :: Run after the optimizer step.
batch_valid_begin :: Run before the forward call in the validation loop.
batch_valid_end :: Run after the forward call in the validation loop.
valid_end :: Run at the end of validation.
epoch_end :: Run at the end of each epoch.
end :: Run after the last epoch.
exit :: Run last, using on.exit().
Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk()
custom_tcb = torch_callback("custom",
  initialize = function(name) {
    self$name = name
  },
  on_begin = function() {
    cat("Hello", self$name, ", we will train for", self$ctx$total_epochs, "epochs.\n")
  },
  on_end = function() {
    cat("Training is done.")
  }
)
learner = lrn("classif.torch_featureless",
  batch_size = 16,
  epochs = 1,
  callbacks = custom_tcb,
  cb.custom.name = "Marie",
  device = "cpu"
)
task = tsk("iris")
learner$train(task)
This wraps a CallbackSet and annotates it with metadata, most importantly a ParamSet.
The callback is created for the given parameter values by calling the $generate() method.
This class is usually used to configure the callback of a torch learner, e.g. when constructing a learner or in a ModelDescriptor.
For a list of available callbacks, see mlr3torch_callbacks.
To conveniently retrieve a TorchCallback, use t_clbk().
Defined by the constructor argument param_set.
If no parameter set is provided during construction, the parameter set is constructed by creating a parameter for each argument of the wrapped callback generator, where the parameters are then of type ParamUty.
mlr3torch::TorchDescriptor
-> TorchCallback
new()
Creates a new instance of this R6 class.
TorchCallback$new( callback_generator, param_set = NULL, id = NULL, label = NULL, packages = NULL, man = NULL )
callback_generator
(R6ClassGenerator
)
The class generator for the callback that is being wrapped.
param_set
(ParamSet
or NULL
)
The parameter set. If NULL
(default) it is inferred from callback_generator
.
id
(character(1)
)
The id of the new object.
label
(character(1)
)
Label for the new instance.
packages
(character()
)
The R packages this object depends on.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
clone()
The objects of this class are cloneable with this method.
TorchCallback$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Callback: as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_context_torch, t_clbk(), torch_callback()
Other Torch Descriptor: TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
# Create a new torch callback from an existing callback set
torch_callback = TorchCallback$new(CallbackSetCheckpoint)
# The parameters are inferred
torch_callback$param_set
# Retrieve a torch callback from the dictionary
torch_callback = t_clbk("checkpoint",
  path = tempfile(),
  freq = 1
)
torch_callback
torch_callback$label
torch_callback$id
# open the help page of the wrapped callback set
# torch_callback$help()
# Create the callback set
callback = torch_callback$generate()
callback
# is the same as
CallbackSetCheckpoint$new(
  path = tempfile(),
  freq = 1
)
# Use in a learner
learner = lrn("regr.mlp", callbacks = t_clbk("checkpoint"))
# the parameters of the callback are added to the learner's parameter set
learner$param_set
Abstract base class from which TorchLoss, TorchOptimizer, and TorchCallback inherit.
This class wraps a generator (an R6Class generator or the torch version of such a generator) and annotates it with metadata such as a ParamSet, a label, an ID, packages, or a manual page.
The parameters are the construction arguments of the wrapped generator and the parameter $values are passed to the generator when calling the public method $generate().
Defined by the constructor argument param_set.
All parameters are tagged with "train", but this is done automatically during initialization.
label
(character(1)
)
Label for this object.
Can be used in tables, plots and text output instead of the ID.
param_set
(ParamSet
)
Set of hyperparameters.
packages
(character(1)
)
Set of required packages.
These packages are loaded, but not attached.
id
(character(1)
)
Identifier of the object.
Used in tables, plots and text output.
generator
The wrapped generator that is described.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
phash
(character(1)
)
Hash (unique identifier) for this partial object, excluding some components
which are varied systematically (e.g. the parameter values).
new()
Creates a new instance of this R6 class.
TorchDescriptor$new( generator, id = NULL, param_set = NULL, packages = NULL, label = NULL, man = NULL )
generator
The wrapped generator that is described.
id
(character(1)
)
The id of the new object.
param_set
(ParamSet
)
The parameter set.
packages
(character()
)
The R packages this object depends on.
label
(character(1)
)
Label for the new instance.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
print()
Prints the object
TorchDescriptor$print(...)
...
any
generate()
Calls the generator with the given parameter values.
TorchDescriptor$generate()
help()
Displays the help file of the wrapped object.
TorchDescriptor$help()
clone()
The objects of this class are cloneable with this method.
TorchDescriptor$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Torch Descriptor: TorchCallback, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
This function creates an object of S3 class "TorchIngressToken", which is an internal data structure.
It contains the (meta-)information of how a batch is generated from a Task and fed into an entry point of the neural network.
It is stored as the ingress field in a ModelDescriptor.
TorchIngressToken(features, batchgetter, shape)
features |
( |
batchgetter |
( |
shape |
( |
TorchIngressToken
object.
Other Graph Network: ModelDescriptor(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
# Define a task for which we want to define an ingress token
task = tsk("iris")
# We create an ingress token for two features, Sepal.Length and Petal.Length:
# We have to specify the features, the batchgetter and the shape
features = c("Sepal.Length", "Petal.Length")
# As a batchgetter we use batchgetter_num
batch_dt = task$data(rows = 1:10, cols = features)
batch_dt
batch_tensor = batchgetter_num(batch_dt, "cpu")
batch_tensor
# The shape is unknown in the first dimension (batch dimension)
ingress_token = TorchIngressToken(
  features = features,
  batchgetter = batchgetter_num,
  shape = c(NA, 2)
)
ingress_token
This wraps a torch::nn_loss and annotates it with metadata, most importantly a ParamSet.
The loss function is created for the given parameter values by calling the $generate() method.
This class is usually used to configure the loss function of a torch learner, e.g. when constructing a learner or in a ModelDescriptor.
For a list of available losses, see mlr3torch_losses.
Items from this dictionary can be retrieved using t_loss().
Defined by the constructor argument param_set.
If no parameter set is provided during construction, the parameter set is constructed by creating a parameter for each argument of the wrapped loss function, where the parameters are then of type ParamUty.
mlr3torch::TorchDescriptor
-> TorchLoss
task_types
(character()
)
The task types this loss supports.
new()
Creates a new instance of this R6 class.
TorchLoss$new( torch_loss, task_types = NULL, param_set = NULL, id = NULL, label = NULL, packages = NULL, man = NULL )
torch_loss
(nn_loss
)
The loss module.
task_types
(character()
)
The task types supported by this loss.
param_set
(ParamSet
or NULL
)
The parameter set. If NULL
(default) it is inferred from torch_loss
.
id
(character(1)
)
The id of the new object.
label
(character(1)
)
Label for the new instance.
packages
(character()
)
The R packages this object depends on.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
print()
Prints the object
TorchLoss$print(...)
...
any
clone()
The objects of this class are cloneable with this method.
TorchLoss$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Torch Descriptor:
TorchCallback
,
TorchDescriptor
,
TorchOptimizer
,
as_torch_callbacks()
,
as_torch_loss()
,
as_torch_optimizer()
,
mlr3torch_losses
,
mlr3torch_optimizers
,
t_clbk()
,
t_loss()
,
t_opt()
# Create a new torch loss
torch_loss = TorchLoss$new(torch_loss = nn_mse_loss, task_types = "regr")
torch_loss
# the parameters are inferred
torch_loss$param_set

# Retrieve a loss from the dictionary
# (the same as the object created above):
torch_loss = t_loss("mse", reduction = "mean")
torch_loss
torch_loss$param_set
torch_loss$label
torch_loss$task_types
torch_loss$id

# Create the loss function
loss_fn = torch_loss$generate()
loss_fn
# Is the same as
nn_mse_loss(reduction = "mean")

# open the help page of the wrapped loss function
# torch_loss$help()

# Use in a learner
learner = lrn("regr.mlp", loss = t_loss("mse"))
# The parameters of the loss are added to the learner's parameter set
learner$param_set
This wraps a torch::torch_optimizer_generator
and annotates it with metadata, most importantly a ParamSet
.
The optimizer is created for the given parameter values by calling the $generate()
method.
This class is usually used to configure the optimizer of a torch learner, e.g.
when constructing a learner or in a ModelDescriptor
.
For a list of available optimizers, see mlr3torch_optimizers
.
Items from this dictionary can be retrieved using t_opt()
.
Defined by the constructor argument param_set
.
If no parameter set is provided during construction, the parameter set is constructed by creating a parameter
for each argument of the wrapped optimizer, where the parameters are then of type ParamUty
.
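As with TorchLoss, an explicit ParamSet can be supplied to get typed parameters instead of inferred ParamUty ones. This sketch (assuming the paradox helpers ps() and p_dbl(); lr and momentum are arguments of torch::optim_sgd) shows the idea:

```r
library(mlr3torch)
library(paradox)

# Sketch: supply a typed parameter set instead of the inferred one.
param_set = ps(
  lr = p_dbl(lower = 0, default = 0.01),
  momentum = p_dbl(lower = 0, upper = 1, default = 0)
)

custom_opt = TorchOptimizer$new(
  torch_optimizer = optim_sgd,
  param_set = param_set,
  id = "sgd_typed"
)
# Both parameters are now numeric with bounds and defaults:
custom_opt$param_set
```

The id "sgd_typed" is only illustrative; any identifier not clashing with the dictionary entries works.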
mlr3torch::TorchDescriptor
-> TorchOptimizer
new()
Creates a new instance of this R6 class.
TorchOptimizer$new(
  torch_optimizer,
  param_set = NULL,
  id = NULL,
  label = NULL,
  packages = NULL,
  man = NULL
)
torch_optimizer
(torch_optimizer_generator
)
The torch optimizer.
param_set
(ParamSet
or NULL
)
The parameter set. If NULL
(default) it is inferred from torch_optimizer
.
id
(character(1)
)
The id of the new object.
label
(character(1)
)
Label for the new instance.
packages
(character()
)
The R packages this object depends on.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
generate()
Instantiates the optimizer.
TorchOptimizer$generate(params)
params
(named list()
of torch_tensor
s)
The parameters of the network.
torch_optimizer
clone()
The objects of this class are cloneable with this method.
TorchOptimizer$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Torch Descriptor:
TorchCallback
,
TorchDescriptor
,
TorchLoss
,
as_torch_callbacks()
,
as_torch_loss()
,
as_torch_optimizer()
,
mlr3torch_losses
,
mlr3torch_optimizers
,
t_clbk()
,
t_loss()
,
t_opt()
# Create a new torch optimizer
torch_opt = TorchOptimizer$new(optim_adam, label = "adam")
torch_opt
# If the param set is not specified, parameters are inferred but are of class ParamUty
torch_opt$param_set

# open the help page of the wrapped optimizer
# torch_opt$help()

# Retrieve an optimizer from the dictionary
torch_opt = t_opt("sgd", lr = 0.1)
torch_opt
torch_opt$param_set
torch_opt$label
torch_opt$id

# Create the optimizer for a network
net = nn_linear(10, 1)
opt = torch_opt$generate(net$parameters)
# is the same as
optim_sgd(net$parameters, lr = 0.1)

# Use in a learner
learner = lrn("regr.mlp", optimizer = t_opt("sgd"))
# The parameters of the optimizer are added to the learner's parameter set
learner$param_set