Utility function to build an MLP with a choice of activation function and weight initialization, plus optional dropout and batch normalization.
n_in
(integer(1))
Number of input features.
n_out
(integer(1))
Number of targets.
nodes
(numeric())
Hidden nodes in the network; each element of the vector gives the number
of hidden nodes in the respective layer.
activation
(character(1)|list())
Activation function. Either a single character, in which case the same
function is used in all layers, or a list of length length(nodes) with one
function per hidden layer (see the sketch after this arguments list). See
get_pycox_activation for options.
act_pars
(list())
Passed to get_pycox_activation.
dropout
(numeric())
Optional dropout. If NULL then no dropout layer is added; otherwise either
a single numeric, which is applied to all layers, or a vector of differing
dropout amounts, one per layer.
bias
(logical(1))
If TRUE (default) then a bias parameter is added to all linear layers.
batch_norm
(logical(1))
If TRUE (default) then batch normalisation is applied to all layers.
batch_pars
(list())
Parameters for batch normalisation; see
reticulate::py_help(torch$nn$BatchNorm1d).
init
(character(1))
Weight initialization method. See get_pycox_init for options.
init_pars
(list())
Passed to get_pycox_init.
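A minimal sketch of per-layer settings follows; the argument names are as reconstructed above, a working Python 'torch' installation is assumed, and the sizes are purely illustrative:

if (requireNamespaces("reticulate")) {
  build_pytorch_net(
    n_in = 10L, n_out = 1L,
    nodes = c(32, 16),                  # two hidden layers
    activation = list("relu", "selu"),  # one activation per hidden layer
    dropout = c(0.2, 0.1)               # differing dropout per layer
  )
}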
This function is a helper for R users with less Python experience. Currently it is limited to simple MLPs. More advanced networks will require manual creation with reticulate.
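As a hedged illustration of such manual creation, a simple stack could be assembled directly with reticulate (assuming the Python 'torch' package is installed; layer sizes are illustrative):

torch <- reticulate::import("torch")
net <- torch$nn$Sequential(
  torch$nn$Linear(4L, 32L),   # input layer: 4 features to 32 hidden nodes
  torch$nn$ReLU(),            # activation
  torch$nn$Dropout(0.1),      # optional dropout
  torch$nn$Linear(32L, 2L)    # output layer: 2 targets
)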
if (requireNamespaces("reticulate")) {
build_pytorch_net(4L, 2L, nodes = c(32, 64, 32), activation = "selu")
# pass parameters to activation and initializer functions
build_pytorch_net(4L, 2L, activation = "elu", act_pars = list(alpha = 0.1),
init = "kaiming_uniform", init_pars = list(mode = "fan_out"))
}
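The defaults can also be switched off; a minimal sketch, again assuming a working torch installation:

# no dropout, batch normalisation, or bias parameters
build_pytorch_net(4L, 2L, nodes = c(16),
  dropout = NULL, batch_norm = FALSE, bias = FALSE)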