R/rm_main_results.R
rm_main_results.Rd
This result metric calculates the zero-one loss, the normalized rank, and the mean of the decision values. The returned object is an S3 object that has an associated plot function for displaying the results.
rm_main_results(
ndr_container_or_object = NULL,
include_norm_rank_results = TRUE
)
The purpose of this argument is to make the constructor of the rm_main_results result metric work with the magrittr pipe (%>%) operator. This argument should almost never be set by the user to anything other than NULL. If it is left at the default value of NULL, the constructor returns an rm_main_results object. If it is set to an ndr container, an rm_main_results object is added to the container and the container is returned. If it is set to another ndr object, both that ndr object and a new rm_main_results object are added to a new container, and the container is returned.
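For illustration (not part of the original help text), a minimal sketch of how this argument is typically exercised through the pipe; cl_max_correlation() is assumed here only as an example of another NDR object constructor, and the magrittr pipe is assumed to be loaded:

# leaving the argument at NULL returns a plain rm_main_results object
the_rm <- rm_main_results()

# piping another ndr object into the constructor adds both objects to a
# new ndr container, and that container is returned
the_container <- cl_max_correlation() %>% rm_main_results()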
An argument specifying whether the normalized rank and decision value results should be saved. If this is a Boolean set to TRUE, then the normalized rank and decision values for the correct category will be calculated. If this is a Boolean set to FALSE, then the normalized rank and decision values will not be calculated. If this is the string "only_same_train_test_time", then the normalized rank and decision values will only be calculated for results where training and testing occur at the same time. Not returning the full results can speed up the run-time of the code and uses less memory, which can be useful for large data sets.
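For illustration, the three settings described above look like this (only the argument values shown come from this help page):

# calculate the zero-one loss, normalized rank, and decision values
rm_full <- rm_main_results(include_norm_rank_results = TRUE)

# calculate only the zero-one loss results
rm_small <- rm_main_results(include_norm_rank_results = FALSE)

# calculate normalized rank and decision values only when training and
# testing at the same time
rm_diag <- rm_main_results(include_norm_rank_results = "only_same_train_test_time")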
This constructor creates an NDR result metric object with the class rm_main_results. Like all NDR result metric objects, this result metric will be used by a cross-validator to create a measure of decoding accuracy by aggregating the results after all cross-validation splits have been run, and after all resample runs have completed.
Like all result metrics, this result metric has functions to aggregate results after completing each set of cross-validation classifications and after completing all the resample runs. The results are then available in the DECODING_RESULTS object returned by the cross-validator, as illustrated in the examples below.
Other result_metrics: plot.rm_confusion_matrix(), plot.rm_main_results(), plot_main_results(), rm_confusion_matrix()
# If you only want to use the rm_main_results(), then you can put it in a
# list by itself and pass it to the cross-validator.
the_rms <- list(rm_main_results())
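# A hedged sketch (not part of the original example) of how the_rms is
# typically passed on to a cross-validator. ds_basic(), fp_zscore(),
# cl_max_correlation(), cv_standard(), run_decoding(), the label name, and
# the binned data file name are assumptions used only for illustration;
# substitute your own data source, preprocessors, and classifier.
ds <- ds_basic("my_binned_data.Rda", "stimulus_ID", num_cv_splits = 5)
fps <- list(fp_zscore())
cl <- cl_max_correlation()
cv <- cv_standard(datasource = ds,
                  classifier = cl,
                  feature_preprocessors = fps,
                  result_metrics = the_rms)
DECODING_RESULTS <- run_decoding(cv)
plot(DECODING_RESULTS$rm_main_results)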