A number of metrics are available to evaluate test performance. These are the basic metrics, from which other metrics can be calculated. See the Wikipedia page on the confusion matrix for more information on each metric.
cs_pos()
is the proportion of positive tests (out of the organization)
cs_neg()
is the proportion of negative tests (out of the organization)
cs_true_pos()
is the proportion of true positive tests (out of the organization)
cs_true_neg()
is the proportion of true negative tests (out of the organization)
cs_false_pos()
is the proportion of false positive tests (out of the organization)
cs_false_neg()
is the proportion of false negative tests (out of the organization)
cs_ppv()
is the positive predictive value of a test
cs_npv()
is the negative predictive value of a test
cs_fdr()
is the false discovery rate of a test
cs_for()
is the false omission rate of a test
cs_sens()
is the sensitivity (true positive rate) of a test
cs_spec()
is the specificity (true negative rate) of a test
cs_fpr()
is the false positive rate of a test
cs_fnr()
is the false negative rate of a test
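These metrics follow the standard confusion-matrix definitions. A minimal sketch of those definitions, written in Python for illustration only (the confusion_metrics() helper below is hypothetical and is not part of this package, whose functions operate on the data.table returned by cs_dist()):

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the proportions, predictive values, and rates
    described above from raw confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "pos": (tp + fp) / total,   # proportion of positive tests
        "neg": (tn + fn) / total,   # proportion of negative tests
        "true_pos": tp / total,
        "true_neg": tn / total,
        "false_pos": fp / total,
        "false_neg": fn / total,
        "ppv": tp / (tp + fp),      # positive predictive value
        "npv": tn / (tn + fn),      # negative predictive value
        "fdr": fp / (tp + fp),      # false discovery rate = 1 - PPV
        "for": fn / (tn + fn),      # false omission rate = 1 - NPV
        "sens": tp / (tp + fn),     # sensitivity (true positive rate)
        "spec": tn / (tn + fp),     # specificity (true negative rate)
        "fpr": fp / (fp + tn),      # false positive rate = 1 - specificity
        "fnr": fn / (tp + fn),      # false negative rate = 1 - sensitivity
    }

m = confusion_metrics(tp=80, fp=20, tn=890, fn=10)
```

Note the complementary pairs: FDR = 1 - PPV, FOR = 1 - NPV, FPR = 1 - specificity, and FNR = 1 - sensitivity.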
Usage
cs_pos(dt)
cs_neg(dt)
cs_true_pos(dt)
cs_true_neg(dt)
cs_false_pos(dt)
cs_false_neg(dt)
cs_ppv(dt)
cs_npv(dt)
cs_fdr(dt)
cs_for(dt)
cs_sens(dt)
cs_spec(dt)
cs_fpr(dt)
cs_fnr(dt)
Arguments
- dt
[data.table]
A distribution from cs_dist()