#' ---
#' title: "Data Science and Predictive Analytics (UMich HS650)"
#' ---
#'
#' |  | Condition absent ($H_0$ is true) | Condition present ($H_1$ is true) | Test interpretation |
#' |------|------|------|------|
#' | **Test result: Negative** (fail to reject $H_0$) | **TN**: condition absent + negative result = True (accurate) Negative | **FN**: condition present + negative result = False (invalid) Negative; Type II error (proportional to $\beta$) | $NPV=\frac{TN}{TN+FN}$ |
#' | **Test result: Positive** (reject $H_0$) | **FP**: condition absent + positive result = False Positive; Type I error ($\alpha$) | **TP**: condition present + positive result = True Positive | $PPV=Precision=\frac{TP}{TP+FP}$ |
#' | **Test interpretation** | $Specificity=\frac{TN}{TN+FP}$ | $Power=Sensitivity=1-\beta=1-\frac{FN}{FN+TP}=\frac{TP}{TP+FN}$ | $LOR=\ln\left(\frac{S_1/F_1}{S_2/F_2}\right)=\ln\left(\frac{S_1\times F_2}{S_2\times F_1}\right)$, where $S$=success and $F$=failure for the two binary variables, $1$ and $2$ |
#'
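#' As a quick numeric illustration of the quantities in this table, here is a minimal R sketch that computes them from hypothetical cell counts (the TP, FN, FP, and TN values below are made up):
#'
#'
# hypothetical 2x2 cell counts (illustration only)
TP <- 90; FN <- 10; FP <- 20; TN <- 80
c(sensitivity = TP/(TP + FN),   # power = 1 - beta
  specificity = TN/(TN + FP),
  PPV         = TP/(TP + FP),   # precision
  NPV         = TN/(TN + FN))
#'
#'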
#' See also [SMHS EBook, Power, Sensitivity and Specificity section](http://wiki.socr.umich.edu/index.php/SMHS_PowerSensitivitySpecificity).
#'
#' ## Confusion matrices
#'
#' We talked about this type of matrix in [Chapter 8](http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/notes/08_DecisionTreeClass.html). For binary classes, the confusion matrix is a $2\times 2$ table. Each of its cells has a specific meaning.
#'
#' A schematic $2\times 2$ confusion table:
#'
#'
require(knitr)
# rows = actual class, columns = predicted class
item_table = data.frame(predict_T = c("TP", "FP"), predict_F = c("FN", "TN"))
rownames(item_table) = c("TRUE", "FALSE")
kable(item_table, caption = "cross table")
#'
#'
#' * **True Positive** (TP): Number of observations that are correctly classified as "yes" or "success".
#'
#' * **True Negative** (TN): Number of observations that are correctly classified as "no" or "failure".
#'
#' * **False Positive** (FP): Number of observations that are incorrectly classified as "yes" or "success".
#'
#' * **False Negative** (FN): Number of observations that are incorrectly classified as "no" or "failure".
#'
#' **Using confusion matrices to measure performance**
#'
#' The way we calculate accuracy using these four cells is summarized by the following formula:
#' $$accuracy=\frac{TP+TN}{TP+TN+FP+FN}=\frac{TP+TN}{\text{Total number of observations}}$$
#' On the other hand, the error rate, or the proportion of incorrectly classified observations, is calculated as:
#' $$\text{error rate}=\frac{FP+FN}{TP+TN+FP+FN}=\frac{FP+FN}{\text{Total number of observations}}=1-accuracy$$
#' If we look at the numerator and denominator carefully, we can see that the error rate and the accuracy add up to 1. Therefore, a 95% accuracy implies a 5% error rate.
#'
#' In R, we have multiple ways to obtain a confusion table. The simplest one is `table()`. For example, in [Chapter 7](http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/notes/07_NaiveBayesianClass.html), to get a plain $2\times 2$ table reporting the agreement between the real clinical cancer labels and their machine-learning-predicted counterparts, we used:
#'
#'
hn_test_pred<-predict(hn_classifier, hn_test)
table(hn_test_pred, hn_med_test$stage)
#'
#'
#' Then why did we use the `CrossTable()` function back in [Chapter 7](http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/notes/07_NaiveBayesianClass.html)? Because it reports additional useful information about the model performance.
#'
#'
library(gmodels)
CrossTable(hn_test_pred, hn_med_test$stage)
#'
#'
#' With both tables, we can calculate accuracy and error rate by hand.
#'
#'
accuracy<-(69+0)/100
accuracy
error_rate<-(23+8)/100
error_rate
1-accuracy
#'
#'
#' For matrices that are larger than $2\times 2$, all diagonal elements count the observations that are correctly classified and the off-diagonal elements represent incorrectly labeled cases.
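#'
#' For instance, here is a minimal sketch with a hypothetical $3\times 3$ confusion matrix (the counts are made up), where the accuracy is the sum of the diagonal cells divided by the grand total:
#'
#'
# hypothetical 3x3 confusion matrix: rows = predicted, columns = actual (made-up counts)
cm3 <- matrix(c(30,  3,  2,
                 4, 25,  6,
                 1,  5, 24), nrow = 3, byrow = TRUE,
              dimnames = list(predicted = c("A", "B", "C"), actual = c("A", "B", "C")))
sum(diag(cm3))/sum(cm3)       # accuracy = correctly classified / total
1 - sum(diag(cm3))/sum(cm3)   # error rate
#'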
#'
#' ## Other measures of performance beyond accuracy
#'
#' So far we have discussed two ways to report classification results, `table()` and `CrossTable()`. A third option is the `confusionMatrix()` function in the `caret` package, which provides the easiest way to report model performance. Notice that its first argument is the vector of *predicted labels* and its second argument, of the same length, is the vector of *actual labels*, e.g., $Test\_Y$; a pre-computed cross-tabulation of the two is also accepted.
#'
#' This example was presented as the first case-study in [Chapter 8](http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/notes/08_DecisionTreeClass.html).
#'
#'
library(caret)
qol_pred<-predict(qol_model, qol_test)
confusionMatrix(table(qol_pred, qol_test$cd), positive="severe_disease")
#'
#'
#' ### The Kappa ($\kappa$) statistic
#'
#' The Kappa statistic was [originally developed to measure the reliability of agreement between two human raters](http://wiki.socr.umich.edu/index.php/SMHS_ReliabilityValidity). It can be harnessed in machine learning applications to assess the accuracy of a classifier, where `one rater` represents the ground truth (for labeled data, these are the actual values of each instance) and the `second rater` represents the results of the automated machine learning classifier. The order of listing the **raters** is irrelevant.
#'
#' The Kappa statistic **adjusts the observed accuracy for the possibility of a correct prediction by chance alone** and answers the question `How much better is the agreement (between the ground truth and the machine learning prediction) than would be expected by chance alone?` Its value typically lies between $0$ and $1$. When $\kappa=1$, we have perfect agreement between the **computed** predictions (typically the result of a model-based or model-free technique forecasting an outcome of interest) and the **actual** labels; $\kappa=0$ indicates agreement no better than chance, and negative values indicate agreement worse than chance. A common interpretation of the Kappa statistic is listed below (a small helper sketch implementing this scale follows the list):
#'
#' * Poor agreement: less than 0.20
#' * Fair agreement: 0.20-0.40
#' * Moderate agreement: 0.40-0.60
#' * Good agreement: 0.60-0.80
#' * Very good agreement: 0.80-1
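#'
#' As a small illustration, here is a hypothetical helper (`kappa_label` is not part of any package) that maps a Kappa value onto the qualitative scale above:
#'
#'
# map a kappa value to the qualitative agreement scale listed above
kappa_label <- function(k) {
  cut(k, breaks = c(-Inf, 0.2, 0.4, 0.6, 0.8, 1),
      labels = c("Poor", "Fair", "Moderate", "Good", "Very good"))
}
kappa_label(0.26)   # "Fair" agreement
#'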
#'
#' In the `confusionMatrix()` output above, we have a fair agreement. For different problems, the Kappa statistic may be interpreted differently.
#'
#' To understand the Kappa statistic better, let's look at its formula:
#'
#' $$\kappa=\frac{P(a)-P(e)}{1-P(e)}$$
#'
#' `P(a)` and `P(e)` simply denote the probabilities of **actual** and **expected** agreement between the classifier and the true values. Let's use the Quality of Life `cd` prediction (`qol_pred`) to compute the Kappa statistic by hand.
#'
#'
table(qol_pred, qol_test$cd)
#'
#'
#' According to the table above, the actual agreement is simply the accuracy:
#'
p_a<-(149+131)/(149+89+74+131)
p_a
#'
#'
#' The manually and automatically computed accuracies coincide (0.6321). It may be trickier to obtain the expected agreement. Probability rules tell us that the probability of two independent events occurring at the same time equals the product of the individual (marginal) probabilities of the two events. Thus, we have:
#'
#' *P(expect agreement for minor_disease)=P(actual type is minor_disease) x P(predicted type is minor_disease)*
#'
#' Similarly:
#'
#' *P(expect agreement for severe_disease)=P(actual type is severe_disease) x P(predicted type is severe_disease)*
#'
#' In our case:
#'
#'
p_e_minor <- (149+74)/(149+89+74+131) * (149+89)/(149+89+74+131)   # P(actual minor) * P(predicted minor)
p_e_severe <- (89+131)/(149+89+74+131) * (74+131)/(149+89+74+131)  # P(actual severe) * P(predicted severe)
p_e <- p_e_minor + p_e_severe
p_e
#'
#'
#' Plugging `p_a` and `p_e` into the Kappa formula, we get:
#'
#'
kappa<-(p_a-p_e)/(1-p_e)
kappa
#'
#'
#' We get a value similar to the one reported in the `confusionMatrix()` output. A more straightforward way to obtain the Kappa statistic is the `Kappa()` function in the `vcd` package.
#'
#'
# install.packages("vcd")
library(vcd)
Kappa(table(qol_pred, qol_test$cd))
#'
#'
#' The combination of the `Kappa()` and `table()` functions yields a $2\times 4$ table of results. The *Kappa statistic* of interest here is the `Unweighted` value.
#'
#' Generally speaking, misclassifying a severe disease case is a more critical problem than misclassifying a mild disease case, so a weighted Kappa, which assigns a higher weight to the severe disease class, is also useful. Here the weighted Kappa value is only $-0.0714$, which is not acceptable and suggests that the classifier makes too many mistakes on the severe disease cases. Notice that the weighted Kappa is not restricted to the range $[0,1]$.
#'
#'
Kappa(table(qol_pred, qol_test$cd),weights = matrix(c(1,10,1,10),nrow=2))
#'
#'
#' When the predicted values are supplied as the first argument to `table()`, the rows represent the **predicted labels** and the columns represent the **true labels**.
#'
#'
table(qol_pred, qol_test$cd)
#'
#'
#' #### Summary of the Kappa score for calculating prediction accuracy
#'
#' Kappa compares an **Observed classification accuracy** (output of our ML classifier) with an **Expected classification accuracy** (corresponding to random chance classification). It may be used to evaluate a single classifier and/or to compare a set of different classifiers. Because it accounts for random chance (agreement with a random classifier), **Kappa** is more meaningful than simply using the **accuracy** as a single quality metric. For instance, the interpretation of an `Observed Accuracy of 80%` is **relative** to the `Expected Accuracy`: an `Observed Accuracy of 80%` is more impressive when the `Expected Accuracy` is `50%` than when it is `75%`.
#'
#' #### Computation of Observed Accuracy and Expected Accuracy
#'
#' Consider the following example of a `classifier` generating the following `confusion matrix`. Columns represent the **true labels** and rows represent the **classifier-derived labels** for this binary prediction example.
#'
#' Class | True | False | Total
#' ------|------|-------|------
#' T | 50 | 35 | 85
#' F | 25 | 40 | 65
#' Total | 75 | 75 | 150
#'
#' In this example, there are 150 instances in total ($50+35+25+40$). In reality, 75 are labeled **True** ($50 + 25$) and the other 75 are labeled **False** ($35 + 40$). The classifier labeled 85 instances as **True** ($50 + 35$) and the other 65 as **False** ($25 + 40$).
#'
#' * Observed Accuracy (OA) is the `proportion of instances` that were classified correctly throughout the entire confusion matrix:
#' $$OA = \frac{50 + 40}{150} = 0.6$$
#' * Expected Accuracy (EA) is the accuracy that any random classifier would be expected to achieve given this confusion matrix. For each class (**True** and **False**), EA is computed from the marginal frequencies, i.e., the number of instances of that class according to the ground truth and according to the automated classifier. The marginal frequency of **True** for the **true-state** is 75 ($50 + 25$) and for the ML classifier it is 85 ($50 + 35$). Multiplying these marginals and dividing by the total number of instances yields the expected (chance) agreement, in counts, for the **True** outcome:
#' $$EA(True) = \frac{75 \times 85}{150} = 42.5$$
#'
#' We similarly compute $EA(False)$ for the second, **False**, outcome, using the marginal frequencies of **False** for the true-state ($75 = 35 + 40$) and for the ML classifier ($65 = 25 + 40$). Then, the expected (chance) agreement for the **False** outcome is:
#'
#' $$EA(False) = \frac{75 \times 65}{150} = 32.5$$
#'
#' Finally, the overall expected accuracy is:
#'
#' $$EA = \frac{EA(True) + EA(False)}{150} = \frac{42.5 + 32.5}{150} = 0.5$$
#'
#' Note that $EA = 0.5$ whenever the `true-state` binary classification is balanced (in reality, the frequencies of **True** and **False** are equal, in our case 75).
#'
#' The calculation of the **kappa statistic** relies on $OA=0.6$ and $EA=0.5$:
#' $$ (Kappa)\text{ } \kappa = \frac{OA - EA}{1 - EA} = \frac{0.6-0.5}{1-0.5}=0.2.$$
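#'
#' To double-check this arithmetic in R, here is a minimal sketch that rebuilds the example confusion matrix and recomputes OA, EA, and Kappa from its marginal totals:
#'
#'
# worked example: rows = classifier labels, columns = true labels
cm <- matrix(c(50, 25, 35, 40), nrow = 2,
             dimnames = list(classifier = c("T", "F"), truth = c("True", "False")))
OA <- sum(diag(cm))/sum(cm)                    # observed accuracy = 0.6
EA <- sum(rowSums(cm)*colSums(cm))/sum(cm)^2   # expected (chance) accuracy = 0.5
(OA - EA)/(1 - EA)                             # kappa = 0.2
#'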
#'
#' ### Sensitivity and specificity
#'
#' Take a closer look at the `confusionMatrix()` output where we can find two important statistics - "sensitivity" and "specificity".
#'
#' Sensitivity, or the true positive rate, measures the proportion of "success" observations that are correctly classified.
#' $$sensitivity=\frac{TP}{TP+FN}.$$
#' Notice that $TP+FN$ is the total number of true "success" observations.
#'
#' On the other hand, specificity, or the true negative rate, measures the proportion of "failure" observations that are correctly classified.
#' $$specificity=\frac{TN}{TN+FP}.$$
#' Accordingly, $TN+FP$ is the total number of true "failure" observations.
#'
#' In the QoL data, considering "severe_disease" as "success" and using the `table()` output we can manually compute the *sensitivity* and *specificity*, as well as *precision* and *recall* (below):
#'
#'
sens <- 131/(131+89)   # TP/(TP + FN)
sens
spec <- 149/(149+74)   # TN/(TN + FP)
spec
#'
#'
#' The `caret` package also provides functions to directly calculate the sensitivity and specificity.
#'
#'
library(caret)
sensitivity(qol_pred, qol_test$cd, positive="severe_disease")
specificity(qol_pred, qol_test$cd, negative="minor_disease")
#'
#'
#' Sensitivity and specificity both range from 0 to 1. For either measure, a value close to 1 implies that the corresponding class is classified accurately (few false negatives for sensitivity, few false positives for specificity). However, simultaneously high sensitivity and specificity may not be attainable in real-world situations. There is a trade-off between sensitivity and specificity. To compromise, some studies loosen the requirement on one measure and focus on achieving high values of the other.
#'
#' ### Precision and recall
#'
#' Very similar to sensitivity, *precision* measures the proportion of true "success" observations among predicted "success" observations.
#' $$precision=\frac{TP}{TP+FP}.$$
#' *Recall* is the proportion of actual "success" observations that are correctly identified. It has the same formula as sensitivity, but a different emphasis: recall measures how complete the results are, and a model with high recall captures most of the "interesting" cases.
#' $$recall=\frac{TP}{TP+FN}.$$
#' Again, let's calculate these by hand for the QoL data:
#'
#'
prec <- 131/(131+74)     # TP/(TP + FP)
prec
recall <- 131/(131+89)   # TP/(TP + FN)
recall
#'
#'
#' Another way to obtain the *precision* is the `posPredValue()` function from the `caret` package. Remember to specify which class represents "success".
#'
#'
posPredValue(qol_pred, qol_test$cd, positive="severe_disease")
#'
#'
#' From the definitions of **precision** and **recall**, we can derive two complementary error measures: the proportion of false positives among the predicted "successes" (the complement of precision, i.e., the false discovery rate) and the proportion of missed "successes" (the complement of recall, i.e., the false negative rate, which corresponds to the type 2 error rate):
#'
#' $$error_1 = 1- Precision = \frac{FP}{TP+FP}.$$
#'
#' $$error_2 = 1- Recall = \frac{FN}{TP+FN}.$$
#'
#' For the QoL prediction, these are $error_1=0.36$ and $error_2=0.40$.
#'
#'
error1 <- 74/(131+74)   # 1 - precision = FP/(TP + FP)
error2 <- 89/(131+89)   # 1 - recall = FN/(TP + FN)
error1; error2
#'
#'
#' ### The F-measure
#'
#' The F-measure, or F1-score, combines precision and recall into a single value using their harmonic mean, assuming equal weights. A high F1-score requires both high precision and high recall, which makes it a convenient way of summarizing and comparing model performance.
#' $$F1=\frac{2\times precision\times recall}{recall+precision}=\frac{2\times TP}{2\times TP+FP+FN}$$
#' Calculating the F1-score by hand for the Quality of Life prediction:
#'
#'
f1<-(2*prec*recall)/(prec+recall)
f1
#'
#'
#' The F1-score can also be obtained directly using `caret`:
#'
#'
precision <- posPredValue(qol_pred, qol_test$cd, positive="severe_disease")
recall <- sensitivity(qol_pred, qol_test$cd, positive="severe_disease")
F1 <- (2 * precision * recall) / (precision + recall); F1
#'
#'
#' # Visualizing performance tradeoffs (ROC Curve)
#'
#' Another choice for evaluating classifier performance is graphical, rather than relying on a single summary statistic. Graphs usually provide a more comprehensive picture of a classifier's behavior.
#'
#' In R, the `ROCR` package provides user-friendly functions for visualizing model performance. Details can be found on the [ROCR website](http://rocr.bioinf.mpi-sb.mpg.de).
#'
#' Here we evaluate the model performance for the Quality of Life case study in [Chapter 8](http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/notes/08_DecisionTreeClass.html).
#'
#'
#install.packages("ROCR")
library(ROCR)
pred<-ROCR::prediction(predictions=pred_prob[, 2], labels=qol_test$cd)
# avoid naming collision (ROCR::prediction), as
# there is another prediction function in the neuralnet package.
#'
#'
#' `pred_prob[, 2]` is the predicted probability that each observation belongs to the second class, "severe_disease". The code above saves all of the model prediction information in the object `pred`.
#'
#' ROC (Receiver Operating Characteristic) curves are often used to examine the trade-off between detecting true positives and avoiding false positives.
#'
#'
# schematic (not data-driven) illustration of ROC curves
curve(log(x), from=0, to=100, xlab="False Positive Rate", ylab="True Positive Rate", main="ROC curve", col="green", lwd=3, axes=F)
Axis(side=1, at=c(0, 20, 40, 60, 80, 100), labels = c("0%", "20%", "40%", "60%", "80%", "100%"))
Axis(side=2, at=0:5, labels = c("0%", "20%", "40%", "60%", "80%", "100%"))
segments(0, 0, 110, 5, lty=2, lwd=3)
segments(0, 0, 0, 4.7, lty=2, lwd=3, col="blue")
segments(0, 4.7, 107, 4.7, lty=2, lwd=3, col="blue")
text(20, 4, col="blue", labels = "Perfect Classifier")
text(40, 3, col="green", labels = "Test Classifier")
text(70, 2, col="black", labels= "Classifier with no predictive value")
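#'
#'
#' As a minimal sketch of the standard `ROCR` workflow, the `pred` object created above can be converted into an empirical ROC curve via `performance()`:
#'
#'
# true positive rate vs. false positive rate from the ROCR prediction object
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, main = "ROC curve (QoL severe_disease prediction)", col = "blue", lwd = 2)
abline(a = 0, b = 1, lty = 2)   # diagonal = classifier with no predictive value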
#'
#'
#' The