Confidence in decision-making#

Warning

This chapter is under construction.

Terminology#

Confidence#

In general terms, confidence is the belief or conviction that a hypothesis or prediction is correct, that an outcome will be favorable, or that a chosen course of action is the best or most effective.

In decision-making, confidence can be more precisely defined as the subjective estimate of decision quality [Brus et al., 2021].

Trust#

Trust is a social construct: the belief that someone or something will behave or perform as expected. It implies a relationship between a trustor and a trustee.

Self-confidence is trust in one’s abilities.

Uncertainty#

Generally speaking, uncertainty (or incertitude) characterizes situations involving imperfect or unknown information.

In decision-making, it refers to the variability in the representation of information before a decision is taken.

Belief#

Bias#

Sensitivity#

Error monitoring#

In decision-making, error monitoring (EM) is the process by which one is able to detect one’s own errors as soon as a response has been made [Yeung and Summerfield, 2012].

EM allows behavior to be adapted in both the short and long term through gradual learning of action outcomes.

Cognitive control#

Metacognition#

Confidence judgments and error monitoring are two related aspects of metacognition: the self-monitoring and self-control of one’s own cognition (sometimes called higher-order thinking).

Metacognition

Usefulness of confidence in decision-making#

Modeling decision confidence#

Signal Detection Theory#

Signal Detection Theory (SDT) is a framework for analyzing decision-making in the presence of uncertainty.

Originally developed by radar researchers in the mid-20th century, it has applications in many fields (psychology, diagnostics, quality control, etc.).

Sensitivity and specificity#

Sensitivity quantifies how well a model can identify true positives, while specificity quantifies how well it can identify true negatives. Sensitivity is equivalent to the recall metric. These definitions are often used in medicine and statistics.

\[\text{Sensitivity} = \frac{TP}{TP + FN} = \text{True Positive Rate} = \text{Recall}_{positive}\]
\[\text{Specificity} = \frac{TN}{TN + FP} = \text{True Negative Rate} = \text{Recall}_{negative}\]
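To make these formulas concrete, here is a minimal Python sketch computing both metrics from raw confusion-matrix counts (the counts below are made-up values, chosen purely for illustration):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True Positive Rate: proportion of actual positives correctly identified."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """True Negative Rate: proportion of actual negatives correctly identified."""
    return tn / (tn + fp)


# Hypothetical confusion-matrix counts for a binary classifier
tp, fn, tn, fp = 80, 20, 70, 30

print(f"Sensitivity (recall on positives): {sensitivity(tp, fn):.2f}")  # 0.80
print(f"Specificity (recall on negatives): {specificity(tn, fp):.2f}")  # 0.70
```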

Prediction outcomes can be interpreted as probability density functions, which allows results to be represented graphically.

Sensitivity and specificity
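As a sketch of this density-based view, assuming Gaussian score distributions for the negative (“noise”) and positive (“signal”) classes and a single decision threshold (all parameters below are arbitrary choices for illustration), sensitivity and specificity correspond to areas under the two curves on either side of the threshold:

```python
from scipy.stats import norm

# Assumed class-conditional score distributions (illustrative parameters)
noise = norm(loc=0.0, scale=1.0)    # negative class
signal = norm(loc=1.5, scale=1.0)   # positive class

threshold = 0.75  # decide "positive" when the score exceeds this value

# Areas under each density on either side of the threshold
sensitivity = signal.sf(threshold)   # P(score > threshold | positive) = TPR
specificity = noise.cdf(threshold)   # P(score <= threshold | negative) = TNR

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```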

ROC curve and AUROC#

\[\text{False Positive Rate} = \frac{FP}{TN+FP} = 1 - TNR = 1 -\text{Specificity}\]
  • ROC stands for “Receiver Operating Characteristic”.

  • A ROC curve plots sensitivity vs. (1 - specificity), or TPR vs. FPR, for each possible classification threshold.

  • AUC, or more precisely AUROC (“Area Under the ROC Curve”), provides an aggregate measure of performance across all possible classification thresholds, as illustrated by the sketch below.
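Here is a minimal sketch of these ideas, assuming scikit-learn is available and using synthetic scores (the score distributions and random seed are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)

# Synthetic scores: positives drawn from a shifted distribution (illustrative only)
y_true = np.concatenate([np.zeros(500), np.ones(500)])
y_score = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])

# One (FPR, TPR) point per candidate threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Aggregate performance across all thresholds
auroc = roc_auc_score(y_true, y_score)
print(f"AUROC: {auroc:.3f}")
```

Each element of `thresholds` yields one point of the curve; sweeping the threshold from high to low traces the ROC curve from (0, 0) to (1, 1).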

Impact of threshold choice#

AUROC animation

Impact of model’s separative power#

AUROC shape animation

Discriminability index#

Measuring confidence#

Two dominant methodologies, both illustrated in the sketch after this list:

  • Confidence ratings: after a decision, evaluate its correctness.

  • Confidence forced choice: after two decisions, choose which one is more likely to be correct.

    • Disregards confidence biases to focus on confidence sensitivity.
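As a rough illustration only, here is a toy simulation sketch of both methodologies under a simple signal-detection-style model where confidence is taken as the distance of the internal evidence from the decision criterion (every parameter and modeling choice below is an assumption, not an established experimental procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000
criterion = 0.75  # decision criterion on the evidence axis

# Toy model: evidence drawn from noise (label 0) or signal (label 1) distributions
labels = rng.integers(0, 2, n_trials)
evidence = rng.normal(loc=1.5 * labels, scale=1.0)

choices = (evidence > criterion).astype(int)
correct = choices == labels

# Confidence rating: here, distance of the evidence from the criterion
confidence = np.abs(evidence - criterion)
print("Mean confidence on correct trials:", confidence[correct].mean().round(2))
print("Mean confidence on error trials:  ", confidence[~correct].mean().round(2))

# Confidence forced choice: compare pairs of trials, pick the more "confident" one
pairs = rng.permutation(n_trials).reshape(-1, 2)
chosen = np.where(confidence[pairs[:, 0]] >= confidence[pairs[:, 1]],
                  pairs[:, 0], pairs[:, 1])
print("Chosen trial is correct in", correct[chosen].mean().round(2), "of pairs")
```

In this toy setup, forced choice only compares confidence within each pair, so any systematic over- or under-confidence cancels out, which echoes the point made in the last bullet above.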