
Signal Detection and Information Theory

  • Writer: Lena
  • Sep 28, 2018
  • 3 min read

Methods and Considerations When Designing and Testing System Interfaces


Signal Detection Theory (SDT)

Signal Detection Theory (SDT) is a statistical model used to measure how well information-bearing patterns (signals) can be correctly distinguished from random patterns (noise). This measurement evaluates sensitivity in decision-making problems by determining a ‘detectable signal threshold’, the point at which an observer decides a signal is present.


For example, you may want to test if increasing luminance on a neuroimaging scan will also increase a doctor's ability to spot brain tumors in an image.

  • Task: identify brain tumors in a neuroimaging scan

  • Question: does increasing luminance on a neuroimaging scan also increase a doctor's ability to spot brain tumors?

You can run an experimental study evaluating this research question. By presenting scans at varying degrees of luminance, you can measure how doctors' decision-making changes with the display settings when they evaluate brain scans.


When measuring your results, there are four possible outcomes on each scan (their proportions give the corresponding rates):

  • Hit: a tumor is present and the doctor correctly identifies it on the scan

  • Miss: a tumor is present but the doctor fails to identify it on the scan

  • False Alarm: no tumor is present but the doctor incorrectly reports one on the scan

  • Correct Rejection: no tumor is present and the doctor correctly reports that the scan is clear

In SDT, the model assumes two normal distributions along the decision variable: one for noise-only trials (no tumor present) and one for signal-plus-noise trials (tumor present). The x-axis is the perceived strength of the decision variable, while the y-axis is probability; the observer reports a signal whenever the perceived strength exceeds their decision criterion.


In a simplified statistical summary, you want to determine d’, the sensitivity, or the spread between the two distributions (how separable the tumor-present scans are from the tumor-absent scans). A large d’ is desired, meaning that there is a large spread between the signal and noise distributions (if d’ = 0, the signal and noise distributions completely overlap and are therefore indiscriminable). You calculate the sensitivity (d’) for each participant by:

  • Calculating the z-scores for the hit rate and false alarm rate, z(H) and z(FA)

  • Subtracting z(FA) from z(H) to get d’ = z(H) - z(FA)

  • Performing a statistical test to see whether the two conditions (e.g., high vs. low luminance) significantly differ in d’

  • Plotting the Receiver Operating Characteristic (ROC) curve: essentially the hit rate vs. the false alarm rate. This curve helps to visually capture the sensitivity and specificity of decision-making performance (a short computational sketch follows this list)
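
If you want to compute this directly, here is a minimal Python sketch of the per-participant d’ calculation. The hit and false-alarm counts below are hypothetical placeholders, not data from a real study, and extreme rates of exactly 0 or 1 would need a small correction before the z-transform.

```python
# Minimal sketch of a d' calculation for one participant (hypothetical counts).
from scipy.stats import norm

# Hypothetical counts for one participant in one luminance condition
hits, misses = 42, 8                        # tumor-present scans
false_alarms, correct_rejections = 5, 45    # tumor-absent scans

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# z-transform each rate (inverse of the standard normal CDF)
z_hit = norm.ppf(hit_rate)
z_fa = norm.ppf(fa_rate)

# Sensitivity: the standardized separation between the signal and noise distributions
d_prime = z_hit - z_fa
print(f"Hit rate = {hit_rate:.2f}, FA rate = {fa_rate:.2f}, d' = {d_prime:.2f}")
```

Repeating this for each participant and condition gives you the d’ values to compare in your statistical test and to summarize with an ROC curve.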


Information Theory

Two fundamental concepts within information theory, dealing with two aspects of human performance when interacting with human-computer interfaces, are the Hick-Hyman Law and Fitts’s Law. Without delving too far into information theory, here are quick summaries of the two laws (a short sketch of both follows the list):

  • Hick-Hyman Law: the greater the number of options (stimuli), the greater the task performance time (reaction time). This relationship is quantified by the equation RT = a + b log2(n), where RT is reaction time, n is the number of stimuli, and a and b are empirically fitted constants. Therefore, in general, if you reduce the number of options when designing interfaces, then people can make quicker decisions.

  • Fitts’s Law: as the difficulty of moving between two stationary targets increases, so does movement time (MT). Task difficulty, aka the index of difficulty (ID), depends on the distance or amplitude between the two targets (A) and the width of the targets (W). This relationship is quantified by two equations: ID = log2(2A/W), which leads to MT = a + b·ID = a + b log2(2A/W)
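
To make the two laws concrete, here is a minimal Python sketch that plugs numbers into both equations. The constants a and b, the menu sizes, and the target distances and widths are all placeholder assumptions for illustration; in practice they would be fitted from your own experimental data.

```python
import math

# Hick-Hyman Law: RT = a + b * log2(n)
# a and b are placeholder constants; real values come from fitting reaction-time data.
def hick_hyman_rt(n_options, a=0.2, b=0.15):
    return a + b * math.log2(n_options)

# Fitts's Law: MT = a + b * ID, where ID = log2(2A / W)
# A is the distance (amplitude) to the target and W is its width.
def fitts_mt(amplitude, width, a=0.1, b=0.1):
    index_of_difficulty = math.log2(2 * amplitude / width)
    return a + b * index_of_difficulty

# Halving a menu from 16 to 8 items shortens the predicted reaction time...
print(f"RT, 16 options: {hick_hyman_rt(16):.2f} s vs. 8 options: {hick_hyman_rt(8):.2f} s")

# ...and a closer, wider button shortens the predicted movement time.
print(f"MT, far/narrow: {fitts_mt(300, 20):.2f} s vs. near/wide: {fitts_mt(150, 40):.2f} s")
```

Either function makes it easy to compare candidate layouts before running a study: fewer options lower the predicted reaction time, and larger or closer targets lower the predicted movement time.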

Interface Considerations

When designing an interface, it is important to consider the decision-making problems, cognitive workload, reaction times, and task difficulties you are putting your users through. This is where signal detection and information theory come in handy: there are methods of scientifically measuring the strength of your decision-inducing signals, task reaction times, movement times, and indexes of difficulty.


Even in applications where you don't want or need to know the exact numbers, these findings can generally be applied to help you organize your interface. For instance, you may consider the Hick-Hyman law when creating a navigational menu: how do you want to categorize information, and how many levels of information do you wish to present? Or maybe you consider Fitts's Law when designing two button sizes and their respective distances from one another. Whatever your purpose, these scientific and data-driven theories can help improve the usability of your interface.


1 Comment

Rob O’Donnell
Oct 01, 2018

Great post! I really liked how you summarized the key concepts of SDT and Information Theory. You outlined the material well, which made it easy to follow and understand.