The question Scerbo (1998) wanted to examine is addressed directly in the article’s title:
“What’s So Boring About Vigilance?” Scerbo analyzed studies using the vigilance
paradigm to induce boredom in order to better understand the role that boredom
plays in performance in vigilance tasks. Scerbo, Greenwald, and Sawin (1992)
tested to see if individuals might attempt to relieve boredom during a
vigilance task by moving a mouse cursor.
In one condition the cursor was displayed on the screen while in another
condition it was not. This study also
sought to measure the interaction between boredom and mental workload in
vigilance. They found that participants who reported the highest levels of
boredom produced the sharpest decrements in performance, making progressively
more mouse movements over time. From the
results, the researchers inferred that the demands of the task must have outweighed any temporary relief from the boredom of the vigil that moving the mouse might have provided. From this analysis they
suggested that such a motor activity may be a good measure of boredom during
vigilance tasks. However, if mouse movements increase and performance decreases as time goes on, couldn’t the decline in performance be affected by attending to the motor movement itself? Because performance is also being measured, even if motor movement is a good measure of boredom, could its possible effects on the vigilance task exaggerate the true association between the amount of boredom experienced and one’s performance during the vigil?

In another experiment by Scerbo and Sawin
(1994), participants were told that they would undergo three tasks, though they
really only were provided two: a vigilance task and a more complicated
kaleidoscope task. They were provided
the opportunity to stop a task early if they did not want to continue. Researchers found that participants spent
significantly more time on the vigilance task when it preceded the kaleidoscope
task compared to when it followed it, while the time on the kaleidoscope task was
unaffected by order. Compared to the kaleidoscope task, participants reported higher levels of boredom and stress, and had more difficulty concentrating, during the vigilance task. Levels of boredom
on each task were independent of task order. Prinzel, Sawin, and Scerbo (1995)
repeated this study but without offering participants the chance to end a task
early; instead, session durations were set from the mean session times of the earlier study. Overall levels of boredom were
higher for the vigilance task than for the kaleidoscope task. However, here this effect was moderated by order, such that higher levels of boredom were reported during the vigil when it followed the kaleidoscope task than when it preceded it.
Jerison and Pickett (1964) wanted to look into the effects of the waning of attention
during boredom. In order to do so, they
tested the effect of event rate on vigilance by comparing observing rates under a high event rate of 30 events per minute to those under a low event rate of 5 events per minute. This
study provided very interesting findings, as the results were the exact
opposite of what the researchers had originally predicted. They hypothesized
that those in the lower event rate condition would have less detection accuracy
and a steeper decline in detection rate as time lapsed, compared to those in
the high event rate condition. This
hypothesis was based on the arousal point of view, in which the passive waning
of attention in vigilance tasks is explained by a decrease in arousal due to
the insufficient incoming stimuli needed to maintain attention. While the researchers did find a significant
effect of event rate on performance, this effect was such that subjects in the
low event rate condition performed significantly better (i.e., they were able
to detect more signals) than those in the high event rate condition. The results, therefore, suggested that
instead of this effect of arousal, the decision-theory approach of vigilance
was at play. In this decision-theory
point of view, it is inferred that the observer decides how much he will
attend to the recurring nonsignal events, depending on the expected value of
doing so. In other words, the amount of
attention that the observer will devote to the task is based on the proportion
of signal to nonsignal events. Therefore, because in the low event rate condition there was a substantially larger chance that an event would be a signal than in the high event rate condition, observers were more likely to attend to each event. With signals occurring at fixed
times, there is an inverse relationship between event rate and probability of
success (i.e., that an event will be a signal).
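To make the arithmetic concrete, here is a minimal sketch of that inverse relationship, using the two event rates from the study; the per-minute signal rate of 2 is a hypothetical value chosen for illustration, not a figure taken from the paper:

```python
# Hypothetical illustration: with a fixed number of signals per minute,
# the probability that any single event is a signal falls as the event
# rate rises. The signal rate of 2/min is an assumed value.
signals_per_min = 2

for events_per_min in (5, 30):  # low vs. high event rate conditions
    p_signal = signals_per_min / events_per_min
    print(f"{events_per_min:2d} events/min -> P(event is signal) = {p_signal:.3f}")
```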
Therefore, the expected value of observing an event increases as the
event rate decreases, which they used to explain why the vigilance decrement
was only found in the high event rate condition. I found it very interesting that the
researchers’ hypothesis was the opposite of their findings. There is so much pressure to publish that it is uncommon to find an article in which the results contradict the original hypothesis. Instead, researchers might take their findings and reword the original hypothesis to reflect what they ultimately found. “Increasing competition for shrinking government
budgets for research and the disproportionately large rewards for publishing in
the best journals have exacerbated the temptation to fudge results or ignore
inconvenient data.”[1]
While I am not in any way insinuating that all or even most recent articles are fraudulent or offer misleading results, I do at times question how some very specific hypotheses could so accurately reflect their results without the researchers having previously run the experiment.
However, with the amount of research competing for publication and the difficulty of publishing, it is increasingly important to question and critique experimental designs that seem unrealistic or too good to be true.
While Jerison found
that the decision-theory approach helped to better explain his results, Broadbent and Gregory (1965) started with the decision theory of vigilance as
the basis of their design, testing to see if their findings would support or
reject it. At the time of this study,
the majority of past research had interpreted the vigilance decrement as a
result of the decrease in strength of evidence from which the observer could
use to make a decision as the duration of a vigil increased. However, Broadbent and Gregory (1963) instead found that as time passed, observers raised their signal detection criterion, which is what caused the reported decrease in signal detection rate in doubtful situations. In other words, as
time passed during the vigil, observers seemed to require more evidence of a
signal’s presence before they were willing to report that they detected
it. To account for gaps in past research, observers in this study had to respond to every flash of light, either confirming or disconfirming signal detection. Broadbent and Gregory (1965) took
Jerison’s findings a step further, looking into noise effects based on signal
rate and the number of signal sources.
If their results further confirmed the decision theory that Jerison
believed, then they would expect to find that observers’ criterion would be
less cautious when the probability that an event is a signal is high. Therefore, Beta should change when signal
rate changes. They found that a decrease in noise generally did not affect the total number of detections; however, it did affect the level of confidence with which detections were made, with confidence decreasing when the amount of noise was lower. The experiment was conducted over two days, in which the
experimenters found a significant interaction between the trend of Beta and the
day of the experiment, such that Beta increased on the first day but not on the
second day. From this, they inferred
that the increase in criterion cautiousness on the first day caused a decrease
in the number of total detections recorded, which was then maintained on the
second day. This two-day experiment therefore called into question the results of previous one-day studies, suggesting that the phenomenon may only occur in inexperienced operators. However, while I do think that this brings up interesting questions, I am hesitant to consider any of these participants to be “inexperienced.” All of
the participants are “experienced” in the sense that none were new to the
experimental design prior to participating in the actual study. They all underwent demonstration and practice rounds before the experiment, in order to ensure that they understood the directions and could actually distinguish signal from nonsignal events. In
addition, during the practice session that each participant underwent the day
before the experiment, they were given feedback on the number of detections and
false positives recorded. Additionally,
before the practice session and before each of the experiment sessions, each
participant viewed a demonstration of the signal for each of the three
lights. Following each demonstration, on
experiment days, there was a practice period of 100 flashes with a signal rate
corresponding to the rate they would experience during that test session, after
which feedback of results was once again provided. With this in mind, I do not believe it is
accurate to classify participants as “inexperienced” on the “first” test day. Perhaps there would be an even more
significant difference between the recorded detections depending on the day if
so much practice had not preceded the experiment. There were three group conditions: one in which signals were flashed on a single source, one with multiple sources, and one with a single source but at a slower rate than the other two conditions, especially towards the end of the watch.
This third, quiet condition resulted in a decrease in confidence when
recording detections as compared to the other two noise conditions. Results
showed that low signal frequency conditions resulted in participants employing
a more cautious criterion but with no decrease in confidence. From this the
researchers inferred that the harmful effects of noise are most likely less of
a problem in vigilance tasks with low signal frequency. However, the effects of
noise did seem to be more serious when signal rate was higher. They also found that the deteriorating
effects of noise occur whether or not attention is divided between multiple
sources of signals. Therefore, the effects of a low
signal rate are at least partially due to a decrease in observer willingness to
report uncertain signals.
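A criterion shift of this kind is typically quantified as beta, the likelihood ratio at the observer’s criterion. Below is a minimal sketch, assuming the standard equal-variance Gaussian signal detection model; the hit and false-alarm rates are hypothetical values for illustration, not figures from the paper:

```python
from statistics import NormalDist

def criterion_beta(hit_rate: float, fa_rate: float) -> float:
    """Beta (likelihood ratio at the criterion), computed from hit and
    false-alarm rates under the equal-variance Gaussian model."""
    nd = NormalDist()
    z_hit, z_fa = nd.inv_cdf(hit_rate), nd.inv_cdf(fa_rate)
    # Ratio of the signal-distribution density to the noise-distribution
    # density at the criterion point.
    return nd.pdf(z_hit) / nd.pdf(z_fa)

# Hypothetical rates: a more cautious observer late in the watch reports
# fewer hits AND fewer false alarms, which shows up as a larger beta.
early = criterion_beta(0.80, 0.20)  # symmetric rates -> beta = 1 (unbiased)
late = criterion_beta(0.60, 0.05)   # beta > 1 (more cautious criterion)
```

Rising beta over a session captures exactly the pattern described here: fewer reported detections not because the evidence got weaker, but because observers demanded more of it before responding.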
While the evidence thus far supports the idea that most individuals find vigilance tasks boring, Scerbo (1998) also looks into factors that make these tasks
particularly boring to certain individuals.
Sawin and Scerbo (1995) found that some individuals are more susceptible to boredom (high boredom proneness, or BP), and that these are the individuals who show poorer overall performance in addition to higher reports of boredom during vigilance tasks. This idea of boredom proneness might help account for the discrepancies found in this vigilance research, and it may pose important implications for real-world vigilance tasks, as some individuals may be more susceptible to the vigilance decrement than others. Vigilance tasks may not be for everyone, but this idea suggests that they weigh more heavily on some individuals than others.