Evidence for added value of baseline testing in computer-based cognitive assessment. Roebuck-Spencer TM, Vincent AS, Schlegel RE, Gilliland K. J Athl Train. 2013;48(3).
Take Home Message: Computer-based neurocognitive tests for determining whether a patient has a concussion may produce false positives; thus, a gold standard is still needed.
Baseline computer-based cognitive
testing is commonplace in athletics; however, more information is needed
regarding the validity of comparing post-injury results with baseline
performance. Therefore, Roebuck-Spencer and colleagues completed a study of
8,002 military service members (91% male, ~27 years old) to examine the added
value of baseline testing in computer-based cognitive testing by comparing 2
methods of classifying atypical performance: (1) baseline-referenced and (2) norm-referenced. All participants took the Automated Neuropsychological Assessment
Matrix (ANAM) prior to and after deployment.
Participants were excluded if they (1) reported a history of concussion during deployment, (2) had nonspecific injuries, (3) were extreme outliers at either time point, (4) had incomplete data on potential injury history, or (5) had a pre-deployment to post-deployment interval of less than 60 days. Trained test administrators conducted the testing
in a group setting. Post-deployment testing took place on day 6 of a 7-day
reintegration process. Notable declines in neurocognitive performance were
identified using a reliable change index (RCI). Atypical performance at post-deployment was defined in 2 ways: the baseline-referenced method (post-deployment scores notably declined relative to the individual's own baseline) and the norm-referenced method (the individual's ANAM score fell below the normal range of scores from a large sample of healthy individuals). Overall, the two methods performed similarly, classifying 3.7% and 3.4% of participants as having atypical post-deployment scores with the baseline-referenced and norm-referenced methods, respectively. Interestingly, however, the two methods often disagreed about which individuals were atypical. Of the 147 individuals classified as atypical by the baseline-referenced method, 68% (100 individuals) were classified as normal by the norm-referenced method. Similarly, of the 137 participants classified as atypical by the norm-referenced method, 66% (90
individuals) showed no change in test performance from pre-deployment to
post-deployment.
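To illustrate why the two methods can disagree, here is a minimal Python sketch of both classification rules. The reliability, cutoffs, and score values below are illustrative assumptions (the study does not report these exact parameters), but the logic mirrors the two approaches: a reliable-change comparison against one's own baseline versus a z-score comparison against healthy norms.

```python
import math

# Illustrative sketch only: the reliability (r), cutoffs, and scores
# are assumptions for demonstration, not the ANAM's published values.

def reliable_change_index(baseline, post, sd_baseline, test_retest_r):
    """RCI: change score divided by the standard error of the difference
    (Jacobson & Truax formulation)."""
    sem = sd_baseline * math.sqrt(1 - test_retest_r)  # standard error of measurement
    se_diff = math.sqrt(2 * sem ** 2)                 # SE of the difference score
    return (post - baseline) / se_diff

def atypical_by_baseline(baseline, post, sd_baseline=15, r=0.8, cutoff=-1.645):
    """Baseline-referenced: flag a reliable decline from one's own baseline."""
    return reliable_change_index(baseline, post, sd_baseline, r) < cutoff

def atypical_by_norms(post, norm_mean=100, norm_sd=15, cutoff=-1.645):
    """Norm-referenced: flag a score below the healthy-sample range."""
    return (post - norm_mean) / norm_sd < cutoff

# A high-baseline individual who drops to the population average:
print(atypical_by_baseline(baseline=125, post=100))  # True  (RCI ~ -2.6)
print(atypical_by_norms(post=100))                   # False (z = 0.0)
```

As the example shows, someone with a high baseline can decline reliably yet still score within the normal range, while someone with a low baseline can score below the norms without any measurable change, which is one plausible reason the two methods flag different people.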
This study presents clinicians with an interesting dilemma regarding the interpretation of computer-based cognitive assessments. While both classification methods had merit, they flagged different participants as atypical in a seemingly healthy population. Although the number of individuals flagged by each method was small (3 to 4 out of every 100), it is concerning that 190 individuals (~2% of the sample) were classified differently by the two methods. Future research may be needed to
determine why these individuals were misclassified and if we can further optimize
computer-based cognitive testing or combine this testing with certain clinical
tests to avoid false findings. It should also be noted, however, that the authors used a military sample that self-reported no concussions during deployment and used just one computer-based cognitive test. Some of the individuals
may have experienced an undiagnosed concussion that would account for the
misclassification. Furthermore, it is unclear what the misclassification rate
may be with another computer-based test.
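For readers tracing the arithmetic, the disagreement figures above reduce to simple proportions; the counts below are taken from the study as summarized here.

```python
# Disagreement arithmetic from the counts reported above.
total = 8002                 # service members in the study
baseline_flagged = 147       # atypical by the baseline-referenced method
norm_flagged = 137           # atypical by the norm-referenced method
normal_by_norms = 100        # of the 147, normal by the norm-referenced method
stable_from_baseline = 90    # of the 137, no reliable decline from baseline

print(f"{normal_by_norms / baseline_flagged:.0%}")   # 68%
print(f"{stable_from_baseline / norm_flagged:.0%}")  # 66%
discordant = normal_by_norms + stable_from_baseline  # 190 individuals
print(f"{discordant / total:.1%}")                   # ~2.4%, the "~2%" above
```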
Performing this study in a sample of athletic teams may yield different results because the normative values with respect to age, gender, and other factors would differ. Further, in this population, clinicians would be able to closely monitor injuries and subconcussive events, which may lead to changes in test performance. In the meantime, it is important to keep in mind that computer-based testing, like any clinical test, may produce false-positive and false-negative outcomes; therefore, clinical judgment remains key to minimizing these errors.
Questions
for Discussion: How do you currently use computerized cognitive testing when diagnosing concussions? Have you found it to be an effective method of patient classification?
Written by: Kyle Harris
Reviewed by: Jeffrey Driban
Related Posts:
Reliability of the Online Version of ImPACT in High School Athletes
Clinical Reaction Time: A Simple and Effective Assessment Tool for Concussions
Single or Dual Task For Concussion Assessment?
Concussion Evaluation Methods among High School Football Coaches and Athletic Trainers
Roebuck-Spencer TM, Vincent AS, Schlegel RE, Gilliland K. Evidence for added value of baseline testing in computer-based cognitive assessment. J Athl Train. 2013;48(3). doi:10.4085/1062-6050-48.3.11
I have used ImPACT testing in the past and have been mostly satisfied with the results and the integration of computer-based testing into my concussion evaluation and decision-making. I think the important point to take from this study, and with regards to all concussion decisions, is that we are all still clinicians and must not let the computer take over for our sound clinical judgment. We would be remiss as healthcare providers to rely only on these methods of testing. These are wonderful tools that give an objective baseline measure for post-injury comparison. However, there are false-positive and false-negative results, as mentioned above. I do agree that this type of study can be difficult to apply directly to an athletic team, but it is a great jumping-off point for more research to look further into classification. Also, some of the potential reasons for the misclassifications in the military population would not occur in an athletic team population with a supervising AT who could monitor those aspects more closely.
I agree with the above-mentioned points and believe that a critical aspect of our job as clinicians is trusting our instincts even if some methods of testing tell us otherwise. As technology advances, electronic testing is becoming more and more popular, and I agree that the issue is whether it has the capability to take the place of in-person evaluations and the relationships between clinicians and their athletes. While in theory these new methods could assist clinicians with a diagnosis, there is still a long way to go before they can become the gold standard for cognitive testing.
After reading the article, I am very intrigued by the possible association among concussion, decreased post-test scores, and PTSD. As mentioned, this is an aspect not typically seen in athletics, so how can we be sure whether this condition affects post-test scores? Military personnel have a very different experience than athletes, and perhaps alterations should be made to their testing to account for this. I am very interested to see what research stems from this article.
I completely agree with Colby that computers cannot completely dictate our clinical decisions. We should use them in conjunction with our best clinical judgment. With this being said, I think many of the false positives and differences between the baseline-referenced and norm-referenced methods can be explained by the fact that this research was conducted on military personnel pre- and post-deployment. Kelsey mentioned PTSD as one possible explanation for such a phenomenon, and I have to agree. I also think that unreported concussions would not be surprising in a military setting. Many of these men become incredibly attached to their units, as well as feeling very obligated toward their duties as soldiers. Reporting a concussion could take them off the battlefield and away from their comrades and also remove a sense of identity. I also think that these military men may have altered brain function when they return home from war, PTSD or not. They could have suffered brain damage from loud noises such as bombs and gunshots. War is also incredibly traumatizing. Honestly, the results of this study don't surprise me at all. I would like to see a study that looks at how emotional state might affect baseline concussion symptoms as well as post-concussion retests.
Great conversation. I think we all agree that, overall, a computer-based testing system is a tool in the clinician's toolbox, not the only thing at our disposal. Colby, I strongly agree with your statement that this study is a "great jumping off point." I see many avenues branching off of this study. For the military population, I believe Kelsey's point about looking at PTSD would be extremely beneficial. Personally, though, I think Lauren's mention of looking at unreported concussions/mTBIs would be a perfect way to continue to bridge the gap between the military and athletic worlds. I think the military could greatly benefit from looking at if, when, and how often their soldiers are sustaining concussive blows. I think this could ultimately help improve care of soldiers in the field as well as their long-term care. Athletically, I believe that looking at unreported concussions would inform sports medicine professionals tremendously. All too often, our athletes want to remain with their team, much like soldiers. Frequently, in my experience, I have had athletes attempt to underreport concussion symptoms to avoid me "pulling them." What are your thoughts on the notion that computer-based concussion tests could be used to focus on those who may not report or underreport signs and symptoms of a concussion? Do you feel this is a specific use case for the computer-based system, or do you feel as though more needs to be done before it gets to that point?
All of these are very good points and I generally agree with all of them. Like most of you, I think it is extremely important to not become too reliant on these great technological advancements in the TOOLS that are becoming available to clinicians. I hope that these advancements (not only within concussion testing, but musculoskeletal evaluation overall) do not take precedence over clinical experience and behavioral recognition of the athletes we interact with regularly.
I am hesitant to put too much trust in these baseline concussion tests. As Kyle alluded to, athletes are notorious for not reporting concussions in an effort to avoid being pulled from the game. This effort to stay in the game seems to happen during baseline testing as well. Because of all the negative media attention surrounding concussions, athletes know what concussion baseline testing is and what it is used for. Athletes are thinking ahead and intentionally not scoring their best during baseline testing. Peyton Manning seems like an honest man, but he has come out and said that he scored low on these tests to help ensure that he could obtain a post-injury score that would still allow him to play. If he is doing it, isn't it logical to think he might not be the only one?
It is interesting to think about designing a computer-based test that focuses on those who underreport concussion signs and symptoms, but it seems early to start doing that. I still think that even though current baseline concussion testing is objective in nature, there is still a subjective component: an athlete can manipulate the test to score low at baseline so that a similar score after a blow might still clear them to play.
To account for this, I trust that researchers are already working on ways to truly and objectively evaluate brain activity in a way that significantly minimizes a patient's ability to manipulate the data.
Jake,
Well put. The example of Peyton Manning is especially eye-opening. I think a strength of a computer-based test is the lack of familiarity student-athletes may have with it. Although this advantage is fleeting as such testing becomes more popular, paper-and-pencil tests like the SCAT3 are easily obtained and more specific to signs and symptoms. It would be much easier for an athlete to recognize what the clinician is looking for and manipulate their answers. I believe the answer, ultimately, will lie in more objective ways to measure changes in brain chemistry that signify whether or not a concussion has been sustained. At the NATA conference last week, there were even products being presented that are placed on an athlete and alert the clinician when a substantial impact has occurred. While still in their infancy, perhaps tools such as these would be useful. Have you had any experience with something like this?
Obviously, a lot more research needs to be conducted regarding concussions. Although athletes may try to manipulate their baseline tests, computerized tests make it much more challenging for athletes to "fake" their scores. I have used ImPACT testing as well, and it has speed and accuracy components that are evaluated simultaneously throughout the test. This makes it much more difficult for athletes to skew their scores, because if they take too long or choose too many incorrect answers, their scores will be affected. In addition to baseline concussion testing, we should also look into the benefits of using a brain acoustic monitor (BAM) as a baseline test. There is little research on the BAM currently, but some theorize that it could identify changes that have occurred in the brain as a result of concussions. Concussions are a hot topic and definitely require much more research. At the present time, I believe that we should use all of the tools we have, especially our own clinical instincts.
Kyle,
I have heard of these devices that are placed on athletes to alert clinicians when a substantial blow occurs, but I have not had the opportunity to use them in clinical practice. The idea behind these devices seems great; however, I will be interested to find out how practical, accurate, and durable they are. I agree that these ideas about concussion are relatively new, and I think technological advances will begin to flourish as more data develop. I just hope that clinicians are educated when listening to sales pitches for these devices and do not rely solely on the evaluation of a computer but instead, given the infancy of these ideas, use past experience to guide their clinical decisions.
Anonymous and Jake,
You both make great points! I agree that we should use all the tools at our disposal. Logically, no single tool will catch all concussions. We should employ more than one tool so that if a concussed athlete is not identified by one method, there are redundancies to identify them. Jake, your comment on the impact sensors reflects a concern that many clinicians, including myself, share. I have spoken with some of the marketing individuals from these companies, and the tool seems as though it would be very valuable to a clinician on a sideline. But ultimately, as we have found with any other product on the market, time and clinician trials will be the deciding factor here.