Acute Sport Concussion Assessment Optimization: A Prospective Assessment from the CARE Consortium

Broglio S, Harezlak J, Katz B, Zhao S, McAllister T, McCrea M, CARE Consortium Investigators. Sports Medicine. doi: 10.1007/s40279-019-01155-0.

https://link.springer.com/article/10.1007/s40279-019-01155-0

Take-Home Message

A sideline concussion assessment that combines a symptom inventory, the Standardized Assessment of Concussion (SAC), and the Balance Error Scoring System (BESS; firm surface) offers the best diagnostic accuracy. Annual baseline assessments provide only slightly better diagnostic accuracy than post-injury assessments alone.

Summary

Sports medicine recommendations advocate for concussion evaluations that include at least symptom, balance, and neurocognitive assessments. However, it is unclear whether there is an optimal set of concussion assessments or whether we should compare post-injury results to pre-season baseline evaluations collected each year. Therefore, the authors aimed to 1) examine the sensitivity and specificity of common concussion assessment tools, individually and combined, within 72 hours after a concussion, and 2) establish the optimal frequency and utility of baseline assessments among collegiate athletes. They used data from the Concussion Assessment, Research, and Education (CARE) Consortium, which included 1,458 athletes who sustained 1,640 concussions across 29 institutions. All athletes completed annual baseline assessments and similar assessments up to three times within 72 hours after an injury. Post-injury assessments were categorized as sideline (0 to 1.25 hours), post-event (1.26 to 24 hours), or clinic (25 to 72 hours). The authors examined post-injury assessments in three ways: 1) no baseline comparison, 2) same-season (i.e., annual) baseline comparison, and 3) previous-season (i.e., biennial) baseline comparison (only 770 concussions).

The authors found that the strongest diagnostic accuracy during sideline testing came from the SCAT symptom inventory, the SAC, and the BESS (firm surface), individually or combined, regardless of which baseline comparison was used. At the post-event and clinic evaluations, the symptom checklists performed best among the traditional concussion assessments; indeed, symptoms alone may be most accurate after the event, especially beyond the first day. The Vestibular/Ocular Motor Screening (VOMS), which was only conducted at 9 institutions, had the strongest sensitivity of any tool. Computerized neurocognitive testing via ImPACT was one of the weakest assessment tools. Diagnostic accuracy of post-injury assessments was only slightly better when they were compared with an annual, pre-injury baseline.

Viewpoints

These findings are similar to a previous study from the CARE Consortium, indicating that the symptom inventory is the strongest sideline assessment tool. However, relying on a symptom checklist alone can be risky because it depends on patients choosing to disclose their symptoms. The authors’ findings support concussion guidelines that recommend using multiple assessments (e.g., symptoms, balance) to evaluate a concussion. The VOMS is a promising tool but previously lacked diagnostic accuracy studies to truly evaluate its utility. The authors’ results indicate that the VOMS has some of the strongest diagnostic accuracy among assessment tools to date and may become a new consensus-recommended tool in the toolbox. However, we need to be cautious because the VOMS was only tested on a subset of the participants, and it’s unclear how the VOMS would perform among a larger group of athletes at numerous institutions. The relatively weak diagnostic accuracy of computerized neurocognitive testing may seem surprising but is consistent with previous research reporting that the sensitivity and specificity of ImPACT are both 52% (2% better than flipping a coin). Lastly, these findings indicate that conducting annual baseline assessments may be ideal. However, the added value equates to only 7% greater specificity and 17% greater sensitivity. Clinicians should critically evaluate the time and resources needed for baseline assessments to determine whether the total cost of baseline testing is worth this modest gain.
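For readers less familiar with these diagnostic-accuracy terms, here is a minimal Python sketch of how sensitivity and specificity are computed from a 2x2 confusion matrix. The counts are hypothetical and are not data from this study; the 52% values simply mirror the ImPACT figures cited above for illustration.

```python
# Illustrative only: sensitivity and specificity from a 2x2 confusion matrix.
# All counts below are hypothetical, not data from the CARE Consortium study.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of concussed athletes the test correctly flags."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of non-concussed athletes the test correctly clears."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical example: 100 concussed and 100 non-concussed athletes,
# with a test that is correct 52% of the time in each group.
print(f"Sensitivity: {sensitivity(52, 48):.2f}")  # 0.52
print(f"Specificity: {specificity(52, 48):.2f}")  # 0.52
```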

Questions for Discussion

Are you familiar with the VOMS? What are some barriers to implementing the VOMS in your clinical practice? Do you think these findings among collegiate athletes translate to younger athletes?

Written by: Landon B. Lempke, MEd, LAT, ATC
Reviewed by: Jeffrey Driban

Related Posts

The Diagnostic Value of Concussion Assessment Tools and Its Individual Components
The Effectiveness of Computerized Neurocognitive Testing
Concussion Assessment: How the Setting Affects the Tools Used by Athletic Trainers
Should Athletic Trainers Add Anxiety Surveys to Preseason Baseline Testing?