Abstract
Confidence intervals (CIs) have been highlighted as “the best” device for reporting statistical findings. However, researchers often fail to maximize the utility of CIs in their work. We seek to (a) present a primer on CIs; (b) outline the reporting practices of health researchers; and (c) discuss implications for statistical best-practice reporting in social science research. Approximately 1,950 peer-reviewed articles from six health education, promotion, and behavior journals were examined. We recorded: (a) whether the author(s) reported a CI; (b) whether the author(s) reported a CI estimate width, either numerically or visually; and (c) whether an associated effect size was reported alongside the CI. Of the 1,245 quantitative articles in the final sample, 46.5% (n = 580) reported confidence interval use, and 518 provided numerical/visual interval estimates. Of the articles reporting CIs, 383 (64.2%) reported a CI with an associated effect size, meeting the American Psychological Association’s (APA) recommendation for statistical reporting best practice. The health education literature demonstrates inconsistent statistical reporting practices, falling short of employing best practices and of consistently meeting the minimum expectations recommended by the APA. To maximize the utility and implications of health education, promotion, and behavior research, future investigations should provide comprehensive information regarding research findings.
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License
Recommended Citation
Barry, Adam E.; Reyes, Jovanni; Szucs, Leigh; Goodson, Patricia; and Valdez, Danny (2021). "Should We Be Confident in Published Research? A Case Study of Confidence Interval Reporting in Health Education and Behavior Research," Health Behavior Research: Vol. 4, No. 1. https://doi.org/10.4148/2572-1836.1089