TY - JOUR
AU - L. M. Kern
AU - S. Malhotra
AU - Y. Barron
AU - J. Quaresimo
AU - R. Dhopeshwarkar
AU - M. Pichardo
AU - A. M. Edwards
AU - R. Kaushal
TI - Accuracy of Electronically Reported "Meaningful Use" Clinical Quality Measures: A Cross-sectional Study
T2 - Annals of Internal Medicine
AB - BACKGROUND: The federal Electronic Health Record Incentive Program requires electronic reporting of quality from electronic health records, beginning in 2014. Whether electronic reports of quality are accurate is unclear. OBJECTIVE: To measure the accuracy of electronic reporting compared with manual review. DESIGN: Cross-sectional study. SETTING: A federally qualified health center with a commercially available electronic health record. PATIENTS: All adult patients eligible in 2008 for 12 quality measures (using 8 unique denominators) were identified electronically. One hundred fifty patients were randomly sampled per denominator, yielding 1154 unique patients. MEASUREMENTS: Receipt of recommended care, assessed by both electronic reporting and manual review. Sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and absolute rates of recommended care were measured. RESULTS: Sensitivity of electronic reporting ranged from 46% to 98% per measure. Specificity ranged from 62% to 97%, positive predictive value from 57% to 97%, and negative predictive value from 32% to 99%. Positive likelihood ratios ranged from 2.34 to 24.25 and negative likelihood ratios from 0.02 to 0.61. Differences between electronic reporting and manual review were statistically significant for 3 measures: Electronic reporting underestimated the absolute rate of recommended care for 2 measures (appropriate asthma medication [38% vs. 77%; P < 0.001] and pneumococcal vaccination [27% vs. 48%; P < 0.001]) and overestimated care for 1 measure (cholesterol control in patients with diabetes [57% vs. 37%; P = 0.001]). LIMITATION: This study addresses the accuracy of the measure numerator only. CONCLUSION: Wide measure-by-measure variation in accuracy threatens the validity of electronic reporting. If variation is not addressed, financial incentives intended to reward high quality may not be given to the highest-quality providers. PRIMARY FUNDING SOURCE: Agency for Healthcare Research and Quality.
PY - 2013
VL - 158
IS - 2
CY - United States
SN - 1539-3704; 0003-4819
DO - 10.7326/0003-4819-158-2-201301150-00001
C5 - HIT & Telehealth
U2 - 23318309
ER -