Interrater agreement with a standard scheme for classifying medication errors

Ryan A. Forrey, Craig A. Pedersen, Philip J. Schneider

Research output: Contribution to journal › Article

53 Citations (Scopus)

Abstract

Purpose. The interrater agreement for and reliability of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index for categorizing medication errors were determined.

Methods. A letter was sent by the U.S. Pharmacopeia to all 550 contacts in the MEDMARX system user database. Participants were asked to categorize 27 medication scenarios using the NCC MERP index and were randomly assigned to one of three tools (the index alone, a paper-based algorithm, or a computer-based algorithm) to assist in categorization. Because the NCC MERP index accounts for harm and cost, and because categories could be interpreted as substantially similar, study results were also analyzed after the nine error categories were collapsed to six. Interrater agreement was measured using Cohen's kappa.

Results. Of 119 positive responses, 101 completed surveys were returned, for a response rate of 85%. There were no significant differences in baseline demographics among the three groups. The overall interrater agreement for the participants, regardless of group assignment, was substantial, with a kappa value of 0.61 (95% confidence interval [CI], 0.41-0.81). There was no difference in kappa values among the three study groups or the tools used to aid in medication error classification. When the index was condensed from nine categories to six, interrater agreement increased, with a kappa value of 0.74 (95% CI, 0.56-0.90).

Conclusion. Overall interrater agreement for the NCC MERP index for categorizing medication errors was substantial. The tool provided to assist with categorization did not influence overall categorization. Further refinement of the scale could improve the usefulness and validity of medication error categorization.
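
The analysis rests on Cohen's kappa, which adjusts the raw agreement between raters for the agreement expected by chance alone. The Python sketch below is a minimal two-rater illustration, not the authors' analysis: the ratings are made up, and the mapping that collapses the nine NCC MERP categories (A through I) into six broader groups is hypothetical, since the abstract does not state which categories the authors treated as substantially similar. The study pooled 101 respondents, so its published values come from a multi-rater calculation rather than a single rater pair.

from collections import Counter

# The NCC MERP index defines nine severity categories, labeled A through I.
NCC_MERP_CATEGORIES = list("ABCDEFGHI")

def cohens_kappa(rater1, rater2, labels):
    """Cohen's kappa for two raters: (observed - chance) / (1 - chance) agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    chance = sum((counts1[c] / n) * (counts2[c] / n) for c in labels)
    return (observed - chance) / (1 - chance)

# Hypothetical nine-to-six collapse, for illustration only; the abstract does not
# specify which categories the authors merged.
COLLAPSE = {
    "A": "potential error",
    "B": "error, no harm (1)",
    "C": "error, no harm (2)", "D": "error, no harm (2)",
    "E": "temporary harm", "F": "temporary harm",
    "G": "serious harm", "H": "serious harm",
    "I": "death",
}

# Made-up categorizations of nine scenarios by two raters.
rater1 = list("ABCCDEFGH")
rater2 = list("ABCDDEEGI")

print("kappa, nine categories:", cohens_kappa(rater1, rater2, NCC_MERP_CATEGORIES))
print("kappa, six categories: ", cohens_kappa([COLLAPSE[r] for r in rater1],
                                              [COLLAPSE[r] for r in rater2],
                                              sorted(set(COLLAPSE.values()))))

With these invented ratings, the nine-category kappa is 0.625 and the six-category kappa rises to about 0.86, mirroring the qualitative pattern the study reports (0.61 rising to 0.74) when near-equivalent categories are merged.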

Original language: English (US)
Pages (from-to): 175-181
Number of pages: 7
Journal: American Journal of Health-System Pharmacy
Volume: 64
Issue number: 2
DOIs: 10.2146/ajhp060109
State: Published - Jan 15 2007
Externally published: Yes

Fingerprint

  • Medication Errors
  • Confidence Intervals
  • Pharmacopoeias
  • Demography
  • Databases
  • Costs and Cost Analysis

Keywords

  • Classification
  • Data collection
  • Errors, medication
  • Methodology
  • National Coordinating Council for Medication Error Reporting and Prevention
  • Reports

ASJC Scopus subject areas

  • Pharmaceutical Science
  • Leadership and Management

Cite this

Interrater agreement with a standard scheme for classifying medication errors. / Forrey, Ryan A.; Pedersen, Craig A.; Schneider, Philip J.

In: American Journal of Health-System Pharmacy, Vol. 64, No. 2, 15.01.2007, p. 175-181.

Research output: Contribution to journal › Article

@article{4f57bb7e39604ceb9eb4b0f6e5217695,
title = "Interrater agreement with a standard scheme for classifying medication errors",
abstract = "Purpose. The interrater agreement for and reliability of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index for categorizing medication errors were determined. Methods. A letter was sent by the U.S. Pharmacopeia to all 550 contacts in the MEDMARX system user database. Participants were asked to categorize 27 medication scenarios using the NCC MERP index and were randomly assigned to one of three tools (the index alone, a paper-based algorithm, or a computer-based algorithm) to assist in categorization. Because the NCC MERP index accounts for harm and cost, and because categories could be interpreted as substantially similar, study results were analyzed after the nine error categories were collapsed to six. The interrater agreement was measured using Cohen's kappa value. Results. Of 119 positive responses, 101 completed surveys were returned for a response rate of 85{\%}. There were no significant differences in baseline demographics among the three groups. The overall interrater agreement for the participants, regardless of group assignment, was substantial at 0.61 (95{\%} confidence interval [CI], 0.41-0.81). There was no difference among the kappa values of the three study groups and the tools used to aid in medication error classification. When the index was condensed from nine categories to six, the interrater agreement increased with a kappa value of 0.74 (95{\%} CI, 0.56-0.90). Conclusion. Overall interrater agreement for the NCC MERP index for categorizing medication errors was substantial. The tool provided to assist with categorization did not influence overall categorization. Further refining of the scale could improve the usefulness and validity of medication error categorization.",
keywords = "Classification, Data collection, Errors, medication, Methodology, National Coordinating Council for Medication Error Reporting and Prevention, Reports",
author = "Forrey, {Ryan A.} and Pedersen, {Craig A.} and Schneider, {Philip J}",
year = "2007",
month = "1",
day = "15",
doi = "10.2146/ajhp060109",
language = "English (US)",
volume = "64",
pages = "175--181",
journal = "American Journal of Health-System Pharmacy",
issn = "1079-2082",
publisher = "American Society of Health-Systems Pharmacy",
number = "2",

}

TY - JOUR

T1 - Interrater agreement with a standard scheme for classifying medication errors

AU - Forrey, Ryan A.

AU - Pedersen, Craig A.

AU - Schneider, Philip J

PY - 2007/1/15

Y1 - 2007/1/15

N2 - Purpose. The interrater agreement for and reliability of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index for categorizing medication errors were determined. Methods. A letter was sent by the U.S. Pharmacopeia to all 550 contacts in the MEDMARX system user database. Participants were asked to categorize 27 medication scenarios using the NCC MERP index and were randomly assigned to one of three tools (the index alone, a paper-based algorithm, or a computer-based algorithm) to assist in categorization. Because the NCC MERP index accounts for harm and cost, and because categories could be interpreted as substantially similar, study results were analyzed after the nine error categories were collapsed to six. The interrater agreement was measured using Cohen's kappa value. Results. Of 119 positive responses, 101 completed surveys were returned for a response rate of 85%. There were no significant differences in baseline demographics among the three groups. The overall interrater agreement for the participants, regardless of group assignment, was substantial at 0.61 (95% confidence interval [CI], 0.41-0.81). There was no difference among the kappa values of the three study groups and the tools used to aid in medication error classification. When the index was condensed from nine categories to six, the interrater agreement increased with a kappa value of 0.74 (95% CI, 0.56-0.90). Conclusion. Overall interrater agreement for the NCC MERP index for categorizing medication errors was substantial. The tool provided to assist with categorization did not influence overall categorization. Further refining of the scale could improve the usefulness and validity of medication error categorization.

AB - Purpose. The interrater agreement for and reliability of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index for categorizing medication errors were determined. Methods. A letter was sent by the U.S. Pharmacopeia to all 550 contacts in the MEDMARX system user database. Participants were asked to categorize 27 medication scenarios using the NCC MERP index and were randomly assigned to one of three tools (the index alone, a paper-based algorithm, or a computer-based algorithm) to assist in categorization. Because the NCC MERP index accounts for harm and cost, and because categories could be interpreted as substantially similar, study results were analyzed after the nine error categories were collapsed to six. The interrater agreement was measured using Cohen's kappa value. Results. Of 119 positive responses, 101 completed surveys were returned for a response rate of 85%. There were no significant differences in baseline demographics among the three groups. The overall interrater agreement for the participants, regardless of group assignment, was substantial at 0.61 (95% confidence interval [CI], 0.41-0.81). There was no difference among the kappa values of the three study groups and the tools used to aid in medication error classification. When the index was condensed from nine categories to six, the interrater agreement increased with a kappa value of 0.74 (95% CI, 0.56-0.90). Conclusion. Overall interrater agreement for the NCC MERP index for categorizing medication errors was substantial. The tool provided to assist with categorization did not influence overall categorization. Further refining of the scale could improve the usefulness and validity of medication error categorization.

KW - Classification

KW - Data collection

KW - Errors, medication

KW - Methodology

KW - National Coordinating Council for Medication Error Reporting and Prevention

KW - Reports

UR - http://www.scopus.com/inward/record.url?scp=33947169206&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33947169206&partnerID=8YFLogxK

U2 - 10.2146/ajhp060109

DO - 10.2146/ajhp060109

M3 - Article

C2 - 17215468

AN - SCOPUS:33947169206

VL - 64

SP - 175

EP - 181

JO - American Journal of Health-System Pharmacy

JF - American Journal of Health-System Pharmacy

SN - 1079-2082

IS - 2

ER -