Pitfalls in presenting and interpreting clinical trial data

Research output: Contribution to journal › Article

Abstract

Information generated by a clinical trial, when conveyed to health professionals and prospective patients, is affected by the original design of the trial and by the manner in which the results are presented. One problem in study design is the management of comparison groups in randomized assignments. When a comparison group is treated with an accepted standard compound, the chosen standard drug may be one that is associated with more side effects and complications than later modifications of the standard. Inadequate dosing of the comparison group can inflate the relative effect size of the experimental compound. Choosing a standard with a verifiable dose reference range can avoid this pitfall. In reporting results, relative score changes on a rating scale are meaningless without reference to an absolute value reflecting a clinically relevant degree of remission. The validity of the rating instruments chosen must be judged in the context of the specific population to which they are applied. In the reporting of effects, the emphasis on significance of differences may obscure the critical distinction between statistical significance and clinical relevance, and graphs can appear to overstate a change over time by truncating the ordinate axis.
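
To make the abstract's point about statistical significance versus clinical relevance concrete, the following is a minimal, hypothetical sketch (not taken from the article): the rating scale, group means, sample size, and the "minimal clinically important difference" threshold are all invented for illustration. It shows how, with large trial arms, a between-group difference far smaller than any clinically meaningful change can still yield a very small p-value.

# Hypothetical illustration (not from Thienhaus, 1995) of a statistically
# significant but clinically trivial difference. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 5000                # large arms make tiny differences "significant"
sd = 8.0                # spread of change scores on a hypothetical rating scale
experimental = rng.normal(-11.0, sd, n)   # mean change: -11 points
comparison   = rng.normal(-10.0, sd, n)   # mean change: -10 points

t_stat, p_value = stats.ttest_ind(experimental, comparison)
mean_diff = experimental.mean() - comparison.mean()

mcid = 5.0  # assumed minimal clinically important difference for this scale

print(f"between-arm difference: {mean_diff:+.2f} points")
print(f"p-value: {p_value:.2e}")                    # well below 0.05
print(f"exceeds assumed MCID of {mcid} points? {abs(mean_diff) >= mcid}")

An analogous check applies to the abstract's point about relative score changes: a percentage reduction on a rating scale only becomes interpretable once the resulting absolute score is compared with a predefined remission threshold.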

Original language: English (US)
Pages (from-to): 435-438
Number of pages: 4
Journal: Psychopharmacology Bulletin
Volume: 31
Issue number: 2
State: Published - 1995
Externally published: Yes

Keywords

  • clinical trials
  • data display
  • data interpretation, statistical
  • marketing (of health services)
  • psychopharmacology

ASJC Scopus subject areas

  • Psychiatry and Mental health
  • Pharmacology

Cite this

Pitfalls in presenting and interpreting clinical trial data. / Thienhaus, Ole J.

In: Psychopharmacology Bulletin, Vol. 31, No. 2, 1995, p. 435-438.

Research output: Contribution to journal › Article

@article{b4566405848644c385fd108dd76c4b47,
title = "Pitfalls in presenting and interpreting clinical trial data",
abstract = "Information generated by a clinical trial, when conveyed to health professionals and prospective patients, is affected by the original design of the trial and by the manner in which the results are presented. One problem in study design is the management of comparison groups in randomized assignments. When a comparison group is treated with an accepted standard compound, the chosen standard drug may be one that is associated with more side effects and complications than later modifications of the standard. Inadequate dosing of the comparison group can inflate the relative effect size of the experimental compound. Choosing a standard with a verifiable dose reference range can avoid this pitfall. In reporting results, relative score changes on a rating scale are meaningless without reference to an absolute value reflecting a clinically relevant degree of remission. The validity of the rating instruments chosen must be judged in the context of the specific population to which they are applied. In the reporting of effects, the emphasis on significance of differences may obscure the critical distinction between statistical significance and clinical relevance, and graphs can appear to overstate a change over time by truncating the ordinate axis.",
keywords = "clinical trials, data display, data interpretation, statistical, marketing (of health services), psychopharmacology",
author = "Thienhaus, {Ole J}",
year = "1995",
language = "English (US)",
volume = "31",
pages = "435--438",
journal = "Psychopharmacology Bulletin",
issn = "0048-5764",
publisher = "MedWorks Media LLC",
number = "2",
}

TY - JOUR

T1 - Pitfalls in presenting and interpreting clinical trial data

AU - Thienhaus, Ole J

PY - 1995

Y1 - 1995

N2 - Information generated by a clinical trial, when conveyed to health professionals and prospective patients, is affected by the original design of the trial and by the manner in which the results are presented. One problem in study design is the management of comparison groups in randomized assignments. When a comparison group is treated with an accepted standard compound, the chosen standard drug may be one that is associated with more side effects and complications than later modifications of the standard. Inadequate dosing of the comparison group can inflate the relative effect size of the experimental compound. Choosing a standard with a verifiable dose reference range can avoid this pitfall. In reporting results, relative score changes on a rating scale are meaningless without reference to an absolute value reflecting a clinically relevant degree of remission. The validity of the rating instruments chosen must be judged in the context of the specific population to which they are applied. In the reporting of effects, the emphasis on significance of differences may obscure the critical distinction between statistical significance and clinical relevance, and graphs can appear to overstate a change over time by truncating the ordinate axis.

AB - Information generated by a clinical trial, when conveyed to health professionals and prospective patients, is affected by the original design of the trial and by the manner in which the results are presented. One problem in study design is the management of comparison groups in randomized assignments. When a comparison group is treated with an accepted standard compound, the chosen standard drug may be one that is associated with more side effects and complications than later modifications of the standard. Inadequate dosing of the comparison group can inflate the relative effect size of the experimental compound. Choosing a standard with a verifiable dose reference range can avoid this pitfall. In reporting results, relative score changes on a rating scale are meaningless without reference to an absolute value reflecting a clinically relevant degree of remission. The validity of the rating instruments chosen must be judged in the context of the specific population to which they are applied. In the reporting of effects, the emphasis on significance of differences may obscure the critical distinction between statistical significance and clinical relevance, and graphs can appear to overstate a change over time by truncating the ordinate axis.

KW - clinical trials

KW - data display

KW - data interpretation, statistical

KW - marketing (of health services)

KW - psychopharmacology

UR - http://www.scopus.com/inward/record.url?scp=0028822524&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0028822524&partnerID=8YFLogxK

M3 - Article

VL - 31

SP - 435

EP - 438

JO - Psychopharmacology Bulletin

JF - Psychopharmacology Bulletin

SN - 0048-5764

IS - 2

ER -