Pitfalls in presenting and interpreting clinical trial data

O. J. Thienhaus

Research output: Contribution to journal › Article › peer-review


Information generated by a clinical trial, when conveyed to health professionals and prospective patients, is affected by the original design of the trial and by the manner in which the results are presented. One problem in study design is the management of comparison groups in randomized assignments. When a comparison group is treated with an accepted standard compound, the chosen standard drug may be one that is associated with more side effects and complications than later modifications of the standard. Inadequate dosing of the comparison group can inflate the relative effect size of the experimental compound. Choosing a standard with a verifiable dose reference range can avoid this pitfall. In reporting results, relative score changes on a rating scale are meaningless without reference to an absolute value reflecting a clinically relevant degree of remission. The validity of the rating instruments chosen must be judged in the context of the specific population to which they are applied. In the reporting of effects, an emphasis on the significance of differences may obscure the critical distinction between statistical significance and clinical relevance, and graphs can overstate a change over time when the ordinate axis is truncated.
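The point about relative score changes can be illustrated numerically. The sketch below is not from the article; the scale scores and remission cutoff are hypothetical values chosen only to show that a large percentage reduction can coexist with a score still above a clinically meaningful remission threshold.

```python
# Illustrative sketch (hypothetical numbers, not from the article):
# a relative score change can look impressive while the absolute
# endpoint score remains above a remission threshold.

def relative_change(baseline: float, endpoint: float) -> float:
    """Percent reduction from baseline on a rating scale."""
    return 100.0 * (baseline - endpoint) / baseline

BASELINE = 40          # hypothetical baseline severity score
ENDPOINT = 20          # hypothetical endpoint score after treatment
REMISSION_CUTOFF = 7   # hypothetical absolute score defining remission

reduction = relative_change(BASELINE, ENDPOINT)
in_remission = ENDPOINT <= REMISSION_CUTOFF

print(f"Relative reduction: {reduction:.0f}%")  # 50% reduction...
print(f"Clinically remitted: {in_remission}")   # ...yet not remitted
```

Reporting only the 50% reduction would obscure the fact that, on this hypothetical scale, the patient has not reached the absolute score that defines remission.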

Original language: English (US)
Pages (from-to): 435-438
Number of pages: 4
Journal: Psychopharmacology Bulletin
Issue number: 2
State: Published - 1995


Keywords

  • clinical trials
  • data display
  • data interpretation, statistical
  • marketing (of health services)
  • psychopharmacology

ASJC Scopus subject areas

  • Psychiatry and Mental health
  • Pharmacology (medical)


