Outcomes from observational studies: Understanding causal ambiguity

Huw Talfryn Oakley Davies*, IK Crombie

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    4 Citations (Scopus)

    Abstract

    There is growing interest in using health outcomes data from large observational databases to assess the impact of interventions or the performance of services. The usefulness of such data hinges on being able to attribute causality between the interventions or services and the observed outcomes. Unfortunately, such attributions are undermined by a range of methodological difficulties. The findings from outcomes studies may conflict across a broad set of measures, and the outcome measures themselves may lack reliability and validity, especially as they are usually collected unblinded. Differences in case-mix make the use of severity adjustment schemes essential, but such adjustments are inevitably incomplete. In addition, case-mix adjustment increases data demands and runs the risk of introducing upstaging. Further, different adjustment schemes may lead to different judgments, and unadjusted case-mix differences may still confound the findings. Finally, chance variability may lead to spurious findings or hide important differences. Thus, there are many reasons to doubt attributions of causality between interventions and observed outcomes, and such data require circumspect interpretation.
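    To make the case-mix argument concrete, the sketch below (not from the paper; the provider names, severity proportions, mortality rates, and patient numbers are all hypothetical) simulates two providers with identical quality of care but different patient severity. The crude outcome rates diverge purely because of case-mix, while the severity-stratified rates agree, illustrating why unadjusted comparisons of routine outcomes data can mislead.

```python
import random

random.seed(42)

def simulate_provider(n, p_severe, mortality_mild, mortality_severe):
    """Simulate n patients; the outcome depends only on severity, not on the provider."""
    outcomes = []
    for _ in range(n):
        severe = random.random() < p_severe
        p_death = mortality_severe if severe else mortality_mild
        died = random.random() < p_death
        outcomes.append((severe, died))
    return outcomes

# Two hypothetical providers with identical quality of care,
# differing only in the severity of the patients they treat (case-mix).
hospital_a = simulate_provider(5000, p_severe=0.20, mortality_mild=0.02, mortality_severe=0.15)
hospital_b = simulate_provider(5000, p_severe=0.60, mortality_mild=0.02, mortality_severe=0.15)

def crude_rate(outcomes):
    """Overall mortality, ignoring case-mix."""
    return sum(died for _, died in outcomes) / len(outcomes)

def stratum_rate(outcomes, severe):
    """Mortality within a single severity stratum."""
    stratum = [died for s, died in outcomes if s == severe]
    return sum(stratum) / len(stratum)

print(f"Crude mortality      A: {crude_rate(hospital_a):.3f}   B: {crude_rate(hospital_b):.3f}")
print(f"Mild-case mortality  A: {stratum_rate(hospital_a, False):.3f}   B: {stratum_rate(hospital_b, False):.3f}")
print(f"Severe mortality     A: {stratum_rate(hospital_a, True):.3f}   B: {stratum_rate(hospital_b, True):.3f}")
```

    In this toy example the crude rates suggest hospital B performs far worse, yet within each severity stratum the two are indistinguishable. The same logic implies the converse risk noted in the abstract: if the severity measure is incomplete or gamed (upstaging), stratified or model-based adjustment can still leave residual confounding.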

    Original language: English
    Pages (from-to): 153-158
    Number of pages: 6
    Journal: Drug Information Journal
    Volume: 33
    Issue number: 1
    DOIs
    Publication status: Published - 1 Jan 1999

    Keywords

    • Bias
    • Case-mix
    • Confounding
    • Data interpretation
    • Observational studies
    • Outcomes
    • Routine data
    • Statistical adjustment

