Abstract
There is growing interest in using health outcomes data from large observational databases to assess the impact of interventions or the performance of services. The usefulness of such data hinges on being able to attribute causality between the interventions or services and the observed outcomes. Unfortunately, such attributions are undermined by numerous methodological difficulties. The findings from outcomes studies may conflict across a broad set of measures, and the outcomes measures themselves may lack reliability and validity, especially as they are usually collected unblinded. Differences in case-mix make severity adjustment schemes essential, but such adjustments are inevitably incomplete. In addition, case-mix adjustment increases data demands and risks introducing upstaging. Further, different schemes may lead to different judgments, and unadjusted case-mix differences may still confound the findings. Finally, chance variability may produce spurious findings or hide important differences. Thus, there are many reasons to doubt attributions of causality between observations and interventions, and such data require circumspect interpretation.
Original language | English |
---|---|
Pages (from-to) | 153-158 |
Number of pages | 6 |
Journal | Drug Information Journal |
Volume | 33 |
Issue number | 1 |
Publication status | Published - 1 Jan 1999 |
Keywords
- Bias
- Case-mix
- Confounding
- Data interpretation
- Observational studies
- Outcomes
- Routine data
- Statistical adjustment