Calculation of sample size for stroke trials assessing functional outcome: comparison of binary and ordinal approaches
| Main Author: | The Optimising Analysis of Stroke Trials Collaboration (OAST) |
|---|---|
| Format: | Article |
| Published: | Blackwell Publishing, 2008 |
| Online Access: | https://eprints.nottingham.ac.uk/889/ |
| author | The Optimising Analysis of Stroke Trials Collaboration, OAST |
|---|---|
| building | Nottingham Research Data Repository |
| description | Background: Many acute stroke trials have given neutral
results. Sub-optimal statistical analyses may be failing to
detect efficacy. Methods that take account of the ordinal
nature of functional outcome data are more efficient. We
compare sample size calculations for dichotomous and ordinal
outcomes for use in stroke trials.
Methods: Data from stroke trials studying the effects of
interventions known to positively or negatively alter functional
outcome, measured on the Rankin Scale and Barthel Index, were
assessed. Sample size was calculated using comparisons of
proportions, means, medians (according to Payne), and ordinal
data (according to Whitehead). The sample sizes obtained
from each method were compared using the Friedman two-way
ANOVA.
Results: Fifty-five comparisons (54 173 patients) of active vs.
control treatment were assessed. Estimated sample sizes
differed significantly depending on the method of calculation
(P < 0.0001). The ordering of the methods showed that the
ordinal method of Whitehead and the comparison of means
produced significantly lower sample sizes than the other
methods. The ordinal data method on average reduced sample
size by 28% (inter-quartile range 14–53%) compared with
the comparison of proportions; however, a 22% increase in
sample size was seen with the ordinal method for trials
assessing thrombolysis. The comparison of medians method
of Payne gave the largest sample sizes.
Conclusions: Choosing an ordinal rather than a binary method
of analysis allows most trials to be, on average, smaller by
approximately 28% for a given statistical power. Smaller trial
sample sizes may help by reducing time to completion, complexity,
and financial expense. However, ordinal methods
may not be optimal for interventions which both improve
functional outcome and increase death, as seen with thrombolysis. |
| format | Article |
| id | nottingham-889 |
| institution | University of Nottingham Malaysia Campus |
| publishDate | 2008 |
| publisher | Blackwell Publishing |
| citation | The Optimising Analysis of Stroke Trials Collaboration, OAST (2008) Calculation of sample size for stroke trials assessing functional outcome: comparison of binary and ordinal approaches. International Journal of Stroke, 3 (2), pp. 78-84. ISSN 1747-4949. Peer-reviewed article, published May 2008 by Blackwell Publishing (http://www.blackwellpublishing.com/). |
| title | Calculation of sample size for stroke trials assessing functional outcome: comparison of binary and ordinal approaches |
| url | https://eprints.nottingham.ac.uk/889/ |
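
To see how the two approaches contrasted in the abstract differ in practice, the following is a minimal, hypothetical Python sketch: it implements the standard two-proportion sample size formula for a dichotomised outcome and Whitehead's (1993) formula for ordered categorical outcomes under a proportional-odds assumption. The function names, the outcome distribution, odds ratio, significance level, power, and dichotomisation cut-point are all invented for illustration and are not figures from the OAST analysis (which also examined comparisons of means and of medians by Payne's method, not shown here).

```python
"""Illustrative comparison of binary vs. ordinal (Whitehead 1993) sample sizes.

All numbers below (outcome distribution, odds ratio, cut-point) are invented
example values, not data from the OAST paper.
"""
import math
from statistics import NormalDist


def n_two_proportions(p1, p2, alpha=0.05, power=0.90):
    """Total sample size (two equal groups) to detect p1 vs. p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n_per_group = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return 2 * math.ceil(n_per_group)


def n_whitehead_ordinal(control_probs, odds_ratio, alpha=0.05, power=0.90):
    """Total sample size (two equal groups) under a proportional-odds model.

    Whitehead (1993): N = 12 * (z_{1-alpha/2} + z_power)^2
                          / [ (log OR)^2 * (1 - sum(pbar_i^3)) ],
    where pbar_i is the mean of the control and treatment probabilities in
    outcome category i.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)

    # Treatment-group probabilities implied by shifting the cumulative odds
    # of the control distribution by the assumed common odds ratio.
    cum, treat_cum = 0.0, []
    for p in control_probs:
        cum = min(cum + p, 1.0)
        if cum >= 1.0:
            treat_cum.append(1.0)
        else:
            odds = odds_ratio * cum / (1.0 - cum)
            treat_cum.append(odds / (1.0 + odds))
    treat_probs = [treat_cum[0]] + [treat_cum[i] - treat_cum[i - 1]
                                    for i in range(1, len(treat_cum))]

    pbar = [(c + t) / 2 for c, t in zip(control_probs, treat_probs)]
    n_total = (12 * (z_a + z_b) ** 2
               / (math.log(odds_ratio) ** 2 * (1 - sum(p ** 3 for p in pbar))))
    return math.ceil(n_total)


if __name__ == "__main__":
    # Hypothetical 7-category Rankin Scale (0-6) distribution in the control arm.
    control = [0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10]
    odds_ratio = 1.5  # assumed common odds ratio favouring lower (better) grades

    # Binary approach: dichotomise at Rankin 0-2 ("good outcome") vs. 3-6.
    p_ctrl_good = sum(control[:3])
    odds_good = odds_ratio * p_ctrl_good / (1 - p_ctrl_good)
    p_trt_good = odds_good / (1 + odds_good)

    print("binary (two proportions):", n_two_proportions(p_trt_good, p_ctrl_good))
    print("ordinal (Whitehead)     :", n_whitehead_ordinal(control, odds_ratio))
```

With these invented inputs the ordinal calculation requires roughly a quarter fewer patients than the dichotomised one, which matches the qualitative pattern reported in the abstract (an average 28% reduction), although the abstract also notes the opposite can occur, as in the thrombolysis trials.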