Inter-examiner agreement in clinical evaluation
Background: The reliability of assessment is an important issue in the evaluation of competence in medical and allied health practice, particularly when assessments are conducted by multiple examiners. The purpose of this study was to examine the agreement between multiple examiners in the assessment of a postgraduate physiotherapy student using a specifically designed performance evaluation system.
| Main Authors: | Reubenson, Alan; Schneph, Tanis; Waller, Rob; Edmondston, Stephen |
|---|---|
| Format: | Journal Article |
| Published: | Blackwell Publishing, 2012 |
| Online Access: | http://hdl.handle.net/20.500.11937/21897 |
| _version_ | 1848750719149015040 |
|---|---|
| author | Reubenson, Alan; Schneph, Tanis; Waller, Rob; Edmondston, Stephen |
| author_facet | Reubenson, Alan; Schneph, Tanis; Waller, Rob; Edmondston, Stephen |
| author_sort | Reubenson, Alan |
| building | Curtin Institutional Repository |
| collection | Online Access |
| description | Background: The reliability of assessment is an important issue in the evaluation of competence in medical and allied health practice, particularly when assessments are conducted by multiple examiners. The purpose of this study was to examine the agreement between multiple examiners in the assessment of a postgraduate physiotherapy student using a specifically designed performance evaluation system. Methods: Seven examiners simultaneously watched a recording of a postgraduate student’s examination and treatment of one patient. The Postgraduate Physiotherapy Performance Assessment (PPPA) form was used to guide the assessment of performance in key areas of patient examination and management. Each examiner independently recorded a grade for each of five performance categories, and these scores were used to guide the global performance grade and mark. Results: Five examiners agreed on the global performance grade and four of the performance categories. The level of pass grade awarded was more variable, with scores in the performance categories spanning two grades, and in one case, three grades. The two examiners who were not in agreement with the majority consistently awarded higher grades across most performance categories. Discussion: This preliminary study has demonstrated majority agreement in global performance between multiple examiners when physiotherapy clinical practice is assessed against specific performance standards. Not all examiners awarded global grades consistent with the majority, and there was greater variability between examiners when grading performance in specific aspects of practice. These findings highlight the importance of examiner training and review sessions to improve inter-examiner agreement in assessments of clinical performance that require multiple examiners. |
| first_indexed | 2025-11-14T07:41:18Z |
| format | Journal Article |
| id | curtin-20.500.11937-21897 |
| institution | Curtin University Malaysia |
| institution_category | Local University |
| last_indexed | 2025-11-14T07:41:18Z |
| publishDate | 2012 |
| publisher | Blackwell Publishing |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | curtin-20.500.11937-21897 2017-09-13T15:57:56Z Inter-examiner agreement in clinical evaluation Reubenson, Alan; Schneph, Tanis; Waller, Rob; Edmondston, Stephen. Background: The reliability of assessment is an important issue in the evaluation of competence in medical and allied health practice, particularly when assessments are conducted by multiple examiners. The purpose of this study was to examine the agreement between multiple examiners in the assessment of a postgraduate physiotherapy student using a specifically designed performance evaluation system. Methods: Seven examiners simultaneously watched a recording of a postgraduate student’s examination and treatment of one patient. The Postgraduate Physiotherapy Performance Assessment (PPPA) form was used to guide the assessment of performance in key areas of patient examination and management. Each examiner independently recorded a grade for each of five performance categories, and these scores were used to guide the global performance grade and mark. Results: Five examiners agreed on the global performance grade and four of the performance categories. The level of pass grade awarded was more variable, with scores in the performance categories spanning two grades, and in one case, three grades. The two examiners who were not in agreement with the majority consistently awarded higher grades across most performance categories. Discussion: This preliminary study has demonstrated majority agreement in global performance between multiple examiners when physiotherapy clinical practice is assessed against specific performance standards. Not all examiners awarded global grades consistent with the majority, and there was greater variability between examiners when grading performance in specific aspects of practice. These findings highlight the importance of examiner training and review sessions to improve inter-examiner agreement in assessments of clinical performance that require multiple examiners. 2012 Journal Article http://hdl.handle.net/20.500.11937/21897 10.1111/j.1743-498X.2011.00509.x Blackwell Publishing restricted |
| spellingShingle | Reubenson, Alan; Schneph, Tanis; Waller, Rob; Edmondston, Stephen; Inter-examiner agreement in clinical evaluation |
| title | Inter-examiner agreement in clinical evaluation |
| title_full | Inter-examiner agreement in clinical evaluation |
| title_fullStr | Inter-examiner agreement in clinical evaluation |
| title_full_unstemmed | Inter-examiner agreement in clinical evaluation |
| title_short | Inter-examiner agreement in clinical evaluation |
| title_sort | inter-examiner agreement in clinical evaluation |
| url | http://hdl.handle.net/20.500.11937/21897 |