Abstract
Purpose. To evaluate inter- and intra-rater reliability for the interpretation of MTI Photoscreener photographs taken in a population of Native American preschool children with a high prevalence of astigmatism.

Methods. Photographs of 369 children were rated by 11 nonexpert and 3 expert raters. Photographs for each child were scored as pass, refer, or retake. Nonexpert raters scored the photographs on two separate occasions, permitting analysis of intra-rater reliability.

Results. When only pass/refer responses were analyzed, inter-rater reliability was moderate to substantial among nonexpert raters and substantial among expert raters, and intra-rater reliability among nonexperts was substantial. When all responses (pass, refer, and retake) were analyzed, inter-rater reliability for pass and refer scores was moderate among nonexperts and substantial among experts, whereas for retake scores it was slight among nonexperts and moderate among experts; intra-rater reliability among nonexperts was substantial for pass and refer scores and moderate for retake scores.

Conclusions. In this population with a high prevalence of astigmatism, judgments of whether an MTI photoscreening photograph is interpretable varied far more among and within raters than judgments of whether an interpretable photograph should be scored as pass or refer. The level of agreement among raters was influenced by rater experience, and nonexpert raters were more likely than expert raters to deem a photograph uninterpretable.
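The abstract does not name the agreement statistic used, but the descriptors "slight," "moderate," and "substantial" correspond to the Landis-Koch benchmarks for the kappa statistic, so a kappa-type analysis is a reasonable assumption. The sketch below shows how pairwise inter-rater agreement on pass/refer scores might be computed under that assumption; the rater labels and ratings are purely illustrative and are not data from the study.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same photographs.
# Assumption: a kappa-type statistic underlies the reported agreement levels;
# the abstract itself does not specify the method. Data below are hypothetical.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa between two raters over the same set of items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)

    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement expected from each rater's marginal frequencies
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/refer scores from two raters on six photographs
rater_1 = ["pass", "refer", "pass", "pass", "refer", "refer"]
rater_2 = ["pass", "refer", "refer", "pass", "refer", "refer"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.67, "substantial"
```

For the study's 11 nonexpert raters, agreement would typically be summarized either by averaging such pairwise kappas or by a multi-rater statistic such as Fleiss' kappa; which approach the authors used is not stated in the abstract.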
Original language | English (US) |
---|---|
Pages (from-to) | 473-482 |
Number of pages | 10 |
Journal | Optometry and Vision Science |
Volume | 77 |
Issue number | 9 |
DOIs | |
State | Published - 2000 |
Keywords
- Children
- Inter-rater reliability
- Intra-rater reliability
- Photoscreening
ASJC Scopus subject areas
- Ophthalmology
- Optometry