Chapter 8: Reliability

Overview
Measurement error must be taken into account when observations are made during the assessment of human behavior. In classical test theory (CTT), any observed score is equal to the true score of the attribute being measured plus measurement error (Lord & Novick, 1968). Reliability statistics are used to describe the amount of measurement error, which is determined by examining the consistency of measurements obtained across different administrations or parts of the instrument (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). Reliability can be conceptualized and measured in several ways; however, a common interpretation is that the reliability of an instrument describes how consistent its scores are. Multiple indicators of reliability are provided for the Conners 4th Edition (Conners 4®), including internal consistency, test information, test-retest reliability, and inter-rater reliability (Solomon et al., 2021). Reliability estimates for the Conners 4th Edition Short (Conners 4®–Short) and the Conners 4th Edition ADHD Index (Conners 4®–ADHD Index) are found in Chapter 11, Conners 4–Short, and Chapter 12, Conners 4–ADHD Index, respectively.
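As a brief illustration of this decomposition (the notation below is standard CTT usage and is not reproduced from the Conners 4 text), the observed score X can be written as the sum of the true score T and random measurement error E:

X = T + E

Under the usual CTT assumption that true scores and errors are uncorrelated, observed-score variance decomposes additively, and the reliability coefficient is the proportion of observed-score variance attributable to true scores:

\rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}

A coefficient near 1 indicates that little of the variability in observed scores is due to measurement error, which is the sense in which reliability describes the consistency of scores.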