Chapter 12: Conners 4–ADHD Index

Summary
The Conners 4–ADHD Index was developed using machine learning algorithms, which identified the optimal set of 12 items on each of the Parent, Teacher, and Self-Report rater forms for distinguishing between general population cases and ADHD cases. Responses to these items are summed into a raw score, which is then converted to a probability score. The probability score communicates the likelihood that an individual’s raw score more closely resembles a score from the ADHD Reference Sample than one from the General Population. There is strong evidence for reliability, based on analyses of internal consistency, precision of measurement, test-retest reliability, and inter-rater reliability. The probability score also demonstrates evidence of validity in its ability to correctly classify individuals with and without an ADHD diagnosis. Finally, there is considerable evidence supporting the fairness of the Conners 4–ADHD Index, as no meaningful variance or group differences were found with respect to gender, race/ethnicity, country of residence, or parental education level. The Conners 4–ADHD Index meets the reliability, validity, and fairness standards and guidelines for psychometric tests (AERA, APA, & NCME, 2014).
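To make the scoring flow described above concrete, the sketch below shows the two steps in minimal Python: summing the 12 item responses into a raw score, then converting that raw score to a probability score. This is an illustration only; the actual raw-score-to-probability conversion is based on the Conners 4 normative data and scoring tables, which are not reproduced here, so the lookup table in the sketch (HYPOTHETICAL_PROBABILITY_TABLE) and the assumed 0–3 item response scale are hypothetical placeholders.

```python
# Illustrative sketch of the Conners 4-ADHD Index scoring flow described above.
# The probability conversion here is a hypothetical placeholder, NOT the
# published Conners 4 conversion, which relies on the manual's normative tables.

from typing import Sequence

NUM_INDEX_ITEMS = 12  # each rater form uses a set of 12 items


def raw_score(item_responses: Sequence[int]) -> int:
    """Sum the responses to the 12 ADHD Index items into a raw score."""
    if len(item_responses) != NUM_INDEX_ITEMS:
        raise ValueError(
            f"Expected {NUM_INDEX_ITEMS} item responses, got {len(item_responses)}"
        )
    return sum(item_responses)


# Hypothetical mapping from raw score (0-36, assuming 0-3 item ratings) to a
# probability score; real values would come from the Conners 4 scoring tables.
HYPOTHETICAL_PROBABILITY_TABLE = {raw: min(99, 5 + raw * 2) for raw in range(37)}


def probability_score(raw: int) -> int:
    """Convert a raw score to the probability that it more closely resembles a
    score from the ADHD Reference Sample than one from the General Population."""
    return HYPOTHETICAL_PROBABILITY_TABLE[raw]


if __name__ == "__main__":
    responses = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2, 2, 3]  # example 0-3 ratings
    r = raw_score(responses)
    print(f"Raw score: {r}, probability score: {probability_score(r)}%")
```

In practice, a scoring implementation would replace the placeholder table with the form-specific conversion published in the manual; the structure of the computation (sum, then table lookup) is the only part taken from the summary above.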