Neal Kingston, Ph.D., is a University Distinguished Professor in the Department of Educational Psychology at the University of Kansas, where he also serves as Director of Graduate Studies, coordinator of the Research, Evaluation, Measurement, and Statistics track of the Educational Psychology and Research Program, and Director of the Achievement and Assessment Institute. His research focuses on large-scale assessment, with particular emphasis on how it can better support student learning through the use of learning maps and diagnostic classification models. He has served as principal investigator or co-principal investigator for over 180 research grants. Of particular note was the Dynamic Learning Maps Alternate Assessment grant from the U.S. Department of Education, which was at that time the largest grant in KU history and which currently serves 21 state departments of education. Other recent testing projects include the Kansas Assessment Program, Career Pathways Collaborative, and Adaptive Reading Motivation Measures.
Dr. Kingston is known internationally for his work on large-scale assessment, formative assessment, and learning maps. He has served as a consultant or advisor for organizations such as AT&T, the College Board, the Department of Defense Advisory Committee on Military Personnel Testing, Edvantia, the General Equivalency Diploma (GED) program, Kaplan, King Fahd University of Petroleum and Minerals, Merrill Lynch, the National Council on Disability, Qeyas (Saudi Arabian National Center for Assessment in Higher Education), the State of New Hampshire, the State of Utah, the U.S. Department of Education, and Western Governors University.
Dr. Kingston also serves as Director of the KU Achievement and Assessment Institute (AAI), where he is responsible for five research centers with about 300 year-round staff and about 150 temporary employees.
Currently Dr. Kingston teaches a course in Classical Test Theory in the fall of odd-numbered years, a course in Item Response Theory in the spring of even-numbered years, a course on meta-analysis in the fall of even-numbered years, and an advanced seminar on a topic to be announced in the spring of odd-numbered years.
I am seeking one or two new doctoral students (a previous master's degree is not required) to start in fall 2022. Students will receive a graduate research assistant position that covers tuition and fees and provides a salary. An interest in learning and applying test theory to better support student learning is of critical importance.
A link to the Achievement and Assessment Institute graduate student handbook is included for your reference: https://aai.ku.edu/sites/aai.ku.edu/files/docs/pdfs_general/GRA_Handbook...
- Educational Measurement theory and practice
- Instructionally embedded assessment
- Learning maps
The field of education moves slowly. Theoretical improvements often take 30-50 years before widespread implementation, and research-based practices are often crowded out by the fad of the day. Making the challenge of improving education even greater, sub-disciplines within education far too often work in isolation. Simple solutions that do not address the complexity of individual students or the dynamics of a classroom at best have little impact and too often have a negative impact. Such has been the case in large-scale assessment, where the use of assessment to drive curriculum and instruction has had numerous negative consequences.
Coming to the University of Kansas gave me the opportunity to consider the fundamental issues of education from a broader perspective. As have many others, I had long realized that thinking about curriculum, instruction, and assessment needs to be integrated. However, few researchers attempted to develop models or theories to do this. I was impressed by the efforts of some, particularly the research trajectories of Susan Embretson and Kikumi Tatsuoka, but I remained frustrated by how incomplete this work was and how little impact it was having on federally mandated state assessment programs. This led me to develop three conference presentations in 2009 that served to focus my thinking. The first, presented at the National Council on Measurement in Education, was entitled "What Have We Learned about the Structure of Learning from 30 Years of Research on Integrated Cognitive-Psychometric Models? Not Much." The second, presented at the American Educational Research Association conference, was entitled "The Efficacy of Formative Assessment: A Meta-Analysis." The third, presented at the National Conference on Student Assessment, was entitled "Large-Scale Formative Assessment: Panacea, Transitional Tool, or Oxymoron."
In 2010 an opportunity presented itself that allowed me to solidify my thinking. The U.S. Department of Education issued a request for proposals to develop a large-scale assessment system for students with significant cognitive disabilities – the approximately one percent of students with the greatest learning challenges. It was clear to me that such an assessment system needed to do far more than measure learning – it needed to facilitate learning. I identified six features that needed to be present to do this. They are as follows.
1. Comprehensive fine-grained learning maps that guide instruction and assessment
2. A subset of particularly important nodes that serve as content standards to provide an organizational structure for teachers
3. Instructionally embedded assessments that reinforce the primacy of instruction
4. Instructionally relevant testlets that model good instruction and reinforce learning
5. Accessibility by design
6. Status and growth reporting that is readily actionable
No one had ever tried to develop a learning environment in this way. Comprehensive fine-grained learning maps did not exist. The concept of instructionally relevant assessment was previously unnamed and in its infancy. Clearly much research – both basic and applied – was necessary, and this has become the focus of my research.
A closely related second area of research is assessment that supports the needs of learners who face educational or assessment challenges. This includes issues of test development and universal design, which have close ties to features 3-5 in the list above. I separate it as a research focus because it is also applicable to traditional testing programs.
- Large-scale assessment
- Computer-based testing
- Diagnostic classification modeling
- Learning maps
- Test development
- Score reporting