This section addresses the complex fields of assessment, testing, and applied measurement. It covers a wide range of assessment and testing methods (e.g., informal quizzes, teacher observation and judgment, self-assessment, peer assessment, performance assessment, standardized testing) used to generate data about student learning, achievement, or performance. In addition to multiple formats of assessment, a wide range of assessment processes is included (e.g., design, administration, scoring, standard-setting, equating, norming, feedback design and delivery, follow-on curriculum planning, instruction, and monitoring). These practices can be created and delivered at the classroom, school, district, national, or international level, which means that all purposes (i.e., formative, summative, administrative, ranking, scholarship, selection, etc.) are legitimately part of this section. Furthermore, in light of the many complex technical aspects of assessment, testing, and measurement, the section especially welcomes pedagogical pieces that explain and exemplify how even the most basic functions of the field can be carried out.
Validity and reliability are essential characteristics of any data collected to inform decisions. Assessment, testing, and measurement exist to inform decisions about student learning needs, curriculum, teaching, and system characteristics. To avoid decisions being made on faulty data, researchers must demonstrate the robustness and quality of assessment processes, tools, and consequences. Hence, this section is especially concerned with studies that demonstrate the degree of validity of an assessment event or process and the accuracy and reliability of scores, marks, grades, and so on. Unless assessments can be shown to produce fair, valid, appropriate, and reasonably accurate interpretations and decisions about the domain or participants for which they were created, we are reduced to the age-old axiom "garbage in, garbage out".
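As a concrete illustration of the kind of reliability evidence mentioned above, one classical internal-consistency estimate, Cronbach's alpha, can be sketched in a few lines. This is a minimal sketch under illustrative assumptions: the function name and the small response matrix are invented for the example, and alpha is only one of many reliability indices a submission might report.

```python
import numpy as np

def cronbach_alpha(scores):
    """Estimate internal-consistency reliability (Cronbach's alpha).

    scores: 2-D array-like, rows = examinees, columns = items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # sample variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative (made-up) data: 6 examinees, 4 dichotomously scored items.
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(responses), 3))  # → 0.833
```

Values near 1 indicate that the items hang together as a measure of a single construct; in practice, such an estimate would be one small part of a broader validity argument.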
The section is international in scope, and studies from developing contexts as well as developed nations are welcome. Small-scale qualitative studies of the human experience of various assessment methods within their cultural contexts are also welcome. Investigations into the properties of various methods of assessment and/or testing are welcome, especially when they give readers insight into the utility, feasibility, and accuracy of those measures. This means that statistical and psychometric studies are welcome, but the emphasis should be on their applied consequences or meaning.
As David Thissen has pointed out, psychometrics, and by implication assessment, is an engineering response to real-world needs. Hence, this section is especially interested in critical considerations of the real-world implications of an assessment, test, or measurement. Assessment in education serves multiple masters, so research into the validity of a specific practice for a specific purpose in a specific context with a specific audience is needed. Free from the restrictions of print-based page limits, studies in this section need only demonstrate robustness in theory, method, and analysis to be accepted. The significance or importance of a paper will be decided by its open-access readership, who, it is hoped, will cite and draw on the work in this section.
Indexed in: coming soon
Assessment, Testing and Applied Measurement welcomes submissions of the following article types: Book Review, Conceptual Analysis, Correction, Curriculum, Instruction, and Pedagogy, Editorial, Empirical Study, Evaluation, General Commentary, Hypothesis & Theory, Methods, Mini Review, Opinion, Original Research, Perspective, Review and Specialty Grand Challenge.
All manuscripts must be submitted directly to the section Assessment, Testing and Applied Measurement, where they are peer-reviewed by the Associate and Review Editors of the specialty section.
Articles published in the section Assessment, Testing and Applied Measurement will benefit from the Frontiers impact and tiering system after online publication. Authors of published original research with the highest impact, as judged democratically by the readers, will be invited by the Chief Editor to write a Frontiers Focused Review, a tier-climbing article. This is referred to as "democratic tiering". Author selection is based on article impact analytics of original research published in all Frontiers specialty journals and sections. Focused Reviews are centered on the original discovery, place it into a broader context, and aim to address the wider community across all of Education.