Aims and Scope
Aim
The Journal of Educational Evaluation & Standards (JEES) is an international, peer-reviewed, open-access journal that advances rigorous scholarship and practice at the nexus of educational evaluation and standards. Covering the full learner continuum, from early childhood through higher education, technical and vocational education and training (TVET), and lifelong learning, JEES prioritizes method- and evidence-driven work that demonstrates validity, reliability, and fairness; that operationalizes standard-setting and quality assurance (QA); and that is transparent, reproducible, and scalable. We especially welcome studies that translate measurement and evaluation techniques into actionable standards, accreditation, and accountability processes to inform policy and improve education.
Audience
JEES serves measurement and evaluation scholars, standards and accreditation bodies, QA agencies, policymakers, school/system evaluators, and professionals in educational data science and EdTech. Interdisciplinary, cross-context, and cross-regional submissions are encouraged.
Scope (including but not limited to)
Measurement & Methodology: validity/reliability evidence; measurement invariance; bias and fairness (e.g., differential item functioning, DIF); item response theory (IRT); equating/linking; Bayesian and multilevel modeling; computerized adaptive and multistage testing (CAT/MST); cognitive diagnosis; performance assessment.
Standard-Setting & Score Interpretation: cut-score/proficiency-level setting (e.g., Angoff and Bookmark methods); alignment with learning/curriculum standards; score reporting and use; validity arguments and interpretation/use claims.
Qualifications Frameworks, Accreditation & QA: national/regional qualifications frameworks; program/institutional accreditation; external QA; internal quality management and improvement cycles; micro-credentials; recognition of prior learning.
Evaluation Systems & Accountability: educational monitoring; longitudinal designs; system- and school-level accountability; indicator systems and weighting; data visualization and governance.
Classroom & Formative Assessment: classroom assessment quality; evidence-based instructional improvement; learning outcomes assessment and diagnostic feedback; teacher assessment literacy.
Educational Data Science & AI-enabled Assessment: learning analytics; digital assessment; automated scoring and remote proctoring; test security and academic integrity; algorithmic transparency and fairness; human-AI collaboration and risk assessment.
Cross-Cultural & Multilingual Assessment: cultural/linguistic adaptation; comparability and equivalence; international benchmarking and mutual recognition; contextualized implementation of standards and local validation.
Program Evaluation & Policy Studies: evaluation of programs/reforms; cost-effectiveness and impact; implementation fidelity and transportability; pathways from evidence to policy/standards.
Ethics, Governance & Open Science: data protection and privacy; consent and participant risk; inclusive language and group fairness; open data/code/materials; reproducibility and replication.
Synthesis & Method Innovation: systematic reviews and meta-analyses; methods/tools notes; standards and accreditation practice cases; registered reports and replication studies; evidence-based perspectives.
Out of Scope / Generally Discouraged
Pedagogy-only studies with no substantive link to educational evaluation or standards;
Purely descriptive case reports lacking methodological rigor or supporting evidence;
Studies that report statistical significance without evidence of validity, reliability, or fairness, or without implications for standards or evaluation practice.
Article Types (indicative)
Original research; methods & measurement papers; standards & accreditation cases; program/policy evaluation; systematic reviews/meta-analyses; data/code & tool descriptors; registered reports and replications; evidence-based viewpoints/commentaries.