Review Policies
The editorial team of the Journal of Educational Evaluation & Standards (JEES) strictly follows standardized procedures for editorial screening and double-blind peer review to ensure that:
the submission fits the aims and scope of JEES (educational evaluation, measurement, and standards across sectors);
the manuscript demonstrates originality, methodological rigor (e.g., validity/reliability/fairness evidence), and practical or policy relevance;
the journal’s format, layout, and style guidelines are followed; and
language quality is suitable for scientific communication.
Manuscripts not adhering to JEES guidelines may be returned without scientific evaluation (desk return).
The review policy comprises an editorial review followed by external peer review. If a submission passes editorial screening, it is sent for double-blind peer review by at least three experts of international standing, with combined expertise in (a) evaluation/measurement methods and (b) the relevant domain of educational standards/policy or subject area. Reviewer and author identities remain blinded during review; identities may be disclosed after publication only where authors opt into transparency statements.
Each submission is handled by the Editor-in-Chief (EiC) or a Section Editor, who is responsible for processing it according to JEES policies. Based on the editorial evaluation, the EiC/Section Editor may:
send the manuscript for peer review without changes,
invite minor or major revisions, or
reject the manuscript.
Following peer review, reviewer comments and editorial suggestions are shared with authors for revision.
All submissions undergo an initial check in the Editorial Office (scope & policy compliance, ethics and declarations, formatting, and similarity screening). A Section Editor or the EiC manages peer review. A Preliminary Editorial Review is normally completed within 7–8 days of submission.
If the manuscript proceeds to external review, an Assistant Editor may perform essential language/formatting edits to ensure clarity for reviewers. The formatted version is then sent for double-blind review to at least three subject-matter experts, including at least one expert in evaluation/measurement and at least one expert in standards/policy or the focal discipline. Where applicable, the EiC/Section Editor will seek at least one reviewer familiar with the country/region that is the manuscript’s empirical focus.
Peer reviewers evaluate relevance, originality, innovation, methodological soundness (design, measurement, analysis), evidence of validity/reliability/fairness, data transparency and reproducibility, and contribution to educational standards/policy or practice.
Authors are expected to revise according to reviewers' and editors' comments. If authors disagree with a comment, they must provide a point-by-point rebuttal explaining their reasoning for each item. Revised manuscripts are checked by the EiC/Section Editor for completeness and adherence to the required changes, and may be returned to the original or additional reviewers.
In cases of conflicting reviews, the Editor may solicit an independent expert opinion (e.g., from an Editorial Adviser). The Editor-in-Chief makes the final decision to accept or reject the revised manuscript.
Once the final revision is accepted by the EiC, authors will be asked to apply the JEES template before the manuscript proceeds to typesetting and proofreading. Authors will receive a PDF proof for checking prior to online publication.
JEES Evaluation Criteria (Full Text)
These criteria guide double-blind peer review for scholarly submissions to JEES. Each item represents a minimum requirement for acceptance. Reviewers should rate “Meets / Needs Improvement / Not Applicable” and add brief comments.
- Title
Minimum requirement: Brief, clear, specific, and anchored in educational evaluation/measurement/standards; avoids over-claiming.
Examples reviewers look for: 10–15 words; active phrasing; key terms such as “validity,” “standard-setting,” “fairness,” “IRT,” “accreditation/QA.”
- Abstract
Minimum requirement: Structured abstract accurately representing the study; states sample, instruments, design, main results, and implications.
Examples: 200–250 words; headings such as Background/Objectives; Design & Data; Measures & Validity/Fairness; Results with effect sizes and CIs; Implications.
- Keywords
Minimum requirement: 4–8 terms that complement (rather than repeat) the title and reflect both evaluation/measurement and standards/policy dimensions.
Examples: learning outcomes assessment; IRT; standard-setting; DIF; quality assurance; accreditation.
- Introduction
Minimum requirement: Moves from the broad problem to specific RQs/Hypotheses; explains significance for educational evaluation and standards; states intended contribution.
Examples: Frames the study with an explicit theory/standard (e.g., Standards for Educational and Psychological Testing); identifies gaps and aims.
- Literature Review
Minimum requirement: Critical integration across the bodies of work on evaluation/measurement and standards/policy/practice, situating the study within the field.
Examples: Compares validity frameworks, standard-setting methods (Angoff/Bookmark), QA models, and prior fairness studies.
- Methodology
Minimum requirement: Transparent design, context/sample, instruments, procedures, and analysis plan; documents reliability/validity/fairness; details standard-setting or QA procedures when applicable.
Examples: Pre-registration or analysis protocol; handling of missing data; robustness checks; ethics approvals/consent; data/code availability statement.
- Results
Minimum requirement: Clear, unbiased reporting consistent with the methods; presents effect sizes and uncertainty; discloses key psychometric indices and model fit; figures/tables are reproducible.
Examples: IRT parameters and fit; reliability (α/ω); DIF/impact; calibration/equating; sensitivity analyses.
- Discussion
Minimum requirement: Interprets findings against prior evidence; addresses threats to validity, limitations, generalizability; offers actionable implications for standards/policy/practice and avoids over-generalization.
Examples: Links back to RQs and frameworks; distinguishes statistical vs. practical significance; notes reproducibility considerations.
- Interdisciplinary Integration
Minimum requirement: Demonstrates how evaluation/measurement techniques substantively inform standards, accreditation, or quality assurance, rather than treating domains as siloed.
Examples: Connects psychometrics to curriculum/qualification frameworks; articulates system accountability implications and stakeholder perspectives.
- Ethical & Equity Considerations
Minimum requirement: Ethics approvals/consent (as applicable); privacy and data governance; fairness and bias checks (e.g., subgroup analyses, DIF); conflict-of-interest disclosures; responsible use and transparency for algorithms or generative AI.
Examples: De-identification; inclusive language; algorithmic transparency; authors’ and funders’ roles clearly stated.
- Conclusion
Minimum requirement: Supported by results; synthesizes contributions; identifies actionable recommendations and future research directions.
Examples: Concrete suggestions for evaluation practice or standard-setting workflows; boundary conditions and validation plans.
- English Language & Style (APA-7)
Minimum requirement: Clear, precise, and objective writing; consistent terminology; APA 7th citation and statistical reporting; inclusive language.
Examples: Defines acronyms; aligns tables/figures with text; correct statistical notation and decimal precision.
Decision Guidance
Reject: ≥3 core dimensions (methods, results, discussion/contribution, ethics) fail to meet the minimum, or serious integrity risks exist.
Major Revision: Core evidence is remediable but requires substantial strengthening (e.g., validity/fairness evidence, methodological detail, open data/code).
Minor Revision: Primarily improvements in presentation, structure, or clarity.
Accept: All minima are met, with at least two strengths among rigor, fairness, standards linkage, and practical utility.
Note: The 12 criteria will appear in the reviewer form; authors must submit a point-by-point response when revising.