Appendix 6.1:
Additional Journal Article Reporting Standards for Longitudinal Studies, Replication Studies, Studies With One Subject or Unit, and Clinical Trials

Table A6.1. Reporting Standards for Longitudinal Studies (in Addition to Material Presented in Table A1.1)

Paper section and topic

Description

General reporting expectation

Sample characteristics (when appropriate)

Describe the reporting (sampling or randomization) unit (e.g., individual, dyad, family, or classroom), including:

  • N per group, age, and sex distribution
  • Ethnic composition
  • Socioeconomic status, home language, immigrant status, education level, and family characteristics
  • Country, region, city, and geographic characteristics.

Sample recruitment and retention methods

Attrition

Report attrition at each wave, breaking down reasons for attrition. Report any differential attrition by major sociodemographic characteristics and experimental condition.

Additional sample description

Report any contextual changes for participants (units) as the study progressed (school closures or mergers, major economic changes; for long-term studies, major social changes that may need explanation for contemporary readers to understand the context of the study during its early years).

Method and measurement

Specify independent variables and dependent variables at each wave of data collection.

Report the years in which each wave of the data collection occurred.

Missing data

Report the amount of missing data and how issues of missing data were handled analytically.

Analysis

Specify analytic approaches used and assumptions made in performing these analyses.

Multiple publications

Provide information on where any portions of the data have been previously published and the degree of overlap with the current report.

Note. Adapted from “Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report,” by M. Appelbaum, H. Cooper, R. B. Kline, E. Mayo-Wilson, A. M. Nezu, and S. M. Rao, 2018, American Psychologist, 73, p. 14. Copyright 2018 by the American Psychological Association.

Table A6.2. Reporting Standards for Replication Studies (in Addition to Material Presented in Table A1.1)

Paper section and topic

Description

Study type

Report sufficient information both in the study title and, more important, in the text to allow the reader to determine whether the study is a direct (exact, literal) replication, approximate replication, or conceptual (construct) replication.

Indicate whether a replication study has conditions, materials, or procedures that were not part of the original study. Describe these new features, where in the study they occur, and their potential impact on the results.

Report indications of treatment fidelity for both the original study and the replication study.

Participants

Compare the recruitment procedures in the original and replication studies. Note and explain major variations in how the participants were selected, such as whether the replication study was conducted in a different setting (e.g., country or culture) or whether the allocation of participants to groups or conditions was different. Describe implications of these variations for the results.

Compare the demographic characteristics of the participants in both studies. If the units of analysis are not people (cases), such as classrooms, then report the appropriate descriptors of their characteristics.

Instrumentation

Report instrumentation, including both hardware (apparatus) and soft measures used to collect data, such as questionnaires, structured interviews, or psychological tests. Clarify in appropriate subsections of the Method section any major differences between the original and replication studies.

Indicate whether questionnaires or psychological tests were translated into another language, and specify the methods used, such as back-translation, to verify that the translation was accurate.

Report psychometric characteristics of the scores analyzed in the replication study and compare these properties with those in the original study.

Specify and compare the informants and methods of administration across the two studies. The latter includes the setting for testing, such as individual versus group administration, and the method of administration, such as paper and pencil or online.

Analysis

Report results of the same analytical methods (statistical or other quantitative manipulations) used in the original study. Results from additional or different analyses may also be reported.

State the statistical criteria for deciding whether the original results were replicated in the new study. Examples of criteria include statistical significance testing, effect sizes, confidence intervals, and Bayes factors in Bayesian methods.

Explain decision rules when multiple criteria, such as significance testing with effect size estimation, are employed. State whether the effect size in a power analysis was specified to equal that reported in the original study (conditional power) or whether power was averaged over plausible values of effect size on the basis of an estimated standard error (predictive power), which takes account of sampling error.

Note. Adapted from “Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report,” by M. Appelbaum, H. Cooper, R. B. Kline, E. Mayo-Wilson, A. M. Nezu, and S. M. Rao, 2018, American Psychologist, 73, p. 17. Copyright 2018 by the American Psychological Association.

Table A6.3. Reporting Standards for N-of-1 Studies (in Addition to Material Presented in Table A1.1)

Paper section and topic

Description

Design

Type of design

Describe the design, including

  • Design type (e.g., withdrawal–reversal, multiple baseline, alternating treatments, changing criterion, some combination thereof, or adaptive design)
  • Phases and phase sequence (whether determined a priori or data driven) and, if applicable, criteria for phase changes.

Procedural changes

Describe any procedural changes that occurred after the start of the study.

Replication

Describe any planned replication.

Randomization

State whether randomization was used, and if so, describe the randomization method and the elements of the study that were randomized (e.g., during which phases treatment and control conditions were instituted).

Analysis

Sequence completed

Report for each participant the sequence actually completed, including the number of trials for each session for each case. State when participants who did not complete the sequence stopped and the reason for stopping.

Outcomes and estimation

Report results for each participant, including raw data for each target behavior and other outcomes.

Note. Adapted from “Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report,” by M. Appelbaum, H. Cooper, R. B. Kline, E. Mayo-Wilson, A. M. Nezu, and S. M. Rao, 2018, American Psychologist, 73, p. 16. Copyright 2018 by the American Psychological Association.

Table A6.4. Reporting Standards for Studies Involving Clinical Trials (in Addition to Material Presented in Table A1.1)

Paper section and topic

Description

Title page

State whether the trial was registered prior to implementation.

Abstract

State whether the trial was registered. If so, state where and include the registration number.

Describe the public health implications of trial results.

Introduction

State the rationale for evaluating specific interventions for a given clinical problem, disorder, or variable.

Describe the approach, if any, to assess mediators and moderators of treatment effects.

Describe potential public health implications of the study.

State how results from the study can advance knowledge in this area.

Method

Participant characteristics

State the methods used to ascertain whether participants met all inclusion and exclusion criteria, especially when assessing clinical diagnoses.

Sampling procedures

For a multisite study, provide details regarding similarities and differences among data collection locations.

Measures

State whether clinical assessors were

  • Involved in providing treatment, for studies involving clinical assessments
  • Aware or unaware of participants’ assignment to condition at posttreatment and follow-up assessments and, if unaware, how this was accomplished.

Experimental interventions

Report whether the study protocol was publicly available (e.g., published) prior to enrolling participants and, if so, where and when.

Describe how the intervention in this study differed from the “standard” approach in order to tailor it to a new population (e.g., differing age, ethnicity, or comorbidity).

Describe any materials (e.g., clinical handouts, data recorders) provided to participants and how information about them can be obtained (e.g., a URL).

Describe any changes to the protocol during the course of the study, including all changes to the intervention, outcomes, and methods of analysis.

Describe involvement of the data and safety monitoring board.

Describe any stopping rules.

Treatment fidelity

Describe method and results regarding treatment deliverers’ (e.g., therapists’) adherence to the planned intervention protocol (e.g., therapy manual).

Describe method and results regarding treatment deliverers’ competence in implementing the planned intervention protocol.

Describe (if relevant) method and results regarding whether participants (i.e., treatment recipients) understood and followed treatment recommendations (e.g., did they comprehend what the treatment was intended to do, complete homework assignments if given, and/or practice activities assigned outside of the treatment setting?).

Describe any additional methods used to enhance treatment fidelity.

Research design

Provide rationale for length of follow-up assessment.

Results

Describe how treatment fidelity (i.e., therapist adherence and competence ratings) and participant adherence were related to intervention outcome.

Describe method of assessing clinical significance, including whether the threshold for clinical significance was prespecified (e.g., as part of a publicly available protocol).

Identify possible differences in treatment effects attributable to intervention deliverers.

For a multisite study, describe possible differences in treatment effects attributable to the data collection site.

Describe results of analyses of moderation and mediation effects, if tested.

Explain why the study was discontinued, if appropriate.

Describe the frequency and type of adverse effects that occurred (or state that none occurred).

Discussion

Describe how this study advances knowledge about the intervention, clinical problem, and population.

Note. Adapted from “Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report,” by M. Appelbaum, H. Cooper, R. B. Kline, E. Mayo-Wilson, A. M. Nezu, and S. M. Rao, 2018, American Psychologist, 73, pp. 12–13. Copyright 2018 by the American Psychological Association.