When you have a tool as flexible as MetaReviewer, the learning curve can be steep! But as with other much-loved software (Hello, R!), a little perseverance and creativity will open all kinds of possibilities for your syntheses and reviews. In this Meta in Motion: Project Spotlight, we touch down with one of MetaReviewer's super users, Marta Pellegrini, to learn more about how she leveraged MetaReviewer's functionality to conduct a complex project: a meta-review on the methodological quality of meta-analyses in education research.

A meta-review on education meta-analyses

We're preaching to the choir here, but it bears repeating: a well-executed meta-analysis is a gift to researchers and practitioners alike. It can direct researchers to new areas for exploration and support practitioners in making evidence-based decisions. But are all meta-analyses well-executed? And how much effort do synthesists invest to ensure that meta-analytic findings are accessible to practitioners? These are the questions that Marta and her team (Terri Pigott, Caroline Sutton Chubb, Elizabeth Day, Natalie Pruitt, and Hannah Scarbrough) set out to answer through their Meta-Review on Education Meta-Analyses.

How many coding forms? How many projects?

This meta-review examined 247 meta-analyses that included randomized controlled trials and quasi-experiments on the effects of K-12 school-based academic interventions on student achievement. The team had three aims for each meta-analysis included in the review:

  1. assess the quality of the review process,
  2. assess the quality of the meta-analysis methods, and
  3. review how meta-analytic results are reported to facilitate the use of research evidence among non-research audiences.

One way Marta could have approached this was by setting up a single project space with one coding form covering all three project aims. This is probably the most common set-up for a MetaReviewer project. However, some members of Marta's small review team had expertise specific to one of the three aims. For instance, Terri Pigott is an encyclopedia of knowledge about meta-analysis methods! To let each team member focus on the sections of the codebook most relevant to their expertise, Marta decided to split her codebook into three coding forms, one for each project aim. Then, for each form, she created a coding team of three members: one expert on the topic covered by that form and two additional team members.

The review team began by coding for quality of the review process and meta-analytic methods in one MetaReviewer project with the two corresponding forms. MetaReviewer already supports uploading multiple forms to a project and displaying them on individual study pages, so splitting the work across multiple forms was straightforward. However, each study needed four coders (two for each form), and MetaReviewer only allows three people to be assigned to a study workflow. So, Marta had to get creative with her assignment strategy.

Again, there were a few ways Marta could have approached assignments. For instance, because her team didn't full-text screen in MetaReviewer, she could have made assignments to the Review Process team in the screening workflow and assignments to the Quality of Methods team in the coding workflow. However, since one member of each coding team would be coding every study, she decided to make assignments in the coding workflow only to those team members who were coding a portion of the studies. At the end of the project, the page for a completed study looked like this: a Review Process team member assigned as Coder 1, a Quality of Methods team member assigned as Coder 2, but two submitted responses for each of the coding forms.

[Screenshot: a completed study page from Marta's project]

Lessons learned

One takeaway from Marta's experience: big changes to coding forms are best made in the Google document template. Smaller changes to question types, question descriptions, response options, and conditional logic are easier to make directly in SurveyJS, where users can preview and test the changes without having to upload yet another version of a coding form. And that wasn't the only lesson learned! Marta shared three tips for success when using MetaReviewer for the first (or second or third...) time:

  1. There are a lot of features available to MetaReviewer users, but some are less obvious than others. In her first MetaReviewer project, Marta learned that it's important to give yourself ample time and space to explore the software, find its treasure troves, and test its limits. Moving forward, Marta plans to use a "sandbox project" to test different project set-up ideas, play with new functionality as it rolls out, and train team members.

  2. You should always test the data export after making changes to your coding form to ensure that the export works and displays the data correctly. For this meta-review, Marta made two edits that, at the time, were not compatible with MetaReviewer's export functionality: removing the effect size information page and adding "Other (please describe)" as a response option. While Marta was able to work with the Help Desk Team to successfully export her data, the bug fixes took a little longer than expected. Testing the export as she made edits to the coding form would have given her multiple opportunities to adjust her coding form based on the exported data file or to flag the export issues earlier in the process!

  3. Marta never shied away from asking questions, identifying bugs, and making suggestions about how to improve functionality. While we haven't been able to address all of her feedback, there are multiple features in MetaReviewer v.1.2.5 that are a direct result of Marta's Help Desk requests, like coding form templates without effect size information pages.

And what about the meta-review findings?

Good news first: many synthesists working on education meta-analyses do a great job of following best practices for screening and coding and are increasingly attending to research questions about effect size variation. These are crucial first steps for producing trustworthy and relevant meta-analytic findings.

There are places, however, where synthesis teams fall short. For instance, meta-analytic methods that are widely supported among methodologists—like methods for handling effect size dependency or using meta-regression to explore why results vary—are rarely used. Additionally, most of the included meta-analyses did not preregister their protocols or make their data publicly available.
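
For readers curious about what "handling effect size dependency" and "meta-regression" look like in practice, here is a minimal sketch in R using the metafor package. To be clear, this is not code from Marta's review: the simulated data frame, its column names (yi, vi, study, es_id), and the grade_band moderator are hypothetical stand-ins, and a real analysis would involve many more decisions.

    library(metafor)

    # Simulated stand-in data (not from the meta-review): one row per effect
    # size, with multiple effect sizes nested within some studies
    dat <- data.frame(
      yi = c(0.21, 0.35, 0.10, 0.48, 0.05, 0.30, 0.18, 0.42),   # effect sizes
      vi = c(0.04, 0.05, 0.03, 0.06, 0.04, 0.05, 0.03, 0.06),   # sampling variances
      study = c(1, 1, 2, 2, 3, 4, 5, 5),                        # study IDs
      es_id = 1:8,                                              # effect size IDs
      grade_band = c("elem", "elem", "sec", "sec",
                     "elem", "sec", "elem", "sec")               # hypothetical moderator
    )

    # Multilevel model: effect sizes nested within studies, which accounts
    # for dependency among multiple effects drawn from the same study
    res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)

    # Cluster-robust variance estimation as an additional safeguard
    robust(res, cluster = dat$study)

    # Meta-regression: does the average effect vary with a study feature?
    res_mod <- rma.mv(yi, vi, mods = ~ grade_band,
                      random = ~ 1 | study/es_id, data = dat)
    summary(res_mod)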

Only 8% (!) of reviews mentioned using specialized tools to screen and code studies. This was especially interesting to us because tools like MetaReviewer can help make review processes more systematic and reproducible and make it easier to share data. This isn't just about making reviews more efficient; it's about ensuring they are open and trustworthy for everyone relying on their findings.