Jeromy Anglim's Blog: Psychology and Statistics

Monday, December 7, 2009

Meta-Analysis: Tips, Resources, and Software

The following discusses issues related to conducting a meta-analysis. It sets out: (a) some tips for getting started; (b) some online resources for learning about meta-analysis; (c) some links to software for conducting meta-analysis; and (d) a few thoughts that I have had about meta-analysis.

Tips for Getting Started
  1. Find a template article: The first time you conduct a meta-analysis, it helps to have a good example to act as a guide. But what might "good" mean in this context? First, a good example should be similar to your current project in several respects. It should involve: the same type of effect size (e.g., correlation, Cohen's d, odds ratio, etc.); the same field; a similar journal; and a similar set of issues regarding such things as moderators, correlates, fixed or random effects models, and so on. Second, a good example should be good in and of itself. Ideas and conventions about how to conduct meta-analysis are evolving, so a good meta-analysis will tend to be recent. It's also more likely to be found in respected top-tier journals. If you are a researcher in psychology, recent issues of Psychological Bulletin may provide a good source of potential articles (Click here for a quick search on Google Scholar for meta-analyses in Psychological Bulletin; some PDFs are available free online, particularly the ones flagged "...EDU [PDF]"; here are a couple: Beauty and Life Outcomes; Self-efficacy and Performance). In a more general sense, I previously posted thoughts on how to extract writing principles from journal articles.
  2. Get some basic reading materials on conducting a meta-analysis: The links below under Resources provide a good starting point. These resources discuss meta-analysis and provide links to additional resources.
  3. Get some meta-analytic software: A lot of options exist; see the list and links below. Some issues to consider include: (a) price (this includes both the cost of the software and the value you place on your time); (b) the degree to which you want to learn about meta-analysis as opposed to letting the software guide your choices; (c) the software you already use (for example, if you already use R, then the R packages for meta-analysis are likely to be more appealing); (d) the features required; and (e) your mathematical background and your willingness and ability to apply various formulas. Some software will save you the hassle of doing this yourself; other approaches require that you do the calculations by hand.

Resources for Learning about Meta-Analysis
  • Psychwiki provides a list of resources to get started; it also provides an outline of software
  • Michael Brannick's course notes on meta-analysis
  • Jamie DeCoster provides a great set of notes on meta-analysis. They cover the entire process of conducting a meta-analysis, including: formulas for calculating various effect size measures; thoughts on what to include in a meta-analytic report; and strategies for critiquing meta-analyses.
  • Crombie and Davies (2009, What is meta-analysis?) provide a gentle introduction to meta-analysis
Meta-Analysis Software
A few random thoughts
The following are just a few thoughts and observations I've made about meta-analysis over the years.
  • Basic meta-analysis is better than no meta-analysis: Meta-analysis has evolved into a sophisticated technique. There is a trade-off between the cost in time and money of running a meta-analysis and the information that it provides. The more you care about the details, the longer it will take. However, a quick and basic meta-analysis can be highly useful in summarising a relationship. Thus, even in a study that does not aim to provide a comprehensive meta-analysis, a simple listing of the effect sizes from previous studies can provide a good starting point for thinking about previous results and guiding subsequent research.
  • Think meta-analytically: Bruce Thompson (2002, What future quantitative social science research could look like) introduced the idea of meta-analytic thinking. Thompson proposed what could be described as an informal Bayesian approach to designing and analysing empirical research. The approach involves designing studies in light of expectations regarding effect sizes based on prior research. Then, findings from a study are interpreted in light of that prior research. For example, if you obtain a non-significant finding, but your study was underpowered given the small effect size previously found in the literature, an appropriate conclusion may be that the effect is small rather than absent. I discuss meta-analytic thinking a little more in a previous talk.
  • Raw data is better than meta-analysis: Meta-analysis is typically done because the raw data is not available. Analysis of raw data creates many additional possibilities. Researchers, universities, funding bodies, and journals should all push for greater access to raw data. This will enable more sophisticated synthesis of research.
  • Standardisation is not always appropriate: Meta-analysis is typically performed on standardised effect size measures such as Cohen's d, Pearson's r, or an odds ratio. However, it is often better to run analyses on the raw metric. In particular, if the standard deviation differs between studies, then standardised measures of effect will be affected. If the same measure is used across a series of studies, it may be possible to use the unstandardised values. Jeff Valentine and Harris Cooper (Effect Size Substantive Interpretation Guidelines) discuss this issue in substantially more depth.
  • Meta-analysis and data sharing: Meta-analyses are based on input data, which are summary statistics from the individual studies. Some meta-analyses report the input data in the article; other times such data are not shared. Researchers conducting meta-analyses should share the data they use to conduct their meta-analysis. Meta-analyses represent databases that should be continuously updated as the literature grows.
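
To illustrate the point above that even a basic meta-analysis is useful: a minimal sketch of a fixed-effect meta-analysis of correlations, pooling via Fisher's z transform with inverse-variance weights. The correlations and sample sizes are made-up illustrative values, not from any real literature.

```python
import math

def fixed_effect_meta(rs, ns):
    """Inverse-variance weighted mean correlation via Fisher's z transform."""
    zs = [math.atanh(r) for r in rs]     # Fisher z transform of each r
    ws = [n - 3 for n in ns]             # weight = 1 / var(z) = n - 3
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = math.sqrt(1 / sum(ws))          # standard error of the pooled z
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    # Back-transform the pooled z and its 95% CI to the r metric
    return math.tanh(z_bar), (math.tanh(lo), math.tanh(hi))

# Three hypothetical studies: r = .30 (n=50), r = .25 (n=120), r = .42 (n=80)
r_bar, ci = fixed_effect_meta([0.30, 0.25, 0.42], [50, 120, 80])
```

Even this much, computed on the back of an envelope, gives a defensible pooled estimate and interval to anchor a discussion of prior results.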
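
The meta-analytic-thinking point can be made concrete with a quick power sketch: if prior literature suggests a small correlation, how large a sample would a new study need? This uses the standard normal approximation via Fisher's z; the prior effect size r = .10 is an illustrative assumption, not a value from the text.

```python
import math

def n_for_correlation(r, z_alpha=1.96, z_beta=0.8416):
    """Approximate n to detect correlation r (two-tailed alpha = .05, power = .80).

    z_alpha and z_beta are the standard normal quantiles for alpha/2 and power.
    """
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

# If the literature suggests r is around .10, a credible new study needs
# roughly 780 participants; an n = 100 study would be badly underpowered,
# so its non-significant result says little beyond "the effect is small".
n = n_for_correlation(0.10)
```

This is exactly the reasoning in the bullet above: the non-significant result from an underpowered study is consistent with the small prior effect, not evidence against it.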
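
The standardisation caveat above can be shown with two hypothetical studies: both find the same raw mean difference on the same scale, but because their standard deviations differ, their standardised effects (Cohen's d) disagree. All numbers are invented for illustration.

```python
def cohens_d(mean_diff, pooled_sd):
    """Cohen's d: raw mean difference divided by the pooled standard deviation."""
    return mean_diff / pooled_sd

# Both studies find a 5-point raw difference on the same questionnaire,
# but sample SDs differ (perhaps due to range restriction in study 1).
d1 = cohens_d(5.0, 10.0)   # study 1: SD = 10 -> d = 0.50
d2 = cohens_d(5.0, 20.0)   # study 2: SD = 20 -> d = 0.25
```

Pooling d1 and d2 would suggest the studies disagree, when in the raw metric they found the identical effect, which is why meta-analysing unstandardised values can be preferable when all studies share a measure.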