How Can We Maximize What We Learn from Science?

Scientists are human, and considerable research in the behavioral sciences shows that humans often opt for quick heuristics or simple decision rules. Effortful thinking is difficult, and people often prefer to rely on mental shortcuts rather than sifting through large amounts of complex information. But of course, the quick and easy answer can often lead us astray. And in many ways, this is the story of how the current "crisis of confidence" in science came to be: Because of their human minds, scientists and consumers of science were relying on oversimplified decision rules. The most important question we can ask going forward is: How do we build systems for conducting and communicating science that help counteract basic human biases in thinking?

Thinking Deeply across the Research Cycle
Some of our recent methodological work focuses on synthesizing new developments in research methods that promote careful thinking across the research cycle, from study design to data analysis to aggregating across multiple studies using cutting-edge meta-analytic techniques. These resources seek to provide readers with concrete methodological and statistical tools they can use in their own research, with a focus on discussing why, when, and how to implement each tool.
READ MORE:
 Ledgerwood, A. (in press). New developments in research methods. Chapter to appear in R. Baumeister & E. Finkel (Eds.), Advanced Social Psychology. Oxford University Press.
 Ledgerwood, A., Soderberg, C. K., & Sparks, J. (2017). Designing a study to maximize informational value. In J. Plucker & M. Makel (Eds.), Toward a more perfect psychology: Improving trust, accuracy, and transparency in research (pp. 33–58). Washington, DC: American Psychological Association.
 Ledgerwood, A. (2016). Introduction to the special section on improving research practices: Thinking deeply across the research cycle. Perspectives on Psychological Science, 11, 661–663.
 Ledgerwood, A. (2016). The Start-Local Approach. Talk presented at the Training Preconference of the 2016 Annual Convention of the Society for Personality and Social Psychology.
 Ledgerwood, A. (2014). Introduction to the special section on moving toward a cumulative science: Maximizing what our research can tell us. Perspectives on Psychological Science, 9, 610–611.
 Ledgerwood, A. (2014). Introduction to the special section on advancing our methods and practices. Perspectives on Psychological Science, 9, 275–277.
Modeling Tradeoffs to Identify Optimal Research Strategies: The Case of the Unplanned Covariate
Once viewed as a statistical power-boosting hero, covariates have recently been recast in a negative light as concerns emerged that they can inflate Type I error rates when researchers decide whether to include them only after seeing the data. We conducted a Monte Carlo simulation study designed to quantify the Type I/Type II error tradeoff associated with including an unplanned, independent covariate in an experimental design. By varying the effect size under investigation, the correlation between the covariate and DV, and the particular analytic strategy modeled in our simulations, we were able to identify strategies that are particularly terrible (e.g., ones that do virtually nothing to boost power while allowing Type I error to inflate dramatically) as well as strategies that are particularly useful (e.g., ones that provide a substantial power boost while inflating Type I error only a little or not at all).
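The flexible-use problem is easy to see in miniature. The sketch below is a minimal illustration (not the simulations reported in our paper): under the null hypothesis, it compares always analyzing without the covariate, always analyzing with it, and flexibly reporting whichever test happens to reach significance. The covariate adjustment here is a simple residualization, which only approximates a full ANCOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=30, r=0.5):
    """Simulate one two-group study under the null (true group effect = 0)."""
    group = np.repeat([0, 1], n)
    cov = rng.normal(size=2 * n)                       # independent covariate
    y = r * cov + np.sqrt(1 - r**2) * rng.normal(size=2 * n)
    # p-value ignoring the covariate (simple t-test)
    p_plain = stats.ttest_ind(y[group == 0], y[group == 1]).pvalue
    # p-value after partialling out the covariate (approximate ANCOVA)
    resid = y - np.polyval(np.polyfit(cov, y, 1), cov)
    p_cov = stats.ttest_ind(resid[group == 0], resid[group == 1]).pvalue
    return p_plain, p_cov

n_sims = 4000
ps = np.array([one_study() for _ in range(n_sims)])
always_plain = np.mean(ps[:, 0] < .05)          # fixed strategy: never adjust
always_cov = np.mean(ps[:, 1] < .05)            # fixed strategy: always adjust
flexible = np.mean(ps.min(axis=1) < .05)        # report whichever test "worked"
print(f"always plain:  {always_plain:.3f}")
print(f"always ANCOVA: {always_cov:.3f}")
print(f"flexible:      {flexible:.3f}")
```

Both fixed strategies hold the false-positive rate near the nominal .05, while the flexible strategy necessarily exceeds it, because it gets two chances at significance per study.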
Latent Variables Boost Accuracy at the Cost of Precision
In all of our research, we pay particular attention to the appropriateness of various methods and practices and what can be gained (or lost) from choosing one alternative over another. For instance, social psychologists often seek to shed light on the basic process underlying an effect by testing for mediation. A simple three-variable mediation model can be analyzed either with a typical regression approach or with structural equation modeling (SEM) using latent variables. Which is the better option? This is a more complex and consequential question than researchers often realize.
On the one hand, statisticians often recommend a SEM approach because it tends to produce more accurate estimates. (Regression tends to give inaccurate estimates that are often too small, because they have been attenuated by measurement error.) On the other hand, our work has shown that a regression approach tends to produce more precise estimates than a SEM approach, with smaller standard errors and therefore increased power to detect an effect in the first place.
This means that SEM estimates are correctly centered (i.e., accurate) but also widely scattered (imprecise, and therefore less likely to be significant). Imagine that each study is like a dart thrown at a dartboard. Over time (that is, across many studies), a SEM approach will get the darts to converge on the bullseye—the real strength of the relation between variables in the population—but there will be a lot of variability in where exactly each individual dart ends up. The estimate from any individual study might be off by quite a bit, but the meta-analytic average would be accurate (that is, it would recover the true population parameter).
In contrast, regression estimates are incorrectly centered (inaccurate, and often far too small) but tightly clustered (precise, and therefore more likely to be significant). Because there’s less variability in where the darts end up, the results of a single study are more likely to be significant—so this approach is good at telling us if there is an effect in the first place. But because the darts are nowhere near the bullseye, we’ll get the wrong idea about how big that effect actually is...even when aggregating across studies in a meta-analysis.
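The dartboard intuition can be reproduced in a few lines. The sketch below is a simplified stand-in for the full comparison: it uses classical disattenuation (dividing an observed correlation by its reliability) in place of a fitted latent-variable model, with hypothetical values for the true correlation, reliability, and sample size. The observed estimates land tightly clustered but off-center; the corrected estimates center on the truth but scatter more widely.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, rel, n, n_studies = 0.5, 0.7, 100, 2000   # hypothetical values

obs_est, corrected_est = [], []
for _ in range(n_studies):
    true_x = rng.normal(size=n)
    true_y = rho * true_x + np.sqrt(1 - rho**2) * rng.normal(size=n)
    # add measurement error so each observed score has reliability `rel`
    x = np.sqrt(rel) * true_x + np.sqrt(1 - rel) * rng.normal(size=n)
    y = np.sqrt(rel) * true_y + np.sqrt(1 - rel) * rng.normal(size=n)
    r_obs = np.corrcoef(x, y)[0, 1]
    obs_est.append(r_obs)               # observed-variable ("regression-style") estimate
    corrected_est.append(r_obs / rel)   # disattenuated ("latent-style") estimate

obs_est, corrected_est = np.array(obs_est), np.array(corrected_est)
print(f"true rho = {rho}")
print(f"observed:  mean {obs_est.mean():.3f}, SD {obs_est.std():.3f}")   # biased, precise
print(f"corrected: mean {corrected_est.mean():.3f}, SD {corrected_est.std():.3f}")  # accurate, imprecise
```

With these values the observed correlations cluster around rho times the reliability (about .35) with a small spread, while the corrected estimates average near the true .50 but with a proportionally larger spread: the same accuracy-for-precision trade described above.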
Is there a happy medium? Researchers can maximize both accuracy and precision by investing in reliable measures and by planning mediation studies with adequate power. But when highly reliable measures aren’t feasible, a two-step strategy for testing and estimating the indirect effect in a mediation model may be the best approach. We recommend using observed variables to test the indirect effect for significance (e.g., using regression and a bootstrapped SE), and then estimating the path coefficient for the indirect effect using latent variables in SEM.
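The first step of that strategy might look like the sketch below: a minimal illustration on hypothetical simulated data, using observed scores, OLS regressions, and a percentile bootstrap confidence interval (one common variant of a bootstrapped significance test) for the a*b indirect effect. The second step, estimating the effect with latent variables, would be carried out in dedicated SEM software and is only noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical data for an X -> M -> Y mediation model
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.5 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS regressions on observed scores."""
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    # slope of Y on M controlling for X, via residualization
    m_res = m - np.polyval(np.polyfit(x, m, 1), x)
    y_res = y - np.polyval(np.polyfit(x, y, 1), x)
    b = np.polyfit(m_res, y_res, 1)[0]
    return a * b

# Step 1: percentile bootstrap CI for the indirect effect (observed variables)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0 <= hi)
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# Step 2 (not shown): if significant, estimate the indirect effect's size with
# a latent-variable SEM, e.g., using multiple indicators per construct.
```

Splitting the two steps this way uses the precise observed-variable test to answer "is there an effect?" and reserves the accurate latent-variable model for "how big is it?".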