No More Excuses for Non-Reproducible Methods
Online technologies make it easy to share precise experimental protocols - and doing so is essential to modern science, says Lenny Teytelman.
This publication provides an overview of some practical tools and strategies that researchers can implement in their own workflow to increase replicability and the overall quality of psychology research.
Many efforts are underway to promote data sharing in psychology; however, it is currently unclear whether the in-principle benefits of data availability are being realized in practice. In a recent study, we found that a mandatory open data policy introduced at the journal Cognition led to a substantial increase in available data, but a considerable portion of this data was not reusable. For data to be reusable, it needs to be clearly structured and well-documented. Open data alone will not be enough to achieve the benefits envisioned by proponents of data sharing.
Attempts to reproduce values reported in 35 articles published in the journal Cognition revealed analysis pipelines peppered with errors. The authors outline elements of a reproducible workflow that may help to mitigate these problems in future research.
Just like judges and politicians, researchers may overstate their confidence in a claim. To truly assess their confidence, something needs to be on the line.
Peer reviewers have the right to view the data and code that underlie a work if it would help in the evaluation, even if these have not been provided with the submission. Yet few referees exercise this right.
A simple software toolset can help to ease the pain of reproducing computational analyses.
Only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method. More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy.
An introductory course that guides students towards a reproducible science workflow.
An ambitious project that set out nearly five years ago to replicate experiments from 50 high-impact cancer biology papers has gradually shrunk that number and now expects to complete just 18 studies.
There are a number of threats to replicability. Some of them are technical, some social.
What’s the scientific value of the Stanford Prison Experiment? Zimbardo responds to the new allegations against his work.
Teaches anyone how to create reproducible reports, with reusable environments, using technologies like Nix, LaTeX, and knitr for languages like R, Python, and JavaScript.
The key to Git's popularity is the online repository and social network, GitHub.
Current bibliometric incentives discourage innovative studies and encourage scientists to shift their research to subjects already actively investigated.
New research predicts that audits would reduce the number of false positive results from 30.2 per 100 papers to 12.3 per 100.
Most papers fail to report many aspects of the experiment and analysis that cannot safely be omitted - things that are crucial to understanding the result and its limitations and to repeating the work. Instead of arguing about whether results hold up, we should strive to provide enough information for others to repeat the experiments.
Reproducibility failures occur even in fields such as mathematics or computer science that do not have statistical problems or issues with experimental design. Suggested policy changes ignore a core feature of the process of scientific inquiry that occurs after reproducibility failures: the integration of conflicting observations and ideas into a coherent theory.
Reproducibility issues pose serious challenges for scientific communities. But what happens when those issues get picked up by political activists? A report from the National Association of Scholars takes on the reproducibility crisis in science. Not everyone views the group’s motives as pure.
Policy makers often cite research to justify their rules, but many of those studies wouldn’t replicate.
Congress will have to pay for some steps to ensure greater reproducibility in the sciences. In the end, those steps will save enormous amounts now spent chasing blind alleys and mirages. What's needed are standardized descriptions of scientific materials and procedures, standardized statistics programs, and standardized archival formats.
This study by the National Association of Scholars examines the different aspects of the reproducibility crisis of modern science. The report also includes a series of policy recommendations, scientific and political, for alleviating the reproducibility crisis.
A series of talks on robust research practices in psychology and the biomedical sciences, held in Oxford in 2017. Organized by Dorothy Bishop, Ana Todorovic, Caroline Nettekoven and Verena Heise.
New guidelines from many journals requiring authors to provide data and code post-publication upon request are found to be an improvement over no policy, but currently insufficient for reproducibility.
Widespread adoption of preregistration will sharpen the distinction between hypothesis generation and hypothesis testing and will improve the credibility of research findings.
Nature journals encourage researchers who submit papers that rely on custom software to provide the programs for peer review.
Lutz Jäncke and Lawrence Rajendran talk about the crisis in the publication process and new solutions.
The publishing system builds in resistance to replication. Paul Gertler, Sebastian Galiani and Mauricio Romero surveyed economics journals to find out how to fix it.
Scientists around the globe nowadays regularly take to the internet to scrutinize research after it’s been published — including to run their own analyses of the data and spot mistakes or fraud.
Some scientific journals are defusing the fear of getting “scooped” by making it easier for scientists to publish results that have appeared elsewhere.