
Data Reanalysis – how does methodology influence research results?
Dr Michał Misiak and Dr Marta Kowal from the University of Wrocław took part in a project aimed at reanalyzing data from 100 studies. Analyses by over 500 researchers showed how methodological choices influence research results.
The new publication in Nature, "Investigating the analytical robustness of the social and behavioural sciences", was part of an international collaboration led by Balázs Aczél and Barnabás Szászi (Eötvös Loránd University and Corvinus University), carried out under the Systematizing Confidence in Open Research and Evidence (SCORE) program funded by DARPA.
A team of 457 independent analysts from all over the world conducted 504 reanalyses of data from 100 already published studies in the social and behavioral sciences. All analysts received the same dataset and the same key research question; however, they could choose the method of analysis based on their own knowledge and experience.
In the last decade, the social and behavioral sciences have undergone thorough reform aimed at increasing the transparency, rigor, and credibility of research. Preregistration, registered reports, replication studies, and verification of the reproducibility of analyses are all intended to reduce the occurrence of spurious and biased results. However, one important question has received relatively little attention: to what extent do research results depend on the specific method of data analysis?
In standard scientific practice, a dataset is usually analyzed by a single researcher or research team, and the resulting publication presents the outcome of one specific analytical pathway. Although peer review assesses methodological soundness, it rarely reveals what results could have emerged from alternative but equally justifiable statistical decisions.
Empirical research entails many key decisions, such as data-cleaning procedures, the definition of variables, the selection of statistical models and software, and the interpretation of results. Together, these decisions make up so-called analytical variability – a form of flexibility that can influence the final conclusions.
Dr Marta Kowal from UWr sums up: "The project shows another level of maturity in the academic community, which, in trying to discover human nature and answer pressing research questions, pays attention not only to the final goal, but also to the way (or ways) of reaching that goal, as well as to scientific self-correction. Our study shows how important data analysis is."
Main conclusions
Across the 100 studies, the researchers observed significant discrepancies between independent analyses of the same question conducted on the same data. Although most of the reanalyses supported the main claims of the original studies, the effect sizes, statistical estimates, and levels of uncertainty often varied considerably; in only about one third of cases did the analysts reach the same conclusions as the authors of the original studies.
Importantly, these discrepancies were not the result of a lack of professional expertise: experienced researchers with solid statistical training reached divergent results just as often as others. At the same time, observational studies proved less robust than experimental ones, suggesting that more complex data structures allow greater analytical flexibility — and consequently generate greater uncertainty.
"I am optimistic about the future, and I am happy to have been part of such an amazing project," adds Dr Kowal.
More about the study can be found in the article: Aczel, B., Szaszi, B., Clelland, H. T., et al. Investigating the analytical robustness of the social and behavioural sciences. Nature 652, 135–142 (2026).
Translated by Maja Maziarz (student of English Studies at the University of Wrocław) as part of the translation practice.
Date of publication: 15.04.2026
Added by: EJK