Vincenzo Crupi: From scoring rules to the value of evidence
Epistemic inaccuracy, uncertainty, information, evidential support, the value of given evidence, and the expected value of an evidence search option (to wit, an experiment): all of these notions have been involved in classical and recent discussions within probabilistic approaches to philosophy of science, epistemology, and cognitive science. The formal representations of all these key pieces are tightly and neatly connected at the mathematical and foundational level, as anticipated by leading figures such as Jimmy Savage and others. We will start out with a parametric family of scoring rules (including the so-called logarithmic and Brier scores as special cases) as a basic building block, which will then provide a firm grasp on how to navigate this theoretical maze. Once the foundations are clarified, a number of fascinating (and open) issues will assume a much clearer profile.
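As a minimal numerical sketch of the two special cases mentioned above (my own illustration, not the talk's parametric family), the Brier and logarithmic scores measure the inaccuracy of a probability forecast for a binary event, and both are "strictly proper": the expected score is minimized by reporting one's true credence.

```python
# Two classic scoring rules for a probability forecast p of a binary event
# (outcome 1 if it occurred, 0 otherwise). Lower scores are better.
import math

def brier_score(p, outcome):
    """Quadratic (Brier) inaccuracy: squared distance from the truth."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Logarithmic inaccuracy: minus the log-probability of what happened."""
    return -math.log(p if outcome == 1 else 1 - p)

# Strict propriety, checked on a grid: if the true chance of the event is
# q, the forecast minimizing expected score is p = q itself.
q = 0.8

def expected(score, p):
    return q * score(p, 1) + (1 - q) * score(p, 0)

best_p = min((p / 100 for p in range(1, 100)),
             key=lambda p: expected(brier_score, p))
print(best_p)  # the grid minimum sits at the true chance, 0.8
```

The same grid check with `log_score` in place of `brier_score` again singles out p = 0.8, which is the sense in which both rules reward honest credences.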
Susann Fiedler: Strengthening the Bond Between Theory and Evidence
The replication crisis in psychology has led to a fruitful discussion about common research practices and research institutions. I will present a set of measures that aim at making science more efficient and research results more reliable by fostering a strategic alignment and the interlocking of all parts of the research process. The recommended changes address individuals as well as institutions and concern theory, empirical methodology, and the accumulation of evidence. The ideas put forward in this talk aim to improve the foundation for efficient research by fostering: (a) precise theory specification, critical theory testing, and theory revision; (b) a culture of openness to evidence and subsequent theory revision; and (c) the establishment of interconnected databases for theories and empirical results, which are continuously updated in a decentralized manner.
Christian Hennig: A spotlight on statistical model assumptions
Statistical models are central to statistical data analysis. Ignorance of model assumptions can cause misleading analyses. On the other hand, claims that the model assumptions have to be fulfilled in order to apply statistical methods are equally misleading, given that in practice this is hardly ever possible. What is the meaning and role of model assumptions in statistics? How does this differ between frequentist and Bayesian approaches? Can and should model assumptions be tested? How can “wrong” models be useful? Can we analyse data in a way that is robust against misspecified models? Can we model the consequences of model misspecification? Statisticians can handle uncertainty formally by modelling it, but model uncertainty may dominate modelled uncertainty. Statistics should always be used and interpreted keeping in mind that it is based on idealised models that are essentially different from the modelled reality. Differences between model and reality, and their implications, always deserve attention.
Gerhard Schurz: The optimality of meta-induction
Hume’s problem is the problem of establishing a justification of the rationality of induction: the transfer of observed regularities from the past to the future. This talk introduces a new account of Hume’s problem. This account concedes the force of Hume’s sceptical arguments against the possibility of a non-circular justification of the reliability of induction. What it demonstrates is that one can nevertheless give a non-circular justification of the optimality of induction, more precisely of meta-induction, that is, induction applied at the level of competing methods of prediction. Based on discoveries in machine learning theory, it is demonstrated that a strategy called attractivity-weighted meta-induction is predictively optimal in all possible worlds among all prediction methods that are accessible to the epistemic agent. Moreover, the a priori justification of meta-induction generates a non-circular a posteriori justification of object-induction. Beyond its importance for foundation-oriented epistemology, meta-induction (MI) has a variety of applications in neighbouring disciplines, including: the forecasting sciences (MI as a superior prediction tool), cognitive science (MI as a new account of adaptive rationality), and social epistemology (MI as a means for the spread of knowledge).
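The core idea can be conveyed by a toy sketch (illustrative only: Schurz's attractivity weights are defined differently and come with formal optimality theorems, and the function name `meta_induct` is my own). Each round, the meta-inductivist predicts a success-weighted average of the accessible methods' predictions, so weight shifts toward whichever method has predicted best so far:

```python
# Toy success-weighted meta-induction over competing prediction methods.

def meta_induct(method_predictions, outcomes):
    """method_predictions: per-round lists with one prediction per method."""
    n_methods = len(method_predictions[0])
    success = [0.0] * n_methods          # cumulative success: 1 - squared error
    meta_preds = []
    for preds, outcome in zip(method_predictions, outcomes):
        total = sum(success)
        weights = ([s / total for s in success] if total > 0
                   else [1.0 / n_methods] * n_methods)   # no track record yet
        meta_preds.append(sum(w * p for w, p in zip(weights, preds)))
        for i, p in enumerate(preds):
            success[i] += 1 - (p - outcome) ** 2
    return meta_preds

# Two rival methods, one always right and one always wrong (outcomes all 1):
# the meta-inductivist starts undecided, then locks onto the successful method.
print(meta_induct([[1.0, 0.0]] * 3, [1, 1, 1]))  # [0.5, 1.0, 1.0]
```

The point of the optimality results is that refined versions of this strategy provably approach the success rate of the best accessible method in every possible world, without assuming in advance that any method is reliable.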
Mattia Andreoletti: Philosophy in Science
The question concerning the relationship between science and philosophy of science is a very old one: can philosophy of science be useful to science? Does science need philosophy? As general as these questions may sound, they are questions to which at least some scientists would reply, not without reason one must admit, with a peremptory “no”. After all, why should a discipline often fraught with abstruse questions and obscure language be of any use (let alone help) to rational, evidence-based scientific endeavors? Indeed, philosophy of science is frequently described by those outside the discipline as completely irrelevant to scientific practice. In recent decades, philosophers of science have extensively discussed this problem. For instance, some have argued that philosophy of science should contribute to methodological discussions. Others, in the neo-positivist tradition, have proposed a clarificatory and analytical role, facilitating the theoretical aspects of scientific work. In this tutorial I introduce and discuss an emerging field of research which aims to contribute directly to science in a systematic way: philosophy in science. Moreover, starting from a few success stories, I offer a framework to explain how philosophers can be useful to science.
Fabrizio Calzavarini, Gustavo Cevolani and Marco Viola: Reverse inference and methodological fallacies in cognitive neuroscience
In cognitive neuroscience, researchers often infer from specific activation patterns to the engagement of particular mental processes. This “reverse inference” (RI) plays a crucial role in many applications of functional magnetic resonance imaging (fMRI), both inside and outside cognitive neuroscience. In recent years, RI has attracted a great deal of attention, especially after leading neuroscientist Russell Poldrack (2006) denounced an uncontrolled “epidemic” of this reasoning pattern, cautioned against its (improper) use, and pointed to its crucial weakness. Poldrack’s paper triggered a heated debate, but no convincing solution seems to be on offer yet.
In this tutorial, we discuss the logical and methodological problems surrounding reverse inference and cognitive neuroscience in general, proceeding as follows. First, we provide an elementary introduction to neuroscientific research, its aims, and its methods. Second, we focus on the issue of reverse inference and on the debate raised by Poldrack’s critique. Third, we place the debate about RI in the wider context of Bayesian philosophy of science. In particular, we focus on “weak” and “strong” forms of RI construed as abductive inference and on the confirmatory value of RI in assessing cognitive hypotheses.
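In the Bayesian spirit of the debate (a schematic rendering of my own, with made-up numbers, not the tutorial's material), the strength of a reverse inference from activation A to mental process M can be expressed by Bayes' theorem, where the weakness Poldrack pointed to is low selectivity: the region also activates when M is not engaged.

```python
# Reverse inference as Bayesian updating: how strongly does observing
# activation A in a brain region confirm engagement of mental process M?

def posterior(prior_m, p_a_given_m, p_a_given_not_m):
    """P(M | A) by Bayes' theorem."""
    joint_m = prior_m * p_a_given_m
    joint_not_m = (1 - prior_m) * p_a_given_not_m
    return joint_m / (joint_m + joint_not_m)

# If the region activates for many other processes too (low selectivity),
# the reverse inference is weak...
weak = posterior(prior_m=0.5, p_a_given_m=0.8, p_a_given_not_m=0.6)
# ...whereas highly selective activation supports a strong inference.
strong = posterior(prior_m=0.5, p_a_given_m=0.8, p_a_given_not_m=0.1)
print(round(weak, 3), round(strong, 3))
```

On this rendering, the same activation datum can license anything from a nearly worthless to a highly confirmatory inference, depending on the region's selectivity, which is why base rates of activation across tasks matter so much to the debate.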
Noah van Dongen and Jan Sprenger: Significance Testing and Severe Testing
Mael Lemoine and Thomas Pradeu: Philosophy in Science
Philosophers of science sometimes have an impact on science. Beyond informal discussion, institutional influence, or talks, they write papers that are sometimes cited in science, and sometimes even publish papers in scientific venues that pass peer review. Some of these papers are truly philosophical: we call that activity “Philosophy in Science”, which consists in starting from a scientific problem, using philosophical tools, and getting back to science with a solution. We assess the extent of this activity and examine the hypothesis that the philosophers engaged in it constitute a community within philosophy of science.
Andrea Roselli (Cambridge/Rome): Bayesian conditionalizing and Emerging Objectivity in Science
The idea of a Tarskian correspondence to “the real way the world is” might have detrimental effects on scientific models, pace the no-miracles argument (NMA). What I propose is a change of perspective, which might also help shed new light on the key notions of ‘objectivity’ and ‘uncertainty’ in information theory and in approaches based on statistical inference, such as objective/subjective Bayesianism. My proposal is to explore the idea that objectivity in science gradually emerges in the (potentially infinite) chain of Bayesian conditionalizations, instead of being related in any sense to a correspondence with the ‘true world’ out there.
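One classic formal motivation for this picture (a toy illustration of my own, not Roselli's argument) is the washing-out of priors: agents who start from very different subjective priors and repeatedly conditionalize on a shared stream of evidence end up with nearly identical posteriors, an intersubjective agreement that emerges from the updating chain itself.

```python
# Two agents with opposed priors over a coin's bias conditionalize on the
# same evidence; their posteriors converge on the same hypothesis.

hypotheses = [0.2, 0.5, 0.8]          # candidate chances of heads
prior_a = [0.90, 0.05, 0.05]          # agent A strongly favors bias 0.2
prior_b = [0.05, 0.05, 0.90]          # agent B strongly favors bias 0.8

def conditionalize(prior, heads):
    """One step of Bayesian conditionalization on a single coin toss."""
    likelihood = [h if heads else 1 - h for h in hypotheses]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

data = [True, True, False, True] * 25   # 75% heads over 100 tosses
post_a, post_b = prior_a, prior_b
for toss in data:
    post_a = conditionalize(post_a, toss)
    post_b = conditionalize(post_b, toss)

# Despite opposite starting points, both now assign almost all probability
# to the hypothesis closest to the observed frequency (bias 0.8).
print(round(post_a[2], 4), round(post_b[2], 4))
```

The subjective starting points leave essentially no trace after enough shared evidence, which is one precise sense in which objectivity can "gradually emerge" from the chain of updates rather than from correspondence to the world.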
Samuel Fletcher (University of Minnesota): Replication is for Meta-Analysis
Judgements about whether there is a replicability crisis, and if so what its nature and gravity are, have largely presupposed that replication is a binary matter: an experiment either is or is not replicated by another experiment. But whether this is so depends on replication’s function in science. I suggest that direct replication’s ultimate goal is to amalgamate and assess the strength of evidence for or against a hypothesis, indicating whether the evidence provided by the original experiment being replicated was misleading. If one accepts this, then replication’s goal aligns with that of statistical meta-analysis, broadly conceived.
Borut Trpin (LMU Munich): Thou shalt not gamble with methods
Should a scientist rely on methodological triangulation? Heesen et al. (2019) recently provided a convincing affirmative answer. However, their approach relies on methodological gambling. We instead propose epistemically modest triangulation (EMT), according to which one should withhold judgement when evidence is discordant. We show that for a scientist in a methodologically diffident situation, the expected utility of the EMT is greater than that of Heesen et al.’s (2019) triangulation or that of using a single method. We also show that the EMT is more appropriate for increasing epistemic trust in science. In short: triangulate, but do not gamble with evidence.
Luna de Souter (KU Leuven): Family-wise Error Rate: a new way to evaluate meta-analysis
Reproducibility problems in science are partly caused by publication and citation bias. The purpose of this paper is to illustrate how, in the presence of these biases, testing the same effect by multiple independent research teams leads to error inflation in a similar way as multiple testing in individual studies does. This warrants attention to the probability of getting at least one type I error, as opposed to the expected rate of false discoveries. This provides a new point of view from which to evaluate the advantages and challenges of meta-analysis, a popular method for dealing with reproducibility problems.
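The error inflation in question can be made concrete with the textbook family-wise error rate calculation (a standard illustration, not taken from the paper): when k independent teams each test a true null at level α, the probability that at least one of them obtains a false positive is 1 − (1 − α)^k.

```python
# Family-wise error rate across independent tests of the same (true) null.

def fwer(n_teams, alpha=0.05):
    """P(at least one type I error) among n_teams independent level-alpha tests."""
    return 1 - (1 - alpha) ** n_teams

for n in (1, 5, 10, 20):
    print(f"{n:2d} teams -> FWER = {fwer(n):.3f}")
```

With twenty independent teams the chance of at least one false positive already exceeds 64%, and publication bias means the false positives may be the only results that become visible, which is the multiple-testing analogy the paper draws.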
Adam Kubiak (Catholic University Lublin): Neyman’s Alpha Error Revisited
I show that Neyman juggles several conceptions of probability. Making use of this fact, I bring out interesting features of the Alpha Error and the Average Alpha Error, and argue that they validate applying the conception of a false-rejection error rate in situations of non-exact replication.
Christoph Merdes (FAU Erlangen): Responding to Retractions
This paper sets the retraction of scientific results within a formal framework for testimonial reports. Retractions can be modeled as testimonial reports, but they require, I argue, a different response from the epistemic agents receiving them. I sketch two basic strategies and their qualitative implications. Finally, I lay out a path for investigating these strategies in a more complex social context by means of computer simulation.