Sciendo RSS Feed for Disputatio

Excluded Reasons and Moral Conflict
<abstract> <title style='display:none'>Abstract</title> <p>As a legitimate authoritative directive is a second-order reason, it defeats conflicting reasons by a process of exclusion. Nonetheless, a legitimate authoritative directive can be defeated by more weighty reasons, including, as I argue in this paper, the more weighty reasons it excludes. This is part of a value pluralist conception of authority, according to which there is no general rule for the resolution of conflicting reasons. I advance this argument in response to the work of Joseph Raz. Although Raz is a value pluralist, he posits a general rule for the resolution of some conflicts: namely, that an exclusionary reason cannot be defeated by a (more weighty) reason it excludes. This represents a weak version of value pluralism. My argument is that Raz does not succeed in his efforts to show either that this general rule better ensures conformity with reason or that it is justified by a commitment to autonomy.</p> </abstract>

An Integration Challenge to Strong Representationalism
<abstract> <title style='display:none'>Abstract</title> <p>By “strong representationalism” (“SR” hereafter), I mean a version of naturalistic philosophy of mind which first naturalizes intentionality by identifying it with causal relations to physical properties and then naturalizes phenomenology by identifying it with intentionality or making the two co-supervene on each other (Montague [2010]). More specifically, SR will be taken as the conjunction of causal-function semantics and the intentionality-phenomenology identity thesis, the latter of which entails what I call “converse intentionalism”, the principle that experiential content supervenes on phenomenology. Because of this identity thesis, SR enjoys a phenomenological plausibility which is absent from traditional physicalism about the mind.
However, in this paper, I shall raise an <italic>integration challenge</italic> to SR by arguing that its foundational principles do not integrate easily. I will also explore some strategies open to SR for addressing my challenge, and argue that, by invoking those strategies, SR either loses its phenomenological plausibility or undermines causal-function semantics. I conclude that, if my argument is correct, it gives us reason to search for new principles to replace SR’s foundations.</p> </abstract>

Who Is Afraid of the Logical Problem in Meta-Ethics?
<abstract> <title style='display:none'>Abstract</title> <p>Expressivism, as applied to a certain class of statements (evaluative ones, for instance), is constituted by two doctrines, only the first of which will concern me in this paper. Evaluative statements, according to this doctrine, are not propositional (susceptible of truth or falsity). In this paper, I will argue that one of the vexing problems (which I label the “logical problem”) this doctrine engenders for the expressivist is equally pressing for some cognitivists (who think evaluative statements <italic>do</italic> have a truth-value). I will present the difficulty and argue that some constructivists, who <italic>are</italic> cognitivists, cannot contend with it at all, while others must resort to more complex strategies than the one available to other cognitivists.</p> </abstract>

Observation Sentences
<abstract> <title style='display:none'>Abstract</title> <p>I argue that, <italic>pace</italic> Quine, indeterminacy of translation affects observation sentences. I illustrate this indeterminacy with examples and show how it is tied to the indeterminacy affecting the analytical status of observation categoricals. I propose my own construal of the thesis of indeterminacy of translation, according to which indeterminacy is based on the inextricability of meaning and belief.
I explain why this construal should be favored over Quine’s.</p> </abstract>

Governing of Opinions
<abstract> <title style='display:none'>Abstract</title> <p>Thomas Hobbes’s most important recommendations for a sovereign reader concerned the governing of opinion. Given the spread of false doctrines and their powerful champions, Hobbes feared that subjects would hold opinions contrary to the maintenance of peace. His solution combined civic education and censorship. This text explains how Hobbes justifies his recommendations from the perspective of individual deliberation. It argues that Hobbes conceived of censoring circulating doctrines as a way of keeping subjects’ minds like clean paper, ready for the sovereign to imprint civil doctrine in them through teaching, thereby increasing the chances of influencing subjects’ (free) deliberation, and thus of producing obedience.</p> </abstract>

a Causal Interpretation of the Common Factor Model
<abstract><title style='display:none'>Abstract</title><p>Psychological constructs such as personality dimensions or cognitive traits are typically unobserved and are therefore measured by observing so-called indicators of the latent construct (e.g., responses to questionnaire items or observed behavior). The Common Factor Model (CFM) models the relations between the observed indicators and the latent variable. In this article we argue in favor of interpreting the CFM as a causal model rather than merely a statistical model in which common factors are only descriptions of the indicators. When there is sufficient reason to hypothesize that the underlying causal structure of the data is a common cause structure, a causal interpretation of the CFM has several benefits over a merely statistical interpretation of the model.
We argue that (1) a causal interpretation conforms with most research questions, in which the goal is to <italic>explain</italic> the correlations between indicators rather than merely summarize them; (2) a causal interpretation of the factor model legitimizes the focus on <italic>shared</italic>, rather than unique, variance of the indicators; and (3) a causal interpretation of the factor model legitimizes the assumption of local independence.</p></abstract>

Turing Patterns and Biological Explanation
<abstract><title style='display:none'>Abstract</title><p>Turing patterns are a class of minimal mathematical models that have been used to discover and conceptualize certain abstract features of early biological development. This paper examines a range of these minimal models in order to articulate and elaborate a philosophical analysis of their epistemic uses. It is argued that minimal mathematical models aid in structuring the epistemic practices of biology by providing precise descriptions of the quantitative relations between various features of complex systems, generating novel predictions that can be compared with experimental data, promoting theory exploration, and acting as constitutive parts of empirically adequate explanations of naturally occurring phenomena, such as biological pattern formation. Focusing on the roles that minimal model explanations play in science motivates the adoption of a broader diachronic view of scientific explanation.</p></abstract>

Metabolic Syndrome: Which Kind of Causality, if any, is Required?
<abstract><title style='display:none'>Abstract</title><p>The definition of metabolic syndrome (MetS) has been, and still is, extremely controversial.
My purpose is not to give a solution to the associated debate but to argue that the controversy is at least partially due to the different ‘causal content’ of the various definitions: their theoretical validity and practical utility can be evaluated by reconstructing or making explicit the underlying causal structure. I will therefore propose to distinguish the alternative definitions according to the kinds of causal content they carry: (1) definitions grounded on associations, (2) definitions presupposing a causal model built upon statistical associations, and (3) definitions grounded on underlying mechanisms. I suggest that analysing definitions according to their causal content can be helpful in evaluating alternative definitions of some diseases. I want to show how the controversy over MetS suggests a distinction among three kinds of definitions based on how explicitly they characterise the syndrome in causal terms, and on the type of causality involved. I will call ‘type 1 definitions’ those definitions that are purely associative; ‘type 2 definitions’ the definitions based on statistical associations, plus generic medical and causal knowledge; and ‘type 3 definitions’ the definitions based on (hypotheses about) mechanisms. These kinds of definitions, although different, can be related to each other. A definition with more specific causal content may be useful in the evaluation of definitions characterised by a lower degree of causal specificity. 
Moreover, identifying the type of causality involved helps to provide a good criterion for choosing among different definitions of a pathological entity.</p><p>In section (1) I introduce the controversy about MetS, in section (2) I offer some remarks about medical definitions and their ‘causal import’, and in section (3) I suggest that the different attitudes towards the definition of MetS are relevant to evaluating their explicative power.</p></abstract>

Causality and Modelling in the Sciences: Introduction
<abstract><title style='display:none'>Abstract</title><p>The advantage of examining causality from the perspective of modelling is that it puts us naturally closer to the practice of the sciences. This means being able to set up an interdisciplinary dialogue that contrasts and compares modelling practices in different fields, say economics and biology, medicine and statistics, climate change and physics. It also means helping philosophers to look for questions that go beyond the narrow ‘what is causality’ or ‘what are the relata’, thus putting causality right at the centre of a complex crossroads: epistemology/methodology, metaphysics, politics/ethics. This special issue collects nine papers that touch upon various scientific fields, from systems biology to medicine to quantum mechanics to economics, and different questions, from explanation and prediction to the role of both true and false assumptions in modelling.</p></abstract>

Models in Systems Medicine
<abstract><title style='display:none'>Abstract</title><p>Systems medicine is a promising new paradigm for discovering associations, causal relationships and mechanisms in medicine. But it faces some tough challenges that arise from the use of big data: in particular, the problem of how to integrate evidence and the problem of how to structure the development of models. I argue that objective Bayesian models offer one way of tackling the evidence integration problem.
I also offer a general methodology for structuring the development of models, within which the objective Bayesian approach fits rather naturally.</p></abstract>

When are Purely Predictive Models Best?
<abstract><title style='display:none'>Abstract</title><p>Can purely predictive models be useful in investigating causal systems? I argue “yes”. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in the social and field sciences such success can be achieved only by purely predictive models, not by ones drawn from theory. The attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds: neither prediction nor explanation. Best, then, to go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.</p></abstract>

Are Model Organisms Theoretical Models?
<abstract><title style='display:none'>Abstract</title><p>This article compares the epistemic roles of theoretical models and model organisms in science, and specifically the role of non-human animal models in biomedicine. Much of the previous literature on this topic shares the assumption that animal models and theoretical models have a broadly similar epistemic role: that of indirect representation of a target through the study of a surrogate system.
Recently, <xref ref-type="bibr" rid="j_disp-2017-0015_ref_018_w2aab3b7b2b1b6b1ab1ac18Aa">Levy and Currie (2015)</xref> have argued that model organism research and theoretical modelling differ in the justification of model-to-target inferences, such that a unified account based on the widely accepted idea of modelling as indirect representation does not apply equally to both. I defend a similar conclusion, but argue that the distinction between animal models and theoretical models does not always track a difference in the justification of model-to-target inferences. Case studies of the use of animal models in biomedicine are presented to illustrate this. However, Levy and Currie’s point can be argued for in a different way. I argue for the following distinction. Model organisms (and other concrete models) function as surrogate sources of evidence, from which results are transferred to their targets by empirical extrapolation. By contrast, theoretical modelling does not involve such an inductive step. Rather, theoretical models are used for drawing conclusions from what is already known or assumed about the target system. Codifying assumptions about the causal structure of the target in external representational media (e.g. equations, graphs) allows one to apply explicit inferential rules to reach conclusions that could not be reached with unaided cognition alone (cf. <xref ref-type="bibr" rid="j_disp-2017-0015_ref_015_w2aab3b7b2b1b6b1ab1ac15Aa">Kuorikoski and Ylikoski 2015</xref>).</p></abstract>

What is the Problem with Model-based Explanation in Economics?
<abstract><title style='display:none'>Abstract</title><p>The question of whether the idealized models of theoretical economics are explanatory has been the subject of intense philosophical debate. It is sometimes presupposed that either a model provides the actual explanation or it does not provide an explanation at all.
Yet two sets of issues are relevant to the evaluation of model-based explanation: what conditions a model should satisfy in order to count as explanatory, and whether the model satisfies those conditions. My aim in this paper is to unpack this distinction and show that separating the first set of issues from the second is crucial to an accurate diagnosis of the distinctive challenges that economic models pose. Along the way I sketch a view of model-based explanation in economics that focuses on the role that non-empirical and empirical strategies play in increasing confidence in the adequacy of a given model-based explanation.</p></abstract>

Causal Concepts Guiding Model Specification in Systems Biology
<abstract><title style='display:none'>Abstract</title><p>In this paper I analyze the process by which modelers in systems biology arrive at an adequate representation of the biological structures thought to underlie data gathered from high-throughput experiments. Contrary to views that causal claims and explanations are rare in systems biology, I argue that in many studies of gene regulatory networks modelers aim at a representation of causal structure. In addressing modeling challenges, they draw on assumptions informed by theory and pragmatic considerations in a manner that is guided by an interventionist conception of causal structure. While doubts have been raised about the applicability of this notion of causality to complex biological systems, it is here seen to be an adequate guide to inquiry.</p></abstract>

Causality and the Modeling of the Measurement Process in Quantum Theory
<abstract><title style='display:none'>Abstract</title><p>In this paper we provide a general account of the causal models which attempt to provide a solution to the famous measurement problem of Quantum Mechanics (QM).
We will argue that, leaving aside instrumentalism (which restricts the physical meaning of QM to the algorithmic prediction of measurement outcomes), the many interpretations found in the literature can be distinguished by the way they model the measurement process, either in terms of the <italic>efficient cause</italic> or in terms of the <italic>final cause</italic>. We will discuss and analyze why both ‘final cause’ and ‘efficient cause’ models face severe difficulties in solving the measurement problem. In contradistinction to these schemes, we will present a new model based on the <italic>immanent cause</italic> which, we will argue, provides an intuitive understanding of the measurement process in QM.</p></abstract>

The Virtual and the Real
<abstract><title style='display:none'>Abstract</title><p>I argue that virtual reality is a sort of genuine reality. In particular, I argue for virtual digitalism, on which virtual objects are real digital objects, and against virtual fictionalism, on which virtual objects are fictional objects. I also argue that perception in virtual reality need not be illusory, and that life in virtual worlds can have roughly the same sort of value as life in non-virtual worlds.</p></abstract>

and Liars
<abstract><title style='display:none'>Abstract</title><p>Jamie Tappenden was one of the first authors to entertain the possibility of a common treatment for the Liar and the Sorites paradoxes. In order to deal with these two paradoxes he proposed using the Strong Kleene semantic scheme. This strategy left unexplained our tendency to regard as true certain sentences which, according to this semantic scheme, should lack truth value. Tappenden tried to solve this problem by appealing to a new speech act, articulation. Unlike assertion, which implies truth, articulation implies only non-falsity.
In this paper I argue that Tappenden’s strategy cannot be successfully applied to truth and the Liar.</p></abstract>

Naïve Realism and the Conception of Hallucination as Non-Sensory Phenomena
<abstract><title style='display:none'>Abstract</title><p>In defence of naïve realism, Fish has advocated an eliminativist view of hallucination, according to which hallucinations lack visual phenomenology. Logue, and Dokic and Martin, have developed the eliminativist view in different ways. Logue claims that hallucination is a non-phenomenal, perceptual representational state. Dokic and Martin maintain that hallucinations consist in the confusion of monitoring mechanisms, which generates an affective feeling in the hallucinating subject. This paper aims to critically examine these views of hallucination. In doing so, I shall point out what theoretical requirements are imposed on naïve realists who characterize hallucinations as non-visual-sensory phenomena.</p></abstract>

De Se Beliefs, Self-Ascription, and Primitiveness
<abstract><title style='display:none'>Abstract</title><p>De se beliefs typically pose a problem for propositional theories of content. The Property Theory of content tries to overcome the problem of de se beliefs by taking properties to be the objects of our beliefs. I argue that the concept of self-ascription plays a crucial role in the Property Theory while remaining virtually unexplained. I then offer different ways of illuminating that concept and argue that the most common ones are either circular, question-begging, or epistemically problematic. Finally, I argue that only a primitive understanding of self-ascription is viable. Self-ascription is the relation in which subjects stand to the properties that they believe themselves to have.
As such, self-ascription must be primitive if it is to do justice to the characteristic features of de se beliefs.</p></abstract>

Alternative Possibilities, Volitional Necessities, and Character Setting
<abstract><title style='display:none'>Abstract</title><p>Conventional wisdom suggests that the power to do otherwise is necessary for being morally responsible. While much of the literature on alternative possibilities has focused on Frankfurt’s argument against this claim, I focus instead on one of Dennett’s (1984) arguments against it. This argument appeals to cases of volitional necessity rather than cases featuring counterfactual interveners. van Inwagen (1989) and Kane (1996) appeal to the notion of ‘character setting’ to argue that these cases do not show that the power to do otherwise is unnecessary for moral responsibility. In this paper, I argue that their character-setting response is unsuccessful.</p></abstract>