VOL. 20, NO. 1, 2024
RESEARCH ARTICLES
Nathaniel Gan
Article 1 | Pages: 1-23 | Abstract | DOI: 10.31820/ejap.20.1.1
Call the epistemological grounds on which we rationally should determine our ontological (or alethiological) commitments regarding an entity its arbiter of existence (or arbiter of truth). It is commonly thought that arbiters of existence and truth can be provided by our practices. This paper argues that such views have several implications: (1) the relation of arbiters to our metaphysical commitments consists in indispensability, (2) realist views about a kind of entity should take the kinds of practices providing that entity’s arbiters to align with respect to their metaphysical dependencies, (3) if realists take a kind of practice to provide grounds on which to affirm the existence of a kind of entity, they should turn to those same grounds when seeking to provide an epistemology of the relevant domain.
Cristina Nencha
Article 2 | Pages: 31-54 | Abstract | DOI: 10.31820/ejap.20.1.2
L.A. Paul calls “deep” the kind of essentialism according to which the essential properties of objects are determined independently of context. Deep essentialism opposes “shallow essentialism”, of which David Lewis is said to be a prominent advocate. Paul argues that standard forms of deep essentialism face a range of issues (mainly based on an interpretation of Quinean skepticism) that shallow essentialism does not. However, Paul claims, shallow essentialism eliminates the very heart of what motivates essentialism, so it is better to be deep than shallow. Accordingly, she proposes a novel account of essentialism which, while attempting to preserve some of the advantages of shallow essentialism over the classical forms of deep essentialism, can still be deemed deep.
In this paper, I compare Paul’s proposal for a kind of deep essentialism with Lewis’s account, as it is presented by Paul. My aim is to show that the differences between the two approaches are not as significant as Paul takes them to be, and that Paul’s account can be taken to be deeper than Lewis’s only at the cost of sacrificing the very idea at the bottom of deep essentialism. This might be taken to suggest that, if Paul is correct in asserting that shallow essentialism is better equipped to address some skeptical challenges, but that it is generally preferable to be deep rather than shallow, then Lewis’s account should be re-evaluated, since, shallow as it may be, it might be deeper than it looks.
Ryo Tanaka
Article 3 | Pages: 55-85 | Abstract | DOI: 10.31820/ejap.20.1.3
In this paper, using Mark Schroeder’s (2008a) expressivist semantic framework for normative language as a case study, I will identify difficulties that even an expressivist semantic theory capable of addressing the Frege-Geach problem will encounter in handling the logical possibility of moral dilemmas. To this end, I will draw on a classical puzzle formulated by McConnell (1978): the logical possibility of moral dilemmas conflicts with some of the prima facie plausible axioms of standard deontic logic, including the axiom that obligation implies permission. On the tentative assumption that proponents of ethical expressivism should be generally committed to securing the logical possibility of moral dilemmas in their semantic theories, I will explore whether and how expressivists can successfully invalidate the principle that obligation implies permission within the framework developed by Schroeder. The case study eventually reveals that this can indeed be a hard task for expressivists. Generalizing from the case study, I will suggest that the source of the difficulty ultimately lies in the mentalist assumption of the expressivist semantic project that the logico-semantic relations exhibited by normative sentences should be modeled in terms of the psychological attitudes that speakers express by uttering them. My final goal will be to show that the difficulty expressivists face in dealing with the logical possibility of moral dilemmas is a reflection of the more general problem that their commitment to the mentalist assumption prevents them from flexibly adopting or dropping axioms in their semantic theories to get the right technical results.
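As a rough illustration of the conflict mentioned above (a standard textbook reconstruction, not drawn from the article itself): write O and P for the obligation and permission operators, let p be any proposition, and read permission as the dual of obligation, so that Pp is defined as ¬O¬p. A moral dilemma of the form Op ∧ O¬p then clashes directly with the axiom Op → Pp:

\[
Op \land O\lnot p,\qquad Op \rightarrow Pp,\qquad Pp \equiv \lnot O\lnot p \;\;\vdash\;\; \lnot O\lnot p \land O\lnot p,
\]

a contradiction, which is why securing the logical possibility of moral dilemmas requires invalidating or weakening that axiom.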
Federico Burdan
Article 4 | Pages: 87-111 | Abstract | DOI: 10.31820/ejap.20.1.4
Can addiction be credibly invoked as an excuse for moral harms secondary to particular decisions to use drugs? This question raises two distinct sets of issues. First, there is the question of whether addiction is the sort of consideration that could, given suitable assumptions about the details of the case, excuse or mitigate moral blameworthiness. Most discussions of addiction and moral responsibility have focused on this question, and many have argued that addiction excuses. Here I articulate what I take to be the best argument for this view, based on the substantial difficulty that people with severe addiction experience in controlling drug-related behavior. This, I argue, may in some cases be sufficient to ground a mitigating excuse, given the way in which addiction undermines agents’ responsiveness to relevant moral reasons to do otherwise. Much less attention has been devoted to a second set of issues that critically affect the possibility of applying this mitigating excuse in particular cases, derived from the ambivalent nature of agential control in addiction. In order to find a fitting response to moral harm, the person with the right standing to blame must make a judgment about the extent to which the agent possessed certain morally relevant capacities at the time of the act. In practice, this will often prove tremendously difficult to assess. The ethical challenge for the person with the right standing to blame is fundamentally one of making a judgment about matters that seem underdetermined by the available evidence.
Miloš Kosterec
Article 5 | Pages: 113-130 | Abstract | DOI: 10.31820/ejap.20.1.5
The paper analyses the validity of arguments supporting the assumption of a constant universe of individuals over all possible worlds within Transparent Intensional Logic. These arguments, proposed by Tichý, enjoy widespread acceptance among researchers working within the system. However, upon closer examination, this paper demonstrates several weaknesses in the argumentation, suggesting that it remains possible to incorporate a variable universe of individuals into models within this system.
Joshua Taccolini
Article 6 | Pages: 131-154 | Abstract | DOI: 10.31820/ejap.20.1.6
Normative error theorists aim to defend an error theory according to which normative judgments ascribe normative properties, and such properties, including reasons for belief, are never instantiated. Many philosophers have raised objections to defending a theory which entails that we cannot have reason to believe it. Spencer Case objects that error theorists simply cannot avoid self-defeat. Alternatively, Bart Streumer argues that we cannot believe normative error theory but that, surprisingly, this helps its advocates defend it against these objections. I think that if Streumer’s argument is successful, it provides error theorists with an escape from Case’s self-defeat objection. However, I build upon and improve Case’s argument to show that we could never even successfully defend normative error theory, whether we can believe it or not. So, self-defeat remains. I close by offering some reasons for thinking that our inability to defend normative error theory means that we should reject it, which, in turn, would mean that it is false.
Erich H. Rast
Article 7 | Pages: 155-179 | Abstract | DOI: 10.31820/ejap.20.1.7
A light form of value realism is defended according to which objective properties of comparison objects make value comparisons true or false. If one object has such a better-making property and another lacks it, this is sufficient for the truth of a corresponding value comparison. However, better-making properties are only necessary and usually not sufficient parts of the justifications of value comparisons. The account is not reductionist; it remains consistent with error-theoretic positions and the view that there are normative facts.
Tomislav Bracanović
Article 8 | Pages: 181-204 | Abstract | DOI: 10.31820/ejap.20.1.8
Integrative bioethics is a predominantly Croatian school of thought whose proponents claim to have initiated an innovative and recognizably European concept of bioethics capable of dealing with the most pressing issues of our time. In this paper, a critical overview of the integrative bioethics project is undertaken to show that it is, in fact, a poorly articulated and arguably pseudoscientific enterprise fundamentally incapable of dealing with practical challenges. The first section provides the basic outline of integrative bioethics: its historical development, major proponents, geographical context and philosophical foundations. The second section considers its main theoretical shortcomings: the absence of normativity, collapse into ethical relativism and frequent intratheoretical inconsistencies. The third section addresses the issue of typically pseudoscientific features of integrative bioethics: verbose language, constant self-glorification and isolation from mainstream science. The fourth and concluding section of the paper argues that integrative bioethics (regarding its quality, reception and identity) does not merit the “European bioethics” label and is better described as a blind alley of European bioethics.
Siddharth S
Article 9 | Pages: 205-229 | Abstract | DOI: 10.31820/ejap.20.1.9
Panpsychism, the view that phenomenal consciousness is present at the fundamental physical level, faces the subject combination problem: the question of whether (and how) subjects of experience can combine. While various solutions to the problem have been proposed, these often seem to be based on a misunderstanding of the threat posed by the subject combination problem. An example is the exchange in this journal between Siddharth (2021) and Miller (2022). Siddharth argued that the phenomenal bonding solution failed to address the subject combination problem, while Miller responded that Siddharth had (among other things) misunderstood the problem that the phenomenal bonding solution was trying to solve. In this paper, I seek to clarify the real subject combination problem facing panpsychism, and on this basis, evaluate the various attempts at defending the possibility of subject composition.
Kristján Kristjánsson
Article 10 | Pages: 231-250 | Abstract | DOI: 10.31820/ejap.20.1.10
This article swims against the stream of academic discourse by answering the title question in the negative. This contrarian answer is not meant to undermine the view that kindness is a good thing; neither is it, however, an example of a mere philosophical predilection for word play. I argue that understanding kindness as a virtue obscures rather than enlightens, because it glosses over various distinctions that help us make sense of moral language and achieve “virtue literacy”. I survey some of the relevant psychological literature before moving on to philosophical sources. I subsequently delineate the alternative ways in which coherent virtue ethicists can say everything that they want to say about kindness by using much better entrenched and less bland terms. I offer a view of kindness as a cluster concept in the same sense as the Wittgensteinian concept of a game. Finally, I elicit some implications of this view for practical efforts at character education.
BOOK REVIEWS
Martina Blečić
Pages: 25-29 | BOOK REVIEW
Umberto Galimberti, L’ETICA DEL VIANDANTE, Feltrinelli, 2023. ISBN: 9788807493645 (paperback); ISBN: 9788858858530 (e-book). Paperback: 20,90 EUR; e-book: 12,99 EUR.