I'm a philosopher specializing in language, with interests in metaphysics, formal semantics, philosophy of mind, cognitive science, and Eastern philosophy. In the (not so) distant past, I did work in the history of philosophy, the history of ideas, and classical studies. I'm the holder of a Humboldt Fellowship for Experienced Researchers at the Institute of Philosophy of Freie Universität Berlin, and a laureate of the Fondation des Treilles. I did my graduate work across Italy (San Raffaele), France (Institut Jean Nicod, EHESS), and the United States (Harvard, MIT).
Below are some recent publications and drafts. Topics I'm exploring include:
- the zero-shot interpretation of neologisms;
- the phenomenal correlates of meaning apprehension in speech comprehension;
- the interaction between quotation and focus in sign language;
- the relationship between semantics and metaphysics;
- brands of fictionalism in the metatheory of natural language linguistics;
- parallels and non-parallels between musical and linguistic meaning.
Lexical innovations (e.g., nonce words and zero-derivations coined on the fly by a speaker) seem to bear semantic content. Yet, such words cannot be attributed semantic content based on the lexical conventions available to the language, since they are not part of its lexicon. This is in tension with the commonplace view that lexical semantic content is constituted by lexical conventions. In recent work, Josh Armstrong has argued that this tension can be alleviated in two ways. The first is to stick to the conventionalist assumption and deny that lexical innovations bear semantic content. The second is to dynamicize the conventionalist assumption, i.e., argue that lexical innovations trigger a rapid update of the lexical conventions of the language and receive semantic content via the updated lexical conventions. Armstrong lays out and defends the second option. I propose a third way: the view that the interpretation of lexical innovations relies on an algorithm which generates a hypothesis about the semantic properties projected by the unfamiliar occurrence and feeds them into sentence meaning with no prior update of the lexical conventions of the language.
Natural language appears to allow the ascription of properties of numeral symbols to the denotation of number referring phrases. The paper describes the phenomenon and presents two alternative explanations for why it obtains. The first combines an intuitive semantics for number referring phrases with a predicate-shifting mechanism; the second assigns number referring phrases a structured denotation consisting of two parts: a mathematical object (the number) and a contextually determined numeral symbol. Some preliminary observations in favor of the second analysis are offered.
Consider the following sentence: "Mary meditated on the sentence 'Bill is a good friend' and concluded that he was a good friend". It is standardly assumed that in sentences of this sort, containing so‐called "closed" quotations, the expressions occurring between quotation marks are mentioned and do not take their ordinary referents. The quoted NP "Bill" refers, if anything, to the name 'Bill', not to the individual Bill. At the same time, the pronoun "he", apparently anaphoric on quoted "Bill", refers to the individual Bill. The case seems thus to invalidate the intuitive principle that pronouns anaphoric on referential expressions inherit their reference from their antecedents. The paper formulates the argument, argues that sentences exhibiting the described pattern do not constitute evidence against the intuitive principle, and proposes an alternative account of the anaphoric relation involved.
According to mainstream linguistic phonetics, speech can be modeled as a string of discrete sound segments or “phones” drawn from a universal phonetic inventory. Recent work has argued that a mature phonetics should refrain from theorizing about speech and speech processing using sound segments, and that the phone concept should be eliminated from linguistic theory. The paper lays out the tenets of the phone methodology and evaluates its prospects in light of the eliminativist arguments. I claim that the eliminativist arguments fail to show that the phone concept should be eliminated from linguistic theory.
The combination of panpsychism and priority monism leads to priority cosmopsychism, the view that the consciousness of individual sentient creatures is derivative of an underlying cosmic consciousness. It has been suggested that contemporary priority cosmopsychism parallels central ideas in the Advaita Vedānta tradition. The paper offers a critical evaluation of this claim. It argues that the Advaitic account of consciousness cannot be characterized as an instance of priority cosmopsychism, points out the differences between the two views, and suggests an alternative positioning of the Advaitic canon within the contemporary debate on monism and panpsychism.
According to Originalism, word types are non-eternal continuants which are individuated by their causal-historical lineage and have a unique possible time of origination. This view collides with the intuition that individual words can be added to the lexicon of a language at different times, and generates other problematic consequences. The paper shows that such undesired results can be accommodated without abandoning Originalism.
This paper presents the hypothesis that the representational repertoire underpinning our ability to process the lexical items of a natural language can be modeled as a system of mental files. To start, I clarify the basic phenomena that an account of lexical knowledge should be able to elucidate. Then, I propose to evaluate whether the mental files theory can be brought to bear on an account of the representational format of lexical knowledge by modeling mental words as recognitional files.
Word meaning played a somewhat marginal role in early contemporary philosophy of language, which was primarily concerned with the structural features of sentences and showed less interest in the nature of lexical representations. Nowadays, it is well established that the way we account for word meaning is bound to have a major impact in tipping the balance in favor of or against any given picture of the fundamental properties of human language. This entry provides an overview of the way issues related to lexical meaning have been explored in analytic philosophy and a summary of relevant research on the subject in neighboring scientific domains. Though the main focus of the entry is on philosophical problems, contributions from linguistics, psychology, neuroscience, and artificial intelligence are also considered.
Emma Borg has defined semantic minimalism as the thesis that the literal content of well-formed declarative sentences is truth-evaluable, fully determined by their lexico-syntactic features, and recoverable by language users with no need to access non-linguistic information. The task of this article is threefold. First, I shall raise a criticism of Borg's minimalism based on how speakers disambiguate homonymy. Second, I will explore some ways Borg might respond to my argument and maintain that none of them offers a conclusive reply to my case. Third, I shall suggest that in order for Borg's minimalism to accommodate the problem discussed in this paper, it should allow for semantically incomplete content and be converted into a claim about linguistic competence.
See the (incomplete) record on Publons.
Italian, English, French, German (beginner!), Latin, Ancient Greek.
Classically trained musician: guitar & concert flute.