Friday
10:00–11:15 a.m.
Toby Meadows (UCI):
Was Gödel wrong?

Abstract: No. In fact, Woodin has a pleasing generalization of the second incompleteness theorem to arbitrary strong logics. But he also observes that the generic multiverse appears to get around this. I'm going to talk about what is happening here and introduce a class of logics that share this apparent Gödelian dodge. This material yields some intriguing insights into the limits of linguistic techniques such as definition and interpretation.
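
For orientation, the classical second incompleteness theorem states, in one standard formulation (a textbook statement, not drawn from the talk), that for any consistent, recursively axiomatizable theory $T$ extending elementary arithmetic,

    \[ T \nvdash \mathrm{Con}(T), \]

i.e., $T$ cannot prove its own consistency. Woodin's result generalizes this pattern from first-order logic to arbitrary strong logics.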

11:15–11:30 a.m. Coffee break

11:30 a.m.–12:45 p.m.
Baptiste Mélès (Université de Lorraine):
How programming languages rehabilitate morphology

Abstract: The word "syntax" has two slightly different meanings in linguistics and in the formal sciences. On the one hand, in linguistics, "syntax" is the part of grammar that describes how words can be combined into sentences. In the grammar of many natural languages (e.g. Indo-European languages and Japanese, but not Chinese), it is opposed to morphology, which describes the formation and variation of words. On the other hand, in the study of formal languages---those of logic, mathematics, and computer science---"syntax" is taken as a synonym for grammar in general. This accords with the Chomskyan viewpoint, which reduces morphology to syntax. Against this latter view, we will argue that 1) the notion of word structure and variation has reappeared in object-oriented programming languages, making the distinction between morphology and syntax relevant again; 2) this opens the way to using some traditional linguistic categories---e.g. pronouns, derivation, analogy---to describe features of formal languages and account for their expressiveness.
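
As a minimal illustration of claim 1) (our own sketch, not drawn from the talk), consider how an object-oriented language gives an identifier word-like internal structure and variant forms:

    # Illustrative sketch (not from the talk): objects behave like words,
    # with a stem and systematically derived variant forms.

    class Noun:
        def __init__(self, stem):
            # 'self' plays a pronoun-like role, referring back to
            # whichever object is currently under discussion.
            self.stem = stem

        def plural(self):
            # Derivation: a new word form built from the stem.
            return self.stem + "s"

    word = Noun("language")
    print(word.plural())  # prints "languages"

Here word.plural() varies a single word-like unit rather than combining independent tokens, which is the morphology-like behavior at issue.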

12:45–2:00 p.m. Lunch

2:00–3:15 p.m.
Kai Wehmeier (UCI):
Dynamic Predicate Logic revisited

Abstract: We take another look at Groenendijk and Stokhof's Dynamic Predicate Logic (DPL) and argue that the ways it diverges from traditional logics have been misidentified. Contrary to received wisdom, DPL, when its syntax and semantics are suitably reformulated, is truth-conditional and does not allow variable-binding beyond syntactic scope. That said, it turns out to be a many-valued logic and to treat formulas as quantifiers.
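
To fix ideas with the standard example from the DPL literature (not specific to this talk): the discourse "A man walks in the park. He whistles." is rendered as

    \[ \exists x\,(\mathrm{man}(x) \wedge \mathrm{walk}(x)) \wedge \mathrm{whistle}(x), \]

where, on the usual description of Groenendijk and Stokhof's semantics, the final occurrence of $x$ is bound by the existential even though it lies outside the quantifier's syntactic scope. It is exactly this received description of DPL's binding behavior that the talk calls into question.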

3:15–3:30 p.m. Coffee break

3:30–4:45 p.m.
Jeremy Heis (UCI):
Russell’s If-Thenism

Abstract: In the Principles of Mathematics (1903), Bertrand Russell held a philosophy of mathematics that has since come to be called if-thenism: roughly, the sentences of pure mathematics are (quantified) conditionals of the form "If axioms, then theorem." This position is hard to understand for a number of reasons that I will address in this talk. First, Russell’s if-thenism seems in danger of collapsing into a position we might call "set-theoretic platonism." Second, his if-thenism seems on the surface to be simply incompatible with his logicist definition of numbers as classes of equinumerous classes. Third, one might object, as Sébastien Gandon has recently done, that the details of the philosophy of geometry in the Principles are incompatible with if-thenism. Fourth, one might question why Russell would prefer if-thenism, which is plausibly read as what we now call eliminative structuralism, over a full-blooded non-eliminative variety of structuralism (especially since he had no truck with empiricist scruples over abstracta). I’ll argue that the first three worries can be (partially) addressed, and I’ll gesture toward his answer to the fourth.
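
In schematic form (a common reconstruction, not necessarily Russell’s own notation), if-thenism reads a sentence of pure mathematics as a universally quantified conditional

    \[ \forall \bar{x}\,\big(\mathrm{Ax}(\bar{x}) \rightarrow \theta(\bar{x})\big), \]

where $\mathrm{Ax}(\bar{x})$ conjoins the axioms of the relevant theory with their primitive terms replaced by the variables $\bar{x}$, and $\theta(\bar{x})$ is the theorem under the same replacement.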

6:30 p.m. Dinner

Saturday
10:00–11:15 a.m.
Gerhard Heinzmann (Université de Lorraine):
The difficult status of complete induction in mathematics: imagination or thought experiment?

Abstract: My aim is to discuss different historical approaches to justifying the induction principle, and to ask whether, from a constructivist’s point of view, impredicativity can be avoided. In his argument against Russell, Poincaré uses transcendent imagination as a necessary tool to justify induction; Hilbert and Bernays use thought experiments; and Lorenzen, operative imagination.

Now, Poincaré’s definition of predicativity admits both an extensional interpretation, which was later technically elaborated by Feferman, and an intensional interpretation.

If predicativity is considered from an extensional perspective, complete induction possesses an irreducible impredicative character, even when it is treated not as an explicit definition but as an inductive one. A thought experiment transcending our operative abilities would then be a necessary tool for regarding induction as evident, and would constitute a sort of minimal platonism. This weakens Poincaré’s claim that impredicativity should be avoided because of its circular character.

If predicativity is considered from an intensional perspective, a purely operational and predicative justification of complete induction, using operative imagination (Lorenzen), would be possible.
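
The impredicativity at issue can be seen in the standard second-order characterization of the natural numbers (a textbook formulation, not specific to the talk):

    \[ N(n) \;:\Leftrightarrow\; \forall X\,\big[\,X(0) \wedge \forall m\,(X(m) \rightarrow X(m+1)) \rightarrow X(n)\,\big], \]

where the quantifier ranges over all properties $X$, including the very property $N$ being defined. This is the circularity Poincaré objected to.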

11:15–11:30 a.m. Coffee break

11:30 a.m.–12:45 p.m.
Jason Chen (UCI):
The Classification Turn of Descriptive Set Theory

Abstract: In this work I aim to provide a historical account of what I call the classification turn in descriptive set theory. The motivation for this project is an obvious gap in the literature: descriptive set theory began with the study of the analytic sets and their regularity properties (the perfect set property, the property of Baire, Lebesgue measurability). It soon pivoted, in Moscow, to the structure theory of the projective sets: uniformization, decomposition into unions of Borel sets, and so on. But a quick examination of the papers published in descriptive set theory today reveals (besides inner model theory) a great many papers on definable equivalence relations, along with other classification-adjacent areas and their applications. These papers not only depart drastically from the early French-Russian concerns but also, as a sociological observation, are frequently published in generalist mathematical journals.

One is then naturally led to pose the following questions: what took place between the 1920s and today that led to this change in topic? What was in the air, mathematically speaking, in the DST community back then? When and where did the descriptive study of equivalence relations arise? When did people realize that it could be applied fruitfully to areas of mathematics outside logic? In probing these questions, my goal is to give an account of this classification turn. I identify four factors that contributed non-trivially to the turn toward equivalence relations: 1. a fundamental metamathematical difficulty (and its later confirmation), 2. von Neumann's program to classify stochastic systems in terms of certain deterministic properties, 3. breakthroughs in the theory of Polish groups, orbit equivalence relations, and dichotomy theorems, and finally, 4. (the provocative claim) the political persecution of Luzin and the subsequent animosity from his students.
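
For reference, the technical notion around which the modern classification theory is organized is Borel reducibility (a standard definition, not drawn from the talk): for equivalence relations $E$ on $X$ and $F$ on $Y$, with $X$ and $Y$ Polish spaces,

    \[ E \leq_B F \;:\Leftrightarrow\; \text{there is a Borel } f : X \to Y \text{ with } x \mathrel{E} x' \leftrightarrow f(x) \mathrel{F} f(x') \text{ for all } x, x' \in X. \]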

12:45–2:00 p.m. Lunch

2:00–3:15 p.m.
Andrew Arana (Université de Lorraine):
Poincaré, uniformisation and purity

Abstract: Roughly, a solution to a problem, or a proof of a theorem, is "pure" if it draws only on what is "close" or "intrinsic" to that problem or theorem. For example, a complex-analytic proof of an arithmetic theorem like the prime number theorem is often judged to be impure. Purity is an ideal of proof: mathematicians deem pure proofs valuable, even if impure proofs are also deemed valuable for different reasons. But how should this ideal be understood? A traditional view takes purity to concern the crossing of branches of mathematics: a theorem belonging to one branch has an impure proof if that proof involves other branches. But modern mathematics does not respect branches, and Poincaré’s uniformisation theorem is a case in point. We will discuss how this case yields a more robust formulation of purity, one that supports modern mathematical practice.
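
For reference, the theorem in question states (in its standard modern formulation) that

    \[ \text{every simply connected Riemann surface is biholomorphic to } \mathbb{D},\ \mathbb{C},\ \text{or } \widehat{\mathbb{C}}. \]

Its classical proofs draw on complex analysis, potential theory, and topology at once, which is what makes it a natural test case for branch-based accounts of purity.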

3:15–4:00 p.m. General Discussion
