Contemporary computability theory has been applied to the philosophical analysis of scientific methodology in at least two distinct ways. First, Kevin Kelly proposes using tools from mathematical topology and effective descriptive set theory to evaluate scientific methods: representing data as infinite strings of numbers and scientific methods as total functions from data to conjectures, Kelly's project attempts to determine the extent to which methods can be guaranteed to converge to the truth. Second, James W. McAllister posits that observed data is causally determined by the combination of an underlying explanatory phenomenon and random noise; the result of these factors is a random string of data, a maximally efficient source of information about the world. In this talk, Schatz will discuss a hitherto unaddressed incompatibility between these approaches: if empirical data is random in the sense of Martin-Löf, then no refutable, highly specific hypothesis can be true. He will then explore ways of avoiding this undesirable result, focusing in particular on adopting alternative definitions of algorithmic randomness for empirical data and on modifying the underlying notion of probability.


© UC Irvine School of Social Sciences - 3151 Social Sciences Plaza, Irvine, CA 92697-5100 - 949.824.2766