
The Openproof Project

The Openproof project at Stanford's Center for the Study of Language and Information (CSLI) is concerned with the application of software to problems in logic. Since the early 1980s we have been developing applications in logic education that are both innovative and effective. The development of these courseware packages has in turn informed and influenced our research agenda.

We are currently engaged in a project to understand the difficulties that students encounter when learning logic. Our approach to this task is to use data mining techniques on a large corpus of student work that we have gathered through our Internet-based grading service over the past ten years. The corpus currently consists of over 2.75 million submissions of work from more than 55,000 individual students.

A second project involves the investigation of the logics of diagrammatic and heterogeneous reasoning. Logic has traditionally been concerned with deduction using information expressed as sentences. In this project we are concerned with developing formal and informal systems for logical reasoning with diagrams alone, and in heterogeneous contexts where diagrams and sentences are used together to represent information about a reasoning task.

People

Pease, Emma

Emma Pease is a System Administrator for the Openproof project, an Associate Editor for the CSLI Publications project, and an Assistant Editor for the Stanford Encyclopedia of Philosophy project.  

Events

Cognition & Language Workshop - Dave Kleinschmidt

ROBUST LANGUAGE COMPREHENSION: Recognize the familiar, generalize to the similar, and adapt to the novel

Abstract:
Anyone who has used an artificial speech recognition system knows that robust speech perception remains a difficult and unsolved problem, yet one which human listeners achieve nearly effortlessly. Speech perception requires that the listener map continuous, variable acoustic cues to underlying linguistic units like phonetic categories and words. One of the substantial challenges that human listeners have to tackle is the lack of invariance: the fact that there is no single set of acoustic cue values which reliably indicates the presence of a particular linguistic structure. The lack of invariance arises in large part because the relationship between cues and linguistic units varies substantially from one situation to another, reflecting differences between individual talkers, registers, dialects, accents, etc.: one talker's /p/ may be more like another talker's /b/.
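
To make the lack of invariance concrete, the short Python sketch below shows how the very same acoustic cue can demand opposite interpretations for different talkers. The cue is voice onset time (VOT), and every mean, standard deviation, and token value is an illustrative assumption, not data from the talk.

    from scipy.stats import norm

    # Two talkers with shifted VOT (voice onset time) distributions for /b/ and /p/.
    # All means and standard deviations are illustrative assumptions, in milliseconds.
    talkers = {
        "A": {"b": norm(loc=5.0, scale=8.0), "p": norm(loc=45.0, scale=8.0)},
        "B": {"b": norm(loc=25.0, scale=8.0), "p": norm(loc=65.0, scale=8.0)},
    }

    vot = 35.0  # one ambiguous token
    for name, cats in talkers.items():
        lik_b, lik_p = cats["b"].pdf(vot), cats["p"].pdf(vot)
        p_p = lik_p / (lik_b + lik_p)  # posterior under equal category priors
        print(f"Talker {name}: P(/p/ | VOT={vot} ms) = {p_p:.2f}")

Run as written, the same 35 ms token comes out as near-certain /p/ for talker A and near-certain /b/ for talker B: no fixed cue-to-category mapping works for both.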

In this talk I will present a computational framework, the ideal adapter, which characterizes the computational problem posed by the lack of invariance and how it might be solved. This framework naturally suggests three ways that listeners might achieve robust speech perception in the face of the lack of invariance: recognition of familiar situations/talkers, generalization to new situations/talkers similar to those encountered before, and rapid adaptation to novel situations/talkers. All three of these strategies have been observed in the empirical literature, bearing out a range of qualitative predictions of the ideal adapter framework and quantitative predictions of an implemented model within this framework.
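
As a rough illustration of the adaptation strategy, the sketch below updates a listener's belief about a novel talker's /p/ category one token at a time. It assumes, as a simplification, a Gaussian category with known within-category variance and a conjugate Gaussian prior on the category mean; this is in the spirit of the ideal adapter's belief-updating story, not a reproduction of the implemented model from the talk.

    import math

    # Belief about a novel talker's /p/ mean VOT: Gaussian prior, updated token
    # by token with a conjugate normal-normal step (known observation variance).
    # All numeric values are illustrative assumptions.
    prior_mean, prior_var = 60.0, 100.0  # prior on the category mean (ms)
    noise_var = 64.0                     # assumed within-category variance

    def update(mean, var, token):
        """Posterior over the category mean after observing one cue value."""
        post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
        post_mean = post_var * (mean / var + token / noise_var)
        return post_mean, post_var

    mean, var = prior_mean, prior_var
    for token in [48.0, 52.0, 45.0, 50.0]:  # this talker's /p/ runs low
        mean, var = update(mean, var, token)
        print(f"after VOT={token:4.1f} ms: mean={mean:5.1f}, var={var:6.1f}")

    # The posterior predictive for the next token folds in residual uncertainty.
    sd = math.sqrt(var + noise_var)
    print(f"adapted /p/ predictive: N({mean:.1f}, {sd:.1f}^2)")

After a handful of tokens the believed category mean has shifted toward this talker's low VOT values, which is "adapt to the novel" in miniature; recognition and generalization correspond, roughly, to reusing or interpolating previously learned talker-specific beliefs instead of starting from the generic prior.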

Finally, this framework provides a unifying perspective on flexibility in language comprehension across different levels, and it ties language comprehension together with other, more general perceptual processes that show similar adaptive properties. These connections point to future directions for investigating how the kinds of computations necessary for robust speech perception might be carried out algorithmically and implemented in neural mechanisms.