Diane will give a practice talk (about 20 minutes) of our HLT-NAACL paper:
==================================================================
TITLE
-----
Predicting Emotion in Spoken Dialogue from Multiple Knowledge Sources
==================================================================
AUTHORS
-------
Kate Forbes-Riley and Diane Litman
==================================================================
ABSTRACT
--------
We examine the utility of multiple types of turn-level and contextual
linguistic features for automatically predicting student emotions in
human-human spoken tutoring dialogues. We first annotate student
turns in our corpus for negative, neutral and positive emotions. We
then automatically extract features representing acoustic-prosodic and
other linguistic information from the speech signal and associated
transcriptions. We compare the results of a variety of machine
learning experiments using different feature sets to predict the
annotated emotions. Our best performing feature set contains both
acoustic-prosodic and other types of linguistic features, extracted
from both the current turn and a context of previous student turns.
This feature set yields a prediction accuracy of 84.75%, a 44%
relative reduction in error over a baseline. Our
results suggest that the intelligent tutoring spoken dialogue system
we are developing can be enhanced to automatically predict and adapt
to student emotions.
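As a rough illustration of the kind of experiment the abstract
describes, here is a minimal sketch of turn-level emotion
classification. The feature names, the decision-tree classifier, and
the synthetic data are assumptions for illustration, not the setup
used in the paper; the last lines just work out the arithmetic behind
the reported numbers (a 44% relative error reduction at 84.75%
accuracy implies a baseline of roughly 72.8%).

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Hypothetical turn-level features, e.g. pitch mean/max, energy, duration.
    n_turns = 200
    X = rng.normal(size=(n_turns, 4))
    # Hypothetical labels: 0 = negative, 1 = neutral, 2 = positive.
    y = rng.integers(0, 3, size=n_turns)

    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.3f}")

    # Arithmetic behind the abstract's numbers:
    # relative error reduction = (err_base - err_model) / err_base
    err_model = 1 - 0.8475             # 15.25% error at 84.75% accuracy
    err_base = err_model / (1 - 0.44)  # ~27.2% error, i.e. ~72.8% baseline accuracy
    print(f"implied baseline accuracy: {1 - err_base:.3f}")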
==================================================================
On April 7, Jan will summarize the AAAI Spring Symposium on
Exploring Attitude and Affect in Text: Theories and Applications.
==================================================================
Postdoctoral Research Associate Position in Spoken Dialogue /
Intelligent Tutoring Systems
==================================================================
Regina Barzilay will be a guest speaker in the Department of Computer
Science colloquium series. She will be here on both 4/1 and 4/2.

NOTE: The talk is on Thurs. afternoon (4/1), not Friday morning.
>
> What: Learning to Model Text Structure
> When: 4/1 at 3:30pm, refreshments at 3
> Where: SENSQ 5317/9
>
> Talk abstract:
>
> The natural language processing community has struggled for years to
> develop computational models of text structure. Such models are essential
> both for interpretation of human-written text and for evaluation of
> machine-generated text. Applications such as text summarization and
> machine translation would greatly benefit from such models.
>
> In this talk, I will present our first steps towards learning to model
> text structure. I will describe two models that are induced from a large
> collection of unannotated texts. The first model captures the notion of
> text cohesion by considering connectivity patterns characteristic of
> well-formed texts. These patterns are inferred from a matrix that
> combines distributional and syntactic information about text entities. The
> second model captures the content structure of texts within a specific
> domain, in terms of the topics the texts address and the order in which
> these topics appear. I will present an effective method for learning
> content models, utilizing a novel adaptation of algorithms for Hidden
> Markov Models. To conclude my talk, I will show how these text models can
> be effectively integrated into natural language generation and
> summarization systems.
>
> This is joint work with Mirella Lapata and Lillian Lee.
>
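As a rough sketch of the first model the abstract describes, the
snippet below builds a tiny entity-by-sentence grid and tallies the
role-transition patterns between adjacent sentences. The grid, the
role labels, and the entities are invented for illustration; the
actual model derives them from parsed text.

    from collections import Counter

    # Hypothetical grid for a four-sentence text (rows: entities, columns:
    # sentences; S = subject, O = object, X = other role, - = absent).
    grid = {
        "earthquake": ["S", "X", "-", "-"],
        "magnitude":  ["-", "S", "-", "-"],
        "crews":      ["-", "-", "S", "-"],
        "officials":  ["-", "-", "O", "S"],
    }

    # The distribution of role transitions between adjacent sentences is the
    # connectivity signature that separates cohesive from scrambled texts.
    transitions = Counter(
        pair
        for roles in grid.values()
        for pair in zip(roles, roles[1:])
    )
    total = sum(transitions.values())
    for pair, count in transitions.most_common():
        print(pair, round(count / total, 2))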
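And a similarly simplified sketch of the second model: sentences are
clustered into topic states, and topic-to-topic transitions are
estimated in the order the sentences appear, HMM-style. The one-pass
clustering, the add-one smoothing, and the toy corpus are assumptions
for illustration; the method in the talk reestimates topic-specific
language models iteratively.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [  # each document is an ordered list of sentences (toy data)
        ["an earthquake struck the region", "the magnitude was high",
         "rescue crews were dispatched", "officials reported casualties"],
        ["a strong earthquake hit the coast", "its magnitude reached seven",
         "rescue crews arrived quickly", "officials confirmed casualties"],
    ]

    sentences = [s for doc in docs for s in doc]
    X = TfidfVectorizer().fit_transform(sentences)

    k = 4  # number of topic states (a modeling choice)
    topics = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

    # Estimate topic-to-topic transition probabilities, add-one smoothed,
    # following each document's sentence order (HMM-style state transitions).
    trans = np.ones((k, k))
    start = 0
    for doc in docs:
        labels = topics[start:start + len(doc)]
        start += len(doc)
        for a, b in zip(labels, labels[1:]):
            trans[a, b] += 1
    trans /= trans.sum(axis=1, keepdims=True)
    print(np.round(trans, 2))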