Acoustic-prosodic recognition of emotion in speech / Chuchi S. Montenegro
By: Montenegro, Chuchi S [author]
Language: English
Description: 46, [45 unnumbered] leaves : color illustrations ; 28 cm + 1 DVD (4 3/4 in.)
Content type: text
Media type: unmediated
Carrier type: volume
Subject(s): Speech processing systems | Automatic speech recognition
Genre/Form: Academic theses
DDC classification: 006.454

Item type | Current location | Home library | Call number | Status | Date due | Barcode | Item holds
---|---|---|---|---|---|---|---
THESIS / DISSERTATION | GRADUATE LIBRARY | GRADUATE LIBRARY Theses/Dissertations | 006.454 M7646 2016 (Browse shelf) | Not for loan | | T1978 | |
Thesis (DIT) -- Cebu Institute of Technology - University, College of Computer Studies, March 2016
Emotion in speech is analyzed through the vocal behavior of its nonverbal aspects. The basic assumption is that there is a set of objectively measurable voice parameters, called the prosodic aspects of speech, which can be assessed through computerized acoustical analysis.
In this paper, I report results on recognizing emotional states (happy, sad, angry) from a corpus of short-duration utterances using 18 acoustic-prosodic features. Four experiments were conducted to correlate the different acoustic-prosodic features, which were then evaluated against four different classifiers using 10-fold cross-validation to estimate how accurately each classifier performs.
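The thesis itself is not reproduced here, but the pipeline the abstract describes (extract acoustic-prosodic features per utterance, then estimate classifier accuracy with 10-fold cross-validation) can be illustrated with a minimal, self-contained sketch. This is not the author's implementation: the specific features (short-time energy and zero-crossing rate, two common low-level acoustic descriptors) and the frame sizes are illustrative assumptions, not the 18 features used in the thesis.

```python
import math

def frame_features(signal, frame_len=400, hop=200):
    """Compute two simple acoustic descriptors per frame:
    short-time energy and zero-crossing rate (a rough voicing cue).
    These stand in for the richer acoustic-prosodic feature set."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation:
    each of the k folds serves as the test set exactly once."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

# Toy input: one second of a 440 Hz tone sampled at 16 kHz,
# standing in for a short-duration utterance.
sr = 16000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
feats = frame_features(tone)
folds = list(kfold_indices(len(feats), k=10))
```

In practice a classifier would be trained on each fold's training indices and scored on its test indices, with the 10 scores averaged to estimate accuracy, as the abstract describes.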