Сергей Сергеевич Марков. Охота на электроовец. Большая книга искусственного интеллекта
Society Press. Athens, Greece, pp. 109—114 // http://www.image.ece.ntua.gr/projects/physta/conferences/531.pdf

2452

Ortony A., Clore G. L., Collins A. (1988). The Cognitive Structure of Emotion. Cambridge, UK: Cambridge University Press // https://books.google.ru/books?id=Sp8FngEACAAJ

2453

Russell J. A. (1980). A Circumplex Model of Affect / Journal of Personality and Social Psychology, Vol. 39, No. 6, pp. 1161—1178 // https://doi.org/10.1037/h0077714

2454

Fontaine J. R. J., Scherer K. R., Roesch E. B., Ellsworth P. C. (2007). The World of Emotions is not Two-Dimensional / Psychological Science, Vol. 18 (12), pp. 1050—1057 // https://doi.org/10.1111/j.1467-9280.2007.02024.x

2455

McGinn C., Kelly K. (2018). Using the Geneva Emotion Wheel to Classify the Expression of Emotion on Robots / Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction // https://doi.org/10.1145/3173386.3177058

2456

Scherer K. R., Shuman V., Fontaine J. J. R., Soriano C. (2013). The GRID meets the Wheel: Assessing emotional feeling via self-report / Fontaine J. J. R., Scherer K. R., Soriano C. (2013). Components of emotional meaning: a sourcebook. Series in affective science. Oxford University Press // https://doi.org/10.13140/RG.2.1.2694.6406

2457

Scherer K. R. (2005). What are emotions? And how can they be measured? / Social Science Information, Vol. 44 (4), pp. 695—729 // https://doi.org/10.1177/0539018405058216

2458

Mehrabian A. (1996). Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in Temperament / Current Psychology, Vol. 14 (4), pp. 261—292 // https://doi.org/10.1007/BF02686918

2459

Baggia P., Pelachaud C., Peter C., Zovato E., Burkhardt F., Schröder M. (2014). Emotion Markup Language (EmotionML) 1.0. W3C Recommendation 22 May 2014. Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang) // https://www.w3.org/TR/emotionml/

2460

Ashimura K., Baggia P., Oltramari A., Peter C., Zovato E., Burkhardt F., Schröder M., Pelachaud C. (2014). Vocabularies for EmotionML. W3C Working Group Note 1 April 2014. W3C® (MIT, ERCIM, Keio, Beihang) // https://www.w3.org/TR/emotion-voc/

2461

Ververidis D., Kotropoulos C. (2003). A Review of Emotional Speech Databases / Proceedings of panhellenic conference on informatics, Thessaloniki, Greece, pp. 560—574 // http://poseidon.csd.auth.gr/LAB_PEOPLE/Ververidis/Ververidis_PCI_2003.pdf

2462

Pittermann J., Pittermann A., Minker W. (2009). Handling Emotions in Human-Computer Dialogues. Language Arts & Disciplines // https://books.google.ru/books?id=VUqEuXrk_hUC

2463

Livingstone S. R., Russo F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English / PLoS One, May 16, 2018 // https://doi.org/10.1371/journal.pone.0196391

2464

Surrey Audio-Visual Expressed Emotion (SAVEE) Database (2015) // http://kahlan.eps.surrey.ac.uk/savee/

2465

Haq S., Jackson P. J. B. (2010). Multimodal Emotion Recognition / Wang W. (2010). Machine Audition: Principles, Algorithms and Systems. IGI Global Press, pp. 398—423 // https://doi.org/10.4018/978-1-61520-919-4

2466

Haq S., Jackson P. J. B. (2009). Speaker-Dependent Audio-Visual Emotion Recognition // Proceedings of the International Conference on Auditory-Visual Speech Processing, pp. 53—58 // http://personal.ee.surrey.ac.uk/Personal/P.Jackson/pub/avsp09/HaqJackson_AVSP09.pdf

2467

Haq S., Jackson P. J. B., Edge J. D. (2008). Audio-Visual Feature Selection and Reduction for Emotion Classification // Proceedings of the International Conference on Auditory-Visual Speech Processing, pp. 185—190 // http://personal.ee.surrey.ac.uk/Personal/P.Jackson/pub/avsp08/HaqJacksonEdge_AVSP08.pdf

2468

McKeown G., Valstar M., Pantic M., Schroder M. (2012). The SEMAINE database: annotated multimodal records of emotionally coloured conversations between a person and a limited agent / IEEE Transactions on Affective Computing, Vol. 3, Iss. 1, pp. 5—17 // https://doi.org/10.1109/T-AFFC.2011.20

2469

The sensitive agent project database / SEMAINE Database // https://semaine-db.eu/

2470

* Avatar: a person's embodiment in a virtual world.

2471

Ekman P., Friesen W. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, 1978 // https://books.google.ru/books?id=08l6wgEACAAJ

2472

Burton V. (2013). Happy Women Live Better. Harvest House Publishers // https://books.google.ru/books?id=FW6jDDjtH4cC

2473

Burkhardt F., Paeschke A., Rolfes M., Sendlmeier W., Weiss B. (2005). A database of German emotional speech / 9th European Conference on Speech Communication and Technology, Vol. 5, pp. 1517—1520 // https://www.isca-speech.org/archive/interspeech_2005/i05_1517.html

2474

Busso C., Bulut M., Lee C.-C., Kazemzadeh A., Mower E., Kim S., Chang J. N., Lee S., Narayanan S. S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database / Journal of Language Resources and Evaluation, Vol. 42, No. 4, pp. 335—359 // https://doi.org/10.1007/s10579-008-9076-6

2475

Chen J., Wang C., Wang K., Yin C., Zhao C., Xu T., Zhang X., Huang Z., Liu M., Yang T. (2020). HEU Emotion: A Large-scale Database for Multi-modal Emotion Recognition in the Wild // https://arxiv.org/abs/2007.12519

2476

Makarova V., Petrushin V. A. (2002). RUSLANA: A database of Russian emotional utterances / 7th International Conference on Spoken Language Processing, ICSLP2002 — INTERSPEECH 2002, Denver, Colorado, USA, September 16—20, 2002 // https://www.isca-speech.org/archive/archive_papers/icslp_2002/i02_2041.pdf

2477

Lyakso E., Frolova O., Dmitrieva E., Grigorev A., Kaya H., Salah A. A., Karpov A. (2015). EmoChildRu: Emotional Child Russian Speech Corpus / Ronzhin A., Potapova R., Fakotakis N. (2015). Speech and Computer. SPECOM 2015. Lecture Notes in Computer Science, Vol. 9319. Springer, Cham // https://doi.org/10.1007/978-3-319-23132-7_18

2478

Kondratenko V., Sokolov A., Karpov N., Kutuzov O., Savushkin N., Minkin F. (2022). Large Raw Emotional Dataset with Aggregation Mechanism // https://arxiv.org/abs/2212.12266

2479

djunka (2022). Dusha: самый большой открытый датасет для распознавания эмоций в устной речи на русском языке / Habr, February 8, 2022 // https://habr.com/ru/companies/sberdevices/articles/715468/

2480

Shen G., Wang X., Duan X., Li H., Zhu W. (2020). MEmoR: A Dataset for Multimodal Emotion Reasoning in
