Machines can detect people’s emotions through computer vision or speech processing, but they can also convey or mimic human emotions. A robot can frown, a smart speaker can sound happy, and a chatbot can send emojis. As these AI systems become more technically capable, simple expressions like smiles or emojis are likely to evolve into complex emotional displays, e.g., gratitude or grief. Even if a machine never truly feels grief the way a person can, humans may still attribute complex emotional experiences to artificial agents. Artificial emotions (AE) are thus extensions or projections of our own emotions. When we interact with the emotionally intimate machines of the future, emotional boundaries blur, raising many ethical implications.
The Consumer Informatics group follows this year’s world usability theme, “Human-centered AI”, and invites you to a keynote talk by Assistant Professor Minha Lee titled “Whose emotions are the second-ordered artificial emotions?” The talk focuses on the challenges of artificial emotions and the possibility of non-anthropomorphic expression of emotions in intelligent agents.
Minha Lee holds an M.Sc. in Information Science from the University of Amsterdam, a B.F.A. in Digital Arts from the Pratt Institute in Brooklyn, NY, and a B.A. in Philosophy from the University of Minnesota – Twin Cities. Her research interests include moral conflicts and moral emotions in relation to technology. She works with various members of the 4TU Centre of Humans & Technology at Eindhoven University of Technology.
The talk is free of charge and open to everyone interested in human-AI interaction.
We will meet on Wednesday, December 9, at 6 p.m.
The talk will be held via Zoom. To join us, use the link below: