Page 111 - AC/E Digital Culture Annual Report 2014
FACIAL MODELING AND ANIMATION TECHNIQUES: http://ow.ly/tyZkL

growing interest in them. From automata to robots with human emotions is a long and varied journey. And it is precisely at this point that science, technology, art and, indeed, culture begin a relationship that moves from the real to the fictional, or from the fictional to the real. Fiction, and science fiction in particular, has sought to anticipate scientific and technical achievements, in which technology acquires a vital role in the attainment of these futuristic aims. Meanwhile, technology and science attempt this anticipation as a measure of prevention (of diseases) and now also of services and experiences (tastes, desires, searches, emotions).

Hence science, technology and cultural products come together in this search for the latest recreation of the nature of man. A clear example is robotics, where all three have found common ground for developing their work. Artificial intelligence, affective computing and the semantic Web are three approaches to the question of humanoids or computers with emotions: an approximation to the creation of a "real" person that is no longer like Kleist's puppets in his essay "On puppet theatre". Here there is an attempt to recreate in detail what we know about what we are, without ironic distance, including our emotions.

One of the first scientific approaches to the recreation of human emotions in a machine was made by Fred Parke in 1972. This computer science graduate from the University of Utah unveiled the first human face rendered with computer graphics, completed in 1974 for his doctoral thesis. Parke was trying to recreate artificially, in 3D and with the maximum possible detail, the movements of a human face, to which end he included a recital of a poem by Emily Dickinson, "How Happy is the Little Stone" (1881). I do not know whether the choice of a poem with this title had a similar intention.
It was an exceptionally complex product for its time, since it combined programming code with analogue sound-recording systems in order to reproduce these pioneering 3D-animated faces on video, with the consequent need for a laborious process of synchronisation. The study of human facial expressions has a much longer history than that, of course, but the articulation of such expressions did not take shape until these first attempts, which would soon give way to research into the gestural recreation of emotions.

Since then, the creation of animated interfaces has evolved to attain the standards we all now know, and which we can see in video games and films. Virtual robotics can animate the protagonists of a film that is not necessarily itself an animation. Work is going on at the University of Cambridge on a line of research similar to Parke's with the Zoe prototype, which according to its creators sets out to be "the most expressive avatar yet made, replicating human emotions with unprecedented realism". It is a project based on voice recognition and the capture of visual data, which yields the adjustments of speech that reveal different states of mind. The idea is to develop such virtual faces into the interactive interfaces of the near future, through which humans will be able to relate to computers and digital intelligence of all sorts. But beyond the recreation of emotions, others are attempting to integrate emotion fully into robots and computers.

In 2011, Eva, a highly emotive film by the director Kike Maíllo, told the story of a researcher at the University of Robotics in the field of cybernetic