Page 113 - AC/E Digital Culture Annual Report 2014
AC/E Where We Are Heading: Digital Trends in the World of Culture
Theme 9: The New Affective Technologies Come to the Cultural Sector

…software or as a mobile app, for example, to obtain the desired results. A company of Mexican origin, EmoSpeech, develops software applications based on emotion recognition with voice as its interface. Basically, what this technology does is recognise frame of mind by means of the voice; that is, the software interprets emotions, which for the purposes of an enterprise can be converted into data on its users. The idea emerged at the Laboratorio de Tecnologías del Lenguaje de la Coordinación de Ciencias Computacionales at the Instituto Nacional de Astrofísica, Óptica y Electrónica, in Mexico City. Its uses, of course, will go beyond the call‐centres where it has started to be deployed.

Many of these complex technologies are eventually integrated into applications or resources for mobile devices, including smartphones and tablets. Their various features and characteristics may serve as tools or resources for the applications themselves, such as data collection: the voice, the camera and the GPS are being used to investigate the anticipation of decisions or searches by users of these devices. Research on the voice may supply many data, particularly from the point of view of affective analysis. It is well known that emotion causes changes in breathing, phonation and articulation, which in turn affect the acoustic signal. The emotional tone of the voice, or prosody, takes in a number of acoustic parameters such as temporal structure, intensity and frequency.

The emotion expressed by a speaker is characterised in all cultures by the universal properties of these parameters. According to a recent study³, adult listeners can quickly and reliably recognise different emotions on the basis of different vocal signals. Furthermore, it shows that emotional prosody is not processed voluntarily, and that the specific acoustic patterns observed in human beings in response to certain emotions are very similar to those observed in other primates. Recognising emotional expressions during social interaction allows us to detect the state or the emotional reactions of another, and may give clues as to how to respond properly in different circumstances. It is this type of response that is the subject of current work on emotional intelligence projects. The time will come when mobile technology will also decipher these universal parameters and know how to react. That is to say, it is very possible that, thanks to the voice, mobile phones in their most “intelligent” version will “understand” their owners and, who knows, take decisions for them.

Often this sort of technology is much closer and more commonplace in our environment than we notice or are aware of. Anyone who has an Apple smartphone or tablet running the latest operating system will have a voice application called Siri. This app processes the user’s language to respond to his or her commands during navigation, with no need to use the hands. It can also give usage tips because, according to its creators, it gradually adapts itself to the needs of each user. In other words, it personalises its service.

As is often the case, with successive updates of the application, its success rate gets much better. It is not hard to find amusing anecdotes on the Web about users who ask Siri more or less compromising questions and the surprising answers they may get. For example, if you ask, “Would you marry me?”, the application might answer, amongst other possibilities, “I sure have been receiving a lot of marriage proposals recently”. It must not be forgotten that it is an application meant to provide a real service on Apple’s platforms, but, as can be seen, it in turn tries to humanise itself and to give a coherent response to the more or less joking or utilitarian queries made by the chattier users.

I cannot fail to mention here another example from the cinema regarding the interpretation these technologies are making of themselves, in this case something very similar to the instance we have just seen with Siri, but perhaps taken to the extreme. I am referring to Her, a film by Spike Jonze featuring
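Two of the acoustic parameters of prosody mentioned earlier, intensity and frequency, can be made concrete with a small sketch. This is only an illustration of the kind of measurement such systems rely on, not EmoSpeech's actual method; the function name and the simple autocorrelation pitch estimate are my own assumptions.

```python
import numpy as np

def prosodic_features(signal, sample_rate):
    """Estimate two acoustic parameters of prosody:
    intensity (RMS energy) and fundamental frequency (autocorrelation pitch).
    Illustrative only; real affective-analysis systems use richer features."""
    # Intensity: root-mean-square amplitude of the signal.
    rms = np.sqrt(np.mean(signal ** 2))

    # Fundamental frequency: strongest autocorrelation peak within a
    # plausible range for the human voice (roughly 50-400 Hz).
    autocorr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    min_lag = int(sample_rate / 400)   # highest pitch considered
    max_lag = int(sample_rate / 50)    # lowest pitch considered
    lag = min_lag + int(np.argmax(autocorr[min_lag:max_lag]))
    f0 = sample_rate / lag
    return rms, f0

# Demo on a synthetic 200 Hz tone standing in for a voiced sound.
sr = 16000
t = np.arange(sr) / sr                      # one second of samples
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
rms, f0 = prosodic_features(tone, sr)
print(round(rms, 3), round(f0))             # → 0.354 200
```

A real system would compute these values frame by frame, so that their variation over time (the temporal structure the text also mentions) becomes part of the feature set fed to an emotion classifier.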