TEL AVIV—Beyond Verbal Communications Ltd., a voice-recognition software developer here, is rolling out an app promising something Siri can’t yet deliver: a readout on how you’re feeling. Called Moodies, it lets a smartphone user speak a few words into the phone’s mike to produce, about 20 seconds later, an emotional analysis.

Beyond Verbal executives say the app is mostly for self-diagnosis—and a bit of fun: It pairs a cartoon face with each analysis, and users can share the face on Facebook or in a tweet or email. But the app is coming out as the company and other developers—many clustered in Tel Aviv—push increasingly sophisticated hardware and software they say can determine a person’s emotional state through analysis of his or her voice.

These companies say the tools can also detect fraud, screen airline passengers and help a call-center technician better deal with an irate customer. And they can be used to keep tabs on employees or screen job applicants. One developer, Tel Aviv-based Nemesysco Ltd., offers what it calls “honesty maintenance” software aimed at human-resource executives. The firm says that by analyzing a job applicant’s voice during an interview, the program can help identify fibs.

That’s raising alarm among many voice-analysis experts, who question the accuracy of such on-the-spot interpretations. It’s also raising worries among privacy advocates, who say such technology—especially if it is being rolled out in cheap, easy-to-use smartphone apps—could be a fresh threat to privacy in the digital age.
All of this juicy tech news notwithstanding, I’m keenly interested in this app because it translates our voices into emotions when we read certain passages of literature. I was taught by poetry professor Tyehimba Jess that when we read poetry and prose out loud, they become alive and resonant. Open mic performances breathe new life into words we often gloss over, letting us feel with a different part of our brain when we come across terminal illness, slavery, sex, sentimentality, or a crying child. When we speak these words out loud, I believe we use another part of our brain, one connected to our emotional outlets, such as those we use when arguing or voicing “I love you” for the first or last time. When we talk out loud about trauma, we are emotionally connected in stronger ways than when we write about the trauma repeatedly, as veterans with Post Traumatic Stress Disorder (PTSD) do.
I’d like to see how this app handles our voicing of strong literary passages. I’d like to see the differences between people and how they process emotions while speaking. And then I’d like to see how it processes something like an open mic or spoken-word performance—with an audience in tow. The performance, while emotional, is staged and enhanced. Can the app pick up on that?