Sunday, August 15, 2010

Updates on voice analysis, etc.

"Stress detector can hear it in your voice"
Normally we have full control over our vocal muscles and change their position to create different intonations, says Yin. "But when stressed, we lose control of the position of the speech muscles," and our speech becomes more monotone, he says.
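The monotone cue Yin describes suggests a crude computational proxy. Here's a minimal sketch (my own illustration, not Yin's actual detector) that flags speech whose pitch track barely varies — the threshold and the variance-based test are both assumptions for the sake of the example:

```python
# Toy illustration of the idea behind Yin's detector (NOT his actual
# algorithm): stressed speech is more monotone, so low variance in the
# pitch track can serve as a crude stress proxy.

def pitch_variance(pitch_track):
    """Variance of a sequence of pitch estimates (Hz)."""
    mean = sum(pitch_track) / len(pitch_track)
    return sum((p - mean) ** 2 for p in pitch_track) / len(pitch_track)

def looks_stressed(pitch_track, threshold=100.0):
    """Flag speech whose pitch barely moves (hypothetical threshold)."""
    return pitch_variance(pitch_track) < threshold

# Relaxed speech: pitch wanders; stressed speech: nearly flat.
relaxed = [180, 210, 165, 230, 190, 175]
stressed = [200, 202, 199, 201, 200, 198]
print(looks_stressed(relaxed))   # False (wide pitch range)
print(looks_stressed(stressed))  # True (near-monotone)
```

A real system would first have to extract the pitch track from audio and normalize for each speaker's baseline, which is where the hard work lives.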

Yin tested his stress detector in a call centre to identify which interviewees were more relaxed during recruitment tests. The proportion of new staff who left within three months subsequently fell from 18 per cent to 12 per cent, he claims. The detector was shown at trade show CeBIT Australia in May.

"Innovation: Google may know your desires before you do"
In future, your Google account may be allowed, under some as-yet-unidentified privacy policy, to know a whole lot about your life and the lives of those close to you. It will know birthdays and anniversaries, consumer gadget preferences, preferred hobbies and pastimes, even favourite foods. It will also know where you are, and be able to get in touch with your local stores via their websites.

Singhal says that could make life a lot easier. For instance, he imagines his wife's birthday is coming up. If he has signed up to the searching-without-searching algorithm (I'll call it "SWS" for now), it sees the event on the horizon and alerts him – as a calendar function can now. But the software then reads his wife's consumer preferences file and checks the real-time Twitter and Facebook feeds that Google now indexes for the latest buzz products that are likely to appeal to her.
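The flow Singhal imagines can be sketched as a toy pipeline: spot an upcoming event, read the person's preference profile, and match it against trending products. Everything below — the data, the name parsing, the `upcoming_alerts` function — is my hypothetical illustration, not anything Google has described implementing:

```python
from datetime import date, timedelta

# Toy sketch of the "searching-without-searching" flow (all names and
# data here are hypothetical): 1. spot an upcoming event, 2. read the
# person's preference profile, 3. match against trending products.

events = {"wife's birthday": date.today() + timedelta(days=5)}
preferences = {"wife": {"photography", "cooking"}}
trending = [("compact camera", "photography"), ("headphones", "music")]

def upcoming_alerts(horizon_days=7):
    alerts = []
    for name, when in events.items():
        if (when - date.today()).days <= horizon_days:
            person = name.split("'")[0]  # crude: "wife's birthday" -> "wife"
            likes = preferences.get(person, set())
            picks = [item for item, tag in trending if tag in likes]
            alerts.append((name, picks))
    return alerts

print(upcoming_alerts())  # [("wife's birthday", ['compact camera'])]
```

The interesting (and privacy-sensitive) part is of course step 2 — where that preferences file comes from.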

"Roila: a spoken language for robots"
The Netherlands' Eindhoven University of Technology is developing ROILA, a spoken language designed to be easily understandable by robots.

The number of robots in our society is increasing rapidly. Service robots that interact with everyday people already outnumber industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. But current speech recognition technology has not yet reached a level at which it would be easy to use. Robots often misunderstand words or are unable to make sense of them, and some researchers argue that speech recognition will never reach the level of humans.

I talked about this earlier in the post about machine translation: the reason it sucks is that people rarely speak clearly, use slang, and so on. But if the technology becomes commonplace, then as it learns to understand slang, we'll also learn to speak in a way that's easy for the machine to understand and/or translate.
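The core idea behind a language like ROILA — words chosen so a recognizer won't confuse them — can be sketched in a toy way. ROILA's actual vocabulary was built differently (optimized against a real recognizer), so treat this greedy edit-distance filter, and the made-up candidate words, purely as an illustration of the principle:

```python
# Toy sketch of the principle behind a recognizer-friendly vocabulary:
# keep only words that are far apart in edit distance, as a crude proxy
# for "hard to confuse acoustically". Candidate words are made up.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def pick_vocabulary(candidates, min_dist=2):
    """Greedily keep words at least min_dist edits from all kept words."""
    kept = []
    for word in candidates:
        if all(edit_distance(word, k) >= min_dist for k in kept):
            kept.append(word)
    return kept

candidates = ["pito", "bito", "fosit", "kanek", "kanet", "wopa"]
print(pick_vocabulary(candidates))  # "bito" and "kanet" are too confusable
```

Edit distance on letters is only a stand-in; a serious version would compare phoneme sequences or actual recognizer confusion rates.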

"Speech-to-Speech Android App"

"See what Google knows about your social circle"

Google started including "your social circle" in its search results earlier this year. Ever wonder how Google knows who you know? Wonder no more, as the Mountain View firm offers a page explaining exactly how inter-connected your online life really is.

The link below leads you to a page where Google explains the three levels of contact it can trace between you and other people, with the depth depending on whether you've filled out a Google Profile and how active you are on Google services like Chat and Reader. You'll see your "direct connections" through Chat and other contact-creating apps, direct connections from sites you've linked to in your profile (including people you follow on services like Twitter), and friends-of-a-friend reached through your direct connections.
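Those three levels amount to a small graph traversal. Here's a minimal sketch with made-up contact data (the structure mirrors the description above, not Google's actual implementation):

```python
# Toy model of three connection levels: direct contacts from Chat,
# contacts from services linked on your profile, and friends-of-friends
# reached through your direct connections. All data is made up.

chat_contacts = {"me": {"alice", "bob"}, "alice": {"me", "dave"}}
profile_links = {"me": {"carol"}}  # e.g. followed on a linked Twitter account

def social_circle(user, chat, linked):
    direct = chat.get(user, set())
    via_profile = linked.get(user, set())
    fof = set()
    for friend in direct:
        fof |= chat.get(friend, set())
    fof -= direct | via_profile | {user}  # keep only genuinely new people
    return {"direct": direct, "linked": via_profile, "friend_of_friend": fof}

print(social_circle("me", chat_contacts, profile_links))
```

Here "dave" surfaces only because "alice" is a direct connection — which is exactly the inter-connectedness the Google page is showing you.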

"Google working on voice recognition for all browsers"
In some ways it seemed inevitable, but in other ways, it's still an awesome idea. InfoWorld reports that Google is building speech recognition technologies for browsers, and not just their own Chrome—all browsers, as an "industry standard." Beyond making certain searches easy to fire off with a spoken phrase, voice recognition might also give the web a whole new class of webapps that listen for audio cues. Do you want your browser to understand what you're telling it? Or is the keyboard still your preferred lingua franca for non-mobile browsing? [InfoWorld]