Sunday, August 15, 2010

Updates on voice analysis, etc.

"Stress detector can hear it in your voice"
Normally we have full control over our vocal muscles and change their position to create different intonations, says Yin. "But when stressed, we lose control of the position of the speech muscles," and our speech becomes more monotone, he says.

Yin tested his stress detector in a call centre to identify which interviewees were more relaxed during recruitment tests. The proportion of new staff who left within three months subsequently fell from 18 per cent to 12 per cent, he claims. The detector was shown at the CeBIT Australia trade show in May.
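There aren't many details on how Yin's detector works, but the monotone claim suggests a simple first pass: track pitch frame by frame and flag speech whose pitch barely varies. Here's a rough numpy-only sketch of that idea; the frame sizes, the 0.3 peak cutoff, and the one-semitone threshold are my guesses, not anything from the article.

```python
import numpy as np

def pitch_track(signal, sr=16000, frame_len=1024, hop=512,
                fmin=65.0, fmax=300.0):
    """Crude autocorrelation pitch tracker: one f0 estimate per voiced frame."""
    lag_min = int(sr / fmax)              # shortest pitch period we accept
    lag_max = int(sr / fmin)              # longest pitch period we accept
    f0s = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        if ac[0] <= 0:                    # silent frame, skip it
            continue
        ac = ac / ac[0]                   # normalize so lag 0 == 1
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        if ac[lag] > 0.3:                 # weak peak -> treat as unvoiced
            f0s.append(sr / lag)
    return np.array(f0s)

def sounds_monotone(signal, sr=16000, threshold_semitones=1.0):
    """Flag speech whose pitch barely moves, per the monotone-under-stress claim."""
    f0 = pitch_track(signal, sr)
    if len(f0) < 10:
        return False                      # not enough voiced speech to judge
    spread = np.std(12 * np.log2(f0 / np.median(f0)))  # pitch spread in semitones
    return spread < threshold_semitones
```

A real system would presumably compare against a per-speaker baseline rather than a fixed threshold, since some people just talk flatly.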

"Innovation: Google may know your desires before you do"
In future, your Google account may be allowed, under some as-yet-unidentified privacy policy, to know a whole lot about your life and the lives of those close to you. It will know birthdays and anniversaries, consumer gadget preferences, preferred hobbies and pastimes, even favourite foods. It will also know where you are, and be able to get in touch with your local stores via their websites.

Singhal says that could make life a lot easier. For instance, he imagines his wife's birthday is coming up. If he has signed up to the searching-without-searching algorithm (I'll call it "SWS" for now), it sees the event on the horizon and alerts him – as a calendar function can now. But the software then reads his wife's consumer preferences file and checks the real-time Twitter and Facebook feeds that Google now indexes for the latest buzz products that are likely to appeal to her.
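To make the SWS idea concrete, here's a toy sketch of that gift-suggestion flow. The calendar, the preference profile, and the buzz scores are all invented stand-ins for data Google would actually hold.

```python
from datetime import date

# Hypothetical data sources standing in for the ones the article describes:
# a calendar, a per-person preference profile, and scored "buzz" products
# mined from real-time Twitter/Facebook feeds.
calendar = {"wife_birthday": date(2010, 9, 3)}
preferences = {"wife": {"photography", "cooking"}}
buzz = [("mirrorless camera", "photography", 0.9),
        ("sous-vide kit", "cooking", 0.7),
        ("gaming mouse", "gaming", 0.8)]

def sws_alert(today, horizon_days=14):
    """Searching-without-searching: surface gift ideas before you ask."""
    for event, when in calendar.items():
        if 0 <= (when - today).days <= horizon_days:
            liked = preferences["wife"]
            picks = sorted((p for p in buzz if p[1] in liked),
                           key=lambda p: -p[2])     # highest buzz first
            return event, [name for name, _, _ in picks]
    return None

print(sws_alert(date(2010, 8, 25)))
# ('wife_birthday', ['mirrorless camera', 'sous-vide kit'])
```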

"Roila: a spoken language for robots"
The Netherlands' Eindhoven University of Technology is developing ROILA, a spoken language designed to be easily understandable by robots.

The number of robots in our society is increasing rapidly. Service robots that interact with everyday people, such as the Roomba or Nao, already outnumber industrial robots. The easiest way to communicate with these service robots would be natural speech, but current speech recognition technology has not yet reached a level at which it is easy to use. Robots often misunderstand words or are unable to make sense of them, and some researchers argue that speech recognition will never reach the level of humans.

I talked about this earlier in the post about machine translation: the reason it sucks is that people never speak clearly and use slang, etc. But if this kind of interaction becomes commonplace, then as the machines learn to understand slang, we'll also learn to speak in a way that's easy for a machine to understand and/or translate.
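ROILA's core trick is choosing words that a recognizer is unlikely to confuse with one another. Here's a toy sketch of that idea using plain edit distance as a crude stand-in for acoustic confusability; the actual project reportedly evolved its vocabulary with a genetic algorithm against real recognizer confusion data, so treat this greedy version as illustration only.

```python
# Build candidate two-syllable words from a small consonant-vowel inventory,
# then greedily keep only words that are far (in edit distance) from every
# word already accepted. All parameters here are invented.
def edit_distance(a, b):
    """Plain Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

syllables = [c + v for c in "bfkmnpstw" for v in "aeiou"]
candidates = [s1 + s2 for s1 in syllables for s2 in syllables]

def pick_vocabulary(n_words=20, min_dist=3):
    vocab = []
    for word in candidates:
        if all(edit_distance(word, w) >= min_dist for w in vocab):
            vocab.append(word)
            if len(vocab) == n_words:
                break
    return vocab

print(pick_vocabulary())   # 20 words, each at least distance 3 from the others
```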

"Speech-to-Speech Android App"

"See what Google knows about your social circle"

Google started including "your social circle" in its search results earlier this year. Ever wonder how Google knows who you know? Wonder no more, as the Mountain View firm offers a page explaining exactly how interconnected your online life really is.

The link below leads to a page where Google explains the three levels of contact it can trace between you and other people, with the depth depending on whether you've filled out a Google Profile and how active you are on Google services like Chat and Reader. You'll see your "direct connections" through Chat and other contact-creating apps, direct connections from sites you've linked to in your profile (including those you follow on services like Twitter), and friends-of-friends reached through your direct connections.
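Underneath, this is just a small graph problem: your direct connections are edges, and the friend-of-a-friend layer is whatever sits two hops out. A toy sketch with an invented contact graph:

```python
from collections import defaultdict

# Hypothetical contact graph; in Google's version the edges would come from
# Chat contacts and from accounts linked on your Google Profile (Twitter etc.).
direct = defaultdict(set)
direct["me"] |= {"alice", "bob"}          # Chat and other contact apps
direct["me"] |= {"carol"}                 # linked from the profile
direct["alice"] |= {"dave"}
direct["bob"] |= {"erin", "carol"}

def social_circle(user):
    first = direct[user]
    second = set()
    for friend in first:
        second |= direct[friend]
    second -= first | {user}              # keep strictly friend-of-a-friend
    return {"direct": first, "friend_of_friend": second}

print(social_circle("me"))
# direct: alice, bob, carol; friend_of_friend: dave, erin (set order varies)
```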

"Google working on voice recognition for all browsers"
In some ways it seemed inevitable, but in other ways, it's still an awesome idea. InfoWorld reports that Google is building speech recognition technologies for browsers, and not just its own Chrome: all browsers, as an "industry standard." Beyond making certain searches easy to fire off with a spoken phrase, voice recognition might also give the web a whole new class of webapps that listen for audio cues. Do you want your browser to understand what you're telling it? Or is the keyboard still your preferred lingua franca for non-mobile browsing? [InfoWorld]

Friday, February 5, 2010

Know Thyself

Let's take a look at what we talked about before: machine learning and software eventually understanding us better than we understand ourselves. Since there are obvious monetary and political advantages to pulling this off, advertising companies and government projects are a good place to start:

1. Cognitive Match Secures Another $2.5m For Realtime Matching

The Cognitive Match startup is applying artificial intelligence, learning mathematics, psychology and semantic technologies to match content (products, offers, or editorial) to individuals in real time. It's doing this in part by relying on an academic panel of professors from universities across the UK and Europe who specialize in machine learning and psychology. The idea is to ensure maximum response from individuals, thereby increasing conversion, revenue and ultimately profit.
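Cognitive Match doesn't publish its models, but "show content, watch the response, show better content next time" is essentially a bandit problem. Here's a minimal epsilon-greedy sketch of that loop, with made-up offers and conversion rates; the real system presumably also conditions on visitor features.

```python
import random

class ContentMatcher:
    """Epsilon-greedy selection of which content variant to show."""
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}
        self.conversions = {v: 0 for v in variants}

    def choose(self):
        if random.random() < self.epsilon:            # explore occasionally
            return random.choice(list(self.shows))
        # otherwise exploit the best observed conversion rate so far
        return max(self.shows,
                   key=lambda v: self.conversions[v] / (self.shows[v] or 1))

    def record(self, variant, converted):
        self.shows[variant] += 1
        self.conversions[variant] += int(converted)

# Simulated traffic with invented true conversion rates per variant.
true_rates = {"offer_a": 0.02, "offer_b": 0.05, "editorial": 0.01}
matcher = ContentMatcher(list(true_rates))
for _ in range(1000):
    v = matcher.choose()
    matcher.record(v, converted=random.random() < true_rates[v])
print(max(matcher.shows, key=matcher.shows.get))      # usually 'offer_b'
```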

2. Innerscope: Quantifying Subconscious Responses to Ads

The premise of Levine's company, Innerscope, is that running biometric data through algorithms can tell advertisers which commercials work and which don't. They can quantify your subconscious responses to advertisements without resorting to the messiness of human language.

3. Navy Wants Troops Wearing Brain-Scanners Into War

The Navy's Bureau of Medicine and Surgery is requesting proposals for a brain-scanning system that can assess a myriad of neuro-cognitive abilities, including reaction times, problem solving and memory recall. The scanner would also test for preliminary warning signs of post-traumatic stress, anxiety and depression, using the Trail-Making Test: a series of connect-the-dot exercises that's been used by the military since the 1940s. And not only should the system be portable, but the Navy wants it to withstand the most extreme weather conditions, from desert heat to Arctic cold.
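The Trail-Making Test part is easy to picture in code: the subject connects numbered targets in order, and you score completion time and sequencing errors. A toy scorer; the tap format is my invention, not the Navy's.

```python
def score_trail_making(taps):
    """taps: list of (timestamp_seconds, target_label) in the order tapped."""
    expected, errors = 1, 0
    for _, label in taps:
        if label == expected:
            expected += 1                # correct next dot in the sequence
        else:
            errors += 1                  # tapped out of order
    total_time = taps[-1][0] - taps[0][0]
    return {"seconds": total_time, "errors": errors, "completed": expected - 1}

print(score_trail_making([(0.0, 1), (1.2, 2), (2.0, 4), (2.9, 3), (4.1, 4)]))
# {'seconds': 4.1, 'errors': 1, 'completed': 4}
```

Slower times and more errors across repeated administrations are presumably the kind of drift such a system would watch for.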


4. HIDE – Homeland Security, Biometric Identification & Personal Detection Ethics

HIDE is a project promoted by the European Commission (EC) and coordinated by the Centre for Science, Society and Citizenship, an independent research centre based in Rome (IT).

HIDE aims to establish a platform devoted to monitoring the ethical and privacy implications of biometrics and personal detection technologies. Detection technologies are technologies used to detect something or someone within a security or safety context. Personal detection technologies focus specifically on individuals; they include, for example, CCTV, infrared detectors and thermal imaging, GPS and other Geographical Information Systems (GIS), RFID, MEMS, smart ID cards, transponders, body scanners, etc. Biometrics is the application of technologies that use a measurable physical characteristic or personal behavioural trait to recognize the identity, or verify the claimed identity, of a previously registered individual.


5. ADABTS – Automatic Detection of Abnormal Behaviour and Threats in crowded Spaces

ADABTS aims to facilitate the protection of EU citizens, property and infrastructure against threats of terrorism, crime and riots by automatically detecting abnormal human behaviour. Current automatic detection systems have limited functionality, struggling to make inferences about the acceptability of human behaviour.
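In the simplest reading, "automatic detection of abnormal behaviour" means modelling what normal crowd movement looks like and flagging whatever falls far outside it. A toy sketch with invented features and thresholds; ADABTS's actual methods are surely more involved.

```python
import numpy as np

# Fit per-feature mean/std on "normal" tracks, then flag outliers.
# Feature columns (invented): walking speed (m/s), direction changes per second.
rng = np.random.default_rng(0)
normal_tracks = rng.normal([1.2, 0.1], [0.3, 0.05], size=(500, 2))

mu = normal_tracks.mean(axis=0)
sigma = normal_tracks.std(axis=0)

def is_abnormal(track, z_threshold=4.0):
    """Flag a track whose worst feature is more than z_threshold sigmas out."""
    z = np.abs((track - mu) / sigma)
    return bool(z.max() > z_threshold)

print(is_abnormal(np.array([1.3, 0.12])))   # ordinary walking -> False
print(is_abnormal(np.array([4.5, 0.9])))    # running and weaving -> True
```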

--
We could keep going, but that's enough for now. This last one is pretty interesting, and don't worry, I'm not about to start talking Orwell/Minority Report. Biometrics measured while you're hooked up to a bunch of wires, sitting in a chair, is one thing. Biometrics read by simply analyzing visual and sonic information is another. This European system is supposedly working on algorithms to detect evil intentions through facial cues, in order to stop potential terrorists and criminals before they're able to do anything.

So let's talk about the fun, non-military, non-crime-fighting, personal version of this type of thing. If we're eventually all wearing cameras and microphones, then we have the same tools at our disposal as these governments, just on a smaller scale. We also have an advantage: we can manually tag incoming information to help the computer. That was Mark I was talking to for the last hour. Next time you talk to Mark, it recognizes his voice and adds to your growing collection of statistics about him. Three months later, once the computer has a pretty good idea of what he sounds like when you talk to him, it suddenly lets you know that he's either sick, tired, or depressed, judging by his abnormal facial expressions, flatter voice, and sparser comments. It also lets you in on the fact that Leah, who you just met at a party, is probably attracted to you, judging by her tracked eye movements, increasingly engaged responses, and infrared temperature patterns.

As the judicial system spins all out of whack, so will interpersonal relationships, art, and love.
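For what it's worth, the Mark scenario is just per-person baselining, and that part is easy to sketch. All the features, numbers, and the alert threshold below are invented.

```python
import numpy as np

class PersonModel:
    """Running baseline of a tagged person's conversational features."""
    def __init__(self, name):
        self.name = name
        self.history = []                 # one feature vector per conversation

    def observe(self, features):
        self.history.append(np.asarray(features, dtype=float))

    def deviation(self, features):
        """Max z-score of a new conversation against this person's baseline."""
        if len(self.history) < 10:
            return 0.0                    # not enough data to judge yet
        base = np.stack(self.history)
        z = np.abs((features - base.mean(axis=0)) /
                   (base.std(axis=0) + 1e-9))
        return float(z.max())

# Invented features: pitch variability, speaking rate, smiles per minute.
mark = PersonModel("Mark")
rng = np.random.default_rng(1)
for _ in range(30):                       # three months of normal chats
    mark.observe(rng.normal([2.0, 4.5, 0.3], [0.2, 0.4, 0.05]))

today = np.array([1.1, 3.0, 0.1])         # flat pitch, slow, few smiles
if mark.deviation(today) > 3.0:           # assumed alert threshold
    print(f"{mark.name} seems off today: sick, tired, or down?")
```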