Sunday, August 15, 2010

Updates on voice analysis, etc.

"Stress detector can hear it in your voice"
Normally we have full control over our vocal muscles and change their position to create different intonations, says Yin. "But when stressed, we lose control of the position of the speech muscles," and our speech becomes more monotone, he says.

Yin tested his stress detector in a call centre to identify which interviewees were more relaxed during recruitment tests. The number of new staff that left after three months subsequently fell from 18 per cent to 12 per cent, he claims. The detector was shown at trade show CeBIT Australia in May.
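"Monotone" is actually measurable: the crudest proxy is how little the pitch (fundamental frequency) moves around during an utterance. Here's a rough sketch of that idea using the librosa library - to be clear, this is not Yin's actual detector, just the obvious first thing you'd try, and the audio file name is made up.

```python
# Rough sketch: lower pitch variability as a crude "flattened affect" signal.
# Assumes a mono WAV file on hand; this is NOT the detector from the article.
import numpy as np
import librosa

def pitch_variability(path):
    y, sr = librosa.load(path, sr=None)
    # Estimate the fundamental frequency frame by frame (pYIN tracker).
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    voiced = f0[voiced_flag]  # keep only frames where a pitch was found
    if voiced.size == 0:
        return None
    # Express pitch in semitones around the speaker's median pitch.
    semitones = 12 * np.log2(voiced / np.median(voiced))
    return float(np.std(semitones))  # smaller spread = flatter, more monotone speech

if __name__ == "__main__":
    spread = pitch_variability("interview.wav")  # hypothetical file
    print(f"pitch spread: {spread:.2f} semitones" if spread else "no voiced frames found")
```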

"Innovation: Google may know your desires before you do"
In future, your Google account may be allowed, under some as-yet-unidentified privacy policy, to know a whole lot about your life and the lives of those close to you. It will know birthdays and anniversaries, consumer gadget preferences, preferred hobbies and pastimes, even favourite foods. It will also know where you are, and be able to get in touch with your local stores via their websites.

Singhal says that could make life a lot easier. For instance, he imagines his wife's birthday is coming up. If he has signed up to the searching-without-searching algorithm (I'll call it "SWS" for now), it sees the event on the horizon and alerts him – as a calendar function can now. But the software then reads his wife's consumer preferences file and checks the real-time Twitter and Facebook feeds that Google now indexes for the latest buzz products that are likely to appeal to her.

"Roila: a spoken language for robots"
The Netherlands' Eindhoven University of Technology is developing ROILA, a spoken language designed to be easily understandable by robots.

The number of robots in our society is increasing rapidly. The number of service robots that interact with everyday people already outnumbers industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. But current speech recognition technology has not yet reached a level at which it is easy to use. Often robots misunderstand words or are not able to make sense of them. Some researchers argue that speech recognition will never reach the level of humans.

I talked about this earlier in the post about machine translation: the reason it sucks is that people never speak clearly, use slang, etc. But if it becomes commonplace, then as it learns to understand slang, we'll also learn how to speak in a way that's easy for the machine to understand and/or translate.

"Speech-to-Speech Android App"

"See what Google knows about your social circle"

Google started including "your social circle" in its search results earlier this year. Ever wonder how Google knows who you know? Wonder no more, as the Mountain View firm offers a page explaining exactly how inter-connected your online life really is.

The link below leads you to a page where Google explains the three levels of contact it can trace between you and other people, with the depth depending on whether you've filled out a Google Profile and how busy you are on Google services like Chat and Reader. You'll see your "direct connections" through Chat and other contact-creating apps, direct connections from sites you've linked to in your profile (including those you follow on services like Twitter), and those friends-of-a-friend through your direct connections.

"Google working on voice recognition for all browsers"
In some ways it seemed inevitable, but in other ways, it's still an awesome idea. InfoWorld reports that Google is building speech recognition technologies for browsers, and not just their own Chrome—all browsers, as an "industry standard." Beyond making certain searches easy to fire off with a spoken phrase, voice recognition might also give the web a whole new class of webapps that listen for audio cues. Do you want your browser to understand what you're telling it? Or is the keyboard still your preferred lingua franca for non-mobile browsing? [InfoWorld]

Thursday, June 17, 2010

Just in Case...


More -

If anyone's reading this and hasn't watched "More" by director Mark Osborne, do yourself a favor and give it a whirl- it's only 6 minutes. (When this link is no longer good, you can probably find it on youtube.)

It's one of those films that becomes more and more relevant every day. Especially amazing is the fact that in 1998 he basically spelled out all the happiness and woe that's still about 15 years out, when AR glasses really take off. Of course scientists and writers had been speculating about similar concepts since the 40's, but for me Osborne really captures it in all its corporate, fluorescent light- "packaged BLISS" - eventually highlighting the fact that people can even tell something's wrong in the way their view of the world has changed, but alas...

Project google goggles is called google GOGGLES for a reason.

Fountain of Youth



NewScientist has a nice article talking about the possibility of immortal life through an avatar by digitally capturing your personality. Like we've talked about before, this would be viewed by your descendants or loved ones after your death in order to give them a momentary sense of comfort/respect.

"Ultimately, however, they aim to create a personalised, conscious avatar embodied in a robot - effectively enabling you, or some semblance of you, to achieve immortality. "If you can upload yourself into this digital form, it could live forever," says Nick Mayer of Lifenaut, a US company that is exploring ways to build lifelike avatars. "It really is a way of avoiding death."

...Like many people, I have often dreamed of having a clone: an alternative self that could share my workload, give me more leisure time and perhaps provide me with a way to live longer.

How my avatar looks may in the end matter less than its behaviour, according to researchers at the University of Central Florida in Orlando and the University of Illinois in Chicago. Since 2007, they have been collaborating on Project Lifelike, which aims to create a realistic avatar of Alexander Schwarzkopf, former director of the US National Science Foundation.

They showed around 1000 students videos and photos of Schwarzkopf, along with prototype avatars, and used the feedback to try to work out what features of a person people pay most attention to. They conclude that focusing on the idiosyncratic movements that make a person unique is more important than creating a lifelike image. "It might be how they cock their head when they speak or how they arch an eyebrow," says Steve Jones of the University of Illinois.

Equally important is ensuring that these movements appear in the correct context. To do this, Jones's team has been trying to link contextual markers like specific words or phrases with movements of the head, to indicate that the avatar is listening, for example. "If an avatar is listening to you tell a sad story, what you want to see is some empathy," says Jones, though he admits they haven't cracked this yet.

The next challenge is to make an avatar converse like a human. At the moment the most lifelike behaviour comes from chatbots, software that can analyse the context of a conversation and produce intelligent-sounding responses as if it is thinking. Lifenaut goes one step further by tailoring the chatbot software to an individual. According to Rollo Carpenter of artificial intelligence (AI) company Icogno in Exeter, UK, this is about the limit of what's possible at the moment, a software replica that is "not going to be self-aware or equivalent to you, but is one which other people could hold a conversation with and for a few moments at least believe that there was a part of you in there".

...Lifenaut's avatar might appear to respond like a human, but how do you get it to resemble you? The only way is to teach it about yourself. This personality upload is a laborious process. The first stage involves rating some 480 statements such as "I like to please others" and "I sympathise with the homeless", according to how accurately they reflect my feelings.

...One alternative would be to automatically capture information about your daily life and feed it directly into an avatar. "Lifeloggers" such as Microsoft researcher Gordon Bell are already doing this to some extent, by wearing a portable camera that records large portions of their lives on film.

A team led by Nigel Shadbolt at the University of Southampton, UK, is trying to improve on this by developing software that can combine digital images taken throughout the day with information from your diary, social networking sites you have visited, and GPS recordings of your location. Other researchers are considering integrating physiological data like heart rate to provide basic emotional context. To date, however, there has been little effort to combine all this into anything resembling an avatar. We're still some way off creating an accurate replica of an individual, says Shadbolt. "I'm sure we could create a software agent with attitude, but whether it's my attitude seems to be very doubtful," he says."

Monday, May 31, 2010

iDollators


While surfing the wrong parts of the internet again, I've stumbled into some research being done for lifesize sex dolls. In case anyone is still in the clear, please make yourself feel at home here in the gutter: http://video.google.com/videoplay?docid=-7277801797935788405#

So these things are slowly gaining the ability to walk/talk in a very rudimentary, creepy style, still very much stuck within the uncanny valley. Safely assuming that technologies for voice recognition, speech synthesis, emotion detection, facial replication, etc. progress at a realistic rate, at some point we'll have something convincing enough that it will start to create quite a few social problems.

On a somewhat related note, here's this video from a while back showing research being done in human-humanoid interaction. Since things will progress, imagine the humanoid as a sexy fembot and the human with sleek, unobtrusive HUD glasses.

Innovation Blues

So I attended this "TechCrunch: Disrupt" event last week by agreeing to work for free in exchange for a ticket (usually $3k or something ridiculous).

Disappointment.

Out of 100 startups, I'd say 70% were something completely mundane: "so with youtube and other current video hosting sites, you're required to convert to certain formats and limit your videos to a certain length- us? No limits."

27%, including the company I was working for, were relatively interesting but at least a year late and in no way revolutionary- basically just slightly more efficient combinations of pre-existing ideas: "alright, this is like youtube but it's mobile, geotagged and social." etc.

The remaining three companies were actually interesting. One was called uJam, which is some sort of app where you can sing a song and then hear it orchestrated with background music. When I stumbled upon their exhibit I was actually a little pissed because I felt like they had stolen my idea from a while ago: "In another situation we're in a group hanging out on the street. Our devices know we're together talking. Suddenly one of the more inebriated amongst us breaks out into song- a drunken rendition of the latest top 40 hit. His device quickly runs a song recognition on what he's singing to identify a possible match, based on what it knows he's listened to lately and in the past [remember, it's hearing what he hears on a daily basis, keeping track the whole time]. Before he's hit the second chorus, it's figured out that he's quoting the latest T-Pain song, although a bit too slow, out of tune and in a different key. Nonetheless, like any good accompanist, the machine tries to follow his singing- it tries to make him sound as good as possible. To accomplish this, it transposes into the tempo and key he's set.
As this happens, everyone in the group hears an accompanying melody fade into what he's singing in real time. Like a live musical or a constant karaoke machine, this device adds acoustic background to whatever it hears. Life becomes a movie as simply hanging out with friends takes on cinematic effects. "


But apparently Bill Gates had said it over a decade ago: "And make no mistake, there will be great applications of all kinds on the Internet - much better and far more plentiful than the ones available today. Many of tomorrow's net applications will be purely for fun, as they are today. ... You might hum a little tune of your own into a microphone and then play it back to hear what it could sound like if it were orchestrated or performed by a rock group."

Bill Gates - The Road Ahead, 1996

and of course people have been dreaming about some sort of "magic harmonizer machine" for ages now, so...


"What has been done will be done again. There is nothing new under the sun"

Ecclesiastes 1:9


Wednesday, May 12, 2010

Emotion Detection Through Voice Making Progress


Computer Software Decodes Emotions Over the Phone

from Discovery News
"THE GIST
  • A company called eXaudios has developed software that detects emotions during a phone call.
  • The program is currently used by companies to assist customer service agents.
  • The versatile software could even soon diagnose Parkinson's disease, schizophrenia and even cancer."

As the computer becomes better at recognizing our moods, it becomes better at positively or negatively changing them. From my post about the digital secretary, which I think really benefits from being read in the context of this software:

"But let's go further and a little bit darker. If we improve CGI and voice simulation, there will be no reason not to have this secretary actually appear as a simulated friend- one who knows what will make you happy depending on your mood, one who won't mind giving you perpetually undivided attention. If it's linked to your cellphone, this friend could also fill you in on things and give you advice:

"you know you seem a little down today- why not try calling Robert or Jessica- you haven't seen them in a while, and last time you hung out you all had a great time." (it was listening to the quality of your voices and watching everyone's facial expressions)

you: "I dunno, what about Eryka, she seems pretty cool... what are the chances that she's into me?"

computer: "approximately 3,720 to 1"

you: "damn"


Even further and much darker, what if we allowed our secretaries to communicate with one another, say even temporarily, like at a party? They would watch everyone's interactions, occasionally chiming in to suggest mingling (think a futuristic version of facebook suggesting that we help people become more social). Toward the end of the night we could begin some sort of Game Theory algorithm where each of the secretary agents would trade data until coming up with a "greatest possible universal happiness" formula which would pair up those of us wanting to go home with someone, or head on to the next party with someone, in the most efficient way. And even if we don't agree to trade data, people will each use their own gathered information to see who they might have a shot with as things are winding down. Of course young people will also learn how to fool the system (like learning to pass a lie detector test) which would have certain advantages. As computer-aided social interaction becomes the norm, how will everything change?"

Sunday, April 11, 2010

Twitter as Ghetto Prediction Engine

Twitter predicts future box office

A study by researchers from HP's Social Computing Lab shows that Twitter does very well in predicting the box office revenue for movies.

[Researchers] found that using only the rate at which movies are mentioned could successfully predict future revenues. But when the sentiment of the tweet was factored in (how favorable it was toward the new movie), the prediction was even more exact.
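Under the hood this is just a regression on two signals, tweet rate and average sentiment. A toy version with invented numbers (scikit-learn standing in for whatever the HP researchers actually used):

```python
# Toy version of the HP study's idea: predict opening revenue from tweet rate
# and average sentiment. All data below is invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# [tweets per hour in the week before release, mean sentiment in -1..1]
X = np.array([
    [120,  0.30],
    [450,  0.55],
    [900,  0.10],
    [1500, 0.70],
    [300, -0.20],
])
# opening-weekend revenue in millions (also invented)
y = np.array([4.0, 18.0, 22.0, 55.0, 6.0])

model = LinearRegression().fit(X, y)
print("coefficients (rate, sentiment):", model.coef_)
print("predicted opening for 800 tweets/hr at sentiment 0.4:",
      model.predict([[800, 0.4]])[0])
```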

But as someone noted in the comments:

Works fine until people realize it works, then they start gaming it, and it stops working.

Wednesday, April 7, 2010

On Micro Donations

What if there were a way to donate small amounts of money almost effortlessly and practically without overhead charges? Then if you saw something inspirational that someone was doing, you might send your thanks/support in the way of, say, 30 cents. And if 10,000 people did the same, suddenly this person would be $3,000 richer and would have gained something for their effort. Thus as a society we would reward things instantly, directly and without any foreseeable detriment. (This fits into the idea that we'll try to rebuild the god that we've essentially destroyed, through an actual mediated social controller.) Think about it: just in the United States alone we've got at least 100 million people for whom a single dollar a month doesn't really matter. If that money became malleable, no one would really be put out and we would have created another outlet for artistic and creative reward- especially important today while these practices are being phased out of the economy by piracy. There could be something like a "five cent club", meaning people would commit to a donation of 5 cents every month to a certain cause, project, or person. The donor wouldn't even notice, but if you somehow managed to get 100,000 five cent members, you would make $60,000 a year. That's pretty cool to think about. There would have to be some sort of regulation mediated through social networking mixed with Paypal, wherein you could only get this "five cent member" icon put on your profile if you were really donating the money. It would be another way of showing your support and would be encouraged through peer pressure. There wouldn't even really be an excuse to stop because it's such a small amount of money. That's a strange thought because this would also make donation into a powerful political tool. For instance, if an artist or organization did something that many of its monthly micro-donors disapproved of, they would feel the effects as people withdrew their membership, which is the beauty of contributions at regularly spaced intervals: they have instant influence through strength in numbers. Cash democracy on every level of society.
Also, there's the whole exchange rate issue. We already see this when people end up sympathizing with the 419 scammers that are cheating them (my mother's friend now regularly sends money and gifts to a guy in India who straight up told her the truth after she called him out on being a scammer). One hundred dollars in India is a big sum of money. $250 can pay for a surgery to fix a child's cleft lip. Perhaps if people could literally see their money heading to a specific kid who they could watch via Facebook for their entire life (and talk to with new translation software), they would be less inclined to selfishly spend $3,000 to pay for a surgery to help their objectively worthless dog live for another two years. You would start to have real worldwide social priorities take precedence over daily and local bullshit. Of course this might create as many problems as it would solve. Capitalism always finds a way of fucking even the best-laid systems. You would have people donating to terrorist organizations disguised as relief spending and all sorts of hoaxes and counter-hoaxes. Either way though, you can see the seeds of this all over the place right now. Insect theory: Practically nothing times a million is suddenly quite a lot.

Evidence:

http://www.charitywater.org/twestival/

This is the closest to what I'm talking about: different cities and organizations teamed up over twitter to raise money for sustainable clean drinking water projects in developing nations like India and Ethiopia. The result was $250K, which is being used to build systems to provide sanitary water to ~17,000 people who previously had none.

Thus we have X number of people (let's call it 60,000 at $4 on average) donating an extremely small portion of their expendable income in exchange for roughly 17,000 people gaining access to the most basic necessity for life. The result is a large net gain in the total amount of happiness in the universe:

[60k * (amount of happiness lost over losing $4) * (-1)] + [60k * (amount of happiness gained over sense of having done good in the world)] + [17k * (amount of happiness gained from not dying)] = GUSH (Global Unified Shift in Happiness)
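And just to make the arithmetic concrete, here's the same formula as a throwaway script - every happiness weight in it is invented, which is of course the whole problem with GUSH:

```python
# Back-of-the-envelope GUSH calculation with invented happiness weights.
donors = 60_000
recipients = 17_000

loss_per_donor  = 0.1    # happiness lost by giving up $4 (made-up unit)
warm_glow       = 0.5    # happiness gained from having done some good
gain_per_person = 100.0  # happiness gained from access to clean water

gush = (donors * loss_per_donor * -1) + (donors * warm_glow) + (recipients * gain_per_person)
print(f"GUSH = {gush:,.0f} (net positive as long as warm glow + recipient gain outweigh the $4)")
```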

http://nonprofit.about.com/od/socialmedia/a/mobilegiving.htm

http://www.qwasi.com/news/blog/micro-donations-generating-millions-in-haiti-relief-efforts-for-red-cross.htm

PayPal 2.0 "Bumps" Money Between iPhones

iPhone/iPod touch: You're settling up a restaurant tab for three. One eater has no cash, the other only twenty-dollar bills, and you're left wondering. If at least two of you have iPhones, PayPal 2.0 lets you "bump" the balance between phones.

Monday, March 22, 2010

Prediction Conviction


First do yourself a favor and read this article: The Predictioneer

For the lazy, basically this fellow Bruce Bueno de Mesquita, a professor at New York University, has been developing a model to predict such seemingly unforeseeable things as who will win an election, or whether a certain bill will make it through Congress, etc. While avid political scientists might be able to make similar claims, Bueno de Mesquita has the statistics to back it up: over 90% accuracy at this point, with an even better model in the works.

So how does he do it?

1. Amazingly informed starting information which he gathers by paying attention to what's going on in the world of politics [for example: "Give a shit factor for health care" Obama 98/100, McCain -77/100, ... "Able to do shit about it factor" Obama 60/100, McCain 5/100, Clinton (Bill) 80/100 ... etc.]

2. Game-theory software, which runs all known variables against each other and predicts an outcome.

Really, it's that simple. The difficult part is obtaining the correct input data, because the engine is only as accurate as the data you provide it, which is why Bueno de Mesquita, being an expert in the world of politics, is able to achieve such accurate results.
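A stripped-down version of that kind of model is just a weighted average: each actor's position, weighted by how much they care and how much clout they have. This is obviously not Bueno de Mesquita's actual engine, just the first-order approximation, with invented names and numbers:

```python
# Toy "predictioneer": expected outcome = influence- and salience-weighted mean
# of every actor's position on a 0-100 scale. Everything below is invented.
actors = [
    # (name, position 0-100, salience 0-100, influence 0-100)
    ("President",  98, 90, 60),
    ("Senator A",  20, 40, 50),
    ("Senator B",  70, 80, 30),
    ("Lobby X",     5, 95, 40),
]

weighted = sum(pos * sal * inf for _, pos, sal, inf in actors)
total    = sum(sal * inf for _, _, sal, inf in actors)
print(f"expected outcome: {weighted / total:.1f} / 100")
```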

Now.

Take something really similar to an election, say high school prom. We want to predict who will end up taking who. Even without knowing English. Even without being able to see the kids to analyze how objectively attractive they each are. Even without knowing who's friends with who on facebook, etc. If we could just track relative movement over time for a number of weeks, I bet that would be enough data to run an accurate model. Luckily Japan has given us this.

So now take a similar example: a night at a club. If we could track movement, we could probably come up with a 10% accurate model of who would go home with who. If we could couple this with eye tracking, we could probably bump it up to 50%. If we could add past relationships for the people involved, to see what facial/personality features they prefer in a mate, we could probably tip the scale closer toward 90%. Now switch into first person- if you're in the club and you want to know who you might have a shot with, then you can work with the software by inputting good data just like Bueno [example: Suzy likes Bill 75/100, Suzy finds guys like Bill attractive ?/100 (now you show the system her past boyfriends on facebook and it analyzes their physical appearance for similarity = 99/100) ... etc. Run this on everyone present, and you can figure out who to talk to. Just keep in mind that everyone else is probably doing the same thing.]
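In code, the club version is just a weighted combination of whatever signals you can observe. A sketch with invented weights - a real system would learn these from data instead of me guessing:

```python
# Crude match-likelihood score from a few observable signals, each scaled 0-1.
# Weights and numbers are invented; a real system would fit them from outcomes.
signals = {
    "time_spent_nearby":     0.6,   # fraction of the night within a few metres
    "mutual_gaze":           0.3,   # share of glances that were returned
    "matches_past_partners": 0.99,  # facial similarity to previous boyfriends
}
weights = {"time_spent_nearby": 0.4, "mutual_gaze": 0.35, "matches_past_partners": 0.25}

score = sum(signals[k] * weights[k] for k in signals)
print(f"chance worth talking to her: {score:.0%}")  # prints something like 59%
```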

Take something less trivial like who's a compatible lifelong mate, and we'll be able to cause all sorts of mischief.

Cyborg Socialization

Monday, March 1, 2010

Reprise: eternal life


Meant to post this last Monday:

I've been ranting about how rudimentary immortality is possible for a while now. Of course it won't be some shiny fountain of life deal, and from your perspective life will definitely end, but at least for everyone else left behind this will serve as a way to remember you and creepily keep you around.

I bring this up right now because tomorrow we'll all see the perfect demonstration of what I'm talking about. On the Oprah Winfrey show, Roger Ebert, who has recently lost the ability to speak due to throat surgery, will be talking through a digital recreation of his former voice. A Scottish company called CereProc has taken audio samples from his old movie commentaries in order to piece together a simulated version of what he used to sound like.

Here's the story.
Also, UPDATE, it didn't sound all that great. Still, if some company in Scotland can pull off something decent, imagine what Google could do- after all, they have announced that real time speech to speech translation will be coming within the next year or so...

Sunday, February 21, 2010

Statistical Mastery


Cellphone traces reveal you're so predictable

We may all like to consider ourselves free spirits. But a study of the traces left by 50,000 cellphone users over three months has conclusively proved that the truth is otherwise.

"We are all in one way or another boring," says
Albert-László Barabási at the Center for Complex Network Research at Northeastern University in Boston, who co-wrote the study. "Spontaneous individuals are largely absent from the population."

Barabási and colleagues used three months' worth of data from a cellphone network to track the cellphone towers each person's phone connected to each hour of the day, revealing their approximate location. They conclude that regardless of whether a person typically remains close to home or roams far and wide, their movements are theoretically predictable as much as 93 per cent of the time.


--
Just another instance of the machine understanding us better than we understand ourselves. Like renting a movie even though netflix says you won't like it. A few days later the formula laughs as you give it a low rating. The god in the machine knows you like no one else.

"Indeed the very hairs of your head are all numbered. Fear not therefore: ye are of more value than many sparrows."

So much more valuable in the whole political economy [politonomy or ecolotics?] of things. $o much more valuable...
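For what it's worth, the 93 per cent number isn't magic: the study measures the entropy of each person's hourly tower sequence and converts it into an upper bound on predictability via Fano's inequality. Here's a rough sketch of that conversion, using plain visit-frequency entropy (cruder than the estimator in the paper) and an invented trace:

```python
# Rough version of the predictability bound: estimate the entropy of a location
# sequence, then solve Fano's inequality S = H(p) + (1 - p) * log2(N - 1) for p.
# Uses simple visit-frequency entropy, not the sequence estimator in the paper.
import math
from collections import Counter

def entropy(seq):
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def max_predictability(seq):
    S, N = entropy(seq), len(set(seq))
    if N < 2:
        return 1.0
    def fano(p):  # right-hand side of Fano's inequality minus the measured entropy
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p) if 0 < p < 1 else 0.0
        return h + (1 - p) * math.log2(N - 1) - S
    lo, hi = 1.0 / N, 1.0 - 1e-9  # bisect for the root (fano is decreasing in p)
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fano(mid) > 0 else (lo, mid)
    return lo

towers = ["home"] * 10 + ["work"] * 8 + ["gym", "bar"] * 3  # invented trace
print(f"upper bound on predictability: {max_predictability(towers):.0%}")
```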

Thursday, February 18, 2010

Living Legends

About a week ago, the guys over at RalVision.ae were inspired by some History Channel concept videos and ended up writing a piece about the educational possibilities for augmented reality- specifically in terms of history and interaction with the past:

Will History disappear, if we can “see” the past via Augmented Reality?

"What if we could harness Technology, to educate and stimulate the younger generation to value and cherish tradition but in a non text book manner, and thus impart education to them with help from the same devices that they seem hooked onto. In effect, hijack these devices in an interesting way, so as to break into a students "Digital personal space" which they are not so keen to give up that easily.

Some noteworthy apps are Layar, WikiTude and Junaio. These applications allow a person to “annotate” the living world. It’s actually blurring the line between the digital and the real world.



The Junaio application allows you to actually have animated 3d objects placed at different locations. Re-animating ancient battles and wars at the actual locations, brings a whole new dimension to learning and keeps History alive! As any place on Earth can be annotated or “geo-tagged”, this will promote the learning of History and heritage when visiting these different countries.

These “Digital Ghosts” will be inhabiting our world alongside us, waiting to be revealed through the viewfinder of a smartphone, and in the next few years via digital sunglasses such as those from Vuzix. This will further blur the line between our present world and History – so will History be “history” if it’s always living with us?"

--

They're definitely on to it, but the idea goes much, much further in terms of the digital ghosts they were talking about at the end. From my earlier post on education:

"...when combined with transparent visuals we get something completely new. Imagine walking up to the Twin Towers and watching a realistic, stationary CGI simulation of its construction in real size [UPDATE]. Time could be sped up to show the building rise in ten minutes or 30 seconds. Then imagine being able to watch a recreation of the September 11th terrorist attack with sound and visuals of explosions, audio bites of news anchors delivering the information, a montage of newspaper headlines, and simulations of running crowds, yelling firefighters, and lots of smoke- in real size and in a sort of transparent half-virtual reality. Or imagine walking onto a battlefield and being able to see a panoramic, 360 degree simulation of the battle of Gettysburg, complete with overhead maps of troop movement and the ability to hit “pause” at any time. Each of these simulations would come with three or four different levels of realism- after all, we probably wouldn’t want to expose a group of ten year olds to the full carnage of warfare uncensored…There would also be much less depressing examples: the flight of the first plane, a volcanic eruption, a solar eclipse, or a Roman sporting event. And here comes the best part: you wouldn’t have to be at these physical locations. Of course it would be more interesting if you were, but there’s no reason you couldn’t run the simulation in the middle of any empty field, park, or parking lot. This would be an educational dream: “Alright kids, watch what happens to the Spanish navy during this storm....”
If nothing else it would keep students entertained, which brings us to our next point: people aren’t going to use this for work so much as for fun. Ignoring any advanced features, this device is already a portable, full-size movie projector which can be linked to watch with friends. It’s also a portable computer, mp3 player, and gaming system. You can imagine any of the examples above being more than just simulations. They could be fully interactive strategy games, where you and other players actually command troops as generals on the field, or you try to shoot down planes before they can hit buildings, etc. This is essentially something like the virtual reality that gamers have been longing for since the days of Donkey Kong, but what’s even better is that it can actually gather information from the real environment to become half real/half digital: mixed reality.
To use a familiar example, people could “carry” their Warcraft characters around with them throughout the day. While turned on, these characters would actually walk through the environment- avoiding walls, traffic, and other real life hazards [all through communication with online maps and visual environmental recognition through the user’s camera]. Waiting in line for something, or just hanging out in a public place, you might see a stranger’s character. He prompts you to fight. For two minutes the sidewalk is lit up with a battle that only the two of you can see. After beating him, you actually walk over and talk about the game- thus creating an opportunity to form a real life friendship through a digital introduction, like a walking social networking site. Coffee and Cigarettes for the 21st century and, counterintuitively, a way to make people less isolated from the real world.
Anthropocentrism
Strangely enough, mixed reality is in many ways actually more compelling than a complete virtual world and it will hold more lasting appeal based purely on human nature. Our psychology is tied to the world we are bound to. Even our fantasies can’t escape: whether Greek, Hindu, Old or New Testament, our (no offence) mythological gods behave as humans and are concerned with our affairs. Our cartoon animals speak, love, and fight, as do the robots. Our stories, dreams, and illusions can reach a high level of abstraction, but they are always anchored to reality. There is another type which is not held to this rule, but as these leave the ground they cease to hold their social power. The fantasies of a madman might contain the most beautiful creations ever imagined, but they are either misunderstood or dismissed as irrelevant to those of us around him- what good is a social commentary on the inhabitants of Europa unless they love and hate like us? Until we can accomplish a Matrix-like “brain in a vat” experience, freely manipulating all five senses and therefore experience itself, the most interesting virtual reality will be that which we paint on top of the existing world around us. This layered world will be the most important cultural development of our generation, and will affect social interaction perhaps more than anything since speech."

Wednesday, February 10, 2010

Sound Problem Solved


From Wired a while ago:

Clear Calls Amid Chaos

"I'm just settling in at a bar when I get a phone call from work. The football game is blaring, people are shouting, glasses are clinking- but I hit"answer" on my Bluetooth headset anyway. The [Motorola] HX1 completely eliminates the barroom ruckus, sending only my speech to my colleague. That's because I can turn off its ordinary microphones, which pick up sound from the air, and instead switch on an ultra-sensitive microphone that listens just for waves conducted through my jawbone. Parked on the earbud's inner tip, this specialized mic uses software to turn the smallest vibrations sent from my throat into a faithful reproduction of my voice. So my colleague can hear me, I can hear her and, best of all, she'll never know I was talking business over a martini."

The part about vocal re-synthesis is interesting, because that would farm quite a bit of data, eventually enabling convincingly real voices from scratch, allowing us to do things like hold conversations with simulated dead relatives or absent friends: how will she react to this? Let's just run a simulation based on the statistics I've collected about her personality, etc.
This will help fight the lonelification of modern life- just like we've chosen perpetual visual satisfaction/stimulation through hypersexual ads, we'll probably choose perpetual aural satisfaction through automatically selected background music, perpetual oral satisfaction through miracle fruit tablets, and perpetual social satisfaction by having ghost friends and relatives around us at all times. You can see the seeds of this every time someone comes to you with a problem, not looking for a solution, but just for the sake of telling someone. In the future, this might often be a simulated someone. Like the Splenda version of human companionship. Actual, uncut, Colombian-grown human companionship will meanwhile become more and more scarce as people are too busy pleasuring themselves to bother.

New hedonism: when people stop giving a shit about not giving a shit about anything.

Sunday, February 7, 2010

PDA helping you score a little PDA



New Toshiba Phone 'Acts Like a Secretary'
"Toshiba is working on a new cellphone with tech that allows it to, uh, behave like a secretary. Apparently, that means it tracks you wherever you go and gives you info without your asking. Sounds like a creepy secretary, Toshiba!
The technology, which could be available for practical use by the end of this year, enables cellphones to "predict" the user's actions based on behavioral patterns monitored by such programs as the Global Positioning System, Toshiba officials said.
The technology also draws on acceleration sensors that detect the handsets' movements, such as rocking and shaking.
For example, cellphones with the technology can automatically display train schedules for the nearest station when the user leaves home in the morning.
It can also recommend places to eat when the user leaves the office for lunch."

--

It makes sense that this type of thing will catch on pretty quickly. Like I've said before, everything has to do with computers automatically understanding very human things like boredom and loneliness. In one sense, this personal assistant is just an extension of what we already use: pandora suggesting songs or google reader suggesting articles it believes we'll find interesting.

How far does this go? I think it's safe to assume that this toshiba model will become much more advanced and as all profiles are merged together (think chrome storing each of your passwords when you hop onto a different computer) they'll in essence become a giant personal secretary unique to each user. As algorithms get better and more information is collected, we'll better trust them to predict other things: "suggest a friend" "suggest a date" etc.

To quote augmented.org:

"Funny applications to socialize try to counter this development of disconnecting-with-the-real-world, which is actually kind of silly. But that’s how it has always been. One technology may drive us apart, another technology is needed to bring us together again."

But let's go further and a little bit darker. If we improve CGI and voice simulation, there will be no reason not to have this secretary actually appear as a simulated friend- one who knows what will make you happy depending on your mood, one who won't mind giving you perpetually undivided attention. If it's linked to your cellphone, this friend could also fill you in on things and give you advice:

"you know you seem a little down today- why not try calling Robert or Jessica- you haven't seen them in a while, and last time you hung out you all had a great time." (it was listening to the quality of your voices and watching everyone's facial expressions)

you: "I dunno, what about Eryka, she seems pretty cool... what are the chances that she's into me?"

computer: "about 3%"

you: "damn"


Even further and much darker, what if we allowed our secretaries to communicate with one another, say even temporarily, like at a party? They would watch everyone's interactions, occasionally chiming in to suggest mingling (think a futuristic version of facebook suggesting that we help people become more social). Toward the end of the night we could begin some sort of Game Theory algorithm where each of the secretary agents would trade data until coming up with a "greatest possible universal happiness" formula which would pair up those of us wanting to go home with someone, or head on to the next party with someone, in the most efficient way. And even if we don't agree to trade data, people will each use their own gathered information to see who they might have a shot with as things are winding down. Of course young people will also learn how to fool the system (like learning to pass a lie detector test) which would have certain advantages. As computer-aided social interaction becomes the norm, how will everything change?
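The "greatest possible universal happiness" pairing at the end of the night is, in miniature, an assignment problem, and that part is already solved math. A toy sketch with invented compatibility scores, using scipy's Hungarian-algorithm solver:

```python
# Toy version of the end-of-night pairing: maximise total compatibility between
# two groups of guests. Scores are invented; scipy solves the assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

guests_a = ["Alice", "Bea", "Cleo"]
guests_b = ["Dan", "Eli", "Finn"]

# compatibility[i][j] = how well guests_a[i] and guests_b[j] would get along (0-1)
compatibility = np.array([
    [0.9, 0.2, 0.4],
    [0.3, 0.8, 0.5],
    [0.6, 0.4, 0.7],
])

# linear_sum_assignment minimises cost, so negate to maximise compatibility
rows, cols = linear_sum_assignment(-compatibility)
for i, j in zip(rows, cols):
    print(f"{guests_a[i]} <-> {guests_b[j]} ({compatibility[i, j]:.1f})")
print("total compatibility:", compatibility[rows, cols].sum())
```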

Friday, February 5, 2010

Know Thyself

Let's take a look at what we talked about before, that is machine learning and software eventually sort of understanding us better than we understand ourselves. Since there are obvious monetary/political advantages to pulling this off, advertising companies and government projects are a good place to start:

1. Cognitive Match Secures Another $2.5m For Realtime Matching

The Cognitive Match startup is applying artificial intelligence, learning mathematics, psychology and semantic technologies to match content (product, offers, or editorial) to realtime content. It’s doing this in part by relying on an academic panel of professors in artificial intelligence from Universities across the UK and Europe who specialize in machine learning and psychology. The idea is to ensure maximum response from individuals, thereby increasing conversion, revenue and ultimately profit.

The premise of Levine’s company, Innerscope, is that running this data through algorithms can tell advertisers which commercials work and which don’t. They can quantify your subconscious responses to advertisements without resorting to the messiness of human language.

3. Navy Wants Troops Wearing Brain-Scanners Into War

The Navy’s Bureau of Medicine and Surgery is requesting proposals for a brain-scanning system that can assess a myriad of neuro-cognitive abilities, including reaction times, problem solving and memory recall. The scanner would also test for preliminary warning signs of post-traumatic stress, anxiety and depression, using the Trail-Making Test: a series of connect-the-dot exercises that’s been used by the military since the 1940s. And not only should the system be portable, but the Navy wants it to outlast the most extreme weather conditions, from desert heat to Arctic cold.


4. HIDE – Homeland Security, Biometric Identification & Personal Detection Ethics

HIDE is a project promoted by the European Commission (EC) and coordinated by the Centre for Science, Society and Citizenship, an independent research centre based in Rome (IT).

HIDE aims to establish a platform devoted to monitoring the ethical and privacy implications of biometrics and personal detection technologies. Detection technologies are technologies used to detect something or someone within a security or safety context. Personal Detection Technologies focus specifically on individuals, they include for example CCTV, infrared detectors and thermal imaging, GPS and other Geographical Information Systems (GISs), RFID, MEMS, smart ID cards, transponders, body scanners, etc. Biometrics is the application of technologies that make use of a measurable, physical characteristic or personal behavioural trait in recognizing the identity, or verifying the claimed identity of a previously registered individual.


5. ADABTS (Automatic Detection of Abnormal Behaviour and Threats in crowded Spaces) aims to facilitate the protection of EU citizens, property and infrastructure against threats of terrorism, crime, and riots, by the automatic detection of abnormal human behaviour. Current automatic detection systems have limited functionality, struggling to make inferences about the acceptability of human behaviour.

--
We could keep going, but that's enough for now. This last one is pretty interesting- and don't worry, I'm not about to start talking Orwell/Minority Report. Biometrics when hooked into a bunch of wires, sitting in a chair, is one thing. Biometrics being read by simply analyzing visual/sonic information is another. This British system is supposedly working on algorithms to detect evil intentions through facial cues, allegedly in order to stop potential terrorists/criminals before they're able to do anything. So let's talk about the fun, non-military, non-crime-fighting, personal version of this type of thing. If we're eventually all wearing cameras and microphones, then we have the same tools at our disposal as the British government, just on a small scale. The advantage we also have is being able to manually tag incoming information to help the computer: that was Mark who I was talking to for the last hour. Next time you talk to Mark, it recognizes his voice and adds important information to your growing collection of his statistics. Three months later, after the computer has a pretty good idea of what he sounds like when you talk to him, all of a sudden it lets you know that he's either sick, tired, or depressed, judging by his abnormal facial expressions, less emotional voice, and sparser comments. It also lets you in on the fact that Leah, who you just met at a party, is probably attracted to you, judging by her tracked eye movement, increasingly engaged responses, and infrared temperature patterns. As the judicial system spins all out of whack, so will interpersonal relationships, art, and love.
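The "Mark sounds off today" part is really just anomaly detection against a personal baseline: keep running statistics on a few features per person and flag the conversations that land several standard deviations out. A minimal sketch, with invented feature values:

```python
# Minimal per-person baseline: flag a conversation whose features sit far from
# that person's running mean. All feature values are invented for illustration.
import numpy as np

# a few weeks of (pitch variability, speaking rate, words per reply) for "Mark"
baseline = np.array([
    [2.1, 3.4, 18], [2.3, 3.6, 22], [1.9, 3.5, 17],
    [2.2, 3.3, 20], [2.0, 3.7, 19], [2.4, 3.5, 21],
])
today = np.array([1.1, 2.4, 7])  # today's conversation

mean, std = baseline.mean(axis=0), baseline.std(axis=0)
z = (today - mean) / std  # how many standard deviations off each feature is
if np.any(np.abs(z) > 2):
    print("Mark seems off today:", dict(zip(["pitch var", "rate", "reply length"], z.round(1))))
```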

Wednesday, February 3, 2010

Laughing at themselves, to themselves

Opwats. You've seen it, you've been affected by it, you've read articles written under its influence, hell you might even have it.

Old People With Advanced Technology Syndrome. Opwats

Common examples:

1. That guy wearing his bluetooth in church whose most urgent possible phone call would probably be coming from his wife (sitting next to him) making sure he doesn't forget to pick up milk on the way home

2. a txt msg frm ur mum tht luks like ths n dsnt rly say nething imprtnt @ all n is less efcnt bcuz its hardr 2 read neway damit

3. a professor trying to maximize youtube

4. using realplayer by choice

5. being overly impressed by apple products

6. conspicuously entering the simplest meeting into your planner/pda/calendar instead of hiding that weakness like a decent person

7. way too much public information from people over 30 on facebook who apparently can't help but check the shiny "reply to all" button

etc.

Opwats, a tragedy as annoying as it is deadly (people who can barely work a cell phone now trying to sync blueteeth while driving sounds like a great idea...)

Wednesday, January 27, 2010

On Public Domain

Public Audio and Score Database


At a certain point, depending of course on the specific copyright laws of each country, intellectual creations become public domain. For the purpose of what we're talking about here, we'll be assuming that everything made before 1923 is fair game in the US (actually much more complicated). Now let's talk about music. The International Music Score Library Project (IMSLP) has already gathered a huge collection of public domain scores. This includes among other things the entire collected works of many of the masters: Bach, Beethoven, Chopin, etc., which means that anything written by these men can be accessed for free by anyone through the internet in a type of Wikipedia-esque expanding collection. Their work has basically been absorbed into the intellectual commons of the entire human race- no one "owns" Mozart; rather we are all the proud inheritors of this man's genius. This score library then acts as a way of providing an intellectual right that everyone freely deserves. But there's one problem: no recordings.
Even though all this old music is technically public domain, individual recordings made today would be owned by whoever made them as their private intellectual material. This is why the Berlin Philharmonic is able to charge for a recording of Beethoven's 9th Symphony, even though the music itself has been in the public domain for at least 100 years. What people are paying for is the recording itself, which is why no one can post it side by side with the free public domain score without first giving the Berlin Philharmonic a ($ubstantial) cut. Therefore we're stuck in a system whereby the only way a recording can be provided is if an organization donates it (which won't happen because they would be directly cutting into their own profits) or if the recording itself is old enough to be considered public domain. Unfortunately the latter option leaves us with haphazard recordings made before 1923, which must be located, digitized (vinyl to mp3), and forgiven for the poor recording quality of the time. This is as inconvenient as it is absurd.
Let's look at another option. What we already have in the IMSLP are scanned .pdf copies of scores (with more being added every day, much like Project Gutenberg). Through the use of recognition software, these scans could be automatically converted into midi files, the same way scanned pages are converted into text. Project Gutenberg itself uses this method. Now almost everyone is familiar with how horrible a typical midi file sounds, but it doesn't have to be that bad. A company called Garritan (among others) has created an advanced library of audio samples of actual orchestra instruments which it has engineered for midi playback. What this creates is an audio file of decent quality (it's not the Berlin Philharmonic, as they'd be the first to tell you) that is completely copyright free- except to the writers of the software itself.
So what is Garritan's motivation to allow all of these recordings to be given to the public for free, you might ask? Advertising, plain and simple. They are trying to sell their product for the use of composers who need to be able to hear their own works in progress as they write them (assuming they don't have an actual orchestra at their disposal). So by establishing themselves as the universal authority on instrumental synthesis, Garritan would be simultaneously creating a constant advertising campaign for the quality of their product. For a working model of this technique, just look at Adobe Reader- anyone can download it for free, but in order to actually manipulate and create similar documents, you need to purchase Adobe Acrobat, for which they happily charge a hefty sum.
Another thing that would start to happen as this score library became more popular would be that certain orchestras would begin to donate specific recordings. [This already happens on Wikipedia, usually with college or non-professional groups.] For example, the San Francisco Symphony might be putting on a production of Mahler's Kindertotenlieder and could therefore post a simple recording of one of their old performances of the same piece. The library, in its recording section, would then replace its advertisement for Garritan with something along the lines of, "This recording has been generously donated by the San Francisco Symphony [link to site]. Many of their other recordings can be purchased here [link to itunes or amazon] and a list of their upcoming performances can be seen here [link to calendar]." As this database starts to expand, orchestras will realize that they no longer have a stranglehold on the recording market, and must therefore find different ways to make money, one of which is through increased performance attendance, as well as added pressure to perform contemporary (copyrighted) works of current composers, the recordings of which they will again have a monopoly on. This will stimulate musical progress and perhaps loosen the hold that music of the Romantic era has artificially maintained for so long...
In the end, what we'll have created is an audio-visual library of all surviving music written before WWI, which could be accessed instantly by anyone in the world for free, which could also be integrated into wikipedia to form the ultimate musical educational tool.
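The score-to-audio step in the middle of this pipeline is already doable with off-the-shelf tools. Here's a minimal sketch using the music21 library - it assumes you already have a machine-readable score (MusicXML) rather than a raw scan, and it leaves the actual sample-library rendering to whatever synth you point the MIDI at:

```python
# Minimal sketch of the score-to-MIDI step: parse a machine-readable score and
# write a MIDI file that a sample library (Garritan or similar) could render.
# Assumes 'symphony.xml' exists; OMR from a scanned PDF is a separate problem.
from music21 import converter

score = converter.parse("symphony.xml")  # public domain score in MusicXML
score.write("midi", fp="symphony.mid")   # MIDI out, ready for synthesis
print("wrote symphony.mid with", len(score.parts), "parts")
```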

Just a Word About Food

2008

Every restaurant should have a complete list of the dishes they serve available online. This should be hooked into an unaffiliated food profiling service which keeps track of what you eat and how much you enjoyed it like netflix. As this starts to build it would become quite useful. Say you walk into a restaurant that you've never been to before. Instead of searching through the menu in order to guess at what sounds good before the waiter starts to get impatient you could simply consult your profile via your cellphone. You tell it the name of the restaurant that you're at and it gives you a list of the top ten dishes it thinks you'll enjoy based on what you've rated in the past and what "connoisseurs like you" have said about this restaurant's various options. On a more practical level it also keeps track of allergies and disdain for particular items: "Thank god it told me that they make the sauce with soy milk, because I'm so allergic that I probably would have died." It would also tell you things about yourself that you wouldn't have figured out otherwise: "Based on your hatred of these ten dishes, it seems you do not like their shared ingredient, cilantro." Or say you get sick after you eat certain foods. You let it know each time this happens and it examines common ingredients to figure out that you're allergic to eggplant. Next time you ask them to leave it out. Problem solved. You could also see trends of food poisoning which would tell you where to avoid. It seems that it would be impossible to keep neighboring restaurants from sabotaging each other with fake claims, but maybe we'll find a solution- humanistic input and participation requirements would protect against primitive bots which might make large scale sabotage too inefficient to become a problem.....
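The "you apparently hate cilantro" inference is just a set intersection over the dishes you've rated badly. A small sketch with a made-up menu:

```python
# Sketch of the "shared ingredient" inference: intersect the ingredient lists of
# every dish the user rated badly. Menu data and ratings are invented.
dishes = {
    "pho":         {"beef", "noodles", "cilantro", "lime"},
    "fish tacos":  {"fish", "tortilla", "cilantro", "cabbage"},
    "salsa verde": {"tomatillo", "cilantro", "onion"},
    "margherita":  {"tomato", "mozzarella", "basil"},
}
ratings = {"pho": 1, "fish tacos": 2, "salsa verde": 1, "margherita": 5}  # 1-5 stars

hated = [dishes[d] for d, r in ratings.items() if r <= 2]
common = set.intersection(*hated) if hated else set()
print("You seem to dislike anything containing:", ", ".join(sorted(common)) or "nothing obvious")
```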

Update:

MenuPages Brings Restaurant Menus to Your iPhone [Downloads]

Pandoras Disk Jocks

3.09

Short: Since we all have musical profiles, there should be a way to listen to universally appealing music automatically, like a lowest common denominator for every possible audience. Extending this, when we go to clubs we should upload our profiles so that the dj would mix more efficiently and the music would change with the type of people in the room.

Long:

Many of us have an ever-growing musical profile attached to us via Pandora, iTunes or similar such programs. These keep track of what songs we listen to and how often. In the case of Pandora, the service actually tries to understand the musical "taste" of its users based on hundreds of criteria (repeating form, sad lyrics, solo guitar, etc.). Since most of us have these profiles, we should use them for more than our own interest, i.e. "did I really listen to that song six times yesterday?"
Pandora already suggests music that it thinks you would enjoy (similar to Netflix), but much more could be done. It's amazing we aren't doing this already. For example: I'm with a friend. We hop into a car to go on a road trip. Obviously music is going to be an awkward problem on a long trip with someone whose tastes might be completely different. As a solution, instead of us taking uneven and annoying turns trying to guess what would be most palatable for each other, we simply log into BOTH of our Pandora or iTunes accounts simultaneously. These two profiles then compare notes on our listening habits and create a playlist based on the highest possible common denominator: what will make both of us most happy and the least annoyed. Though we each have unique collections, there are probably a dozen or so albums that we both happen to own or listen to, which would be more than enough music to last a day's worth of driving. It could also determine characteristics that we both appreciate in music and try to add variety from there (like the music genome project). Obviously it won't be perfect, but this will be remedied through the manual skip of bad songs (like a veto by either party). The service could also just provide a list of the most congruous thousand or so songs which we could look through and select from manually.
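For two accounts, the merge is trivial once you treat each profile as a dictionary of play counts: intersect, then rank by the lower of the two counts so neither person gets steamrolled. A toy sketch:

```python
# Toy two-profile merge: keep music both people play, ranked by the *smaller*
# of the two play counts so the playlist favours genuinely shared taste.
my_plays    = {"Kid A": 40, "Thriller": 12, "Attica Blues": 9, "Illinois": 25}
their_plays = {"Thriller": 30, "Illinois": 5, "Graduation": 60, "Kid A": 3}

shared = set(my_plays) & set(their_plays)
playlist = sorted(shared, key=lambda album: min(my_plays[album], their_plays[album]), reverse=True)
for album in playlist:
    print(album, "-", min(my_plays[album], their_plays[album]))
# prints Thriller, Illinois, Kid A in that order
```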
Expanding this model, the same process would work with three, four or more people. For house parties, instead of someone manually making all the decisions, everyone would log into their accounts when they arrive, thus changing the mix as different crowds come and go. This could even work in a club. Hundreds of people could contribute their musical desires, automatically creating a scene that would change as often as the people within it.
"...man's relationship to his environment has changed. As a result of cybernetic efficiency, he finds himself becoming more and more predominantly a Controller and less an Effecter"
-Roy Ascott, 1964
The DJ then becomes an interpreter of this massive amount of information. He might see that many people present have recently started listening to a certain popular song. Because he knows that they'll enjoy dancing to their newfound titillation, he mixes the song's chorus a number of times throughout the night, combining it with other songs, thus creating a remediated version of something he can be sure people like. His melodic insertions also become an interesting form of communication. Suppose someone with uncommon preferences is in the building- we'll call him Fred. Our DJ sees that Fred mostly listens to obscure jazz from the 60's and 70's and that he has listened to Archie Shepp's Attica Blues album nearly 10 times in the past week. Our DJ also likes this obscure subsection of the musical world, which is why he noticed Fred in the first place (this collection of profiles shows him the most congruous/unique individuals in the crowd). Because Fred had obviously taken a recent liking to this record, our DJ takes a quick listen. It's good. The first track is a type of funk tune that features a great drum intro. He quickly cuts this intro, loops it and mixes it in as a background to an a cappella recording of a pop track. Of course Fred would be delighted and, as a way of saying "thanks for turning me on to some good music", our DJ even sends Fred the recorded mix with his loop in it as a type of memento and personal leitmotif. The next time Fred comes out and logs in, his profile will be tagged with this recording, which the next DJ will be free to include or exclude at his discretion.
More than a simplistic improvement in song selection, this tool would show what people had listened to, how many times, and how recently. Therefore, to the trained interpreter it would reveal the actual mental state of the individuals and the crowd itself. He actually sees what is fresh in people's minds and therefore ripe for manipulation and artistic communication. This all plays into a mental commodification of attention and relevancy. Things that are more pervasive in people's recent memory carry more sway and could theoretically have an effect on an unprecedented scale. A club's atmosphere would also change in real time with the feeling of its patrons.

For People Who Hate Shopping

Every clothing store should have a complete list of its inventory available online. This should be hooked into an unaffiliated fashion profiling service which keeps track of what you wear, in what size, and how often (frequency = enjoyment), like Netflix. As this starts to build, it would become quite useful. Say you walk into a store that you've never been to before. Instead of haphazardly searching through racks of clothes, making sure to try everything on in three sizes to find the right one, you could simply consult your profile via your cellphone. You tell it the name of the store that you're at and what you're looking for, and it gives you a list of suggestions that it thinks you'll enjoy based on what you've rated in the past and what "shoppers like you" have said about various pieces of clothing. On a more practical level, it also keeps track of what colors you prefer and your exact measurements to determine the proper size without trying everything on (negating the overhead caused by changing rooms and all the reshelving they necessitate, which would be a way to get stores to agree to put their inventories online). It would also tell you things about yourself that you wouldn't have figured out otherwise: "Based on your hatred of these ten pairs of pants, it seems you don't like anything boot-cut. We'll keep this in mind for future suggestions." Or say you always wear a particular item. Since the system keeps track of frequency of use, it could search for similar items, or accessories that would complement the style. You could also see which stores would have the largest or cheapest selection of clothes you would probably like, which would make for much more efficient shopping. It could even predict seasonal sales, as bing.com does with airfare, telling you when to buy what. So shoppers would save time, stores would save overhead, and designers would have free* access to the largest survey pool possible: the general public.
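As a rough sketch of the profile logic described above (the catalogue format, profile fields, and scoring are all hypothetical placeholders, not any store's actual system):

```python
# Hypothetical sketch of the fashion-profile idea. The catalogue format,
# profile fields, and scoring are assumptions for illustration only.

profile = {
    "size": "M",
    "liked_colors": {"navy": 5, "grey": 3},   # weighted by how often each is worn
    "disliked_fits": {"boot-cut"},            # learned from past rejections
}

catalogue = [  # one unfamiliar store's online inventory
    {"name": "Slim chinos",    "color": "navy", "fit": "slim",     "sizes": {"S", "M", "L"}},
    {"name": "Boot-cut jeans", "color": "blue", "fit": "boot-cut", "sizes": {"M"}},
    {"name": "Grey oxford",    "color": "grey", "fit": "regular",  "sizes": {"S", "L"}},
]

def suggest(profile, catalogue):
    """Rank items the shopper is likely to enjoy and can actually wear."""
    picks = []
    for item in catalogue:
        if profile["size"] not in item["sizes"]:
            continue                                  # ruled out without a changing room
        if item["fit"] in profile["disliked_fits"]:
            continue                                  # "it seems you don't like boot-cut"
        score = profile["liked_colors"].get(item["color"], 0)
        picks.append((score, item["name"]))
    return [name for _, name in sorted(picks, reverse=True)]

print(suggest(profile, catalogue))   # -> ['Slim chinos']
```

The "shoppers like you" part would then be ordinary collaborative filtering layered on top of these per-item ratings.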

S O C I A L freaking E F F I C I E N C Y

Also, if we connect this to the attraction meter, you could get something like, "Statistically speaking, if you'd like to attract men ages 20-23 in this area, we recommend this outfit," or even better: "Based on the women you've rated attractive in the past, and their demographic and aesthetic choices, an outfit like thi$ would give you the best chances."

How would people react to that?

12. Trading Experience

The basic premise of this whole system is to create something which can record and affect sight and sound. On the visual side, there are cameras that know what you're looking at and are constantly recording video and taking still pictures. On the auditory side, there is a pair of stereo microphones recording everything you hear: ambient sounds, street musicians, conversations, etc. What we are left with is something like a first-person movie of someone's life.
Think about how that would affect everything: the idea of subjective, non-transmittable experience begins to fall apart. If people want to know what a day in the life of a typical working-class resident of Tokyo looks like, they'll simply download a user-submitted video of someone’s experience, which they can watch in fast motion or even in real time. This way, they literally see through someone else's eyes and hear through their ears. Instead of kids sending their parents pictures after they move out, they'll simply send them a one-hour clip or montage of what life looks like in their new city. Instead of simply relating stories to one another, we can pull up a clip and show them: “You should have seen the look on his face” becomes “here, you’ve got to see the look on his face,” because the machine is always recording, and therefore captures life in a way that conventional media simply can’t. Selection is now a post-production process.
Think about how this would affect the “truth” of representations of political and social conflicts. Today we might hear that there were violent protests in Cairo, ending in countless injuries as police forcefully broke up the crowd. We have to accept this at face value and do our best to imagine the severity of the situation based on the slant and reliability of our source. When these devices are commonplace, however, such a protest will have at least 1,000 different vantage points, which could theoretically be viewed twenty at a time on a split screen to get a feel for how things really went. At some point, computer rendering will reach the point at which it could actually create a 3D replica of the scene in its entirety, based only on the vantage points and camera tracking of many users, submitted and combined. This scene could then be watched in slow motion from above to see what really happened: a view previously available only to the gods.
On the subject of divinity, what this device also moves toward is a technological recreation of ancient social constructs that we ended up destroying some time in the 19th century. Religious debate aside, what are the most basic consequences of the existence of god for the individual and for society as a whole? It is the knowledge, or at least the assumption, that all actions, though secret to all earthly witnesses, are still seen, understood, and judged by an omniscient, divine being. This, coupled with the expectation of retribution for good and bad deeds, has a profound effect on the free will of the individual and arguably beneficial consequences for the functioning of society as a whole. People who are scared of god do not steal just because no one is watching.



“You must mean ‘Yes’ when you say ‘Yes’. You must mean ‘No’ when you say ‘No’.”
-Matthew 5:37

So what’s very interesting is that through these technologies and their common use, what we’ll have essentially created is an artificial deity: something that is constantly watching and recording what we do. Imagine an argument in this context:

A: “You promised that you would stop smoking after graduation!”
B: “No, what I said was that I would try to quit. I never said that I definitely would.”
A: “Oh no you don’t. I have it right here. Look!”

Suddenly they both see an actual recording in which he says, “alright, alright, I promise that I’ll quit right after graduation….” Argument over. But of course knowledge of the rolling tape would cause people to make fewer promises, or to follow through on the ones they did make for fear of being considered a blatant liar. “Thou shalt not lie,” enforced by the new digital god.
You can already see this effect: there have been many reports of police brutality and corruption decreasing as cell phone cameras become more and more ubiquitous. Sure, it still happens, but cops understand that there’s always the possibility of someone watching, and so it has become less common. It’s important to remember that all of these things are double-edged, though. Just imagine the new possibilities for false confessions and fabricated evidence as voice and video simulation become indistinguishable from the real thing. What will become of objective “truth”?

At any rate, these are concerns not of the next generation or of some arbitrary time in the future; they are our own. The seeds of this giant system have already been sown, and the hardware to make it work is about to be on the market. Almost everything I talked about is possible now, as in 2009, and the rest is only a few years and a tiny bit of effort away. The effects will be greater than anything we’ve seen for a long time; perhaps too great, too quickly. As we begin to augment the world around us, there’s always the direct risk of losing touch with reality. Like a group of oblivious Icaruses, we run the risk of floating farther and farther away from the solid ground of reality, only to suffer a painful awakening as we hit the ground, realizing that the world went to hell while we were busy being entertained by our metallic blindfolds.


“Use the internet to get off of the internet.”

I talk quite a bit about the use of this device with friends in a social setting, but that is perhaps only wishful thinking, an attempt to ignore the fact that many of these systems really encourage an introspective and anti-social existence. I’ve already witnessed, or been part of, so many conversations that end in the awkward realization that neither person has any common subjects to talk about: “you haven’t heard of this band? Well, they’re really great, you should check ‘em out.” “Oh, maybe I will.” And it’s over. It’s important to start thinking about these things sooner rather than later.

“If we, in a small way, make human tasks easier by replacing them with a machine execution of the task, and in a large way eliminate the human element in these tasks, we may find we have essentially burned incense before the machine god. There is a very real danger in this country in bowing down before the brass calf, the idol, which is the gadget.” -Norbert Wiener, 1954

I learned how to shave online.
Think about that.

Yours,
Edouard Cabane

June 2009


Relevant links and updates:

by Michael Arrington on September 6, 2009

Imagine a small device that you wear on a necklace that takes photos every few seconds of whatever is around you, and records sound all day long. It has GPS and the ability to wirelessly upload the data to the cloud, where everything is date/time and geo stamped and the sound files are automatically transcribed and indexed. Photos of people, of course, would be automatically identified and tagged as well.

Imagine an entire lifetime recorded and searchable. Imagine if you could scroll and search through the lives of your ancestors.

Would you wear that device? I think I would. I can imagine that advances in hardware and batteries will soon make these as small as you like. And I can see them becoming as ubiquitous as wrist watches were in the last century. I see them becoming customized fashion statements.

Privacy disaster? You betcha.

But ten years ago we would have been horrified by what we nonchalantly share on Facebook and Twitter every day. I always imagine what a family in the 70s would think about all of their photo albums being posted on computers and available for the entire world to see. They’d be horrified, they couldn’t even imagine it. Heck, a life recorder is less of a privacy abandonment step forward than we’ve already taken with the Internet and electronic surveillance in general.

A Business Week article talks about a ten year old Microsoft project called SenseCam (more here) that is just such a device.

It’s clunky today and doesn’t do most of the things I mentioned in the first paragraph above. But a true life recorder that isn’t a fashion tragedy isn’t that far away.

In fact I’ve already spoken with one startup that has been working on a device like this for over a year now, and may go to market with it in 2010.

The hardware is actually not the biggest challenge. How it will be stored, transcribed, indexed and protected online is. It’s a massive amount of data that only a few companies (Microsoft, Google, Amazon) are equipped to really handle anytime soon.

But these devices are coming. And you have to decide if you’ll be one of the first or one of the last to use one.

Will you wear one? I will. Let us know in the poll below.

-----------

by MG Siegler on August 19, 2009

There’s nothing cool about crime, but Stamen Design comes pretty damn close to making it cool with the new site it built and designed, San Francisco Crimespotting, that launched today. The site offers a visual representation of reported crimes in the city during a set period of time. Various types of crime ranging from alcohol-related to theft to murder are represented by different color dots placed on a map of the city.

Not only does this visually show you possible trends in various types of crime, but you can manipulate both the date range and time range to further drill into the data. Not surprisingly, there are more crimes committed at night, but it’s interesting to see the trends in crime during some months versus others. If you zoom in, you can click on any of these dots to get more information about the actual crime, including the police report number.

As the site describes it, Crimespotting is “a tool for understanding crime in cities.” It also notes:

If you hear sirens in your neighborhood, you should know why. Crimespotting makes this possible with interactive maps and RSS feeds of crimes in areas that you care about.

We’ve found ourselves frustrated by the proprietary systems and long disclaimers that ultimately limit information available to the public. As citizens we have a right to public information. A clear understanding of our environment is essential to an informed citizenry.

The San Francisco launch follows the Oakland version of the site in 2007, as LaughingSquid notes. But the San Francisco version features several of the newer updates, including the sort-by-hour and days feature.

One thing that would make the site even better is if there was real-time data for crimes being reported. Unfortunately, much of the data is days or even weeks old, as the site clearly notes along the top. But the APIs for this data could lead to even more interesting uses. You can find out more about those here.

The site is quick to note that it is in no way affiliated with the city of San Francisco or the SFPD. Again, it just uses the publicly available data to build these maps.