
Straight outa Dilbert: Could a robot go to meetings for you?

My to-do list for 2012: Create invention. Change the world. Get rich.

The invention: A robot that goes to meetings for you. 

Changing the world: In addition to freeing up a load of time, this invention will prevent a ginormous number of needless conflicts. (If you hadn’t been at the meeting, you wouldn’t have responded to your colleague’s idiotic comment, and we’d all be better off, right?)

Getting rich: This part seems obvious, since everybody needs one.

So, job one: Major R&D. How convincing can this robot be?

The Clever Apes guys recommended talking with Malcolm MacIver, a Northwestern University scientist who has consulted with the producers of the Battlestar Galactica prequel Caprica and the movie Tron: Legacy. In other words, he’s one of the guys Hollywood people call when they want to know, “How do we make this robot really lifelike?”

And he’s built these crazily awesome robotic fish. (They even sing.) But fish--even singing fish--don’t go to meetings.

So MacIver told me about a project by a colleague of his in Japan, Hiroshi Ishiguro. “He had the Japanese movie-making industry create a stunningly accurate reproduction of him,” MacIver says. “So he can send his physical robot to a meeting and it will smile and furrow its brow—and talk through his mouth.”

How accurate are we talking about? “It’s realistic enough that he doesn’t want to show [it to] his young daughter,” MacIver says, “because he thinks it would creep her out.”

Wow.  So, is this Ishiguro guy beating me to market?  No, as MacIver describes things, it sounds like he’s mainly using it for pure research.

Ishiguro uses the robot to learn about non-verbal elements of communication “by disrupting them,” says MacIver. “So you can say, ‘OK, I’m going to shut off eyebrow movement today, and how does that affect people’s ability to understand what I’m talking about?’ You know, are they still able to get the emotional content?”

So, back to stunningly accurate: Ishiguro’s robot would creep out a three-year-old... but does it fool his adult research subjects? Would it fool my colleagues, if I left eyebrow movement switched on?

Not so much, says MacIver.

What if he just got a much, much bigger grant?

“Um, unlikely,” MacIver says. 

OK. Super-lifelike equals no go. Moving on....

Someone mentioned to me that there’s a robot that listens really well. It can kind of convince you that it’s listening to you. When I saw the YouTube video, it looked like WALL*E.

It had these big goggle eyes that bug out a little bit; it would look down; it would respond emotionally to you. The point of the experiment was—I mean, it was kind of heartbreaking—could you make old people in nursing homes less lonely, if they had someone to listen to them, and would this do it?

And even watching just ten seconds of this guy in the lab coat, you think: Yeah, maybe.

So, I tell MacIver, now I’m starting to think that the robot should be a cartoon version of me.

“Well, right, that’s a good point,” he says. “If you can’t do it perfectly, go to the other side of the uncanny valley and you’ll be more effective.”

The “uncanny valley” turns out to be this phenomenon where, when animated characters—or robots—get too real-looking, they become creepy. Like in the 2004 movie The Polar Express.

Lawrence Weschler explained it this way in a 2010 interview with On the Media:

If you made a robot that was 50 percent lifelike, that was fantastic. If you made a robot that was 90 percent lifelike, that was fantastic. If you made it 95 percent lifelike, that was the best – oh, that was so great. If you made it 96 percent lifelike, it was a disaster. And the reason, essentially, is because a 95 percent lifelike robot is a robot that’s incredibly lifelike. A 96 percent lifelike robot is a human being with something wrong.

So: I want a cartoon avatar. 

That’s one question down, but there’s a lot more R&D to do. Next, I think I need to talk with some Artificial Intelligence specialists...

... to make sure that the robot knows what to say if someone in the meeting asks “me” a question.  (I’ve got some ideas, but they’ve probably got better ones.)  

Because, as it turns out, the Hiroshi Ishiguro model has another problem: Not only is it creepy, but it requires Ishiguro himself (or some human being) to actively operate the robot. In other words, he may have skipped the commute, but mentally he's still "there." 

Which pretty much defeats my purpose. 

And then there’s figuring out how to license the technology (like, would I owe Scott Adams a royalty?), plus a manufacturing supply chain, a marketing campaign, the whole shebang.

Stay tuned. 

