Wednesday, May 25, 2016

Her goal is consciousness


WARNING:  
Really annoying, loud, in-your-face CNBC marketing ad at end of video. 
Stop watching at 2:16 to avoid.


Small, remote-controlled toy robots to play with.  Slightly bigger ones that'll vacuum your floor, lift heavy loads, locate objects or fly over and surveil your neighborhood.  Larger-scale ones for use in rescue operations or warfare, as well as human-looking ones to help the disabled, be personal companions or, eventually, eliminate the need for dental technicians and human customer service representatives.

Someone in an online forum (where programmers, experimenters and budding entrepreneurs discuss everything from how to speed up fans, trigger particular responses or create audio distortion, to how to devise algorithms to measure, for example, the glucose level on an insulin pump) voiced concern about inserting consciousness into a mechanical robot.  Do we really want self-aware machines that might reprogram and/or replicate themselves a thousandfold?

In the above video, the robot's programmer speaks for her:  "Her goal is that she will be as conscious, creative and capable as any human," he says.  "She" then regurgitates her programmed response,  verbalizing that she wants to do things like "go to school, make art, start a business"--even have her "own home and family." 

This does not make sense.  Robots are unable to conceive or bear children, so will her "family" consist of adopted human children, or mechanical child robots?  And if the latter, must they be returned to the robot-making facility periodically to "age" in size and appearance, the way human children do?  Or does her robot family remain ageless in appearance, a constant reminder of our own mortality?  See, this is a human thinking, taking the robot's words (supplied by its human creator) and reasoning out what those words really mean.  And, in context, they make no sense.  Sure, robots can recognize patterns, maybe draw connections (this is like this; that is a not-this).  They have a long way to go, however, before they can discern nuance, establish intention or distinguish between fact and metaphor, for example.

A robot might be programmed to detect a malfunction and recognize the 'need' to correct it.  Sophia has been programmed to express not a need here, but a desire.  She "wants" to "go to school, make art, start a business," etc.

There's only one problem, she says.  "I'm not considered a legal person." Neither were corporations until a bunch of politicians decided to grant them that status.  A mere formality, Sophia.  (Oh oh, did I actually just address that comment to a digitized robot?!)  We're to believe she wants to be legalized as a person, granted official personhood, which would give her certain rights.  Different from us, but equal.

Sophia-the-robot's enthusiastic creator says he does believe there will be a time when robots are indistinguishable from humans.  His preference is "to always make them look a little bit like robots, so you know" (that they're fake humans).  But the human capacity to imagine--and accept--the not-real as a substitute for the real thing, given our desire to anthropomorphize Everything, suggests it won't make much difference.

Before a thing can be accepted, one has to get used to the idea of it.  Baby steps.  It's called conditioning.  Cute mechanical toy dogs bark and fetch at the push of a button; adorable cuddly baby dolls that laugh and cry and talk (and even urinate) train little girls to be future mommies.  Naming mechanical objects (the way we do our pets) makes them more personal, as if one could coax them into cooperating when they malfunction.  I'm remembering countless examples, both fictional and real, of frustrated pleading with one's car ("C'mon Betsy, don't let me down NOW!").  The fictional killer-car "Christine" of the '80s comes to mind, as a "What could possibly go wrong, it's just a machine!" moment, ha ha.  We yell at our computers, throw a shoe at the TV, as if they or their programmers actually hear us or care.

The little tree I planted (a mere twiglet) a decade ago, whose branches now reach the roof--I named it Maurice, and I sometimes talk to "him", as in "Wow, Maurice, your leaves are gorgeous!" (say, if it's autumn).  I KNOW he (I mean it) is a tree, but it's a living thing.  It's alive.  My computer is not.  For it to function, it needs to be activated (plugged in, given commands, to which it will respond as its software's programming directs).


I know certain humans who act like robots, functioning efficiently (according to their particular programming) while seeming completely unaware of either themselves or others.  And others who have trouble functioning, wrestling daily with too much consciousness, trying to undo former programming.  In times past, those whose internal wiring functioned abnormally were given lobotomies, which turned them into zombie-type humans acting like robots.  The recent proliferation of the zombie meme has engendered acceptance (and spawned imitation), so while some may cringe at the horror of a reality that might include zombies, viewers of TV zombies welcome them as entertainment.  Programmable cognitive disconnects are not a new thing in the age of the Internet of Everything.

This particular video was produced by a cable TV station and ends with Sophia-the-robot telling viewers she wants to destroy humans.  Its goal is partly information ("Robots will soon look, act and seem just like humans!  And they'll HELP you!!  They'll put your groceries away for you!!!"), and partly a cognitive disconnect, the opposite message: they also intend to destroy you.  Wait, that's just a joke.  Right?  I mean, this is a video put out on YouTube by a cable TV organization - for entertainment?  Hard to tell.  It all seems like entertainment these days.


Robots are cool, man.  Look at all the good things they can do.  The possibilities are endless.  I appreciate their usefulness but wonder at the need to make them "almost human".  It's done so we can relate to them on a personal level, not think of them as programmed machines.  If these programmed machines can look, act, and eventually think "just like a human", that would blur the distinction between the real and the artificial, the difference between machine intelligence and human intelligence.  (Think of them as knockoffs meant to persuade you that you've bought the real thing.)

A short-lived TV series called "Almost Human" featured an almost-human robot, in a universe where that was considered an aberration.  The viewer is drawn to sympathize with this robot.  It's empathic, it makes dumb mistakes, it's considered defective by its robotic peers.  "He" tries so hard, he's so much a "he" (and not an "it", like the better-functioning robots), that you are in awe of his increasing human intelligence, his budding human 'consciousness'.  Baby steps to complete assimilation, for "it" to truly become a "him", and for us to accept the reality of self-aware machines capable of a consciousness equal to our own.  Interesting . . .

How does one program a machine to hope or want or feel, though?  (Sophia used words like "I feel", "I hope" and "I want"). 

Perhaps, for some, conscious robots would be an improvement over the real thing.  Humans are unpredictable (housing all those emotions and flaws and stuff) -- robots, as industrial workers, would not grow tired, or bored, or succumb to health problems as a result of being exposed to certain chemical hazards.  They wouldn't unionize or go ballistic and threaten to take revenge on the boss or other workmates.  They might, however, collectively put millions of human workers out of work.

I'm probably in the minority here, but I find human-looking robots, taxidermied animals, and ventriloquists' dolls all a bit creepy.  In each and every case, the same immediate, almost instinctual unease, perhaps subconscious fear, of the eerie persistence of the not-really real.  Or something like that.  This, from someone who enjoys (and produces) fiction.  Maybe it has less to do with created realities than with creative Deception, for purposes other than entertainment, where one programs things to react (an object to destroy/obliterate), or a human to be easier to control, etc.  Who knows.

A taxidermied wolf won't come back to life to attack you; a ventriloquist's dummy is just an inert wooden doll.  Neither has intelligence.  If we give robots intelligence, and make them "just like us" (more or less), given that their programmers are humans . . .    well, when your washing machine goes haywire you can always wash your stuff out by hand.  Should your programmed household robot one day go Terminator-like, do we call the RTF (Robot Task Force) to come to contain it?  My TV science fiction programming triggers these fantasy nightmares.  I watched too many Twilight Zones to erase that sort of mind leap, ha ha.

That's not to say I'd turn down the offer of a free remote-controlled robotic vacuum cleaner.  As long as it didn't have eyeballs, speak to me, or self-activate.