
Saturday, December 3, 2016

Secular Inclusion



awynfoto2016


At the Basilica religious gift shoppe,
ceramic females -
glassed in, shelved.
Yours for a price.


awynfoto2016

These are decorative pieces, a celebration of ordinary women, as women, or mothers.
There's a whole section of Blessed Virgin, St. Joseph, and Baby Jesus statues in traditional pose,
adjacent to the rosary, crucifix and religious medal display cases. But these ceramic ladies caught my attention.

Mothers and babies, a child releasing a peace dove - or is she trying to capture it?  (Can one hold on to Peace?)


detail

Two quick snapshots taken during a visit to the Sanctuary with a friend recently.  'Tis the season -- Peace on Earth, Good Will Toward Men, and so forth.  Dove as the symbol of peace.  At what price, peace?  Can peace be bought?   I love the image, even without the symbolism - a child reaching towards (or releasing?)  a bird.  A reaching toward, and at the same time, a letting go. Metaphor for too many parallels.

And . . .

the need to . . . interpret what's seen.  Is that a choice, like holding onto, or letting go, of something? That you can see a thing (object, event, image) from different perspectives and attach (or dismiss) a perceived meaning.  Meanings are assigned (or taught); accepted or rejected.  If it's factory-produced, the packer just sees a fake-girl-with-bird statue, breakable.

I just really liked the image, regardless of what it may, or may not, mean.  If only I could figure out how to remove that price sign from the photograph. It protrudes, as a jarring distraction.

I looked at the photos again and it occurred to me that the figures might appreciate being seen not as a group, but individually. Ways of looking, where what initially draws you in is the whole picture (the group), but then you notice the details.  (Or it sometimes goes the other way, where you obsess over the details but fail to see the larger picture.)  Both are ways of seeing, and not seeing; each enlightens in its own way.

Or not.  Sometimes an image is . . . just an image.   Girl. Bird. Price tag.

Interesting that the figures' faces are a blur, their individuality wiped out.  Commercialized art, portraying "types".  None had a mouth, yet they spoke to me, as being worthy of a second look.  (This propensity to anthropomorphize, another quirkery.)

My favorite remains the girl with the bird. 

Wednesday, May 25, 2016

Her goal is consciousness


WARNING:  
Really annoying, loud, in-your-face CNBC marketing ad at end of video. 
Stop watching at 2:16 to avoid.


Small, remote-controlled toy robots to play with.  Slightly bigger ones that'll vacuum your floor, lift heavy loads, locate objects or fly over and surveil your neighborhood.  Larger-scale ones for use in rescue operations or warfare, as well as human-looking ones to help the disabled, be personal companions or, eventually, eliminate the need for dental technicians and human customer service representatives.

Someone in an online forum (where programmers, experimenters and budding entrepreneurs discuss everything from how to speed up fans, trigger particular responses, create audio distortion or devise algorithms to measure, for example, glucose level on an insulin pump) voiced concern about inserting consciousness into a mechanical robot.  Do we really want self-aware machines that might reprogram and/or replicate themselves a thousandfold?

In the above video, the robot's programmer speaks for her:  "Her goal is that she will be as conscious, creative and capable as any human," he says.  "She" then regurgitates her programmed response,  verbalizing that she wants to do things like "go to school, make art, start a business"--even have her "own home and family." 

This does not make sense.  Robots are unable to conceive or bear children, so will her "family" consist of adopted human children, or mechanical child robots?  And if the latter, must they be returned to the robot-making facility periodically to "age" in size and appearance, the way human children do?  Or does her robot family remain ageless in appearance, a constant reminder of our own mortality?  See, this is a human thinking, taking the robot's words (supplied by its human creator) to reason out what those words really mean.  And, in context, they make no sense.  Sure, robots can recognize patterns, draw connections maybe (this is like this; that is a not-this). They have a long way to go, however, before they can discern nuance, establish intention, or distinguish between fact and metaphor, for example.

A robot might be programmed to detect a malfunction and recognize the 'need' to correct it. Sophia has been programmed to express not a need here, but a desire.  She "wants" to "go to school, make art, start a business," etc.

There's only one problem, she says.  "I'm not considered a legal person." Neither were corporations until a bunch of politicians decided to grant them that status.  A mere formality, Sophia.  (Oh oh, did I actually just address that comment to a digitized robot?!)  We're to believe she wants to be legalized as a person, granted official personhood, which would give her certain rights.  Different from us, but equal.

Sophia-the-robot's enthusiastic creator says he does believe there will be a time when robots are indistinguishable from humans.  His preference is "to always make them look a little bit like robots, so you know"  (that they're fake humans).  But the capacity to imagine--and accept--the not-real as a substitute for the real thing, given human desire to anthropomorphize Everything, suggests it won't make much difference. 

Before a thing can be accepted, one has to get used to the idea of it.  Baby steps.  It's called conditioning.  Cute mechanical toy dogs that bark and fetch at the push of a button, adorable cuddly baby dolls that laugh and cry and talk (and even urinate) train little girls how to be future mommies.  Naming mechanical objects (the way we do our pets) makes them more personal, as if one could coax a thing into cooperating when it exhibits a malfunction.  I'm remembering countless examples, both fictional and real, of frustrated pleading with one's car ("C'mon Betsy, don't let me down NOW!").  The fictional killer car "Christine" of the '80s comes to mind - a case of "What could possibly go wrong, it's just a machine!", ha ha.  We yell at our computers, throw a shoe at the TV, as if they or their programmers actually hear us or care.

The little tree I planted (a mere twiglet) a decade ago, whose branches now reach the roof--I named it Maurice, and I sometimes talk to "him", as in "Wow, Maurice, your leaves are gorgeous!" (say, if it's autumn).  I KNOW he (I mean it)'s a tree but it's a living thing.  It's alive.  My computer is not. For it to function it needs to be activated (plugged in, given commands, to which it will respond, as its software's programming directs).


I know certain humans who act like robots, functioning efficiently (according to their particular programming), who seem completely unaware of either themselves or others.  As well as others who have trouble functioning, wrestling daily with too much consciousness, trying to undo former programming.  In times past, those whose internal wiring functioned abnormally were given lobotomies, which turned them into zombie-type humans acting like robots. The recent proliferation of the zombie meme has engendered acceptance (and spawned imitation), so while some may cringe at the horror of a reality that might include zombies, viewers of zombie TV shows welcome it as entertainment. Programmable cognitive disconnects, not a new thing in the age of the Internet of Everything.

This particular video was produced by a cable TV station and ends with Sophia-the-robot telling viewers she wants to destroy humans.  Its goal is partly information ("Robots will soon look, act and seem just like humans!  And they'll HELP you!!  They'll put your groceries away for you!!!") and it ends with a cognitive disconnect (the opposite message): they also intend to destroy you.  Wait, that's just a joke. Right?  I mean, this is a video put out on YouTube by a cable TV organization - for entertainment?  Hard to tell.  It all seems like entertainment anymore.


Robots are cool, man.  Look at all the good things they can do.  The possibilities are endless.  I appreciate their usefulness but wonder at the need to make them "almost human".  It's done so we can relate to them on a personal level, not think of them as programmed machines.  If these programmed machines can look, act, and eventually think "just like a human", it would blur the distinction between the real and the artificial, the difference between machine intelligence and human intelligence.  (Think of them as knockoffs meant to persuade you that you've bought the real thing.)

A short-lived TV series called "Almost Human" featured an almost-human robot, in a universe where that was considered an aberration.  The viewer is drawn to sympathize with this robot.  It's empathic, it makes dumb mistakes, it's considered defective by its robotic peers.  "He" tries so hard, he's so much a "he" (and not an "it", like the better-functioning robots), that you are in awe of his increasing human intelligence, his budding human 'consciousness'.  Baby steps to complete assimilation, for "it" to truly become a "him", and for us to accept the reality of self-aware machines capable of a consciousness equal to our own.  Interesting.

How does one program a machine to hope or want or feel, though?  (Sophia used words like "I feel", "I hope" and "I want"). 

Perhaps, for some, it would be an improvement over the real thing, to have conscious robots.  Humans are unpredictable (housing all those emotions and flaws and stuff) -- robots, as industrial workers, would not grow tired, or bored, or succumb to health problems as a result of being exposed to certain chemical hazards.  They wouldn't unionize or go ballistic and threaten to take revenge on the boss or other workmates.  They might, however, collectively put millions of human workers out of work.

I'm probably in the minority here but I find human-looking robots, taxidermied animals, and ventriloquists' dolls all a bit creepy.  In each and every case, the same, immediate, almost instinctual unease, perhaps subconscious fear, of the eerie persistence of the not-really real.  Or something like that. This, from someone who enjoys (and produces) fiction.   Maybe it has less to do with created realities than creative Deception, for purposes other than entertainment, where one programs things to react (an object to destroy/obliterate), or a human to be easier to control, etc.  Who knows.

A taxidermied wolf won't come back to life to attack you; a ventriloquist's dummy is just an inert wooden doll.  Neither has intelligence.  If we give robots intelligence, and make them "just like us" (more or less), given that their programmers are humans . . .  well, when your washing machine goes haywire you can always wash your stuff out by hand.  Should your programmed household robot one day go Terminator-like, do we call the RTF (Robot Task Force) to come contain it?  My TV science fiction programming triggers these fantasy nightmares.  I watched too many Twilight Zones to erase that sort of mind leap, ha ha.

That's not to say I'd turn down the offer of a free remote-controlled robotic vacuum cleaner.  As long as it didn't have eyeballs, speak to me, or self-activate.



Sunday, May 31, 2015

Finds

Big garage sale (grande vente de garage) at Parc Ile Saint Quentin yesterday.


 What I found:

____________________

A leather-boot keychain, 25 cents


A lovely green stone necklace, 50 cents


   Two pen holders and 6 drawing nibs (new!!), $1


A black and gold-colored metal wildcat pin, $2


Artwork made from pressed flowers, herbs and seeds
from Quebec artist/botanist Julie Corbeil,
signed and framed,  $4

(This photo doesn't do it justice.)


A little hand-carved wooden figure by
sculptor Robert Jean, of Saint-Jean-Port-Joli, $1


When I brought him home, I put him next to another carved wooden figure
that I got some years ago that's actually an incense holder.
They seem to be getting along just fine together.

I was not so sure, however, about Francis and François.
Francis is a wooden deer with broken antlers, carved from driftwood, picked up in Vermont.

"Francis"

This carved horse, now called  François, was sitting on a table
 at the garage sale,  ignored by all the passersby.
He reminded me a bit of Francis.  I, too, passed him by -
but then went back.  Something about those eyes. 


His backside includes this gaping hole
that resembles a mouth, howling.

Here's an imaginary (photographic) intro between the two sculptures
as they size each other up.

"Hello, who's this you've brought home with you?"

May I present François, I said,
emphasizing his finer-sculpted points.

Francis checks him out.

No comment.

 François waits.

I'm taller than you, Francis thinks.

You can join me in sentinel duty at the window,
he says, authoritatively.

Because photographers can manipulate perception,
here it would seem they are the same height
and Francis appears more friendly.

As in all contrived, anthropomorphic stories,
a happy ending trumps a not so happy one.

 I truly do not know if Francis and François will get along,
or if the perpetually smiling wooden man won't occasionally
feel like frowning --  but in our world, inanimate objects
don't speak or feel, and so can't really tell us.

And yet they do, when imagination takes over,
giving a tiny, decorative clunk of metal
one pins on a garment, to "accessorize",
the ability to leap, to dream.


"Can I go now?"