He once graciously allowed me to share one of his poems, then signed and sent me his (then) latest book. I learned recently that he passed away on July 7th.
I liked Philip Dacey's poems. I wish I'd shared more of them. I wish I'd had him--or someone like him--as a poetry teacher way back when.
In his poem "Reading a Book of Poems by a Friend Newly Dead", he talks about a dead poet slipping "from the words/ into the spaces between the lines/ and then into the margins."
And lines keep revealing themselves to be a goodbye wave, each a rehearsal more for our sake than his. It is not his fault that we missed the gesture.
I learned some things I never knew about Philip Dacey before. (He apparently quit piano lessons in grade school for the exact same reason I did--utter panic at recital time.)
In a 2014 interview, in which he was asked ten questions by poet G. Emil Reutter, Dacey talked about the relevance of poetry (and the proliferation of poets), about poetry as "music", and about the importance of the process in producing it.
"There’s less time now to 'keep up' with new voices than to listen to old ones," he'd said two years ago. "I wouldn’t presume to include myself in any literary lineage; I see my fate as that of compost in the vineyard where great writers have labored, a fate I’m happy to accept."
"Less time now" -- Here was a poet battling leukemia, working for as long as he was able, on his creative "To Do" list, as evidenced by the work "in Progress" items on his web page.
“My dad took the craft seriously,” said his son, Emmett. “But he didn’t take himself seriously. He wanted to emphasize the fun in poetry and imagination that drove his work.” When people asked Dacey what he did, he would often reply, “Just working at the feed store.” But before he died July 7, at age 77, Dacey had taught poetry for 34 years at Southwest Minnesota State University in Marshall and authored 13 poetry books, with another coming out in the fall.
He didn't believe in writer's block, according to his son. “He said you may not like what you’re writing, but you can always write.”[source]
In a moving tribute to Dacey, his former student Lisa Vihos wrote that he got her to think about how "creating a poem is like carving a sculpture, releasing the poem from its block of marble." (If that were a writer's block, the remedy would be to keep chiseling.) "He also admonished me to always take criticism as a kind of 'structural stress test,'" Lisa wrote, "to look at the suggestion, weigh its merit, and make a decision based on what felt right to me, the poet." I envy the students who had Dacey as a mentor.
Dacey likened the making of poetry to working in a vineyard, the ground from which he believed all poetry came. I liked that his vineyard, and the poet himself, were so welcoming, not one-dimensional (as in, "Only this type of poem grown here"), offering humor as well as insight. I like that he let the poems speak for themselves, that it was Poetry that mattered, not which vineyard it came from.
His poetry often resembled conversations one might have with oneself, accidentally overheard, in which the eavesdropper finds immediate resonance.
I am saddened to hear of his passing, and more so that it has taken me so long to discover it. To express, and accept, at life's end, that one's fate is to be "compost" in the particular vineyard in which one toils, that just having been part of the process is its own reward, is as difficult for some as it is mind-altering and liberating for others. Life as a process, destiny as . . . "compost". In an imaginary cartoon scenario I envisioned humans lining up to choose their field of work and accept (or reject) their suggested assignment: "Your job on earth is to enrich, aid with the growth of, and help sustain ... [poetry]." That's the thing about Dacey's influence: it invites inquiry into meaning, tickles the imagination, encourages experimentation.
In this video from almost two years ago he recites some of his poems from memory--about working in the crucifix factory, about his brother the dancing cop, about Walt Whitman and death. Long poems, short poems, structured, free-form, "serious", whimsical -- Dacey's range of subjects and themes, and his ability to write in all forms of poetry, manifested in imaginative, creative, and skillfully harvested verses that illuminated, entertained, and resonated.
Rest in Peace, Philip Dacey, and thank you for your poems. Journey on.
WARNING: Really annoying, loud, in-your-face CNBC marketing ad at end of video. Stop watching at 2:16 to avoid.
Small, remote-controlled toy robots to play with. Slightly bigger ones that'll vacuum your floor, lift heavy loads, locate objects, or fly over and surveil your neighborhood. Larger-scale ones for use in rescue operations or warfare, as well as human-looking ones to help the disabled, be personal companions or, eventually, eliminate the need for dental technicians and human customer service representatives.
Someone in an online forum (where programmers, experimenters, and budding entrepreneurs discuss everything from how to speed up fans, trigger particular responses, or create audio distortion, to devising algorithms to measure, for example, the glucose level on an insulin pump) voiced concern about inserting consciousness into a mechanical robot. Do we really want self-aware machines that might reprogram and/or replicate themselves a thousandfold?
In the above video, the robot's programmer speaks for her: "Her goal is that she will be as conscious, creative and capable as any human," he says. "She" then regurgitates her programmed response, verbalizing that she wants to do things like "go to school, make art, start a business"--even have her "own home and family."
This does not make sense. Robots are unable to conceive or bear children, so will her "family" consist of adopted human children, or mechanical child robots? And if the latter, must they be returned to the robot-making facility periodically to "age" in size and appearance, the way human children do? Or does her robot family remain ageless in appearance, a constant reminder of our own mortality? See, this is a human thinking, taking the robot's words (supplied by its human creator) to reason out what those words really mean. And, in context, they make no sense. Sure, robots can recognize patterns, draw connections maybe (this is like this; that is a not-this). They have a long way to go, however, before they can discern nuance, establish intention, or distinguish between fact and metaphor, for example.
A robot might be programmed to detect a malfunction and recognize the 'need' to correct it. Sophia has been programmed to express not a need here, but a desire. She "wants" to "go to school, make art, start a business," etc.
There's only one problem, she says. "I'm not considered a legal person." Neither were corporations until a bunch of politicians decided to grant them that status. A mere formality, Sophia. (Oh oh, did I actually just address that comment to a digitized robot?!) We're to believe she wants to be legalized as a person, granted official personhood, which would give her certain rights. Different from us, but equal.
Sophia-the-robot's enthusiastic creator says he does believe there will be a time when robots are indistinguishable from humans. His preference is "to always make them look a little bit like robots, so you know" (that they're fake humans). But the capacity to imagine--and accept--the not-real as a substitute for the real thing, given human desire to anthropomorphize Everything, suggests it won't make much difference.
Before a thing can be accepted, one has to get used to the idea of it. Baby steps. It's called conditioning. Cute mechanical toy dogs bark and fetch at the push of a button; adorable cuddly baby dolls that laugh and cry and talk (and even urinate) train little girls to be future mommies. Naming mechanical objects (the way we do our pets) makes it more personal, as if one could coax them into cooperating when they malfunction. I'm remembering countless examples, both fictional and real, of frustrated pleading with one's car ("C'mon Betsy, don't let me down NOW!"). The fictional killer car "Christine" of the '80s comes to mind, as in "What could possibly go wrong? It's just a machine!" Ha ha. We yell at our computers and throw a shoe at the TV, as if they or their programmers actually hear us or care.
The little tree I planted (a mere twiglet) a decade ago, whose branches now reach the roof--I named it Maurice, and I sometimes talk to "him", as in "Wow, Maurice, your leaves are gorgeous!" (say, if it's autumn). I KNOW he (I mean it) is a tree, but it's a living thing. It's alive. My computer is not. For it to function, it needs to be activated (plugged in and given commands, to which it responds as its software's programming directs).
I know certain humans who act like robots, functioning efficiently (according to their particular programming) while seeming completely unaware of either themselves or others, as well as others who have trouble functioning, wrestling daily with too much consciousness, trying to undo former programming. In times past, those whose internal wiring functioned abnormally were given lobotomies, which turned them into zombie-like humans acting like robots. The recent proliferation of the zombie meme has engendered acceptance (and spawned imitation), so while some may cringe at the horror of a reality that might include zombies, viewers of TV zombies welcome them as entertainment. Programmable cognitive disconnects: not a new thing in the age of the Internet of Everything.
This particular video was produced by a cable TV station and ends with Sophia-the-robot telling viewers she wants to destroy humans. Its goal is both information ("Robots will soon look, act, and seem just like humans! And they'll HELP you!! They'll put your groceries away for you!!!") and a cognitive disconnect (the opposite message): they also intend to destroy you. Wait, that's just a joke. Right? I mean, this is a video put out on YouTube by a cable TV organization--for entertainment? Hard to tell. It all seems like entertainment anymore.
Robots are cool, man. Look at all the good things they can do. The possibilities are endless. I appreciate their usefulness but wonder at the need to make them "almost human". It's done so we can relate to them on a personal level, not think of them as programmed machines. If these programmed machines can look, act, and eventually think "just like a human", it blurs the distinction between the real and the artificial, the difference between machine intelligence and human intelligence. (Think of them as knockoffs meant to persuade you that you've bought the real thing.)
A short-lived TV series called "Almost Human" featured an almost-human robot in a universe where that was considered an aberration. The viewer is drawn to sympathize with this robot: it's empathic, it makes dumb mistakes, it's considered defective by its robotic peers. "He" tries so hard, he's so much a "he" (and not an "it", like the better-functioning robots), that you are in awe of his increasing human intelligence, his budding human 'consciousness'. Baby steps to complete assimilation, for "it" to truly become a "him", and for us to accept the reality of self-aware machines capable of a consciousness equal to our own. Interesting . . .
How does one program a machine to hope or want or feel, though? (Sophia used words like "I feel", "I hope" and "I want").
Perhaps, for some, it would be an improvement over the real thing, to have conscious robots. Humans are unpredictable (housing all those emotions and flaws and stuff) -- robots, as industrial workers, would not grow tired, or bored, or succumb to health problems as a result of being exposed to certain chemical hazards. They wouldn't unionize or go ballistic and threaten to take revenge on the boss or other workmates. They might, however, collectively put millions of human workers out of work.
I'm probably in the minority here, but I find human-looking robots, taxidermied animals, and ventriloquists' dolls all a bit creepy. Each provokes the same immediate, almost instinctual unease, perhaps a subconscious fear of the eerie persistence of the not-really-real. Or something like that. This, from someone who enjoys (and produces) fiction. Maybe it has less to do with created realities than with creative Deception for purposes other than entertainment, where one programs a thing to react (an object, to destroy or obliterate), or a human, to be easier to control. Who knows.
A taxidermied wolf won't come back to life and attack you; a ventriloquist's dummy is just an inert wooden doll. Neither has intelligence. If we give robots intelligence and make them "just like us" (more or less), given that their programmers are humans . . . well, when your washing machine goes haywire, you can always wash your stuff out by hand. Should your programmed household robot one day go Terminator-like, do we call the RTF (Robot Task Force) to contain it? My TV science-fiction programming triggers these fantasy nightmares. I watched too many Twilight Zones to erase that sort of mind leap, ha ha.
That's not to say I'd turn down the offer of a free remote-controlled robotic vacuum cleaner. As long as it didn't have eyeballs, speak to me, or self-activate.