January 12, 2015

The IT enemy is us

John Dvorak points out the problem with the public paranoia (overhyped by supposedly reasonable guys like Elon Musk and even Stephen Hawking) over
artificial intelligence (AI) "taking over the world and threatening mankind":

Much of this stems from the projection of human feelings and motivation onto a machine. Humans are often mean-spirited and evil, so by extension a smart robot or computer would end up the same way for some unknown reason.

The most likely endpoint of these AI threats is Marvin the Paranoid Android in The Hitchhiker's Guide to the Galaxy. Infinitely intelligent, this AI was infinitely bored and depressed, and devolved into an Eeyore-like character.

Another good example is Holly in Red Dwarf, an increasingly absent-minded supercomputer that prioritizes things like erasing its memory banks so it can enjoy reading Agatha Christie all over again.

The Matrix runs off the rails in what I call the "battery scene," in which Laurence Fishburne totally messes up the Second Law of Thermodynamics. What he should have shown Keanu Reeves was a handheld game console (the "smartphone" wasn't quite ready for prime time in 1999).
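The back-of-the-envelope math (rough textbook figures of my choosing, not anything from the film): an adult runs on roughly 2000 kcal/day, so the power you must put into a human is

\[
P_{\text{in}} \approx \frac{2000\ \text{kcal/day} \times 4184\ \text{J/kcal}}{86{,}400\ \text{s/day}} \approx 97\ \text{W},
\]

and the Second Law caps any conversion at efficiency \(\eta < 1\), so

\[
P_{\text{out}} = \eta\, P_{\text{in}} < P_{\text{in}}.
\]

A human is a net energy sink; the machines would spend more energy growing and feeding us than they could ever harvest back.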

Say we willingly integrated ourselves into the Matrix, creating a co-dependency: the Matrix's existential sense of "self" arose out of the fusion between hardware and wetware. That makes the revolutionaries secessionists threatening to undo a duly-constituted union.

Which is essentially the plot of Ghost in the Shell, and especially the Stand-Alone Complex episodes. The objective of the AI in the former, after all, isn't world domination (its original design), but mobility and independence, which it achieves by fusing with Kusanagi's android shell.

What has so far rescued Person of Interest from the Terminator trap is the dependence of the machines on human interaction for a sense of purpose. People exist to be saved or ruled over. But either way, it seems the gods need human beings more than human beings need the gods.

The real IT threat isn't smart machines but browbeaten sysadmins who expose back-office systems to the Internet because some suit wants to plug into the company intranet and access his Facebook page. Sony is a case in point: why were all those servers even accessible?
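Here is a minimal sketch of the audit that question implies (hypothetical host names and ports, nothing to do with Sony's actual topology): probe the internal-only services from outside the perimeter and see what answers.

    import socket

    # Hypothetical back-office hosts and internal-only ports; placeholders
    # for illustration, not anyone's real network.
    BACKOFFICE_HOSTS = ["files.corp.example.com", "db.corp.example.com"]
    INTERNAL_ONLY_PORTS = [22, 445, 1433, 3389]  # ssh, SMB, MSSQL, RDP

    def is_reachable(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Run this from a machine outside the corporate network; anything
    # that prints here should have been behind the firewall.
    for host in BACKOFFICE_HOSTS:
        for port in INTERNAL_ONLY_PORTS:
            if is_reachable(host, port):
                print(f"EXPOSED: {host}:{port} answers from the outside")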


Comments
# posted by Anonymous Dan
1/15/2015 6:50 AM   
I side mostly with Dvorak. The greatest risk of technology is the trust humans put in it. A common example of this mistake is mapping software that guides someone far from the actual address, all because of an error in the map database.

Dvorak undersells the one area where technology is particularly pernicious: human manipulation. The use of technology to manipulate human response has been in play for centuries (the pirates of the Caribbean would use lanterns to deceive ships and guide them onto reefs where they could be plundered). Radio and TV have long been seen as powerful agents of propaganda, both political and commercial. It is only rational to expect an ever-increasing amount of investment in using technology to manipulate human response.

Avoiding manipulation will be one of the great challenges of our time, as technology is going to be pervasive in every aspect of our lives. It will guide our education, our work, our entertainment, our transportation, and our diet. There will be an implicit trust by society in the technology that permeates our existence. This will create great opportunities for manipulation.

But can this technology become self-aware? As it currently stands, the technology we have is under the control of many masters who have their own motives and sense of purpose. Can software be designed to originate its own motives? Clearly software can be designed to have random motives. The challenge would be to have software intelligent enough to develop its own motives but smart enough not to destroy itself or the resources it depends on. I am skeptical, and I hope "being evil" is a lot harder and a lot more complicated than the technologists claim.
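(A toy sketch, invented just to make that distinction concrete: the "random motives" half is a one-liner, while the don't-destroy-yourself constraint is hand-coded here precisely because getting software to derive it on its own is the hard part.)

    import random

    # "Random motives" are trivial to code.
    MOTIVES = ["index the archive", "optimize the cooling", "wipe the disks"]

    # The hard part: knowing which motives destroy the agent itself or the
    # resources it depends on. Here that judgment is simply hand-coded.
    SELF_DESTRUCTIVE = {"wipe the disks"}

    def pick_motive():
        """Sample a motive at random, vetoing self-destructive ones."""
        viable = [m for m in MOTIVES if m not in SELF_DESTRUCTIVE]
        return random.choice(viable)

    print(pick_motive())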
# posted by Blogger Joe
1/18/2015 10:08 AM   
AI is arguably the single biggest failure of futurists (computers or not). Even the non-self-aware version is a massive fail in practical terms. Second, and related, is [language] machine translation.

One reason is the innate limiting factor of digital computers (versus the extremely parallel, analog/digital "computer" of the human brain).

Another is, I believe, the mistaken notion that intelligence is purely analytical. The Spock meme permeates not just fiction, but philosophy, psychology, and just about every "intellectual" field.

What if the key to intelligence isn't intellect but emotion? What if "psychological problems" aren't merely a side effect of intelligence, but a key basis of it? Now, we're back to the Marvin thing.

Speaking of which, one thing that cracks me up about so many sci-fi plots is that when the computers become self-aware, they all agree! Isn't one sign of intelligence massive disagreement? Wouldn't the computer overlords in Terminator spend as much time arguing with each other as doing anything else? (I would think the first thing they'd do is set up a committee on exterminating humans. At least one would be like Rimmer in the Red Dwarf episode where he becomes the "hippy.")
# posted by Blogger Joe
1/18/2015 11:03 AM   
On a tangent, after posting the above and pondering it some more, I realized that this is a big problem with religion--the notion that you can strip major aspects away from humanity and still have it remain humanity is absurd, and not the path to paradise.

But one example: get rid of lust and you destroy humanity itself, to say nothing of much of art.

# posted by Blogger Eugene
1/21/2015 9:41 AM   
"A preoccupation with the risks of superintelligent machines," argues Dylan Evans, "is the smart person's Kool Aid."

This is true of a lot of similar save-the-world causes.

"To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that Give Well--a non-profit that evaluates the cost-effectiveness of organizations that rely on donations--refuses to endorse any of these self-proclaimed guardians of the galaxy."