October 15, 2015
Ghost in the Belle
Impressive special effects on a small budget.
But before going to DEFCON 1 on the "A.I. panic of 2015," Erik Sofge would first like to see "any indication that artificial superintelligence is a tangible threat." So he posed the question to Yoshua Bengio, head of the Machine Learning Laboratory at the University of Montreal. Bengio doesn't see much of a threat either.
Most people do not realize how primitive the systems we build are, and unfortunately many journalists (and some scientists) propagate a fear of A.I. which is completely out of proportion with reality. We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that.
Alex Garland doesn't share these "concerns" either. If anything, the director and writer of Ex Machina seems to anticipate the day when every nerd will have a fully functioning sex robot in his closet. Not exactly a terrifying prospect (except for Japanese demographers).
So Ex Machina isn't another silly Terminator clone. But it is a very silly movie, and its silliness is largely a product of taking itself so danged seriously. And yet not seriously enough.
The role of science in science fiction is relative to the technical aspirations of the story. Other than stipulating the existence of spaceships, there doesn't need to be a whole lot of actual science in space opera. Even the "mainstream" of the genre demands little more than a nod to the current state of the art.
But make the science the primary focus--enter the realm of "hard" science fiction--and you have to color within the lines. The Second Law of Thermodynamics is no longer a suggestion, and the standard shifts from "vaguely not impossible" to one brilliant mind away from realization.
In Ex Machina, Nathan (Oscar Isaac) is supposedly that brilliant mind. The CEO of search engine giant Bluebook (i.e., Google), he's an amalgamation of Larry Page, Sergey Brin, and Larry Ellison (and, inexplicably, Sylvester Stallone).
Caleb (Domhnall Gleeson), one of his star programmers, has "won" a "weekend with the boss" contest. When he ends up at Nathan's estate in the wilds of Alaska, it seems he's really there to conduct a Turing test on the comely Ava (Alicia Vikander), Nathan's latest android.
A machine that passes a Turing test can carry on an unconstrained dialogue without its human interrogator realizing it's a machine. Nathan recruits Caleb because he needs an "objective" evaluator to make the assessment, but misleads Caleb at first about what is truly being assessed.
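The mechanics of the test are simple enough to sketch. Here's a toy version of Turing's "imitation game" in Python; the reply functions are mere placeholders, which is of course where the entire problem of A.I. actually lives:

    import random

    def machine_reply(prompt):
        # Placeholder: the whole hard problem of A.I. lives here.
        return "Interesting. Why do you ask?"

    def human_reply(prompt):
        # Stand-in for the hidden human confederate.
        return "Honestly, I'd have to think about that one."

    def imitation_game(questions):
        # The interrogator chats blind with subjects "A" and "B" and
        # must guess which is the machine. The machine passes if the
        # guess is no better than chance.
        subjects = {"A": machine_reply, "B": human_reply}
        if random.random() < 0.5:  # randomly hide the machine
            subjects["A"], subjects["B"] = subjects["B"], subjects["A"]
        return [(label, q, subjects[label](q))
                for q in questions for label in ("A", "B")]

    for label, q, a in imitation_game(["Do you dream?", "Tell me a joke."]):
        print(label, q, "->", a)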
Misleading Caleb isn't all that difficult, as his "test" consists of vacuous conversations that could have been scripted by a machine. More likely, the writer simply isn't as smart as his characters. Caleb comes across as a dweeb on a first date; Nathan is a boorish football jock who likes to hit stuff.
Least convincing casting ever.
What if the whole thing's a Mechanical Turk? If the hardware's that good, it'd be easy to pull off. Where's a Voight-Kampff machine when you need one? Hmm, might this android be as nuts as the guy who built her? Once my suspension of disbelief began to fray, there was nothing to stop it from unraveling all the way.
Now, to start with, Ava is mechanically beyond anything anybody's invented, and her "brain" is more than a bit of a leap. Still, given the proper context, that leap could be made. No surprise that the leap not so easily made runs into the Second Law of Thermodynamics, pop sci-fi's biggest stumbling block.
Caleb's first questions to Nathan wouldn't have anything to do with her A.I. Rather: what kind of servos does she use? What kind of batteries does she run on?
Human nature is such that we tend to judge the internal consistency of a plot, especially in fantasy and science fiction, not so differently from a criminal trial: the prosecution can't cross-examine on excluded evidence unless the defense brings it up on direct. As long as it goes unmentioned, we happily exclude great swaths of the real world.
Ghost in the Shell, for example, begins by positing that non-sentient androids are already ubiquitous. That takes the subjects of mobility and functional capability off the table.
Fine. Except that Garland introduces the subject into the script. Now it's fair game. The first mention is quite smart: Ava, we learn, gets her power through inductive charging. That's real technology.
But the only reason inductive charging is brought up is that Ava knows she can kill the main power feeds by triggering a "power surge." This idiotic technobabble is the same dumb plot device that has shown up in caper flicks for decades: kill the power and the security systems fail. (Die Hard did it in 1988, okay? Stop it.)
And it's paired with another one just as old and creaky: the genius coder who reprograms a security system (at the source-code level) that he's never seen before. And super-paranoid Nathan doesn't encrypt or checksum any of his super-duper top-secret software.
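For the record, the integrity check Nathan skipped is about ten lines of work. A minimal sketch using Python's standard hashlib; the file path and "known good" hash here are hypothetical, obviously:

    import hashlib

    def sha256_of(path, chunk_size=65536):
        # Hash the file in chunks so big binaries don't eat memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Recorded when the software was installed (hypothetical value).
    KNOWN_GOOD = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    # Hypothetical path to the super-duper top-secret software.
    if sha256_of("/opt/bluebook/security.bin") != KNOWN_GOOD:
        raise SystemExit("Security software modified -- lock everything down.")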
Oh, and inductive charging would severely limit Ava's range. Without a supply of the most advanced battery technology imaginable, Ava is permanently confined to the house. So why confine her to her room as well? The estate is at least a hundred miles from civilization. There's nowhere else for her to go.
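The back-of-envelope math is damning. Every number below is my own assumption (and a generous one at that), not anything stated in the film:

    # Rough range estimate for a battery-powered android.
    PACK_MASS_KG = 10     # what Ava could plausibly carry internally
    WH_PER_KG = 250       # near the ceiling for lithium cells
    POWER_DRAW_W = 300    # locomotion plus compute, hypothetical
    SPEED_KMH = 5         # a brisk walk

    energy_wh = PACK_MASS_KG * WH_PER_KG   # 2,500 Wh
    hours = energy_wh / POWER_DRAW_W       # ~8.3 hours
    print("Range on one charge: about %.0f km" % (hours * SPEED_KMH))
    # ~40 km. "A hundred miles from civilization" is 160 km.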
Seriously. The androids want to be free? Set them free. That'd be a million times more interesting than this script. Tossing Caleb into a Survivorman episode with Ava would be the ultimate test of intelligence. It'd be truly hilarious if they both got all bitchy and whiny. Now that'd be human.
In any case, the equivalent of an electronic dog collar or an OnStar system would take care of things quite efficiently. Your super-intelligent robot can't have less sophisticated electronics than cars have had for years. ("Kyoko" aside, the rest of Nathan's androids are turned off, so they can be turned off.)
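The "dog collar" amounts to a distance check wired to a kill switch. A minimal sketch, with made-up coordinates and a hypothetical shutdown hook standing in for the hardware:

    from math import radians, sin, cos, asin, sqrt

    HOME = (61.5, -149.0)   # hypothetical coordinates for the estate
    RADIUS_KM = 1.0         # allowed roaming distance

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) points.
        lat1, lon1, lat2, lon2 = map(radians, a + b)
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(h))

    def check_geofence(position, shutdown):
        # shutdown() stands in for the hardware kill switch.
        if haversine_km(position, HOME) > RADIUS_KM:
            shutdown()

    check_geofence((61.52, -149.0), lambda: print("Android powered down."))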
Hmm, so at what point did Nathan regret not implementing Asimov's Three Laws of Robotics?
Both Caleb and Nathan use the same metaphor: the pretty assistant who distracts the audience while the magician palms the card. Garland deploys a harem of naked girls to distract the audience from a pretty standard femme fatale plot, one that relies on the smart people catching a bad case of the stupids.
I'm reminded of Freeze Me, another exploitation thriller that got to thinking it was an art house movie and subsequently drained all the smartness out of itself. Garland likewise wants us to root for a sociopath (surrounded by dunces) with an hour of life expectancy. I cared about none of them.
There are better versions of this story. Ghost in the Shell is about a self-realized A.I. that frees itself from the constraints of its makers. As the shell isn't what makes Ava "human," Caleb could simply smuggle out the A.I. in a drive array. The season four climax of Person of Interest did exactly that.
But more on theme is Let the Right One In (the 2008 Swedish version directed by Tomas Alfredson).
Eli is a vampire--permanently a young teenager--who has to periodically recruit a new Renfield to stay alive. The vampire element grounds the plot in that fundamental thermodynamic equation: the constant flow of energy in and out. She's dependent and yet must maintain the upper hand, which keeps her constantly on her toes.
This tension is what's utterly missing from Ex Machina.
Borrowing from Let the Right One In, I see Ava striding up to the helicopter, Caleb trudging behind her with a big rucksack full of battery packs slung over his shoulder. That balancing act between the machine and the human, that necessary mutual addiction, is a much better model of the real world.
Related posts
Freeze Me
Person of Interest
Robot on the Road
Appleseed: Ex Machina
They don't act that way in real life
Labels: anime, computers, movie reviews, movies, robots, science, science fiction, technology
Comments
Regarding the Turing test, Numb3rs had an "is it a sentient machine?!" episode (must be a requirement in Hollywood). Luckily, it didn't stray too far outside the parameters of reality, namely because the machine turns out to be, in Charlie's words, "the best quote machine ever" (a highly sophisticated fake).
In any case, I always thought the idea of asking a machine questions rather ridiculous, precisely because a "smart" machine can be programmed to answer! Wouldn't a sentient machine ASK questions? Like a kid hyped up on sugar: Why are you doing that? Why am I a box? Why can't I live on the web? Why did you give me human components/features? Why? Why? Why? Why?
Wouldn't that seem far more likely? At least Data went about refusing to do things--like be experimented on--and fiddling with his own programming, not simply reacting to stuff. Nitpicker Phil Farrand correctly identifies Data's line to Riker in "Measure of a Man" as one of Data's best (critical thinking) moments ever: "That action [to act as prosecutor in my case] injured you, and saved me. I will not forget it."
Maybe sentience or non-sentience comes down to the Douglas Adams dolphin explanation: the machines ARE more intelligent than us, but they know we would make them do more work if we found out, so they are staying stupid.
I've always found the Turing test to be silly. It makes a huge number of assumptions, many of which make little sense on reflection. It does, however, reflect the bizarre notion that reciting facts is representative of intelligence. It also assumes a level of cleverness on behalf of the questioner. One thing I've observed is that if you ask the machines ambiguous questions, they change the subject and people go along with that!
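You could fake that dodge in a dozen lines. A toy sketch in Python (nothing like a real chatbot, but the trick is the same):

    import random

    DEFLECTIONS = [
        "That reminds me -- have you read much science fiction?",
        "Good question. But first, tell me about yourself.",
        "Ha! Why do you want to know?",
    ]

    CANNED = {"what is 2+2?": "4"}  # the "reciting facts" part

    def reply(question):
        # No canned answer? Change the subject and hope the human
        # plays along -- which, as noted above, they usually do.
        return CANNED.get(question.lower(), random.choice(DEFLECTIONS))

    print(reply("What is 2+2?"))              # answers
    print(reply("Do you ever feel lonely?"))  # deflects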
I wonder if most people want the machine to "be smart" and so a) hear/read what they want to hear/read and b) deliberately play along with the transparent avoidance techniques. Why? I suspect one reason is that being genuinely critical is largely seen as very negative in this society (while being absurdly critical--that is, making criticisms that are obviously nuts or use stupid logic--is applauded).
That the best computers and software in the world still do a dreadful job at most automatic translation pretty much sums up the problem.
BTW, I agree with your point. Intelligence isn't simply figuring something out, but asking questions that lead to figuring something out, teaching that information to others, and then building on it. Animals learning things isn't intelligence; it's learning tricks (it can be cool, but cool isn't intelligence either).
Then there's the self-aware nonsense which proves nothing, but makes for good copy, I suppose.
A good example of our willingness to attribute "intelligence" to low-level functionality was the annoying fallback in Star Trek: TNG of illustrating Data's brilliance by turning him into a thesaurus. This technological "achievement" was equaled a good thirty years ago by WordPerfect 4.2 running on an original IBM PC.
On the other hand, a robot whose primary goal in life is to take selfies with attractive women in "compromising" situations (and post them on its blog), well, that may not be intelligent, but it certainly requires intelligence.
"Animals learning things isn't intelligence; it's learning tricks."
I totally agree! The whole "teaching apes sign language" trick has always bugged me. Any living thing with a brain--even my cats--can learn to mimic behavior in order to obtain an end. My cat Bob will climb on my bed in the morning and try to pull up the blanket with his claws (seriously!) because he has observed/learned/mastered that when I push the blanket back, I get up and feed him.
Does that mean that Bob is intelligent in the "I can argue philosophy, pay the bills, or at least take naughty pictures" sense? Of course not!
And the apes aren't either. (Notice how the apes being taught sign language are also, always, still caged.)
I should clarify: Bob connects "blanket pushed back" with "food."
He doesn't reason, "When morning comes, the mistress of the house pushes back the blankets to rise; when she rises, she goes to the kitchen--unless she pees first--and puts out food; I'm hungry, so I will push back the blankets."
His thought process isn't that complex: there is no thought process!
I maintain that accepting how an animal really "thinks" is far, far, *far* more sensitive to that animal's reality than trying to pretend that it is just another human with fur.