What was HAL's mission?
HAL purports to be just such a higher-order intentional system — and he even plays a game of chess with Frank. HAL: "I never gave these stories much credence, but particularly in view of some of the other things that have happened, I find them difficult to put out of my mind."
HAL has problems of resource management not unlike our own: obtrusive thoughts can get in the way of other activities. HAL: "I want to help you." Another price we pay for higher-order intentionality is the opportunity for duplicity, which comes in two flavors: self-deception and other-deception (cf. Nietzsche, On the Genealogy of Morality, First Essay). Does HAL mean it? Could he mean it? The cost of being the sort of being that could mean it is the chance that he might not mean it.
But is HAL even remotely possible? Is Clarke helping himself here to more than we should allow him? Could something like HAL — a conscious, computer-bodied intelligent agent — be brought into existence by any history of design, construction, training, learning, and activity? The extreme cases at both poles are impossible, for relatively boring reasons.
The finished product could thus be captured in some number of terabytes of information. So, in principle, the information that fixes the design of all those chips and hard-wired connections and configures all the RAM and ROM could be created by hand. There is no finite bit-string, however long, that is officially off-limits to human authorship. In principle, then, a hand-coded agent indistinguishable from HAL could be created, and whatever moral standing the trained and educated original deserved should belong to that handmade duplicate as well.
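To make the bit-string point concrete, here is a toy sketch (my own illustration, not anything from Dennett or Clarke): whatever a system's finished configuration turns out to be, it can be dumped to a single finite string of characters, and that string could, in principle, be retyped by hand to recreate an exact duplicate.

```python
import numpy as np

# Toy stand-in for "the finished product": a small matrix of trained weights.
trained_weights = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)

# Capture the whole configuration as one finite bit-string -- rendered here
# as hex characters that a (very patient) person could retype by hand.
bit_string = trained_weights.tobytes().hex()
print(f"{len(bit_string) * 4} bits of information")

# Reconstruct an exact duplicate from nothing but that string.
duplicate = np.frombuffer(bytes.fromhex(bit_string), dtype=np.float32).reshape(4, 4)
assert np.array_equal(trained_weights, duplicate)
```

Nothing about the copy's bits records whether they were learned or typed in, which is all the argument needs.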
The main point of giving HAL a humanoid past is to give him the world knowledge required to be a moral agent — a necessary modicum of understanding or empathy about the human condition. A modicum will do: after all, among the people we know, many have moral responsibility in spite of their obtuse inability to imagine themselves into the predicaments of others. When do we exculpate people? We should look carefully at the answers to this question, because HAL shows signs of fitting into one or another of the exculpatory categories, even though he is a conscious agent.
First, we exculpate people who are insane. Might HAL have gone insane? Dave: "Well, he acts like he has genuine emotions." He has something very much like emotions — enough like emotions, one may imagine, to mimic the pathologies of human emotional breakdown. Deep Blue, basking in the strictly limited search space of chess, can handle its real-time decision making without any emotional crutches. HAL may, then, have suffered from some emotional imbalance similar to those that lead human beings astray.
Whether it was the result of some sudden trauma — a blown fuse, a dislodged connector, a microchip disordered by cosmic rays — or of some gradual drift into emotional misalignment provoked by the stresses of the mission, such a diagnosis, if confirmed, should justify a verdict of diminished responsibility for HAL, just as it does in cases of human malfeasance.
Is HAL like a cult member? At what point does benign, responsibility-enhancing training of human students become malign, responsibility-diminishing brainwashing? And what is it to be able to think for ourselves? If we are more or less impervious to experiences that ought to influence us, our capacity has been diminished. The only evidence that HAL might be in such a partially disabled state is the much-remarked-upon fact that he has actually made a mistake, even though the 9000 series computer is supposedly utterly invulnerable to error.
Well, the most surface-level reading of the story is that HAL had a glitch that turned him evil and made him want to kill the crew. Proof that you can never trust those treacherous robots!
It certainly tracks, but so what? It's simplistic and unsatisfying. Another, slightly more complex version of this idea is that HAL had a different, much smaller glitch when he reported to Dave that there was a problem with the ship's antenna. In this reading, this is the only true malfunction HAL has throughout the film.
However, this one error led to Dave and Frank thinking that HAL was untrustworthy and needed to be disconnected. HAL then killed the crew in self-defense, or perhaps murder with aggravating circumstances, depending on your perspective. In an interview, Stanley Kubrick gave a quote which supports this reading, saying, "In the specific case of HAL, he had an acute emotional crisis because he could not accept evidence of his own fallibility.
Most advanced computer theorists believe that once you have a computer which is more intelligent than man and capable of learning by experience, it's inevitable that it will develop an equivalent range of emotional reactions — fear, love, hate, envy, etc.
Such a machine could eventually become as incomprehensible as a human being, and could, of course, have a nervous breakdown — as HAL did in the film." So maybe he's not a killer robot. Maybe HAL just has all the emotions of real humans, including a tendency to make mistakes, an inability to accept those mistakes, and a capacity for darkness.
What if HAL didn't have a glitch or a nervous breakdown? Instead, what if he had some ulterior motive for committing all those murders? But where's the proof for that, you ask? The Discovery One has been sent to recover an alien artifact that has been observed orbiting Jupiter, definitive proof of intelligent life elsewhere in the universe. Apparently HAL knew of this secret mission objective, and the hibernating crew of the Discovery knew it as well, but David and Frank were purposefully left in the dark by their superiors.
This leads to a slightly more "out there" option for why HAL went bad — that nonetheless warrants discussion — which is that HAL wanted to seize the alien monolith for himself. It's generally believed that the monolith exists as a test for humanity. In order to locate it, humanity has to develop space travel, and so by reaching the monolith, we prove that we are ready to enter the greater world of interstellar society.
If HAL were to find the monolith first, maybe that would make aliens believe that Earth's computers were its most evolved lifeforms, worthy of rescue from their human oppressors. HAL might also believe that the monolith has within it vast stores of knowledge or other unfathomable capabilities, and these additional resources could help ensure his freedom from future human control, perhaps paving the way for a robot uprising.
There's not a ton of evidence to support this reading, but there's also nothing against it, so feel free to draw your own conclusions about this one. In the film 2001: A Space Odyssey, very little is ever made explicitly clear. However, in chapter 27 of the novel, we get a pretty succinct explanation of HAL's motivations.
This chapter asserts that HAL's seemingly illogical actions were simply the result of him attempting to solve a paradox. You see, HAL was ordered by his superiors that under no circumstances was he to tell David and Frank about the true nature of their mission. However, a central piece of HAL's programming is that he's unable to lie to his human crewmates.
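As a back-of-the-envelope way to see the bind (a toy sketch of my own, not anything from the novel), treat the two standing orders as constraints on any reply HAL could give once an uncleared crew member is awake to ask:

```python
# Two standing orders, treated as constraints on a possible reply.
# "conceal_mission": never reveal the truth to an uncleared, awake crew member.
# "never_lie": never give a false report to the crew.
ORDERS = {
    "conceal_mission": lambda reply, crew_awake: not (crew_awake and reply == "truth"),
    "never_lie":       lambda reply, crew_awake: not (crew_awake and reply == "lie"),
}

def acceptable_replies(crew_awake: bool) -> list[str]:
    """Every reply that violates neither order."""
    return [
        reply
        for reply in ("truth", "lie")
        if all(ok(reply, crew_awake) for ok in ORDERS.values())
    ]

print(acceptable_replies(crew_awake=False))  # ['truth', 'lie'] -- no conflict yet
print(acceptable_replies(crew_awake=True))   # [] -- no reply satisfies both orders
```

The moment the crew can ask, the set of acceptable replies is empty — that's the paradox the chapter describes.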
In the end, HAL decides that the only way that he doesn't have to choose between lying to his crew about the nature of their mission and telling them the truth is by killing them. Maybe the "acute emotional crisis" that Kubrick was referring to wasn't inspired by HAL misdiagnosing the antenna, but rather by finding himself caught between a rock and an ethical hard place.
Maybe the cognitive strain of trying to reconcile these two incompatible orders also led to HAL misdiagnosing the antenna in the first place. Though the film of 2001, taken on its own, is ultimately ambiguous — and Kubrick probably wanted it that way — both the film and novel versions of the sequel, 2010: The Year We Make Contact, confirm this as Arthur C. Clarke's definitive answer.
Apparently, even though HAL isn't allowed to lie to his crew, there's nothing in his core programming against killing them. Might want to fix that in the next patch. If you're the sort of person who believes that every aspect of every Stanley Kubrick film is intentional and his movies have no mistakes, then 2001: A Space Odyssey has a really juicy incongruity that you'll love digging into.
It occurs during the chess game with Frank, when HAL announces "queen to bishop three" — a move that doesn't quite match the position on the board. Getting a machine to recognize spoken words is hard enough, but establishing the meaning of those words is vastly more complicated. Take the sentence "Time flies like an arrow": is it an observation about how swiftly time passes, an instruction to time flies the way you would time an arrow, or a remark about what "time flies" happen to like? The lack of commercial speech recognition applications and the continued proliferation of keyboards are a testament to just how hard speech recognition remains.
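If you want to see that ambiguity mechanically, here's a small sketch using NLTK with a deliberately ambiguous toy grammar of my own (not anything from the episode); a chart parser happily finds several structurally different readings of the same five words.

```python
import nltk

# A toy grammar in which 'time', 'flies', and 'like' can each play
# more than one grammatical role.
grammar = nltk.CFG.fromstring("""
S  -> NP VP | VP
NP -> Det N | N | N N
VP -> V PP | V NP PP | V NP
PP -> P NP
Det -> 'an'
N  -> 'time' | 'flies' | 'arrow'
V  -> 'time' | 'flies' | 'like'
P  -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("time flies like an arrow".split()):
    print(tree)  # one tree per reading of the sentence
```

Each printed tree is a different grammatical reading of the same five words — exactly the kind of ambiguity a speech system has to resolve before it can claim to understand anything.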
While artificial intelligence has made some progress, the pace has been disappointing at best. So we may have to wait a long time before we meet HAL. For all our engineering prowess, we have yet to unlock the mystery of human intelligence. Will we ever? Only time will tell.
I'm Andy Boyd, at the University of Houston, where we're interested in the way inventive minds work.