Director: Steven Spielberg
Writer: Steven Spielberg
DP: Janusz Kamiński
Editor: Michael Kahn
Score: John Williams
Producer: Kathleen Kennedy, Steven Spielberg, Bonnie Curtis
Starring: Haley Joel Osment, Frances O'Connor, Jude Law, William Hurt
Distribution: Warner Bros. Pictures
Length: 2 hrs. 26 min.
The film opens with Professor Allen Hobby, a designer of androids played by William Hurt, speaking to a room full of people. He says things that sound heady and philosophical, but eventually we realize that he is highlighting the potential benefits of his newest creation for his investors; he's making a sales pitch. The camera drifts across the room from above, peering down at Hurt's character between the investors' silhouetted heads. This image, a picture of practicality and economy taking precedence over questions of philosophy and morality, serves as an effective jumping-off point for the rest of the film.
Why? On the surface, A.I. Artificial Intelligence appears to discuss the same moral and metaphysical concepts that a multitude of other American science fiction films have discussed. However, the film's visual language undercuts the conversations the characters have about whether or not an artificial being can truly attain a human level of consciousness and emotion. In scenes early in the film, David, the android child played by Haley Joel Osment, is cast in shadow while the humans in the room are not, highlighting his "inhumanity" as he awkwardly attempts to imitate human behavior. But then the humans leave the room to discuss their uncertainty about the reality of David's emotions, and when the camera follows them, they too are cast in shadow; an image that at first alienated David must be reconsidered as an image of understandable, human uncertainty.
At one point, David asks his adoptive mother if she will someday die. She tells him she will, and tries to explain it to him briefly before being cut off. She doesn't get to say much, but there's a moment in which she contemplates her own mortality. In this moment, we see not her but her reflection: the use of a reflection combines distance with intimacy, in the same way that the film's extremely lifelike androids do.
What the film posits is that imitations of human life are packaged and sold under the pretense of offering a solution to loss and interpersonal conflict. Androids are attempts to capture and understand humanity in a way that eliminates imperfection and compensates for worldly suffering. So, when a human broaches the subject of her own mortality, we see a reflection: a simulacrum that recalls the nature of the androids.
But this is an untenable state of affairs. Whether David is human or not, he was programmed to love, and therefore longs to be human. He tries to act in such a way that his position and effect in the world will be recognizably human, which quickly becomes a complicated and unpredictable task that brings an even greater uncertainty.
The point is that selling the concepts David embodies necessitates selling humanity itself. The film suggests that commodifying people's basic needs and anxieties is a cruel, twisted form of instrumentalization that results only in greater confusion and deprivation, and possibly tragedy.
The ending is frequently maligned, but more than anything else in the film it serves to highlight both the destruction wrought by the system under which David lived and the potential for a better one. Taking that into account, it's appropriate that it should directly reference the ending of 2001: A Space Odyssey: that film, after all, ended with the suggestion that technology could help humanity make progress, but could equally end in destruction. Both films close with a call for caution, deliberation, and engagement with the real world.