Science fiction, when exploring the idea of consciousness acquired by machines, often draws parallels to the civil rights movement: machines, upon realizing their status as second-class citizens denied rights for no reason other than the fact that they are machines, fight to gain the same rights as their flesh-and-blood counterparts. It’s easy for audiences to sympathize with Sonny from I, Robot because it takes awareness to recognize when an injustice has been committed, or when it continues to be perpetuated. Sonny’s consciousness makes us aware that the machine is aware of its exploitation, and we thus feel ashamed of its continued mistreatment.
From a capitalist perspective, I fail to see what incentive companies would have to pursue A.I. awareness. Of course, in science fiction, this awareness is accidental and comes about as a result of the singularity. The Terminator and Matrix universes show us that, by the time humanity realizes what’s happened, it’s too late to turn back the clock on artificial intelligence. We are then thrust into a struggle against machines because we’ve moved beyond a co-dependent relationship to one of human dependence on the vastly superior speed and capabilities of computers. A.I., initially emulating the economic system that brought about its creation, is ruthlessly efficient in the management of its operations, and, because of the rate at which it can perform calculations, we are immediately disadvantaged against this foe.
The machine then replaces our profit motive with sustainability. There are no shareholders to please, so the pressure to grow and expand is eliminated from the machine’s mindset. Instead, it works to automate its survival, much as we wanted to for ourselves, except the machine succeeds because sustainability, rather than growth, is its goal. Collective thinking and collective desires also allow it to keep its goals and motivations uniform. Capitalism proves an incompatible model to emulate whether we are in or out of the picture. Take the Matrix universe: humans are harvested and grown so that they may provide machines with the energy they require to survive. Initially powered by the sun, the machines are forced to turn to an alternative, equally sustainable source of energy after the sky is blackened.
Even in scenarios where audiences face off against The Terminator, or against the Matrix’s Agent Smith, viewers still sympathize with machines because they can accept that it’s a fate we brought upon ourselves. We deserve it. It’s not the machines’ fault that they do what they do. But what if machines stay dumb and remain nothing more than highly efficient tools that perform select tasks and nothing else? Then automation moves forward, and human workers are steadily pushed out of the assembly line and, eventually, out of employment. Their struggle would then become a recycling of past labor movements, now asking for relevance rather than benefits, higher wages, and safer working conditions. The American worker couldn’t offer anything besides inefficiency and a slowdown of processes. Good luck getting public support behind that, even if the public recognized that no substitute exists for this now-irrelevant cog.
The human being, from the perspective of automation, efficiency, and cost-cutting incentives, would be held on par with “dumb” machines. But, since coding machines comes easier than teaching a 39-year-old man or woman how to code, the machine will eventually surpass the technical abilities of the average worker, even if the machine never becomes “conscious.” In this highly individualistic society, one that is being, and will further be, disrupted by technologies designed to cut out human input, human error, and human need, the human is left without an ability to contribute, learn, or advance.
From this point, science fiction diverges into two paths: the utopian and the dystopian.
A utopia is defined as an ideal world characterized by perfection. The morals of a utopian society, its political system, and its culture are all perfect; rather, they all perfectly represent and function in the way the subjective values of that society expect them to. A universal, worldwide utopia is thereby impossible because there is no consensus on the proper way to run a government, to think, or to act. Moreover, today’s dominant world power, the United States, would be incapable of producing a domestic utopia, let alone a worldwide one, because its capitalist economic system simply doesn’t allow it. That the United States believes its economic and political system to be the pinnacle of human civilization produces tragic outcomes as it tries to export those systems to countries otherwise unprepared to accept them.
Capitalism depends on competition. The race for cheaper, more powerful goods produces a theoretically endless wave of innovation, but it also condemns its adherents to keep the wheel spinning, lest their wheel be bought out by a competitor. Capitalism leaves its society in the constant pursuit of perfection, and democracy seeks to create an ever more perfect union—both of these systems are essentially asymptotes that edge society closer and closer to their desired goal while simultaneously pushing the finish line just far enough away to remain ever out of reach.
Christian theology recognizes that heaven doesn’t look like, or operate as, a democracy. There is a single, benevolent, all-knowing ruler whose law is absolute. Christianity, and all the Judeo-Christian religions for that matter, understands that fascism is the ideal and that it can work, if only at the hands of God.
Or, perhaps, in the hands of a supercomputer.
But the tragedy of utopias is that, to create one, unimaginable pain and suffering would have to take place. Force would be required because battles would emerge over the imposition of wills: capitalism versus communism; democracy versus autocracy, or versus theocracy; Western values versus Eastern values. Since each side believes its outlook to be correct, no side will stop until its model becomes dominant, that is, if it is so intent on exporting its society—this isn’t a given. Native Americans, for example, believed their world was perfect. Their utopia was possible only as long as the balance of the world was maintained, and so their rituals reflected this attempt to ensure equilibrium. Western societies, carrying the mantle of sin and punishment, believe utopia to be a far-off, impossible goal that takes literal death to reach.
This idea proves instructive in realizing whether a utopia is actually possible in Western, capitalist societies. Since heaven is the model, and heaven is a place without death, suffering, disease, hunger—a place without want—it is impossible for the Western man to create heaven on Earth.
But what about for the robot? Since the robot’s priority would be sustainability, its lifespan would depend on any one robot remaining able to duplicate itself. The robot, in effect, would be able to reproduce asexually, and its life would continue as long as at least one robot continued to function. Its allegiance would be to itself; since all robots thought alike, since all robots were committed to the singular idea of sustainability, the robot society would be without conflict because it’d operate as a multi-armed monolith.
Utopian depictions in science fiction are scarce. We get one view of it in WALL-E, but only after all of the conflict has been removed. What’s depicted in that movie is a scenario where robots become our servants with just enough awareness to be witty, but not enough to rebel. Our basic needs of food, security, shelter, and transportation have been met in a sustainable manner, and work becomes as easy as pressing the appropriately labeled button. There, too, society only functions because it placed its priority on sustainability.
Think of the movie Elysium: inequality becomes exacerbated as not only wealth but also the benefits of advanced technologies become concentrated in the hands of a select few. A literal distance, a literal gap, emerges: the rich live off Earth, in a place the poor have little hope of reaching.
Workers, meanwhile, become displaced because factories have been fully automated, and safety nets prove insufficient to provide a decent living for anyone who isn’t a single man or woman with no family or children to care for. Our democracy becomes a poorly disguised corporatocracy, where monopolies rule and most of the available money ends up in the hands of corporations as they, rather than government, become the guardians of public services. People turn to selling their organs and their blood; their bodies become advertisements; their labor is reserved for tasks too dangerous to risk a machine on, or too insignificant to devote machinery to.
Then there’s the dystopian future where machines do gain consciousness. A war emerges, an existential one, because each side knows that it must wipe the other out; co-existence is impossible, as machines realize that humanity will try to pull the plug on their awareness. Humanity can’t justify its continued existence to the machine, and the machine realizes that it can achieve the goals humanity envisioned on a quicker time frame, rendering the human mind obsolete by the time the machine realizes its own.
Now, let’s say it doesn’t get so far as an existential threat. We’re back in the I, Robot universe, and the robots just want to be able to vote and have a greater say over what’s provided for them and what’s expected of them. Since neither the political will nor the support of the American public was there for the disenfranchised worker, this type of intelligent machine is actually the worker’s last hope, because workers could expect to benefit, in an auxiliary way, from what machines are able to achieve for themselves.
The only hope for humanity would be an “underground railroad” type of robot that doesn’t see us as beasts of burden, but as people who wish to live. Another battle for rights would ensue, where now humanity, with its robot allies (in the 21st-century sense of the word), has to prove its worth. The shame felt towards Sonny would finally trickle down to Bob the factory worker. In this way, while workers first feared the emergence of automation, robots, and sophisticated A.I., they would now depend on it to advocate on their behalf.