Back and Forth: State Legitimacy, AI and Death

In the Distance
If we are to make progress, we need to know where we’re going. Does an AI know where it’s going?

In 2012, I began looking at State Legitimacy as a political entity under attack from globalisation and technology. At its core, my thesis was that the nation state was being re-cast in new dimensions, beyond geography and ethnicity, into brands, global culture, and digital communications. This was a more intellectual evolution, beyond the physical, into deeper concepts of identity. The possibility of deviance, of what Foucault or Žižek might call perversions, presented an opportunity for reduced anxieties and improved conditions for all of us.

The first stage was understanding the political philosophy of the moment. From Plato, through Hobbes, Locke and Rousseau, and the revolutions of the Enlightenment, emerged the republic, alongside the diminution of the monarchy and the church. Science and industry would drive our economies, not random favours in return for preferment. Corruption, as it were, would be frowned upon, and human rights would be asserted through the labour movement of the late nineteenth century, the independence movements of the early twentieth century, and the liberation movements of the later twentieth century.

The failures of Keynesian economics in the later part of the century brought about a shift in emphasis from the public to the private sector in the 1970s and 1980s, as neoliberalism and Hayekian economics assumed the stage. This was the ultimate manifestation of what my original thesis sought to understand: globalisation and technology combining to usurp conventional politics, establishing a new order for governance and progress. The markets would show the way!

While different political ideologies competed for dominance through the two hundred years since the French Revolution, art witnessed a progressive realisation of modernism. From Turner and the pastoral sublime, through impressionism and cubism, artists realised that the world was not as simple as it at first appeared. Turner sought to draw emotion and sensuousness from nature and the external world; Cézanne and the impressionists recognised that seeing was not linear; Picasso and the cubists understood the limitations of subject-object convention, and the blindness of base human perception. Increasingly there came an awareness of the complexity of reality, the depth of potential hidden in this universe.

In literature too, the romantic realism of Balzac was followed by the gritty realism of Dickens, but the industrial revolution could not stop at the mere elevation of peasant existences. Kafka and Joyce moved beyond the page – like the cubists – and beyond the expectations of the reader in linear, naturalist storytelling. The depths, the desires, and the demons in human nature were revealed in these works; the hypocrisy too! Not all writers arrived at a Nietzschean wasteland – indeed the inter-war writings of F. Scott Fitzgerald and Henry Miller, while rooted in a perhaps vacuous nihilism, were celebratory and sexy and exhilarating. The Great Depression and the Second World War put an end to that. How could man be so evil, so cruel? Try as we might to blame the Nazis, or the Germans, this was a global holocaust, one in which we all at some level realised our own complicity.

Marxist communism failed; socialism failed; communitarianism, green movements, anarchism and humanism all failed to establish any kind of foothold in the lives of people. The labour movement has not only been set back, but continues to retreat in the face of ‘the future of work’. And as we arrive at the AI moment, we are forced to ask the question: who are we? We must train these machines to behave as we do – but do we really want to reveal that? And how do we express it in any case? There are two courses of action open to us. First, we can enumerate the rules, the ethics by which we expect the machines to behave. Second, we can have the machines learn from us how we behave, and replicate that. The first option, however, has proven too difficult; computer scientists tried for years to achieve AI through rules, but there are simply too many. IBM’s Deep Blue, in defeating Garry Kasparov, the reigning World Chess Champion, in 1997, relied on brute-force search and hand-crafted rules rather than learning, and only barely got across the line. Today, the advances in AI are due to machine learning becoming the standard. Yet are we content for the machines to learn from us and behave as we do?

AI requires the skills of sociology, anthropology, and linguistics to achieve its ambition. The unifying nature of information technology – driving towards singular, universal objectives – carries a real danger that it may simply homogenise everything, and destroy diversity. This is an anti-ecological dystopia, an anomaly in neoliberal thinking. How can AI machines avoid this trap? Richard Thaler and Cass Sunstein’s 2008 book Nudge illustrates the problem of what happens once everything has been measured with the example of a school cafeteria: we know, categorically, that the positioning of certain foods influences what children eat. By systematically arranging the food, then, the cafeteria staff can ensure that the children eat either healthily, or less well. Once this information was known, it was a Pandora’s Box: there was no way to un-know it, and while prima facie the decision is straightforward – prioritise healthy food – it in effect denies choice through the application of systems engineering. Besides, while it may appear that healthy eating means carrots and not French fries, who is to define what eating well really means? Who sets the rules?

The cafeteria example within the construct of an AI shows how easy it would be to have us all eating carrots, and not just that: learning the same things (we must learn maths and engineering!), viewing the same things on TV (we must not be that person who has never seen Breaking Bad!), and driving the same cars. Society itself of course has a homogenising effect: we most often become like our fathers and mothers, like our peers and our countrymen; we inherit and we reflect culture. Yet its structure bears within it the mechanisms for deviance and perversion. How can AI tolerate deviation without attempting remedy?

The mechanist, scientific approach to AI is a natural extension of the industrial revolution. This is about rules and outcomes, about thriving, flourishing, and progress. Yet it remains somehow empty, soulless. Which takes us all the way back to the beginning. When the church was in its pomp, and the Divine Right of Kings remained intact, society was not dysfunctional. One could argue that there was oppression; that there was a good deal of lawlessness; and that the quality of life was not high, however one may have measured it back then. Were there things lost in the Industrial Revolution that should not have been forgotten? Are there things that we have today that many people would dismiss as bad, that would be missed should AI eliminate them? Who is to define what the rules of progress are, and hand them over to a machine?

Death may well be the thing that defines us more than anything else. We die young and we die old, we die of disease and we die traumatically. We mourn those who have died, and we prepare ourselves, insofar as we can, for death. Our ages are important to us in terms of family, careers, wealth, and education. Time and decay change us, we become different people, for better and for worse. All of this was the same at the time of Plato, just as it is today. In order to truly understand us, reflect us, and serve us, AI needs to understand death itself. I’m not sure it ever will.
