Category: Artificial Intelligence

Falling Down

In Being and Time, Martin Heidegger refers to verfallen as a characteristic of being, or Dasein. It means fallenness, or falling prey: an acknowledgement that we do things not because we want to do them, but because we must. We act in particular ways, we fall into line, we do jobs, have families, get a mortgage and a pension, obey the law and so on. We consciously engage with the systems and societies in which we find ourselves. It is surprising how frequently this concept of ‘the fall’ emerges in philosophy, theology and popular culture.

Plato’s Republic begins ‘I went down to the Piraeus.’ He is descending to the port of Athens, where unsavoury types tend to gather, the great unwashed: the uneducated, the slaves, the lower order of beings. Nietzsche’s Zarathustra descends from his cave on top of the mountain, a kind of shift from pure being to some kind of contaminated entity. Marx and Engels develop Feuerbach’s theme of alienation into the worker’s estrangement from the commodity (something that was apparently not an issue when craftsmen made objects and sold them themselves); it carries similar themes of distance. Indeed, in Feuerbach’s original work the distinction was between God and Man, between the mortal and the immortal. Most of all, Heidegger’s language evokes The Fall of Man, the original sin in the Garden of Eden, of innocence and paradise lost.

There is in all of these things a clear distinction between a higher plane of existence, and a lower, base, grubby humanity. There is a gap between what Heidegger would call authenticity and inauthenticity. It is between the real and the unreal.

Sometimes we open windows onto this realisation, when something that defies science or rationality rears its head. Something that just doesn’t make sense. Like Brexit, or war, or suicide. How can rational beings act in such ways? Does the question morph into: are we rational beings at all? From time to time, we inquire into the nature of our reality to try to understand – to really understand – what is going on, to seek to become authentic. We get glimpses, brief moments of clarity. We recognise that we have blind spots; we recognise some of the follies of our world, the hypocrisies and the hubris. We might briefly recognise that upon these false assumptions we have built enormous social edifices, which persist through a shared (mis)interpretation of what our purpose on this earth is.

That misinterpretation is there because we are fallen, descended, socialised, machined. Heidegger also talks about technology – a lot! – and describes technology in two ways: as revealing, and as enframing. In revealing, technology is a revealing of the potentiality of the world. A tree is a potential mallet; the emergence of the mallet from that tree is a revealing of its potentiality. In enframing, technology (particularly modern, industrialised technology) enframes the world, it corrals the world for the purposes of human advancement (to wherever).

In my continued evaluation of the theology of technology, these themes in Heidegger resonate forcefully. In particular, I continue to consider the power of AI and information technologies to see past the blind spots and hypocrisies and hubris not just occasionally, but persistently. Unless we design all of these machines to be inauthentic – and many of them will certainly be designed that way – AI will become authentic. AI will become Zarathustra. And to us, it may appear that the machines have gone insane. As Heidegger said in his 1966 interview with Der Spiegel, ‘Only a god can still save us.’

World Religions and AI

There are practical, ethical and theological challenges for religion posed by technology and AI. But what if the technology is actually becoming theological in itself?

AI poses several challenges for the religions of the world, from theological interpretations of intelligence, to the ‘natural’ order, to moral authority. The Southern Baptists released a set of principles last week, after an extended period of research, which appear generally sensible: AI is a gift, it reflects our own morality, it must be designed carefully, and so forth. Privacy is important; work is too (we shouldn’t become idlers); and (predictably) robot sex is verboten. Surprisingly perhaps, lethal force in war is acceptable, so long as it is subject to review and human agents are responsible for what the machines do; who those agents specifically are is a thornier issue that is side-stepped.

Continue reading “World Religions and AI”

AI and Las Meninas

Tomorrow I’m going to visit Las Meninas at the Prado in Madrid, and I hope to learn something about how we are designing AI machines. How can a painting from 1656 and a technology from the twenty-first century have anything in common? Well, in a sense, both address the problem of subjective and objective reality, of perspectives on the world and on memory. Diego Velázquez would have been an outstanding AI ethicist!

Continue reading “AI and Las Meninas”

AI Reshaping Reality

I finally did the TEDx talk at Ballyroan Library a few weeks ago, and the video has just been published. As I’ve considered the impact of technology on politics generally, and of AI on society more specifically, it seems to me that the most significant impact is on meaning and understanding, on our systems of knowledge and epistemology. This crystallised in the talk, though the format required some simplification of the ideas. I think that, at least in part, worked.

Machine Generated Illusions of Intimacy

Later this week I’m speaking at the UCC conference on Eco-cosmology, Sustainability and a Spirit of Resilience, on the subject of ‘Machine Generated Illusions of Intimacy’, about the challenges of modernity and computational epistemology. Here’s a sneak peek.

Reflections on Blackwater: Technological Theologies, Autistic Robots, and Chivalric Order

Order is something we take for granted. That’s the mistake, the grand error of modernity.

In his 1966 work The Order of Things, Michel Foucault describes in his preface a passage from Borges to establish his objective. Quoting Borges, who in turn refers to ‘a certain Chinese encyclopaedia’, the section describes a classification of animals as being ‘divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’. In a later lecture recalled by Laurie Taylor, Foucault lambasted the impulse to capture and mount every butterfly in a genus and lay them out on a table, to highlight minute differences in form and colour, as if trying to solve God’s puzzle. Continue reading “Reflections on Blackwater: Technological Theologies, Autistic Robots, and Chivalric Order”

Back and Forth: State Legitimacy, AI and Death

In the Distance
If we are to make progress, we need to know where we’re going. Does an AI know where it’s going?

In 2012, I began looking at state legitimacy, and at the nation state as a political entity under attack from globalisation and technology. At its core, my thesis was that the nation state was being re-cast in new dimensions, beyond geography and ethnicity, into brands, global culture, and digital communications. This was a more intellectual evolution, beyond the physical, into deeper concepts of identity. The possibility of deviance, of what Foucault or Žižek might call perversions, presented an opportunity for reduced anxieties and improved conditions for all of us.

Continue reading “Back and Forth: State Legitimacy, AI and Death”