There are practical, ethical and theological challenges for religion posed by technology and AI. But what if the technology is actually becoming theological in itself?

AI poses several challenges for the religions of the world, from theological interpretations of intelligence, to the ‘natural’ order, to moral authority. The Southern Baptists released a set of principles last week, after an extended period of research, which appear generally sensible: AI is a gift, it reflects our own morality, it must be designed carefully, and so forth. Privacy is important; so is work (we shouldn’t become idlers); and (predictably) robot sex is verboten. Perhaps surprisingly, lethal force in war is acceptable, so long as it is subject to review and human agents are responsible for what the machines do; exactly who those agents are is a thornier question that the principles side-step.

Edward Snowden: his revelations (though not new) have launched an avalanche of introspection and head-scratching.

The New York Times and the Guardian have been digging ever deeper into the activities of the US National Security Agency (NSA) following Edward Snowden’s leaking of information about how it was spying both on other countries and on ordinary people at home. Hot on the heels of the Chelsea Manning and WikiLeaks diplomatic-cables episode, there has been a constant flow of stories reporting on the nefarious activities of spooks and governments, on embarrassing opinions, and on the mechanisms by which international diplomacy and spying are conducted, though Wired Magazine had got there first.

There are numerous angles to all of this. There is the technology problem: an Orwellian, Kurzweilian, post-humanist dystopia where technology trumps all, and big data and analytics undermine or redefine the essence of who we are, forcing a kind of re-evaluation of existence. There is the human rights problem: balancing the right to privacy, and generally speaking the avoidance of judgement of the individual by the state, against the obligation to secure the state. This issue is complex. If, for example, we had the ability to know, to predict, to foretell that certain people were going to do bad things, but we chose not to use it because doing so would also require predicting which people were going to do not-bad things, and therefore invading their privacy, would that be wrong? Many people asked after 9/11, ‘why didn’t we see this coming?’ Which leads to the question: if you could know all that was coming, would you want to know?