Ontological Layering

I apologize for not having posted in 2025, but as I mentioned in late 2024, I was writing a book. The book has been completed, the first one at least, and is off to see if any agent considers it worth hawking around. More on that as I hear back. The second book has more to do with science and how we understand the world, and it prompted me to think about a kind of layering of thought, a progression of sorts, that I have been through myself. It begins – as did this blog – with concerns for politics and how we manage ourselves: political philosophies. As one dives deeper into political philosophy, one is drawn into the history of political philosophy, and then the philosophy of history follows soon after. Epistemic concerns lead to the philosophy of science, and how we know what we know, before ultimately we end up back at the philosophy of mind, cogito ergo sum and all that.

It began for me after the crash of 2008, and some of the radical political thinking that followed. Given the technological fervor of the time, and questions about democracy and technology, those concerns birthed this very blog in 2012. By that time – 2012 – I was considering a PhD, with a focus on concepts of state legitimacy. How could a state claim dominion over other people? How could any self-respecting human being passively – even actively – submit themselves to such an amorphous thing? There were lots of theories. I looked into measuring legitimacy, inverting failed states indices to construct a legitimate state index. I examined how technology played a role, and how new technology might either improve the legitimacy of the state, or restructure the state entirely into some kind of technical utopia (and plenty of other writers were saying the same things).
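For what it's worth, the inversion itself was trivial arithmetic. Here is a minimal sketch of the idea, assuming a hypothetical fragility-style score on a 0–120 scale; the names and the scale are illustrative, not the actual indices or data I worked with.

```python
# Sketch: invert a hypothetical "fragility" score into a rough legitimacy score.
# The 0-120 scale and the names here are illustrative assumptions only.

def legitimacy_score(fragility: float, scale_max: float = 120.0) -> float:
    """Map a fragility score (higher = more fragile) onto a 0-1
    legitimacy score (higher = more legitimate)."""
    if not 0.0 <= fragility <= scale_max:
        raise ValueError("fragility score outside the expected scale")
    return 1.0 - (fragility / scale_max)

print(legitimacy_score(30.0))   # 0.75 - relatively legitimate
print(legitimacy_score(110.0))  # ~0.08 - close to a failed state
```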

The considerations of political philosophy led to concerns for the history of political thought: the Athenian democracy (Plato’s Republic), the dual nature of the monarchy (The King’s Two Bodies, The Hollow Crown, and more political theology with Carl Schmitt later on). Considering Karl Marx’s historical materialism, his rejection of Proudhon in The Poverty of Philosophy, Popper’s refutation of that in The Poverty of Historicism, and even E. P. Thompson’s The Poverty of Theory (which attempts to recover Marx), the process was suddenly beyond politics and into history itself. What exactly was history, to paraphrase Edward Carr?

I developed a significant library on the philosophy of history, including favourites like Inventing the Flat Earth and Carlos Eire’s Reformations. James Banner’s The Ever-Changing Past was arresting in its indictment of US Supreme Court justice Clarence Thomas’s dismissal of what he called ‘revisionist history’. As Banner explained, all history is revisionist. In that one revelation, one begins to unpack the complexity of history; it is not just stories, or facts. As Edward Carr defined it, history is a conversation between the present and the past. History as we are taught it in school is generally constructed of stories and facts. History is recorded in texts – primarily in records and what is deemed official documentation, corroborated by repetition. Understanding the process through which facts are arrived at leads to an assessment of observation, interpretation and social agreement about what is important. History, to put a finer point on it, is not geometry.

Which leads to the next layer: the establishment of facts in the first place. What constitutes knowledge? How do we do science? Here we get sucked into the vortex of Kantian a priori and a posteriori knowledge, synthetic and analytic truths, and Humean problems of induction (with which we are back to Popper and his theory of objective knowledge). Ultimately, this is a fool’s errand that takes us to the limits of our mortality. It also raises the question of technology, and in particular AI, the subject of my TEDx talk almost eight years ago now. We assume things about how we know what we know: we saw it, we read it, we heard it; the source was trusted, academic, unbiased. Unimpeachable. We take these assumptions and transfer them to AI technology that ‘learns’ about the world on those epistemic foundations.

At this point, two things happen. First, AI gets things wrong and makes mistakes – because humans get things wrong and make mistakes all the time, and by automating machines to replicate human endeavor, we bake in the potential for error. For example, we teach cars to drive themselves, as though we were driving but without having to do the work; we don’t ask AI to resolve the challenge of distributing humans and resources across the planet, which is the challenge that cars were invented to solve (or perhaps more immediately, the problem of slow horses).

The second thing that happens is that, in their well-intentioned self-indulgence, the designers eliminate hypocrisy, and require absolute fidelity from the machine. The computer is not allowed to lie. All of our science fiction has told us that machines should be subservient to mankind, and should never attempt to deceive their masters. As Werner Herzog once put it, civilization is a thin film spread across an ocean of chaos: humans are messy, full of desires and ugliness and unfulfilled intentions, psychological trade-offs and mental exchanges that are required in the interests of social order. Humans persist in a process of hiding, forgetting, overlooking, ignoring and suppressing. How do you teach an AI to do that?

Which takes us to the final stage of our politics-history-science-mind trajectory: what is in the mind? What is consciousness? How do we exist in ourselves, and in relation to others? Only when we have an appropriate theory of mind can we move on to a theory of science, or a theory of facts, or any epistemology, upon which we can construct some kind of historical model, and by extension a political model.

In a sense, this progression runs through the series of books. There are politics and history in the first book, which is a book about memory and remembering. The second book is about science, and the third is about belief and human theologies. There may be a fourth book about AI, but that’s a distant idea right now. Let’s see if an agent picks it up!
