
The Berinmo people of Papua New Guinea are a small tribe with a curious flourish in their language. Specifically, they do not distinguish between green and blue, yet they have two separate words for types of yellow. This has deep implications for those who argue for a consistent and objective real (universalists), and for the ultimate possibility of artificial intelligence. We'll come back to the Berinmo and color linguistics later.

While strategies for the avoidance of bias in AI focus on the injury of minority oppression and design failings rooted in creator preference, a deeper semantic analysis of some of the fundamentals of AI reveals several foundational assumptions that give cause for concern. Simply put, the fundamental task of an AI is to construct an image of the world within the parameters of its design (from narrowly defined chatbot engines to Artificial General Intelligence, or AGI), which in turn establishes the context for automatic machine decisions to be made. In order to arrive at that image of the world – the simulated real – there are several intermediate layers, each of which introduces a risk of misinterpretation. This article will walk through each of them, and consider where some of those challenges might lie. But first, Heidegger.