Wittgenstein’s ‘form of life’ construction, one which has addled my brain for over a year now, is a philosophical device that allows us to think about life, and what it means, in a layered and constructed way. Human beings, in their pure essence, are not really a form of life, but merely a life-form, shorn as they are of context and relativity. Take a human and strip away everything that is non-essential for the preservation of mere existence – legs, arms and so on – and then replace the organs vital for maintaining that state of existence with machines: a mechanical heart, even those parts of the brain that are not strictly required, such as those controlling motor functions. There is very little in the bare, denuded essence of man that is in any respect a form of life. It is mere existence, presence; it may even be argued that while rational potential exists, reason does not, as that potential has no access to nurturing functions. It is only when the human interacts with the outside world, with the world that exists beyond consciousness and the self, that she becomes a form of life.
I had the pleasure this week of addressing the Royal Irish Academy on the subject of Digital Citizenship, which, rather than addressing the narrow construct of the person–state relationship, took in a broad sweep of life in the digital world. Part of a series themed around the constitution, it focused on growing up as a teenager in the digital world, data protection (there were a lot of lawyers in the room, not least my fellow presenter Oisín Tobin of Mason Hayes & Curran), privacy, artificial intelligence and the politics of all that. In two hours, it was a hurried skip across disciplines and dystopias, which illustrated in equal measure the interest and enthusiasm people have for addressing the issue (there was barely a seat left in the hall), and the strange paucity (it seems to me, at least) of opportunities to pursue in particular the ethical, policy and political implications of our digital lives. Convened by Dr John Morrison, the Academy Chair of the Ethical, Political, Legal and Philosophical Committee, and expertly chaired by Dr Noreen O’Carroll, perhaps this is the beginning of an attempt to address that.
Shoshana Zuboff’s ‘Big Other’ and ‘Surveillance Capitalism’ as Future Economic Models
Shoshana Zuboff’s recently published article on what she has termed Information Civilization is a compact and helpful analysis of the kind of internet economies that are emerging in the early twenty-first century. This blog post is a commentary on that text. She takes Google’s Chief Economist Hal Varian as her foil, referencing his two articles Computer Mediated Transactions (2010) and Beyond Big Data (2013).
The Irish Data Protection Commissioner (DPC) recently doubled its budget, and is busy hiring and building capability. It’s an encouraging sign; the function had been significantly under-resourced in recent times, but one wonders whether more needs to be done. The DPC is currently responsible for three areas – privacy in relation to internet services companies like Facebook and Google; privacy in relation to state organisations like the Gardaí; and privacy in relation to private national companies that hold data. That all three domains are vested in this single organisation says something about the breadth of work the office has to take on. But nothing in its mandate suggests a role in commercial or security issues, for which there is no competent authority in the state, and certainly no strategy to address them.
So let’s say the State becomes a platform, like we talked about in the last post. In order to participate in the State, in order to pay taxes, and get educational accreditation, access healthcare, and to get licensed to own dogs, own a gun, or drive a car, you need to subscribe to the platform. Let’s say then that the platform allows for commercial entities to participate, to advertise their wares on the State Platform, to ‘compete’ for consumer attention based on big data analysis of citizen behaviour and experience. What are the other things that are happening with technology that impact upon the evolution of the state?
Here at StateLegitimacy.com, we’re interested in two things. First, how we measure legitimacy, and how legitimacy is constructed, and second, how technology impacts on legitimacy. We’re going to ask the question: could Rousseau’s Social Contract be implemented in technology? What if the state became a platform?
The New York Times and the Guardian have been digging ever deeper into the activities of the US National Security Agency (NSA) following Edward Snowden’s leaking of information about how it was spying both on countries and on ordinary people at home. Hot on the heels of the Chelsea Manning and Wikileaks diplomatic cables episode, there has been a constant flow of stories reporting on the nefarious activities of spooks and governments, embarrassing opinions, and the mechanisms by which international diplomacy and spying are conducted, though Wired Magazine had got there first. There are numerous angles to all of this. There is the technology problem: an Orwellian, Kurzweilian post-humanist dystopia where technology trumps all, and big data and analytics undermine or redefine the essence of who we are, forcing a kind of re-evaluation of existence. There is the human rights problem: balancing the right to privacy and – generally speaking – an avoidance of judgement of the individual by the state, against the obligation to secure the state. The issue is complex – if, for example, we have the ability to know, to predict, to foretell that some people are going to do bad things, but we choose not to use it because doing so would also require predicting which people were going to do not-bad things, and therefore invade their privacy, is that wrong? Many people asked after 9/11, ‘why didn’t we see this coming?’ Which leads to the question – if you could know all that was coming, would you want to know?
As mentioned in my last post, Zizek identifies four apocalyptic antagonisms that threaten the liberal democratic status quo. They are ecology, technology, property and equality. In relation to the technological post-human dystopia, Zizek attributes a leadership role to Ray Kurzweil, a noted thinker in technology futurism. There are two kinds of post-humanism, it appears – a kind of robotic, artificial intelligence future as described in the fiction of Asimov and the Terminator movies, and a bio-genetic technological Armageddon with which I’m less familiar.
On the plane to New York I was reading an interesting article in the Economist on The Politics of The Internet, which asked whether Internet activism could develop into a ‘real political movement’. It was an interesting sentence construction, one that presupposed how politics should work, and overlooked the possibility that the real effect of significant change may lie not within the system – in the form of a political party, even one that spans borders – but with the system itself. For example, open source software should not succeed at all on market-based assumptions of equity distribution. It succeeds in spite of the system, not because of it. At the same time, I’m reading Zizek’s First as Tragedy, Then as Farce, notwithstanding his pathological fear of footnotes.
A data scientist at Twitter, Edwin Chen, has used Twitter to measure the prevalence of the term ‘soda’ versus ‘pop’ or ‘coke’ across the US, and the world. He compares his work to a survey-based study done ten years previously, which reveals slight changes over time but essentially concurs with Chen’s conclusions. In order to arrive at the data set, Chen had to clean the data by removing extraneous references: mentions of specific branded drinks – like Coca-Cola – were eliminated, so that only generic references to soft drinks were counted. That left him with a pretty accurate picture of usage among Americans on Twitter – and let’s presume for now that that’s a statistically representative sample.
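That cleaning step – dropping brand mentions and counting only the generic terms – can be sketched in a few lines of Python. This is a hypothetical illustration, not Chen’s actual pipeline: the sample tweets, the brand pattern and the term list are all invented for the example.

```python
import re
from collections import Counter

# Invented sample tweets standing in for a real Twitter data set.
tweets = [
    "grabbing a soda before the game",
    "want a coke with that?",
    "I love an ice-cold Coca Cola",  # brand mention: excluded
    "pop and chips for the movie",
    "diet coke is my weakness",      # brand-qualified: excluded
]

# The generic terms whose prevalence we want to measure.
GENERIC_TERMS = ("soda", "pop", "coke")

# Hypothetical pattern for brand-specific references that would
# skew a count of generic usage.
BRAND_PATTERN = re.compile(r"\b(coca[ -]?cola|diet coke|cherry coke)\b", re.I)

def term_counts(tweets):
    """Count generic soft-drink terms, skipping brand-specific tweets."""
    counts = Counter()
    for tweet in tweets:
        if BRAND_PATTERN.search(tweet):
            continue  # drop extraneous brand references
        for term in GENERIC_TERMS:
            if re.search(rf"\b{term}\b", tweet, re.I):
                counts[term] += 1
    return counts

print(term_counts(tweets))
```

Run against the sample above, each generic term is counted once and the two brand-specific tweets are discarded; a real pipeline would also need geolocation to map the terms regionally, which is where Chen’s analysis gets interesting.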