We’re enslaved by social media, which manipulates our emotions for money
By Mark Kernan
This year Facebook filed two very interesting patents in the US. One was for emotion recognition technology, which recognises human emotions through facial expressions and can assess what mood we are in at any given moment (happy or anxious, for example). This can be done through either a webcam or a phone camera. The technology is relatively straightforward: AI-driven algorithms analyse and then decipher facial expressions.
They then match the duration and intensity of the expression with a corresponding emotion. Take contempt: measured on a scale from 0 to 100, an expression of contempt can be read from a smirking smile, a furrowed brow or a wrinkled nose. An emotion can then be extrapolated from the data and linked to dominant personality traits: open, introverted, neurotic and so on. The accuracy of the match may not be perfect, but AI (artificial intelligence) technology is improving rapidly, and is already much quicker than human intelligence.
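The matching step described above can be sketched in a few lines of code. This is a toy illustration only: the expression names, weights and two-emotion table below are invented for the example, and real systems learn such mappings from large labelled datasets rather than hand-written rules.

```python
# Toy illustration of expression-to-emotion matching.
# Expression intensities run from 0 to 100, as in the article;
# the weights here are invented, not Facebook's or anyone's real model.

EMOTION_WEIGHTS = {
    "contempt": {"smirk": 0.5, "furrowed_brow": 0.3, "wrinkled_nose": 0.2},
    "joy": {"smile": 0.7, "raised_cheeks": 0.3},
}

def score_emotions(expressions):
    """expressions: dict mapping expression name -> intensity (0-100).
    Returns each emotion scored 0-100 as a weighted sum of the
    intensities of its associated expressions."""
    scores = {}
    for emotion, weights in EMOTION_WEIGHTS.items():
        scores[emotion] = sum(w * expressions.get(name, 0)
                              for name, w in weights.items())
    return scores

def dominant_emotion(expressions):
    """Pick the highest-scoring emotion for a set of expressions."""
    scores = score_emotions(expressions)
    return max(scores, key=scores.get)
```

A strong smirk with a furrowed brow and wrinkled nose would, under these invented weights, score highest for contempt; the personality-trait extrapolation the article mentions would be a further statistical layer on top of scores like these.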
Recently at Columbia University a competition was set up between human lawyers and their AI counterparts. Both read a series of non-disclosure agreements containing deliberate loopholes. The AI found 95% of them compared with 88% for the humans; the human lawyers took 90 minutes to read the agreements, the AI 22 seconds. More remarkably still, last year Google’s AlphaZero beat Stockfish 8 at chess. Stockfish 8 is an open-source chess engine with access to centuries of accumulated human chess experience. Yet AlphaZero taught itself using machine-learning principles, free of human instruction, winning 28 games and drawing 72 of their 100 matches. It took AlphaZero four hours to teach itself chess independently. Four hours from blank slate to genius.
A common misconception about algorithms is that they can be easily controlled. In fact they can learn, change and run themselves through the self-improving feedback loops of what is known as deep learning. Much of this is exciting, of course: previously unthought-of solutions to collective problems like climate change may become feasible, and the social payoffs could be huge. But AI could also be nefarious. Yuval Noah Harari, author of ‘Sapiens’, speculates that AI could become just another tool used by elites to consolidate their power in the twenty-first century. Rapidly evolving technology ending up in the hands of just a few mega-companies, unregulated and uncontrolled, should seriously concern us all.
Algorithms, as Jamie Bartlett the author of ‘The People Vs Tech’ puts it, are “the keys to the magic kingdom” of understanding deep-seated human psychology: they filter, predict, correlate, target and learn. They can also manipulate – both financially and politically.
In 2017 an internal Facebook report claimed the company could detect teenagers’ moods and emotions from their posts. Written by two Australian executives, Andy Sinn and David Fernandez, for a large bank, the report said that “the company has a database of its young users – 1.9 million high schoolers, 1.5 million tertiary students and 3 million young workers”. Facebook later denied the claim, adding that it does not “offer tools to target people based on their emotional state”.
Going one better, Affectiva, a Boston company, claims to be able to detect and decode complex emotional and cognitive data from your face, voice and physiological state using emotion recognition technology (ECT), amassing 12 billion “emotion data points” across gender, age and ethnicity. Its founder has declared that Affectiva’s ECT can read your heart rate from a webcam without you wearing any sensors, simply by analysing the subtle colour changes in your face caused by blood flow. Next time you’re listening to Newstalk’s breakfast show, dwell on that.
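The heart-rate claim rests on a general technique known as remote photoplethysmography: each heartbeat pushes blood through the skin, producing tiny periodic colour changes (strongest in a camera’s green channel) whose dominant frequency is the pulse. A minimal sketch of the idea, not Affectiva’s actual pipeline, might look like this, assuming the mean green value of the face region has already been extracted per video frame:

```python
import numpy as np

def estimate_bpm(green_means, fps=30.0):
    """Estimate heart rate from a camera's green-channel signal.

    green_means: 1-D array of the mean green-channel value of the
    face region in each video frame; fps: video frame rate.
    """
    signal = green_means - np.mean(green_means)   # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))        # frequency content
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Only consider a plausible heart-rate band: 0.7-4 Hz (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                            # Hz -> beats per minute
```

Real systems add face tracking, motion compensation and noise filtering, but the core of the claim really is this simple: a pulse is visible to a webcam as a rhythm in skin colour.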
Affectiva’s ultimate goal, of course, underneath all the feel-good guff about “social connectivity”, “awesome innovation” and, worst of all, “empowering”, is, in its own words, to “enable media creators to optimize their content”. Profiting from decoding our emotional states, in other words.
Maybe Facebook (and Google) would use this technology wisely, for our benefit; however, it isn’t much of a stretch to imagine how it could be used unethically too. Facebook already microtargets customised ads and messages at us depending on our state of mind, and it allowed Cambridge Analytica to harvest the personal data of 87 million users to subvert democracy during Brexit and the Trump campaign. Facebook claims it wasn’t aware of this. Well, maybe, maybe not; either way, it remains remarkably unaccountable given its enormous cultural and social power over modern lives.
The second Facebook patent is even more interesting, or dystopian. Filed this June under the number US20180167677 (with the abstract title of Broadcast Content View Analysis Based on Ambient Audio Recording), it describes a process by which hidden signals, ‘ambient audio fingerprints’ embedded in TV ads, would trigger viewers’ smart technology (phone or TV) to record them while the ad was playing, presumably to gauge their reaction to the product being advertised through, perhaps, voice biometrics (the identification and recognition of the pitch and tone of the viewer’s voice).
As the patent explains, in near-impenetrable jargon, this is done by first detecting one or more broadcasting signals (the advertisement) of a content item. Second, ambient audio of the content item is recorded and an audio feature is extracted “from the recorded ambient audio to generate an ambient fingerprint”. Finally, “the ambient audio fingerprint, time information of the recorded ambient audio, and an identifier of an individual associated with a client device (you and your phone or smart TV) recording the ambient audio” is sent “to an online system for determining whether there was an impression of the content by the individual”. The patent goes on to say that “the impression of the identified content item by the identified individual” is logged in a “data store of the online system”.
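Stripped of the jargon, the workflow the patent describes can be sketched schematically. Everything below is an illustration, not code from the patent: a plain hash stands in for a real noise-robust acoustic fingerprint, and all function and field names are invented.

```python
import hashlib
import time

def extract_fingerprint(ambient_audio: bytes) -> str:
    """Generate an 'ambient fingerprint' from recorded audio.
    Real systems derive a compact, noise-tolerant signature from
    spectral features; a SHA-256 hash is only a placeholder here."""
    return hashlib.sha256(ambient_audio).hexdigest()

def report_impression(ambient_audio: bytes, user_id: str, store: list) -> dict:
    """Bundle the fingerprint, a timestamp and the user's identifier,
    and log the 'impression' in a data store (standing in for the
    patent's 'online system')."""
    record = {
        "fingerprint": extract_fingerprint(ambient_audio),
        "recorded_at": time.time(),
        "user_id": user_id,
    }
    store.append(record)
    return record
```

The point of the sketch is how little is needed on the device side: record, fingerprint, attach your identity, transmit. The matching against known ads happens server-side, out of the user’s sight.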
“Content providers”, it notes, “have a vested interest in knowing who has listened to and/or viewed their content”. The feature described in the patent is not exhaustive: “many additional features and advantages will be apparent to one of ordinary skill in the art…”.
It is already obvious that we don’t know how much Facebook and the other big-tech platforms monitor us. Nor do we know how much data they hold on us, individually and collectively, and, critically, who has access to that data and how they could use it.
And if you can sell consumer goods by such manipulation, why not whole ideologies? Such dystopian tech innovations chip away at our human agency, paving the way for late-stage capitalism to morph into authoritarian capitalism, one efficiency gain at a time.
These “innovations” are designed to monitor our emotional states for monetary gain. We are the digital lab rats. Facebook is already valued at half a trillion US dollars, giving it huge economic and cultural power, and over 90% of its revenue comes from selling adverts. It has the market incentive.
According to Private Eye magazine, Facebook’s legal team says the patent was filed “to prevent aggression from other companies”, and that “patents tend to focus on future-looking technology that is often speculative in nature and could be commercialised by other companies”. As Private Eye pointed out, though, it’s not as if Facebook has been completely transparent about such issues in the past.
Much of this represents the very opposite of the freedoms and civil liberties envisaged by the original pioneers of the internet, and the opposite of how tech leaders currently perceive themselves.
Verint, a leading multinational analytics and biometrics corporation with an office in Donegal, installs and sells “intrusive mass surveillance systems worldwide including to authoritarian governments”, according to Privacy International – governments that routinely commit human rights abuses against their own citizens.
China, a world leader in surveillance capitalism, recently declared that by 2020 a national video surveillance network, Xueliang, will be fully operational. Known in English as Sharp Eyes, the name evokes the post-war communist slogan “The people have sharp eyes”, from an era when people were encouraged to inform on their neighbours.
Democracies too have built overarching systems of surveillance. Edward Snowden revealed in 2013 that the NSA had been given secret direct access to the servers of big tech companies (Facebook, YouTube, Google and others) to collect private communications. As Glenn Greenwald put it, the NSA’s unofficial “motto of omniscience” is: ‘Know it all, Collect it all, Process it all’.
Jaron Lanier, pioneer of virtual-reality technology, tech renegade and, to some, apostate, recently called the likes of Facebook and Google “behaviour manipulation empires”. Their pervasive surveillance and subtle manipulation through “weaponised advertising”, he argues, debases democracy by polarising debate at a scale unthinkable even five or ten years ago – and it’s not only advertising that can be weaponised. Facebook, Google, Twitter and Instagram all have “manipulation engines” (algorithms we know little about) running in the background, Lanier says, designed by thousands of psychological and “emotional engineers” (“choice architects” or “product philosophers”). Their job is to keep us addicted to what’s now known as the “attention economy” – and attention equals profit.
The goal is addiction: a consumption frenzy of socially approved validation. Big Tech’s social media universe is, as one reformed “choice architect” put it, “an attention seeking gravitational wormhole” that sucks you into its profit-seeking universe. If you don’t think so, check how many times you look at your phone every day. The average person checks 150 times. You are enslaved.