The urban myth that our devices (the phones, computers and social media apps we live our lives by) are listening to us has evolved from a paranoid old wives’ tale into a phenomenon that feels increasingly real. Everyone seems to have their own freakish ‘coincidence’: a private conversation about something like a bunion is followed by a series of pop-up advertisements for obscure bunion remedies, or a one-off comment about a child’s birthday party unleashes a stream of sponsored posts for freelance clowns into your feed. The thought of being listened to, recorded and having personal conversations exploited seems like something straight out of the Cold War.

So is it actually happening the way we imagine? The short answer is yes, although this doesn’t mean a shady figure is sitting in a dark room somewhere, headset on, taking notes. The most confronting aspect of this issue is how innocuous information that was once known to you and you alone can become a statistic in targeted advertising: a small but significant part of the larger demographic picture that companies like Facebook have at their disposal.

But is this okay? The question of where morality fits into the overall structure of social media and new technology is a puzzling one. In a post-Cambridge Analytica world, regulation has been at the forefront of discussion around policy change. People don’t want to feel like they’re being listened to, and in a society where democratic freedoms dictate a basic level of privacy, eavesdropping phones seem downright Machiavellian. But while those running social media companies argue that the work they are doing is inherently “good”, it will take more than minor policy change to wring from them any acknowledgement of their moral shortcomings.
The change needed to ensure full transparency is at the paradigm level. The business model built around Facebook, Instagram and phones themselves operates on a currency of attention. The more attention they can draw from us, the more clicks they get. The more clicks they get, the more the artificial intelligence at play can learn. The more it learns, the better it becomes at manipulating our newsfeeds to keep us scrolling for longer. And while it might sound like a theory spouted by survivalists, how many times have you opened Facebook to ‘quickly check something’ before realising that 45 minutes have gone by and you’re onto your 18th cat video?

This combination of information gathering and tactical deployment of content is the main way these devices are ‘listening’, and the predictive power of the technology is more advanced than you might think. According to an article published by the ABC, the algorithm could ascertain a person’s character better than an average co-worker from 10 Facebook likes, better than a friend from 70 likes, more accurately than their parents from 150 likes, and better than a partner from 300 likes. Being able to predict behaviour lets corporations spread marketing material with pinpoint accuracy, and leaves users open to being targeted on the basis of their demographic. As author Zadie Smith noted in her essay Generation Why?: “It genuinely doesn’t matter who you are [on Facebook] as long as you make choices… To the advertisers, we are our capacity to buy, attached to a few personal, irrelevant photos.”

Last year, a series of leaked documents revealed how Facebook pitched to advertisers its ability to identify when teenagers in Australia were feeling “insecure”, “stressed” or “anxious”. According to internal Facebook data, the social media juggernaut could predict mood shifts precisely enough to know which day of the week a young person would be feeling their best or worst, and tailor content accordingly.
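To make the likes-based prediction less abstract, here is a deliberately tiny sketch of the underlying idea: each liked page carries a learned weight toward a personality trait, and a user’s score is simply the sum of the weights of their likes. The page names and weights below are invented for illustration; the real models behind those figures learn weights for millions of pages from millions of users.

```python
# Toy sketch of likes-based trait prediction. Each liked page carries a
# (hypothetical, hand-picked) weight toward an "extraversion" score; a
# user's score is the sum of the weights of the pages they have liked.
PAGE_WEIGHTS = {
    "party_planning_tips": 0.8,
    "stand_up_comedy_clips": 0.6,
    "quiet_reading_nooks": -0.7,
    "solo_hiking_trails": -0.5,
    "cat_videos_daily": 0.1,
}

def trait_score(liked_pages):
    """Sum the weights of a user's liked pages; unknown pages contribute 0."""
    return sum(PAGE_WEIGHTS.get(page, 0.0) for page in liked_pages)

outgoing_user = ["party_planning_tips", "stand_up_comedy_clips", "cat_videos_daily"]
reserved_user = ["quiet_reading_nooks", "solo_hiking_trails", "cat_videos_daily"]

print(trait_score(outgoing_user) > 0)  # leans extraverted
print(trait_score(reserved_user) < 0)  # leans introverted
```

The point of the sketch is how cheap the signal is: every extra like nudges the score, which is why accuracy climbs so steeply from 10 likes to 300.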
With algorithms able to accurately predict who we are and what we like, it isn’t too much of a leap to understand how social media could sway public opinion around something like an election. When those algorithms can capture our attention, manipulate our emotions and feed us targeted information about current affairs, culture and politics (factors that were hugely important to the ‘fake news’ phenomenon), the users can quickly become the used.
With personal A.I. assistants like Siri and Google Assistant now happily residing in our phones, awaiting the ‘hey Siri’ or ‘okay Google’ vocal command to pipe up with helpful information about the traffic or weather, there is a chance that, alongside data gathering, phone assistant systems and third-party apps are listening via trigger words designed to spur them into action. Major corporations like Google vehemently deny that their technology records anything on the sly, but according to cybersecurity expert Dr Peter Henaway, snippets of recorded information can be sent to apps like Facebook every so often, although exactly what the trigger words are remains unclear. Google recently demonstrated its new A.I.’s ability to call a restaurant, make a reservation in a natural-sounding voice and improvise according to the answers of the human on the other end of the line: a simultaneously impressive and off-putting exercise. On one level, it’s clear how these developments can improve our lives, streamlining tasks that used to take much longer and keeping us more connected (in some ways) to the world. But are we paying too high a price for convenience?
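The trigger mechanism itself is simple in principle, and a minimal sketch makes it concrete: the device keeps scanning short snippets of audio with a cheap always-on detector, and only a snippet that contains the wake phrase gets handed to the full assistant. Everything here is a stand-in; real systems run a small acoustic model on raw audio on-device, whereas this toy matches wake phrases in text.

```python
# Minimal sketch of a wake-word trigger loop. A cheap detector scans a
# rolling stream of snippets; only the snippet that fires the detector
# is passed on for full processing. Snippets here are plain strings
# standing in for short audio buffers.
WAKE_WORDS = ("hey siri", "okay google")

def detector_fires(snippet):
    """Stand-in for an on-device acoustic model: match a wake phrase."""
    return any(phrase in snippet.lower() for phrase in WAKE_WORDS)

def process_audio_stream(snippets):
    """Scan snippets in order; return the first one containing a wake word."""
    for snippet in snippets:
        if detector_fires(snippet):
            return snippet  # hand this snippet to the full assistant
    return None  # no trigger heard; nothing leaves the local buffer

stream = ["background chatter", "what's for dinner", "okay google what's the weather"]
print(process_audio_stream(stream))  # the snippet that woke the assistant
```

The privacy question in the paragraph above lives in the gap between this idealised loop, where untriggered audio is discarded, and the claim that snippets sometimes travel further than users expect.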
Tristan Harris, founder of the Center for Humane Technology (and a former Google employee), stresses the need for the whole system to be overhauled from the ground up. Comparing the “unsustainable” model to bad urban planning, he likens opening the home screen of our phones to walking out the front door in the morning and straight into a casino. His organisation maintains that a “cultural awakening” is needed, similar to the about-face that happened around tobacco once we all realised that cigarettes really were very bad for us. Acknowledging the huge amount of unchecked power wielded by Zuckerberg and his contemporaries, Harris is attempting to position himself as the conscience of Silicon Valley, educating people about how our devices are listening, why, and what we can do to minimise their manipulative pull. “Society’s capacity to solve its problems,” he told Nick Bilton, “is its ability to reflect on what’s important and change things by directing our attention properly.” If we don’t do something to reclaim autonomy over where and how our time is spent, other things (read: important things) will fall by the wayside.
If Aldous Huxley were to write Brave New World now, he wouldn’t have to reach too far into his imagination to wax lyrical about a society where progress at all costs has led to rampant technological advancement and a lamentable loss of humanity. Until something drastic changes, our phones will continue to gather and store information, listen for trigger words to build up marketable demographic groups and command our attention like nothing else. But if we are aware of the systems at play, if we know how susceptible we are, perhaps there’s a chance of us looking up from our scrolling for just long enough to do something.