“…so many of the data scientists that are in work right now think of themselves as technicians and think that they can blithely follow textbook definitions of optimisation, without considering the wider consequences of their work. So, when they choose to optimise to some kind of ratio of false positives or false negatives, for example, they are not required by their bosses or their educational history to actually work out what that will mean to the people affected by the algorithms they’re optimising. Which means that they don’t really have any kind of direct connection to the worldly consequences of their work.”
“…we think that data people are magical and that they possess some kind of wisdom. What they actually have is technical ability without wisdom.”
“But again, I don’t care. Why? Because the failures are so obvious compared to those of most algorithms. Dead people by the side of the road constitute public tragedies. They make headlines. They damage companies’ reputations and market valuations. This creates inherent and continuous pressure on the data scientists who build the algorithms to get it right. It’s self-regulating nirvana, to be honest. I don’t care because the self-driving car companies have to care for me.
By contrast, companies that own and deploy other algorithms – algorithms that decide who gets a job, who gets fired, who gets a credit card and with what interest rate, who pays what for car or life insurance – have shockingly little incentive to care.
The problem stems from the subtlety of most algorithmic failures. Nobody, especially not the people being assessed, will ever know exactly why they didn’t get that job or that credit card. The code is proprietary. It’s typically not well understood, even by the people who built it. There’s no system of appeal, and often no feedback to improve decision-making over time. The failures could be getting worse and we wouldn’t know it.
“The young progressives grew up in a time when platform monopolies like Facebook were so dominant that they seemed inextricably intertwined into the fabric of the internet. To criticize social media, therefore, was to criticize the internet’s general ability to do useful things like connect people, spread information, and support activism and expression.”
The older progressives, however, remember the internet before the platform monopolies. They watched with concern as a small number of companies attempted to consolidate much of the internet into their for-profit, walled gardens.
To them, social media is not the internet. It is instead a force that is co-opting the internet – including the powerful capabilities listed above – in ways that will almost certainly lead to trouble.
The social internet describes the general ways in which the global communication network and open protocols known as “the internet” enable good things like connecting people, spreading information, and supporting expression and activism.
Social media, by contrast, describes the attempt to privatize these capabilities by large companies within the newly emerged algorithmic attention economy, a particularly virulent strain of the attention sector that leverages personal data and sophisticated algorithms to ruthlessly siphon users’ cognitive capital.
“Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models. And once their model morphs into a belief, it becomes hardwired. It generates poisonous assumptions, yet rarely tests them, settling instead for data that seems to confirm and fortify them. Consequently, racism is the most slovenly of predictive models. It is powered by haphazard data gathering and spurious correlations, reinforced by institutional inequities, and polluted by confirmation bias. In this way, oddly enough, racism operates like many of the WMDs I’ll be describing in this book.”