The problem is that digital design isn’t cynical enough.
First, the Internet is an amoral force that reduces friction, not an inevitable force for good. Second, sometimes different cultures simply have fundamentally different values. Third, if values are going to be preserved, they must be a leading factor in economic entanglement, not a trailing one. This is the point that Clinton got the most wrong: money, like tech, is amoral. If we insist it matters most, our own morals will inevitably disappear.
Students’, educators’, and regulators’ critical resistance to edtech is likely to grow as we learn more about the ways it works, how it treats data, and in some cases how dysfunctional it is.
Increasingly, journalists are on to edtech, and they are feeding the growing sense of frustration and resistance by demonstrating that these technologies often don’t even do what they claim to do.
So, there is a rising wave of edtech resistance from a wide variety of perspectives—from activists to students, journalists to regulators, and legal experts to ethicists.
Without a grounding in theory or knowledge or ethics or care, the Silicon Valley machine rewards stupid and dangerous ideas, propping up and propped up by ridiculous, self-serving men. There won’t ever be a reckoning if we’re nice.
Source: HEWN, No. 321
we cannot presume that the adjective “open” is sufficient when it comes to re-orienting our technologies towards justice.
Source: HEWN, No. 321
The plutocrat-backed neoliberal technocracy is being manufactured at universities around the world, and its corrupt ideology is being laundered by publications and think tanks funded by these same, unethical billionaires. And plenty of folks look the other way because they’re more committed to being in networks with the “innovators” than they are in building a world that is caring and just.
Source: HEWN, No. 320
Change also means that the ideas and concerns of all people need to be a part of the design phase and the auditing of systems, even if this slows down the process. We need to bring back and reinvigorate the profession of quality assurance so that products are not launched without systematic consideration of the harms that might occur. Call it security or call it safety, but it requires focusing on inclusion. After all, whether we like it or not, the tech industry is now in the business of global governance.
“Move fast and break things” is an abomination if your goal is to create a healthy society.
A key question to building any software in the modern age is: “In the wrong hands, who could this harm?”
Decades ago, software seemed harmless. In 2019, when facial recognition is used to deport refugees and data provided by online services has been used to jail journalists, understanding who you’re building for, and who your software could harm, is vital. These are ideas that need to be incorporated not just into the strategies of our companies and the design processes of our product managers, but into the daily development practices of our engineers. These are questions that need to be asked over and over again.
It’s no longer enough to build with rough consensus and running code. Empathy and humanity need to be first-class skills for every team – and the tools we use should reflect this.
After watching a few reviews of the new Leatherman multi-tool line during a logged out session, YouTube served me far right conspiracy theories and fear mongering as ads.
Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?
Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.
I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.