A key question when building any software in the modern age is: “In the wrong hands, who could this harm?”
Decades ago, software seemed harmless. In 2019, when facial recognition is used to deport refugees and data provided by online services have been used to jail journalists, understanding who you’re building for, and who your software could harm, is vital. These are ideas that need to be incorporated not just into the strategies of our companies and the design processes of our product managers, but into the daily development processes of our engineers. These are questions that need to be asked over and over again.
It’s no longer enough to build with rough consensus and running code. Empathy and humanity need to be first-class skills for every team – and the tools we use should reflect this.
After I watched a few reviews of the new Leatherman multi-tool line in a logged-out session, YouTube served me far-right conspiracy theories and fear-mongering as ads.
Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?
Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.
I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.
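A minimal, synthetic sketch of that “judged by who it thinks you’re similar to” failure mode (my illustration, not from the interview): a model trained only on data near the norm still forces an outlier into one of the classes it knows, often confidently. The clusters, labels, and the scikit-learn classifier here are all illustrative assumptions.

```python
# Hypothetical illustration: a classifier trained only on "the norm"
# has no way to flag someone unlike anyone it has seen; it just maps
# them onto whichever norm is nearest. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two tight clusters stand in for the well-represented majority.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))  # label 0
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))  # label 1
X = np.vstack([group_a, group_b])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# An outlier far from both clusters -- someone the system has never
# seen anyone similar to. The model still assigns a label, and the
# probability it reports says nothing about how unfamiliar the input is.
outlier = np.array([[10.0, -4.0]])
print(model.predict(outlier))        # forced into class 0 or 1
print(model.predict_proba(outlier))  # typically near-certain either way
```

Nothing in the sketch fails loudly; the unfair treatment shows up only as a confident answer to a question the model was never equipped to ask.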
We accelerated progress itself, at least the capitalist and dystopian parts. Sometimes I’m proud, although just as often I’m ashamed. I am proudshamed.
I miss making things. I miss coding. I liked having power over machines. But power over humans is often awkward and sometimes painful to wield. I wish we’d built a better industry.
I believe the relative ease — not to mention the lack of tangible cost — of software updates has created a cultural laziness within the software engineering community. Moreover, because more and more of the hardware that we create is monitored and controlled by software, that cultural laziness is now creeping into hardware engineering — like building airliners. Less thought is now given to getting a design correct and simple up front because it’s so easy to fix what you didn’t get right later.
Every time a software update gets pushed to my Tesla, to the Garmin flight computers in my Cessna, to my Nest thermostat, and to the TVs in my house, I’m reminded that none of those things were complete when they left the factory — because their builders realized they didn’t have to be complete. The job could be done at any time in the future with a software update.
Technologists suck at predicting the future. They suck because they don’t understand the past; they’re blind to much of the present. They’re terrible at predicting the future because they fail to grasp the systems and practices surrounding their products, firm in their faith instead that their own genius (and their investors’ continued support) will be enough to muddle forward.
Source: HEWN, No. 298
We have basically told these companies that the smart thing to do, the shareholder thing to do, is to lie and to break the law.
Now technology is 99% about shareholder value and 1% about the betterment of humanity.
The markets are failing.
Behaviorism commodifies people. I wish humane tech and tech regrets folks were louder about what’s going on in ed-tech. The primitive moral development of Silicon Valley (& Skinner & Lovaas) is in our schools.
Educators and tech workers, do we want to be in the business of behaviorism?
The first is that implementation is policy. Whatever gets decided at various times by leadership (in this case, first to separate families, then to reunite them), what happens in real life is often determined less by policy than by software. And until the government starts to think of technology as a dynamic service, imperfect but ever-evolving, not just a static tool you buy from a vendor, that won’t change.
The second lesson has to do with how Silicon Valley — made up of people who very much think of technology as something you do — should think about its role in fighting injustice.
This is one of the lessons you can’t escape if you work on government tech. When government is impaired, who gets hurt? More often than not, the most vulnerable people.
In order to properly administer a social safety net, a just criminal justice system, and hundreds of other functions that constitute a functioning democracy, we must build government’s technology capabilities. In doing that, we run the risk of also increasing government’s capacity to do harm.
Which is why Silicon Valley can’t limit its leverage over government to software. Software doesn’t have values. People do. If the people who build and finance software (in Silicon Valley and elsewhere) really want government that aligns with their values, they must offer government not just their software, but their time, their skills, and part of their careers. The best way to reclaim government is to become part of it.