A key question when building any software in the modern age is: “In the wrong hands, who could this harm?”

Decades ago, software seemed harmless. In 2019, when facial recognition is used to deport refugees and data provided by online services has been used to jail journalists, understanding who you’re building for, and who your software could harm, is vital. These are ideas that need to be incorporated not just into the strategies of our companies and the design processes of our product managers, but into the daily development processes of our engineers. These are questions that need to be asked over and over again.

It’s no longer enough to build with rough consensus and running code. Empathy and humanity need to be first-class skills for every team – and the tools we use should reflect this.

Source: Pull requests and the templated self

After I watched a few reviews of the new Leatherman multi-tool line in a logged-out session, YouTube served me far-right conspiracy theories and fear-mongering as ads.

Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?

Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.

I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.

Source: Can you make an AI that isn’t ableist?

See also:

Design is Tested at the Edges: Intersectionality, The Social Model of Disability, and Design for Real Life 

We accelerated progress itself, at least the capitalist and dystopian parts. Sometimes I’m proud, although just as often I’m ashamed. I am proudshamed.

I miss making things. I miss coding. I liked having power over machines. But power over humans is often awkward and sometimes painful to wield. I wish we’d built a better industry.

Source: Why I (Still) Love Tech: In Defense of a Difficult Industry | WIRED

Likewise.

Via: Daring Fireball: Why Paul Ford (Still) Loves Tech

I believe the relative ease — not to mention the lack of tangible cost — of software updates has created a cultural laziness within the software engineering community. Moreover, because more and more of the hardware that we create is monitored and controlled by software, that cultural laziness is now creeping into hardware engineering — like building airliners. Less thought is now given to getting a design correct and simple up front because it’s so easy to fix what you didn’t get right later.

Every time a software update gets pushed to my Tesla, to the Garmin flight computers in my Cessna, to my Nest thermostat, and to the TVs in my house, I’m reminded that none of those things were complete when they left the factory — because their builders realized they didn’t have to be complete. The job could be done at any time in the future with a software update.

Source: Shoddy Software Is Eating The World, And People Are Dying As A Result | Techdirt

The first is that implementation is policy. Whatever gets decided at various times by leadership (in this case, first to separate families, then to reunite them), what happens in real life is often determined less by policy than by software. And until the government starts to think of technology as a dynamic service, imperfect but ever-evolving, not just a static tool you buy from a vendor, that won’t change.

The second lesson has to do with how Silicon Valley — made up of people who very much think of technology as something you do — should think about its role in fighting injustice.

This is one of the lessons you can’t escape if you work on government tech. When government is impaired, who gets hurt? More often than not, the most vulnerable people.

In order to properly administer a social safety net, a just criminal justice system, and hundreds of other functions that constitute a functioning democracy, we must build government’s technology capabilities. In doing that, we run the risk of also increasing government’s effectiveness to do harm.

Which is why Silicon Valley can’t limit its leverage over government to software. Software doesn’t have values. People do. If the people who build and finance software (in Silicon Valley and elsewhere) really want government that aligns with their values, they must offer government not just their software, but their time, their skills, and part of their careers. The best way to reclaim government is to become part of it.

Source: Border family separation and how computer software can make policy – Vox