Besides your web browser, your music player, and your team communication app, what are your most used macOS apps? My top 5 (per Timing): Ulysses, DEVONthink 3, Reeder, Spark, YNAB
A key question in building any software in the modern age is: “In the wrong hands, who could this harm?”
Decades ago, software seemed harmless. In 2019, when facial recognition is used to deport refugees and data provided by online services has been used to jail journalists, understanding who you’re building for, and who your software could harm, is vital. These are ideas that need to be incorporated not just into the strategies of our companies and the design processes of our product managers, but also into the daily development processes of our engineers. These are questions that need to be asked over and over again.
It’s no longer enough to build with rough consensus and running code. Empathy and humanity need to be first-class skills for every team – and the tools we use should reflect this.
The saddest thing to me is that this is probably a controversial idea. But I would much rather be a part of the anti-fascist software community than the libertarian free market community if the latter absolves itself of its culpability in the spread of white supremacy.
I believe the relative ease — not to mention the lack of tangible cost — of software updates has created a cultural laziness within the software engineering community. Moreover, because more and more of the hardware that we create is monitored and controlled by software, that cultural laziness is now creeping into hardware engineering — like building airliners. Less thought is now given to getting a design correct and simple up front because it’s so easy to fix what you didn’t get right later.
Every time a software update gets pushed to my Tesla, to the Garmin flight computers in my Cessna, to my Nest thermostat, and to the TVs in my house, I’m reminded that none of those things were complete when they left the factory — because their builders realized they didn’t have to be complete. The job could be done at any time in the future with a software update.
From our participants’ practices we draw the concept of workflow thinking: the act of reading knowledge work as modular and intertwined with technologies. Workflow thinking allows our participants to break any given project into a series of shorter process steps, a perspective that is well in line with rhetoric and composition’s understanding of process and its typical pedagogical practices. Workflow thinking, however, foregrounds the mediated nature of that work. It looks at each task or component and asks a question of the writing technologies and available affordances within that component: “Through which technologies will I accomplish this task? Why? What does a change in technologies offer here?” For our participants, a shift in these practices might afford them mobility, the removal of drudgery, new ways of seeing a problem, or new invention strategies. In each case, however, they are able to use this mediated and modular thinking to reevaluate when and how they approach knowledge work.
This book offers workflow thinking as a counterpoint to contemporary discussions of digital writing technologies, particularly with regard to the increasing prominence of institutional software. As more universities sign on to site licenses for platforms like Office 365 and Google Apps for Education, and as more students and faculty become comfortable with working within those applications, writers risk a “cementing” of practice: a means through which writing tasks begin and end in institutionally sanctioned software because it is free, pre-installed, institutionally available, or seen as a shared software vocabulary. A lens of workflow thinking pushes against this, instead asking “what are the component pieces of this work?” and “how is this mediated?” and “what might a shift in mediation or technology afford me in completing this?” In short, we see workflow thinking as a way to reclaim agency and to push against institutionally purchased software defaults. This perspective has origins in early humanities computing (particularly in 1980s research on word processors), as we will more fully discuss later in this chapter.
Source: Writing Workflows | Chapter 1
Demand algorithmic transparency in all software systems used by public entities, including schools.
It is only now, a decade after the financial crisis, that the American public seems to appreciate that what we thought was disruption worked more like extraction—of our data, our attention, our time, our creativity, our content, our DNA, our homes, our cities, our relationships. The tech visionaries’ predictions did not usher us into the future, but rather a future where they are kings.
They promised the open web, we got walled gardens. They promised individual liberty, then broke democracy—and now they’ve appointed themselves the right men to fix it.
But did the digital revolution have to end in an oligopoly? In our fog of resentment, three recent books argue that the current state of rising inequality was not a technological inevitability. Rather, the narrative of disruption duped us into thinking this was a new kind of capitalism. The authors argue that tech companies conquered the world not with software, but via the usual route to power: ducking regulation, squeezing workers, strangling competitors, consolidating power, raising rents, and riding the wave of an economic shift already well underway.
In a winners-take-all economy, it’s hard to prove the rulers wrong. But if the tech backlash wants to become more than just the next chapter in their myth, we have to question the fitness of the companies that survived.
The first is that implementation is policy. Whatever gets decided at various times by leadership (in this case, first to separate families, then to reunite them), what happens in real life is often determined less by policy than by software. And until the government starts to think of technology as a dynamic service, imperfect but ever-evolving, not just a static tool you buy from a vendor, that won’t change.
The second lesson has to do with how Silicon Valley — made up of people who very much think of technology as something you do — should think about its role in fighting injustice.
This is one of the lessons you can’t escape if you work on government tech. When government is impaired, who gets hurt? More often than not, the most vulnerable people.
In order to properly administer a social safety net, a just criminal justice system, and hundreds of other functions that constitute a functioning democracy, we must build government’s technology capabilities. In doing that, we run the risk of also increasing government’s effectiveness to do harm.
Which is why Silicon Valley can’t limit its leverage over government to software. Software doesn’t have values. People do. If the people who build and finance software (in Silicon Valley and elsewhere) really want government that aligns with their values, they must offer government not just their software, but their time, their skills, and part of their careers. The best way to reclaim government is to become part of it.
Because endless growth and data collection are the foundation of their business, and that necessitates doing gross, invasive things to their users.
They need you to feed the beast, and they certainly don’t want you to think about it. So they use cartoon animals and sneaky happy paths to make sure you stay blissfully ignorant.
Using software is inherently a handshake agreement between you and the service provider. It’s not unlike paying for a physical service.
The problem is, many of the dominant software makers are abusing your handshake in increasingly dastardly ways. They treat their customers like sitting ducks — just a bunch of dumb animals waiting to be harvested. And when growth slows, they resort to deceptive and creepy tactics to keep the trend lines pointing skyward.