During our research, we also found ourselves reflecting on the unique position of the school as an institution tasked not only with educating its students but also with managing their personal data. Couldn't one then argue that, since the school is a microcosm of the wider society, its own data protection regime could serve as a deliberate pedagogical strategy? Rather than something quietly managed by the GDPR compliance officer and conveyed to parents as a matter of administrative necessity, the school's approach to data protection could be explained to students so that they learn about the management of data that matters to them (their grades, attendance, special needs, mental health, biometrics).

Source: What’s the Role of the School in Educating Children in a Datafied Society? – Connected Learning Alliance

Via: 📑 What’s the Role of the School in Educating Children in a Datafied Society? | Read Write Collect

Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?

Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.

I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.

Source: Can you make an AI that isn’t ableist?
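The point about outliers lends itself to a small demonstration. Here is a minimal sketch in plain numpy, with invented two-dimensional data and a hypothetical knn_predict helper (none of this is from the article): a nearest-neighbour model happily gives a unanimous label to a point far outside anything it was trained on, because similarity-based judgement has no notion of "I don't know".

```python
import numpy as np

# Invented training data: two tight clusters, the "norms" the model has seen.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))  # label 0
group_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))  # label 1
X = np.vstack([group_a, group_b])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(x, k=5):
    """Label a point by majority vote of its k nearest training points."""
    dists = np.linalg.norm(X - x, axis=1)
    nearest = y[np.argsort(dists)[:k]]
    votes = np.bincount(nearest, minlength=2)
    return votes.argmax(), votes.max() / k, dists.min()

# A point unlike anything in the training data.
outlier = np.array([30.0, 30.0])
label, vote_share, nearest_dist = knn_predict(outlier)
print(f"label={label}, vote share={vote_share:.0%}, "
      f"distance to nearest training point={nearest_dist:.1f}")
# The vote is unanimous even though the point sits dozens of cluster-widths
# away from every example the model has ever seen: it is judged purely by
# who it most resembles, with no mechanism for flagging "none of the above".
```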

See also,

Design is Tested at the Edges: Intersectionality, The Social Model of Disability, and Design for Real Life 

But the goal of disinformation isn't really about these individual transactions. The goal of disinformation is to, over time, change our psychological set-points. To the researcher looking at individuals at specific points in time, the homeostasis looks protective – fire up Mechanical Turk, see what people believe, give them information or disinformation, see what changes. What you'll find is nothing changes – set-points are remarkably resilient.

But underneath that, from year to year, is drift. And it's the drift that matters.

Source: The Homeostatic Fallacy and Misinformation Literacy | Hapgood
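Caulfield's set-point versus drift distinction is easy to see in a toy simulation. The sketch below is my own illustration, not a model from the post; the drift rate, reversion strength, and noise level are all invented numbers. Belief reverts to a set-point day to day, so a snapshot experiment measures roughly nothing, while the set-point itself creeps year over year.

```python
import numpy as np

# Toy model: belief mean-reverts to a set-point each day (homeostasis),
# while disinformation nudges the set-point itself by a tiny daily amount.
# Every parameter here is invented for illustration.
rng = np.random.default_rng(1)

days = 5 * 365
drift_per_day = 0.001   # the slow pull of repeated disinformation
reversion = 0.2         # how strongly belief snaps back to the set-point
set_point = 0.0
belief = 0.0
trace = []

for _ in range(days):
    set_point += drift_per_day                  # the part that accumulates
    shock = rng.normal(scale=0.5)               # today's information diet
    belief += reversion * (set_point - belief) + shock
    trace.append(belief)

trace = np.array(trace)

# The snapshot view: day-to-day changes average out to roughly zero.
print(f"mean day-to-day change, year 1: {np.diff(trace[:365]).mean():+.4f}")
# The longitudinal view: the yearly average keeps moving. That's the drift.
print(f"year 1 average belief: {trace[:365].mean():+.2f}")
print(f"year 5 average belief: {trace[-365:].mean():+.2f}")
```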

Via: HEWN, No. 296

Most of the reading I’m doing right now in my final weeks of research I’d describe as “contextual” – that is, I’m reading the bestsellers and articles that reflect ideas influencing and influenced by and adjacent to teaching machines and behaviorism in the 1950s and 1960s. Needless to say, I’ve been reading a lot about cybernetics – something that totally colored how I thought about the article Mike Caulfield published this week on “The Homeostatic Fallacy and Misinformation Literacy.” Homeostasis is a cornerstone of cybernetic (and information) theory. And yet here we are, thanks to data-driven “feedback,” all out of whack.

I think there’s something wrapped up in all this marketing and mythology that might explain in part why the tech industry (and, good grief, the ed-tech industry) is so incredibly and dangerously dull. You can’t build thinking machines (or teaching machines for that matter) if you’re obsessed with data but have no ideas.

Source: HEWN, No. 296

“…so many of the data scientists that are in work right now think of themselves as technicians and think that they can blithely follow textbook definitions of optimisation, without considering the wider consequences of their work. So, when they choose to optimise to some kind of ratio of false positives or false negatives, for example, they are not required by their bosses or their educational history to actually work out what that will mean to the people affected by the algorithms they’re optimising. Which means that they don’t really have any kind of direct connection to the worldly consequences of their work.”

“…we think that data people are magical and that they have any kind of wisdom. What they actually have is a technical ability without wisdom.”

Source: To work for society, data scientists need a Hippocratic oath with teeth | WIRED UK
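That line about optimising ratios of false positives and false negatives becomes concrete with a few lines of code. Below is a sketch with synthetic risk scores (the score distributions and thresholds are made up for illustration): moving the decision threshold just reallocates errors, and each row of output is a different distribution of harm across real people.

```python
import numpy as np

# Synthetic risk scores for a screening classifier: 800 true negatives
# centred at 0.35, 200 true positives centred at 0.65. All numbers invented.
rng = np.random.default_rng(2)
y_true = np.array([0] * 800 + [1] * 200)
scores = np.concatenate([
    rng.normal(0.35, 0.12, 800),
    rng.normal(0.65, 0.12, 200),
])

def error_counts(threshold):
    """Count both error types when flagging everyone at or above threshold."""
    flagged = scores >= threshold
    false_pos = int(np.sum(flagged & (y_true == 0)))   # wrongly flagged
    false_neg = int(np.sum(~flagged & (y_true == 1)))  # wrongly cleared
    return false_pos, false_neg

for t in (0.4, 0.5, 0.6):
    fp, fn = error_counts(t)
    print(f"threshold {t:.1f}: {fp:3d} false positives, {fn:3d} false negatives")
# Tuning the threshold is the "textbook" step; the missing step is asking
# what being wrongly flagged, or wrongly cleared, costs the person affected.
```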