We must work not only toward providing better security around student data but also toward _educating_ students about the need to critically evaluate how their data is used and how to participate in shaping data privacy practices and policies. These policies and practices will affect them for the rest of their lives, as individuals with personal data and also as leaders with power over the personal data of others. Regulation is necessary, but education is the foundation that enables society to recognize when its members’ changing needs require a corresponding evolution in its regulations. And for those of us in academia, unlike those in industry, education is our work.
During our research, we also found ourselves reflecting on the unique position of the school as an institution tasked not only with educating its students but also with managing their personal data. Couldn’t one then argue that, since the school is a microcosm of the wider society, the school’s own data protection regime could be explained to children as a deliberate pedagogical strategy? Rather than something quietly managed by the GDPR compliance officer and conveyed as a matter of administrative necessity to parents, the school’s approach to data protection could be explained to students so they could learn about the management of data that is important to them (their grades, attendance, special needs, mental health, biometrics).
Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?
Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.
I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.
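The limitation described above can be made concrete with a toy sketch. The data, function name, and labels here are all invented for illustration: a bare-bones 1-nearest-neighbor classifier will confidently assign a label to an outlier based on whichever training point happens to be closest, even when nothing in the training data resembles the outlier at all.

```python
# Hypothetical sketch (data and names invented): a 1-nearest-neighbor
# classifier labels any query by the closest training point, with no
# notion of "this input is unlike anything I have ever seen".

def nearest_neighbor_label(train, query):
    """Return the label of the training point closest to `query`."""
    return min(train, key=lambda point: abs(point[0] - query))[1]

# Training data: (feature, label) pairs clustered around two "norms".
train = [(1.0, "A"), (1.2, "A"), (5.0, "B"), (5.3, "B")]

# An outlier far from every training point still gets a confident label.
print(nearest_neighbor_label(train, 100.0))  # → "B"
```

The query at 100.0 is roughly twenty times farther from the data than the clusters are from each other, yet the model returns "B" with no signal that the judgment is baseless — the similarity-based failure mode the passage describes.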
But the goal of disinformation isn’t really around these individual transactions. The goal of disinformation is to, over time, change our psychological set-points. To the researcher looking at individuals at specific points in time, the homeostasis looks protective – fire up Mechanical Turk, see what people believe, give them information or disinformation, see what changes. What you’ll find is nothing changes – set-points are remarkably resilient.
But underneath that, from year to year, is drift. And it’s the drift that matters.
Most of the reading I’m doing right now in my final weeks of research I’d describe as “contextual” – that is, I’m reading the bestsellers and articles that reflect ideas influencing and influenced by and adjacent to teaching machines and behaviorism in the 1950s and 1960s. Needless to say, I’ve been reading a lot about cybernetics – something that totally colored how I thought about the article Mike Caulfield published this week on “The Homeostatic Fallacy and Misinformation Literacy.” Homeostasis is a cornerstone of cybernetic (and information) theory. And yet here we are, thanks to data-driven “feedback,” all out of whack.
I think there’s something wrapped up in all this marketing and mythology that might explain in part why the tech industry (and, good grief, the ed-tech industry) is so incredibly and dangerously dull. You can’t build thinking machines (or teaching machines for that matter) if you’re obsessed with data but have no ideas.
Source: HEWN, No. 296
“To that end, I propose a Data Bill of Rights. It should have two components: The first would specify how much control we may exert over how our individual information is used for important decisions, and the second would introduce federally enforced rules on how algorithms should be monitored more generally.”
“…so many of the data scientists that are in work right now think of themselves as technicians and think that they can blithely follow textbook definitions of optimisation, without considering the wider consequences of their work. So, when they choose to optimise to some kind of ratio of false positives or false negatives, for example, they are not required by their bosses or their educational history to actually work out what that will mean to the people affected by the algorithms they’re optimising. Which means that they don’t really have any kind of direct connection to the worldly consequences of their work.”
“…we think that data people are magical and that they have any kind of wisdom. What they actually have is a technical ability without wisdom.”