Apple’s privacy stance, the gist:

  • Security is the foundation of privacy.
  • Privacy is a fundamental human right.
  • Embody commitments to privacy with code.

That’s my takeaway from Craig Federighi’s keynote at the 10th Annual European Data Protection & Privacy Conference.

… the four key privacy principles that guide Apple.

  1. Not collecting unnecessary data through data minimization.
  2. Processing as much data on device as possible.
  3. Making it clear to customers what data is collected and giving them tools to control how that data is used.
  4. Keeping data safe through security, including Apple’s unique integration of hardware and software. Security is the foundation of privacy.

Source: Craig Federighi Shares Apple’s Four Privacy Principles in Conference Keynote – MacRumors

Now, others take the opposite approach. They gather, sell, and hoard as much of your personal information as they can. The result is a data-industrial complex, where shadowy actors work to infiltrate the most intimate parts of your life and exploit whatever they can find–whether to sell you something, to radicalize your views, or worse. — Craig Federighi

I agree with all of that. Props to Apple for pushing privacy and pissing off the right people.

We must work not only toward providing better security around student data but also toward _educating_ students about the need to critically evaluate how their data is used and how to participate in shaping data privacy practices and policies. These policies and practices will affect them for the rest of their lives, as individuals with personal data and also as leaders with power over the personal data of others. Regulation is necessary, but education is the foundation that enables society to recognize when its members’ changing needs require a corresponding evolution in its regulations. And for those of us in academia, unlike those in industry, education is our work.

Source: Education before Regulation: Empowering Students to Question Their Data Privacy | EDUCAUSE

Via: 📑 Education before Regulation: Empowering Students to Question Their Data Privacy | Read Write Collect

During our research, we also found ourselves reflecting on the unique position of the school as an institution tasked not only with educating its students but also with managing their personal data. Couldn’t one then argue that, since the school is a microcosm of the wider society, the school’s own data protection regime could be explained to children as a deliberate pedagogical strategy? Rather than something quietly managed by the GDPR compliance officer and conveyed as a matter of administrative necessity to parents, the school’s approach to data protection could be explained to students so they could learn about the management of data that is important to them (their grades, attendance, special needs, mental health, biometrics).

Source: What’s the Role of the School in Educating Children in a Datafied Society? – Connected Learning Alliance

Via: 📑 What’s the Role of the School in Educating Children in a Datafied Society? | Read Write Collect

Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?

Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.

I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.

Source: Can you make an AI that isn’t ableist?
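None of the code below is from the interview, but a toy nearest-neighbour classifier makes the point concrete: a similarity-based model will confidently file an outlier under whichever group it is least far from, and nothing in its output signals that it has never seen anyone like this person. The data and the `knn_predict` helper are invented purely for illustration.

```python
# A made-up example: two tight clusters stand in for the "norms" the model
# has learned; the outlier stands in for someone unlike anyone in the data.
import numpy as np

rng = np.random.default_rng(0)

group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
X = np.vstack([group_a, group_b])
y = np.array([0] * 100 + [1] * 100)

def knn_predict(x, X, y, k=5):
    """Plain k-nearest-neighbour vote: majority label among the k closest points."""
    dists = np.linalg.norm(X - x, axis=1)
    nearest_labels = y[np.argsort(dists)[:k]]
    votes = np.bincount(nearest_labels, minlength=2)
    return votes.argmax(), votes.max() / k, dists.min()

# Someone far from both clusters.
outlier = np.array([30.0, -20.0])
label, vote_share, nearest_dist = knn_predict(outlier, X, y)
print(f"predicted group: {label}, vote share: {vote_share:.0%}, "
      f"distance to nearest training point: {nearest_dist:.1f}")
# Output: a unanimous vote for one group, even though the nearest training
# example is enormously far away. The model never says "I don't know".
```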

See also,

Design is Tested at the Edges: Intersectionality, The Social Model of Disability, and Design for Real Life 

But the goal of disinformation isn’t really around these individual transactions. The goal of disinformation is to, over time, change our psychological set-points. To the researcher looking at individuals at specific points in time, the homeostasis looks protective – fire up Mechanical Turk, see what people believe, give them information or disinformation, see what changes. What you’ll find is nothing changes – set-points are remarkably resilient.

But underneath that, from year to year, is drift. And it's the drift that matters.

Source: The Homeostatic Fallacy and Misinformation Literacy | Hapgood

Via:

Most of the reading I’m doing right now in my final weeks of research I’d describe as “contextual” – that is, I’m reading the bestsellers and articles that reflect ideas influencing and influenced by and adjacent to teaching machines and behaviorism in the 1950s and 1960s. Needless to say, I’ve been reading a lot about cybernetics – something that totally colored how I thought about the article Mike Caulfield published this week on “The Homeostatic Fallacy and Misinformation Literacy.” Homeostasis is a cornerstone of cybernetic (and information) theory. And yet here we are, thanks to data-driven “feedback,” all out of whack.

I think there’s something wrapped up in all this marketing and mythology that might explain in part why the tech industry (and, good grief, the ed-tech industry) is so incredibly and dangerously dull. You can’t build thinking machines (or teaching machines for that matter) if you’re obsessed with data but have no ideas.

Source: HEWN, No. 296
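To make Caulfield's set-point/drift distinction concrete, here's a toy simulation of my own (nothing like this appears in his post, and the parameters are invented): daily exposures bump a measured belief, homeostasis pulls it back toward a baseline within a day, and every exposure also nudges the baseline itself by a tiny amount.

```python
# Sketch: short-term resilience around a set-point, long-term drift of the set-point.
import random

random.seed(1)

set_point = 0.0          # the baseline a person reverts to
belief = 0.0             # what a one-off survey would measure today
REVERSION = 0.9          # fraction of the gap to the baseline closed each day
DRIFT_PER_DOSE = 0.002   # how much each exposure moves the baseline itself

def one_day(set_point, belief):
    dose = random.uniform(0.0, 1.0)              # today's disinformation exposure
    belief += dose                               # short-term bump an experiment would detect
    set_point += DRIFT_PER_DOSE * dose           # long-term creep of the baseline
    belief -= REVERSION * (belief - set_point)   # homeostatic pull back toward the baseline
    return set_point, belief

for year in range(1, 6):
    for _ in range(365):
        set_point, belief = one_day(set_point, belief)
    gap = belief - set_point
    print(f"year {year}: baseline {set_point:+.2f}, today's reading only {gap:+.2f} above it")
# A snapshot experiment sees beliefs hovering near the baseline ("nothing changed"),
# while the baseline itself moves steadily year after year.
```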

 “…so many of the data scientists that are in work right now think of themselves as technicians and think that they can blithely follow textbook definitions of optimisation, without considering the wider consequences of their work. So, when they choose to optimise to some kind of ratio of false positives or false negatives, for example, they are not required by their bosses or their educational history to actually work out what that will mean to the people affected by the algorithms they’re optimising. Which means that they don’t really have any kind of direct connection to the worldly consequences of their work.”

“…we think that data people are magical and that they have any kind of wisdom. What they actually have is a technical ability without wisdom.”

Source: To work for society, data scientists need a hippocratic oath with teeth | WIRED UK
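The false-positive/false-negative point is easy to make concrete. Here's a small sketch of my own (the scores, outcomes, and thresholds are all invented): the same model, thresholded three different ways, simply re-assigns harm between the people wrongly flagged and the people wrongly missed, and nothing in the optimisation itself tells you which trade-off is defensible.

```python
# Made-up risk scores for 1,000 hypothetical people, to show what moving a
# classification threshold does to real individuals on either side of it.
import numpy as np

rng = np.random.default_rng(42)

true_outcome = rng.integers(0, 2, size=1000)     # 1 = the event actually happens
scores = np.clip(0.3 * true_outcome + rng.normal(0.35, 0.2, size=1000), 0.0, 1.0)

def confusion(threshold):
    flagged = scores >= threshold
    false_pos = int(np.sum(flagged & (true_outcome == 0)))    # wrongly flagged
    false_neg = int(np.sum(~flagged & (true_outcome == 1)))   # wrongly missed
    return false_pos, false_neg

for t in (0.3, 0.5, 0.7):
    fp, fn = confusion(t)
    print(f"threshold {t:.1f}: {fp} people wrongly flagged, {fn} people wrongly missed")
# "Optimising the ratio" just moves this threshold; every move re-assigns harm
# between two different groups of real people, and nothing in the optimisation
# itself says which trade-off is defensible.
```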