Most VCs, Principals, Rectors, and senior managers are not well grounded in ed tech. It is also an area subject to extreme views (for and against), often based on emotion, romance, and appeals to ego. I would therefore like to propose a new role: Sensible Ed Tech Advisor. The job role is as follows:

  • Ability to offer practical advice on adoption of ed tech that will benefit learners
  • Strong BS detector for ed tech hype
  • Interpreter of developing trends for a particular context
  • Understanding of the intersection of tech and academic culture
  • Ability to communicate the benefits of any particular tech in terms that are valuable to educators and learners
  • Appreciation of ethical and social impact of ed tech

And as Audrey Watters tirelessly highlights, an unsceptical approach to ed tech is problematic for many reasons. It is far more useful to focus on the specific problems staff have, or the things they want to realise, than to suggest they just ‘don’t get it’. Having an appreciation for this intersection between ed tech (which often comes from outside the institution and discipline) and the internal values and culture is also an essential ingredient in implementing any technology successfully.

Source: Sensible Ed Tech – The Ed Techie

Compassion is not the same as politeness or good manners; compassion involves understanding suffering in ourselves and in others and actively desiring to alleviate it.

Another way of looking at it is that compassion presents an optimization problem: minimize suffering. If we’re not building technology with an eye toward minimizing suffering, what’s the point?

Compassion often demands candid and direct communication, so being “fake nice” is not compassionate.

RTFM makes the assumption that the person is motivated by laziness or perhaps even a desire to waste your time. It leaves no space for understanding the person’s true motivation in coming to you for help or even what they’ve tried so far.

The implication of RTFM is that the asker could have found the answer to the question without asking, and is therefore violating some social law by asking. This can easily stir a sense of shame in the asker.

Shame is such a painful feeling; it is cruel to knowingly encourage it in others.

Source: It’s Time to Retire “RTFM” – Compassionate Coding – Medium

“I’ve long tracked Facebook’s serial admission to having SIGINT visibility that nearly rivals the NSA: knowing that Facebook had intelligence corroborating NSA’s judgment that GRU was behind the DNC hack was one reason I was ultimately convinced of the IC’s claims, in spite of initial questions.”

Source: Yet More Proof Facebook’s Surveillance Capitalism Is Good at Surveilling — Even Russian Hackers – emptywheel

 “…so many of the data scientists that are in work right now think of themselves as technicians and think that they can blithely follow textbook definitions of optimisation, without considering the wider consequences of their work. So, when they choose to optimise to some kind of ratio of false positives or false negatives, for example, they are not required by their bosses or their educational history to actually work out what that will mean to the people affected by the algorithms they’re optimising. Which means that they don’t really have any kind of direct connection to the worldly consequences of their work.”

“…we think that data people are magical and that they have any kind of wisdom. What they actually have is a technical ability without wisdom.”

Source: To work for society, data scientists need a hippocratic oath with teeth | WIRED UK
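
As a concrete (and entirely invented) illustration of the trade-off the quote gestures at: the numbers, the 0.3/0.5/0.7 thresholds, and the "loan risk" framing below are mine, not the article's. The point is only that moving a decision threshold doesn't remove error, it redistributes it, and somebody downstream lives with whichever kind of error the data scientist chose.

```python
# Illustrative only: hypothetical scores from a model that flags applicants as
# "high risk". labels says who actually defaulted (1) and who did not (0).
labels = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.55, 0.45, 0.3]

def confusion_counts(threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 0)
    fn = sum(1 for y, s in zip(labels, scores) if s < threshold and y == 1)
    return fp, fn

for threshold in (0.3, 0.5, 0.7):
    fp, fn = confusion_counts(threshold)
    print(f"threshold={threshold}: {fp} people wrongly flagged, {fn} risky cases missed")
```

Raise the threshold and fewer people are wrongly flagged but more genuinely risky cases slip through; lower it and the reverse happens. Which group bears the cost is exactly the "worldly consequence" the quote says too few practitioners feel connected to.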

“But again, I don’t care. Why? Because the failures are so obvious compared to those of most algorithms. Dead people by the side of the road constitute public tragedies. They make headlines. They damage companies’ reputations and market valuations. This creates inherent and continuous pressure on the data scientists who build the algorithms to get it right. It’s self-regulating nirvana, to be honest. I don’t care because the self-driving car companies have to care for me.

By contrast, companies that own and deploy other algorithms – algorithms that decide who gets a job, who gets fired, who gets a credit card and with what interest rate, who pays what for car or life insurance – have shockingly little incentive to care.

The problem stems from the subtlety of most algorithmic failures. Nobody, especially not the people being assessed, will ever know exactly why they didn’t get that job or that credit card. The code is proprietary. It’s typically not well understood, even by the people who build it. There’s no system of appeal and often no feedback to improve decision-making over time. The failures could be getting worse and we wouldn’t know it.”

Source: Don’t Worry About the Ethics of Self-Driving Cars – Bloomberg

“I don’t use Facebook for ethical and moral reasons. As a service, it is a net negative to our society. It has helped amplify the polarization that has always existed. So why then should I own their stock?”

All I care about is that we get some sort of larger data regulation in place which doesn’t allow this and future Facebooks to abuse the rights of citizens. But given the state of our politics, that too is wishful thinking! After all, if a company can employ 500 people for its propaganda arm, do you think they won’t hire a thousand to literally swamp the swamp?

Source: First $100 Billion (decline) is the hardest – Om Malik