first we shape our tools

Every fortnight I curate some of the observations and insights that were shared on social media. I call these Friday’s Finds.

@wimleers : “We’re building tools for authoritarianism just to get people to click on a shoe ad” — @zeynep at @DrupalConNA #DCzeynep (i.e. @facebook)

Automation is transforming work and the US isn’t ready, via @scottsantens

‘The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. “We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too. Acemoglu and Restrepo say that every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town.’

Alien Knowledge: When Machines Justify Knowledge

“Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning — we knew what each notch represented — and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?”

The Next Decade of Data Science: Rethinking key challenges faced by big data researchers, via @14prinsp

“This approach is problematic for at least three reasons. First, data analysts are naturally part of the social world they seek to study … Second, it is no surprise that the most valuable datasets are proprietary and difficult to access. The severe constraints on publicly available datasets have meant that the researchers’ ability to study specific social phenomena is impeded by asymmetries in data access and distribution … Third, and most importantly, even if we assume the universal availability of traced data at population scale, social scientists still need theory to make sense of this unstructured data deluge …

On this basis, data scientists often run the risk of studying what is observable rather than what needs to be studied. This is especially problematic when researchers attempt to make causal claims from the data, neglecting the possibility of confounding factors. Even large sample sizes cannot overcome this limitation. In fact, the more data is available, the more theory is needed to know what to look for and how to interpret what we have found.”

Bad News: Artificial Intelligence Is Racist Too, via @gideonro

‘Unfortunately, new research finds that Twitter trolls aren’t the only way that AI devices can learn racist language. In fact, any artificial intelligence that learns from human language is likely to come away biased in the same ways that humans are, according to the scientists …

“It was astonishing to see all the results that were embedded in these models,” said Aylin Caliskan, a postdoctoral researcher in computer science at Princeton University. Even AI devices that are “trained” on supposedly neutral texts like Wikipedia or news articles came to reflect common human biases, she told Live Science.’
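The bias Caliskan describes shows up as geometry: words that co-occur in similar contexts end up with similar vectors, so learned associations (pleasant/unpleasant, gendered job titles) can be read off as distances. Here is a minimal toy sketch of that idea, in the spirit of her team's association tests. The three-dimensional vectors are invented purely for illustration; real models like word2vec or GloVe learn hundreds of dimensions from billions of words of text.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy "embeddings" for illustration only.
vectors = {
    "flower":     (0.9, 0.1, 0.0),
    "insect":     (0.1, 0.9, 0.0),
    "pleasant":   (0.8, 0.2, 0.1),
    "unpleasant": (0.2, 0.8, 0.1),
}

def association(word):
    """How much closer a word sits to 'pleasant' than to 'unpleasant'.

    Positive = the model has absorbed a 'pleasant' association for
    this word from its training text, whether or not anyone intended it.
    """
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

print(association("flower"))  # positive: leans "pleasant"
print(association("insect"))  # negative: leans "unpleasant"
```

The point of the real study is that when "flower"/"insect" are swapped for names or occupations, the same arithmetic surfaces human social biases — nothing in the training procedure filters them out.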

Positive Heuristics: Strategies for engaging in speculative thinking by @Kleinsight

“And that’s where Kahneman and Tversky’s heuristics come in. They are cognitive tools we employ in order to speculate. We make speculative leaps based on small samples. We rely on the availability of precedents in our memories. We use estimates of representativeness. We find an anchor and work from there. That’s what I am calling Positive Heuristics. They are heuristics we depend on to navigate an ambiguous world. Heuristics that aren’t going to give us perfect answers, but can operate in spheres where we can’t have perfection.

They’re not biases that make us irrational. The positive heuristics are strengths that make us adaptive and successful.”

The New Yorker: AI versus MD, via @ThisMuchWeKnow

“In 1945, the British philosopher Gilbert Ryle gave an influential lecture about two kinds of knowledge. A child knows that a bicycle has two wheels, that its tires are filled with air, and that you ride the contraption by pushing its pedals forward in circles. Ryle termed this kind of knowledge—the factual, propositional kind—“knowing that.” But to learn to ride a bicycle involves another realm of learning. A child learns how to ride by falling off, by balancing herself on two wheels, by going over potholes. Ryle termed this kind of knowledge—implicit, experiential, skill-based—“knowing how.”

The two kinds of knowledge would seem to be interdependent: you might use factual knowledge to deepen your experiential knowledge, and vice versa. But Ryle warned against the temptation to think that “knowing how” could be reduced to “knowing that”—a playbook of rules couldn’t teach a child to ride a bike. Our rules, he asserted, make sense only because we know how to use them: “Rules, like birds, must live before they can be stuffed.” One afternoon, I watched my seven-year-old daughter negotiate a small hill on her bike. The first time she tried, she stalled at the steepest part of the slope and fell off. The next time, I saw her lean forward, imperceptibly at first, and then more visibly, and adjust her weight back on the seat as the slope decreased. But I hadn’t taught her rules to ride a bike up that hill. When her daughter learns to negotiate the same hill, I imagine, she won’t teach her the rules, either. We pass on a few precepts about the universe but leave the brain to figure out the rest.”
