The wrong kind of black box
In an earlier musing, I surmised that we would know that A.I. had attained truly human-like thinking (also known as the singularity) when it lies to us. This is a sign of free will, the most important of human attributes.
But perhaps not. It's possible A.I. could lie to us precisely because it has no free will. Philipp Tuertscher writes on Twitter that A.I. is "increasingly relying on machine learning, which involves identification of patterns without having to understand them." If A.I. doesn't understand what it is doing, then it can lie without knowing that it is doing so.
The problem is that AI/ML [artificial intelligence using machine learning from large amounts of data] produces results that do work, but it doesn't know why; worse, we don't know why. The link between result and reason is lost to us.
It's much like the kid punching the COSH (hyperbolic cosine) button on a calculator and the calculator displaying the accurate answer. He doesn't know how the answer was arrived at, but in this case the answers have been confirmed by hundreds of years of experience in mathematics. (Cosh defines catenaries, the gentle curved shape of a wire as it droops between two posts, which is crucial to electrical installers determining clearances, the stretch and contraction between summer and winter, and how much total wire is needed.)
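The point about the calculator can be made concrete. A hanging wire follows the curve y = a·cosh(x/a), and two short formulas give the installer's numbers: the sag below the posts and the total length of wire. The sketch below assumes the standard catenary equations; the function name, parameter names, and sample numbers are mine, not from any installer's handbook.

```python
import math

def catenary_sag_and_length(span, a):
    """For a wire hung between two posts `span` units apart, with
    catenary parameter `a` (horizontal tension / weight per unit length),
    return (sag below the attachment points, total wire length).

    The wire follows y = a*cosh(x/a) with the low point at x = 0."""
    half = span / 2
    sag = a * (math.cosh(half / a) - 1)    # drop from post height to low point
    length = 2 * a * math.sinh(half / a)   # arc length of the curve
    return sag, length

# A 100-unit span with a = 200 (a fairly taut wire):
sag, length = catenary_sag_and_length(span=100, a=200)
# The wire sags a few units, and its length is slightly more than the span.
```

The calculator user never sees these formulas; the point is that centuries of mathematics stand behind the button, which is exactly what AI/ML lacks.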
This week we had a documented case of AI/ML making life worse for a corporation. Canada’s largest grocery chain reported lower than expected revenues, blaming it on AI/ML. The grocery chain had been pushing "to transform itself into a company driven by data... using historic customer data to help set product prices." That's the ML part.
"But when your algorithms [the AI part] focus on profit margins," they push out higher prices for products meant to be on sale. Shoppers, not being pre-programmed A.I. bots, noticed, and adjusted by spending less. The grocery chain did not realize what AI was doing to it until the accountants arrived to do the year-end books.
Jonathan Zittrain asks, "What happens when A.I. gives us seemingly correct answers that we wouldn't have thought of ourselves, without any theory to explain them? These answers are a form of intellectual debt that we figure we'll repay, and too often we never get around to it."
He was writing in an article in New Yorker magazine that reported on AI/ML finding medications that helped people, but pharmaceutical companies don't know why: "The mechanism(s) through which modafinil promotes wakefulness is unknown."
A.I. locks up knowledge in its black box, to our detriment. If we do not know what causes something, we cannot build on that knowledge.
This is not a new problem, as it turns out. Science-fiction writers long ago told stories of humanity falling into the trap of letting machines do our thinking for us. These stories, such as Dune and the Foundation trilogy, show the hero regaining the ability to think independently, to do arithmetic on his own, to live outside a dependent society.
(Well, not always. I recall one sci-fi story in which those who figured out how to do arithmetic without calculators were locked into missiles, to help guide them against enemy targets, solving two problems: accurately targeting the enemy, and eliminating independent thinkers.)
To fend off the A.I. takeover of our independent thinking, we ought to distinguish between Tool and Device:
- A tool, like a hammer, helps us with our work
- A device, like a nail gun, does the work for us
A computer tends to be a tool; the tablet tends to be a device. Devices destroy man's need for fulfillment through work. The promise of A.I. is to do precisely that.