Opinion
Book written by Emmanuel Maggiori
Reviewed by Ralph Grabowski
An unexpected thing happened along the way to AI’s domination over humanity and the rest of the world. It got sued.
Microsoft-funded OpenAI’s ChatGPT had produced a report stating that the current mayor of a town in Australia had been jailed for bank fraud. He hadn’t. Quite the opposite: he had, in fact, blown the whistle that led to others being charged.
Until now, purveyors of and apologists for artificial intelligence have smirked at its logical failures. When contacted by the Washington Post about fake newspaper articles cited by ChatGPT, "Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate." To its boosters, AI is too important for its built-in problems to be acknowledged. But overlooking problems is itself a problem.
On the one hand, some ChatGPT failures are amusing, allowing us to lord it over that silly computer:
Q: Write a sentence that contains the letter ‘f’ exactly three times.
A: He offered her a cup of coffee and a fresh croissant for breakfast.
(Count them: that sentence contains seven 'f's, not three.)
Here is an unamusing one: a conservative law professor was accused, in a March 2018 Washington Post article, of sexual misconduct while on a trip with students to Alaska. The trip never took place, the professor has never been accused of misconduct, and the Washington Post article does not exist. ChatGPT made it all up. I rarely use the word 'frightening,' but this is.
- - -
These stories come on the heels of Emmanuel Maggiori's "Smart Until It's Dumb: Why artificial intelligence keeps making epic mistakes (and why the AI bubble will burst)," which he self-published earlier this year. He holds a Ph.D., wrote machine-learning code for firms like Expedia, and now freelances as a programmer.
His brief book traces the history of AI from the early 1960s (actually, it got started in the 1950s), and then goes on to explain the different kinds of AI. There isn't just one system, but a series of increasingly sophisticated systems that attempt to become more meta over time, meta meaning software that operates on software at higher levels of abstraction.
I was interested to see whether Mr Maggiori would touch on materialistic determinism; he does, kind of. He tackles some aspects of the brain-consciousness problem in the chapter on The Mind. He notes that getting to AGI (artificial general intelligence, meaning computers equivalent to human brains) requires an innovative breakthrough whose arrival we cannot predict, never mind by 2029, as some enthusiasts have repeatedly insisted, as recently as last week.
He points out that AI programmers have attempted to match the processing going on in our brains, but failed. "If we want to compare [the AI system of] deep learning with our brain, we must ignore a lot of facts to force a fit," he concludes.
As an industry insider, Mr Maggiori is at his best debunking the bad thinking surrounding AI. He explains how AI researchers report percentages of errors, but not the seriousness of those errors. For instance, it was full steam ahead for developing driverless cars (by 2018, no less!) because 37,462 people in the USA would no longer be killed once AI drives cars instead of humans. (There are 2,350,464 drivers in America.)
The reason Teslas haven't been driving themselves for the last five years is that in-car AI handles only the situations for which it has been trained; it cannot be trained for unexpected events, at which humans excel. This is emblematic of much of the hype around AI: a lot of facts have to be ignored to arrive at the triumphalist pronouncements of yet another advance.
Mr Maggiori warns businesses against applying AI to projects and products merely because it is fashionable. From his own experience, he has seen firms assume AI works, and then waste time and money learning that it does not work well. Sometimes AI is forced on firms because that's where the funding is, including from governments [*koff* EU] that convince themselves their regions need to take a worldwide lead in AI, whereupon corporations (ab)use the free money they receive. He relates horror stories of staff cheating to make it look to executives as though the AI works.
It is normal to be over-hopeful about new technology. It reminds me of other technologies that were capable of anything, back when they were still new. My favorite example comes from the 1980s, when people would exclaim that "the sky's the limit" for the then-new computer-aided design software; it wasn't, and today none of us would want to go back to the CAD of the 1980s, because it was so limiting.
While the limits of a technology are poorly understood when it is new, sometimes it is not the tech itself that is limiting. Mr Maggiori points out that the limit might instead be a lack of customer demand, which is what limits the VR/AR/XR market today, and what killed off 3D TV last decade.
With all this in mind, we can only shake our heads sadly at those who, only two months ago, were still enthusiastic about AI’s future:
Despite such problems, some high-profile leaders have pushed for its expanded use. The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.” In a [February 2023] interview on a German program Handelsblatt Disrupt, Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”
With AI generating digital misinformation and engaging in various conspiracy theories magnified by digital channels, it seems to be collapsing like a crypto exchange.
- - -
Smart Until It's Dumb: Why artificial intelligence keeps making epic mistakes (and why the AI bubble will burst)
by Emmanuel Maggiori
$15
Published by Applied Maths Ltd in 2023
200 pages, paperback
ISBN-10 1838337229