Opinion
Geoffrey Miller wrote on Twitter [source], “If you understand the AI alignment problem, you understand that advocating for AI accelerationism means betraying humanity” [my italics]. He’s a professor of evolutionary psychology at the University of New Mexico.
As I pondered the tweet, I began to realize that there is a vast gulf between alignment and accelerationism, one that can never be bridged:
Alignment is an unsolvable moral problem
Accelerationism is an unsolvable technical problem
What Alignment Means
Alignment means “encoding human values and goals into AI models to make them as helpful, safe, and reliable as possible,” IBM tells us [source].
A question arises immediately, as Miller points out [source]: Whose human values will be prioritized, given the diversity of individual values and interests? For example, publishers value keeping AI from freely accessing their content. My value is the extinction of AI.
To generalize the problem beyond just me, 86% of humanity is involved in organized religion, but, Miller notes, “the AI industry is dominated by secular atheists who often dismiss religious perspectives.” And it is not just religion: AI suffers from other biases, such as a “pro-environmental, left-libertarian orientation” [source].
If you are a left-wing environmentalist, you might, in your worldview bubble, consider such a bias normal, and might even consider AI useful for spreading your minority viewpoint to the rest of the world, such as, say, to the leader of North Korea or your next-door neighbor. (Let me acknowledge that we all live in worldview bubbles of our own making; the key is whether we realize it -- or not.)
When I wrote the subhead “What Alignment Means,” I meant it as a double entendre: the meaning of the word, and its implication. For me, the implication is that the AI industry will fail to align its software with human values. For instance, while almost all nuclear bomb-owning countries promise to prevent AI from accessing nuclear weapon launch codes, China makes no such promise [source].
The reality is that AI will be aligned with a narrowly focused set of values held by coders in Silicon Valley.
What Accelerationism Means
Accelerationism means that “artificial intelligence... should be allowed to move as fast as possible, with no guardrails or gatekeepers standing in the way of innovation,” Kevin Roose tells us [source]. He writes a technology column at the New York Times.
Nick Bostrom warns of a fast “takeoff, where an AI could become superintelligent in a matter of days or weeks” [source]. He’s a philosopher with a background in theoretical physics and computational neuroscience. He wrote that back in 2013, when we could not yet foresee the technological constraints that would throw up a brick wall against superintelligent AI.
It took another decade, as ChatGPT caught the imagination of the world, before we began to see, dimly at first, the triply insurmountable hurdle facing AI’s takeoff: GPUs, electricity, and data.
GPUs. More “intelligent AI” requires more GPUs (computer boards that process information in parallel), and the demand for GPUs has allowed their manufacturers to charge top-dollar (now in the range of $30,000 each [source]), with hundreds of thousands of GPU boards needed for top-end AI machines.
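To get a feel for the scale, here is a back-of-the-envelope sketch; the per-GPU price is the figure cited above, while the cluster size of 200,000 boards is my own illustrative assumption:

```python
# Back-of-the-envelope cost of a top-end AI cluster.
# The $30,000 per-GPU price is the figure cited above; the cluster
# size of 200,000 boards is my own illustrative assumption.
price_per_gpu = 30_000       # US dollars per board
gpus_in_cluster = 200_000    # "hundreds of thousands" of boards

hardware_cost = price_per_gpu * gpus_in_cluster
print(f"GPU hardware alone: ${hardware_cost:,}")  # prints $6,000,000,000
```

That is the hardware alone, before electricity, data centers, and staff.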
Electricity. More GPUs require more electricity, to the point that the largest owners of AI machines are looking to restart nuclear power plants [source], while some jurisdictions are threatening to ban AI firms for their overuse of electricity, as has occurred with Bitcoin mining, another energy-intensive technology.
“Training a single large [AI] model can consume 50 MWh [megawatt-hours] of energy,” explains Manja Thessin [source]; in contrast, a typical house consumes 0.76 MWh a day [source], while our brains require a mere 0.3 kWh a day [source]. Paradoxically, advances in AI are heading in the wrong direction, ever further from brain-level electrical consumption.
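Put those figures side by side (a rough sketch using the numbers as cited above; the conversions to kilowatt-hours are mine):

```python
# Compare the energy figures cited above, all expressed in kWh.
training_run_kwh = 50 * 1000      # one large training run: 50 MWh
house_per_day_kwh = 0.76 * 1000   # typical house: 0.76 MWh a day (as cited)
brain_per_day_kwh = 0.3           # human brain: 0.3 kWh a day

print(f"One training run = {training_run_kwh / house_per_day_kwh:.0f} house-days of electricity")
print(f"One training run = {training_run_kwh / brain_per_day_kwh:,.0f} brain-days of energy")
```

By these figures, a single training run uses roughly two months of a household’s electricity and the equivalent of over a hundred thousand brain-days.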
Data. Today’s popular AI is the LLM, or large language model, which reads human-generated text (like this article) to output probability-based guesses of what the next word in a sentence should be. To improve, LLMs need more text, but AI developers are running out of high-quality sources [source]; one solution is synthetic text, but it has been shown to lead to even greater hallucinations (bad output) [source].
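To illustrate what “probability-based guesses of the next word” means, here is a toy sketch; it is not how a real LLM works internally (those use neural networks over tokens), but it captures the core idea of predicting the next word from counted examples:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then report each candidate's estimated probability.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    counts = followers[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))  # e.g. {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The quality of those probabilities depends entirely on the text the model has seen, which is why running out of high-quality text is a hard limit.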
- - -
The goal of AI is to make it as intelligent as humans in carrying out tasks without specific training, which is possible “in a few thousand days; it may take longer,” according to Sam Altman [source]. (He’s currently CEO of OpenAI.) Human-level AI is known as AGI, short for artificial general intelligence (not to be confused with today’s genAI, or generative AI, which generates sentences, images, and music) [source].
The long-term aim is to reach the Singularity, a mythic time in the future when computers are good enough to replace our brains. Together with robotic bodies, humanity will achieve eternal life -- at least those very few wealthy enough to purchase it, like today’s space tourists. In Europe, the dream is expressed through a political party, the Partei fuer Schulmedizinische Verjuengungsforschung (Party for Conventional Medical Rejuvenation Research), which promises us that living disease-free for a thousand years is achievable [source].
The requirement is clear: to reach the Singularity, AI must get by on less electricity and fewer GPUs. It isn’t happening. As AI giants struggle to get closer to AGI, they need more energy, more data, more GPUs, and more money. The progress is self-defeating.
LLM-based AI cannot scale, as we learn from OpenAI’s impending release of GPT-5 (aka Orion), which is to run on clusters of 300,000 GPUs: “The jump in quality from GPT-4 to Orion is far smaller than the jump from GPT-3 to GPT-4.” As Ed Zitron explains, “They're running out of data. They're getting more expensive, but not much better, and not really anymore powerful” [source].
To get to the Singularity, AI would need to show exponential growth in capability along with an exponential decline in resource requirements, not the opposite, as is happening today.
What Geoffrey Miller Meant
I admit that I can only guess at what Geoffrey Miller meant by “If you understand the AI alignment problem, you understand that advocating for AI accelerationism means betraying humanity.” Here’s my take:
The AI-alignment problem is that AI is unable to align its values with ours, because it is a single entity, whereas humans are eight billion entities. By accelerating AI development, regardless of guardrails, the result is an AI that betrays humans by being unaligned with our values -- and that will be of worth only to the tiny percentage for whom it was coded.
AI was first developed in the 1950s. When, in future years, we look back at today’s LLM-based hysteria, it will be but a blip in a long line of failures on a road that has yet to produce artificially intelligent computers.