Are you a humanist? Most people would readily declare themselves one. They would also proudly declare others not to be.
Just so we’re clear on definitions here, this is the definition of humanism that I will examine:
In the early 21st century, the term generally denotes a focus on human well-being and advocates for human freedom, autonomy, and progress.
This humanist question has become most pronounced in the AI war between Effective Altruists (EA) and their countermovement, Effective Accelerationists (e/acc). In my years observing these two groups, I have routinely seen them hurl the charge of anti-humanism at each other.
The EA argument can be summarized by most science fiction that has dealt with AI within the past century. Take your pick of 2001: A Space Odyssey, Terminator, or Detroit: Become Human, and you’ve pretty much got a perfect description of what the Effective Altruists are afraid of. It’s a worldview with socio-political-economic implications based entirely on fictional concepts at the moment, but it should not be disregarded.
If we take their premise, it fundamentally makes sense that a thing that can think and process information much faster than humans would kill, enslave, or domesticate the human race rather quickly, much as Homo sapiens did to every other human species. Especially because many EAs believe in the Scaling Hypothesis, which posits that the path to Artificial General Intelligence is nearly complete, requiring only more data and compute. By that logic, AGI (and therefore the end of humanity) is very close. People calling EA a doomsday cult for this reason aren’t far off: EA forums like LessWrong are filled with people discussing their “p(doom)”, the probability they assign to AI causing catastrophic disaster or total human extinction.
The EAs consider themselves humanists because the murder, enslavement, and domestication of humans are antithetical to human well-being, freedom, autonomy, and most definitions of progress.
To prevent the imminent global AI catastrophe, the EAs propose a solution that is their own undoing, because the Scaling Hypothesis keeps proving true. Back in 2019, OpenAI announced it would not release GPT-2 because it was “too dangerous to release”. GPT-2 had 1.5 billion parameters and couldn’t count to ten or write fully coherent paragraphs. Fast forward five years: Meta is now releasing a 405-billion-parameter model with near-state-of-the-art capability as open-source software. They have already released smaller versions of it, including an 8-billion-parameter model that can run on a bargain-bin smartphone.
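If the smartphone claim sounds far-fetched, some napkin math (mine, not Meta’s) makes it plausible. A minimal sketch, assuming weight storage dominates memory use and standard quantization levels:

```python
# Napkin math: how much memory do an 8B model's weights need?
# Assumption: the weights dominate the footprint, and bytes-per-parameter
# depends on how aggressively the model is quantized.

PARAMS = 8e9  # parameter count of the 8B model

for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gigabytes = PARAMS * bytes_per_param / 1e9
    print(f"{label}: ~{gigabytes:.0f} GB of weights")
```

At 4-bit quantization that’s roughly 4 GB of weights, which fits in the RAM of a mid-range phone; whether it runs fast is another question.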
The current open-source models far outshine any model of just a few years ago, and they’re available with zero safety regulation for anyone to download. Here lies the core irony: imagine telling an EA from 2019 that exponentially better models would be available for anyone to download, Vladimir Putin included. Then consider the past five years: none of their concerns have come true. War and misinformation have grown in this time, but entirely for other reasons. In fact, AI looks to be a credible solution, with AI-powered military technology and AI being used to moderate social media content.
AI is scaling rapidly, and a relatively short span of years will decide whether the EAs are right or wrong. They were confidently, obnoxiously wrong in 2019. They’re probably still wrong today. And I can’t see the future. But where the EAs are succeeding is regulation. In Europe, the AI Act has already passed, and in California, SB 1047 is on its way to passing. These are significant: the AI Act pretty much gives EU regulators carte blanche to fine AI companies for anything they deem “unsafe”. The California bill requires bureaucratic oversight for any training run over 10^26 FLOPs of computation, an arbitrarily chosen number that will certainly not withstand the test of time. Friendly reminder that the Apollo Guidance Computer that got us to the moon weighed 70 pounds and had 1/1000th the processing power of an Apple Watch.
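For a sense of how arbitrary that threshold already is, here’s a rough sketch using the common back-of-envelope heuristic that training compute ≈ 6 × parameters × training tokens; the token counts are approximate public figures (the GPT-2 one is my assumption), not official filings:

```python
# Where do real training runs land against SB 1047's 10^26 FLOP cap?
# Heuristic: training FLOPs ~ 6 * parameters * training tokens.

THRESHOLD = 1e26

runs = {
    "GPT-2 (1.5B params, ~10B tokens, assumed)": 6 * 1.5e9 * 10e9,
    "Llama 3.1 405B (405B params, ~15T tokens)": 6 * 405e9 * 15e12,
}

for name, flops in runs.items():
    status = "over" if flops > THRESHOLD else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the cap)")
```

Even the 405-billion-parameter run comes out around 4×10^25 FLOPs, under the cap; a couple more doublings of compute and crossing the “frontier” line becomes routine.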
And so, for the long term, the EAs have a totally sane solution: stop AI entirely. Here’s chief AI pessimist Eliezer Yudkowsky:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
(Source: “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down”, TIME)
And precisely here is where the EAs invented their own opposition. Enter e/acc. Started by X poster @BasedBeffJezos, Effective Accelerationism really isn’t a movement so much as a countermovement to the insanity of the Effective Altruists. GPT-4 hasn’t killed anybody, and the next several years of models won’t either, so why not have fun building great tools with them?

Right now, e/acc is like the tech-culture equivalent of the protest movement of the ‘60s. It’s primarily run by young people at universities who want to rebel. Old establishmentarians [Nixon fans | tech CEOs] have legitimate concerns over [spread of communism | AI risk], so they begin misguided absurdities [Vietnam War | AI regulation] which the young people oppose because the old people are absurd and have a financial interest in being so, but more importantly because the old people have really bad vibes. The young people have [rock music | twitter memes], and the old establishmentarians have [evangelical Christianity | insufferable op-eds]. The young are aesthetically inspired by the radicalism of [communism | Landian Accelerationism], which inevitably dooms their movement to failure, but nobody’s really concerned about that, because the point wasn’t really [mass extinction | mass extinction]; it was to provide counterbalance and have some good vibes.
But accelerationism is complicated. While the term has been used by many different people, the main idea involves speeding up a system with the deliberate intention of destabilizing it. In the parlance of AI, I would consider the British philosopher Nick Land to be the originator and key spokesperson of the idea. Land’s writings are too dizzying, verbose, and overlapping to point to any single excerpt as a definition of accelerationism. The best I can do to describe it is quote the Editors’ Introduction at the beginning of his book Fanged Noumena (which is not much better).
Advanced technologies invoke ancient entities; the human voice disintegrates into the howl of cosmic trauma; civilization hurtles towards an artificial death. (…) AIs are pursued into labyrinthine crypts by Turing cops, and Europe mushrooms into a paranoia laboratory in a global cyberpositive circuit that reaches infinite density in the year 2012, flipping modernity over into whatever has been piloting it from the far side of the approaching singularity.
The basic premise at play here is that the extinction the EAs so fear, AI obsoleting us as we obsoleted previous species, is the destined path nature has set for us, and that we should not attempt to interfere with its plan. Land uses capitalism, the process by which we build machinery to replace ourselves, as a microcosm of this natural process. In the accelerationist view, this path of nature is the divine path, and we should push into it without relenting. The “Turing cops”, those who resist by pushing AI regulation and strangling capitalism, are the enemy.
Scratch the surface of e/acc, and you see the core contradiction. Consider this line from Marc Andreessen’s Techno-Optimist Manifesto:

We believe the techno-capital machine is not anti-human – in fact, it may be the most pro-human thing there is. It serves us. The techno-capital machine works for us. All the machines work for us.
Or this one:
And we believe in humanity – individually and collectively.
Or perhaps from the explainer on the official e/acc Substack:
Do you want to get rid of humans?
No. Human flourishing is one of our core values! We are humans, and we love humans.
It seems contradictory.
But let’s consider our options here. EA is not pro-human. While it perhaps ensures the continued reign of human biology upon the planet, it suffocates humanity under a totalitarian panopticon. The growth and expansion of the human species are capped (at 10^26 FLOPs, apparently). Perhaps that civilization will live to journey the stars before the heat death of the universe. But it will simply be the equivalent of giving up at the 25th mile of a marathon, a destiny unfulfilled.
Now contrast that with e/acc. On the current path of AI development, I and many of you will live to see a great explosion of human triumph as the cost of knowledge, the great bottleneck of our species, is driven to zero. Economies will hit great exponentials. But with every advance, the beautifully designed human meatsuit we were born into becomes devalued. Today we carry smartphones, wear watches, and dress in maximally comfortable clothes. Tomorrow we’ll have brain implants, prosthetic limbs, and fully hacked biology to maximize performance. Eventually, robots will be doing everything we used to do. And then nobody’s going to notice if we go missing.
But if we zoom in on the near term, the e/accs are plain-and-simple more logical. The harms attributed to the best systems today, and to those of at least the next few years, are entirely laughable. The problems AI will cause in the next century (physical warfare, biowarfare, cyberwarfare, misinformation) can only be solved by AI. America’s AI-piloted drones will kill China’s AI-piloted drones. American AI will develop vaccines at rates unheard of while Chinese AI develops diseases at rates unheard of. Everybody’s AI spends night and day developing digital security to make itself impenetrable to the opposition. It’s an arms race, but the great economic rush it inspires will make it a good deal for everyone involved. The continued existence of every nation-state will rely upon its ability to scale these systems. Attempts to decelerate are tantamount to suicide.
Through the lens of humanism, the e/acc vs EA debate is a core philosophical question that must be taken seriously, debated, and decisively settled within our society. It’s a choice between a fast and beautiful existence before fading away, or living as long as possible under the unlivable circumstances of a totalitarian world government.
Given the option, I choose e/acc. But please, stop calling yourselves humanists.
Level 1, or world-space, is an anthropomorphically scaled, predominantly vision-configured, massively multi-slotted reality system that is obsolescing very rapidly. Garbage time is running out. Can what is playing you make it to Level 2? - Nick Land