Silicon Valley in the 2020s has had two big disasters. The collapse of the cryptocurrency firm FTX and the failed firing of OpenAI CEO Sam Altman have been the talk of the town, and people couldn’t help but wonder: What were they thinking?
In the first post on this Substack, I attempted to understand the centers of power in AI and how they operate. I covered how a relatively small number of people run a relatively large constellation of institutions that work to shape public consciousness on matters of AI in order to steer regulation in a particular direction.
That article focused on why the ideology took the shape it did. The purpose of this article is to explore the people behind these ideas, what motivates them, and how they relate to some recent high-profile dysfunction in the industry.
What are the Effective Altruists?
Effective Altruism (abbreviated EA) has a history among the powerful of Silicon Valley. Originating in the late 2000s, it now holds sway over the most influential individuals and institutions and has a hand in many recent events. I was so confused trying to understand its influence that I attempted to map it:
A large portion of what I uncovered in that first article was the tie between Open Philanthropy and the AI companies, as well as between Open Philanthropy and those companies’ loudest critics. This map shows a deep connection between Open Philanthropy and the EA community. If you were to claim that they run the movement, you really wouldn’t be that far off.
To better understand the movement, I read its most famous work: “Doing Good Better”, by William MacAskill, featured in the lower-left quadrant of the map above. I audiobooked it at 2x speed while playing a video game; it spells out the basics of the movement and was a pretty good read for understanding why it is so compelling to these Silicon Valley liberal billionaire types. Here is what you need to know:
EAs believe that people should dedicate their lives to helping the disadvantaged through a contextually utilitarian framework. Their metric is the WALY, standing for Wellbeing-Adjusted Life Year (sometimes called a QALY, Q for quality, when discussing health interventions). The WALYs of a cause are the number of years of somebody’s life it will change multiplied by the (highly subjective) predicted percentage increase in happiness it will cause. Whatever cause the Altruist decides produces the most WALYs, they throw their life behind.
This utilitarian framework naturally favors certain causes over others. For example, a vaccine for an African child costs very little and can add decades of healthy life, producing an enormous number of WALYs per dollar. A treatment for a rare disease affecting only a small group, at far greater cost, falls much lower on that list.
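To make the arithmetic concrete, here is a minimal sketch of the calculation as the book frames it. Every intervention, cost, and wellbeing figure below is invented purely for illustration; real EA cost-effectiveness models are far more elaborate:

```python
# A toy version of the WALY calculus described above.
# Every number here is hypothetical, chosen only to illustrate the comparison.

def walys(years_affected: float, wellbeing_gain: float) -> float:
    """WALYs = years of a life changed x predicted fractional increase in happiness."""
    return years_affected * wellbeing_gain

def walys_per_dollar(years_affected: float, wellbeing_gain: float, cost: float) -> float:
    """Cost-effectiveness: how many WALYs each donated dollar buys."""
    return walys(years_affected, wellbeing_gain) / cost

# A cheap childhood vaccine: many healthy years gained for a few dollars.
vaccine = walys_per_dollar(years_affected=50, wellbeing_gain=0.8, cost=25)

# An expensive treatment for a rare disease: fewer years, vastly higher cost.
rare_treatment = walys_per_dollar(years_affected=10, wellbeing_gain=0.5, cost=500_000)

print(f"vaccine:        {vaccine:.5f} WALYs per dollar")         # 1.60000
print(f"rare treatment: {rare_treatment:.5f} WALYs per dollar")  # 0.00001
# The "effective" move is simply to fund whichever number is larger.
```

Once every cause is reduced to a single number like this, ranking all of human suffering becomes a spreadsheet exercise, which is part of why the framework appeals so strongly to the engineering mindset.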
Effective Altruism is leftist in that it fights for equity and the well-being of all people, yet most American leftists hate it. That hatred makes sense on an optics level: although I rarely find myself agreeing with the Rockefeller and Soros astroturf Timnit Gebru, she hit the nail on the head when she called it “a movement consisting of an overwhelmingly white male group”. But it also makes sense on a fundamental level: Effective Altruism offers a way to pursue the unashamed surrogate activity of wealth-seeking under a liberal framework through the deflection: “I have an expensive sports car, but I also donate millions to those in need!”
So why should you be against Effective Altruism?
Effective Altruists do not believe that free-market capitalism will bring peace and prosperity. Instead, they believe they must wield it to implant themselves in positions of power, where they can choose causes that will Do Good Better™ (in their own highly subjective opinion) rather than letting the market allocate resources naturally.
So, what cause do Effective Altruists believe is most important for maximizing their WALYs or QALYs or whatever? AI safety, they decided.
EAs want to solve humanity’s biggest potential problems, and an AI apocalypse ranks high on that list if you believe such a thing is possible. And since they aren’t too hot on free-market capitalism in general, they especially despise it in AI.
For example, here is the prominent Effective Altruist Eliezer Yudkowsky from MIRI (featured toward the center of the map above) in an article on how to prevent AI disaster:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Pausing AI Developments Isn’t Enough. We Need to Shut it All Down | TIME
This man is the leader of the AI space within Effective Altruism.
Yudkowsky’s organization, MIRI, is funded mainly by Open Philanthropy money. Open Philanthropy’s co-founder, Holden Karnofsky, held a seat on the OpenAI board for years, leaving only when his wife, Daniela Amodei, co-founded the rival lab Anthropic.
But in his time on the board, he made his mark.
The OpenAI fiasco
On November 17, 2023, the board of the famous AI startup OpenAI fired Sam Altman.
Over the years, OpenAI’s board saw regular turnover as Elon Musk joined and left, and Karnofsky departed. The final board prior to Altman’s firing had deep ties to Effective Altruism. Here are the board members:
Adam D’Angelo (voted to fire Altman)
Funded Asana, Dustin Moskovitz’s startup (Moskovitz co-founded Open Philanthropy).
Runs Quora, which operates the Poe chatbot. Poe competes with OpenAI’s ChatGPT, creating a conflict of interest: ChatGPT’s recent Custom GPTs feature, for example, replicated something Poe already offered, bringing the two products into even closer competition.
Helen Toner (voted to fire Altman)
Formerly worked for the Centre for the Governance of AI, funded by Open Philanthropy and run by Effective Ventures.
Formerly worked for GiveWell, the organization that researches Effective Altruist causes and out of which Open Philanthropy grew.
Tasha McCauley (voted to fire Altman)
Co-founder and board member of Effective Ventures.
Board member of the Centre for Effective Altruism.
Ilya Sutskever (voted to fire Altman)
Co-founder and Chief Scientist of OpenAI
Famous AI safety advocate, who later wrote that he deeply regretted his participation in the board’s decision to fire Altman.
Greg Brockman (voted to retain Altman)
President of OpenAI
Sam Altman
Former and current CEO of OpenAI
Former president of Y Combinator
The Effective Altruists intended to do whatever they could to slow and constrain AI development. So when Altman led the team that trained the best LLM available (GPT-3.5, and later GPT-4) and made it highly accessible to the public through the free ChatGPT app, it enraged them. He pushed his team to accelerate, pulling timelines forward and resulting in the Q* research model. While the details around Q* remain largely unknown, it is rumored to be a breakthrough that makes AI capable of complex reasoning. Not coincidentally, his firing happened right after.
Fortunately, this time, the EAs lost. The vast majority of employees signed a petition announcing they would leave the company and join him in a new venture if he was not reinstated. Their demands were met, and many of the EAs on the board and in company leadership stepped down.
The regulatory war
The Effective Altruist attempt to stop AI by taking control of tech companies was bound to fail. We live in a world with abundant VC cash and a decent-sized talent pool coming out of universities eager to build things. Tech is, by nature, disruptive, and if your company isn’t able to build something, somebody else will.
The Effective Altruists’ new strategy is to shape AI through regulation: if they can’t take control of it in Silicon Valley, they can kneecap it in DC and Brussels.
Here is Brendan Bordelon for POLITICO [1] [2]:
Acting through the little-known Horizon Institute for Public Service, a nonprofit that Open Philanthropy effectively created in 2022, the group is funding the salaries of tech fellows in key Senate offices, according to documents and interviews.
Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon web site. In 2022, Open Philanthropy set aside nearly $3 million to pay for what ultimately became the initial cohort of Horizon fellows.
“It’s an epic infiltration,” said one biosecurity researcher in Washington, granted anonymity to avoid blowback from EA-linked funders.
Effective altruists in Washington skew overwhelmingly young and often hail from the country’s top universities, which increasingly serve as hotbeds for the movement. EAs are usually white, typically male and often hail from privileged backgrounds.
Like many of her peers, Connell calls EA a “cult.” And she said there are some specific tells that show which AI and biosecurity researchers are members.
Sam Bankman-Fried: the most Effective of them all
In November 2022, the third-largest cryptocurrency exchange platform collapsed spectacularly. FTX, run by Sam Bankman-Fried (abbreviated SBF), had put billions of customer funds into risky bets and shady altcoins, resulting in bankruptcy and millions of users losing access to the money they held on the platform.
While bankrupting his firm, SBF took billions of dollars in compensation, which he used to fund the Effective Altruist movement and make massive political donations. He set up the FTX Future Fund, headed by William MacAskill (author of the previously mentioned Doing Good Better).
Since the fallout, MacAskill and every other EA celebrity have publicly denounced SBF. However, I would offer a different idea: SBF accomplished precisely the goals of Effective Altruism.
The net effect of Sam Bankman-Fried was indistinguishable from the stated intentions of the movement. SBF took billions of dollars from people, primarily younger Americans who came of age during the 2008 financial crisis and the 2020 COVID-19 recession and hoped to hedge their savings against runaway inflation and financial authoritarianism, and redistributed it to the NGO class and shady longtermist nonprofits. And politicians.
Now compare this to how many corporations today practice direct corporate giving. ExxonMobil, Google, Microsoft, Apple, and many others give millions or billions in nebulous charitable donations, money that could instead lower prices for customers or fund R&D. Both what SBF did and what EAs advocate share the same distaste for free markets: reappropriating money from common folk into the hands of who-knows-what so the givers can feel better about themselves.
Effective Altruism is not a new historical phenomenon. It echoes the Soviet bureaucrat’s desire for technocratic control of society, the power to declare what is best for everyone (hint: it always ends with them at the top of the economic hierarchy). It also answers the impulses of the elite of a secularized age who, lacking moral grounding in religion, seek spiritual meaning in their pursuits regardless of how effective those pursuits actually are.
Given the damage and blunders of the Effective Altruist movement in recent years, public opinion is turning against it on the populist wings of both political factions. For its rejection of the free-market capitalism that will uplift us all, its authoritarian influence on regulation, and its desperation to stop OpenAI from democratizing access to the latest innovations in AI that have empowered millions, I hope it dies.