On The Authenticity Of The AI Safety Movement
Uncovering the Center For AI Safety and the incentive model driving dissent
Over the past several years, a growing relationship has developed between governments, philanthropic foundations, academia, and the big tech companies. Starting with the Twitter Files investigation, a web of journalists has begun to examine how these groups and their connections have attempted to sway public opinion.
In particular, a worrying movement within the tech industry has gained control of social media and search engine services. Journalist Matt Taibbi calls this the “Censorship Industrial Complex”: government agencies, NGOs, philanthropies, the mass media, and academia working in lockstep to police which opinions and facts get to thrive in our information ecosystem.
Today I want to discuss one specific philanthropic organization, its web of funding, and the race to regulate humanity’s future. The idea came to me while listening to The New York Times’ Hard Fork podcast, where a recent episode opened with a discussion of a petition signed by many of the top figures in AI across the leading companies, universities, organizations, non-profits, and beyond. The petition, created by the “Center for AI Safety”, is only one sentence long:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Immediately, the name “Center for AI Safety” sounded to me like some evil influence operation, so I decided to look into it. As it turns out, the organization has received over $6.5 million from Open Philanthropy (1) (2), which is funded by Facebook co-founder Dustin Moskovitz and former Wall Street Journal writer Cari Tuna. Now here’s where it gets interesting: the vast majority of the petition’s signers have financial and employment ties to the biggest tech companies, governments, and even the people running the petition. None of this is disclosed on the petition website.
First, twenty-one OpenAI employees signed this petition, including CEO Sam Altman. In 2017, OpenAI received $30 million from Open Philanthropy. Additionally, Open Philanthropy’s co-CEO Holden Karnofsky signed the petition; he sat on OpenAI’s board, and his wife left OpenAI to co-found the competing AI firm Anthropic in 2021.
Along with this, a laundry list of other names funded by Open Philanthropy and other special interests appears on the petition. Fourteen signatories, including the first listed names, are employees of the University of Toronto’s computer science department. In December 2020, Open Philanthropy gave a $520,000 grant to the University of Toronto. The university’s AI research has also received a grant of undisclosed size from CIFAR, which is funded by the Canadian government. Even more notably, the university received $350,000 from the Rockefeller Foundation in September 2021 to research AI governance. The Rockefeller Foundation has a history of ethically debatable activity far too long to describe here, and it appears on journalist Matt Taibbi’s “Censorship Industrial Complex” list.
Next, two names appear from the University of Montreal, which received a $2.4 million grant from Open Philanthropy in July 2017. One of those signatories is Yoshua Bengio, to whom the grant page earmarks $1.6 million to power his own research. His department also received $125 million from CFREF, an arm of the Canadian government, to build “robust, reasoning, and responsible” AI. He has since gone on a media tour spreading worry about the direction AI development is going.
These are only a few of the examples I bothered to research that are paid for by the petition’s backers. Beyond this, the petition is filled with the names of big tech and government figures who deserve a closer look. 72 names are from Google. 14 names are from the Google-funded Anthropic. Almost every name here is from a big media organization, academia, or one of the most powerful tech companies. It raises the question: what is their motive? Are they really morally concerned that the technology they are building will be dangerous to humanity? Or have they figured out that regulation will foreclose competition and allow ideological domination of this new technology? I find it much more likely that, just as Google’s and Facebook’s market shares in the EU increased as GDPR went into effect, the current dominant forces in AI are using safety as a rhetorical hook to attain regulatory capture.
These are AI bootleggers, a term I borrow from Marc Andreessen’s recent essay, in which he divides the AI safety movement into two camps:
“Baptists” are the true believer social reformers who legitimately feel – deeply and emotionally, if not rationally – that new restrictions, regulations, and laws are required to prevent societal disaster.
For alcohol prohibition, these actors were often literally devout Christians who felt that alcohol was destroying the moral fabric of society.
For AI risk, these actors are true believers that AI presents one or another existential risks – strap them to a polygraph, they really mean it.

“Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.
For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned.
For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition – the software version of “too big to fail” banks.
And most importantly:
A cynic would suggest that some of the apparent Baptists are also Bootleggers – specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.
Call me a cynic, baby.
In this essay I have used one viral petition to show the strong financial and institutional powers leading the AI safety movement, and to argue that the authenticity of nearly all of its thought leaders deserves questioning. Nearly all of the people I have mentioned stand to lose their entire livelihood if they speak contrary to their side’s orthodoxy. I find it incredibly disturbing that the corrosive forces that drive our mainstream political arena are already corrupting the newly important AI safety space. This is not to say the AI accelerationist movement is free from ulterior motives. The big forces I see in that movement right now are:
The previously quoted venture capitalist Marc Andreessen, who stands to profit from the absence of AI regulatory capture as he invests in new companies.
Meta, which is leveraging the open-source community in pursuit of AI dominance and thus stands to gain from AI deregulation (I am working on another piece about this topic).
The ragtag group of Twitter anons, most of whom work in AI startups.
So to say that either side is completely absolved when it comes to questions of authenticity would probably be inaccurate. But the comparison is just not close. In conclusion, the AI safety movement is clearly an outgrowth of the conspiracy, encompassing many of our most powerful institutions, to eliminate dissenting and free thought in the Western world, and it should be regarded as such.