You don’t have to go far to find people at the top of AI companies who express great concern that if AI is not developed with serious care and appropriate regulation, it could go wildly wrong. But are they putting their money where their mouths are?
Economists coined the term “revealed preference.” For non-math nerds, Steven Levitt’s translation is easier to grasp: “Don’t listen to what people say; watch what they do.” Here’s one chart that puts the financial resources devoted to protecting society from the potentially grave downsides of unsafe AI next to spending on something not quite so important for the future of the human species.
In 2023, a total of roughly $77 million was spent on safety work across the entire global AI industry. To use approximate figures for industry leader OpenAI, its market value hovers around $80 billion, so that $77 million amounts to less than 1/1000th of the market cap of just OpenAI. That’s not counting Meta, Microsoft, DeepMind, Anthropic, etc.
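For the arithmetic-inclined, here is a quick back-of-the-envelope check of that ratio, using only the approximate figures cited above (both numbers are rough estimates, not precise accounting):

```python
# Back-of-the-envelope check of the safety-spending ratio,
# using the approximate figures cited in the text above.
openai_market_cap = 80e9    # ~$80 billion: OpenAI's approximate valuation
global_safety_spend = 77e6  # ~$77 million: estimated global AI safety spending in 2023

ratio = global_safety_spend / openai_market_cap
print(f"Safety spend as a share of OpenAI's market cap alone: {ratio:.5f}")
print(f"That is roughly 1/{round(1 / ratio)} of a single company's valuation.")
```

Run it and the ratio comes out to about 0.001, i.e. roughly one-thousandth of one company's valuation, before the rest of the industry is even counted.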
Still believe AI developers when they say safety is a priority?