Top AI companies score badly on risk and safety assessments, ahead of Paris summit

Are leading artificial intelligence companies doing enough to keep us safe from the potential harms of their products? According to the nonprofits SaferAI and the Future of Life Institute, based in Paris and Brussels respectively, the answer is "no". We tell you more in this edition of Tech 24.
Both SaferAI and the Future of Life Institute recently released risk and safety ratings of the companies responsible for the most powerful offerings of the latest generation of artificial intelligence.
The Future of Life Institute report assessed that "the current strategies of all companies [are] inadequate for ensuring that these systems remain safe and under human control", while SaferAI found that "even the best AI companies still score relatively poorly" when it comes to risk management.
Both organisations are funded by the Estonian billionaire Jaan Tallinn, co-founder of Skype and part of the "effective altruism" movement, which espouses that AI poses an existential threat to humanity.
The poor ratings come ahead of the AI Action Summit in Paris on February 10 and 11, which will bring together world leaders, tech billionaires and civil society representatives to hash out a declaration on the future of AI policy. It's a highlight of the French diplomatic calendar for 2025 and a crucial test of whether the world can work together on regulation of the powerful technology.
We interviewed SaferAI's Head of Policy, Chloé Touzet, on Tech 24. She said that France's champion AI startup Mistral scored particularly badly in their ranking because there is "very little [information] available" about its risk management policy.
Touzet said that San Francisco-based Anthropic received a "weak" rating on their scale, despite its reputation for being more cooperative with regulators, partly because it lacks a "clear statement on their risk tolerance", meaning a policy defining "how much risk they're willing to take".
Mistral and Anthropic have been contacted for comment.
