{"id":1477,"date":"2025-10-27T20:50:24","date_gmt":"2025-10-27T21:50:24","guid":{"rendered":"https:\/\/globaltaalenthq.com\/?p=1477"},"modified":"2025-11-03T09:10:50","modified_gmt":"2025-11-03T09:10:50","slug":"anthropic-draws-fire-from-white-house-with-ai-warnings","status":"publish","type":"post","link":"https:\/\/globaltaalenthq.com\/index.php\/2025\/10\/27\/anthropic-draws-fire-from-white-house-with-ai-warnings\/","title":{"rendered":"Anthropic draws fire from White House with AI warnings"},"content":{"rendered":"
Anthropic has been a rare voice within the artificial intelligence (AI) industry cautioning about the downsides of the technology it develops and supporting regulation, a stance that has recently drawn the ire of the Trump administration and its allies in Silicon Valley.

While the AI company has sought to underscore areas of alignment with the administration, White House officials supporting a more hands-off approach to AI have chafed at the company's calls for caution.

"If you have a major member of the industry step out and say, 'Not so much. It's OK that we get regulated. We need to figure this out at some point,' then it makes everyone in the industry look selfish," said Kirsten Martin, dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.

"The narrative that this is the best thing for the industry relies upon everyone in the industry being in line," she added.
This tension became apparent earlier this month when Anthropic co-founder Jack Clark shared a recent speech on "technological optimism and appropriate fear." He offered the analogy of a child in a dark room afraid of the mysterious shapes around them that the light reveals to be innocuous objects.

"Now, in the year of 2025, we are the child from that story and the room is our planet," he said. "But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come."

"And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade," Clark continued. "And they want to get us to turn the light off and go back to sleep."

Clark's remarks were quickly met with a sharp rebuke from White House AI and crypto czar David Sacks, who accused Anthropic of "running a sophisticated regulatory capture strategy based on fearmongering" and fueling a "state regulatory frenzy that is damaging the startup ecosystem."

He was joined by allies like venture capitalist Marc Andreessen, who replied to the post on the social platform X with "Truth." Sunny Madra, chief operating officer and president of the AI chip startup Groq, also suggested that "one company is causing chaos for the entire industry."
Sriram Krishnan, a White House senior policy adviser for AI, criticized the response to Sacks's post from the AI safety community, arguing the country should instead be focused on competing with China.

Sacks later doubled down on his frustrations with Anthropic, alleging that it has been the company's "government affairs and media strategy to position itself consistently as a foe of the Trump administration."

He pointed to previous comments from Anthropic CEO Dario Amodei, in which he reportedly criticized President Trump, as well as op-eds that Sacks described as "attacking" the president's tax and spending bill, Middle East deals and chip export policies.

"It's a free country and Anthropic is welcome to its views," Sacks added. "Oppose us all you want. We're the side that supports free speech and open debate."
Amodei responded last week to what he called a "recent uptick in inaccurate claims about Anthropic's policy stances," arguing the AI firm and the administration are largely aligned on AI policy.

"I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development," he wrote in a blog post.

He cited a $200 million Department of Defense contract Anthropic received earlier this year, in addition to the company's support for Trump's AI action plan and other AI-related initiatives.

Amodei also acknowledged that the company "respectfully disagreed" with a provision in Trump's tax cut and spending megabill that sought a 10-year moratorium on state AI legislation.

In a New York Times op-ed in June, he described the push as "understandable" but argued the moratorium was "too blunt" amid AI's rapid development, emphasizing that there was "no clear plan" at the federal level. The provision was ultimately removed from the bill by a 99-1 vote in the Senate.

He pointed to similar concerns about the lack of movement on federal AI regulation in the company's decision to endorse California Senate Bill 53, a state measure requiring AI firms to release safety information. The bill was signed into law by California Gov. Gavin Newsom (D) late last month.

"Anthropic is committed to constructive engagement on matters of public policy," Amodei added. "When we agree, we say so. When we don't, we propose an alternative for consideration. We do this because we are a public benefit corporation with a mission to ensure that AI benefits everyone, and because we want to maintain America's lead in AI."
The recent tiff with administration officials underscores Anthropic's distinct approach to AI in the current environment. Amodei, Clark and several other former OpenAI employees founded the AI lab in 2021 with a focus on safety, which has remained central to the company and its policy views.

"Its reputation and its brand is about that mindfulness toward risk," said Sarah Kreps, director of the Tech Policy Institute at Cornell University.

This has set Anthropic apart amid an increasing shift toward an accelerationist approach to AI, both inside and outside the industry, Kreps noted.

"The Anthropic approach has been fairly consistent," she said. "In some ways, what has changed is the rest of the world, and [that] includes the U.S., which is this acceleration toward AI, and a change in the White House, where that message has also been toward acceleration rather than regulation."

In a shift from its predecessor, the Trump administration has placed a heavy emphasis on eliminating regulations that it believes could stifle innovation and cause the U.S. to fall behind China in the AI race.

This has created tensions with states, most notably California, that have sought to pass new AI rules that could end up setting the path for the rest of the country.

"I don't think there's a right or wrong in this. It's just a degree of risk aversion and risk acceptance," Kreps added. "If you're in Europe, it's a lot more risk-averse. If you're in the U.S. two years ago, it's more risk-averse. And now, it's just a vision that embraces some greater degree of risk."