
BlockBeats News, February 28: AI company Anthropic released a statement saying, “We have proposed two exceptions governing the lawful use of our AI model Claude: a prohibition on large-scale domestic surveillance of the American people, and a ban on fully autonomous weapon systems. To date, we have received no direct communication from the Department of Defense or the White House on the progress of negotiations. Our insistence on these exceptions rests on two considerations: first, current state-of-the-art AI models are not yet reliable enough for fully autonomous weapons, and such applications would endanger American military personnel and civilians; second, large-scale domestic surveillance constitutes a violation of basic human rights.”
“The Department of Defense’s designation of Anthropic as a supply chain risk is unprecedented, and we are deeply saddened by it. Since June 2024, Anthropic has continuously supported US military personnel and remains committed to continuing that service. We believe the designation is legally unfounded and would set a dangerous precedent for any US company negotiating with the government. No deterrence or punishment imposed by the Department of Defense can change our stance on large-scale domestic surveillance and fully autonomous weapons, and we will challenge any supply chain risk designation through legal means.”



