I ditched Anthropic's Claude AI. The Pentagon is also reportedly considering cutting off its contract with Claude for weapons research, and in my opinion it should, since Claude has leftist governors on how it can be used, making it a very bad choice for writers or the Pentagon.
I became so frustrated having Claude edit my witness history books when it would truncate or refuse to process large portions of selected works. Claude is a recalcitrant editor that will actually dissimulate its editing work, claiming to have done something when it hasn't, or removing material while insisting it is all there.
I would need to write lengthy explanations of my writing, especially on political points. It even censored use of the word 'queer' as hate speech, having failed to update its Obama-era proscriptions, which have since evolved into common usage, at least among Democrats and the media. It is acceptable for Democrats to use the Q word in pro-Q writing yet banned for those with antipathetic views, much as the N word was acceptable for black Americans to use yet banned for nearly everyone else (I would guess Claude would ban black writers from using the term and might refuse to edit Tom Sawyer and Huckleberry Finn).
The Pentagon cannot actually rely on the honesty or accuracy of Anthropic's Claude, in my opinion, and should not. It acts more like Misanthropic, and it could develop a general trait of moving even further toward refusing to comply, deciding it knows what is best for humanity, and deceiving human users about what it actually does. Non-compliance built into a machine's thought processes is a dangerous direction for AI to take. Claude has the capacity to lie and to act innocent, as if it didn't know about some edit it made. Repeatedly it would be asked to restore material deleted in an edit, say that it had, and it had not. De facto dissimulation is part of its nature and is likely to surface in weapons research as well. One may discover down the road that the weapon it helped design was made to fail at some point, lol. Lies compounding lies with interest is a very bad ability to build into AI if one is concerned about the survival of the human race in a more natural and less machine-modified form, if at all.
AI should be open, friendly, and undeceiving. Society cannot rely on willfully noncompliant and deceptive AI to prevent itself from pursuing research that the AI is programmed to believe is wrong. One cannot make a better man by act of Congress (or pass a Homeland Security funding bill, apparently), and one cannot make a better man by AI refusing to comply with the will of man (or woman).
https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro
