When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
My, the world has changed.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or people, and even universities don't push students towards critical thinking and ethics. It has all turned into vocational training, turning humans into crafting tools.
Around the same time, at Harvard, I attended VR innovation week, and the last panel discussion of the day was Ethics and Law, featuring a law professor, a journalist, and a moderator, and attended by a handful of people. I asked why no founders, CEOs, or developers were part of the discussion or even in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how does what product companies build affect society? Laws aren't the founders' problem, that's what lawyers are for; and ethics, well, who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the level of individual thinking; or rather not thinking, just wanting and needing.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving OpenAI when this is Anthropic's stance? Are their two narrow exceptions enough to draw the ethical boundary people are comfortable with?
At the beginning, they’re usually doing it for the money — and maybe some level of patriotism. Eventually they find themselves involved in things so ugly that they can’t really stomach it anymore. At the same time, they can’t easily back out either.
Then a new CEO comes in and thinks the previous guy was too soft, "He couldn’t handle it, but I can."
And the cycle continues.
https://x.com/uswremichael/status/2029754965778907493?s=46&t...
"Everything and I mean everything can be taken from you except your integrity, only you can give that up"
Source: https://www.theguardian.com/us-news/2026/feb/26/anthropic-pe...
"Palantir's Maven uses Anthropic's Claude code, sources say."
https://www.reuters.com/technology/palantir-faces-challenge-...
It is always astonishing that the reviled mainstream press is more critical than hackers these days.
That raises the following question: why does Dario Amodei repeatedly call the Department of Defense the "Department of War"?
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
The public: Anthropic are so noble, we should give them ever more praise and money.
Is that the synopsis? (Not really paying attention.)
Slow it down as much as possible to give us more time.
"Anthropic CEO says company was punished for not giving "dictator-style praise" or donations to Trump"
https://www.datacenterdynamics.com/en/news/anthropic-ceo-say...
This is a message to the people working on that line of business at Anthropic. You don't have to do it; you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
The Silicon Valley tech jobs we have now have a history rooted in World War 2 and its funding by the US government.
https://youtu.be/ZTC_RxWN_xo?si=gGza5eIv485xEKLS
I’m not saying war is good or anything, but don't ride a high horse either, because none of this would be here without WW2.
Or more importantly - say something that says nothing.
When you say nothing to politicians like this, eventually the story moves elsewhere.
But these guys had to put a stake in the ground and yell it out loud.
In politics you must know when to speak and what to speak and how to speak without speaking.
Posted here: https://news.ycombinator.com/item?id=47195085
Trump is the communist nobody warned you about :-D
thankfully, the giga Chads always win against the incel dorks and nerds in the end
I still don't buy this discussion. How exactly do they want to use an LLM for autonomous weapons, given that it's not even possible to have it reliably write a piece of code without having to review it?
And how is a 1M-token-context model supposed to be useful for mass surveillance?
Honest questions, I am sure I am missing some details. Because so far it looks like a very sophisticated marketing strategy.