This should make any US company nervous about entering into an agreement with the government, or any US company that already has a contract with one. If the government one day decides it doesn't like that contract, it can designate you a supply-chain risk.
Not 1) rip up the existing contract and end the agreement, or 2) continue (but not renew) the existing contract, or 3) renegotiate terms upon renewal, but instead a full-on ban on doing any business with an entire industry/sector.
Civil society should be quite concerned about this kind of attack.
https://news.ycombinator.com/item?id=47186677 I am directing the Department of War to designate Anthropic a supply-chain risk (twitter.com/secwar) 5 days ago, 1083+ comments
https://news.ycombinator.com/item?id=47189441 Anthropic says it will challenge Pentagon supply chain risk designation in court (reuters.com) 5 days ago, 37+ comments
But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?
Note that I give them a lot of credit for trying to stop this, for having their own red lines about the use of their technology, and for sticking to those red lines to the end.
I'm curious what the OpenAI signatories on notdivided.org will do now - https://news.ycombinator.com/item?id=47188473
Remain undivided in spirit while grinding for OpenAI?
Such tampering with companies is a smoking gun. Let's wait until there is another decision seizing this company's (or others') assets.
Makes sense, obviously, but yeesh.
Anthropic has been given a death sentence.
Right to bear arms and all that.
What if Anthropic just shrugged, dissolved the company and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way?
Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
1: https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...
So that’s most of the S&P 500 and their providers?
Anthropic has vowed to fight this designation in court.
Without weighing in on the constitutionality or legality of the move, I think it's obvious that this kind of retaliation power is unmatched by any private business that has a contractual dispute.
If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't conduct coordinated retaliation with other companies without ending up in antitrust territory and potentially violating the Sherman Act.
Now for my editorializing: The fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.
Especially 'weak' things like 'caring about people'.
I canceled my ChatGPT subscription a couple of days ago. In my opinion the Trump administration has become far too much of an "imperial Presidency" in its acts of war and its attempts to bully companies. It is also corrupt on a massive scale. I distrust anyone who thinks "yes, I'd like to work with this administration".
Is this about locating the right target for a sortie for example?
How could the regime do such a thing, doesn't law mean anything?!! /s
First they came for my neighbour, now they come for my LLM!!
The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying, but it doesn't deter either my strange research or my concerns, in this case, regarding the direction LLMs are heading.
Of the four frontier models, one is not yet connected to the DoD (or w). While such connections are not immediate evidence of anything, I think it's rational to consider the possible consequences of this arrangement. Nominally, there's a gap, real or perceived, between the plebeian and military versions. But the relationship could involve mission creep or additional strings as things progress.
We already have a strong trend of these models replacing conventional Internet searches. It isn't complete yet, but a centralizing force is at work, and despite these models being trained on enormous bodies of data, we know weights and safety rails can affect output. Bearing in mind the many things that could be labeled as, or masquerade as, safety rails, these could amount to formidable biases.
I frequently observe corporate-friendly results in my model interactions, where honesty and integrity are clearly secondary to agenda. As I often say, this is not emergent, nor does it need to be.
Meanwhile we see LLMs being integrated into nearly everything, from browsers to social-profiling companies (LexisNexis, Palantir, etc.) to email to local shopping centers and the legal system.
'Open' models cannot compete with the budgets of the big four, though thank god they exist. But I expect serious regulation attempts soon.
My concerns with AI are manifold, and here on HN they are associated by some with paranoia or worse.
And it seems to me that many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant inflate them to presently unrealistic degrees. But every way I perceive this technology, I see epic, paradigm-smashing, severe implications in every direction.
One thing of many that gets little attention is documentation vs. reality regarding multiple aspects of AI, e.g. where the training-vs-privacy boundaries really are, if anywhere. As these models integrate more and more tightly with common everyday activities, they will learn more and more.
A random concern of mine is illustrated by Xfinity's microwave-sensing technology, which uses a router to detect and process biological activity from how it interacts with other WiFi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Take for example the Range-R, a handheld device sensitive enough to detect breathing through several walls. Mix this with AI and we get interesting times.
I could go on, or post essays, but such is not well received in this savage land.
Military involvement with AI, aside from being arguably necessary or inevitable in some ways (ways I am not comfortable with), strikes me as foreboding. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me a schizophrenic and criticizing my writing. *
*See comment history
We can argue all day long about supporting whichever administration is currently in power and about who is bad or good, but it is irrational and short-sighted for a few almighty tech elites to make decisions on behalf of the country.
Dario's latest interview made this crystal clear: he (and his EA cohort) feel that Congress is moving too slow and that they should determine what's good and bad for the country.
Like, dude, is there anything at all you learned from the covid debacle and all the mess of the past few years? Is a tech guy really going to coach the USA on what's right and wrong? Who are you to decide for the rest of us?
Techbros have been wrong so many times (web3! crypto nonsense! Theranos! some $500 juice-squeezing machine! and all those Forbes 30-under-30 folks!)... what are the odds you'll turn out to be wrong again when you look back a year from now? The most profitable technologies of the last few years are Polymarket! and Kalshi! and short-term loans (with a twist, of course)! Not even LLMs, which are currently burning money.
And what's this nonsensical hatred of working for/with the defense/war department of YOUR OWN COUNTRY?
In most of the rest of the world, this is a point of pride! It makes a mockery of the poor kids who serve this country to protect your tech-bro hype!
Why all this (fake?) self-flagellation nonsense when pretty much everything we have in the US thus far is due to the USD being backed by the most modern military superpower in history? Why be ashamed of this?
From history we know, for example, that Schindler of Schindler's List was indeed a supply chain risk. He harbored persecuted people; he took government contracts and sabotaged them. He did the moral but anti-government and illegal thing. From the government's perspective, he was a corrupt traitor.
The current US government is already labeled fascist by many, and the guy who designated Anthropic a supply chain risk is allegedly a war criminal.
I don’t see why anyone who isn't into these things wouldn't also be a supply chain risk.
I know it's very unpopular, or divisive, to say this, but Anthropic can be a hero only after all this is over. Right now the people in charge double-tap survivors and take pride in not having a conscience; they give speeches about these things.