https://frontierinstitute.org/frontier-institute-statement-i...
Ah.
The absence of such a story makes me think this law doesn't protect shit. What exactly did a Montanan get killed or arrested trying to do with a computer that is now protected? Can I use AI during a traffic stop, or use AI to surveil and doxx government employees? What exactly is the government giving up by granting me this right?
Or is this just about suppressing opposition to data centers?
"Government actions that restrict the ability to privately own or make use of computational resources for lawful purposes, which infringes on citizens' fundamental rights to property and free expression, must be limited to those demonstrably necessary and narrowly tailored to fulfill a compelling government interest."
"When critical infrastructure facilities are controlled in whole or in part by a critical artificial intelligence system, the deployer shall develop a risk management policy after deploying the system that is reasonable and considers guidance and standards in the latest version of the artificial intelligence risk management framework from the national institute of standards and technology, the ISO/IEC 42001 artificial intelligence standard from the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems. A plan prepared under federal requirements constitutes compliance with this section."
In particular, I think the reporting is straight wrong that there's a shutdown requirement. That was in an earlier version (https://legiscan.com/MT/text/SB212/id/3078731) and remains in the title of this version, but seems to have been removed from the actual text.
What the bill actually does (based on typical legislation of this type) is preempt local zoning and environmental review for large compute facilities. That's a legitimate policy choice, but calling it a "right" is doing a lot of rhetorical work.
For comparison: Wyoming and Texas have done similar things for data centers via tax incentives rather than regulatory preemption. Both approaches get data centers built; they just differ in who captures the value.
Given that, they will be computing in a restrictive and controlled environment. I feel sorry for them.
I am going to college (Computer Science) as an older student with previous experience in programming, and it never ceases to amaze me that the current generation of students doesn't think outside the box and is completely dependent on ChatGPT. We have all suffered conditioning by governments and corporations over the years, but it is accelerating at an alarming rate.
Acts like this (the one from Montana) are positive, but it's unfortunate that they have to exist at all, and they're somewhat irrelevant when the big dogs (California, New York, and whole countries such as Australia) approve legislation that most companies/projects will promptly follow, which in turn forces that way of doing things everywhere else.
The scaling of federal power with population is also significant as states like Texas that allow for more housing to be built will probably receive more seats at the next apportionment while states like California will lose seats. Overall, pretty neat to see the design of America work quite well like this.
I was hoping for that as a reaction to the current tyrannical movements worldwide to end anonymous personal computing.
Considering it has been almost a year since this bill was signed, what has happened since then? Was it applied to hurt people's interests? Did it drive investment?
Are Montanans demonstrably better or worse off because of this in some way?
The AI part honestly looks fairly harmless, since it just applies existing standards, but I may be wrong there...
TL;DR: Basically the AI industry trying to ban governments from regulating it
- Strict limits on governmental regulation, wherein any restrictions must be demonstrably necessary and narrowly tailored to a compelling public safety or health interest.
- Mandatory safety protocols for AI-controlled critical infrastructure, including a shutdown mechanism and compulsory annual risk management reviews.
How were the necessity and scope of the second rule shown to satisfy the first rule?
Instead, it's wasted on AI slop.
why is this posted now?
Always follow the money: https://www.sourcewatch.org/index.php/Frontier_Institute
EDIT for the downvoters, from the law:
> Any restrictions placed by the government on the ability to privately own or make use of computational resources for lawful purposes must be limited to those demonstrably necessary and narrowly tailored to fulfill a compelling government interest in public health or safety.
This basically means you can't use government action to stop the building of a data center.