Cynthia Lummis proposes the RISE Act, a bill requiring AI transparency in exchange for legal immunity

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal designed to clarify liability rules for artificial intelligence (AI) tools used by professionals.

The bill could provide legal immunity to AI developers in exchange for transparency – though it does not require models to be open source.

In a press release, Lummis said the RISE Act would mean that professionals, such as doctors, lawyers, engineers and financial advisers, remain legally responsible for the advice they provide, even when it is informed by AI systems.

Meanwhile, the AI developers who create those systems can shield themselves from civil liability when things go wrong only if they publicly release model cards.

The proposed bill defines model cards as detailed technical documents that disclose an AI system's training data sources, intended use cases, performance metrics, known limitations and potential failure modes. All of this is intended to help professionals assess whether the tool is suitable for their work.

“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” said Lummis in a press release.

“This legislation does not create blanket immunity for AI,” Lummis continued.

However, the immunity granted under the act has clear limits. The legislation excludes protection for developers in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or when actions fall outside the defined scope of professional use.

In addition, developers face an ongoing duty of accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of deploying new versions or discovering significant failure modes, reinforcing continuous transparency obligations.

Stops short of open source

The RISE Act, as currently written, stops short of requiring that AI models become fully open source.

Developers can withhold proprietary information, but only if the redacted material is unrelated to safety, and each omission is accompanied by a written justification explaining the trade secret exemption.

In an earlier interview with CoinDesk, Simon Kim, the CEO of Hashed, one of Korea's leading VC funds, spoke about the danger of centralized, closed-source AI that is effectively a black box.

“OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous. Making this kind of [closed-source] foundational model is similar to making a ‘god,’ but we don’t know how it works,” Kim said at the time.
