- Microsoft has published its 2025 Responsible AI Transparency Report
- The report describes the company's plans to build and maintain responsible AI models
- New AI regulations are arriving, and Microsoft wants to be ready
With AI and large language models (LLMs) increasingly used in many parts of modern life, the reliability and safety of these models have become an important consideration for companies such as Microsoft.
The company has laid out its approach in its 2025 Responsible AI Transparency Report, explaining how it expects the technology to evolve in the years to come.
As AI has been more broadly adopted by businesses, a wave of regulations has emerged around the world aiming to establish the safe and responsible use of AI tools, along with AI governance policies that help companies manage the risks that come with using AI.
A practical approach
In the report, its second following an initial edition in May 2024, Microsoft set out how it has made significant investments in responsible AI tools, policies, and practices.
These include expanded risk management and mitigations for "modalities beyond text – such as images, audio, and video – and additional support for agentic systems", as well as a "proactive, layered approach" to new regulations such as the EU AI Act, providing customers with materials and resources to help them get ready for, and compliant with, incoming requirements.
Consistent risk management, oversight, review, and red teaming of AI and generative AI releases are "accompanied by ongoing research and development to inform our understanding of the sociotechnical issues linked to the latest advances in AI", with the company's frontier lab "pushing the frontier of what AI systems can do in terms of capability, efficiency, and safety".
As AI advances, Microsoft says it plans to create more adaptable tools and practices, and to invest in risk management systems, in order to "provide tools and practices for the most common risks across deployment scenarios".
That's not all: Microsoft also plans to deepen its work on incoming regulations by supporting effective governance across the AI supply chain.
It says it is also working internally and externally to "clarify roles and expectations", alongside research on "AI risk measurement and evaluation, and the tooling needed to operationalize it at scale", sharing progress with its wider ecosystem to support safer norms and standards.
"Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve," noted Teresa Hutson, CVP, Trusted Technology Group, and Natasha Crampton, Chief Responsible AI Officer.
"We look forward to hearing your feedback on the progress we have made and the opportunities to collaborate on all that remains to be done. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead."