- Mentions of "responsible AI" (and similar terms) are increasing in job advertisements
- Legal, education, mathematics, and R&D occupations use the terms most often
- Regulation does not appear to drive the trend; could the terms just be buzzwords?
New data suggest that, despite stronger regulations, it is corporate image and branding, rather than policy compliance, that mainly drive employers' use of these terms in job advertisements.
An analysis of job-platform listings, which searched for terms such as "responsible AI", "ethical AI", "AI ethics", and "AI governance", revealed only a weak correlation (0.21) between the strength of a country's AI regulation and responsible-AI mentions in its job ads.
Human-centered occupations in the legal, education, mathematics, and R&D sectors were among those most likely to use these terms, while technology companies were more likely to discuss AI in broader terms.
Is "responsible AI" just a buzzword?
Although mentions of responsible-AI terms have grown worldwide since 2019, they still represent less than 1% of AI-related job ads on average.
The Netherlands, the United Kingdom, Canada, the United States, and Australia lead the way, yet heavily regulated jurisdictions such as the United Kingdom and the European Union show no significantly higher mentions of these keywords than more lightly regulated countries.
In fact, the differences were more pronounced between job sectors than between regions, with the legal sector (6.5%) well above the average.
A deeper analysis of responsible-AI mentions across job listings worldwide suggests that regulatory pressure alone may be insufficient to drive widespread adoption of the terminology, and that responsible-AI mentions are more likely part of market-based incentives and corporate responsibility strategies.
"This suggests that other factors, including reputational concerns or international business strategies, may be driving responsible-AI mentions as much as, or more than, regulatory requirements," the researchers said.
With growing public concern about the risks of AI, these terms may serve as signaling tools aimed at customers, investors, and the wider market, rather than reflecting deep internal change and commitment.