Leading AI scientists are calling on world leaders to take stronger action on AI risks, warning that progress has been insufficient since the first AI Safety Summit six months ago. At that summit, world leaders pledged to govern AI responsibly; now, twenty-five of the world's top AI scientists say not enough is actually being done to protect us from the technology's risks.
Professor Philip Torr of the Department of Engineering Science, University of Oxford, a co-author of the paper, says: "The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do."
Among the paper's recommendations, governments should mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations. For exceptionally capable future AI systems, the authors argue, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.
Source: Tech Daily Report (techdailyreport.net)