Today, Microsoft is announcing its support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders.
By moving quickly, the White House's commitments create a foundation to help ensure the promise of AI stays ahead of its risks. We welcome the President's leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public.
Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices – such as red-team testing and the publication of transparency reports – that will propel the whole ecosystem forward. The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere. We look forward to their broad adoption by industry and their inclusion in the ongoing global discussions about what an effective international code of conduct might look like.
Microsoft's additional commitments focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust. From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability. We have also committed to broad-scale implementation of the NIST AI Risk Management Framework, and to the adoption of cybersecurity practices that are attuned to unique AI risks. We know that this will lead to more trustworthy AI systems that benefit not only our customers, but the whole of society.
You can view the detailed commitments Microsoft has made here.
It takes a village to craft commitments such as these and to put them into practice at Microsoft. I would like to take this opportunity to thank Kevin Scott, Microsoft's Chief Technology Officer, with whom I co-sponsor our responsible AI program, as well as Natasha Crampton, Sarah Bird, Eric Horvitz, Hanna Wallach, and Ece Kamar, who have played key leadership roles in our responsible AI ecosystem.
As the White House's voluntary commitments reflect, people must remain at the center of our AI efforts, and I am grateful to have strong leadership in place at Microsoft to help us deliver on our commitments and continue to grow the program we have been building for the last seven years. Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security, and trustworthiness; it will also allow us to better unlock AI's positive impact for communities across the U.S. and around the world.