We are focused on developing standards to make artificial intelligence more trustworthy.
I am excited to announce that Buoy has entered the federal policymaking and government affairs arena. This marks another step forward in Buoy’s commitment to putting you, our users, at the heart of what we do. We also aim to lead by example as a thought leader in the A.I. and technology space.
In response to an RFI by the National Institute of Standards and Technology (NIST), we recently submitted a comment that provides concrete suggestions on how to standardize artificial intelligence so that A.I. is more reliable, robust and trustworthy.
In our comment, we propose a certification program, managed by the Federal Trade Commission (FTC), that mirrors other consumer-facing programs such as the FTC’s Green Guides (which set the rules of the road for consumer-facing labels that designate environmentally friendly products). Buoy’s proposed certification program would promote public trust and introduce a market threshold that distinguishes trustworthy artificial intelligence systems.
To determine what exactly “trustworthy” A.I. is, we propose that this certification program be domain-specific — i.e., dependent upon the industry in which the artificial intelligence is being leveraged — and, like any other certification, subject to renewal every few years. For example, A.I. used in the healthcare space would be subject to scrutiny similar to the U.S. Food & Drug Administration’s (FDA) review of new drugs. In particular, Buoy proposes that the certification program ask the following questions at each certification date:
- Is the A.I. safe?
- Does the A.I. work?
- Is the A.I. better than what we have now?
- Are there any other uses or benefits of the A.I.?
Such scrutiny would also consider 1) the measurement of interpretability and explainability, 2) the identification of known risks and/or deficiencies, 3) ways to mitigate any resulting safety or ethical concerns, and 4) whether the A.I. complies with the OECD Principles on Artificial Intelligence.
In our comment, we also highlight the importance of acknowledging and disclosing biases in the artificial intelligence as part of the certification program. We propose that the fees associated with certification be used to subsidize the costs of market-standard certifications related to A.I. — e.g., ISO and/or HITRUST (data security). Finally, we suggest that the certification program include carve-outs for specific use cases related to A.I. and data privacy, in an effort to unify some of the recent state-level legislation regulating data privacy.
Buoy is in good company with its comment. Other organizations that submitted comments in response to the RFI include the AI Security Alliance, Amazon, Anthem, AT&T, Bank of America, Booz Allen Hamilton, Center for Democracy & Technology, Deloitte Consulting LLP, Symantec Corporation, Google, Hewlett Packard Enterprise, Intel Corporation, Kaiser Permanente, NetApp, Qualcomm, SAIC and United Technologies.
NIST is a division of the U.S. Department of Commerce, responsible for overseeing and managing standardization in areas including weights, measures and time. In February 2019, the President of the United States issued an executive order that called for American leadership in A.I. so as to maintain economic and national security, promote advancements in technology and innovation, and protect American civil liberties, privacy and our nation’s values. The executive order tasked NIST with overseeing the creation of A.I. standards related to this effort.