WASHINGTON—Today, the White House released its “Blueprint for an AI Bill of Rights”, which aims to create a regulatory framework for companies using artificial intelligence technologies in their businesses.
In the framework, the White House calls for new regulations while admitting that some of them are likely already covered by existing laws. This is a significant departure from the Trump administration’s 2020 “Guidance for the Regulation of Artificial Intelligence Applications,” which took a much smarter approach to technological governance, encouraging AI innovation and use while protecting the rights of Americans.
“AI is critical for the next generation of technological innovation and should be seen as a tool to improve lives. While Biden tries to stop America’s technological progress, Trump gave innovators a green light and encouraged American leadership in this critical technology,” said Jennifer Huddleston, NetChoice Policy Counsel. “Departing from the pro-innovation approach under Trump, Biden’s new regulatory framework is guided by fear of innovation and looks toward hindering it in response. When it comes to concerns like discrimination, policymakers should realize that existing laws already protect consumers from those concerns. They should be wary of making the law more complex and confusing, which may discourage AI innovation.”
Huddleston continued, “To ensure American leadership in AI research and development, a better response would follow the Trump administration’s guidelines—providing a green light for innovation and guardrails to address specific and novel harms.”
If put into practice, the Biden administration’s framework would build steep barriers to entry for new and small businesses, which would be required to conduct constant testing, assessment, and review of AI systems. It would also encourage an unnecessary, almost purpose-defeating, amount of human intervention in AI processes.
Here are some concerning examples within the administration’s Blueprint for an AI Bill of Rights:
- Builds steep barriers to entry for new companies with constant testing, assessments, and review.
- “Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. Identified risks should focus on the potential for meaningful impact on people’s rights, opportunities, or access and include those to impacted communities that may not be direct users of the automated system, risks resulting from purposeful misuse of the system, and other concerns identified via the consultation process.” (p.18)
- “This ongoing monitoring should include continuous evaluation of performance metrics and harm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well as ensuring that fallback mechanisms are in place to allow reversion to a previously working system.” (p.19)
- “Those responsible for the development, use, or oversight of automated systems should conduct proactive equity assessments in the design phase of the technology research and development or during its acquisition to review potential input data, associated historical context, accessibility for people with disabilities, and societal goals to identify potential discrimination and effects on equity resulting from the introduction of the technology.” (p.26)
- “Any use of sensitive data or decision process based in part on sensitive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go through a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an independent ethics committee or similarly robust process).” (p.38)
- Calls for new regulations…
- “Ensuring some of the additional protections proposed in this framework would require new laws to be enacted or new policies and practices to be adopted.” (p.9)
- “Systems should be designed, developed, and deployed by organizations in ways that ensure accessibility to people with disabilities.” (p.27) → the Americans with Disabilities Act already exists
- …that may already be covered under current law
- “Some algorithmic discrimination is already prohibited under existing anti-discrimination law.” (p.26)
- “In some cases, mitigation or elimination of the disparity may be required by law.” (p.27)
- Encourages extensive human review, opt-out, and fallback options
- “In addition to being able to opt out and use a human alternative, the American public deserves a human fallback system in the event that an automated system fails or causes harm. No matter how rigorously an automated system is tested, there will always be situations for which the system fails.” (p.47)
- “An automated system should provide demonstrably effective mechanisms to opt out in favor of a human alternative, where appropriate, as well as timely human consideration and remedy by a fallback system, with additional human oversight and safeguards for systems used in sensitive domains, and with training and assessment for any human-based portions of the system to ensure effectiveness.” (p.49)
If you would like to speak with Huddleston about the framework, please contact press@netchoice.org.