What Does the AI Bill of Rights Mean for Businesses?

The year 2023 may well be remembered as the year Artificial Intelligence (AI) came into its own. OpenAI’s ChatGPT chatbot exploded onto the scene, followed by major AI announcements from Google parent Alphabet and Microsoft.

The White House Office of Science and Technology Policy (OSTP) presciently published its Blueprint for an AI Bill of Rights a few short months before ChatGPT seemingly ignited the new AI age. The framework, while non-binding, carries significant implications for businesses and for the ways in which companies adopt and apply AI technologies, practices that will, at some point, likely be codified in US law.

A Blueprint for Responsible AI Use

The US Bill of Rights, the first 10 Amendments to the Constitution, extends explicit personal freedoms and rights while also restricting specific government actions. The Blueprint for an AI Bill of Rights is similar, hence the name: the framework proposes specific rights-preserving guarantees for Americans while also advocating explicit restrictions on business use of AI technologies.

Noting automated technologies can produce significant benefits, the OSTP simultaneously warns such solutions can also unfairly limit opportunities, inappropriately preclude access to essential programs and otherwise discriminate and perpetuate inequities. Examples the office describes include hiring and lending algorithms that perpetuate discriminatory practices and unchecked social media data harvesting that violates rights to personal privacy.

Accordingly, the office recommends businesses and industries adopt five principles to guide the responsible development and application of AI and automated systems and to protect everyone from harm.

The AI Bill of Rights

The OSTP’s AI Bill of Rights advocates five fundamental principles:

  1. You should be protected from unsafe or ineffective systems. 
  2. You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  3. You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
  4. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. 

The idea is that automated systems should be designed to proactively protect Americans from harm arising from unintended but foreseeable errors and practices. Impartial evaluation and reporting are among the processes that can help ensure systems remain safe.

The office reminds organizations employing AI that algorithmic discrimination may violate existing laws. Equity assessments, which can mitigate resulting harm, should be included as a matter of course whenever businesses and industries design and test automated systems.

Further, companies, regardless of the vertical market in which they operate, should always seek the user’s permission, and honor users’ preferences, when collecting, assessing, using and sharing users’ information. Sensitive data, including health records, should be used only as necessary and should be subject to ethical review and assessment. And requests for consent should be written in plain, concise language.
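To make that guidance concrete, here is a minimal, hypothetical sketch (in Python) of how a business might gate data use on recorded, purpose-specific consent. The blueprint does not prescribe any particular data model; the ConsentRecord fields and the is_use_permitted helper below are illustrative assumptions, not OSTP requirements.

```python
# Illustrative sketch only: the OSTP blueprint does not prescribe a data model.
# Field names and logic here are assumptions for demonstration purposes.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                 # e.g., "loan_underwriting", "marketing"
    granted: bool
    granted_at: datetime
    plain_language_notice: str   # the concise notice shown to the user
    sensitive: bool = False      # health, financial, biometric, etc.
    ethics_review_passed: bool = False

def is_use_permitted(record: ConsentRecord, purpose: str) -> bool:
    """Permit use only if the user consented to this specific purpose,
    and sensitive data has additionally cleared ethical review."""
    if not record.granted or record.purpose != purpose:
        return False
    if record.sensitive and not record.ethics_review_passed:
        return False
    return True

# Example: reusing data for a purpose the user never approved is denied.
consent = ConsentRecord(
    user_id="u-123",
    purpose="loan_underwriting",
    granted=True,
    granted_at=datetime(2024, 1, 15),
    plain_language_notice="We use your income data only to evaluate this loan.",
)
print(is_use_permitted(consent, "loan_underwriting"))  # True
print(is_use_permitted(consent, "marketing"))          # False
```

The design choice worth noting is that consent is tied to a stated purpose and to the plain-language notice the user actually saw, which makes later audits and opt-out handling far simpler.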

The AI Bill of Rights also emphasizes the importance of users understanding whenever AI and automated systems play a role in making decisions that affect the user. Users have the right to know, too, why the system made the decisions it did and what other factors, if any, impacted those decisions. Corresponding reporting should be provided, again, using plain and concise language. 

The OSTP, established in 1976 to strengthen and advance American science and technology initiatives and to ensure equity, inclusion and integrity within all aspects of science and technology, also proposes that users receive the option to opt out of automated systems whenever they choose. In such cases, human or alternative options may already be required by law. The blueprint recommends that alternative systems, along with assistance in troubleshooting issues experienced with AI-powered solutions, be timely and included as part of normal escalation processes.

Such considerations are important and affect the methods, processes and procedures by which companies, whether working within the banking, educational, health care, insurance, manufacturing or other fields, operate and employ AI and machine learning (ML) technologies and automated systems. Whereas some laws already govern some of these behaviors, other issues and vulnerabilities may not yet be addressed as needed. Problems, including harmful and damaging discrimination, inequities and wrongful denial of access to resources and services, can occur. To aid adoption of these fundamental principles, the office didn’t just list the five Bill of Rights principles as options companies can adopt; the OSTP blueprint also presents practical steps for applying them.

Applying the AI Bill of Rights

Because technologies and innovation evolve rapidly, the OSTP recommends applying a two-part test to determine when automated solutions should be bound by the AI Bill of Rights. The office’s guidance states the blueprint applies whenever (1) an automated system is in use that (2) possesses the capacity to impact the American public’s rights, opportunities or access to essential programs or resources. 
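As a rough illustration, the two-part test reduces to a pair of yes-or-no questions. The sketch below (in Python) is a hypothetical screening helper, not an official OSTP tool; the function and parameter names are assumptions for illustration.

```python
# Hypothetical screening helper illustrating the OSTP's two-part test.
# Function and parameter names are illustrative assumptions, not official guidance.

def blueprint_applies(is_automated_system: bool,
                      impacts_rights_opportunities_or_access: bool) -> bool:
    """The blueprint applies when (1) an automated system is in use and
    (2) it has the capacity to impact the public's rights, opportunities
    or access to essential programs or resources."""
    return is_automated_system and impacts_rights_opportunities_or_access

# Example: an automated resume-screening tool that affects hiring opportunities.
print(blueprint_applies(True, True))   # True: covered by the blueprint
# Example: an automated system that only schedules internal server backups.
print(blueprint_applies(True, False))  # False: no impact on rights or access
```

In practice, the second question is the harder one to answer and typically calls for a documented impact assessment rather than a single boolean.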

The overall goal, with the five principles and associated practices, is to create a “purposefully overlapping framework” that helps protect the public from harm, discrimination and inequity. Intriguingly, the OSTP’s guidance also states “the measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities and access.”

Such wording suggests the AI Bill of Rights, while non-binding for now, will likely become the subject of new legislation requiring businesses and industries not only to adopt its principles but also to prove compliance. Certainly, individual states are already actively drafting and passing related legislation.

Preparing now for such requirements, as well as for potential corresponding audit and assessment processes, may give some businesses an advantage when policymakers do act. Planning now, as new automated systems are being designed and developed, to accommodate the Bill of Rights’ tenets may also eliminate unnecessary costs that could otherwise arise later from having to recode or reprogram automated systems to demonstrate compliance.

Have questions about AI?

If your organization has questions about how best to employ AI—whether by upgrading to AI- and ML-powered endpoint and cybersecurity protections, employing AI tools to assist data mining efforts that enable improved decision making or for some other advantage—contact Louisville Geek. Our technicians regularly assist clients in using AI responsibly, effectively and productively to enhance operations. Call Louisville Geek at 502-897-7577 or email [email protected].