Self-Regulation and Policymaking Guidance Regarding the Use of AI and ML
Businesses with an interest in developing and using Artificial Intelligence (AI) and Machine Learning (ML) technologies are wisely looking to get ahead of regulators. Earlier this year, the Business Roundtable, an organization of 230 CEOs from some of the world's largest companies, met to consider the implications of a future in which AI and ML play an increasingly important role in automated decision-making at scale, and in which regulators are paying close attention. The meeting produced a proposed framework intended to guide regulators in developing policies that ensure the responsible use of AI and ML technologies without curbing their great potential. A growing appetite for additional regulatory oversight around the globe has certainly catalyzed these discussions among the world's most successful businesses.
1) A Growing Call for AI Regulations as Regulators Realize the Potential for Consumer Harm
How AI affects consumers varies immensely based on the industry, the company and the purpose for which the technology is used. The Business Roundtable recognizes that broad regulation of this nascent industry will be challenging, and that no one-size-fits-all regulatory solution is appropriate if companies are to maximize the potential of this new technology. Even so, several universal legal and ethical themes have emerged as targets for regulatory oversight from state and federal policymakers as well as government enforcement agencies ranging from the Federal Trade Commission to state attorneys general. Namely, as companies shift to solutions that rely in part on AI and ML, the potential for consumer harm increases in the following areas:
1. Unfair bias. Algorithms can perpetuate societal stereotypes. Some companies have scrapped complex recruiting tools after years of use upon realizing that the systems perpetuated certain biases, for example by favoring male applicants. This occurred because the software used to vet applicants was developed by observing patterns in previous hiring decisions. In industries where most of the resumes of candidates hired in the past came from men, the machine "learned" to favor male applicants.