
Design Mandate Proposals Threaten American AI Leadership

Scholars often cite the 1984 Betamax case as a pivotal moment in the development of modern American tech policy. The entertainment industry sought to prohibit Sony from selling its videocassette recorder because it could be, and largely was, used by consumers for copyright infringement. But the Supreme Court declined, finding that the device was “capable of substantial noninfringing use” and limiting the studios to identifying and suing those who actually used the product illegally.

Departing from the blueprint of permissionless innovation threatens to limit the benefits of AI for Americans.

The Betamax decision reflects the approach American law has taken fairly consistently in the two generations since: when faced with a new “dual use” technology, one that can be used for good or bad purposes, the law generally will not regulate the technology prophylactically out of fear of misuse. Instead, it will allow technological development to continue and will intervene only in instances of actual harm. This model of “permissionless innovation” has made America the undisputed technological engine of modern society, and the wisdom of this approach has been rewarded time and again, from the introduction of the computer, to the Framework for Global Electronic Commerce that allowed the commercial Internet to flourish, to the advent of social media networks.

Yet some states are second-guessing this approach when it comes to artificial intelligence. Last year, Colorado became the first American jurisdiction to impose ex ante model design requirements on AI systems. The law, loosely modeled on the European Union’s AI Act, is concerned not with copyright infringement but with algorithmic discrimination: the fear that AI systems may produce biased outputs that systematically disadvantage protected classes of people. To mitigate this risk, Colorado requires AI developers in sensitive sectors such as education, employment, finance, and legal services to document their efforts to assess and remove potential bias when building models. Deployers of these AI systems must adopt risk management policies and conduct annual impact assessments to detect bias. Similar proposals are pending in other states, including Virginia, Connecticut, and (surprisingly) Texas.

Unquestionably, AI bias is a legitimate concern. Sometimes it stems from models trained on skewed or unrepresentative datasets. In other instances, it reflects ongoing systemic biases in society. Machine learning excels at identifying and replicating patterns in data, which means models are likely to surface extant but previously unrecognized patterns of bias. One study, provocatively titled Man is to Computer Programmer as Woman is to Homemaker, shows how the word embeddings used in natural language processing can produce gendered outputs even when trained on innocuous datasets like Google News.
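To make the phenomenon concrete, the following is a minimal sketch of the kind of analogy test that study describes. It assumes the Python gensim library and its downloadable word2vec vectors trained on the Google News corpus; the dataset name and the exact neighbors returned are illustrative assumptions, not a reproduction of the study’s methodology.

import gensim.downloader as api

# Load word2vec embeddings trained on the Google News corpus
# (assumed available via gensim's downloader; a large one-time download).
vectors = api.load("word2vec-google-news-300")

# The analogy "man : computer_programmer :: woman : ?" is answered with
# vector arithmetic (computer_programmer - man + woman), followed by a
# nearest-neighbor search over the vocabulary.
for word, score in vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
):
    print(f"{word}\t{score:.3f}")

# The study reported "homemaker" as the top completion, illustrating how
# stereotyped associations in ordinary training text surface in model outputs.

If the nearest neighbors include stereotyped occupations, that is precisely the bias the study documents: nothing in the training pipeline was malicious, yet the learned geometry encodes the association.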

But imposing extensive model design requirements to mitigate possible bias ex ante is a misguided solution. As an initial matter, the observation that a model is “biased” depends on normative baseline judgments about how a model should work, and those judgments are sharply disputed. (Take, for example, the ongoing battle over whether social media is “biased” against conservatives.) Moreover, attempts to tweak models to correct for bias risk overcorrection, as Google learned with its infamous Gemini rollout. The ease with which Gemini generated ahistorical images (such as black men in Nazi uniforms), apparently due to prompts instructing the engine to return diverse results, drew significant online criticism and led Google to suspend the product.

More significantly, regulating AI model design also distorts competition and innovation. Regulatory compliance costs raise barriers to entry, insulating well-financed incumbents from potentially disruptive startups. Standardizing model design reduces innovation by requiring a level of homogeneity in products, narrowing the planes of potential competition. And creating a separate legal regime to govern AI bias creates litigation risk that can lead downstream deployers to eschew AI technology for fear of being sued. This is especially problematic because well-implemented AI systems can dramatically decrease discrimination by removing humans with implicit bias from decision-making.

Rather than regulating model design, the law should focus on outputs. And as my AEI colleague Will Rinehart has written, we should apply existing laws before rushing to adopt new AI-specific regulations. America already has a robust, sophisticated legal framework for identifying and addressing harmful discrimination, and that framework can be readily adapted to cover decisions made with AI assistance. This approach recognizes that AI systems are ultimately just inputs that actors can use to improve decision-making. When outcomes are discriminatory, they should be remedied, regardless of whether they are AI-assisted. This approach will likely prompt some end users to demand bias reduction strategies from developers. Such a market-driven approach is more efficient: individual actors will seek remedial strategies tailored to their own risk assessments, which may involve interventions other than model design.

Betamax provides a time-honored blueprint for nurturing nascent but potentially revolutionary technology, while protecting those harmed by its introduction. Departing from it can adversely affect innovation, lock in incumbents and kill startups, and limit the extent to which America benefits from the AI revolution it created.
