Will artificial intelligence help, replace, or kill us?
These long-unanswered questions came back into focus earlier this week, when the Pew Research Center published the results of an eye-opening poll that further underscores an unhappy reality: our debate about AI is fundamentally broken.

Pew found that more than half of all American workers reported being “worried about how AI may be used in the workplace in the future,” while 36 percent said they’re “hopeful” about the technology.
Meanwhile, about one in ten workers claimed to use large language models (LLMs) like ChatGPT every day, and more than half said they rarely or never use them.
So what gives? Will workers and consumers ultimately benefit from AI? Will robots take our jobs and impoverish us? Or will they skip the middleman and just wipe us out altogether?
This debate, which has raged for the last several years, has pitted enthusiasts against doomers, those who applaud AI’s capabilities against those who minimize and scorn them.
One group, which I label “positive autonomists,” believes that machines have already attained a measure of independence from their programmers and will fundamentally—and beneficially—transform the technological, economic, and even political landscape.
Arrayed against them are the “negative autonomists,” who fear that, unimpeded, AI will run rampant and imperil or even eradicate humanity. They therefore demand that we do impede its progress.
There are also “negative automatoners,” who contend that large language models like those from OpenAI or DeepSeek can do little beyond what they’re programmed to do and generally cheapen the human experience.
And finally, “positive automatoners” agree that LLMs are glorified word processing tools that merely reflect the input they receive but regard them as vehicles to improve how we—and they—interact with society.
These disparate views can and must be reconciled, and by humbly and judiciously applying the wisdom of our ancestors, we can strike a balance that promotes the life-enhancing and -extending potential of this technology while minimizing its downsides.
In his recent speech in Paris, Vice President J.D. Vance sought to do just that. “The United States of America is the leader in AI,” he said. “It will make people more productive, more prosperous, and more free.” This cheerleading channeled the most aggressive of the positive autonomists.
To that end, Vance vowed to lift the restraints previously imposed by ill-advised supervisory regimes, such as the top-down, one-size-fits-all EU AI Act and the now-repealed executive order on AI issued by President Biden. He correctly cautioned against regulatory capture, in which large incumbent firms exploit statutory provisions to stymie smaller competitors. “Instead,” he insisted, “our laws will keep big tech, small tech, and all other developers on a level playing field.”
The vice president also promised to “always center American workers in our AI policy,” predicting that they “will reap the rewards with higher wages, better benefits, and safer and more prosperous communities.” Here, though, tradition would have counseled Vance to temper his optimism with a dose of epistemic humility.
For example, the Babylonian Talmud records that the fourth-century scholar Rava “created a man,” a forerunner of the mythical golem and an astounding achievement in realizing human potential. The golem, according to legend, was forged from clay by Rabbi Judah Loew of sixteenth-century Prague to protect and empower the city’s embattled Jewish community—to serve the greater good.
Yet both Rava’s “man” and the golem ultimately had to be destroyed because they endangered society more broadly. An epistemically modest approach to creating powerful entities entails some sort of failsafe to safeguard against disaster—a kill switch of sorts, which negative autonomists have rightly sought.
As a corollary, humility dictates that we embrace rigorous, voluntary, industry-based guidelines ensuring the safe, proper, and ethical deployment of AI to assist consumers and workers alike. As my colleagues and I have argued, groups like the Partnership on AI and the AI Alliance, which comprise large and small LLM developers, academics, and nonprofit foundations, have promulgated standards that vigorously promote the technology for the benefit of all while mitigating its risks.
We must recognize that the advantages and disadvantages of disruptive breakthroughs aren’t always evenly distributed. American workers are quite right to exhibit mixed emotions about how the technology will turn out, and policymakers would do well to listen to them carefully and apply traditional wisdom to assuage their concerns.