from the agency-matters dept
One of the many interesting aspects of the new wave of artificial intelligence systems is that with them, copyright has moved from the fringe to center stage. There are continuing debates about what role copyright should play in the world of generative AI, not to mention various lawsuits asking the courts to rule on that question. Those debates and lawsuits have been about two aspects of AI: the use of copyrighted material for training large language models (LLMs), and the copyright status of those models' output. However, there is a third aspect of the new AI techniques where copyright is likely to raise important issues. This third area is agentic AI, explained here on the blog of one of the leading players in the AI world, Nvidia:
AI chatbots use generative AI to provide responses based on a single interaction. A person makes a query and the chatbot uses natural language processing to reply.
The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.
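In other words, where a chatbot is a single request-and-response exchange, an agent runs a loop: it plans a step, acts on the world through tools, observes the result, and repeats until it judges the task complete. Here is a minimal, purely illustrative Python sketch of that difference – every name in it is hypothetical, standing in for real model and tool APIs:

```python
def call_llm(context: str) -> str:
    """Stand-in for a request to a large language model."""
    if "Observed:" in context:
        return "done"                    # pretend the model is satisfied
    return "tool:lookup"                 # pretend the model wants a tool

def lookup(query: str) -> str:
    """Stand-in for a tool the agent can invoke (search, calendar, ...)."""
    return f"result for {query!r}"

TOOLS = {"lookup": lookup}

def chatbot(query: str) -> str:
    """A chatbot: one query in, one reply out."""
    return call_llm(query)

def agent(goal: str, max_steps: int = 5) -> str:
    """An agent: plan, act via a tool, observe, repeat until done."""
    context = goal
    for _ in range(max_steps):
        decision = call_llm(context)            # reason about the next step
        if decision == "done":                  # the model declares completion
            return context
        tool_name = decision.split(":", 1)[1]   # e.g. "tool:lookup"
        context += "\nObserved: " + TOOLS[tool_name](goal)  # act, then record
    return context

print(agent("find a dinner reservation"))
```

The loop is trivial here, but give it real tools – email, payments, messaging – and the stakes change quickly.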
AI agents will also dramatically change how people use their personal digital devices. Matthew Green, a professor at Johns Hopkins University, has written a great post about agentic AI and its likely impact on how we use our smartphones:
In theory these new agent systems will remove your need to ever bother messing with your phone again. They’ll read your email and text messages and they’ll answer for you. They’ll order your food and find the best deals on shopping, swipe your dating profile, negotiate with your lenders, and generally anticipate your every want or need. The only ingredient they’ll need to realize this blissful future is virtually unrestricted access to all your private data, plus a whole gob of computing power to process it.
Many people will welcome this kind of intelligent personal assistant, which will undoubtedly reduce the tedium of many daily activities. But there's a problem: providing such powerful AI agents requires software with serious computational demands. Even the best current smartphones would struggle to run it, never mind older models. As a result, agentic AI will run mostly in the cloud, on company servers. Green explains the implications:
I would say that AI is going to be the biggest privacy story of the decade. Not only will we soon be doing more of our compute off-device, but we’ll be sending a lot more of our private data. This data will be examined and summarized by increasingly powerful systems, producing relatively compact but valuable summaries of our lives. In principle those systems will eventually know everything about us and about our friends. They’ll read our most intimate private conversations, maybe they’ll even intuit our deepest innermost thoughts. We are about to face many hard questions about these systems, including some difficult questions about whether they will actually be working for us at all.
Green goes on to describe some ways in which Apple in particular is trying to minimize the risk that data processed by LLMs in the cloud could leak personal information, for example through its Private Cloud Compute, which uses special trusted hardware devices that run in Apple's data centers.
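The general technique behind such systems is remote attestation: before sending anything sensitive, the client demands cryptographic proof that the server is running known, audited software on trusted hardware. Here is a minimal conceptual sketch of that idea in Python – emphatically not Apple's actual protocol; real systems use asymmetric signatures from a secure element, where this sketch uses a shared key for brevity:

```python
import hashlib
import hmac

# Hypothetical digest of the server software image the client trusts,
# e.g. one published in a verifiable transparency log.
TRUSTED_MEASUREMENT = hashlib.sha256(b"audited-server-image-v1").hexdigest()

# Shared key standing in for the hardware root of trust; real systems
# use asymmetric signatures from a secure element instead.
HARDWARE_KEY = b"key-baked-into-trusted-hardware"

def attest(measurement: str) -> str:
    """Server side: the trusted hardware signs the software it booted."""
    return hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()

def client_send(private_data: str, measurement: str, signature: str) -> None:
    """Client side: verify the attestation before uploading anything."""
    expected = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    if measurement != TRUSTED_MEASUREMENT or not hmac.compare_digest(signature, expected):
        raise RuntimeError("attestation failed: refusing to send private data")
    print("attestation verified; sending", len(private_data.encode()), "bytes to the enclave")

client_send("my private messages", TRUSTED_MEASUREMENT, attest(TRUSTED_MEASUREMENT))
```

But as he emphasizes, this is not really a technical challenge, but a political and legal one: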
over the past couple of years I’ve been watching the UK and EU debate new laws that would mandate automated “scanning” of private, encrypted messages. The EU’s proposal is focused on detecting both existing and novel child sexual abuse material (CSAM); it has at various points also included proposals to detect audio and textual conversations that represent “grooming behavior.” The UK’s proposal is a bit wilder and covers many different types of illegal content, including hate speech, terrorism content and fraud — one proposed amendment even covered “images of immigrants crossing the Channel in small boats.”
That is the situation today. But now add in LLMs and AI agents that have access to every digital aspect of our lives:
it doesn’t really matter what technical choices we make around privacy. It does not matter if your model is running locally, or if it uses trusted cloud hardware — once a sufficiently-powerful general-purpose agent has been deployed on your phone, the only question that remains is who is given access to talk to it. Will it be only you? Or will we prioritize the government’s interest in monitoring its citizens over various fuddy-duddy notions of individual privacy.
As Green notes, the EU and UK governments – particularly the latter – seem to believe that they must have the power to access people’s systems, regardless of the collateral damage that causes. The recent demand by the UK government for Apple to insert backdoors in its end-to-end encryption – something Apple has refused to do – is further proof of that.
Now imagine a time when governments do have this access, plus the ability to control AI agents on people's systems – perhaps by means of agentic AI backdoors. Based on the last fifty years of its lobbying, it is easy to imagine the copyright industry demanding that governments pass laws requiring AI agents to act as copyright police: installed on every cloud server and personal device, and illegal to deactivate. There are various ways in which that could work. Agentic AI could search through a person's files for unauthorized copyrighted material – perhaps even contacting databases over the Internet to check whether a license is in place. AI agents could watch what users are doing online, and report them if they engage in allegedly illegal activity. With agentic AI, those capabilities could be rolled out across an entire population for the first time.
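To make concrete what the first of those mechanisms would mean in practice, here is a deliberately simplified, entirely hypothetical sketch: an agent that hashes a user's files and checks them against a rightsholder fingerprint database, represented here by a local set that a real agent would instead query over the Internet. Nothing in it corresponds to any real system:

```python
import hashlib
from pathlib import Path

# Stand-in for a rightsholder fingerprint database; all values invented.
KNOWN_COPYRIGHTED_HASHES = {"a3f5..."}   # hypothetical SHA-256 fingerprints
LICENSED_HASHES: set[str] = set()        # works the user holds a license for

def scan_for_unlicensed_works(root: Path) -> list[Path]:
    """Hash every readable file under root and flag any match against the
    fingerprint database that has no corresponding license record."""
    flagged = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
        except OSError:
            continue                     # a mandated agent would not be so polite
        if digest in KNOWN_COPYRIGHTED_HASHES and digest not in LICENSED_HASHES:
            flagged.append(path)         # a real agent might report this upstream
    return flagged

print(len(scan_for_unlicensed_works(Path.home())), "unlicensed works flagged")
```

Mandate that loop on every cloud server and personal device, outlaw its deactivation, and the result is copyright surveillance of an entire population.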
People will say such capabilities would be outrageous attacks on privacy. That is certainly true. But so, in their own way, are France's Hadopi system and Italy's Piracy Shield, both of which place policing copyright above protecting fundamental human rights. In France, copyright companies are already escalating their attacks on users' rights by asking courts to compel Virtual Private Network (VPN) providers to filter Internet data – a gross attack on freedom of speech. The copyright industry has no shame when it comes to demanding disproportionate legal privileges, and it will doubtless see the arrival of agentic AI as another opportunity to demand even more.
Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.
Filed Under: agents, ai, ai agents, control, copyright, data, privacy, surveillance