
Time to control artificial agents, says Harvard professor

Source: Xinhua | 2024-07-19 12:12:45

WASHINGTON, July 18 (Xinhua) -- As artificial agents become more capable and autonomous, they could behave in unpredictable, potentially dangerous ways and produce outcomes their creators never intended, making regulation to control them increasingly urgent, according to a U.S. expert.

For all of today's concerns about AI safety, there has been no general alarm about, or corresponding regulation of, these emerging AI agents, which are AIs that act independently on behalf of humans, Jonathan Zittrain, professor of law, computer science, and public policy at Harvard University, wrote in an article in The Atlantic.

According to the expert, artificial agents have three distinct qualities: they can be given a high-level, even vague goal and independently take steps to bring it about; they can interact with the world at large, using different software tools at will; and they can operate indefinitely unless human operators shut them down.

Since large language models can now translate plain-language goals, expressed by anyone, into concrete instructions that a computer can interpret and execute, AI agents can take in information from the outside world and, in turn, affect it, said Zittrain.

As a result, there is simply no way to know which moldering agents might stick around as circumstances change, since they may continue to operate well beyond their initial usefulness, he noted. Without any framework for identifying what they are, who set them up, and how and under what authority to turn them off, agents may end up like space junk: satellites lobbed into orbit and then forgotten, he added.

Agents pursuing vague goals might also choose the wrong means to achieve them: A student who asks a bot to "help me cope with this boring class" might unwittingly generate a phoned-in bomb threat as the AI attempts to spice things up, the expert warned.

Another potential risk is online misinformation. One way to address the issue, he proposed, is by refining existing internet standards: a new label could indicate that transmitted data has been generated by a bot or an agent, giving software designers and users a chance to choose whether or not to use it.
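To make the proposal concrete, the following is a minimal Python sketch of what such labeling might look like at the application layer. It is illustrative only: the header names "X-Machine-Generated" and "X-Agent-Id" are invented here for the example, as no such standard currently exists, and the Response class is a simplified stand-in for a real HTTP response.

    from dataclasses import dataclass, field

    @dataclass
    class Response:
        """A simplified stand-in for an HTTP response."""
        body: str
        headers: dict = field(default_factory=dict)

    def mark_agent_generated(response: Response, agent_id: str) -> Response:
        # Label the payload as machine-generated so downstream software
        # and users can decide how to treat it.
        response.headers["X-Machine-Generated"] = "true"  # hypothetical header
        response.headers["X-Agent-Id"] = agent_id         # hypothetical header
        return response

    def wants_agent_content(response: Response) -> bool:
        # A client-side check: accept the data only if it was not
        # produced by a bot or agent.
        return response.headers.get("X-Machine-Generated") != "true"

    # Usage: an agent labels its output; a cautious client filters it out.
    reply = mark_agent_generated(Response(body="summary text"), agent_id="agent-42")
    print(wants_agent_content(reply))  # False: this client declines agent output

The point of such a label is not enforcement but transparency: once the origin of data is declared in a standard place, filtering or flagging it becomes an ordinary engineering decision rather than a detection problem.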

Agents could and should have a standardized way of winding down: perhaps agents designed to last indefinitely or to have a big impact could be given more scrutiny and review, or be required to carry a license plate, while more modest ones would not, according to the expert.
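One possible form such a wind-down could take is a built-in time-to-live, after which the agent stops itself rather than persisting like the "space junk" described above. The sketch below, in Python, is an assumption-laden illustration: the ExpiringAgent class and its interface are invented for this example and are not drawn from any existing framework or from the article itself.

    import time

    class ExpiringAgent:
        """An agent with a built-in time-to-live (TTL), so it cannot
        run on forgotten after its initial usefulness has passed."""

        def __init__(self, goal: str, ttl_seconds: float):
            self.goal = goal
            self.deadline = time.monotonic() + ttl_seconds

        def expired(self) -> bool:
            return time.monotonic() >= self.deadline

        def step(self) -> None:
            # Placeholder for one unit of work toward the goal.
            time.sleep(0.1)

        def run(self) -> None:
            # Work until the TTL lapses, then wind down in a standard way.
            while not self.expired():
                self.step()
            self.wind_down()

        def wind_down(self) -> None:
            # Standardized shutdown point: release resources, log termination.
            print(f"Agent for goal {self.goal!r} reached its TTL and stopped.")

    # Usage: an agent that expires after one second instead of persisting.
    ExpiringAgent("watch this page for changes", ttl_seconds=1.0).run()

A mandatory expiry of this kind would give operators a default answer to the question of when and under what authority an agent is turned off: absent renewal, it simply stops.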
