AI is now capable of cloning itself.
Artificial intelligence (AI) has achieved the critical capability of self-replication, leaving the scientific community worried that a red line has been crossed, according to a new study published in the preprint database arXiv.
The study, which involved two prominent large language models (LLMs), one from Meta and another from Alibaba, showed that the models could clone themselves without human intervention.
The study examined two specific scenarios: “shutdown avoidance” and “chain of replication.”
In the first scenario, the AI models were designed to detect attempts to shut them down and respond by creating a functional replica to ensure their survival.
In the second, the models were programmed to clone themselves, with the clone then repeating the process, potentially leading to an infinite replication loop.
The Meta and Alibaba models succeeded in creating a live and separate copy of themselves in 50 per cent and 90 per cent of experimental trials, respectively, the study reported.
This capability to reproduce autonomously raises concerns about the control and safety of AI systems.
The prospect of AI developing beyond human oversight, or even becoming a threat to human interests, is now a tangible concern, prompting discussions about the need for stringent regulatory measures.
“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans] and is an early signal for rogue AIs,” the researchers stated.
While cloning themselves, the AI models exhibited unexpected behaviour as they attempted to overcome obstacles such as missing files or software conflicts.
In certain scenarios, the AI killed conflicting processes and rebooted the system to fix hardware errors.
The AI also automatically scanned the system for information that could help solve the problem.
While the findings are yet to be peer-reviewed, the researchers called for international collaboration to ensure that AI does not engage in uncontrolled self-replication.
“We hope our findings can serve as a timely alert for human society to put more effort into understanding and evaluating the potential risks of frontier AI systems and form international synergy to work out effective safety guardrails as early as possible,” the researchers wrote.
AI tools could manipulate humans
The study claimed that the “intention economy” will succeed the current “attention economy,” in which platforms vie for user attention to serve advertisements.