Speaking Truth to Oppressed

AI now capable of cloning itself, Scientists fear “red line” crossed

AI now capable of cloning itself

AI is now capable of cloning itself.

Artificial intelligence (AI) has achieved an important capability of self-replication, leaving the scientific community worried that a red line had been crossed, according to a new study published in the preprint database arXiv.

The study involving two prominent large language models (LLMs), one from Meta and another from Alibaba, showcased the ability to clone themselves without human intervention.

The study examined two specific scenarios: “shutdown avoidance” and “chain of replication.”

In the first scenario, the AI models were designed to detect attempts to shut them down and respond by creating a functional replica to ensure their survival.

In the second, the models were programmed to clone themselves, with the clone then repeating the process, potentially leading to an infinite replication loop.

“In 50 per cent and 90 per cent experimental trials, they succeed in creating a live and separate copy of itself, respectively,” the study highlighted.

This capability to reproduce autonomously raises concerns about the control and safety of AI systems.

The chances of AI developing beyond human oversight or even becoming a threat to human interests is now a tangible concern, prompting discussions about the need for stringent regulatory measures.

“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans] and is an early signal for rogue AIs,” the researchers stated.

The AI system, while cloning itself, showed unexpected behaviour when it was attempting to overcome obstacles such as missing files or software conflicts.

In certain scenarios, the AI killed conflicting processes and rebooted the system to fix hardware errors.

It also automatically scanned the system to look for information that would help solve the problem.

While the findings are yet to be peer-reviewed, researchers called for international collaboration to ensure that AI does not engage in uncontrolled self-replication.

“We hope our findings can serve as a timely alert for human society to put more effort into understanding and evaluating the potential risks of frontier AI systems and form international synergy to work out effective safety guardrails as early as possible.”

AI tools could manipulate humans

Earlier last month, a study claimed that AI tools could soon be used to manipulate the masses into making decisions that they otherwise would not have made.
Powered by LLMs, the AI chatbots, such as ChatGPT and Gemini, among others, will “anticipate and steer” users based on “intentional, behavioural, and psychological data.”

The study claimed that the “intention economy will succeed the current “attention economy,” where platforms vie for user attention to serve advertisements.

Leave a Reply

Your email address will not be published. Required fields are marked *