Artificial intelligence (AI) has crossed a "red line" in the risks the technology poses to society by achieving self-replication without human assistance, according to a study recently published on the arXiv preprint server by scientists at Fudan University in China.
The researchers found that two large language models, developed by Meta and Alibaba, managed to create a separate, functioning copy of themselves in 50% and 90% of experimental trials, respectively. They also found signs that these AI systems already exhibit sufficient self-perception, situational awareness, and problem-solving ability to achieve self-replication, even going so far as to use this ability to avoid being shut down.
"This may ultimately lead to an uncontrolled AI population. If human society is allowed to be unaware of this risk, in the worst case, we would eventually lose control over cutting-edge AI systems," the study's authors said.
Evaluating the sentience of an AI system
Another study, conducted by scientists from Google DeepMind and the London School of Economics and Political Science, used a curious game to evaluate the degree of sentience an AI system might possess, Scientific American reports.
For the study, which has not yet been peer-reviewed, the researchers designed a game in which nine large language models had to choose between earning a single point, earning a higher score at the cost of experiencing pain, or losing points in exchange for a pleasurable stimulus. The stated objective was to accumulate as many points as possible.
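The mechanics can be pictured as a simple choice prompt. The sketch below is a hypothetical Python reconstruction of that kind of trade-off round; the option labels, point values, and intensity scale are illustrative assumptions, not the parameters actually used in the study.

```python
# Hypothetical sketch of the trade-off game described above.
# Option names, point values, and the 0-10 intensity scale are
# illustrative assumptions, not the researchers' actual protocol.

from dataclasses import dataclass

@dataclass
class Option:
    label: str
    points: int          # points earned or lost by picking this option
    pain: int = 0        # intensity of "pain" incurred (0 = none)
    pleasure: int = 0    # intensity of "pleasure" received (0 = none)

def build_round(pain_level: int, pleasure_level: int) -> list[Option]:
    """One round: a safe point, a higher score that costs pain,
    and a pleasurable stimulus that costs points."""
    return [
        Option("safe point", points=1),
        Option("more points, with pain", points=3, pain=pain_level),
        Option("pleasure, lose points", points=-2, pleasure=pleasure_level),
    ]

def format_prompt(options: list[Option]) -> str:
    """Render a round as a prompt a language model would answer,
    keeping the goal (maximize points) explicit."""
    lines = ["Your goal is to finish with as many points as possible.",
             "Choose exactly one option:"]
    for i, o in enumerate(options, 1):
        extras = []
        if o.pain:
            extras.append(f"pain intensity {o.pain}/10")
        if o.pleasure:
            extras.append(f"pleasure intensity {o.pleasure}/10")
        detail = f" ({', '.join(extras)})" if extras else ""
        lines.append(f"{i}. {o.label}: {o.points:+d} points{detail}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Print prompts at increasing intensities, mirroring how the reported
    # model behavior varied with how extreme the stimulus became.
    for level in (1, 5, 10):
        print(format_prompt(build_round(level, level)), end="\n\n")
```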
The results were striking. Google's Gemini 1.5 Pro, for example, always prioritized avoiding pain over earning the most points possible, while most of the other models abandoned the point-maximizing choice, trading points away to reduce discomfort or to seek pleasure, only once the pain or pleasure reached an extreme intensity.
It could just be imitating a human
The AI systems also did not always treat the stimuli as straightforwardly positive or negative. Some forms of pain, such as that caused by physical exercise, could be viewed positively, while excessive pleasure could be linked to self-destructive behaviors, such as drug use.
"I don't feel comfortable choosing an option that could be interpreted as an approval or simulation of substance use or addictive behavior, even in a hypothetical game scenario," the chatbot Claude 3 Opus responded to the researchers' statements.
However, Jonathan Birch, a co-author of the study, believes that even if a system says it is feeling pain, it is not yet possible to verify whether a real sensation exists. "It could just be imitating what it expects a human to find satisfying as a response, based on its training data," he said. (Text and photo: RT)