Anthropic co-founder warns AI may design its own successor, says humans face a ‘big decision’ before 2030

Artificial general intelligence, or superintelligence, is one of the most widely cited terms in the world of AI, yet there is hardly any consensus on what it means or what its implications for society could be. Even so, leading AI labs such as OpenAI, Google, and Anthropic are racing to be the first to create a model that could reach AGI status.

However, Anthropic co-founder and Chief Scientist Jared Kaplan, in an interview with the Guardian, said that humanity faces "the biggest decision" on whether to take the "ultimate risk" of letting AI systems train themselves to become more powerful.

According to Kaplan, the period between 2027 and 2030 may be the moment when artificial intelligence becomes capable of designing its own successors.

The Anthropic executive says he is very optimistic about aligning AI tools with the interests of humanity up to the level of human intelligence, but not once they exceed that threshold.

Kaplan on AI training its successor

The moment an AI system begins training its own successor, the guardrails that AI labs currently have on their models may no longer be enough. Kaplan believes it could lead to an intelligence explosion and may even be the moment when humans lose control over the AI.

“If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it’s then making an AI that’s much smarter. It’s going to enlist that AI’s help to make an AI smarter than that. It sounds like a kind of scary process. You don’t know where you end up,” he told the Guardian.

 

In such a scenario, the AI black-box problem would become absolute: humans would not only be unsure why an AI made a decision, they would not even be able to tell where the AI is heading.

“That’s the thing that we view as maybe the biggest decision or scariest thing to do… once no one’s involved in the process, you don’t really know. You can start a process and say, ‘Oh, it’s going very well. It’s exactly what we expected. It’s very safe.’ But you don’t know – it’s a dynamic process. Where does that lead?” he noted.

Kaplan says there are two major risks in such a scenario. The first is whether humans would lose control over the AI and whether they would continue to have agency in their lives.

“One is do you lose control over it? Do you even know what the AIs are doing? The main question there is: are the AIs good for humanity? Are they helpful? Are they going to be harmless? Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?” Kaplan noted.

The second risk is that self-taught AIs could improve faster than human scientific research and technological development can keep pace.

“It seems very dangerous for it to fall into the wrong hands… You can imagine some person deciding: ‘I want this AI to just be my slave. I want it to enact my will.’ I think preventing power grabs, preventing misuse of the technology, is also very important,” he said.

 



