
AI is so ethereal that people don’t understand its ‘existential risk’ to humanity: Musk

Elon Musk reiterates his concern that an artificial intelligence singularity poses a fundamental existential risk to human civilization unless it is regulated.

Speaking on-stage at the National Governors Association at the weekend, the CEO of Tesla and SpaceX wasn’t in the mood to sugar-coat anything: “Until people see robots going down the street killing people,” said Musk, “they don’t know how to react [to AI] because it seems so ethereal.”

“I have exposure to the most cutting-edge AI and I think people should be really concerned about it,” the CEO added. “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.”

More and more experts and scientists are voicing concerns over AI in an attempt to stimulate debate. Last year Stephen Hawking branded AI “either the best, or the worst thing, ever to happen to humanity” as a Cambridge institute was set up to research the future of intelligence.

Read More: Stephen Hawking inaugurates artificial intelligence research center at Cambridge

A few months later he was joined on the topic by Michael Vassar, chief science officer of MetaMed Research, who said: “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.”

The NGA appearance is Musk’s second stab this year at generating concern over the potential of AI to create an economic or social singularity. In February he appeared at the World Government Summit in Dubai to broach the topic.

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” said Musk in an attempt to pose a solution to AI domination in the workforce. “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself – particularly output.”

Read More: AI-human hybrids are essential for humanity’s evolution and survival: Elon Musk

Musk’s latest attempt to get a debate going focused on either slowing AI research or regulating it quickly. His primary concern is that companies, spurred on by competition, will charge ahead and create something dangerous.

“Normally the way regulations are set up is that a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry,” he went on to warn. “That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

Read More: AI human cyborgs are next on Elon Musk’s agenda with the launch of Neuralink

The most pressing safety issue currently on the table is driverless cars. It’s possible they will need to be programmed to prioritize pedestrians over passengers if confronted with certain situations. Effectively they will be charged – or, rather, the programmers who design the car will be charged – with making ethical decisions ahead of time.

Say you’re headed down a country lane and, as you take a curve, a truck has stopped unexpectedly ahead. There isn’t enough room to brake without slamming into the back of it; swerving to one side would put you into oncoming traffic, and a family is walking their dog along the verge on the other side. The best scenario in terms of preserving human life is to send you flying straight into the back of the truck.
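
To make the dilemma concrete, here is a minimal, purely hypothetical sketch in Python of how such a trade-off might be encoded ahead of time. Every name, field and figure below (Option, pedestrian_harm, the scenario numbers) is invented for illustration; no real autonomous-driving system is this simple.

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        pedestrian_harm: int  # estimated pedestrians harmed by this manoeuvre (invented)
        passenger_harm: int   # estimated passengers harmed by this manoeuvre (invented)

    def choose(options):
        # Rank outcomes with pedestrians weighted above passengers — the kind
        # of priority the article suggests programmers may have to commit to
        # in advance. Ties on pedestrian harm fall back to passenger harm.
        return min(options, key=lambda o: (o.pedestrian_harm, o.passenger_harm))

    scenario = [
        Option("brake into the truck", 0, 2),
        Option("swerve into oncoming traffic", 3, 2),
        Option("swerve onto the verge", 4, 0),
    ]
    print(choose(scenario).name)  # -> "brake into the truck"

The point of the sketch is not the arithmetic but the fact that someone has to write down the ranking in advance: the ethical decision is made at design time, long before the curve in the country lane.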

Read More: Self-Driving Cars Might Not Be Just Around The Corner After All

Arguably, the genesis of this debate dates back to 1942, when Isaac Asimov published “Runaround”, the short story in which he posed his now-renowned “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Read More: The Artificial Intelligence Singularity and the Collapse of the World’s Money System

It’s conceivable that any AI regulation would have to start with something akin to these three laws.
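
As a thought experiment only, the three laws can be read as an ordered rule check, each law subordinate to the ones above it. The toy Python sketch below is an invented illustration of that priority ordering — the Action fields and the harm model are assumptions, not anything Asimov or any regulator has specified.

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would acting injure a human being?
        inaction_harms_human: bool  # would *not* acting let a human come to harm?
        ordered_by_human: bool      # was the action ordered by a human?
        endangers_robot: bool       # does acting put the robot itself at risk?

    def permitted(a: Action) -> bool:
        # First Law outranks everything: never injure a human, and never
        # allow harm through inaction.
        if a.harms_human:
            return False
        if a.inaction_harms_human:
            return True  # the robot must act to prevent the harm
        # Second Law: obey human orders (already filtered by the First Law).
        if a.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not a.endangers_robot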

Ben Allen

Ben Allen is a traveller, a millennial and a Brit. He worked in the London startup world for a while but prefers commenting on it to working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
