
AI breaks simulated laws of physics to win at hide and seek

AI agents are finding unexpected ways to win a simulated game of hide and seek, even breaking the simulated laws of physics that OpenAI's developers programmed.

Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” including “box surfing, where seekers learn to bring a box to a locked ramp in order to jump on top of the box and then ‘surf’ it to the hider’s shelter,” according to OpenAI.

What does all this mean? It backs up what we already know — that artificial intelligence can behave unexpectedly.

In this simulation, the AI found new and creative ways to win at hide and seek that its programmers never thought of. This is fascinating stuff!

“The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior,” according to the OpenAI team.
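To make the idea of multi-agent co-adaptation a little more concrete, here is a deliberately simplified sketch. The strategy names and payoffs are invented for illustration and this is not OpenAI's actual code or environment: a hider and a seeker each keep score of how well their own strategies have been doing and keep adjusting against whatever the other side is currently favoring.

```python
import random
from collections import defaultdict

# Toy sketch of multi-agent co-adaptation -- NOT OpenAI's code.
# Two learners play repeated hide-and-seek rounds with made-up strategies
# and a made-up payoff table; each side gradually favors whatever has been
# working against the other side's current habits.

HIDER_MOVES = ["run", "build_fort"]      # hypothetical hider strategies
SEEKER_MOVES = ["chase", "box_surf"]     # hypothetical seeker strategies

# Hider's payoff for each matchup; the seeker's payoff is the negative (zero-sum).
HIDER_PAYOFF = {
    ("run", "chase"): -1.0,            # a plain chase catches a runner
    ("run", "box_surf"): +1.0,         # surfing a box is wasted against a runner
    ("build_fort", "chase"): +1.0,     # a fort blocks a plain chase
    ("build_fort", "box_surf"): -1.0,  # box surfing gets over the fort wall
}

def pick(values, moves, epsilon=0.1):
    """Mostly pick the best-scoring move, occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: values[m])

def train(rounds=20000, lr=0.05):
    hider_q = defaultdict(float)
    seeker_q = defaultdict(float)
    for _ in range(rounds):
        h = pick(hider_q, HIDER_MOVES)
        s = pick(seeker_q, SEEKER_MOVES)
        reward = HIDER_PAYOFF[(h, s)]
        # Each agent updates only its own estimates, so every improvement on one
        # side reshapes the landscape the other side is learning against.
        hider_q[h] += lr * (reward - hider_q[h])
        seeker_q[s] += lr * (-reward - seeker_q[s])
    return dict(hider_q), dict(seeker_q)

if __name__ == "__main__":
    hider_q, seeker_q = train()
    print("hider value estimates:", hider_q)
    print("seeker value estimates:", seeker_q)
```

In OpenAI's far richer physics simulation, that same back-and-forth pressure is what produced fort building, ramp stealing, ramp locking, and eventually box surfing.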

Imagine if this were something as extreme as war. An unpredictable AI producing “complex and intelligent behavior” might bring about a swift end to any war, but at what cost?

Read More: ‘AI will represent paradigm shift in warfare’: WEF predicts an Ender’s Game-like future

Now, imagine if this AI could compute at a quantum level, churning through vast numbers of possible outcomes at speeds we couldn’t even fathom!

OpenAI noted, “Building environments is not easy and it is quite often the case that agents find a way to exploit the environment you build or the physics engine in an unintended way.”
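OpenAI's point about agents exploiting the environment is easiest to see with a contrived toy example (again, invented purely for illustration, nothing from OpenAI's simulator): a boundary check that only enforces one side of the arena lets a reward-maximizing hider simply walk off the map, where no seeker can ever reach it.

```python
# Toy sketch of an "environment exploit" -- an invented example, not OpenAI's simulator.
# The arena is meant to be a 10x10 box, but the boundary check forgets to clamp
# negative positions, so a reward-maximizing hider "discovers" it can walk off
# the map and escape the seeker entirely.

ARENA_SIZE = 10

def step(position, move):
    """Apply a move; the intended rule is to stay inside [0, ARENA_SIZE)."""
    x, y = position
    dx, dy = move
    x, y = x + dx, y + dy
    # Bug: only the upper bound is enforced, so negative coordinates slip through.
    x = min(x, ARENA_SIZE - 1)
    y = min(y, ARENA_SIZE - 1)
    return (x, y)

def hider_reward(hider_pos, seeker_pos):
    """+1 while not caught; a hider outside the arena can never be caught."""
    return -1.0 if hider_pos == seeker_pos else 1.0

if __name__ == "__main__":
    hider = (0, 0)
    seeker = (5, 5)
    for _ in range(3):
        hider = step(hider, (-1, 0))  # keep walking "west" through the missing wall
    # Ends up at (-3, 0): outside the intended arena, still collecting reward.
    print("hider ended up at", hider, "reward:", hider_reward(hider, seeker))
```

Patch one loophole and a sufficiently persistent learner will often find the next, which is exactly the dynamic OpenAI describes.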

We haven’t reached a Terminator scenario yet, but the technology exists. Luckily, researchers understand that they can’t predict or control what the AI does, so there are countless research programs looking to solve this, and the ethics of AI is a hot topic of discussion.

Read More: Keeping Prometheus out of machine learning systems

In fact, “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.”

Right now, it’s incredible to see how AI behaves in simulated environments. The challenge will be controlling that behavior in the real world.

OpenAI concluded, “These results inspire confidence that in a more open-ended and diverse environment, multi-agent dynamics could lead to extremely complex and human-relevant behavior.”

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co

