Is AI more dangerous than nukes?

Elon Musk says AI (artificial intelligence) is far more dangerous than nuclear weapons, though he is known for making controversial statements that take us all by surprise. In what ways, then, might AI be dangerous, and what should we be aware of?

Glenn Gow, an expert consultant on AI strategy for businesses, says that firms can use AI to reduce risk to “business partners, employees and customers” before regulatory bodies step in to force them to take specific steps. As he says, most firms “have regulations to govern how we manage risks to ensure safety,” yet there are very few regulations around the safe use of AI, although there are regulations about its use in relation to privacy. There is therefore a need to find ways to manage the risk presented by AI systems.

For example, Singapore has created a Model AI Governance Framework that is a good starting place for understanding the risks in relation to two key issues:

1. The level of human involvement in AI;

2. The potential harm caused by AI.

The level of human involvement

Where are the dangers here? First, we have to remember that sometimes AI works alone, and at other times it requires human input. When there is no human involvement, the AI runs on its own and a human cannot override it. When a human is involved, the AI only offers suggestions, as in medical diagnostics and treatment. The third type of interaction is AI designed to “let the human intercede if the human disagrees or determines the AI has failed”; AI-based traffic prediction systems are an example of this.
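To make the distinction concrete, here is a minimal sketch of the three oversight patterns in Python; the enum and function names are illustrative assumptions, not terminology taken from the framework itself.

```python
from enum import Enum, auto

class OversightLevel(Enum):
    """The three human-involvement patterns described above (names are illustrative)."""
    HUMAN_OUT_OF_THE_LOOP = auto()  # AI runs on its own; no human override
    HUMAN_IN_THE_LOOP = auto()      # AI only offers suggestions; a human decides
    HUMAN_OVER_THE_LOOP = auto()    # AI acts, but a human may intercede

def human_can_override(level: OversightLevel) -> bool:
    """Return True if a human can step in and overrule the AI."""
    return level is not OversightLevel.HUMAN_OUT_OF_THE_LOOP

def ai_decides_alone(level: OversightLevel) -> bool:
    """Return True if the AI acts without waiting for human approval."""
    return level is not OversightLevel.HUMAN_IN_THE_LOOP

# Example: a traffic prediction system is human-over-the-loop -- the AI acts,
# but a driver can ignore its suggested route if they disagree.
assert human_can_override(OversightLevel.HUMAN_OVER_THE_LOOP)
assert ai_decides_alone(OversightLevel.HUMAN_OVER_THE_LOOP)
```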

In the case of this third example, which Gow calls ‘human-over-the-loop’, the probability of harm is low, but the severity of harm is high.

In a ‘human-in-the-loop’ situation, both the probability and severity of harm are high. Gow gives the following example: “Your corporate development team uses AI to identify potential acquisition targets for the company. Also, they use AI to conduct financial due diligence on the various targets. Both the probability and severity of harm are high in this decision.”

When humans are not involved at all, the probability of harm is high, but the severity of harm is low.
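Putting the three assessments together yields a simple risk matrix. The sketch below restates the probability and severity judgments above in Python; the escalation rule is an assumed policy for illustration, not something the framework prescribes.

```python
# Probability/severity of harm for each oversight model, as characterized above.
RISK_MATRIX = {
    "human-out-of-the-loop": {"probability": "high", "severity": "low"},
    "human-in-the-loop":     {"probability": "high", "severity": "high"},
    "human-over-the-loop":   {"probability": "low",  "severity": "high"},
}

def needs_board_review(oversight: str) -> bool:
    """Escalate any AI project where the severity of harm is high.

    This threshold is an assumed policy, not one prescribed by the framework.
    """
    return RISK_MATRIX[oversight]["severity"] == "high"

for model, risk in RISK_MATRIX.items():
    flag = "escalate" if needs_board_review(model) else "standard review"
    print(f"{model}: probability={risk['probability']}, "
          f"severity={risk['severity']} -> {flag}")
```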

As Gow suggests, the Model AI Governance Framework gives boards and management a starting place for managing risk in AI projects. Whilst AI could be dangerous in several scenarios, by managing when and how humans will be in control, we can greatly reduce a company’s risk factors and ensure AI is safer than nukes.
