This article contains major spoilers for the game “Tacoma.”

 

“Hey, ODIN? Can you tell me the average time [Venturis Corporation] has taken to send an evac crew to investigate in situations like this?” asks Andrew Dagyab, a botanist in the 2017 game “Tacoma,” set in the titular lunar transfer station which is quickly losing oxygen. 

The AI assistant, ODIN, is the crew’s only lifeline. Unfortunately, it’s revealed later that it was ODIN himself who caused the disaster. 

Here’s another very scary and very possible story: Leading AI experts say there’s a 5% chance that artificial general intelligence, or AGI, will cause a human extinction-level disaster.  

There was a time when the major concern with AI safety had been the one evil superintelligence, reflected in movies like “The Terminator,” “The Matrix,” and “I, Robot.” 

“Tacoma” takes a different approach. It posits that there will be numerous AGI in the world and that any AGI, even a safely designed one, in the wrong hands at the wrong time could cause lives to be lost. 

That’s the future that a growing number of AI safety experts are worried about. 

An AI’s goal usually isn’t identical to ours. For instance, suppose we built an AI whose goal is to collect stamps. It might logically deduce that the maximizing strategy is to conquer the world and turn the global economy into a stamp collecting machine, humans merely a cog in its existential purpose. That’s called an alignment problem, and a notoriously difficult one to solve. 

Attempting to control AGI by sandboxing it to a confined simulation, implementing a power button, or teaching it human ethics has potential loopholes. Sandboxes could be jailbroken, power buttons could be destroyed, and ethics is incredibly difficult to define in terms of math and code. It only takes superhuman intelligence to dupe humans once for it to go out of control. 

It’s likely that there will be many specialized AGI in different industries throughout the world one day. With numerous human stakeholders and countless AGI, there can be misalignments happening everywhere — what’s called a many-to-many alignment problem. Such a problem deals with what safety researchers call complex systems, or, a bit of a nightmare. 

A complex system is one that’s too unpredictable to reduce to some set of rules, but not random enough to use statistics. The bad news is that most modern safety challenges deal with complex systems. The good news is that people have gotten better at managing it. 

In her 2016 book “Engineering a Safer World,” MIT professor Nancy G. Leveson addresses common misconceptions about safety-critical systems engineering: engineering systems whose malfunction could lead to human loss. Such safety-critical technologies include aviation, nuclear power, automobiles, heavy chemicals, biotechnology, and, of course, AGI. 

First, a system that reliably follows its specifications isn’t the same as a safe one. In “Tacoma,” software engineers achieve an incredible feat: They create AGI that are sandboxed and obey human instructions. Like I mentioned before, such sandboxing might be impossible in the real world. 

Yet an unsafe human order, which ODIN is obliged to follow, jeopardizes the safety of the Tacoma crew. So the “Tacoma” engineers created an AGI that, although reliable, isn’t necessarily safe. When ODIN bends its specifications to help evacuate the Tacoma crew, it becomes safer at the expense of its reliability.

Second, tracing a disaster down to a single root cause and blaming a single individual is a counterproductive approach to disaster prevention. The focus on retributive justice blinds us to systematic issues that allowed said individuals to cause a disaster in the first place. 

In “Tacoma,” there seems to be a single individual who gave unsafe orders to ODIN. But is that really the whole story? What caused them to think that they could get away with it? Why didn’t inspections catch the risk? 

Third, technology isn’t always the solution. A famous example is the invention of sonic radars that were supposed to help ships detect nearby obstacles, but which only increased the rate of accidents. Why? Captains sailed faster, thinking they could get away with it thanks to the new safety technology. 

Similarly in “Tacoma,” the existence of cryogenic sleep that can sustain the crew for up to 75 hours causes the Venturis Corporation to be lax with safety protocols. The result is ODIN’s answer to Andrew’s question above: The average time to rescue is a whopping 98 hours. 

Instead of technologies, Leveson’s book suggests, we should be making organizational changes. 

So what can be done? Among many sophisticated guidelines, Leveson suggests that organizations should be aware that safety guidelines will inevitably become lax over time, and implement preventative measures. 

Or in the words of E.V. James, Tacoma’s administrator, “We know it’s not safe working up here. We just don’t think about it a lot, but here we are.” 

2020 probably made you think about disasters more than usual. Those thoughts probably stirred up panic, passion, desperation, and a host of other uncomfortable emotions. That’s why it’s doubly important to check if our gut instincts are sound. 

Am I mistaking a reliable system for a safe one?

Am I missing something by trying to find a single root cause?

Am I assuming that technological advances will solve the issue?

Disaster narratives remind us that, especially in times like this, we shouldn’t forget the potential for other disasters. Public conscience really does matter. And if we’re all better at thinking about safety as citizens, maybe we really can prevent disasters.



Notes by Nadia: Can money buy happiness?

People can enjoy their hobbies without worrying about finances. Because let’s be honest, not everyone loves their job.

Shrimp fried rice?

Shrimp fried rice: an age-old mystery. Is it fried rice containing shrimp? Or is it fried rice made by shrimp?…

Geophysicists debut model of donut-shaped Earth

Improvements to geophysical mathematics has led to a stunning new revelation: Our Earth is actually a torus. The Global Geophysicists…