Why we’re scared of AI and not scared enough of bio risks

What we choose to panic about has less to do with the facts and more to do with chance.

By Kelsey Piper Apr 6, 2023, 6:00am EDT

An employee of the State Office for Fair Trading (LAVES) at work in a laboratory in which avian flu samples are being tested, in Oldenburg, Germany, on November 29, 2016. Carmen Jaspersen/picture alliance via Getty Images

This story is part of a group of stories called Finding the best ways to do good.

When does America underreact, and when does it overreact?

After 3,000 people were killed on 9/11, the US invaded two countries, leading to multitrillion-dollar occupations that cost the lives of hundreds of thousands of people, including American and allied soldiers and civilians in Iraq and Afghanistan. The US made permanent, economically costly, seriously inconvenient changes to how air travel works to prevent it from ever happening again.

More than 1 million Americans died of Covid-19, and while in the early months of the pandemic the country made massive, life-altering changes to reduce its spread, it has done very close to absolutely nothing to make sure it never happens again. (Maybe this is because of those massive, life-altering changes; they became unpopular enough that warnings that we should avoid having another pandemic often get a hostile response.)

More directly, the US is still conducting research into making diseases deadlier and more contagious, even while there’s legitimate concern that work like that may have caused Covid in the first place. And despite the enormous human and economic toll of the coronavirus, Congress has done little to fund the preparedness work that could blunt the effects of the next pandemic.

Taking AI seriously

I’ve been thinking about all this as AI, along with the possibility that sufficiently powerful systems will kill us all, has suddenly emerged onto center stage. An open letter signed by major figures in machine learning research, as well as by leading tech figures like Elon Musk, called for a six-month pause on building models more powerful than OpenAI’s new GPT-4. In Time magazine, AI safety absolutist Eliezer Yudkowsky argued the letter didn’t go far enough and that we need a lasting, enforced international moratorium that treats AI as more dangerous than nuclear weapons.

In a fairly stunning CBS interview last month, Geoff Hinton, a highly respected senior AI researcher, was asked by a disbelieving interviewer, “What do you think the chances are of AI just wiping out humanity?” Hinton, whose pioneering work on deep learning helped make large language models like ChatGPT possible, replied, “It’s not inconceivable.”

On March 30, Fox News correspondent Peter Doocy read a line from Yudkowsky’s Time piece to White House press secretary Karine Jean-Pierre: “‘Literally everyone on Earth will die.’ Would you agree that does not sound good?” To nervous laughter, Jean-Pierre assured everyone that the White House has a blueprint for safe AI development.

Don’t forget biology

I’ve argued for years that sufficiently powerful AI systems might end civilization as we know it. In a sense, it’s gratifying to see that position given the mainstream hearing and open discussion that I think it deserves.

But it’s also mystifying. Research that seeks to make pathogens more powerful might also end civilization as we know it! Yet our response to that possibility has largely been a big collective shrug.

There are people heroically working to make US regulations surrounding this research clearer and better, but they’re largely doing so in the background, without the public outcry and scrutiny that one might expect a question with these stakes to inspire.

And while slowing down AI development is going to be difficult, controversial, and complicated given the sheer number of companies working on it and the potential size of the market, there are only a few labs doing dangerous gain-of-function research on pathogens of pandemic potential. That should make shutting that work down much easier — or at least, you’d think so.

Playing dice with existential risks

Ultimately — and this isn’t very satisfying at all — my sense is that these fairly momentous changes in our trajectory and priorities often depend on random chance.

If by coincidence someone had happened to discover the 9/11 hijackers in time to stop them, the world we live in today would look radically different.

If by coincidence different people had been in key administration roles when Covid-19 started, we’d know a lot more about its origins and conceivably be a lot more willing to demand better lab safety policy.

And as for where the movement to slow down AI goes from here, a lot of that feels to me like it’s also up to chance. Which messages snatch public attention? Are there notable safety scares, and do they clarify the picture of what we’re up against or make it muddier?

I’d love to live in a world where how we respond to existential risk wasn’t up to chance or what happens to catch the public’s and the media’s attention, one where risks to the security of our whole world received sober scrutiny regardless of whether they happened to make the headlines. In practice, though, we seem to be lucky if world-altering dangerous research — whether on AI or biology — gets any public scrutiny at all.

A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!
