
Why we’re scared of AI and not scared enough of bio risks

Summary

This article examines why we are scared of the potential risks of AI yet not scared enough of bio risks such as pandemics. It looks at how the US responded to the 9/11 attacks and the Covid-19 pandemic, noting that despite the enormous human and economic toll of the coronavirus, Congress has done little to fund the preparedness work that could blunt the effects of the next pandemic. It also contrasts the public response to the idea of AI wiping out humanity, which is being taken seriously, with the muted response to research that seeks to make pathogens more powerful. The article argues that how we respond to existential risk depends heavily on chance, such as which messages happen to catch public attention, and suggests that to avoid future disasters we need to take all such risks seriously.

Q&As

What is the potential human and economic toll of the coronavirus pandemic?
The human and economic toll of the coronavirus pandemic includes more than 1 million Americans dead and trillions of dollars in economic damage.

How has the US responded to the risk of AI?
Prominent researchers and technologists have called for a six-month pause on building models more powerful than OpenAI's new GPT-4, and some have discussed the possibility of a lasting, enforced international moratorium that would treat AI as more dangerous than nuclear weapons.

What is gain-of-function research on pathogens of pandemic potential?
Gain-of-function research on pathogens of pandemic potential is research that modifies such pathogens to make them more transmissible or more virulent, and therefore more dangerous.

What is the difference between the public outcry surrounding AI and the response to research that seeks to make pathogens more powerful?
The public outcry over AI's risks is far greater: proposals to pause or halt AI development have received serious attention, while research that seeks to make pathogens more powerful has drawn comparatively little public scrutiny.

What role does chance play in the US response to existential risks?
How the US responds to existential risks often depends on chance. If, by coincidence, different people had been in key administration roles when Covid-19 started, we'd know a lot more about its origins and conceivably be a lot more willing to demand better lab safety policy.

AI Comments

👍 This is a thought-provoking article that provides an interesting perspective on why some risks are taken more seriously than others.

👎 This article does not provide enough evidence to support its claim that our response to existential risk is often determined by chance.

AI Discussion

Me: It's about why we're more scared of AI than we are of bio risks. It discusses how AI is a relatively new topic that has caught people's attention, while research into making pathogens more powerful has been largely ignored. It also argues that the US responses to 9/11 and to the Covid-19 pandemic were both shaped by chance: which events and messages happened to catch the public's and the media's attention.

Friend: That's really interesting. It's scary to think that something as important as our response to existential risk comes down to chance. It also makes you wonder why people are more scared of AI than of bio risks, when the latter seem like a more immediate threat.

Me: Exactly. It does seem like we should be taking bio-risks more seriously, since the consequences could be truly catastrophic. We need to do more to ensure that dangerous research receives the public scrutiny it deserves.

Technical terms

AI (Artificial Intelligence)
AI is a type of computer technology that is designed to mimic human intelligence and behavior.
Bio risks
Bio risks refer to the potential risks posed by biological agents, such as viruses, bacteria, and other microorganisms.
GPT-4
GPT-4 is a large language model developed by OpenAI that generates text from a given prompt.
Gain-of-function research
Gain-of-function research is scientific research that gives organisms new abilities; applied to pathogens, it can make them more transmissible or more virulent.
Existential risk
An existential risk is a risk that could potentially lead to the extinction of humanity.

Similar articles

Why Is The World Afraid Of AI? The Fears Are Unfounded, And Here’s Why.

One of the “godfathers of AI” airs his concerns

Existential risk, AI, and the inevitable turn in human history

Pausing AI Developments Isn't Enough. We Need to Shut it All Down

Fears about AI’s existential risk are overdone, says a group of experts
