
Risks of artificial intelligence

AI is a powerful technology that is increasingly transforming our world. It comes with amazing potential, but also with many serious risks. This page attempts to list all the risks that could be mitigated by a Pause.

Present dangers

Fake news, polarization and threatening democracy

Much of our society is based on trust. We trust that the money in our bank account is real, that the news we read is true, and that the people who post reviews online exist.

AI systems are exceptionally good at creating fake media. They can create fake videos, fake audio, fake text, and fake images, and these capabilities are improving rapidly. Just two years ago, we laughed at the horribly unrealistic Dall-E images; now we have deepfake images winning photography contests. A 10-second audio clip or a single picture can be enough to create a convincing deepfake.

Creating fake media is not new, but AI makes it much cheaper and far more realistic. An AI-generated image of an explosion caused a panic sell-off on Wall Street. GPT-4 can write in a way that is indistinguishable from human writing, but at a much faster pace and a fraction of the cost. We might soon see social media flooded with fake discussions and opinions, and fake news articles that are indistinguishable from real ones.

This leads to polarization: groups of people who trust different sources of information and narratives consume distorted representations of what is happening, and their differences escalate until they culminate in violent and anti-democratic responses.

A halt on frontier models (our proposal) would not stop the models already used today to create fake media, but it would prevent more capable future models from making the problem worse. It would also lay the groundwork for future regulation aimed at mitigating fake media and other specific harms caused by AI, while raising public awareness of these dangers and showing that they can be addressed.

Deepfakes and impersonation

Fake content created with AI, also called deepfakes, can not only steal famous people’s identities and spread disinformation, but also impersonate you. Anyone with photos, video, or audio of someone, and enough knowledge, can create deepfakes of them and use those to commit fraud, harass them, or produce non-consensual sexual material. About 96% of all deepfake content is sexual material.

As the section on fake news explains, our proposal would not prevent fake media altogether, but it would reduce it to a certain extent. That extent is not small when you consider how popular multipurpose AI systems like chatbots have become: a pause would stop them from becoming even more capable and widespread, including systems designed with fewer filters or trainable on new faces.

Biases and discrimination

AI systems are trained on data, and much of the data we have is in some way biased. This means that AI systems will inherit the biases of our society. An automated recruitment system at Amazon inherited a bias against women. Black patients were less likely to be referred to a medical specialist. Biased systems used in law enforcement, such as predictive policing algorithms, could lead to unfair targeting of specific groups. Generative AI models do not just copy the biases from their training data, they amplify them. These biases often appear without the creators of the AI system being aware of them.

Job loss, economic inequality and instability

During the industrial revolution, many people lost their jobs to machines. However, new (often better) jobs were created, and the economy grew. This time, things might be different.

AI does not just replace our muscles as the steam engine did, it replaces our brains. Regular humans may not have anything left to offer the economy. Image generation models (which are heavily trained on copyrighted material from professional artists) are already impacting the creative industry. Writers are striking. GPT-4 has passed the bar exam, can produce excellent written content, and can write code (again, partially trained on copyrighted material).

The people who own these AI systems will be able to capitalize on them, but the people who lose their jobs to them will not. It is difficult to predict which jobs will be replaced first. You could be left unemployed and without an income no matter how much time, money and energy you spent acquiring your experience and knowledge, and no matter how valuable they were a moment ago. The way we distribute wealth in our society is not prepared for this.

Also, if AIs learn to do jobs better than any person can, former workers and new generations will be left without the bargaining power to demand social safety nets or to maintain a Universal Basic Income (if they could get one in the first place).

Loss of human purpose

As AI becomes increasingly integrated into various aspects of society, there’s a concern that it may lead to the displacement of human labor, potentially rendering individuals redundant in certain professions. This can result in a loss of purpose and identity for those whose livelihoods are tied to their work. Moreover, the automation of tasks that were once exclusively performed by humans may lead individuals to question the significance of their contributions and the value they bring to society.

Even automated tasks outside of work could stop being enjoyable. Everyday activities that feel valuable because of their outcomes would stop feeling that way if easier, faster alternatives could achieve the same outcomes.

Mental health, addiction and disconnection between people

Social media, video games and other software have been using AI systems to maximize profit by exploiting our primate minds for some time already, damaging our mental health in the process. Addiction to social media, among other things, isolates us from each other, not only in political bubbles but also in one-person cultural and social bubbles, making us lonelier. These systems are the first proof of the unintended and unexpected global consequences such technologies can bring, and of how complicated aligning AI systems with “human values” can be.

If today’s chatbots keep getting better, it could become quite common to be addicted to them and to substitute them for whole relationships of any kind. If such apps are easy to access, they could also shape the understanding, personality and worldview of children who come to prefer talking with AIs over family and friends. A pause on the biggest models could prevent them from becoming multipurpose chatbots that fit our needs perfectly before people understand their long-term ramifications.

Power accumulation, war and the race to the precipice

Addiction to products and services that learn from personal data leaves us as powerless, separated individuals, whether that is intended or not. And it feeds a vicious cycle with the accumulation of economic power and intelligence by the companies that create them.

If this economic and technological inequality stems from a handful of public and private entities producing many single-purpose AIs or a few multipurpose AIs, it could lead to a rapid accumulation of power that will probably result in a catastrophe for all. This accumulation of power has incentivized, and will continue to incentivize, more actors to join the race to the bottom and accelerate the development of larger AI systems. This, in turn, introduces more points of failure and downplays the associated risks by endorsing the idea that they can be handled unilaterally, by a single company or government.

Such a scenario would not just disempower every other person and nation in the world, but also push global powers into conflict. So it is crucial to act as soon as possible, before the race dynamics extend further, before the most powerful governments and corporations consolidate their positions, and before a war is triggered in response. We need international cooperation, because the only winning move in this strange game is not to play, but to pause.

Authoritarian governments

Authoritarian and totalitarian governments can also use AI technologies to exercise power over their territories and populations. They can control the communication channels or maintain social credit and mass surveillance systems that ensure they maintain their power while violating human rights.

Environmental risks

Environmental harms are starting to be significant, and the largest AI companies are planning to greatly increase their energy consumption. You can read about how AI will negatively affect the environment here.

Autonomous weapons

Companies are already selling AI-powered weapons to governments. Lanius builds flying suicide drones that autonomously identify foes. Palantir’s AIP system uses large language models to analyze battlefield data and come up with optimal strategies.

Nations and weapon companies have realized that AI will have a huge impact on besting their enemies. We’ve entered a new arms race. This dynamic rewards speeding up and cutting corners.

Right now, we still have humans in the loop for these weapons. But as the capabilities of these AI systems improve, there will be more and more pressure to give the machines the power to decide. When we delegate control of weapons to AI, errors and bugs could have horrible consequences. The speed at which AI can process information and make decisions may cause conflicts to escalate in minutes. A recent paper concludes that “models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons”.

Read more at stopkillerrobots.org

Near future dangers

Biological weapons

AI can make knowledge more accessible, which also includes knowledge about how to create biological weapons. This paper shows how GPT-4 can help non-scientist students create a pandemic pathogen:

In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization.

This type of knowledge has never been so accessible, and we do not have the safeguards in place to deal with the potential consequences.

Additionally, some AI models can be used to design completely new hazardous pathogens. A model called MegaSyn designed 40,000 new chemical weapons/toxic molecules in one hour. The revolutionary AlphaFold model can predict the structure of proteins, which is also a dual-use technology: predicting protein structures can be used to “discover disease-causing mutations using one individual’s genome sequence”. Scientists are now even creating fully autonomous chemical labs, where AI systems can synthesize new chemicals on their own.

The fundamental danger is that the cost of designing and applying biological weapons is being lowered by orders of magnitude because of AI.

Computer viruses and hacks

Virtually everything we do nowadays is in some way dependent on computers. We pay for our groceries, plan our days, contact our loved ones and even drive our cars with computers.

Modern AI systems can analyze and write software. They can find vulnerabilities in software, and they could be used to exploit them. As AI capabilities grow, so will the capabilities of the exploits they can create.

Highly potent computer viruses have always been extremely hard to create, but AI could change that. Instead of having to hire a team of skilled security experts/hackers to find zero-day exploits, you could just use a far cheaper AI to do it for you. Of course, AI could also help with cyberdefense, and it is unclear on which side the advantage lies.

Read more about AI and cybersecurity risks

Existential Risk

Many AI researchers are warning that AI could lead to the end of humanity.

Very intelligent things are very powerful. If we build a machine that is far more intelligent than humans, we need to be sure that it wants the same thing as we want. However, this turns out to be very difficult. This is called the alignment problem. If we fail to solve it in time, we may end up with superintelligent machines that do not care about our well-being. We’d be introducing a new species to the planet that could outsmart us and outcompete us.

Additionally, even if we knew how to align an advanced AI with someone’s preferences, how do we make sure it doesn’t get misused? How do we govern it, and how do we decide which values to give it?

Read more about x-risk

Human disempowerment

Even if we only create AI systems that we can control individually, we could incrementally lose our power to make important decisions each time one of them is incorporated into institutions or everyday life. Those processes would end up having more input from AI systems than from humans, and, if we cannot coordinate quickly enough, or if we lack crucial knowledge about how the systems work, we could end up without control over our future.

It would be a civilization in which each system is optimizing for different objectives, there is not a clear direction for where everything is heading, and there is no way of changing it. The technical knowledge required to modify these systems could be lacking in the first place or lost over time, as we become more and more dependent on technology, and the technology becomes more complex.

The systems may achieve their goals, but those goals might not entirely encapsulate the values they were expected to. This problem is, to a certain extent, already happening today, but AIs could significantly amplify it.

Digital sentience

As AI continues to advance, future systems may become incredibly sophisticated, replicating neural structures and functions more akin to those of the human brain. This increased complexity might lead to emergent properties like subjectivity and/or consciousness, in which case those AIs would deserve moral consideration and should be treated well. They would be like “digital people”. The problem is that, given our present lack of knowledge about consciousness and the nature of neural networks, we have no way to determine whether some AIs have any type of experience, or what the quality of those experiences would depend on. If AIs continue to be produced with only their capabilities in mind, through a process we do not fully understand, people will keep using them as tools, ignoring what their desires might be and the possibility that they are actually enslaving digital people.

Value Lock-in

It is possible that once higher degrees of automation take hold, regardless of whether there is one powerful AI or many, the values of those systems could no longer be changed, and the automation would continue until the end of the universe, throughout the reachable galaxies.

That would mean an end to the discussion of, and changes to, ethical values and cultural norms, the kind of progress that could allow us to achieve the best utopias/protopias in the long term.

It could amount to a few Silicon Valley figures imposing themselves not only on every future being, but also erasing all other present cultures around the world.

Suffering risks

Value lock-in could not only make us fail to achieve the best kind of worlds; it could also trap us in dystopias worse than extinction, extending through all of spacetime.

Possible locked-in dystopias with vast amounts of suffering are called s-risks, and include worlds in which sentient beings are enslaved and forced to do horrible things. Those beings could be humans, animals, digital people, or any alien species the AI might find in the cosmos. Given how difficult we think completely solving alignment is, how badly we humans sometimes treat each other, how badly we treat most animals, and how we treat present AIs, a future like this does not seem as unlikely as we would hope.

What can we do?

For all the problems discussed above, the risk increases as AI capabilities improve. This means that the safest thing to do now is to slow down. We need to pause the development of more powerful AI systems until we have figured out how to deal with the risks.

See our proposal for more details.