
San Francisco Authorizes Robots to Kill: Is Pandora’s Box Open?

For the first time in the United States, the San Francisco government has allowed police to use robots to kill suspects, a decision that has sparked enormous controversy. Will it open a Pandora's box, making it possible for robots to suppress protests and demonstrations someday, and even overturn the AI-ethics principle that robots "must not kill"?

Deterioration of Law and Order and Controversial Resolutions

Last Tuesday, the San Francisco City Council passed a measure authorizing the San Francisco Police Department to use police robots for lethal force when necessary. In plain terms, it authorizes the police to kill suspects remotely with robots when lives must be protected and no other option remains.

This highly controversial decision not only drew immediate national media attention, but also brought the years-old topic of "killer robots" back into focus. When the measure came up for discussion last month, it prompted widespread attention and heated debate across the San Francisco Bay Area.

Before the formal vote, the San Francisco City Council had discussed the matter for several weeks. On the day of the vote, the council debated the issue heatedly for more than two hours. In the end, the San Francisco Police Department's equipment-use authorization was approved by an 8-3 vote.

The City Council's approval of the measure came against a backdrop of public outcry over crime. Like many big American cities, San Francisco has seen public safety deteriorate sharply over the past few years, drawing complaints from residents and businesses alike. Smashed car windows have become a routine hazard of parking in San Francisco.

With the police overwhelmed, there have been repeated brazen daylight robberies of supermarkets and luxury stores in San Francisco's core business district, and many stores have had no choice but to close and move out. Beyond the high level of property crime such as vandalism and theft, violent crimes such as armed robbery and homicide are also rising. San Francisco, a city of more than 800,000, recorded 56 homicides last year, a significant increase from 41 in 2019, before the pandemic.

Out of dissatisfaction with deteriorating public safety, San Francisco voters in July of this year recalled progressive District Attorney Chesa Boudin in a special election over his ineffective response to crime. Under re-election pressure, the mayor of San Francisco has promised measures to crack down on crime, while the City Council has repeatedly approved new technological means of curbing it. In September, the council authorized police access to private surveillance-camera data under certain circumstances.

During last Tuesday's debate, each side accused the other of fear-mongering as both laid out their reasons and concerns. Supporters argued that the authorization is prudent and clearly limited, giving police a last resort for extreme cases; opponents countered that it risks excessive use of force by police and will aggravate conflict between the police and communities of color, African Americans in particular.

San Francisco Supervisor Connie Chan, who proposed the measure, said she understands concerns about police use of force, but "we need to use these devices in accordance with California law. This is definitely not a light discussion." Supervisor Rafael Mandelman, who also voted in favor, said that casting the police as a dangerous, untrustworthy adversary, as radicals do, does not serve public safety.

Shamann Walton, an African-American supervisor who opposed the measure, emphasized that he was not targeting the police but worried that upgrading police weaponry might increase the chances of harmful confrontations between police and people of color. Relatively speaking, African Americans are the most worried about, and most opposed to, the possibility of excessive police force.

Notably, San Francisco prosecutors also lined up against the police department. Just one day before the council vote, the San Francisco Prosecutor's Office issued an open letter arguing that authorizing robots to kill people remotely runs contrary to San Francisco's progressive values, and called on the City Council to prohibit the police from using robots to apply force against anyone.


Robots that can be armed at any time

Why was this authorization vote needed? Under a new California law that took effect this year, police departments in the state must obtain approval to use military-grade equipment. To guard against terrorist attacks, police departments in many U.S. cities have purchased military-grade bomb-disposal robots over the past decade. The San Francisco Police Department is no exception, and therefore had to apply for equipment authorization.

However, in this application the San Francisco Police Department added a provision for an extreme scenario: when "the lives of the public or officers are under imminent threat and no other means of force can be used, they are allowed to use robots as a deadly-force option." This means the department has the right to modify robots into offensive weapons to remotely kill, or blow up, terrorist suspects.

San Francisco Police Department Deputy Chief David Lazar said during the council debate that the department sought the extreme-case authorization to handle incidents like the 2017 Las Vegas mass shooting: "We have to consider that (using robots) is a possible option in that situation."

The Las Vegas shooting was the deadliest mass shooting in modern U.S. history. On October 1, 2017, from a 32nd-floor suite of the Mandalay Bay hotel in Las Vegas, a man fired more than 1,000 rounds from over 20 modified semi-automatic rifles into a crowd attending a concert below. Police did not reach the suspect until more than an hour later, by which time he had killed himself; his motive remains a mystery. The shooting left 61 dead (including the suspect) and more than 500 injured.

Perhaps the scenario Lazar described was shocking enough: in the end, the San Francisco Council approved the police department's application by a majority vote. But the council also attached special rules requiring that police use robots for lethal purposes only after trying other force or de-escalation tactics, or after determining that they cannot subdue a suspect by any other means. Moreover, only a handful of high-ranking police officials may approve the use of a robot as deadly force.

To defuse public concern about "killer robots," the San Francisco Police Department stressed that its existing robots carry no weapons and that it has no current plans to arm them. Once authorized, however, if lives are threatened, police can mount explosives on a robot to "approach, disorient and disable violent, armed or dangerous suspects," and "robots equipped with these devices will only be used in extreme cases to save or prevent further loss of innocent life."

According to the equipment list released by the San Francisco Police Department, it currently has 17 bomb-disposal robots, 12 of them operational. Purchased between 2010 and 2017, the devices are mainly used to handle explosives and hazardous materials or to operate in low-visibility environments, and none has ever been fitted with or used for explosive purposes. Besides lethal use, the department has also applied to use the robots "in situations such as training and simulations, criminal apprehensions, critical incidents, emergencies, executing warrants, or handling suspicious devices."

Although the San Francisco Police Department's current robots carry no lethal weapons, some of the newer models can be armed and operated remotely. The F5A bomb-disposal robot can mount a large-caliber rifle via an accessory kit; the QinetiQ Talon can be converted to its military configuration, which accepts a grenade launcher, a machine gun, or an anti-materiel rifle. It is the police version of the same robot used by the U.S. Army.

In other words, with the City Council's approval, the San Francisco Police Department can at any time, if the conditions of use are met, convert its current bomb-disposal robots into remotely operated, direct-firing machines. That is exactly what worries outside observers.


Worries about police abuse of robots

The Oakland Police Department, across the bay from San Francisco, abandoned a similar application last month to authorize modified robots for extreme scenarios after public protests. Racial tension is a factor Oakland police must weigh: the city's combined Black and Latino population exceeds 45%, the highest share in the San Francisco Bay Area.

Despite its proximity to Silicon Valley, Oakland ranks among the worst American cities for crime. Last year its violent crime rate was 75.5, far above the U.S. average of 22.7. The COVID-19 pandemic has exacerbated social conflict and crime: Oakland, a city of 430,000, recorded 133 homicides and nearly 600 shootings last year, the most since 2006.

If public safety in big American cities is so bad, why do so many people oppose authorizing police to use more advanced weapons against violent suspects? Why does converting a police robot into a weapon cause so much controversy?

After the COVID-19 pandemic broke out in the United States in 2020, social conflict intensified further and large-scale racial unrest erupted in many places. In containing the demonstrations, police across the country clashed with protesters to varying degrees, even resorting to tear gas, smoke grenades and other crowd-control measures.

San Francisco police have now been allowed to use robots to kill suspects. Even though the extreme scenarios are clearly spelled out, many people worry that U.S. police might someday abuse robots, a cutting-edge technology, to suppress public protests and demonstrations: the science-fiction scene of police robots advancing on marchers could become real.

Elizabeth Joh, a law professor at the University of California, Davis, notes that only two years have passed since the killing of George Floyd triggered global anti-racism protests, and argues that San Francisco's approval of police robots as weapons will erode public trust in law enforcement. "Would we like to live in a world where the police use robots to kill people? I certainly wouldn't."

Although the San Francisco Council is the first in the United States to formally approve arming police robots, the Dallas Police Department had already used a bomb-disposal robot offensively, killing a mass-shooting suspect, as early as 2016.

In July of that year, an Army veteran who had served in Afghanistan, angered over American society and race relations, used a sniper rifle in downtown Dallas to kill five police officers and wound nine more officers and civilians, the deadliest attack on U.S. police since 9/11. After hours of standoff and fruitless negotiations, Dallas police strapped a bomb to their bomb-disposal robot, remotely steered it to the suspect, and detonated it, killing the gunman, who had refused to lay down his weapon.

Few dispute that the man Dallas police killed was the murderer; the controversy was over killing him with a remote-controlled robot, the first time U.S. police had used a robot to kill on American soil. The Dallas police chief explained, "There was really no other option at the time, and this was the only way to reduce casualties."

Before that, U.S. police had used robots many times against suspects, but never to kill them directly. In 2013, New Mexico state police sent a robot into a suspect's room to scout it. In 2016, California police used a robot to negotiate with a suspect and deliver supplies, sparing them from sending in a human negotiator. Robots greatly reduce the risk to police of confronting suspects face to face.


Robots have long been used on the battlefield

In fact, robots like those the San Francisco police now seek to authorize have long been used by the U.S. military on the battlefield. The 2011 book "The Changing Character of War" records that when American soldiers in Iraq suspected an ambush, they would send a MARCbot robot ahead to scout, and if it found the enemy, remotely detonate the Claymore mine mounted on top of it. Small, nimble and cheap at only about $5,000 apiece, the MARCbot became U.S. soldiers' favorite expendable stand-in.

To some extent, drones are combat robots too. Since the war in Afghanistan, drones have shifted from reconnaissance to attack and are widely used by the U.S. military for long-range strikes. Since the 2020 Nagorno-Karabakh conflict between Armenia and Azerbaijan, drones have become a main force of modern warfare: the Azerbaijani military's massed attacks by bomb-carrying drones on Armenian ground forces and air-defense systems were an important means of winning that war.

In October of this year, five robotics companies, including Boston Dynamics, Agility Robotics and ANYbotics, jointly signed an open letter calling on users not to use their robots for offensive purposes: "The addition of weapons to robots that operate remotely or autonomously presents new risks to humans and serious ethical questions."

The letter, however, is purely symbolic and carries no binding force. The U.S. military is a major customer and funding source for these robotics companies; Boston Dynamics' robot dogs are already used by the U.S. military and police for logistics, reconnaissance and many other purposes, and the company has no power to stop the military from converting them into weapons-carrying killer dogs.

Compared with remote-controlled robots and drones, the more frightening threat comes from "AI killer robots": a new generation of fully autonomous weapons equipped with artificial-intelligence technologies such as visual recognition. Relying on computing chips and an array of sensors, such a robot or drone can select targets and launch attacks according to a preset program, with no human operator at all.

There is no official record of such robots being deployed yet, but once they appear, they are bound to shake the robot-ethics principle that robots "must not be used to kill." As AI technology matures, many in Europe and the United States worry that without coordinated, effective global measures, the emergence of "AI killer robots" is only a matter of time.

As early as 2013, experts from the scientific and legal communities formed the International Committee for Robot Arms Control, dedicated to promoting the peaceful use of robots and common regulation of the development and production of robotic weapons. The organization states plainly that "robots must not be allowed to make autonomous decisions to kill."

In 2015, Stephen Hawking and more than 1,000 prominent scientists and artificial-intelligence experts around the world issued a joint open letter warning of a possible global AI arms race and urging the United Nations to ban offensive autonomous weapons. Signatories included Silicon Valley figures such as Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and the head of Google's DeepMind.

The open letter reads, "Although illegal, the use of autonomous weapons will become a reality within years, not decades, bringing great risks to human society. Autonomous weapons will be the third revolution in warfare, after gunpowder and nuclear weapons. We must call on the United Nations to ban such weapons, just as it banned chemical weapons."
