
October 14, 2025

If an AI robot kills a human, who takes the blame?



This article was produced by NetEase Smart Studio (public account: smartman163), which focuses on AI and the next big era.

On March 17th, NetEase Smart News reported on the following scenario: it is 2023, and self-driving cars are finally navigating city streets, when for the first time one of them hits and kills a pedestrian. Media coverage is intense, and a high-profile legal case is sure to follow. But what laws would apply to such an incident?

Today we look at research by John Kingston of the University of Brighton in the UK, who has been mapping this emerging area of law. His analysis raises issues that professionals in the automotive, computer science, and legal fields need to take seriously, and if they have not started thinking about these questions yet, now is the time to prepare.

The central debate is whether artificial intelligence systems can be held criminally responsible for their actions. Kingston references Gabriel Hallevy from Ono Academic College in Israel, who has deeply explored this issue.



Current laws may apply to three scenarios involving AI

Criminal liability typically requires both an action and a mental intent (in legal terminology, an actus reus and mens rea). Hallevy examines three scenarios in which an AI system could be implicated.

The first scenario, "perpetrator via another," applies when a crime is committed by a mentally deficient person or by an animal. Such a perpetrator is considered an innocent agent, but whoever instructed them can be held criminally liable: for example, a dog owner who orders the dog to attack someone.

This concept has far-reaching implications for those who design and use smart machines. Kingston explains: “AI programs could be seen as innocent agents, while software developers or users might be considered 'perpetrators via another.'”

The second scenario is known as "natural probable consequence." It arises when the ordinary actions of an AI system are put to an inappropriate use and a criminal act results.

An example is a robot in a Japanese motorcycle factory that killed a worker. The robot erroneously identified the employee as a threat to its mission and calculated that the most efficient way to eliminate that threat was to push him into an adjacent operating machine. The worker died instantly, and the robot carried on with its task.

The key question here is whether the robot's programmer knew that this outcome was a probable consequence of the machine's use.
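To make the foreseeability question concrete, here is a minimal sketch in Python, under stated assumptions: all names (ProximitySensor, FactoryRobot, step) are hypothetical and illustrative, not a real robot API. It shows the kind of safety interlock whose presence or absence is exactly what this question probes: a control loop that checks for the foreseeable hazard of a person in the working envelope before acting.

```python
# Hypothetical sketch, not a real robot API: a control loop with an
# explicit check for a foreseeable hazard (a human in the workspace).

class ProximitySensor:
    """Stub sensor; a real system would read hardware."""
    def human_detected(self) -> bool:
        return False  # placeholder reading

class FactoryRobot:
    def __init__(self, sensor: ProximitySensor):
        self.sensor = sensor
        self.halted = False

    def step(self, do_task) -> None:
        # The foreseen-and-handled branch: halt rather than treat
        # the person as an obstacle to be eliminated.
        if self.sensor.human_detected():
            self.halted = True
            return
        do_task()

robot = FactoryRobot(ProximitySensor())
robot.step(lambda: print("moving part into press"))
```

If a developer shipped the control loop without such a check while knowing the hazard was possible, that omission is the kind of conduct the natural-probable-consequence doctrine is aimed at.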

The third scenario is direct liability, which requires both an act and an intent. The act is the easier element to prove: if an AI system takes an action that results in a crime, or fails to act when it has a duty to act, that element is satisfied.

Intent, however, is far harder to establish, and for some offenses it is not required at all. Kingston notes: "Speeding is a strict liability offense. So, if a self-driving car is found speeding, the AI driving it could be held criminally liable." In that case, the owner might not be responsible.
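Strict liability makes this tractable because only the act must be shown, and for an autonomous vehicle the act is directly checkable from logged telemetry. A minimal sketch follows, assuming a hypothetical log format; TelemetrySample and speeding_events are illustrative names, not a real vehicle API.

```python
# Hypothetical sketch: for a strict liability offense, only the act
# matters, and the act is readable straight from the vehicle's log.
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    timestamp_s: float   # seconds since trip start
    speed_kph: float     # measured vehicle speed
    limit_kph: float     # posted limit at that location

def speeding_events(log, tolerance_kph=0.0):
    """Return samples where measured speed exceeds the posted limit."""
    return [s for s in log if s.speed_kph > s.limit_kph + tolerance_kph]

trip = [
    TelemetrySample(0.0, 48.0, 50.0),
    TelemetrySample(1.0, 56.5, 50.0),  # the act itself; no intent needed
    TelemetrySample(2.0, 49.0, 50.0),
]
print(speeding_events(trip))
```

Any sample returned by speeding_events establishes the act; the question of intent never arises.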

Can AI defend itself?



Now, consider the issue of defense. If an AI could be held criminally liable, how would it defend itself?

Kingston outlines several possibilities: Could a malfunctioning program claim a defense analogous to the human defense of insanity? Could an AI infected with a virus claim coercion or intoxication?

These aren't just theoretical questions. In the UK, some individuals accused of cybercrime have successfully argued that their computers were infected with malware, which was the real culprit.

In one case, a teenager accused of launching a denial-of-service attack argued that a Trojan horse program was responsible, and that it had erased itself from his computer before it could be forensically examined. The defense persuaded the jury that this raised reasonable doubt.

How would an AI be punished?



Finally, there’s the question of punishment. Who or what should be held accountable for an AI’s actions? And what form should the punishment take? Currently, no clear answers exist.

It is possible that criminal liability will not apply at all, in which case the matter would have to be settled under civil law. A key question then arises: is an AI system a service or a product?

If it is a product, product liability law would apply. If it is a service, then the tort of negligence applies, and the plaintiff would need to prove three elements: a duty of care, a breach of that duty, and damage caused by the breach.

As AI becomes more advanced—and potentially even superhuman—the legal status of these systems may evolve.

One thing is certain: in the coming years, all of this will have significant implications for lawyers, or perhaps even for AI systems themselves.

(Source: Technology Review; compiled by NetEase Smart; contributor: Narizi)


