The possibility of crime involving artificial intelligence is growing as the development of artificial intelligence accelerates. As robots equipped with highly developed software become a reality, new forms of crime that make use of such robots can be expected, and similar incidents are already occurring. This raises the question of whether an A.I. robot can be held criminally liable for a crime.
The Korean Criminal Act first introduced provisions on computer crime in the 1995 amendment. In the twenty years since, technology has developed at a tremendous pace. The law cannot keep up with scientific and technological development in real time, but the effort to respond to such development more quickly is essential. If the law falls too far behind technology, it becomes a mere obstacle trailing technological development; it can neither address violations of legal interests caused by technology nor protect the legal interests of people.
The discussion of robot crime is still at an early stage in South Korea, and more studies are expected in the future. In the United States and Germany these discussions are already being conducted more actively, and their findings can be taken into consideration as well.
There are basic concepts that must be established and requirements that must be satisfied before robot crime can be discussed. This paper addresses the following issues: First, in discussing robot crime under the Criminal Act, it must be examined whether the legal concept of 'robot' can be established. Second, for a robot crime to exist, the robot must be an agent of the crime; I therefore examine whether a robot can be recognized as such an agent. Third, if a robot cannot be recognized as an agent of crime, it must be examined whether and how a human being can bear responsibility for an accident caused by an A.I. robot.
First, the concept of robot crime can be defined in various ways. However, there is already a statutory provision that defines it, so we can make use of that legal definition.
Second, in Germany there is a view that applies theories of corporate criminal agency to the agent of a robot crime. I find it difficult, however, to apply these theories to robot crime. A corporation in fact acts through natural persons, whereas a robot operates through its own machinery. In this respect an A.I. robot resembles an animal rather than a corporation. Today animals cannot be held criminally responsible, and it is likewise difficult to recognize a robot as the agent of a crime.
Third, I conclude in this paper that a robot cannot be recognized as a criminal agent. Therefore, when a robot violates the legal interests of people, the case should be handled primarily under the principles of civil product liability. Where a criminal solution is necessary, it should be examined whether the owner or producer of the A.I. robot was negligent.
In such a case, negligence by omission of the duty of supervision may be attributed to the owner of the robot. As for the producer, occupational negligence may be recognized where the producer failed to give full consideration to foreseeable violations of legal interests at the time of manufacture.