A CRITIQUE OF GABRIEL HALLEVY’S MODELS OF CRIMINAL LIABILITY OF ARTIFICIAL INTELLIGENCE ENTITIES

K. OGUNNOIKI & Ikenga K.E. ORAEGBUNAM

Abstract


Rapid technological change has profoundly influenced human life. Machines are becoming increasingly sophisticated in their functions, and the positive impact of these artificial entities on society is indisputable. Robots and computer systems are gradually taking over human activities, from autonomous cars and machine translation software to medical diagnosis software. From homes to hospitals and other public spaces, there is no denying the rapid growth and influence of Artificial Intelligence (AI) entities. However, the rise of AI entities also raises questions about liability for crimes associated with them. Although criminal law embodies the most powerful form of legal social control for regulating crime, the prevailing concern is that AI entities are not ordinarily considered subjects of law. Unfortunately, there is not enough legislation or regulation addressing the question of liability in relation to AI entities. Be that as it may, scholars have attempted to address this thorny issue. One such scholar is the Israeli Professor of Criminal Law, Gabriel Hallevy, whose basic question is: does the growing intelligence of AI entities subject them to legal social control like any other legal entity? Hallevy developed three models in response: the Perpetration-via-Another Liability Model, the Natural-Probable-Consequence Liability Model, and the Direct Liability Model. This study examines these models and offers a response. It concludes that Hallevy's models are novel but not sufficiently capable of addressing existing questions on the culpability of AI entities.
