As AI-based technologies become more sophisticated, concerns about the side effects that may arise from AI are increasing. Problems caused by AI can inflict direct harm on individuals and society, leading to strong demands for regulation. However, because risks posed by AI cannot be easily controlled through government regulation alone, alternative regulatory approaches such as co-regulation, independent agency regulation, and self-regulation are being discussed, yet related empirical research has been limited. The purpose of this study is to examine the relationship between AI risk perception (social and ethical risks and technical risks) and preference for regulation types (government regulation, co-regulation, independent agency regulation, and self-regulation).
The analysis yielded three main findings. First, in the government and co-regulation model, perceived risks of technology abuse, job loss, personal information infringement, and system control failure can shift preference from government regulation toward co-regulation.
Second, in the government and independent agency regulation model, perceived risks of job loss, security incidents, and inaccurate results can shift preference from government regulation toward independent agency regulation. Lastly, in the government and self-regulation model, personal information infringement and opacity were found to influence the shift from government regulation toward self-regulation.