There has been a continuous movement to automate fact-checking through artificial intelligence (AI) technology as a countermeasure to the widespread production and rapid dissemination of misinformation, disinformation, and harmful information on the internet. However, this approach often appears to be a technology-centric solution that lacks two key perspectives. First, it proposes AI for the fact-checking process without sufficient consideration of the conditions and methods under which AI is implemented in that process. Second, the current approach lacks an understanding of how factuality is negotiated and constructed within a fact-checking process performed by AI. This paper addresses these issues by examining how AI, as a ‘technology’ of fact-checking, constructs ‘facts’, drawing on domestic and international cases of AI fact-checking technology. It explores how the ‘factuality’ constructed by AI fact-checking differs from the ‘factuality’ provided by traditional fact-checking journalism, which has historically shaped social reality by verifying and adding facts. To achieve this, the paper reviews AI fact-checking technologies presented by globally certified fact-checking organizations as of October 2023. The sources are the International Fact-Checking Network (IFCN) and cases of AI fact-checking technology published in an online database of tools that fight disinformation, created by the RAND Corporation in the U.S. Based on the objectives of each technology and how it is configured, this paper categorizes the AI fact-checking technology cases into: 1) detection of claims, 2) evidence extraction and verification of claim truthfulness, and 3) detection and control of information dissemination patterns.
This paper then critically examines how each type of technology socially constructs ‘factuality’ compared to the ways in which traditional journalism has defined ‘factuality.’ In sum, AI fact-checking automates a process in which the requirements of external ‘objective facts’ are defined in technically operationalizable ways, and the machine ‘filters’ and ‘matches’ information according to whether it meets these criteria. Ultimately, this process reconstructs the nature of facts within the ‘requirements of facts’ that the technology can construct. This study emphasizes the need to move beyond the current focus on whether AI technology can be adopted or implemented, and instead calls for a detailed examination of the specific conditions and methods under which AI is deployed within the context of fact-checking. Through the analysis of case studies, this study reveals that the definition of ‘fake news’ becomes highly ambiguous depending on which aspects of the complex fact-checking process are automated by AI, and that the concept of factuality, traditionally emphasized in journalism, can also be subject to political (re)definition.