Artificial intelligence (AI) in healthcare holds potential for addressing disease challenges, optimizing the allocation of healthcare resources, and improving medical accessibility. However, concerns have emerged regarding potential risks and harms, including the misuse of personal data, privacy infringements, and issues of bias and discrimination. Given the dual nature of AI, it is crucial to provide ethical guidance that promotes good practices in AI research and development (R&D), prevents negative consequences arising from the use of AI in healthcare, and offers directions for improving legal frameworks in the future. This paper therefore proposes six ethical principles for AI R&D in healthcare, along with checklists for good practices. The principles are developed with the objectives of harmonizing existing research governance, such as data sharing, and ensuring responsible R&D of AI in healthcare in the context of South Korea.
Firstly, the autonomy of individuals must be respected and protected. This principle entails striking a balance between safeguarding the autonomy of data subjects, whose information is used by AI researchers, and ensuring that control over automated decision-making remains with humans. Secondly, researchers in the field of AI in healthcare should aim to contribute to human well-being, safety, and public benefit. This implies that researchers recognize the various risks and societal concerns associated with the technology, and that legal and social efforts are indispensable for promoting the collective benefit. Thirdly, AI technology in healthcare should ensure transparency and explainability and foster social trust. The outcomes generated by AI should be interpretable and explainable, and the use of data, including R&D outcomes from researchers and affiliated institutions, should be appropriately disclosed to the public in order to build social trust. Fourthly, AI researchers in healthcare should act responsibly, while the legal liability framework requires improvement. Given the shortcomings of the current compensation system for medical errors caused by such technology, researchers and affiliated institutions should collaborate to minimize harm to patients and consumers. Fifthly, AI in healthcare should strive for inclusivity and fairness. This principle necessitates designing the technology to be used appropriately and equitably regardless of individual characteristics, addressing digital disparities, and resolving issues of bias. Lastly, AI R&D in healthcare should be responsive and sustainable. This implies that societal perceptions of the technology should be addressed systematically and that there should be a commitment to pursuing sustainable R&D approaches.