As Large Language Models (LLMs) continue to evolve, they are being increasingly utilized across various domains. However, human cognitive biases have been observed to manifest in LLMs as well. Previous studies have investigated whether one such bias, the primacy effect, is evident in LLMs using English datasets. These studies confirmed the presence of the primacy effect in English data but did not explore its characteristics in datasets of other languages. Given reports that LLM responses may vary across cultural contexts, it is crucial to examine whether the primacy effect also appears when LLMs operate in Korean. Therefore, we conduct an experiment to test whether the primacy effect exists in LLMs when they generate answers for Korean datasets. We find that the primacy effect is significantly weaker in LLMs with Korean datasets than with English datasets. We also find that the strength of the primacy effect varies with the number of options the dataset provides.