Sarcasm is a form of figurative language widely used to convey an opinion implicitly or for humorous purposes. Prior research has repeatedly tried to identify sarcasm directly from the tokens within a text. However, this is insufficient because sarcasm lacks the specific vocabulary found in polarized sentences. Especially in threads or discussions, sarcasm can often be identified only after obtaining context from previous replies. To this end, I propose IA-BERT, a classifier architecture that incorporates contextual information to identify incongruity features in sarcastic texts. IA-BERT embeds an incongruity attention layer that combines features extracted from the response alone with interactive features obtained from the context-response sequence. The model leverages BERT pre-trained embeddings and yields a performance improvement over a standard fine-tuned BERT classifier. IA-BERT also outperforms the more sophisticated LCF-BERT architecture in both accuracy and F1-score.
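To make the idea of combining response-only and context-response features concrete, the following is a minimal sketch of one way such a classifier could be wired up, assuming a PyTorch and Hugging Face Transformers setup. The class name `IncongruityAttentionClassifier`, the use of multi-head cross-attention as the incongruity attention layer, and all dimensions are illustrative assumptions, not the exact IA-BERT specification.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class IncongruityAttentionClassifier(nn.Module):
    """Hypothetical sketch: BERT encodes the response alone and the
    context-response pair; a cross-attention layer lets the response
    representation attend over the pair representation to surface
    incongruity features before classification."""

    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Response tokens act as queries over the context+response tokens.
        self.incongruity_attn = nn.MultiheadAttention(
            hidden, num_heads=8, batch_first=True
        )
        self.classifier = nn.Linear(hidden * 2, num_labels)

    def forward(self, resp_inputs, pair_inputs):
        resp_out = self.bert(**resp_inputs).last_hidden_state   # response alone
        pair_out = self.bert(**pair_inputs).last_hidden_state   # context + response
        attn_out, _ = self.incongruity_attn(resp_out, pair_out, pair_out)
        # Concatenate the [CLS] positions of the standalone and interactive streams.
        combined = torch.cat([resp_out[:, 0], attn_out[:, 0]], dim=-1)
        return self.classifier(combined)


if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    context = "What a great day to be stuck in traffic."
    response = "Yeah, I just love wasting my morning."
    resp_inputs = tokenizer(response, return_tensors="pt")
    pair_inputs = tokenizer(context, response, return_tensors="pt")
    model = IncongruityAttentionClassifier()
    logits = model(resp_inputs, pair_inputs)
    print(logits.shape)  # torch.Size([1, 2])
```

In this sketch the two streams share one BERT encoder; whether the standalone and interactive features share weights, and how exactly they are fused, would follow the model described in the body of the work rather than this example.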