Conceptual commonsense-aware attentive modeling with pre-trained masked language models for humor recognition

Humor is an important component of daily communication and usually causes laughter that promotes mental and physical health. Understanding humor is sometimes difficult for humans and may be more difficult for AIs since it usually requires deep commonsense. In this paper, we focus on automatic humor recognition by extrapolating conceptual commonsense-aware modules to Pre-trained Masked Language Models (PMLMs) to provide external knowledge. Specifically, keywords are extracted from an input text and conceptual commonsense embeddings associated with the keywords are obtained by using a COMET decoder. By using multi-head attention the representations of the input text and the commonsense are integrated. In this way we attempt to enable the proposed model to access commonsense knowledge and thus recognize humor that is not detectable only by PMLMs. Through the experiments on two datasets we explore different sizes of PMLMs and different amounts of commonsense and find some sweet spots of PMLMs’ scales for integrating commonsense to perform humor recognition well. Our proposed models improve the F1 score by up to 1.7% and 4.1% on the haHackathon and humicroedit datasets respectively. The detailed analyses show our models also improve the sensitivity to humor while retaining the predictive tendency of the corresponding PMLMs.
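The fusion step the abstract describes (a PMLM text representation attending over COMET-generated commonsense embeddings via multi-head attention) can be sketched as follows. This is a minimal illustrative NumPy version, not the paper's actual implementation; the dimensions, head count, and variable names are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(query, keys, values, n_heads=4):
    """Scaled dot-product attention with the text representation as the
    query and commonsense embeddings as keys/values, split across heads."""
    d = query.shape[-1]
    assert d % n_heads == 0
    dh = d // n_heads
    # Split the model dimension into heads: (n_heads, seq_len, dh)
    q = query.reshape(-1, n_heads, dh).transpose(1, 0, 2)
    k = keys.reshape(-1, n_heads, dh).transpose(1, 0, 2)
    v = values.reshape(-1, n_heads, dh).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)   # (n_heads, 1, n_cs)
    out = softmax(scores, axis=-1) @ v                # (n_heads, 1, dh)
    # Concatenate heads back into the model dimension
    return out.transpose(1, 0, 2).reshape(-1, d)

rng = np.random.default_rng(0)
text_repr = rng.normal(size=(1, 64))     # e.g. a PMLM sentence vector
commonsense = rng.normal(size=(5, 64))   # e.g. 5 COMET commonsense embeddings
fused = multi_head_attention(text_repr, commonsense, commonsense)
print(fused.shape)  # (1, 64)
```

The fused vector would then feed a classification head for the humor/non-humor decision; in the paper's setting the query and key/value vectors come from the PMLM and the COMET decoder rather than random data.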

Bibliographic Details
Main Authors: Yuta Sasaki, Jianwei Zhang, Yuhki Shiraishi
Format: Article
Language: English
Published: Elsevier 2024-12-01
Series: Natural Language Processing Journal
Subjects: Humor recognition; Commonsense-aware attention; Knowledge-intensive NLP; Commonsense knowledge; PMLMs
Online Access: http://www.sciencedirect.com/science/article/pii/S2949719124000657
ISSN: 2949-7191
Author Affiliations: Yuta Sasaki (Iwate University, Morioka, Japan); Jianwei Zhang (Iwate University, Morioka, Japan; corresponding author); Yuhki Shiraishi (Tsukuba University of Technology, Tsukuba, Japan)