Interpretable Embeddings for Next Point-of-Interest Recommendation via Large Language Model Question–Answering
Next point-of-interest (POI) recommendation provides users with location suggestions that they may be interested in, allowing them to explore their surroundings. Existing sequence-based or graph-based POI recommendation methods have matured in capturing spatiotemporal information; however, POI recommendation methods based on large language models (LLMs) focus more on capturing sequential transition relationships. This raises an unexplored challenge: how to leverage LLMs to better capture geographic contextual information. To address this, we propose interpretable embeddings for next point-of-interest recommendation via large language model question–answering, named QA-POI, which transforms the POI recommendation task into obtaining interpretable embeddings via LLM prompts, followed by lightweight MLP fine-tuning. We introduce question–answer embeddings, which are generated by asking LLMs yes/no questions about the user's trajectory sequence. By asking spatiotemporal questions about the trajectory sequence, we aim to extract as much spatiotemporal information from the LLM as possible. During training, QA-POI iteratively selects the most valuable subset of questions from a candidate pool to prompt the LLM for the next POI recommendation, and is then fine-tuned for the next POI recommendation task using a lightweight Multi-Layer Perceptron (MLP). Extensive experiments on two datasets demonstrate the effectiveness of our approach.
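To make the question–answer embedding idea concrete, the following is a minimal, hypothetical Python sketch rather than the authors' released implementation: the `ask_yes_no` callable (an LLM wrapper returning the probability of a "yes" answer), the example question list, and the MLP sizes are all illustrative assumptions.

```python
# Hypothetical sketch of QA-style interpretable embeddings; not the paper's code.
from typing import Callable

import torch
import torch.nn as nn

# Illustrative yes/no questions about a trajectory and a candidate POI (assumed, not from the paper).
QUESTIONS = [
    "Is the candidate POI within 1 km of the user's most recent check-in?",
    "Does the candidate POI's category appear among the user's recent visits?",
    "Is the current time within the candidate POI's typical peak hours?",
]

def qa_embedding(trajectory_text: str, candidate_poi: str,
                 ask_yes_no: Callable[[str], float]) -> torch.Tensor:
    """Build an interpretable embedding with one dimension per yes/no question,
    filled with the LLM's probability of answering "yes"."""
    answers = []
    for question in QUESTIONS:
        prompt = (
            f"Trajectory: {trajectory_text}\n"
            f"Candidate POI: {candidate_poi}\n"
            f"Question: {question}\nAnswer yes or no."
        )
        answers.append(ask_yes_no(prompt))  # assumed to return a float in [0, 1]
    return torch.tensor(answers, dtype=torch.float32)

class POIScorer(nn.Module):
    """Lightweight MLP head mapping a QA embedding to a relevance score."""
    def __init__(self, num_questions: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_questions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, qa_emb: torch.Tensor) -> torch.Tensor:
        # Each input dimension corresponds to a human-readable question,
        # so the learned weights remain inspectable.
        return self.net(qa_emb).squeeze(-1)
```

In the pipeline described by the abstract, the question set itself would also be pruned iteratively during training, keeping only the most valuable questions before the MLP is fine-tuned; that selection loop is omitted from this sketch.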
| Main Authors: | Jiubing Chen, Haoyu Wang, Jianxin Shang, Chaomurilige |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-11-01 |
| Series: | Mathematics |
| Subjects: | point of interest; sequential recommendation; large language models; spatiotemporal |
| Online Access: | https://www.mdpi.com/2227-7390/12/22/3592 |
| Field | Value |
|---|---|
| author | Jiubing Chen; Haoyu Wang; Jianxin Shang; Chaomurilige |
| collection | DOAJ |
| format | Article |
| id | doaj-art-51dbaa99b07d4e60a78d76d3adf99d9f |
| institution | Kabale University |
| issn | 2227-7390 |
| language | English |
| publishDate | 2024-11-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Mathematics |
| doi | 10.3390/math12223592 |
| citation | Mathematics, vol. 12, no. 22, article 3592, 2024-11-01 |
| author affiliations | Jiubing Chen: School of Statistics, Jilin University of Finance and Economics, Changchun 130117, China; Haoyu Wang: Big Data and Network Management Center, Jilin University, Changchun 130012, China; Jianxin Shang: School of Information and Technology, Northeast Normal University, Changchun 130024, China; Chaomurilige: Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance, Ministry of Education, Minzu University of China, Haidian District, Beijing 100081, China |
| title | Interpretable Embeddings for Next Point-of-Interest Recommendation via Large Language Model Question–Answering |
| topic | point of interest sequential recommendation large language models spatiotemporal |
| url | https://www.mdpi.com/2227-7390/12/22/3592 |