Unsupervised qualitative scoring for binary item features

K Ichikawa, H Tamano - Data Science and Engineering, 2020 - Springer
Abstract
Binary features, such as categories, keywords, or tags, are widely used to describe product properties. However, these features are incomplete in that they carry no numerical information about how strongly each property holds. A qualitative score attached to a tag is widely used to describe which product is better in terms of the given property. For example, on a restaurant navigation site, properties such as mood, dishes, and location are given as numerical values representing the goodness of each aspect. In this paper, we propose a novel approach to estimate qualitative scores from the binary features of products. Based on the natural assumption that an item with a better property is more popular among users who prefer that property, in short, “experts know best,” we introduce both discriminative and generative models with which user preferences and item qualitative scores are inferred from user–item interactions. We constrain the space of item qualitative scores by the item binary features so that the score of an item for a tag can be nonzero only when the item has that tag. This approach helps resolve the following difficulties: (1) no supervised data for score estimation, (2) implicit user purpose, and (3) irrelevant tag contamination. We evaluate our models on two artificial datasets and two real-world datasets of movie and book ratings. In the experiments, we evaluate the performance of our models under sparse-transaction and noisy-tag settings using the two artificial datasets. We also evaluate our models’ handling of irrelevant tags on the real-world movie-rating dataset and observe that they outperform a baseline model. Finally, tag rankings obtained from the real-world datasets are compared with those of a baseline model.
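
The abstract does not specify the model concretely, but the tag-masking constraint it describes can be illustrated with a small sketch. The Python/NumPy snippet below is an assumption-laden illustration, not the authors' implementation: it fits user preferences P over tags and item qualitative scores S from a user–item interaction matrix R, keeping S[i, t] nonzero only where the binary feature matrix B indicates that item i carries tag t. The matrix sizes, the squared loss, and the plain gradient-descent updates are all illustrative choices.

# Minimal sketch (not the authors' method): a discriminative model in the spirit
# of the abstract. P holds user preferences over tags, S holds item qualitative
# scores per tag, and S is masked by the binary item-tag matrix B so that an item
# can only be scored on tags it actually has.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, n_tags = 50, 40, 8
B = (rng.random((n_items, n_tags)) < 0.3).astype(float)   # binary item-tag features
R = rng.random((n_users, n_items))                        # observed user-item interactions (toy data)

P = rng.normal(scale=0.1, size=(n_users, n_tags))         # user preferences over tags
S = rng.normal(scale=0.1, size=(n_items, n_tags)) * B     # item scores, zero outside the item's tags

lr, reg = 0.01, 0.1
for _ in range(500):
    pred = P @ (S * B).T                                  # predicted interactions
    err = pred - R
    grad_P = err @ (S * B) + reg * P
    grad_S = (err.T @ P) * B + reg * S                    # mask keeps off-tag scores at zero
    P -= lr * grad_P
    S -= lr * grad_S

# For a given tag, rank only the items that carry it by their learned score.
tag = 0
items_with_tag = np.flatnonzero(B[:, tag])
ranking = items_with_tag[np.argsort(-S[items_with_tag, tag])]
print("items ranked by estimated quality for tag", tag, ":", ranking[:5])

Under these assumptions, the per-tag ranking at the end corresponds to the kind of tag ranking the abstract says is compared against a baseline on the real-world datasets.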