© 2017, Springer Science+Business Media, LLC. Contextual factors greatly affect users' preferences for music, so they can benefit both music recommendation and music retrieval. However, acquiring and utilizing contextual information remains challenging. This paper proposes a novel approach for context-aware music recommendation, which infers users' preferences for music and then recommends pieces that fit their real-time requirements. Specifically, the proposed approach first learns low-dimensional representations of music pieces from users' listening sequences using neural network models. Based on the learned representations, it then infers and models users' general and contextual preferences for music from their historical listening records. Finally, music pieces in accordance with these preferences are recommended to the target user. Extensive experiments on real-world datasets compare the proposed method with other state-of-the-art recommendation methods. The results demonstrate that the proposed method significantly outperforms those baselines, especially on sparse data.
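The embedding step described in this abstract, learning low-dimensional song vectors from listening sequences with a neural model, resembles word2vec-style skip-gram training with negative sampling. The following is a minimal sketch under that assumption; the function name, hyperparameters, and update rule are illustrative, not the paper's exact model:

```python
import numpy as np

def learn_song_embeddings(sequences, dim=16, window=2, epochs=10,
                          lr=0.05, neg=3, seed=0):
    """Learn song vectors from listening sequences (skip-gram with
    negative sampling; a sketch, not the paper's exact architecture)."""
    rng = np.random.default_rng(seed)
    vocab = sorted({s for seq in sequences for s in seq})
    idx = {s: i for i, s in enumerate(vocab)}
    n = len(vocab)
    W_in = rng.normal(0, 0.1, (n, dim))   # target (input) embeddings
    W_out = rng.normal(0, 0.1, (n, dim))  # context (output) embeddings
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for seq in sequences:
            for i, song in enumerate(seq):
                t = idx[song]
                lo, hi = max(0, i - window), min(len(seq), i + window + 1)
                for j in range(lo, hi):
                    if j == i:
                        continue
                    c = idx[seq[j]]
                    # positive (observed) song-context pair
                    g = sigmoid(W_in[t] @ W_out[c]) - 1.0
                    grad_in = g * W_out[c]
                    W_out[c] -= lr * g * W_in[t]
                    W_in[t] -= lr * grad_in
                    # randomly drawn negative samples
                    for k in rng.integers(0, n, neg):
                        if k == c:
                            continue
                        g = sigmoid(W_in[t] @ W_out[k])
                        grad_in = g * W_out[k]
                        W_out[k] -= lr * g * W_in[t]
                        W_in[t] -= lr * grad_in
    return {s: W_in[idx[s]] for s in vocab}
```

Songs that co-occur in many listening sequences end up with nearby vectors, which is what allows the later preference-modeling step to operate in a continuous space rather than on raw item IDs.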
Wang, D, Deng, S, Zhang, X & Xu, G 2018, 'Learning to embed music and metadata for context-aware music recommendation', World Wide Web, vol. 21, pp. 1399-1423.
© 2017 Springer Science+Business Media, LLC, part of Springer Nature. Contextual factors greatly influence users' musical preferences, so they can substantially benefit music recommendation and retrieval tasks. However, how to obtain and utilize contextual information remains an open problem. In this paper, we propose a context-aware music recommendation approach that recommends music pieces appropriate for users' contextual preferences. Like matrix factorization methods for collaborative filtering, the proposed approach does not require music pieces to be represented by predefined features; instead, it learns their representations from users' historical listening records. Specifically, the proposed approach first learns music pieces' embeddings (feature vectors in a low-dimensional continuous space) from music listening records and the corresponding metadata. It then infers and models users' global and contextual preferences for music from their listening records using the learned embeddings. Finally, it recommends appropriate music pieces according to the target user's preferences to satisfy her/his real-time requirements. Experimental evaluations on a real-world dataset show that the proposed approach outperforms baseline methods in terms of precision, recall, F1 score, and hitrate; in particular, it performs better on sparse datasets.
Wang, D, Xu, G & Deng, S 2017, 'Music recommendation via heterogeneous information graph embedding', Proceedings of the 2017 International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Anchorage, Alaska, USA, pp. 596-603.
Traditional music recommendation techniques suffer from limited performance due to the sparsity of user-music interaction data; this sparsity can be alleviated by incorporating auxiliary information. In this paper, we study the problem of personalized music recommendation that takes several kinds of auxiliary information into consideration. To achieve this goal, a Heterogeneous Information Graph (HIG) is first constructed to encode different kinds of heterogeneous information, including the interactions between users and music pieces, music playing sequences, and the metadata of music pieces. Based on the HIG, a Heterogeneous Information Graph Embedding method (HIGE) is proposed to learn latent low-dimensional representations of music pieces. We then develop a context-aware music recommendation method based on the learned embeddings. Extensive experiments have been conducted on real-world datasets to compare the proposed method with other state-of-the-art recommendation methods. The results demonstrate that the proposed method significantly outperforms those baselines, especially on sparse datasets.
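The HIG construction described above combines user-music interactions, playing sequences, and metadata into one graph. A minimal sketch of such a typed, weighted graph follows; the node typing, edge types, and unit weights are illustrative assumptions, not the paper's exact scheme:

```python
from collections import defaultdict

def build_hig(interactions, sequences, metadata):
    """Build a Heterogeneous Information Graph as an adjacency map:
    node -> {neighbor: weight}. Nodes are (type, id) pairs so users,
    music pieces, and metadata values coexist in one graph. This is a
    sketch; the paper's edge definitions and weights may differ."""
    g = defaultdict(lambda: defaultdict(float))

    def add(u, v, w=1.0):
        g[u][v] += w  # undirected: accumulate weight on both ends
        g[v][u] += w

    for user, song in interactions:          # user-music listening edges
        add(("user", user), ("music", song))
    for seq in sequences:                    # sequential co-play edges
        for a, b in zip(seq, seq[1:]):
            add(("music", a), ("music", b))
    for song, attrs in metadata.items():     # music-metadata edges
        for key, val in attrs.items():
            add(("music", song), (key, val))
    return g
```

An embedding method like the HIGE described in the abstract would then be trained over this graph's edges, so that songs sharing users, play contexts, or metadata values receive similar vectors.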
Wang, D, Deng, S & Xu, G 2016, 'GEMRec: A graph-based emotion-aware music recommendation approach', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Web Information Systems Engineering, Springer, Shanghai, China, pp. 92-106.
© Springer International Publishing AG 2016. Music recommendation has gained substantial attention in recent years. As one of the most important contextual features, user emotion has great potential to improve recommendations, but it has not yet been sufficiently explored due to the difficulty of acquiring and incorporating emotion data. This paper proposes a graph-based emotion-aware music recommendation approach (GEMRec) that simultaneously takes a user's music listening history and emotion into consideration. The proposed approach models the relations between user, music, and emotion as three-element tuples (user, music, emotion), upon which an Emotion-Aware Graph (EAG) is built; a relevance propagation algorithm based on random walk is then devised to rank the relevance of music items for recommendation. Evaluation experiments are conducted on a real dataset collected from a Chinese microblog service, in comparison to baselines. The results show that the emotional context extracted from a user's microblogs improves music recommendation performance in terms of hitrate, precision, recall, and F1 score.
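The relevance propagation step in this abstract, a random walk over the graph that ranks items for a given user, can be sketched as a random walk with restart (personalized-PageRank style). The details of GEMRec's actual algorithm may differ; function name, damping factor, and iteration count here are assumptions:

```python
import numpy as np

def rank_by_random_walk(adj, seed_nodes, alpha=0.15, iters=100):
    """Rank nodes by relevance to seed_nodes via random walk with
    restart (a sketch of the relevance-propagation idea).
    adj: dict mapping node -> list of neighbor nodes."""
    nodes = sorted(adj)
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    P = np.zeros((n, n))
    for u, nbrs in adj.items():
        for v in nbrs:
            P[idx[v], idx[u]] = 1.0 / len(nbrs)  # column-stochastic step
    r = np.zeros(n)
    for s in seed_nodes:
        r[idx[s]] = 1.0 / len(seed_nodes)        # restart distribution
    x = r.copy()
    for _ in range(iters):
        x = (1 - alpha) * (P @ x) + alpha * r    # propagate, then restart
    return {node: x[idx[node]] for node in nodes}
```

Seeding the walk at the target user's node (and, in an emotion-aware graph, at the current emotion node) concentrates relevance mass on music nodes close to that context; the music nodes are then ranked by score.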