Music Recommendation By Facial Analysis

February 17, 2020

Abstract

This project develops an Emotion-Based Music Player, an Android application intended to minimize the effort users spend searching through large playlists. It is based on the principle of detecting human emotions through image processing with a Convolutional Neural Network (CNN) and playing music appropriate for enhancing the detected emotional state. The application extracts the user's facial expressions and features to determine his or her current mood; once the emotion is detected, a playlist of songs suited to that mood is presented to the user through the YouTube API.

Background Theory

This project was based on the principle of detecting human emotions using image processing and playing music appropriate for enhancing that emotional state. It relied on mathematical operations from the signal-processing framework, taking an image or a series of images as input.

Music plays a very important role in enhancing an individual's life: it is an important medium of entertainment for music lovers and listeners, and sometimes even serves a therapeutic purpose. In today's world, with ever-increasing advancements in multimedia and technology, various music players have been developed with features like fast forward, reverse, variable playback speed (seek and time compression), local playback, and streaming playback with multicast streams. Although these features satisfy the user's basic requirements, the user still faces the task of manually browsing through the playlist and selecting songs based on his or her current mood and behaviour.

Although human speech and gesture are common ways of expressing emotions, facial expression is the most ancient and natural way of expressing feelings, emotions and mood. A person's state of mind and current emotional mood can be easily observed through his or her facial expressions. This project took the basic emotions (happy, sad, anger, excitement, surprise, disgust, fear and neutral) into consideration [5]. Face detection in this project was performed using a convolutional neural network.

Music is often described across the globe as a "language of emotions". Hard evidence on why the human brain reacts to music differently is not yet available, but scientists have found that the brain, through cerebellum activation, synchronizes the pulse of the music with its neural oscillators.
While processing music, the brain's language centre, emotional centre and memory centre are connected, stimulating the thrill produced by expected beats in a pattern and providing a synesthetic experience [6]. This project therefore aimed to provide people with befitting music using facial recognition, saving the time otherwise spent scrolling through a never-ending list of songs and thereby enhancing the user experience.
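The emotion-classification step described above can be sketched as a minimal CNN forward pass: one convolution, a ReLU, max-pooling, and a dense softmax layer over the eight emotion classes. This is an illustrative NumPy sketch with random, untrained weights; the layer sizes, kernel, and function names are assumptions, not the project's actual trained model.

```python
import numpy as np

# The eight basic emotions considered in the project.
EMOTIONS = ["happy", "sad", "anger", "excitement",
            "surprise", "disgust", "fear", "neutral"]

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling; trims edges not divisible by `size`."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(face, kernel, dense_w, dense_b):
    """Conv -> ReLU -> max-pool -> dense -> softmax over the emotions."""
    features = max_pool(relu(conv2d(face, kernel)))
    logits = features.ravel() @ dense_w + dense_b
    return softmax(logits)

# Random (untrained) weights, just to show the data flow.
kernel = rng.standard_normal((3, 3))
face = rng.random((48, 48))                 # normalized grayscale face crop
feat_dim = ((48 - 3 + 1) // 2) ** 2         # 23 * 23 = 529 pooled features
dense_w = rng.standard_normal((feat_dim, len(EMOTIONS))) * 0.01
dense_b = np.zeros(len(EMOTIONS))

probs = classify(face, kernel, dense_w, dense_b)
mood = EMOTIONS[int(np.argmax(probs))]      # predicted current mood
```

A real model would stack several trained convolutional layers and be fed face crops from a detector, but the data flow from pixel input to an emotion probability vector is the same.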

Problem Statement

  • The user has to manually browse through the playlist and select songs based on his/her current mood and behaviour.
  • Randomly played songs may not match the mood of the user.
  • Existing methods for automating the playlist generation process are computationally slow, less accurate and sometimes even require additional hardware such as EEG headsets or other sensors.

Objectives

  • A novel approach that provides the user with an automatically generated playlist of songs based on the user's current emotion and mood.
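The objective above, turning a detected mood into a playlist served through the YouTube API, can be sketched as a simple lookup that builds a YouTube search URL. The mood-to-query table and function name here are hypothetical illustrations, not the application's actual mapping.

```python
from urllib.parse import urlencode

# Hypothetical mapping from detected emotion to a playlist search query.
MOOD_QUERIES = {
    "happy":      "upbeat happy songs playlist",
    "sad":        "soothing comforting songs playlist",
    "anger":      "calming instrumental music playlist",
    "excitement": "high energy songs playlist",
    "surprise":   "trending new music playlist",
    "disgust":    "relaxing ambient music playlist",
    "fear":       "peaceful acoustic songs playlist",
    "neutral":    "popular songs playlist",
}

def playlist_search_url(mood: str) -> str:
    """Build a YouTube search URL for the playlist matching the mood."""
    query = MOOD_QUERIES.get(mood, MOOD_QUERIES["neutral"])
    return "https://www.youtube.com/results?" + urlencode({"search_query": query})

url = playlist_search_url("happy")
```

In the Android application, the same query would instead be sent to the YouTube Data API so the resulting videos can be listed and played inside the app.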

Scope of Project and Applications

  • This system can be effectively used for personal use. It can act as a mood lifter or mood enhancer.
  • This system would be helpful in music-therapy treatment, giving music therapists the support needed to treat patients suffering from disorders such as mental stress, anxiety, acute depression and trauma.

Authors

Ayush Guidel, Birat Sapkota, Krishna Sapkota and Nirajan Thapa