Sensors & Transducers

Vol. 261, Issue 2, July 2023, pp. 77-86

Multimodal Deep Learning Architecture for Hindustani Raga Classification

1,* Stella PASCHALIDOU and 2,** Ioanna MILIARESI

1 Hellenic Mediterranean University, E. Daskalaki, Rethymno, Greece

Tel.: +30 2831021918

2 Ionian University, Plateia Tsirigoti 7, Corfu 49100, Greece

Tel.: +30 2661087860


Received: 27 May 2023 Accepted: 23 June 2023 Published: 26 June 2023

Abstract: In this paper we address the design of a deep learning architecture for the classification of Hindustani (North Indian classical music) ragas (melodic modes). We propose a modular deep learning architecture that processes data from two modalities: audio recordings and metadata. Our bimodal classifier employs convolutional and feed-forward neural networks and combines spectral information from the audio with metadata descriptors tailored to the distinctive melodic characteristics of Hindustani music. Specifically, audio recordings together with manually annotated and automatically extracted metadata were used for samples of both Hindustani improvisations and compositions available in the Saraga open dataset of Indian art music. Experiments are conducted on two Hindustani ragas, Yaman and Bhairavi. Results indicate that integrating multimodal data increases classification accuracy compared to using audio features alone. Moreover, for the task of raga classification, the swaragram feature, which is customized for Hindustani music, outperforms audio features commonly used for Eurocentric music genres.
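The two-branch, late-fusion design described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the layer sizes, the 12-bin swaragram dimensionality, the 5-field metadata vector, and the fusion-by-concatenation strategy are assumptions for demonstration, not the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Audio branch: a stand-in for the convolutional network that would process
# a spectral feature such as the swaragram (a pitch-class representation).
def audio_branch(spectrogram, w):
    # Global average over time frames, then one dense layer
    # (a crude proxy for convolution + pooling).
    pooled = spectrogram.mean(axis=1)          # shape: (n_bins,)
    return relu(pooled @ w)                    # shape: (16,)

# Metadata branch: a feed-forward network over metadata descriptors.
def metadata_branch(meta, w):
    return relu(meta @ w)                      # shape: (8,)

# Late fusion: concatenate both embeddings, then a single sigmoid output
# for the binary decision (two ragas: Yaman vs. Bhairavi in the paper).
def fused_classifier(spectrogram, meta, params):
    a = audio_branch(spectrogram, params["w_audio"])
    m = metadata_branch(meta, params["w_meta"])
    z = np.concatenate([a, m])                 # shape: (24,)
    return sigmoid(z @ params["w_out"])        # scalar probability

# Randomly initialized weights (illustrative dimensions).
params = {
    "w_audio": rng.normal(size=(12, 16)) * 0.1,  # 12 swara bins -> 16 units
    "w_meta":  rng.normal(size=(5, 8)) * 0.1,    # 5 metadata fields -> 8 units
    "w_out":   rng.normal(size=(24,)) * 0.1,
}

spec = rng.random((12, 100))   # toy swaragram: 12 pitch classes x 100 frames
meta = rng.random(5)           # toy metadata descriptor vector
p = fused_classifier(spec, meta, params)
```

In a trained model, `p` would be thresholded at 0.5 to assign one of the two raga labels; here the weights are random, so `p` is only a well-formed probability, not a meaningful prediction.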

Keywords: Hindustani raga identification, Deep learning, Convolutional neural networks, Multimodal.