Enhanced Realistic Audio Sound Generation based on Virtual Speaker Layout
Kwangki Kim
Kwangki Kim, Department of IT convergence, Korea Nazarene University, Cheonan, South Korea.
Manuscript received on 1 August 2019. | Revised Manuscript received on 10 August 2019. | Manuscript published on 30 September 2019. | PP: 2017-2021 | Volume-8 Issue-3 September 2019 | Retrieval Number: C4518098319/19©BEIESP | DOI: 10.35940/ijrte.C4518.098319
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: In this paper, we propose a constant power panning (CPP) based realistic binaural sound generation method using head-related transfer function (HRTF) coefficients in a virtual twelve-speaker layout, with speakers arranged at 30-degree intervals, for VR services. In the proposed method, the original multi-channel audio signals are mapped to the virtual playback system using CPP according to the user's head movement, and the realistic stereo binaural sound is formed by convolving the mapped multi-channel audio signals with the HRTF coefficients of the virtual twelve-speaker layout. Since the angle between any two adjacent speakers is fixed at 30 degrees, the azimuthal resolution of the CPP-based realistic sound generation is also 30 degrees, allowing a more accurate realistic sound that reflects the user's head movement. The experimental results show that the proposed method performs similarly to the HRTF-based realistic binaural sound generation while requiring only about 0.79 Mbytes of HRTF coefficient data, roughly 1/30 of the amount needed by the HRTF-based method.
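The two steps described in the abstract — CPP mapping of a source to a pair of adjacent virtual speakers, followed by per-speaker HRTF convolution — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the sine/cosine panning law are assumptions (the cosine/sine pair is the standard way to satisfy the constant-power constraint g1² + g2² = 1).

```python
import numpy as np

def cpp_gains(source_az, spk_az_pair):
    """Constant power panning gains for a source located between two
    adjacent virtual speakers (30-degree spacing in the paper's layout).
    Returns (g1, g2) with g1**2 + g2**2 == 1, so total power is constant."""
    a1, a2 = spk_az_pair
    span = (a2 - a1) % 360                      # 30 degrees in this layout
    frac = ((source_az - a1) % 360) / span      # 0..1 position within the pair
    theta = frac * (np.pi / 2)                  # map position onto a quarter circle
    return np.cos(theta), np.sin(theta)

def binaural_render(mapped_signals, hrtf_left, hrtf_right):
    """Sum of per-speaker convolutions with the left/right HRTF impulse
    responses, yielding the stereo binaural output."""
    n = len(mapped_signals[0]) + len(hrtf_left[0]) - 1
    left, right = np.zeros(n), np.zeros(n)
    for sig, hl, hr in zip(mapped_signals, hrtf_left, hrtf_right):
        left += np.convolve(sig, hl)
        right += np.convolve(sig, hr)
    return left, right
```

For example, a source at 15 degrees between speakers at 0 and 30 degrees receives equal gains of √2/2 on each, and the panned speaker feeds are then convolved with that speaker's HRTF pair and summed into the left and right ears.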
Keywords: binaural rendering, constant power panning, HRTF, multi-channel audio, realistic audio.
Scope of the Article: Next Generation Internet & Web Architectures