
Using librosa, I created MFCCs for my audio file as follows:

import librosa
y, sr = librosa.load('myfile.wav')
print(y)
print(sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr)

I also have a text file containing manual annotations [start, stop, label] that correspond to the audio, like this:

0.0 2.0 sound1
2.0 4.0 sound2
4.0 6.0 silence
6.0 8.0 sound1

Question: how do I combine the MFCCs generated by librosa with the annotations from the text file?

The end goal is to pair each MFCC with its corresponding label and pass the pairs to a neural network, so that the network gets the MFCCs and their labels as training data.

If the features were one-dimensional, I could have N columns with N values and a final Y column holding the class label. But I am confused about how to proceed, because the MFCC array has a shape like (16, X) or (20, Y), so I don't know how to combine the two.

My sample MFCCs are here: https://gist.github.com/manbharae/0a53f8dfef6055feef1d8912044e1418

Please help. Thank you.

Update: the goal is to train a neural network so that it can recognize new sounds it encounters in the future.

I googled and found that MFCCs are very good for speech. However, my audio contains speech and I want to recognize non-speech. Are there any other recommended audio features for a general-purpose audio classification/recognition task?


1 Answer


Try the following. The explanation is included in the code.

import numpy
import librosa

# The following function returns a label index for a point in time (tp).
# This is pseudo code for you to complete;
# one possible implementation is sketched after this answer's code.
def getLabelIndexForTime(tp):
    # Search the loaded annotations for the label that corresponds to the given time,
    # then convert the label to an index that represents its unique value in the set,
    # i.e. 'sound1' = 0, 'sound2' = 1, ...
    # print(tp)  # for debug
    label_index = 0  # replace with the logic described above
    return label_index


if __name__ == '__main__':
    # Load the waveform samples and convert them to MFCCs
    raw_samples, sample_rate = librosa.load('Front_Right.wav')
    mfcc = librosa.feature.mfcc(y=raw_samples, sr=sample_rate)
    print('Wave duration is %4.2f seconds' % (len(raw_samples) / float(sample_rate)))

    # Create the network's input training data, X.
    # mfcc is organized (feature, sample) but the net needs (sample, feature),
    # so X is mfcc reorganized to (sample, feature).
    X = numpy.moveaxis(mfcc, 1, 0)
    print('mfcc.shape:', mfcc.shape)
    print('X.shape:   ', X.shape)

    # Note that 512 samples is the default 'hop_length' used in calculating
    # the mfcc, so each mfcc frame spans 512/sample_rate seconds.
    mfcc_samples = mfcc.shape[1]
    mfcc_span = 512 / float(sample_rate)
    print('MFCC calculated duration is %4.2f seconds' % (mfcc_span * mfcc_samples))

    # For the n network input samples, calculate the time point where each occurs
    # and get the appropriate label index for it.
    # Use +0.5 to get the middle of the mfcc frame's point in time.
    Y = []
    for sample_num in range(mfcc_samples):
        time_point = (sample_num + 0.5) * mfcc_span
        label_index = getLabelIndexForTime(time_point)
        Y.append(label_index)
    Y = numpy.array(Y)

    # Y now contains the network's output training values.
    # Note: for some nets you may need to convert this to one-hot format.
    print('Y.shape:   ', Y.shape)
    assert Y.shape[0] == X.shape[0]  # X and Y have the same number of samples

    # Train the net with something like...
    # model.fit(X, Y, ...)   # i.e. for a Keras NN model
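
In case it helps, here is a minimal sketch of one way to complete getLabelIndexForTime, assuming a whitespace-delimited annotation file laid out exactly as in your question ("start stop label" per line); the file name 'annotations.txt' and the fallback for times past the last annotation are my assumptions:

annotations = []     # list of (start_seconds, stop_seconds, label)
label_to_index = {}  # maps each unique label string to an integer index

# File name 'annotations.txt' is an assumption; adjust to your setup
with open('annotations.txt') as f:
    for line in f:
        start, stop, label = line.split()
        annotations.append((float(start), float(stop), label))
        if label not in label_to_index:
            label_to_index[label] = len(label_to_index)

def getLabelIndexForTime(tp):
    # Return the index of the label whose [start, stop) interval contains tp
    for start, stop, label in annotations:
        if start <= tp < stop:
            return label_to_index[label]
    # Assumed fallback: reuse the last annotation's label if tp lies past the end
    return label_to_index[annotations[-1][2]]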

I should mention that the Y data here is intended for a network with a softmax output that can be trained with integer label data. Keras models accept this via the sparse_categorical_crossentropy loss function (I believe it converts to one-hot encoding internally). Other frameworks require the Y training labels in one-hot encoded format, which is more common; there are plenty of examples of how to do the conversion. For your case you could do something like...

# num_label_types is the number of distinct labels (e.g. len(label_to_index) above)
Yoh = numpy.zeros(shape=(Y.shape[0], num_label_types), dtype='float32')
for i, val in enumerate(Y):
    Yoh[i, val] = 1.0
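
If you end up using Keras anyway, the same conversion is available as a built-in helper; a one-line equivalent (assuming num_label_types as above):

from keras.utils import to_categorical

# Equivalent one-hot conversion using Keras' built-in utility
Yoh = to_categorical(Y, num_classes=num_label_types)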

As for whether MFCCs are acceptable for non-speech classification, I would expect them to work, but you may want to experiment with their parameters; e.g. librosa lets you pass n_mfcc=40 so you get 40 features instead of just 20. For fun, you might try replacing the MFCCs with a simple FFT of the same size (512 samples) and see which works best.
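
A quick sketch of both variants; the hop size of 512 for the FFT case is my assumption, chosen to mirror the 512-sample MFCC framing above:

import numpy
import librosa

y, sr = librosa.load('myfile.wav')

# 40 MFCC features per frame instead of the default 20
mfcc40 = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)

# Plain FFT magnitudes over the same 512-sample framing:
# librosa.stft with n_fft=512 gives 257 magnitude bins per frame
spectrogram = numpy.abs(librosa.stft(y, n_fft=512, hop_length=512))

print(mfcc40.shape)       # (40, n_frames)
print(spectrogram.shape)  # (257, n_frames)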

answered 2018-01-30T16:35:38.133