
I'm using SFSpeechRecognizer in my application; thanks to a dedicated button (start speech recognition), it works nicely to help the end user enter a comment in a UITextView.

However, if the user first types some text by hand and then starts speech recognition, the previously typed text is deleted. The same happens if the user runs speech recognition twice on the same UITextView (the user dictates the first part of the text, stops recording, then starts recording again): the earlier text is erased.

So I'd like to know how to append the text recognized by SFSpeechRecognizer to the existing text.

Here is my code:

func recordAndRecognizeSpeech(){

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    let recognitionRequest = self.recognitionRequest
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        self.decaration.text = (result?.bestTranscription.formattedString)!

        isFinal = (result?.isFinal)!
        let bottom = NSMakeRange(self.decaration.text.characters.count - 1, 1)
        self.decaration.scrollRangeToVisible(bottom)

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionTask = nil
            self.recognitionRequest.endAudio()
            self.oBtSpeech.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }

}

I tried replacing

self.decaration.text = (result?.bestTranscription.formattedString)!

with

self.decaration.text += (result?.bestTranscription.formattedString)!

but then every recognized sentence gets duplicated.
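
To make the duplication concrete, here is a toy illustration (partialResults and textViewText are made-up names, not my real code), based on the fact that each partial result already contains the whole phrase recognized so far:

// Each partial result contains the entire phrase recognized so far.
let partialResults = ["Hello", "Hello world", "Hello world how are you"]

var textViewText = ""                    // stands in for self.decaration.text
for partial in partialResults {
    textViewText += partial              // what my "+=" change effectively does
}
print(textViewText)
// "HelloHello worldHello world how are you" -> every sentence appears several times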

Any idea how I can do that?


1 Answer


Try saving the text before you start the recognition.

func recordAndRecognizeSpeech(){
    // one change here
    let defaultText = self.decaration.text ?? ""

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    let recognitionRequest = self.recognitionRequest
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        // one change here
        self.decaration.text = defaultText + " " + (result?.bestTranscription.formattedString)!

        isFinal = (result?.isFinal)!
        let bottom = NSMakeRange(self.decaration.text.characters.count - 1, 1)
        self.decaration.scrollRangeToVisible(bottom)

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionTask = nil
            self.recognitionRequest.endAudio()
            self.oBtSpeech.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }
}

result?.bestTranscription.formattedString returns the whole phrase recognized so far, which is why self.decaration.text was being replaced every time a new result came back from SFSpeechRecognizer.
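
Since defaultText is captured again at the top of each call to recordAndRecognizeSpeech(), the snapshot taken for a second recording session already contains the text produced by the first, so repeated recordings keep appending as expected. A minimal sketch of just that pattern, stripped of the audio-engine setup (the nil-coalescing on decaration.text is my addition to avoid the optional):

// Snapshot taken once per recording session, before the recognition task starts.
let defaultText = self.decaration.text ?? ""

recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
    if let result = result {
        // formattedString is the whole phrase for this session,
        // so rebuild the text view from the snapshot instead of appending to it.
        self.decaration.text = defaultText + " " + result.bestTranscription.formattedString
    }
})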
