
I am trying out a speech recognition sample. When I start recognizing my speech through the microphone, I then try to have the iPhone speak the recognized text aloud. That part works, but the voice is too low. Can you guide me?

In contrast, if I run a simple button action that uses the same AVSpeechUtterance code, the volume is normal.

But once I go through the startRecognise() method, the volume is too low.

My code:

func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance()  //2
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        if result != nil
        {
            let lastword = result?.bestTranscription.formattedString.components(separatedBy: " ").last
            if lastword == "repeat" || lastword == "Repeat"{
                self.myUtterance2 = AVSpeechUtterance(string: "You have spoken repeat")
                self.myUtterance2.rate = 0.4
                self.myUtterance2.volume = 1.0
                self.myUtterance2.pitchMultiplier = 1.0
                self.synth1.speak(self.myUtterance2)
                // HERE VOICE IS TOO LOW. 
            }
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)  //11
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do 
    {
        try audioEngine.start()
    } 
    catch 
    {
        print("audioEngine couldn't start because of an error.")
    }
}

My button action:

func buttonAction()
{
   self.myUtterance2 = AVSpeechUtterance(string: "You are in button action")
   self.myUtterance2.rate = 0.4
   self.myUtterance2.volume = 1.0
   self.myUtterance2.pitchMultiplier = 1.0
   self.synth1.speak(self.myUtterance2)
   // Before going for startRecognise() method, 
   //I tried with buttonAction(), 
   //this time volume is normal. 
   //After startRecognise() method call, volume is too low in both methods.
}

2 Answers


Finally, I got the solution.

func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance()  //2
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        //try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }

    ... 
}

Once I commented out the line try audioSession.setMode(AVAudioSessionModeMeasurement), the volume was back to normal.

Answered 2017-07-31T05:46:34.037

After digging into the technical details, it turns out that overrideOutputAudioPort() only changes the current audio route temporarily.

func overrideOutputAudioPort(_ portOverride: AVAudioSession.PortOverride) throws

If your app uses the playAndRecord category, calling this method with the AVAudioSession.PortOverride.speaker option causes audio to be routed to the built-in speaker and microphone, regardless of other settings.

This change remains in effect only until the current route changes or you call this method again with the AVAudioSession.PortOverride.none option.
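
As an illustration of that last point (a minimal sketch, not taken from the original answer, reusing the audioSession instance from the code above), the temporary override can be reverted like this:

// Undo the temporary speaker override; audio falls back to the
// session's normal route (receiver, headphones, etc.).
try? audioSession.overrideOutputAudioPort(.none)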

try audioSession.setMode(AVAudioSessionModeDefault)

If you would rather permanently enable this behavior, you should instead set the category's defaultToSpeaker option. When no other accessory such as headphones is in use, this option always routes audio to the speaker instead of the receiver.
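
A minimal sketch of that alternative (not part of the original answers; it assumes the setCategory(_:mode:options:) API available since iOS 10) could look like this:

let session = AVAudioSession.sharedInstance()
do {
    // .defaultToSpeaker keeps playback on the built-in speaker while recording,
    // so no overrideOutputAudioPort(.speaker) call is needed after route changes.
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("Failed to configure the audio session: \(error)")
}

With this configuration the route stays on the speaker across route changes, as long as no headset or other accessory is attached.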

In Swift 5.x, the code above looks like this:

let audioSession = AVAudioSession.sharedInstance()
do {
  try audioSession.setCategory(.playAndRecord)
  try audioSession.setMode(.default)
  try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
  try audioSession.overrideOutputAudioPort(.speaker)
} catch {
  debugPrint("Enable to start audio engine")
  return
}

Setting the mode to measurement minimizes the amount of system-supplied signal processing applied to the input and output signals.

try audioSession.setMode(.measurement)

By commenting out this mode and using the default mode instead, audio routing to the built-in speaker and microphone is, in effect, permanently enabled.

Thanks to @McDonal_11 for the answer. I hope this helps in understanding the technical details.

Answered 2020-07-22T08:21:47.357