
I am trying to implement live translation of the user's microphone stream. The user will be using Chrome or Edge.

The browser POSTs the user's audio input every 500 ms. No data is lost.

On the server side, I essentially create a PushAudioInputStream and write the body of each POST into it:

 _httpListener = new HttpListener();
...
var context = _httpListener.GetContext();
...
var binaryReader = new BinaryReader(context.Request.InputStream);
public static void CopyFromReader(BinaryReader reader, PushAudioInputStream writer)
{
    byte[] readBytes;
    do
    {
        // Copy the POST body into the push stream in 2 KB chunks.
        readBytes = reader.ReadBytes(2048);
        writer.Write(readBytes);
    } while (readBytes.Length > 0); // once the reader is exhausted, the final Write receives an empty buffer
}
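
Not shown above is where the PushAudioInputStream itself comes from. A minimal sketch of that setup, assuming the stream lives on the shared _livestream object and uses the same 48 kHz / 16-bit / mono PCM format expected by the recognizer below (types from Microsoft.CognitiveServices.Speech.Audio):

// Assumed setup (not part of the original post): create the push stream once,
// with the same PCM format that azureSideRequest() uses.
var pushFormat = AudioStreamFormat.GetWaveFormatPCM(48000, 16, 1);
_livestream._audioStream = AudioInputStream.CreatePushStream(pushFormat);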

After the first POST, I start the method below running in parallel (essentially until the user leaves):

private async void azureSideRequest()
{
    // Standard settings for the audio data
    byte channels = 1;
    byte bitsPerSample = 16;
    uint samplesPerSecond = 48000; // <--- my mic
    var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
    var audioConfig = AudioConfig.FromStreamInput(_livestream._audioStream);
    var speechConfig = SpeechConfig.FromSubscription(ResourceKey, Region);

    using var recognizer = new SpeechRecognizer(speechConfig, "en-US", audioConfig);

    List<string> list = new List<string>();
    var signal = new SemaphoreSlim(0, 1);

    // If text was recognized, append it to the stash.
    recognizer.Recognized += (s, e) =>
    {
        if (e.Result.Text == "") return;
        _livestream._stashed.Add(e.Result.Text);
    };

    recognizer.Canceled += (s, e) =>
    {
        signal.Release(); // Immediately called, Reason: EndOfStream
    };

    await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
    await signal.WaitAsync();
}
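
For context, this is roughly how the pieces fit together on the server side. It is only a sketch: StartListening, the prefix URL and the firstPost flag are illustrative names, not the actual code.

private void StartListening() // hypothetical wrapper around the snippets above
{
    _httpListener = new HttpListener();
    _httpListener.Prefixes.Add("http://localhost:8080/audio/"); // assumed endpoint
    _httpListener.Start();

    bool firstPost = true;
    while (true) // until the user leaves
    {
        var context = _httpListener.GetContext(); // blocks until the next POST arrives

        if (firstPost)
        {
            firstPost = false;
            azureSideRequest(); // fire-and-forget (async void), runs in parallel
        }

        // Append this chunk's body to the push stream.
        using var binaryReader = new BinaryReader(context.Request.InputStream);
        CopyFromReader(binaryReader, _livestream._audioStream);

        context.Response.StatusCode = 200;
        context.Response.Close();
    }
}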

After the first POST, this code immediately fires .Canceled and returns. The reason given is EndOfStream. But I never .Close() the stream or anything like that...
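
For completeness, the Canceled event args also carry the cancellation details; a small diagnostic variant of the handler (a sketch, logging to the console) would be:

recognizer.Canceled += (s, e) =>
{
    // Reason, ErrorCode and ErrorDetails are available on SpeechRecognitionCanceledEventArgs.
    Console.WriteLine($"Canceled: Reason={e.Reason}, ErrorCode={e.ErrorCode}, Details={e.ErrorDetails}");
    signal.Release();
};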

