I've created a bidirectional stream with assistant.converse(), and sending audio works fine:
// protocol, credentials, ASSISTANT_API_ENDPOINT, micStream, through2 and config
// are set up elsewhere (omitted here)
const assistant = new protocol.EmbeddedAssistant(ASSISTANT_API_ENDPOINT, credentials);
const conversation = assistant.converse();

conversation.on('data', data => {
  console.log(data);
});

conversation.on('error', err => {
  console.log(err);
  micStream.end();
});

// the first message on the stream carries the audio config
conversation.write({ config });

micStream // an object that provides an audio stream
  .pipe(through2.obj((chunk, enc, cb) => cb(null, { 'audio_in': chunk })))
  .pipe(conversation);
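For context, here is a sketch of the kind of ConverseConfig the first write is expected to carry, following the field names in the v1alpha1 embedded_assistant.proto (the concrete values below are illustrative assumptions, not a dump of my actual config):

// Hypothetical config object for the first ConverseRequest on the stream.
// Field names follow ConverseConfig / AudioInConfig / AudioOutConfig from
// the v1alpha1 proto; encoding and sample rate values are example choices.
const config = {
  audio_in_config: {
    encoding: 'LINEAR16',      // raw 16-bit signed PCM from the microphone
    sample_rate_hertz: 16000
  },
  audio_out_config: {
    encoding: 'LINEAR16',      // PCM back from the assistant
    sample_rate_hertz: 16000,
    volume_percentage: 100
  }
};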
When I stop speaking and the assistant detects silence, it throws this error:
{ Error: Failed to parse server response
at ClientDuplexStream._emitStatusIfDone (/Users/arilotter/Projects/assistant/node_modules/grpc/src/node/src/client.js:201:19)
at ClientDuplexStream._receiveStatus (/Users/arilotter/Projects/assistant/node_modules/grpc/src/node/src/client.js:180:8)
    at /Users/arilotter/Projects/assistant/node_modules/grpc/src/node/src/client.js:649:14
  code: 13, metadata: undefined }