
My goal is to concatenate video fragments coming from multiple video files. The fragments are defined by arbitrary start and end times. At first I wanted to do this with a library like mp4parser, but it can only cut streams at sync (IFRAME) points, while I need higher precision.

My plan is to extract the encoded streams from the files -> decode -> encode -> mux the result into an mp4 file. Right now the code generally works, but the resulting video is white noise. Tested on a Nexus S and a Galaxy S3. My code is a combination of several examples (a rough setup sketch follows the list):

  • Reading a previously recorded file, based on MoviePlayer.java
  • Decode-encode: DecodeEditEncodeTest.java
  • Muxing a video stream into mp4 - yet another example, not relevant here
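
The setup sketch mentioned above. The resolution, bit rate, and file paths here are placeholder values for illustration, not my real ones:

import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;

void setUpPipeline(String inputPath, String outputPath) throws IOException {
    // Pull the encoded video track out of the source file.
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(inputPath);
    MediaFormat inputFormat = null;
    for (int i = 0; i < extractor.getTrackCount(); i++) {
        MediaFormat format = extractor.getTrackFormat(i);
        if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
            extractor.selectTrack(i);
            inputFormat = format;
            break;
        }
    }

    // Decoder configured without an output Surface, so decoded frames
    // come back in ByteBuffers.
    MediaCodec decoder = MediaCodec.createDecoderByType(
            inputFormat.getString(MediaFormat.KEY_MIME));
    decoder.configure(inputFormat, null, null, 0);
    decoder.start();

    // Encoder that re-encodes the decoded frames (illustrative values).
    MediaFormat outputFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    outputFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
    outputFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    outputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    outputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(outputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();

    // Muxer that receives the encoder output (the drain loop is not shown).
    MediaMuxer muxer = new MediaMuxer(outputPath,
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
}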

I wanted to simplify the examples, since I don't need to process the frames in between. I am trying to feed the buffers from the decoder output to the encoder input without a Surface in between. The overall process works, in the sense that the code runs to completion and produces a playable video file. However, the content of the file is white noise.

Here is the snippet that transfers the frames from the decoder to the encoder. What is wrong, and how can I make it work?

...
} else { // decoderStatus >= 0
    if (VERBOSE) Log.d(TAG, "surface decoder given buffer "
            + decoderStatus + " (size=" + info.size + ")");
    // The ByteBuffers are null references, but we still get a nonzero
    // size for the decoded data.
    boolean doRender = (info.size != 0);
    // As soon as we call releaseOutputBuffer, the buffer will be forwarded
    // to SurfaceTexture to convert to a texture.  The API doesn't
    // guarantee that the texture will be available before the call
    // returns, so we need to wait for the onFrameAvailable callback to
    // fire.  If we don't wait, we risk rendering from the previous frame.
    //   decoder.releaseOutputBuffer(decoderStatus, doRender);
    if (doRender) {
        // This waits for the image and renders it after it arrives.
        // (Surface path from the original example, disabled here.)
        // if (VERBOSE) Log.d(TAG, "awaiting frame");
        // outputSurface.awaitNewImage();
        // outputSurface.drawImage();
        // // Send it to the encoder.
        // inputSurface.setPresentationTime(info.presentationTimeUs * 1000);
        // if (VERBOSE) Log.d(TAG, "swapBuffers");
        // inputSurface.swapBuffers();

        // Instead, copy the decoded bytes directly into an encoder input buffer.
        encoderStatus = encoder.dequeueInputBuffer(-1);

        if (encoderStatus >= 0) {
            encoderInputBuffers[encoderStatus].clear();

            decoderOutputBuffers[decoderStatus].position(info.offset);
            decoderOutputBuffers[decoderStatus].limit(info.offset + info.size);

            encoderInputBuffers[encoderStatus].put(decoderOutputBuffers[decoderStatus]);
            encoder.queueInputBuffer(encoderStatus, 0, info.size,
                    info.presentationTimeUs * 1000, 0);
        }
    }

    decoder.releaseOutputBuffer(decoderStatus, false);
...

1 Answer


It's much better to use a Surface than a ByteBuffer. It's faster as well as more portable. Surfaces are queues of buffers, not just framebuffers for pixel data; decoded video frames are passed around by handle. If you use ByteBuffers, the video data has to be copied a couple of times, which will slow you down.

Create the MediaCodec encoder, get the input surface, and pass that to the decoder as its output surface.
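
A minimal sketch of that wiring, assuming API 18+ and an H.264 encoder; the width, height, and bit rate are illustrative, and inputFormat is the track format obtained from MediaExtractor:

import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

MediaCodec wireDecoderToEncoder(MediaFormat inputFormat) throws IOException {
    // Configure the encoder first: its input Surface has to exist before
    // the decoder can be configured to render into it.
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface encoderInputSurface = encoder.createInputSurface();  // API 18+
    encoder.start();

    // The decoder renders straight into the encoder's input Surface;
    // releaseOutputBuffer(index, true) then forwards each decoded frame
    // without the pixels ever passing through a ByteBuffer.
    MediaCodec decoder = MediaCodec.createDecoderByType(
            inputFormat.getString(MediaFormat.KEY_MIME));
    decoder.configure(inputFormat, encoderInputSurface, null, 0);
    decoder.start();
    return decoder;
}

With this wiring the decoderOutputBuffers copy loop from the question disappears entirely.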

If you need to work with API 16/17, you're stuck with ByteBuffers. If you search around you can find reverse-engineered converters for the wacky Qualcomm formats, but bear in mind that there were no CTS tests until API 18, so there are no guarantees.
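
If you do go the ByteBuffer route, the first thing worth checking is whether the decoder's actual output color format matches what the encoder was configured to accept; a mismatch is a classic source of noise-like output. A sketch of the check (the conversion itself is device-specific and omitted):

// After dequeueOutputBuffer() returns INFO_OUTPUT_FORMAT_CHANGED, read
// the decoder's real output color format.
MediaFormat decoderFormat = decoder.getOutputFormat();
int decoderColor = decoderFormat.getInteger(MediaFormat.KEY_COLOR_FORMAT);
int encoderColor = MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar;
if (decoderColor != encoderColor) {
    // The raw frames need converting (planar vs. semi-planar layout,
    // vendor-specific tiling, stride and slice-height padding) before
    // being queued on the encoder; copying the bytes straight across
    // produces garbage.
}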

Answered 2015-04-21T15:44:54.063