
I want to display a CALayer on top of video captured with AVCapture. I can display the layer, but for each new frame the previous frame's layer should be removed.

My code is:

[CATransaction begin];
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

for (int i = 0; i < faces.size(); i++) {
    CGRect faceRect;
    // Get the Graphics Context

    faceRect.origin.x = xyPoints.x;
    faceRect.origin.y = xyPoints.y;
    faceRect.size.width = 50;  // faces[i].width;
    faceRect.size.height = 50; // faces[i].height;

    CALayer *featureLayer = nil;

    // faceRect = CGRectApplyAffineTransform(faceRect, t);
    if (!featureLayer) {
        featureLayer = [[CALayer alloc] init];

        featureLayer.borderColor = [[UIColor redColor] CGColor];
        featureLayer.borderWidth = 10.0f;
        [self.view.layer addSublayer:featureLayer];
    }

    featureLayer.frame = faceRect;

    NSLog(@"frame-x - %f, frame-y - %f, frame-width - %f, frame-height - %f",
          featureLayer.frame.origin.x, featureLayer.frame.origin.y,
          featureLayer.frame.size.width, featureLayer.frame.size.height);
}

//  [featureLayer removeFromSuperlayer]; 
[CATransaction commit];

where faces is in OpenCV format (const std::vector<cv::Rect>). I need to know where to place the call [featureLayer removeFromSuperlayer];

Note: "faces" is not used for face detection here... it is just a rectangle.


1 Answer


I found the solution... featureLayer is the CALayer object, and I give it a name as its identity, like:

featureLayer.name = @"earLayer";

Whenever I detect the object in a frame, I get the sublayers of the main view, like:

NSArray *sublayers = [NSArray arrayWithArray:[self.view.layer sublayers]];

and count the sublayers and check each one in a for loop, as shown below:

NSUInteger sublayersCount = [sublayers count];
NSUInteger currentSublayer = 0;
for (CALayer *layer in sublayers) {
    NSString *layerName = [layer name];
    if ([layerName isEqualToString:@"earLayer"])
        [layer setHidden:YES];
}
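
Hiding the old layers works, but they stay attached to the view and accumulate over time. A variant (my own sketch, not part of the original answer) is to remove the stale layers instead, which is also where the [featureLayer removeFromSuperlayer] call from the question fits: run the cleanup over the named layers at the start of each frame, before adding the new ones. It assumes the overlay layers were tagged @"earLayer" as above.

// Sketch: remove the previous frame's overlay layers instead of hiding them.
[CATransaction begin];
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

// Iterate over a copy, because removeFromSuperlayer mutates self.view.layer.sublayers.
for (CALayer *layer in [self.view.layer.sublayers copy]) {
    if ([[layer name] isEqualToString:@"earLayer"]) {
        [layer removeFromSuperlayer];
    }
}

// ... then add a fresh, named featureLayer for each rectangle detected in this frame.
[CATransaction commit];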

Now I get the correct layers with the detected objects.
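
Another option, just a sketch building on the same naming idea (not something from the original answer), is to reuse a single overlay layer across frames instead of adding and hiding new ones: look up an existing sublayer by name, create it only if it is missing, and simply move its frame. This covers the one-rectangle case from the question; multiple detections would need one name per index.

// Sketch: find-or-create the overlay layer by name and reuse it each frame.
// The name @"earLayer" and self.view are assumptions carried over from above.
CALayer *featureLayer = nil;
for (CALayer *layer in self.view.layer.sublayers) {
    if ([[layer name] isEqualToString:@"earLayer"]) {
        featureLayer = layer;
        break;
    }
}
if (!featureLayer) {
    featureLayer = [[CALayer alloc] init];
    featureLayer.name = @"earLayer";
    featureLayer.borderColor = [[UIColor redColor] CGColor];
    featureLayer.borderWidth = 10.0f;
    [self.view.layer addSublayer:featureLayer];
}
featureLayer.hidden = NO;
featureLayer.frame = faceRect; // faceRect computed from the current detection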

Answered 2013-06-26T13:50:51.200