
I'm loading three 320 x 480px PNGs into separate UIImageViews. The PNGs are named body, mouth, and hat. By stacking the images on top of one another I create a character whose body parts can be swapped out easily. See the picture >

http://www.1976inc.com/dev/iphone/beast.jpg

My problem is that when you touch the topmost UIImageView, the entire image, transparency included, registers the touch event. What I want is for touch events to register only on the opaque parts of each PNG, so the user can interact with all three UIImageViews.

I'm sure this is simple, but I'm new to iPhone development and I can't seem to figure it out.


Update: So I realized the easiest way to accomplish what I want is to loop over the PNGs, create a context for each one, and then grab the color data for the pixel where the touch event occurred. If the pixel represents a transparent area, I move on to the next image and try the same thing. This works, but only the first time. For example, the first time I tap the main view I get this output:

2010-07-26 15:50:06.285 colorTest[21501:207] hat
2010-07-26 15:50:06.286 colorTest[21501:207] offset: 227024 colors: RGB A 0 0 0  0
2010-07-26 15:50:06.293 colorTest[21501:207] mouth
2010-07-26 15:50:06.293 colorTest[21501:207] offset: 227024 colors: RGB A 0 0 0  0
2010-07-26 15:50:06.298 colorTest[21501:207] body
2010-07-26 15:50:06.299 colorTest[21501:207] offset: 227024 colors: RGB A 255 255 255  255

Which is exactly what I want to see. But if I tap the same area again, I get:

2010-07-26 15:51:21.625 colorTest[21501:207] hat
2010-07-26 15:51:21.626 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255  255
2010-07-26 15:51:21.628 colorTest[21501:207] mouth
2010-07-26 15:51:21.628 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255  255
2010-07-26 15:51:21.630 colorTest[21501:207] body
2010-07-26 15:51:21.631 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255  255

Here's the code I'm using:

The touch handler lives in the app's main view:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
 NSLog(@"Touched balls");
 UITouch *touch = [touches anyObject];
 CGPoint point = [touch locationInView:self.view];

 UIColor *transparent = [UIColor colorWithRed:0 green:0 blue:0 alpha:0];

 for( viewTest *currentView in imageArray){
  //UIColor *testColor = [self getPixelColorAtLocation:point image:currentView.image];
  [currentView getPixelColorAtLocation:point];

 }

}
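For what it's worth, here is a sketch of the same loop with the touch point converted into each image view's own coordinate space via locationInView: (the names viewTest, imageArray, and getPixelColorAtLocation: are taken from the question; the early break on the first opaque pixel is an assumption about the desired behavior, not the asker's code):

```objc
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    for (viewTest *currentView in imageArray) {
        // Convert the touch into this image view's own coordinate
        // space; coordinates in self.view only line up with the
        // image's pixels when the image view sits at the origin.
        CGPoint local = [touch locationInView:currentView];
        UIColor *color = [currentView getPixelColorAtLocation:local];
        // Stop at the first (topmost) image with an opaque pixel.
        if (color != nil && CGColorGetAlpha(color.CGColor) > 0.0) {
            NSLog(@"hit %@", currentView);
            break;
        }
    }
}
```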

It calls a method on a custom class that extends UIImageView. The method returns the color of the pixel under the touch event:

- (UIColor*) getPixelColorAtLocation:(CGPoint)point
{
 UIColor *color = nil;
 CGImageRef inImage = self.image.CGImage;

 CGContextRef context = [self createARGBBitmapContextFromImage:inImage];

 if(context == NULL) return nil;

 size_t w = CGImageGetWidth(inImage);
 size_t h = CGImageGetHeight(inImage);
 CGRect rect = {{0,0},{w,h}}; 

 // Draw the image to the bitmap context. Once we draw, the memory
 // allocated for the context for rendering will then contain the
 // raw image data in the specified color space.
 CGContextDrawImage(context, rect, inImage); 

 // Now we can get a pointer to the image data associated with the bitmap
 // context.
 unsigned char* data = CGBitmapContextGetData (context);
 if (data != NULL) {
  //offset locates the pixel in the data from x,y.
  //4 for 4 bytes of data per pixel, w is width of one row of data.
  int offset = 4*((w*round(point.y))+round(point.x));
  int alpha =  data[offset];
  int red = data[offset+1];
  int green = data[offset+2];
  int blue = data[offset+3];
  NSLog(@"%@",name);
  NSLog(@"offset: %i colors: RGB A %i %i %i  %i ",offset,red,green,blue,alpha);
  color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
 }

 // When finished, release the context
 CGContextRelease(context);

 // Free image data memory for the context
 if (data) { free(data); }

 return color;
}

- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage {

 CGContextRef    context = NULL;
 CGColorSpaceRef colorSpace;
 void *          bitmapData;
 int             bitmapByteCount;
 int             bitmapBytesPerRow;

 // Get image width, height. We'll use the entire image.
 size_t pixelsWide = CGImageGetWidth(inImage);
 size_t pixelsHigh = CGImageGetHeight(inImage);

 // Declare the number of bytes per row. Each pixel in the bitmap in this
 // example is represented by 4 bytes; 8 bits each of red, green, blue, and
 // alpha.
 bitmapBytesPerRow   = (pixelsWide * 4);
 bitmapByteCount     = (bitmapBytesPerRow * pixelsHigh);

 // Use the generic RGB color space.
 colorSpace = CGColorSpaceCreateDeviceRGB();//CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
 if (colorSpace == NULL)
 {
  fprintf(stderr, "Error allocating color space\n");
  return NULL;
 }

 // Allocate memory for image data. This is the destination in memory
 // where any drawing to the bitmap context will be rendered.
 bitmapData = malloc( bitmapByteCount );
 if (bitmapData == NULL)
 {
  fprintf (stderr, "Memory not allocated!");
  CGColorSpaceRelease( colorSpace );
  return NULL;
 }

 // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
 // per component. Regardless of what the source image format is
 // (CMYK, Grayscale, and so on) it will be converted over to the format
 // specified here by CGBitmapContextCreate.
 context = CGBitmapContextCreate (bitmapData,
          pixelsWide,
          pixelsHigh,
          8,      // bits per component
          bitmapBytesPerRow,
          colorSpace,
          kCGImageAlphaPremultipliedFirst);
 if (context == NULL)
 {
  free (bitmapData);
  fprintf (stderr, "Context not created!");
 }

 // Make sure and release colorspace before returning
 CGColorSpaceRelease( colorSpace );

 return context;
}
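As an aside, an alternative to looping from the controller (a hedged sketch, reusing the question's getPixelColorAtLocation: method) is to override UIView's pointInside:withEvent: in the UIImageView subclass, so hit-testing itself skips transparent pixels and the touch falls through to the image view underneath:

```objc
// In the UIImageView subclass (viewTest). Returning NO makes the
// touch pass through this view to the next sibling in the stack.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    UIColor *color = [self getPixelColorAtLocation:point];
    if (color == nil) return NO;
    // Only claim the touch when the pixel under it is not transparent.
    return CGColorGetAlpha(color.CGColor) > 0.0;
}
```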

Update 2: Thanks for the quick reply. I'm not sure I follow you. If I set hidden to true, the UIImageView "layer" gets hidden. What I want is for the transparent portion of the PNG not to register touch events. So, for example, if you look at the image I included in the post: if you click the worm, stem, or leaf ("they are all part of the same PNG"), then that ImageView fires a touch event, and if you touch the circle, then that ImageView fires a touch event. By the way, here is the code I use to place them in the view:

UIView *tempView = [[UIView alloc] init];


UIImageView *imageView1 = [[UIImageView alloc] initWithImage:[UIImage  imageNamed:@"body.png"] ];
[imageView1 setUserInteractionEnabled:YES];
UIImageView *imageView2 = [[UIImageView alloc] initWithImage:[UIImage  imageNamed:@"mouth.png"] ];
[imageView2 setUserInteractionEnabled:YES];
UIImageView *imageView3 = [[UIImageView alloc] initWithImage:[UIImage  imageNamed:@"hat.png"] ];
[imageView3 setUserInteractionEnabled:YES];

[tempView addSubview:imageView1];
[tempView addSubview:imageView2];
[tempView addSubview:imageView3];

[self.view addSubview:tempView];

1 Answer


First off:

You could work with transparency, but hiding the images may accomplish what you need.

You can hide an image with [myImage setHidden:YES]; or myImage.hidden = YES;

if (CGRectContainsPoint(myImage.frame, touchPosition) && myImage.hidden == NO)
{
}

This ensures that your image is visible when it is tapped, because myImage.hidden == NO checks whether the image is hidden.
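Applied to the three stacked views, that check might look like the following sketch (imageViews and touchPosition are hypothetical names; the array is assumed to hold hat, mouth, and body from top to bottom):

```objc
// Walk the stacked image views from top to bottom and pick the first
// visible one whose frame contains the touch.
for (UIImageView *myImage in imageViews) {
    if (!myImage.hidden && CGRectContainsPoint(myImage.frame, touchPosition)) {
        // React to the touch on this layer only.
        break;
    }
}
```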

Answered 2010-07-24T20:57:51.313