I tried this:

CIFilter *dodgeFilter = [CIFilter filterWithName:@"CIColorDodgeBlendMode"];

instead of:

GPUImageDivideBlendFilter *divideBlendFilter = [[GPUImageDivideBlendFilter alloc] init];

but the results are not the same.
Have you tried CIDivideBlendMode?
CIImage *img1 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img1.jpg"]];
CIImage *img2 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img2.jpg"]];
CIFilter *filterBuiltin = [CIFilter filterWithName:@"CIDivideBlendMode"
                                     keysAndValues:@"inputImage", img1,
                                                   @"inputBackgroundImage", img2, nil];
CIImage *outputImageBuiltin = [filterBuiltin outputImage];
UIImage *filteredImageBuiltin = [self imageWithCIImage:outputImageBuiltin];
I thought it would be interesting to try building a custom CIFilter from an existing GPUImageFilter, since iOS 8 lets us write custom Core Image kernels. This should make it possible to translate any GPUImageFilter into its CIFilter counterpart.

Before getting started, it is worth reviewing Apple's documentation on what you need to know before writing a custom filter, as well as the Core Image Kernel Language Reference.
We start by writing a custom kernel that closely mirrors the GPUImageDivideBlendFilter shader. The one exception is the flow-control part, which does not appear to be supported in the Core Image Kernel Language; we work around it with the *_branch1 and *_branch2 multipliers.
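To make the workaround concrete, here is a minimal sketch of the pattern for the red channel (the branching form in the comment is my reading of the original GPUImage shader logic, not a verbatim copy):

// Branching form, which the Core Image Kernel Language rejects when the
// condition depends on image data:
//     if (overlay.a == 0.0 || base.r / overlay.r > base.a / overlay.a)
//         ra = ra1;
//     else
//         ra = ra2;
// Branch-free equivalent: step(edge, x) returns 1.0 when x >= edge, else 0.0,
// so ra_branch2 is 1.0 only when overlay.a is at least EPSILON and the second
// formula applies; ra_branch1 is its complement.
float ra_branch2 = step(EPSILON, overlay.a) * step(base.r / overlay.r, base.a / overlay.a);
float ra_branch1 = step(ra_branch2, 0.5);
float ra = ra1 * ra_branch1 + ra2 * ra_branch2;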
Creating the CIFilter is straightforward:

Add a new ImageDivideBlendFilter.cikernel file (your custom filter kernel) to your Xcode project:
kernel vec4 GPUImageDivideBlendFilter(sampler image1, sampler image2)
{
    float EPSILON = 1e-4;

    vec4 base = sample(image1, samplerCoord(image1));
    vec4 overlay = sample(image2, samplerCoord(image2));

    float ra1 = overlay.a * base.a + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
    float ra2 = (base.r * overlay.a * overlay.a) / overlay.r + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
    // https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CIKernelLangRef/ci_gslang_ext.html#//apple_ref/doc/uid/TP40004397-CH206-TPXREF101
    // "Other flow control statements (if, for, while, do while) are supported only when the loop condition can be inferred at the time the code compiles"
    float ra_branch2 = step(EPSILON, overlay.a) * step(base.r / overlay.r, base.a / overlay.a);
    float ra_branch1 = step(ra_branch2, 0.5);
    float ra = ra1 * ra_branch1 + ra2 * ra_branch2;

    float ga1 = overlay.a * base.a + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
    float ga2 = (base.g * overlay.a * overlay.a) / overlay.g + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
    float ga_branch2 = step(EPSILON, overlay.a) * step(base.g / overlay.g, base.a / overlay.a);
    float ga_branch1 = step(ga_branch2, 0.5);
    float ga = ga1 * ga_branch1 + ga2 * ga_branch2;

    float ba1 = overlay.a * base.a + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
    float ba2 = (base.b * overlay.a * overlay.a) / overlay.b + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
    float ba_branch2 = step(EPSILON, overlay.a) * step(base.b / overlay.b, base.a / overlay.a);
    float ba_branch1 = step(ba_branch2, 0.5);
    float ba = ba1 * ba_branch1 + ba2 * ba_branch2;

    return vec4(ra, ga, ba, 1.0);
}
Add the interface and implementation for your filter:
// ImageDivideBlendFilter.h
#import <CoreImage/CoreImage.h>
@interface ImageDivideBlendFilter : CIFilter
@end
// ImageDivideBlendFilter.m
#import "ImageDivideBlendFilter.h"

@interface ImageDivideBlendFilter()
{
    CIImage *_image1;
    CIImage *_image2;
}
@end

@implementation ImageDivideBlendFilter

static CIColorKernel *imageDivideBlendKernel = nil;

+ (void)initialize
{
    // This loads the kernel code, which is compiled at run time. We do it just once to optimize performance.
    if (!imageDivideBlendKernel)
    {
        NSBundle *bundle = [NSBundle bundleForClass:[self class]];
        NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:@"ImageDivideBlendFilter" ofType:@"cikernel"] encoding:NSUTF8StringEncoding error:nil];
        NSArray *kernels = [CIColorKernel kernelsWithString:code];
        imageDivideBlendKernel = [kernels firstObject];
    }
}

- (CIImage *)outputImage
{
    return [imageDivideBlendKernel applyWithExtent:_image1.extent roiCallback:nil arguments:@[_image1, _image2]];
}

+ (CIFilter *)filterWithName:(NSString *)name
{
    CIFilter *filter;
    filter = [[self alloc] init];
    return filter;
}

@end
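As a quick sanity check, the filter can also be instantiated and fed directly, without going through +filterWithName:keysAndValues:. The sketch below is illustrative only (img1 and img2 are CIImage instances as in the other snippets) and relies on the assumption that default Key-Value Coding resolves the "image1" and "image2" keys to the _image1 and _image2 instance variables, since the class declares no setters:

ImageDivideBlendFilter *divideFilter = [[ImageDivideBlendFilter alloc] init];
[divideFilter setValue:img1 forKey:@"image1"]; // KVC falls back to the _image1 ivar
[divideFilter setValue:img2 forKey:@"image2"]; // KVC falls back to the _image2 ivar
CIImage *result = divideFilter.outputImage;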
We are now ready to use the newly created custom filter in our application:
- (void)filterDemo
{
    CIImage *img1 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img1.jpg"]];
    CIImage *img2 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img2.jpg"]];

    [ImageDivideBlendFilter class]; // preload the kernel, it speeds up loading the filter if used multiple times

    CIFilter *filterCustom = [CIFilter filterWithName:@"ImageDivideBlendFilter" keysAndValues:@"image1", img2, @"image2", img1, nil];
    CIImage *outputImageCustom = [filterCustom outputImage];
    UIImage *filteredImageCustom = [self imageWithCIImage:outputImageCustom];
}
- (UIImage *)imageWithCIImage:(CIImage *)ciimage
{
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:ciimage fromRect:[ciimage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImg;
}
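A side note (an assumption about typical usage on my part, not something from the original code): contextWithOptions: creates a new CIContext on every call, and context creation is relatively expensive. If you convert many images, one option is to create the context once and reuse it, for example:

- (UIImage *)imageWithCIImage:(CIImage *)ciimage
{
    static CIContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil]; // created once, reused for every conversion
    });

    CGImageRef cgimg = [context createCGImage:ciimage fromRect:[ciimage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImg;
}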
The built-in and the custom filter produce the same result.

I have put a sample project on GitHub, https://github.com/tcamin/CustomCoreImageFilteringDemo, which also shows how to do the CIFiltering in Swift.
The CIColorKernel code in that answer does not work; in fact, any attempt to pass more than one sampler object (image) to the kernel fails.

Also (I know this is tangential to the question, but I feel it should be pointed out) the kernel code has something backwards, specifically the premultiplication-related calls. When you work with the alpha channel independently of the other three channels, unpremultiply every sampler (or color) object first; when you have the finished result, recombine the channels with premultiply. Skip this only if your computation does not change, blend, or otherwise use both sampler (or color) objects.
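To illustrate the unpremultiply/premultiply pattern being described, here is a minimal, purely illustrative fragment in the Core Image Kernel Language (it is not a drop-in fix for the kernel above):

// Unpremultiply before working on the color channels independently of alpha...
vec4 base = unpremultiply(sample(image1, samplerCoord(image1)));
// ... do the per-channel math on base.rgb here ...
// ...and premultiply again when returning the finished result.
return premultiply(base);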