
On iOS, I was able to create 3 CGImage objects and use a CADisplayLink at 60 fps to do

self.view.layer.contents = (__bridge id) imageArray[counter++ % 3];

inside the view controller, so that each time the display link fires, a different image is set as the view's CALayer contents, which is a bitmap.
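
Roughly, the setup looks like this (a minimal sketch; imageArray and counter are just ivars I keep in the view controller):

    - (void)viewDidLoad {
        [super viewDidLoad];
        // imageArray[0..2] are CGImageRefs created earlier; counter is an NSUInteger ivar.
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(tick:)];
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)tick:(CADisplayLink *)link {
        // Swap the layer's backing bitmap directly; no drawRect: is involved.
        self.view.layer.contents = (__bridge id)imageArray[counter++ % 3];
    }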

And this, all by itself, alters what the screen shows: the screen just loops through these 3 images at 60 fps. There is no UIView drawRect:, no CALayer display or drawInContext:, and no delegate drawLayer:inContext:. All the code does is change the CALayer's contents.

I also tried adding a smaller sublayer to self.view.layer and setting that sublayer's contents instead, and that sublayer cycles through the same 3 images.
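
Something along these lines (a sketch; _sublayer is a hypothetical CALayer ivar, and the frame is arbitrary):

    // Setup, e.g. in viewDidLoad:
    _sublayer = [CALayer layer];
    _sublayer.frame = CGRectMake(20.0, 20.0, 160.0, 160.0);  // arbitrary size and position
    [self.view.layer addSublayer:_sublayer];

    // In the display link callback, set the sublayer's contents instead:
    _sublayer.contents = (__bridge id)imageArray[counter++ % 3];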

So this is very similar to the old days of the Apple ][ or the King's Quest III era of DOS games, where there was a single bitmap and the screen simply showed whatever that bitmap contained at all times.

Except this time it is not a single bitmap, but a tree (or a list) of bitmaps, and the graphics card constantly uses the painter's model to composite those bitmaps (with position and opacity) onto the main screen. So it seems that drawRect:, CALayer, everything, was designed to achieve this final purpose.
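
To make my mental model concrete, here is a purely conceptual sketch of what I imagine the compositor doing; FakeLayer and the grayscale pixels are made up and have nothing to do with the real Core Animation or GPU internals:

    #include <stdint.h>

    // A made-up "layer": a bitmap plus position and opacity.
    typedef struct {
        const float *pixels;   // grayscale bitmap, one float per pixel, 0..1
        int width, height;
        int x, y;              // position on the destination
        float opacity;
    } FakeLayer;

    // Paint the layers back to front onto the destination (the "screen").
    static void composite(float *dst, int dstWidth,
                          const FakeLayer *layers, int count) {
        for (int i = 0; i < count; i++) {                      // back to front
            const FakeLayer *l = &layers[i];
            for (int row = 0; row < l->height; row++) {
                for (int col = 0; col < l->width; col++) {
                    float src = l->pixels[row * l->width + col];
                    float *d  = &dst[(l->y + row) * dstWidth + (l->x + col)];
                    *d = *d * (1.0f - l->opacity) + src * l->opacity;   // simple blend
                }
            }
        }
    }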

Is that how it works? Does the graphics card take an ordered list of bitmaps, or a tree of bitmaps, and then constantly display them? (To simplify, let's ignore implicit animation in the Core Animation framework.) What is actually happening down at the graphics-card level? And is this mechanism roughly the same on iOS, Mac OS X, and PCs?

(This question aims to understand how our graphics programming actually gets rendered by modern graphics cards, since, for example, to understand UIView and how CALayer works, or to use a CALayer's bitmap directly, we do need to understand the underlying graphics architecture.)


2 Answers


Modern display libraries (such as Quartz, used in iOS and Mac OS) use hardware-accelerated compositing. It works much like computer graphics libraries such as OpenGL do. In essence, each CALayer is kept as a separate surface, buffered and rendered by the video hardware, much like a texture in a 3D game. This is exceptionally well implemented in iOS, which is why the iPhone is famous for its fluid user interface.
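
You can see the effect of this from the API side: animating compositing-level properties such as opacity or position never calls back into your drawing code, because the layer's bitmap has already been uploaded as a texture and only needs to be re-composited. A rough illustration (someLayer stands for any layer whose contents have already been drawn):

    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = @1.0;
    fade.toValue   = @0.0;
    fade.duration  = 1.0;
    [someLayer addAnimation:fade forKey:@"fade"];
    // drawRect: / drawInContext: are not invoked during this animation;
    // the hardware just re-blends the cached surface each frame.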

In the "old days" (i.e. Windows 9x, Mac OS Classic, etc.), the screen was essentially one big framebuffer, and everything exposed by, for example, moving a window had to be redrawn manually by each application. The redrawing was mostly done by the CPU, which put an upper limit on animation performance. Animation was usually quite "flickery" because of the redrawing involved. This technique mostly suited desktop applications without much animation. Notably, Android uses (or at least used to use) this technique, which was a big problem when porting iOS applications to Android.

In the games of yore (DOS, arcade machines, etc., also used heavily on Mac OS Classic), something called sprite animation was used to improve performance: the moving images were kept in buffers that were rendered by the hardware and synchronized with the monitor's vblank, which meant animations were smooth even on very low-end systems. However, the size of these images was very limited, and the screen resolutions were low, only around 10-15% of the pixels of even today's iPhone screen.

Answered 2012-05-29T23:09:45.640

You have a reasonable intuition here, but there are still several steps between contents and the display. First, contents does not have to be a CGImage. It is often a private class called CABackingStorage, which is not quite the same thing. In many cases there are hardware optimizations that bypass rendering the image into main memory and then copying it to video memory. And since the contents of the various layers are all composited together, you are still some distance from the "real" display memory. Not to mention that modifying contents directly affects only the model layer, not the presentation or render layers. On top of that, there are CGLayer objects that can store their image directly in video memory. There are a lot of different things going on.
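
To illustrate the model-versus-presentation distinction, here is a rough sketch (using position rather than contents, with myLayer as a placeholder): the model layer reflects the value you set immediately, while presentationLayer approximates what is currently on screen:

    myLayer.position = CGPointMake(300.0, 300.0);   // the model layer updates immediately
    // While the implicit animation is still in flight:
    CALayer *onScreen = [myLayer presentationLayer];
    NSLog(@"model: %@  on screen: %@",
          NSStringFromCGPoint(myLayer.position),
          NSStringFromCGPoint(onScreen.position));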

So the answer is: no, the video "card" (chip; it is a PowerVR, by the way) does not take an ordered set of layers. It takes lower-level data in ways that are not well documented. Some things (particularly parts of Core Animation, and perhaps CGLayer) appear to be wrappers around OpenGL textures, but others are probably Core Graphics accessing the hardware directly. Once you get to this level of the stack, it is all private and can change from version to version and from device to device.

You may also find Brad Larson's answer useful here: iOS: Is Core Graphics implemented on top of OpenGL?

You may also be interested in Chapter 6 of iOS:PTL. While it does not go into the implementation details, it does include a lot of practical discussion of how to improve drawing performance and make the best use of the hardware with Core Graphics. Chapter 7 details all of the developer-accessible steps involved in CALayer drawing.

Answered 2012-05-29T23:51:27.540