Question

Ever since their introduction in iOS 4, I have been wondering about the internal implementation of UIView's block-based animation methods. In particular, I would like to understand what mystical features of Objective-C are used there to capture all the relevant layer state changes before and after execution of the animation block.

Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block, to create all the relevant CAAnimations. I guess it does not take a snapshot of the whole view hierarchy, as that would be horribly inefficient. The animation block is an opaque code blob at runtime, so I don't think it can analyze that directly. Does it replace the implementation of property setters on CALayer with some kind of recording versions? Or is the support for this property change recording baked in somewhere deep inside CALayer?

To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?

Solution

It is actually a very elegant solution that is built around the fact that the view is the layer's delegate and that stand-alone layers implicitly animate on property changes.
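
To see the second half of that sentence in isolation: a stand-alone layer (one that is not backing a view) picks up an implicit animation whenever an animatable property changes, with no animation block involved. A minimal sketch, assuming some hostView to add the layer to:

// hostView is an assumed container view; the layer itself is not backing any view
CALayer *freeLayer = [CALayer layer];
freeLayer.frame = CGRectMake(0, 0, 100, 100);
freeLayer.backgroundColor = [UIColor redColor].CGColor;
[hostView.layer addSublayer:freeLayer];

// Later: just setting the property triggers the default implicit animation (0.25 s)
freeLayer.position = CGPointMake(200, 200);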

As it happens, I gave a BLITZ talk about this at NSConference just a couple of days ago. I posted my slides on GitHub and tried to write down more or less what I said in the presenter notes.

That said, it is a very interesting question that I don't see asked very often. It may be a bit too broad, but I really like the curiosity.


UIView animations existed before iOS 4

Ever since their introduction in iOS 4, I have been wondering about the internal implementation of UIView's block-based animation methods.

UIView animations existed before iOS 4, but in a different style that is no longer recommended because it is more cumbersome to use. For example, animating the position and color of a view with a delay could be done like this. Disclaimer: I did not run this code, so it may contain bugs.

// Setup
static void *myAnimationContext = &myAnimationContext;
[UIView beginAnimations:@"My Animation ID" context:myAnimationContext];
// Configure
[UIView setAnimationDuration:1.0];
[UIView setAnimationDelay:0.25];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];

// Make changes
myView.center = newCenter;
myView.backgroundColor = newColor;

// Commit
[UIView commitAnimations];

The view-layer synergy is very elegant

In particular, I would like to understand what mystical features of Objective-C are used there to capture all the relevant layer state changes before and after execution of the animation block.

It is actually the other way around. The view is built on top of the layer, and they work together very closely. When you set a property on the view, it sets the corresponding property on the layer. You can, for example, see that the view doesn't even have its own variable for the frame, bounds, or position.
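
You can see this write-through behaviour directly; the values in the comments below assume a freshly created view that nothing else has touched:

UIView *aView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];

// Setting a view property writes straight through to the backing layer...
aView.center = CGPointMake(150, 150);

// ...which is where the geometry actually lives.
NSLog(@"%@", NSStringFromCGPoint(aView.layer.position)); // {150, 150}
NSLog(@"%@", NSStringFromCGRect(aView.layer.bounds));    // {{0, 0}, {100, 100}}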

Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block, to create all the relevant CAAnimations.

It does not need to do that and this is where it all gets very elegant. Whenever a layer property changes, the layer looks for the action (a more general term for an animation) to execute. Since setting most properties on a view actually sets the property on the layer, you are implicitly setting a bunch of layer properties.

The first place the layer goes looking for an action is its delegate (it is documented behaviour that the view is the layer's delegate). This means that when a layer property changes, the layer asks the view to provide an animation object for each property change. So the view doesn't need to keep track of any state, since the layer has the state and the layer asks the view to provide an animation when the properties change.

Actually, that's not entirely true. The view needs to keep track of some state, such as whether you are inside an animation block or not, what duration to use for the animation, etc.

You could imagine that the API looks something like this.

Note: I don't know what the actual implementation does, and this is obviously a huge simplification to prove a point.

// static variables since this is a class method
static NSTimeInterval _durationToUseWhenAsked;
static BOOL _isInsideAnimationBlock;

// Oversimplified example implementation of how it _could_ be done
+ (void)animateWithDuration:(NSTimeInterval)duration
                 animations:(void (^)(void))animations
{
    _durationToUseWhenAsked = duration;
    _isInsideAnimationBlock = YES;
    animations();
    _isInsideAnimationBlock = NO;
}

// Running the animations block is going to change a bunch of properties
// which result in the delegate method being called for each property change
- (id<CAAction>)actionForLayer:(CALayer *)layer
                        forKey:(NSString *)event
{
    // Don't animate outside of an animation block
    if (!_isInsideAnimationBlock)
        return (id)[NSNull null]; // returning NSNull means: don't animate

    // Only animate certain properties
    if (![[[self class] arrayOfPropertiesThatSupportAnimations] containsObject:event])
        return (id)[NSNull null]; // returning NSNull means: don't animate

    CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:event];
    theAnimation.duration = _durationToUseWhenAsked;

    // Get the value that is currently seen on screen
    id oldValue = [[layer presentationLayer] valueForKeyPath:event];
    theAnimation.fromValue = oldValue;
    // Only setting the from value means animating from that value to the model value

    return theAnimation;
}
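
To tie the sketch together, a call site like the following (with a hypothetical myView) would exercise it: each property set inside the block reaches the layer, and the layer turns around and asks its delegate (the view) for an action.

[UIView animateWithDuration:0.5 animations:^{
    myView.center = CGPointMake(200, 300); // writes layer.position
    myView.alpha  = 0.5;                   // writes layer.opacity
}];
// Under the simplified scheme above, actionForLayer:forKey: would be called
// once for @"position" and once for @"opacity", each returning a
// CABasicAnimation whose duration is _durationToUseWhenAsked (0.5 s here).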

Does it replace the implementation of property setters on CALayer with some kind of recording versions?

No (see above)

Or is the support for this property change recording baked in somewhere deep inside CALayer?

Yes, sort of (see above)

Creating similar API yourself

To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?

You can definitely create a similar block-based API if you want to provide your own animations based on property changes. If you look at the techniques I showed in my talk at NSConference for inspecting UIView animations (directly asking the layer's delegate for actionForLayer:forKey:, and using layerClass to create a layer subclass that logs all addAnimation:forKey: information), then you should be able to learn enough about how the view is using the layer to create this abstraction.
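
A rough sketch of that second technique, with made-up class names (MyLoggingLayer, MyInspectedView) purely for illustration:

#import <UIKit/UIKit.h>

@interface MyLoggingLayer : CALayer
@end

@implementation MyLoggingLayer
// Every animation that UIView adds on the view's behalf passes through here
- (void)addAnimation:(CAAnimation *)anim forKey:(NSString *)key
{
    NSLog(@"adding animation %@ for key %@", anim, key);
    [super addAnimation:anim forKey:key];
}
@end

@interface MyInspectedView : UIView
@end

@implementation MyInspectedView
// Back the view with the logging layer instead of a plain CALayer
+ (Class)layerClass
{
    return [MyLoggingLayer class];
}
@end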

I'm not sure if recording state changes is your end goal or not. If you only want to do your own animation API, then you shouldn't have to. If you really want to, you probably could, but there wouldn't be as much communication infrastructure (delegate methods and callbacks between the view and the layer) available to you as there is for animations.

Other tips

David's answer is awesome. You should accept it as the definitive answer.

I do have a minor contribution. I created a markdown file in one of my GitHub projects called "Sleuthing UIView Animations." (link) It goes into more detail on how you can watch the CAAnimation objects that the system creates in response to UIView animations. The project is called KeyframeViewAnimations. (link)

It also shows working code that logs the CAAnimations that are created when you submit UIView animations.

And, to give credit where credit is due, it was David who suggested the technique I use.
