I would assume that while I'm adding an animation, it will be applied on top of the current transformations if those elements are not changed in "idle".
We started out this way, but it caused a lot of problems. In most cases it's much easier for the combined animations to define the skeleton's pose, without remnants from previously applied animations.
I understand your run/idle situation now, thanks. At runtime you often need to know where the skeleton is for things like hit detection, so it's often impractical to have the animation move the skeleton around the screen. If your animations position your game objects, your game code is unlikely to know where the objects are. It can also make it difficult to serialize your game world's state, if you need that. There are a few ways to go about it:
The easiest thing to do is play your idle animation on the right side of the screen. This may be sufficient for simple scenarios where you need only a few animations.
The next easiest thing is to know where your skeleton has been moved to after you play an animation. For example, you play idle, then run. When run is complete, you know you need to move the skeleton across the screen, then when you play idle it's on the right side. If you want to smoothly transition from run to idle, it may take some planning. It's easiest for the run animation to end in the idle pose, so the transition from run to idle is in the animation.
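The bookkeeping for that could look like this minimal sketch (the Character class, the distance constant, and the completion callback are hypothetical game code, not Spine API):

```python
# Illustrative sketch: game code tracks where the skeleton logically is after
# each animation, so it always knows the authoritative position.

RUN_DISTANCE = 300  # assumed: how far the run animation visually carries the character

class Character:
    def __init__(self):
        self.x = 0            # authoritative position, owned by game code
        self.state = "idle"

    def play_run(self):
        self.state = "run"

    def on_run_complete(self):
        # The run animation visually moved the character, so update the
        # authoritative position to match before idling again.
        self.x += RUN_DISTANCE
        self.state = "idle"

c = Character()
c.play_run()
c.on_run_complete()
print(c.x)  # 300
```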
That approach is useful for complex movements, like climbing a ladder, when the exact position of the skeleton isn't important during the animation (maybe they can't be hit while on the ladder).
Another option is to use additive animation. This is where you set TrackEntry.mixBlend to MixBlend.add, which causes an animation to be added to the pose from lower tracks. For example, it can be used to do facial expressions during other animations. To use it for that, you first need to key the face on a lower track. If you don't, then every frame the additive animation will be applied on top of the pose from the previous frame, quickly getting out of hand. Next, you play your animations (run, jump, idle, etc.) normally. Then on a higher track you can play one or more face animations that are applied using MixBlend.add. This changes the face no matter what else the character is doing. You can use TrackEntry.alpha to partially apply an animation, for example to blend between facial emotions using a weighted system.
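The weighted system could be as simple as normalizing emotion weights into per-track alpha values, as in this illustrative sketch (the emotion_alphas helper is hypothetical, not part of the Spine API; each result would be assigned to the corresponding track's alpha):

```python
# Illustrative sketch of a weighted facial-emotion blend. Each emotion gets a
# weight; the per-track alpha is the normalized weight, so all alphas sum to 1.

def emotion_alphas(weights):
    """Map {emotion: weight} to {emotion: alpha in 0..1}."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}
    return {name: w / total for name, w in weights.items()}

alphas = emotion_alphas({"happy": 3.0, "surprised": 1.0})
print(alphas)  # {'happy': 0.75, 'surprised': 0.25}
```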
Or, you could move your skeleton in your code, at a speed that matches the animation. See the run or walk animations in the spineboy example project. To do this you need to animate without moving the skeleton across the screen. It can be helpful to move the skeleton while animating, then remove those keys when finished. The ghosting offset can be used to determine the speed the skeleton needs to move at runtime to match the animation.
That approach works well for linear or simple movement. It's also necessary when the player has a lot of control over the skeleton's movement, as in a platformer game. In that situation you may want to adjust the animation speed based on the skeleton's movement speed, so they match. For example, in a platformer, movement often starts slow and then gets faster.
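Matching the animation speed to the movement speed might look like this sketch (AUTHORED_RUN_SPEED and the update function are assumptions for illustration; in practice the resulting factor would be applied as the track's timeScale):

```python
# Illustrative sketch: game code moves the skeleton and scales the animation's
# playback speed so foot movement matches ground speed.

AUTHORED_RUN_SPEED = 200.0  # assumed: units/sec the run animation was authored for

def update(x, move_speed, dt):
    """Advance position, and return the time scale to apply to the animation."""
    x += move_speed * dt
    time_scale = move_speed / AUTHORED_RUN_SPEED
    return x, time_scale

x, ts = update(0.0, 100.0, 0.5)
print(x, ts)  # 50.0 0.5  (half speed movement -> half speed animation)
```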
For more complex movement, like a shambling zombie, it would be difficult to write code that moves the skeleton in a way that matches the animation. For this it's better for the animation to define the movement, where the animator has full control. At runtime you can use a technique called "root motion". This is where you take the motion that would be applied to the root bone and you instead apply it to the entire skeleton. This allows an animation to move your skeleton in your game world in complex ways. With root motion, after you play your run animation the skeleton would be moved across the screen, then when you play idle, it would play on the right side of the screen.
Implementing root motion is relatively advanced. We provide code to do it in Unity. The docs are here, the code is here and here. It is split because Unity has two types of scene graph nodes that are positioned differently. It is also relatively complex because it has some advanced features you may not need, like rotation.
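Stripped of the Unity specifics, the core idea can be sketched in a few lines (all names here are placeholders, not the actual runtime API):

```python
# Illustrative root-motion sketch: each frame, take the delta the animation
# would apply to the root bone and apply it to the skeleton's world position
# instead, keeping the root bone itself at the origin.

def apply_root_motion(skeleton_x, prev_root_x, keyed_root_x):
    """Apply the root bone's per-frame delta to the whole skeleton."""
    delta = keyed_root_x - prev_root_x
    return skeleton_x + delta, keyed_root_x

skeleton_x, prev = 100.0, 0.0
for keyed in [5.0, 12.0, 20.0]:  # root positions the animation keys per frame
    skeleton_x, prev = apply_root_motion(skeleton_x, prev, keyed)
    # here the actual root bone would be reset so it never moves locally
print(skeleton_x)  # 120.0
```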
"Hold previous" is probably not what you want. It applies the previous animation fully in a mix from one animation to the next, ie A -> B. If you have multiple animations, eg A -> B -> C then yes, it will apply A and B and then C and with hold previous it won't mix out the previous animation. However, AnimationState tracks intend mixing to be used from one animation to the next. To apply multiple animations, it's intended to use different AnimationState tracks.
If you really want to change how animations are applied, you can modify the runtimes, as mentioned in the thread you linked. A drawback is that you'll often find you need to key things in an animation to reset things that were keyed in a previously played animation. This leads to keying almost everything at the start of most animations. That is tedious, bloats the animation data, and the many extra timelines make applying an animation at runtime slightly more expensive. As you add things to your skeleton, you'll need to revisit existing animations and key things to reset them. It's also error prone, as you won't notice any problems until you happen to play an animation that keys something some other animation forgot to key to override. Lastly, setting the skeleton back to the setup pose will snap, rather than transition smoothly.
Sorry for the long post!