• Showcase
  • Spine Pro Vtuber Working Prototype

I spent over a week (working on this after my full-time job) researching and developing a working Vtuber web solution. It's time for Spine Pro to level the playing field with Live2D in the Vtubing realm. I expect to upload this application once it has enough features.


Edit (7-4-2022): Spine Vtuber Prototype itch.io release

The Vtuber Prototype using Esoteric Software Spine models is released.

https://silverstraw.itch.io/spine-vtuber-prototype

I have also included the model that I have been using for testing.

https://silverstraw.itch.io/spine-vtube-test-model

Very cool! We have plans to do this officially. It's still early days, but it's super cool!

I beat you to the punch, Nate. :grinteeth:

Nate wrote

Very cool! We have plans to do this officially. It's still early days, but it's super cool!

Official support would be awesome! :heart:
In the meantime, I will spearhead the frontier. I bet your plate is full with Spine's future development. Maybe we can have some discussions in the future. 😉

You did! :punch:

Besides the camera/vtubing itself, we also have some supporting features in the works that make vtubing super cool: physics! We're in a bit of a rut though, as we started these things and then switched gears to improve the new graph. Lately there have been a lot of distractions that eat our time and/or fracture our focus, plus the holidays, so even that's going much more slowly than we want. We do keep plodding along though, so we'll get there in time!

Oooh now that looks very promising as well, SilverStraw!
I'm so excited about being able to use Spine rigs for streams >.<

Super cool! From the debug render on the left, it seems we are using the same TensorFlow Lite model 🙂

6 days later
Nate wrote

You did! :punch:

Besides the camera/vtubing itself, we also have some supporting features in the works that make vtubing super cool: physics! We're in a bit of a rut though, as we started these things and then switched gears to improve the new graph. Lately there have been a lot of distractions that eat our time and/or fracture our focus, plus the holidays, so even that's going much more slowly than we want. We do keep plodding along though, so we'll get there in time!

Yay! Physics!
It's been slow for me as well. Now I am trying to figure out how to let users load their own Spine models so they work with the spine-ts runtime.

Erika wrote

Oooh now that looks very promising as well, SilverStraw!
I'm so excited about being able to use Spine rigs for streams >.<

Hype! :fiesta:
I can't wait to create Spine models for streaming. Live2D Cubism has a completely different approach to model rigging.

Mario wrote

Super cool! From the debug render on the left, it seems we are using the same TensorFlow Lite model 🙂

Are we really using the same? :bigeyed:
Now, tell me more. :smirk:


UPDATE:
I finally got the web browser file reader to work properly with the Spine WebGL runtime. The file browser isn't shown in this video, but I am clicking the asset files so they are read and loaded into the Spine runtime. This is an important step toward Vtubing with Spine rigs.
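
Roughly, the loading step looks like the sketch below. It is a simplified illustration (the function and file names are made up), using the browser FileReader API together with the AssetManager's setRawDataURI hook from recent spine-ts, which registers file contents as data URIs:

```typescript
// Read each user-selected file as a data URI and register it with the
// spine-ts WebGL AssetManager so later load calls can resolve it.
async function loadUserFiles(files: FileList, assetManager: spine.AssetManager) {
  for (const file of Array.from(files)) {
    const dataUri = await new Promise<string>((resolve, reject) => {
      const reader = new FileReader();
      reader.onload = () => resolve(reader.result as string);
      reader.onerror = () => reject(reader.error);
      reader.readAsDataURL(file); // works for .json, .atlas, and .png alike
    });
    assetManager.setRawDataURI(file.name, dataUri); // register under its file name
  }
  // Queue the actual loads; the AssetManager serves them from the data URIs.
  assetManager.loadTextureAtlas("skeleton.atlas");
  assetManager.loadJson("skeleton.json");
}
```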

Cool! It can also be done via drag and drop. Working with browsers is such a pain!

4 months later
Nate wrote

Cool! It can also be done via drag and drop. Working with browsers is such a pain!

Nate is telling me something.
Okay, I found the drag-and-drop example in the Spine Player generator example.
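
For reference, the gist of it is something like this minimal sketch (the element id and handler names are illustrative, and loadUserFiles is the hypothetical reader from the earlier sketch):

```typescript
// Accept skeleton assets dropped anywhere on the canvas container.
const dropZone = document.getElementById("spine-canvas")!;
dropZone.addEventListener("dragover", (e) => e.preventDefault()); // required to allow dropping
dropZone.addEventListener("drop", (e) => {
  e.preventDefault();
  const files = e.dataTransfer?.files;
  if (files && files.length) loadUserFiles(files, assetManager); // reuse the file reader
});
```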


I am posting an update on my progress with my application. The last couple of months have been hectic with work and failures in my other projects.

Neat!

Great, I’m so excited! 😃

20 dana kasnije

Hello Mario and Misaki.

This is another step closer. I tried several approaches to driving the mouth on the model.
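
One of those approaches, roughly sketched below: keep a dedicated mouth animation frozen on its own track and scrub its time from the tracker's mouth-openness value. The track number and animation name here are illustrative, not my exact rig:

```typescript
// A "mouth-open" animation runs from closed (time 0) to fully open (time 1s)
// and sits alone on its own track so it only affects the mouth.
const MOUTH_TRACK = 3; // illustrative track number
const mouthEntry = animationState.setAnimation(MOUTH_TRACK, "mouth-open", false);
mouthEntry.timeScale = 0; // freeze playback; we scrub trackTime manually

// Called every tracking frame with a normalized 0..1 openness value.
function onMouthOpenness(openness: number) {
  mouthEntry.trackTime = openness * 1.0; // map openness onto animation time
}
```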

OMG, it looks very easy to use! The mouth behavior looks natural 😃 Although the character has no eyelids yet, just being able to open and close the mouth conveys facial expression quite well.

13 days later

The character is evolving. I will get to the eyelids . . . eventually.


This should be ready for the next phase of testing.

Great evolving! 😃 :yes:

I used all my rare candies, Misaki. 😉

Spine Vtuber Prototype itch.io release

The Vtuber Prototype using Esoteric Software Spine models is released. I have also included the model that I have been using for testing.

https://silverstraw.itch.io/

Congrats on the release! 😃
I tried it with a simple character created based on your Spine project file, and it looks good! :heart:
Here's what I tried:

Sorry about the lack of blink animations (I drew and rigged it in a hurry) and about the closed mouth not being clean enough! I will make various modifications later.

It may depend on the environment, but eye pupil movement seems to be a bit jumpy. However, the response for facial tilt, direction, and mouth movements is very smooth! Also, the setup was easy to understand, so creating my own project was easy for me. It is a lot of fun to try out the movements with this tool 🙂

Eye glasses affect the reliability of eye pupil tracking. I think it's because of the glare on the eye glasses. I used another application called VTube Studio and it also has this issue. Eye glasses are a bane to eye pupil tracking.

Another factor is the small size of the eye pupils relative to the face. They were twitching a lot, so I had to smooth the incoming signal. That solution isn't perfect: it's hard to distinguish jitter from real eye pupil movements. In the future I will allow users to adjust these incoming signals.
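
The smoothing is conceptually like the minimal sketch below, an exponential low-pass filter. The constant is illustrative, not my actual value:

```typescript
// Simple exponential (low-pass) filter: each new sample only moves the output
// part of the way, which suppresses jitter at the cost of responsiveness.
class JitterFilter {
  private value: number | null = null;
  constructor(private alpha = 0.15) {} // lower alpha = stronger smoothing
  next(sample: number): number {
    this.value = this.value === null ? sample : this.value + this.alpha * (sample - this.value);
    return this.value;
  }
}

// One filter per signal, e.g. the left pupil's normalized x position:
const pupilX = new JitterFilter();
// skeleton.findBone("pupil-left")!.x = pupilX.next(rawPupilX);
```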

I have added some modifications to my skeleton:

Eye glasses affect the reliability of eye pupil tracking. I think it's because of the glare on the eye glasses. I used another application called VTube Studio and it also has this issue. Eye glasses are a bane to eye pupil tracking.

Unfortunately, I tried it without eye glasses last time and this time, but it seems the eye pupil tracking response is not very good :tear:

Currently, the right eye and the left eye can register separate animations, but since it often happens that only one eye moves and looks weird, it might be good to make both eyes share the same animation for the time being. Or, as in Owl's sample, it would be good to combine the animations for the face and eye direction together for now.

Another issue could be that certain cameras mirror the image, so left and right are flipped. Are you familiar with Open Broadcaster Software (OBS)? It has functionality to create a virtual camera that you can flip horizontally. If you have another web camera that doesn't mirror, you can test it out. I noticed this issue as well when I switched over to my laptop camera, which mirrors the image. Currently the prototype only caters to non-mirrored images, but I could add an option for that in the future.
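
If I add that option, the fix on my side is conceptually simple: flip the normalized landmark x-coordinates before they drive the rig. A sketch, assuming MediaPipe-style landmarks with 0..1 coordinates (the Landmark shape is an assumption here):

```typescript
interface Landmark { x: number; y: number; z?: number; } // normalized 0..1 coordinates

// Flip left/right when the camera feed is mirrored.
function unmirror(landmarks: Landmark[], mirrored: boolean): Landmark[] {
  return mirrored ? landmarks.map((p) => ({ ...p, x: 1 - p.x })) : landmarks;
}
```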

Your feedback has been helpful!

I have OBS installed, but even when I started the virtual camera, I could not use it because the option does not appear in Google Chrome's camera settings. To be precise, in the general settings of Chrome, I can set it like the following:

However, on the actual web page, it was automatically changed to FaceTime HD Camera and could not be freely changed here.

Incidentally, Google Meet allowed me to choose the virtual camera.
I have looked for other ways to flip the camera, but unfortunately have not found any.
I have attached the project file above so you can check whether the problem occurs in your environment.

I think my eye pupil jitter filter is too strong for your model, Misaki. I will lower the strength and update the change as soon as I get the chance. On my test model I thought the filter was the right strength. It's great to see another model and how model rigs are affected differently. :think:


I released a small patch, version 1.0.2.

  • Changed canvas resize functionality.
  • Decreased the strength of the jitter filter for the eye pupils.

Hello Misaki. I hope this weaker filter works better for your model.

Thank you for your quick fix, it looks much better! 😃 The movement has become smooth, and weird movements such as only one eye moving happen less than before.

In the video above, I tried to roll my eyes 360 degrees, but unfortunately the up-and-down movement did not seem to respond very well. The range of up-and-down motion is smaller than left-and-right, so it seems to be hard to catch. I'm not sure if it would be optimal to have the filter adjusted based on my model, so I think it would be nice if the settings could be adjusted by the user.

By the way, I will modify my model later as the rigging of the eyes and mouth is still not very good. I'll share it again when the improvements are done.

So the jitter filter is there to stop the model from twitching so much, much like the .gif below, where your eyes are not moving but the model is twitching excessively. :bigeye:

I had the filter strong enough that it caused the eye pupils to jump; I thought jumpy eye pupils were a favorable effect. :upsidedown:
I have lowered it to the same level as the other filters, such as the ones for the face and mouth, so there should not be too much bias between them.

Letting the user adjust the face tracking settings is planned for future updates. When you allow those adjustments, users would want a way to save the settings so they do not have to adjust every setting each time.

So the jitter filter is there to stop the model from twitching so much, much like the .gif below, where your eyes are not moving but the model is twitching excessively.

Ah, I see.

When you allow those adjustments, users would want a way to save the settings so they do not have to adjust every setting each time.

Exactly. I think it would be great if you could download the settings as a file, such as JSON, and load it with the skeleton data when you want to apply the settings again.
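
In the browser that could be as simple as this sketch (the settings shape and file name are just examples):

```typescript
interface TrackingSettings { jitterFilterStrength: number; facePitchStrength: number; } // example shape

// Serialize the settings and trigger a browser download as a .json file.
function downloadSettings(settings: TrackingSettings) {
  const blob = new Blob([JSON.stringify(settings, null, 2)], { type: "application/json" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = "vtuber-settings.json";
  a.click();
  URL.revokeObjectURL(url);
}
```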

There are so many features to add, so I am going to implement them one at a time.

Misaki, have you played with the drop-down box below the Reset Render Camera button in the Model Settings? I might not have made it clear that the drop-down is where you change certain properties of the model(s).

There are so many features to add, so I am going to implement them one at a time.

Sure, take your time 🙂

Misaki, have you played with the drop-down box below the Reset Render Camera button in the Model Settings? I might not have made it clear that the drop-down is where you change certain properties of the model(s).

I have tried some of them! The position and scale of the model can be changed on the canvas with the mouse (this is really comfortable!), so I have not had to change them using the settings box, though.

The scale and position really come into play when you add more than one Spine model. In my recent unreleased testing, I was able to have 3 faces tracked on the same camera. There is a way to assign each tracked face to a Spine model. All the models start at position 0,0, and you would not want overlapping models when you are using multi-face tracking. The scaling function helps when the models are different sizes. While over the canvas, your mouse only changes the viewport. The model's scale is still 1 for both x and y.
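
For illustration, the layout step I have in mind is roughly this (all names here are hypothetical):

```typescript
// Spread each face-tracked skeleton out horizontally so models that all start
// at 0,0 do not overlap; the skeleton's own scale stays at 1 unless changed.
function layoutModels(skeletons: spine.Skeleton[], spacing = 600) {
  skeletons.forEach((skeleton, faceIndex) => {
    skeleton.x = faceIndex * spacing; // one slot per tracked face
    skeleton.y = 0;
    skeleton.scaleX = skeleton.scaleY = 1;
  });
}
```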

I wonder what you think about skins and animation properties if you tried them.

While over the canvas, your mouse only changes the viewport. The model's scale is still 1 for both x and y.

Ah, I see. I'm looking forward to seeing how fun it looks when multiple models are placed.

Regarding the animation property, what is the middle box for? I can see that changing the animation in the pull-down changes the animation, but is it played on track 0?

Regarding the skin property, I can confirm that it works correctly, but is it currently only possible to apply one skin?

The middle box is for inputting values and is not used for the animation property; it is for the other properties. Certain properties use either the value input or the drop-down list. Since properties like skin and animation have a finite number of choices, a drop-down list is more appropriate. If the value input box is too confusing, I could hide it for properties that do not need it.

I forgot to update the track layer for the animation property so it is not working on the intended track layer. It is in the middle of the track stack so you have half the tracks overriding it. 😃

So far you can only apply a single skin. I only worked out populating the drop-down list of all skins. I never had the intent of multiple skins in early development but it seems like something I could expand upon.
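
If I do expand on it, spine-ts already lets you build one custom skin out of several at runtime, roughly like this sketch (the skin names are just examples):

```typescript
// Combine several skins into one custom skin, then apply it to the skeleton.
function applySkins(skeleton: spine.Skeleton, skinNames: string[]) {
  const combined = new spine.Skin("combined");
  for (const name of skinNames) {
    const skin = skeleton.data.findSkin(name);
    if (skin) combined.addSkin(skin); // copies each skin's attachments in
  }
  skeleton.setSkin(combined);
  skeleton.setSlotsToSetupPose(); // refresh attachments for the new skin
}

// e.g. applySkins(skeleton, ["costume/dress", "expression/happy"]);
```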

Thank you for elaborating on those! 🙂

If the value input box is too confusing, I could hide it for properties that do not need it.

Yes, I think the value input box should be hidden as it would mislead people into thinking it is something that is meant to set up the animation.

I forgot to update the track layer for the animation property so it is not working on the intended track layer. It is in the middle of the track stack so you have half the tracks overriding it. 😃

Ah, that makes sense. I had a feeling that something was wrong. It would be useful if it could be used to switch from the default breathing animation to another, or to switch facial expressions.

So far you can only apply a single skin. I only worked out populating the drop-down list of all skins. I never had the intent of multiple skins in early development but it seems like something I could expand upon.

Yes, it would be useful to be able to use more than one skin, such as one for costumes and one for expressions or postures, since skins can be used to change facial expressions and postures using constraints.

Expanding to more animation tracks and skins adds another dimension of complexity. I will need to figure out how to expand on those features.

It would be useful if it could be used to switch from the default breathing animation to another, or to switch facial expressions.

Yes, it would be useful to be able to use more than one skin, such as one for costumes and one for expressions or postures, since skins can be used to change facial expressions and postures using constraints.

It would probably be a good idea to give the animation and skin properties their own sections in the future. They are getting too complicated to be used as a single input setting. If this keeps up, I will end up with a Spine visual programming web application. :scared:

I was hoping to soft-cap the number of animation tracks and skin layers. If I follow your suggestion, everything would need to be uncapped (or at least until your web browser crashes :p). So far I am using twenty-something animation tracks just for face tracking.

On a side note, this topic got so many views within a day. I checked my itch.io stats and there haven't been that many downloads of the test model. Either there are many lurkers among us, or it is just you, Misaki. :o

I give you my suggestions on many things, but of course this is your tool, so make it what you want it to be 🙂

I was hoping to soft-cap the number of animation tracks and skin layers. If I follow your suggestion, everything would need to be uncapped (or at least until your web browser crashes :p). So far I am using twenty-something animation tracks just for face tracking.

I'm not familiar with the cost of having a lot of tracks, so it would be better to have Nate or Mario comment on this.

On a side note, this topic got so many views within a day. I checked my itch.io stats and there haven't been that many downloads of the test model. Either there are many lurkers among us, or it is just you, Misaki. :o

I'm sure I'm not the only one checking this topic! Topics move to the top of the forum when they get responses, and a lot of people see the posts at the top. That's why there are a lot of views.

I give you my suggestions on many things, but of course this is your tool, so make it what you want it to be 🙂

Your suggestions are really important to me because they are the feedback I have gotten. I am going to implement them slowly because I have no idea what I am doing :lol:. It is uncharted territory.

I'm not familiar with the cost of having a lot of tracks, so it would be better to have Nate or Mario comment on this.

Soft capping is really just less work for me :grinteeth:. For example, with only one animation or skin I do not have to implement a more complex system. I will get to it eventually.

I'm sure I'm not the only one checking this topic! Topics move to the top of the forum when they get responses, and a lot of people see the posts at the top. That's why there are a lot of views.

I wish the feedback was that popular.

For all those lurking who have Discord, I am Aestos on the unofficial Spine Discord server. :think:

6 days later

It has been a while, but I have made various minor modifications to my skeleton. The latest version of the recording is here:

As for the old videos, I had set them to limited access, but I have set the latest video to public. For people who are new to this tool, I recorded the process starting from uploading the skeleton data, so it can serve as a simplified tutorial.

My skeleton still has some issues as the clipping attachments for the eyes sometimes go wrong, but I have attached the latest file here for your reference:
face-for-Spine-Vtuber-Prototype.zip
(Also, I deleted old files in this thread.)

I will come back to this thread when I have time. Cheers! :beer:

https://silverstraw.itch.io/spine-vtuber-prototype
https://silverstraw.itch.io/spine-vtube-test-model

1.0.3

  • Rearranged the model property drop-down list.
  • Added a "maximum number of faces" option to the model property drop-down list. This property allows more than one face to be tracked using a single web camera.
  • Added a "Single Value Properties" label to the left of the model property drop-down list. It should make the purpose of the drop-down list clearer if it was not before.
  • Updated the track layer for the model property "animation" option.
  • Hid the value input box for the animation and skin properties.
  • Added settings for "face pitch strength", "face yaw strength", "face roll strength", "mouth height strength", "mouth width strength", "left brow strength", "right brow strength", "left eye strength", "right eye strength", "left pupil pitch strength", "left pupil yaw strength", "right pupil pitch strength", "right pupil yaw strength" next to "Single Value Properties" (a conceptual sketch follows this list).

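For anyone curious, each strength setting is conceptually just a user-adjustable multiplier applied to the incoming tracker signal before it drives the rig, along these lines (the names and defaults are illustrative):

```typescript
type SignalName = "facePitch" | "faceYaw" | "faceRoll" | "mouthHeight"; // abbreviated list
const strengths: Record<SignalName, number> = {
  facePitch: 1.0, faceYaw: 1.0, faceRoll: 1.0, mouthHeight: 1.0, // 1.0 = default response
};

// Scale the raw tracked value before it drives the rig.
function applyStrength(name: SignalName, rawValue: number): number {
  return rawValue * strengths[name];
}
```
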
Misaki wrote

My skeleton still has some issues as the clipping attachments for the eyes sometimes go wrong, but I have attached the latest file here for your reference:

Hello. What is wrong with the clipping attachments for the eyes?

That sounds like a great update! However, I could not find where the strength setting was. I don't see the drop-down menu in your video; is it visible on your end?

Hello. What is wrong with the clipping attachments for the eyes?

When the eyes were closed, the clipping masks sometimes crossed over, and this caused eyes that should have been hidden to be visible. I fixed the problem today and replaced the attached file in my previous reply.

Misaki wrote

That sounds like a great update! However, I could not find where the strength setting was. I don't see the drop-down menu in your video; is it visible on your end?

I meant that I added settings for the user to change "face pitch", "face yaw", "face roll", "mouth height", "mouth width", "left brow", "right brow", "left eye", "right eye", "left pupil pitch", "left pupil yaw", "right pupil pitch", "right pupil yaw" next to the property label. I apologize that Open Broadcaster Software (OBS) did not capture any pop-up menus when I recorded the video. The drop-down does appear on my end, but I did not capture the full screen during the process.

Hmm, somehow I can't find that setting on my end...

Also, I can't find "maximum number of faces", so the update seems not to be reflected properly. Is there anything I need to do to use the updated version?