Back again with another update!
This will be a slightly shorter update than you may have grown accustomed to. I was away on vacation earlier this month, and wanted to take it a bit easier than usual when I got back, working on more straightforward features. So no super heavy engine stuff this time, in other words, but straightforward features are important too! This will be more of a visual update.
How exactly eyes are supposed to work in YL2 has been a huge question mark for us. We've had several ideas throughout the months that we wanted to try out, but with all the stuff that has been going on, we felt like we never really had the time to truly investigate them. Now, however, coming back from my vacation, this felt like an extraordinarily "lagom" (just right) task to start off with. So I jumped straight into it!
Originally I had intended to make the eye authoring a sort of extension to our texture building system, using gradients and layers to procedurally generate the desired textures and effects. The benefit of such a system, naturally, would be that since the texture is generated at runtime, the character's size on disk could be kept very low. However, upon delving deeper into this method, I came to the conclusion that for such a system to cover everything eyes require, it would end up being too difficult to use. So I opted for a more straightforward solution that would be both easier to use and easier to implement.
With our eye authoring system, we have tried to strike a nice balance between detail, simplicity and customizability. We hope we offer enough features to accommodate most people!
So without further ado, let's see what it looks like!
(Changing iris and pupil size.)
(Making iris and/or sclera glow.)
We definitely want to make the size and glow settings accessible during interactions as well. I think it would be really cool to have the eyes react to climax and such!
(Custom texture support.)
(Close look parallax effect.)
Here's a guide showing all the steps involved in achieving the final look:
The final look:
Unlike some other eye systems we have seen, which "bake" the sclera shading into the eye texture, we have a custom solution that keeps the sclera shading in place even as the eye moves around:
(Sclera shadow staying in place even as eye moves around.)
This is accomplished by rendering the eye a second time with a special transparent material. This material renders on top of the first eye pass, darkening it, and uses a Z offset to make sure it actually renders on top. To prevent the offset eye from showing through in undesired places (for example, through the eyelids), we use the stencil buffer to render only where eye pixels have already been drawn. (Since the first eye material is opaque, it writes to the Z buffer and is Z-tested, so wherever closer eyelid pixels occlude the eye, the stencil is never written.)
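To make the pass ordering concrete, here's a tiny software simulation of that stencil logic in plain Python (not Unity code; the scene setup, depths, and darkening factor are all made-up illustration values):

```python
# Minimal simulation of the two-pass stencil trick on a 4-pixel "screen".
# All names and numbers here are illustrative, not from the actual engine.

W = 4
INF = float("inf")

depth   = [INF] * W   # Z buffer (smaller = closer to camera)
stencil = [0] * W     # stencil buffer
color   = [1.0] * W   # framebuffer, 1.0 = fully lit

# Scene: an eyelid covers pixel 0 at depth 1.0; the eye covers pixels 0-2
# at depth 2.0 (so the eyelid occludes the eye at pixel 0).
eyelid = {0: 1.0}
eye    = {0: 2.0, 1: 2.0, 2: 2.0}

# Opaque eyelid pass: ordinary depth-tested write.
for x, z in eyelid.items():
    if z < depth[x]:
        depth[x] = z

# Pass 1: opaque eye material. Where the depth test passes, write depth
# AND set the stencil bit, marking "a visible eye pixel lives here".
for x, z in eye.items():
    if z < depth[x]:
        depth[x] = z
        stencil[x] = 1
        color[x] = 1.0  # base eye shading

# Pass 2: transparent darkening material (rendered with a Z offset so it
# sits on top), stencil-tested so it only touches visible eye pixels.
for x in eye:
    if stencil[x] == 1:   # stencil test
        color[x] *= 0.5   # multiplicative darkening (the sclera shadow)

print(color)  # the occluded pixel 0 is untouched; visible eye pixels darken
```

The key point the simulation shows: because the stencil bit was never written at pixel 0 (the eyelid won the depth test there), the darkening pass can't bleed through the eyelid no matter how the offset pass is positioned.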
As you might remember from our previous update, "Nomistrav" recently joined our team and has been helping us by providing feedback on our designs. I know I already expressed this earlier, but let me say it again: Nomistrav is making all the difference in the world to us. Before he was on board, it was just me giving feedback on Dogson's work, and since I'm not a designer, I often had trouble articulating my concerns in a way that was helpful. We would end up going back and forth, never really arriving at something we were perfectly happy with, until we eventually lost interest or ran out of time and just moved on. Now, with Nomistrav's help, we've been able to work far more efficiently and produce higher quality shapes, and we're generally having so much more fun since it actually feels like we're making real progress. I think the mood in the team has been really great the past couple of weeks!
Also, recently we made some changes to how we communicate, that I think has resulted in a stronger feeling of solidarity between team members. It truly feels like we're all part of this special group, all doing our best in our own way to create something truly amazing for the furry fandom, all while having fun and discussing all sorts of creative topics. I'm super happy about it!
New head models
In connection with the section above... We were never really 100% happy with the head designs we did previously, so this month Dogson and Nomistrav have been collaborating on new ones. Here are the results:
We've tried to make each head more pronounced and unique, so they really feel more lizardy, caniney and feliney, respectively. We've also tried to make them more cartoony/stylized, as you can see from the proportions. We're super happy with how they turned out, and definitely think the remake was worth it.
Z Depth study
This is more of a side note, but I felt like bringing it up anyway because it has taken time from us.
Yiffalicious features a pivot placement system, where you middle-click to place the pivot at the cursor location. The problem with this system is that it lacks precision. Even though you're clearly clicking on something, a different location might be returned. This is because the system uses crude physical representations of the characters (and environment) when fetching depth, since Unity depends on colliders in order to do raycasting:
(Crude physical representation of a character in Yiffalicious.)
Some of you might remember that last year, we made a new system that would eliminate this problem. The method we used was to render the scene a second time when middle-clicking, but with a replacement shader that, instead of actually rendering shaded objects, would just output the distance from the camera to each rendered pixel. This method worked great from a precision standpoint, but with the huge drawback of having to render the scene twice. That is very costly, and I was never really happy with it. Especially not since I knew that Unity renders its own internal depth texture, which unfortunately cannot be accessed on the CPU through the public APIs. Furthermore, since that system's creation, we have started using more specialized materials when rendering our characters in YL2, meaning the shader replacement technique would no longer work for fetching depth from characters.
However, we have now discovered a new method that allows us to do what we originally wanted. Through some API and shader trickery, we have been able to fetch Unity's internal depth texture and read individual pixels from it on the CPU. Compared to the previous method, this one is about 10 times faster (and most of that time is just overhead). It would be much, much faster still if we could do GPU calls asynchronously (something that isn't possible in Unity 2017, but exists as an experimental feature in Unity 2018; we're on Unity 2017 at the moment).
(Example of the read depth buffer; darker values mean closer.)
(Placing orb at scene depth, without raycasting or any colliders. Pixel precision.)
This method offers the same pixel precision as our other custom solution. However, to my surprise (and disappointment), while the pixel precision is great, the depth precision is actually quite bad! When comparing the fetched Z value with a standard raycast, the Z value can be quite far off, especially at longer distances. I was so happy when I discovered this method, only to be let down by its poor depth precision. It seems no matter what we try, there are always trade-offs! Perhaps our final solution will be some kind of hybrid between the two.
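For anyone curious why the depth gets worse with distance: a perspective depth buffer stores depth non-linearly (roughly proportional to 1/z), so a fixed number of bits resolves far distances much more coarsely than near ones. Here's a small sketch in plain Python (not Unity code; the near/far planes and 24-bit buffer are assumed values for illustration) that measures how much eye-space distance a single depth-buffer step covers at various distances:

```python
# Illustration of non-linear depth buffer precision. All constants are
# assumed example values, not taken from the actual project.

NEAR, FAR = 0.3, 1000.0        # assumed camera clip planes
LEVELS = (1 << 24) - 1         # a typical 24-bit depth buffer

def eye_to_buffer(z):
    """Eye-space distance -> [0,1] depth value (D3D-style 0-to-1 mapping)."""
    return (FAR / (FAR - NEAR)) * (1.0 - NEAR / z)

def buffer_to_eye(d):
    """Inverse mapping: [0,1] depth value -> eye-space distance."""
    return (FAR * NEAR) / (FAR - d * (FAR - NEAR))

def depth_resolution(z):
    """Eye-space distance spanned by ONE depth-buffer step at distance z."""
    d = eye_to_buffer(z)
    return buffer_to_eye(min(d + 1.0 / LEVELS, 1.0)) - buffer_to_eye(d)

# The step size grows roughly with z^2, so a value read back from the
# buffer can disagree noticeably with a raycast at long range.
for z in (1.0, 10.0, 100.0, 500.0):
    print(f"z = {z:6.1f}  one buffer step covers ~{depth_resolution(z):.8f} units")
```

This matches what we're seeing: pixel precision is perfect (you read the exact pixel under the cursor), but the Z value recovered from that pixel is quantized ever more coarsely the further away the surface is.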
This month we've been working on eye customization and new head models, in addition to some other minor things.
I think the mood in the team is great. Ever since Nomistrav came along, we've been able to work more efficiently and are just having so much fun.