
YL2 Update Dec 2018

odesodes Administrator
Hi all!

We hope you had a great time during the holidays!

We're a bit later than usual with this month's update. This is partly due to the holidays and all their social gatherings, but also because I started out the month by being ill with the flu. We've still managed to make some great progress though, and we feel good about it, so let's jump straight into it!

Part system

This month we've been busy implementing the "part" system in YL2.

"Parts" are separate objects that can be added to a character. A typical example would be ears or tails. Parts differ from "appendages" (fluff/decorative models) in the sense that they can be transformed during an interaction (unlike appendages which are static and "glued" to the surface of a mesh). Since parts are individual objects, they are not batched together during rendering (unlike appendages).

Parts use the same model referencing system as the model appendage groups. The idea is to offer an ever increasing selection of different pre-made parts, but of course custom models are also supported. For this example, we'll import an external model.

To import a model, simply navigate to Import > Model in the menu:

https://gyazo.com/5933353c014b84241a473e046f3a6d21

Then, to add a part, go to Parts and click the plus sign.

https://gyazo.com/0ed64e2623ac24a7a776eba701ba1eb2

An empty part is not of much use though, so let's reference the model we imported:

https://gyazo.com/41bade32f5363fa9db36efacca68d3ad

We can now move the part around in any way we like:

https://gyazo.com/788d8b9c37524e5ae8b2e8f5efbeca35

Parts, like other objects, use our "OmniPoint" system, which can represent a point in space in many different ways. If we want the part to stick to a certain area, we have three options:

1. Directly set a parent for the part:

https://gyazo.com/5b3de44ea5f6778230b7fefeb0b942bb

2. Snap the part to a bone (using shift + click):

https://gyazo.com/aa684a52d661f10d1fba5824cf7b19bf

3. Snap the part to a surface point (using ctrl + click):

https://gyazo.com/15ef32affecb82d043ab01d783cdaff0

Once snapped, you can still move the object around, but the new location will be stored as an offset from the snapped point to the newly configured one.
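To make that concrete, here's a minimal sketch in Python of how a snapped point plus stored offset could be represented (names are hypothetical - this is an illustration, not our actual code):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SnappedPoint:
    """A point stored relative to a snap target."""
    anchor_world: np.ndarray  # snapped bone/surface position in world space
    offset: np.ndarray        # user-configured offset from that anchor

    def world_position(self) -> np.ndarray:
        # Final location = snapped anchor + stored offset, so the part
        # follows the anchor when the character moves or is re-posed.
        return self.anchor_world + self.offset


# Snap to a bone at (0, 1.6, 0.2), then drag the part 0.1 units up:
point = SnappedPoint(anchor_world=np.array([0.0, 1.6, 0.2]),
                     offset=np.array([0.0, 0.1, 0.0]))
print(point.world_position())  # [0.  1.7 0.2]
```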

Scaling is configured inside the inspector:

https://gyazo.com/d36e9366ba66d0569d9311ac0ccee63c

If you want to mirror a part, you can of course do that easily by checking the mirror checkbox:

https://gyazo.com/fa3f5e48026f9412898385d9bd8b42d8

A part has its own texture builder:

https://gyazo.com/bdac7c354eb2efc0d009573a91f6e96d

You can also add appendages to parts:

https://gyazo.com/89d9942bf8717b8e0c437c6a0d1473f7

What happens if you switch a part to another model when it already has fluff added to it? It tries to adapt the appendages to the new model:

https://gyazo.com/a2144defdf82d9d0429643036c6df387

You can of course undo this, and the appendage instances will recover their previous configurations:

https://gyazo.com/1e2d10d5ff92ae4fb2874a2c0cd55dd6

This specific action proved extremely tricky to implement. History is only tracked on a per-property basis - if the user changes a property, only its former and new states are stored in the history command. Here, however, the locations and directions of the appendage instances are indirectly affected by the change of another property - in this case, the model reference property. This means the locations and directions of the appendages would essentially become corrupted if you changed the model and then undid that action, because the appendages would first adapt themselves from the old model to the new one, and then from the new one back to the old one when the undo command is run (which of course does not guarantee the result would look the same as it did before any of these actions were made).

The solution we came up with was to implement a system where it's possible to hook into a property and inject extra commands during its history processing. So in this case, we're hooking into the model property to not only store the model property's before and after state, but also the before and after states of the appendages.
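Here's a minimal sketch of the idea in Python (all names hypothetical - an illustration, not our actual implementation). The model setter re-adapts the appendages, which is exactly what would corrupt them on undo, so the hook snapshots them before the change and restores them verbatim:

```python
class AppendageSnapshotHook:
    """Snapshots state that a property change affects only indirectly."""

    def capture(self, part):
        return [tuple(p) for p in part.appendage_points]

    def restore(self, part, snapshot):
        part.appendage_points = [list(p) for p in snapshot]


class History:
    """Per-property undo, with hooks injecting extra state into commands."""

    def __init__(self):
        self._undo_stack = []

    def set_property(self, obj, name, new_value, hooks=()):
        # Hooks snapshot the indirectly-affected state *before* the change.
        extra_before = [h.capture(obj) for h in hooks]
        old_value = getattr(obj, name)
        setattr(obj, name, new_value)    # triggers adaptation (see Part below)
        self._undo_stack.append((obj, name, old_value, hooks, extra_before))

    def undo(self):
        obj, name, old_value, hooks, extra_before = self._undo_stack.pop()
        setattr(obj, name, old_value)    # re-adapting here would corrupt...
        for hook, snapshot in zip(hooks, extra_before):
            hook.restore(obj, snapshot)  # ...so restore the snapshot verbatim


class Part:
    def __init__(self):
        self._model = "ear_v1"
        self.appendage_points = [[0.0, 0.1, 0.0]]

    @property
    def model(self):
        return self._model

    @model.setter
    def model(self, value):
        self._model = value
        # Stand-in for re-projecting appendages onto the new model.
        self.appendage_points = [[round(c + 0.05, 3) for c in p]
                                 for p in self.appendage_points]


history, part = History(), Part()
history.set_property(part, "model", "ear_v2", hooks=(AppendageSnapshotHook(),))
history.undo()
assert part.model == "ear_v1"
assert part.appendage_points == [[0.0, 0.1, 0.0]]  # recovered exactly
```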

Parts can also contain other parts, forming a hierarchy:

(Silly example of parenting a sphere to the ear.)

https://gyazo.com/3d1a8738c23cdcf9639b52555030d263

If you enable the mirror option in a child part object, it will mirror according to its direct parent:

(Silly mirroring example.)

https://gyazo.com/b229eeaa994d6a0d343affd3e42d5c08

Bone scaling

Another feature we've been working on this month is bone scaling. In the main character object you're now able to scale specific bone groups:

https://gyazo.com/d3a801c4e7cf4ecc6d1c4063c17e2de6

Right now the sliders are capped at ±20%, but it's hardly carved in stone. We just think it makes sense to put a limit somewhere, but where exactly that limit should be remains to be decided.

The bone scaling caused a ton of issues for us with the appendage and part systems, since they assumed the character was always in its unscaled form. When calculating the appendage locations and skinning, we essentially had to "undo" the scaling by applying the bind pose and bone matrices in the inverse order, getting the locations as if the model were unscaled.
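Here's a rough sketch in Python of what that inversion looks like, assuming standard linear-blend skinning (hypothetical names, not our actual code):

```python
import numpy as np


def skinned_position(rest_pos, weights, bones, binds):
    """Standard linear-blend skinning: rest pose -> posed/scaled pose."""
    blended = sum(w * (bone @ bind)
                  for w, bone, bind in zip(weights, bones, binds))
    return (blended @ np.append(rest_pos, 1.0))[:3]


def unskinned_position(scaled_pos, weights, bones, binds):
    """Recover the location as if the model were unscaled, by applying
    the blended skinning transform in reverse."""
    blended = sum(w * (bone @ bind)
                  for w, bone, bind in zip(weights, bones, binds))
    return (np.linalg.inv(blended) @ np.append(scaled_pos, 1.0))[:3]


# One bone scaled up 20%; appendage math can still work in unscaled space:
bones = [np.diag([1.2, 1.2, 1.2, 1.0])]
binds = [np.eye(4)]
rest = np.array([0.0, 1.0, 0.0])
posed = skinned_position(rest, [1.0], bones, binds)
print(np.allclose(unskinned_position(posed, [1.0], bones, binds), rest))  # True
```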

Fluff coloring

There were always several features we knew we wanted fluff appendages to support, like blending in with the color of the model they're attached to, while at the same time being able to have their own individual color, connected to the color indexing system. How exactly all of that was going to work was always a bit of a question mark for us, however. This month, we've been working on exactly these questions.

Our original idea was that during the placement of the fluff instances, the UVs of the source surface below (that they're connected to) would be stored inside the instance. Then, during rendering, the instance could simply sample the source textures, and blend in their colors according to a curve.

However, during the implementation of the shaders for this system, we realized we had overlooked a major issue - the source mesh isn't always made up of 1 material and 1 set of textures. To ease our content authoring and increase texture resolutions, we have opted to give each switchable part of the character its own materials and textures. A character is made up of 4 parts - the head, the body, the hands and the feet. This meant that if we were to sample the source textures during the rendering of a fluff instance, we would either have to use branching when deciding which texture to sample (which is very bad to do in shaders), or we would have to split the fluff mesh into several sub-meshes according to which part of the character they're attached to (but that would increase draw calls). Neither of these paths was satisfactory, so we decided to go back to the drawing board.

We realized that looking up textures is actually a bit of overkill regardless, since it's just a single point in the source UVs that's used per instance anyway (i.e. just the color at the point they're attached to). Therefore, rather than looking up any colors in textures during rendering, we could simply pre-sample the textures and bake the color information directly into the fluff mesh instead. So now we're doing exactly that - not only for color information but also for metalness, smoothness and emission. This means we don't need to do any source texture lookups at all when rendering fluff instances. This solution comes at the small cost of having to re-generate the fluff mesh each time textures are changed, but the benefit of not having to branch or do any source texture lookups makes it worth it.
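As a rough Python sketch (hypothetical names, not our actual implementation), the baking step boils down to this:

```python
import numpy as np


def bake_fluff_attributes(instances, part_textures):
    """Pre-sample each instance's attachment point from its source
    texture and bake the result into the fluff mesh, so rendering
    needs no source-texture lookups (and no shader branching)."""
    baked = []
    for inst in instances:
        tex = part_textures[inst["part"]]   # head/body/hands/feet texture
        h, w = tex.shape[:2]
        u, v = inst["uv"]                   # attachment point in source UVs
        x, y = min(int(u * w), w - 1), min(int(v * h), h - 1)
        baked.append(tex[y, x])             # color (and metal/smooth/emission)
    return np.array(baked)                  # written into mesh vertex data


# Two instances attached to different character parts:
textures = {"head": np.random.rand(256, 256, 4),
            "body": np.random.rand(512, 512, 4)}
instances = [{"part": "head", "uv": (0.25, 0.5)},
             {"part": "body", "uv": (0.8, 0.1)}]
print(bake_fluff_attributes(instances, textures).shape)  # (2, 4)
```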

So that's that for the source color lookups, i.e. the color blending into the source mesh. Next is how individual coloring was supposed to work.

As stated earlier, our original idea was to simply use a curve to decide how much to blend from the source's colors into the fluff instance's own color. However, we realized that this method was far too intrusive, since this curve would be on a per-group basis rather than a per-instance basis (having it per instance would be far too cumbersome to work with). So we tried to think of something better.

This is when we realized we didn't really have a good idea for how coloring of individual instances was supposed to work to begin with. We really wanted the color indexing system to be used everywhere in the character editor, making colors easy to change in a single place, affecting the whole character. But forcing the user to only use colors defined in the swatch list would make smooth color transitions in the fluff instances very cumbersome to do. Still, we felt that if we side-stepped the color indexing system, it would kind of defeat its purpose.

At this point, we realized we had actually already implemented a color gradient system earlier which would fit perfectly for this problem. Instead of binding each instance to a certain color index or color value, we could simply reference a color gradient in the group itself, and then have each instance store a single floating point value between 0 and 1 to represent a lookup in that gradient. This way, the color indexing system would still be used, and you can still create smooth color transitions in the fluff instances. This method was also very simple to implement into our already existing sculpting system, since it was already adapted to work with single floating point values.
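A sketch of that gradient lookup (in YL2 the gradient hooks into the color indexing system; the sketch below just uses raw RGB values, and all names are hypothetical):

```python
import numpy as np


class ColorGradient:
    """A gradient over stops: [(t, rgb), ...], sorted by t."""

    def __init__(self, stops):
        self.stops = stops

    def evaluate(self, t):
        t = min(max(t, 0.0), 1.0)
        for (t0, c0), (t1, c1) in zip(self.stops, self.stops[1:]):
            if t0 <= t <= t1:
                k = (t - t0) / (t1 - t0)
                return (1.0 - k) * np.array(c0) + k * np.array(c1)
        return np.array(self.stops[-1][1])


# The group references one gradient; each instance stores a single float,
# which is exactly the kind of value the sculpting system already paints.
gradient = ColorGradient([(0.0, (0.9, 0.5, 0.1)), (1.0, (1.0, 1.0, 1.0))])
instance_values = [0.0, 0.5, 1.0]
print([gradient.evaluate(t) for t in instance_values])
```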

Great! So now we have individual coloring and source color blending. Next up was figuring out how the shader was supposed to know which of these colors to use. We realized we wanted it to be possible to define a pattern for how the color is distributed, so rather than using a curve, we decided to use an alpha mask. That way, the user is fully empowered to create whatever styles of transitions they want. We also realized that you don't always want to use the individual coloring, but sometimes only the source color. For this, we implemented yet another single floating point value to represent how much of the individual fluff color to use (essentially an alpha).

Additionally, we wanted it to be possible to change the silhouette of the fluff instances. This was accomplished simply with another alpha mask (these masks - the color blending mask and the alpha mask - are automatically combined by YL2 into a single texture).

Lastly, we also wanted it to be possible to "move" the color mask up and down in the UVs, so that, for example, only the tips of fluff instances are colored, but the coloring can be smoothly transitioned to cover the whole instance if desired.
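Putting the pieces together, the final per-fragment blend might look something like this (shader logic sketched in Python with hypothetical names; in YL2 the color mask and clip mask are combined into one texture, but they're kept separate here for clarity):

```python
import numpy as np


def fluff_fragment(source_rgb, fluff_rgb, color_mask, uv_v,
                   mask_offset, fluff_alpha):
    """Final blend: `source_rgb` is baked into the mesh, `fluff_rgb`
    comes from the per-instance gradient lookup, the color mask decides
    *where* the fluff color shows, `mask_offset` slides the mask along
    the instance's UVs, and `fluff_alpha` scales how much individual
    coloring is used at all."""
    v = float(np.clip(uv_v + mask_offset, 0.0, 1.0))
    m = color_mask(v) * fluff_alpha          # 0.0 = source color only
    return (1.0 - m) * np.asarray(source_rgb) + m * np.asarray(fluff_rgb)


base = (0.4, 0.3, 0.2)                       # baked source color
tint = (1.0, 1.0, 1.0)                       # per-instance gradient color
tip_mask = lambda v: float(v > 0.7)          # only the tips are colored

print(fluff_fragment(base, tint, tip_mask, 0.2, 0.0, 1.0))  # base color
print(fluff_fragment(base, tint, tip_mask, 0.2, 0.6, 1.0))  # fully tinted
```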

Here are some examples of the final system.

(Just adding fluff - nothing new here.)

https://gyazo.com/12dd8239f08396e1c225033d17c05593

(Just sculpting and tweaking of instances - nothing new.)

https://gfycat.com/PopularSparseAmericanrobin

(Here, we're importing two textures we quickly threw together in photoshop - one clip mask and one color mask. Then, we set the group to use the clip mask to change the silhouette of the appendages.)

https://gfycat.com/WelltodoTinyAfricanfisheagle

(No clip mask used.)

(Clip mask active.)

(Here we're setting the color mask for the appendages and gradient index for the group. Then we sculpt the coloring of the individual appendages. The example is a bit silly since the area is so small. It's mostly to show the possibility.)

https://gfycat.com/BetterRevolvingBluebottle

(Finally, we show the UV offset and alpha features.)

https://gfycat.com/HugeSmugEastsiberianlaika

Material system

This is more of a side note than anything, but I thought I'd mention it anyway. While the PBR workflow will probably be sufficient for most things, we have implemented a system where you're able to select which shader to use when rendering characters and parts. This is done through a dropdown menu. When a new shader is selected, a material is instanced and then populated with the textures from the texture builder. Each material can also have its own custom properties. For example, in the future we intend to offer a sub-surface scattering material, and perhaps a thickness slider would be appropriate for it. Right now we don't have any shaders other than the PBR one, so it's mostly a system for the future, when we may want to offer more types of shaders...
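In sketch form (Python, hypothetical names - not our actual code), the flow is roughly:

```python
class Material:
    """An instanced material: a shader plus its textures and properties."""

    def __init__(self, shader, custom_properties):
        self.shader = shader
        self.properties = dict(custom_properties)
        self.textures = {}


# Each entry lists the shader's own custom properties. Only PBR exists
# today; the SSS entry is a speculative example of what could come later.
SHADER_LIBRARY = {
    "PBR": {},
    "SubsurfaceScattering": {"thickness": 0.5},
}


def on_shader_selected(shader_name, texture_builder_output):
    """Called when a new shader is picked from the dropdown: instance a
    material and populate it with the texture builder's textures."""
    material = Material(shader_name, SHADER_LIBRARY[shader_name])
    material.textures.update(texture_builder_output)
    return material


mat = on_shader_selected("PBR", {"albedo": "body_albedo",
                                 "normal": "body_normal"})
print(mat.shader, sorted(mat.textures))
```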

Penetration dynamics experimentation

Every once in a while, I feel the need to cheat a little bit and work with other things than the character creator. This month, I had a few ideas I wanted to try out for how penetration mechanics could work in the next iteration of the interaction systems. So I decided to do just that.

In Yiffalicious, we relied heavily on raycasting and a custom method for parsing those rays and applying their interpretation to a mesh. The method was relatively simple to work with and create content for, and I think it was a great first step to take. However, this method came with several limitations. Each ray could essentially only determine how much the orifice should stretch at a given point, but not actually physically take into account pushing, pulling or friction. On top of that, the rays didn't affect each other in any way. It's true we added systems for these later on, but they were essentially hacked together on top of a system that didn't actually support them. The push/pull mechanic only worked on the entire orifice (not individual points in it), and only in one direction. No matter how much we'd try to improve them, the limitations in the base idea and implementation would always manifest.

This month, we've been working on trying to remedy these issues by using a different penetration method altogether. In this new method, instead of using raycasts, we're relying on a physical model made up of colliders and springs, meticulously configured to achieve the kind of effect we're looking for:

https://gyazo.com/79bed241f99dafacb0443c3a68910f96

(Pushing and pulling is essentially given "for free".)

https://gyazo.com/981b40304265a4bd97767aae7368658c

(A setup such as this handles friction and angles in a much more physically realistic way.)

https://gyazo.com/a72ea37df5fbc6fcab2157591fafbd1b

Each point is connected to its neighbors, meaning if one point is affected, it will push and pull its neighbors too, propagating the effect through the whole configuration. Also, pushing and pulling of the whole object, together with adaptation to angle of the penetration, are essentially given "for free" using this setup.
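Here's a toy version of that propagation idea (a hedged sketch, not our actual solver - we use meticulously configured colliders and springs in the physics engine, whereas this just relaxes a ring of distance constraints):

```python
import numpy as np


def relax_ring(points, rest_lengths, iterations=50, stiffness=0.5):
    """Relax a closed ring of spring-connected points toward their rest
    lengths; displacing one point drags its neighbours along, and the
    effect propagates around the whole ring over the iterations."""
    pts = [np.array(p, dtype=float) for p in points]
    n = len(pts)
    for _ in range(iterations):
        for i in range(n):
            j = (i + 1) % n
            delta = pts[j] - pts[i]
            dist = np.linalg.norm(delta)
            if dist < 1e-9:
                continue
            # Move both endpoints to restore this spring's rest length.
            corr = stiffness * (dist - rest_lengths[i]) * (delta / dist)
            pts[i] += 0.5 * corr
            pts[j] -= 0.5 * corr
    return pts


# A unit ring of 8 points standing in for an orifice cross-section.
ring = [(np.cos(a), np.sin(a))
        for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
rest = [float(np.linalg.norm(np.subtract(ring[(i + 1) % 8], ring[i])))
        for i in range(8)]
ring[0] = (1.5, 0.0)                 # a "collider" pushes one point outward
deformed = relax_ring(ring, rest, iterations=100)
print(np.round(deformed, 2))         # neighbours have followed point 0
```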

There are a lot of question marks remaining with an implementation such as this, especially regarding how to interpret the results and apply them to a mesh. We do have a few ideas though, so hopefully we can put something together. This lies further down the road though, when we get to implementing the interaction systems - we still need to focus on the character creator first and foremost. Still, we think this is an interesting approach that's worth exploring more in the future.

Dogson stuff

Hello, this is Dogson! This month I've been busy remeshing and making sure the various parts such as heads, hands and feet fit as perfectly as they can. It's crucial to have the intersecting vertices identical to prevent faulty shading and to avoid holes in the mesh that can break the illusion.

On a different note, I started iterating on an equine head this last weekend, now that I have less of the laborious remeshing and refitting work to do. So I thought I'd share some images of it:

Summary

Illness and social gatherings did cut into the development of YL2 this month, but we still managed to get a lot of crucial things done. While the part system will probably require some final polish, it could for all intents and purposes be considered complete.

Bone scaling, penetration experimentation, and fluff coloring and masking are other tasks we've been working on this month. Dogson has continued the work on content for the character creator, including an equine head.

The idea now is to start getting the new content into the character creator, so we can finally start testing and tweaking the character assembly and shape layering system we developed earlier.

- odes


Comments

  • Capped at +/- 20% scale difference? Just wondering if this might be increased at some point, given the build can handle it? Would love to get access to some Giantess animations :blush:
  • odesodes Administrator
    @DrunkDragon
    Sliders will probably be capped somewhere as scaling characters and their parts too far away from base scale would cause problems in the physics simulation. I think the illusion of giants could be better implemented by placing them in a smaller environment (essentially a city model, for example). If in VR, the virtual eye distance could be reduced to match the desired scale. For example, to achieve the effect of 100 scale, the eye distance could be divided by 100, and the environment city could be modeled in scale 1/100. That would make the characters appear to be 100 times bigger but the simulations would still be run in scale 1.
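    In rough numbers (a sketch - 0.064 m is just a typical human eye distance, not a value from our code):

```python
def giant_illusion(desired_scale, base_eye_distance_m=0.064):
    """Keep the physics at scale 1; shrink the environment and the
    virtual eye distance instead, so characters merely *appear* huge."""
    return {
        "simulation_scale": 1.0,                   # unchanged
        "environment_scale": 1.0 / desired_scale,  # e.g. city at 1/100
        "vr_eye_distance_m": base_eye_distance_m / desired_scale,
    }


print(giant_illusion(100))
# {'simulation_scale': 1.0, 'environment_scale': 0.01,
#  'vr_eye_distance_m': 0.00064}
```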
  • Oh My God! You guys are cranking. Damn, that will be fun to play with. odes and Dogson, you've earned yourselves a new patron!
  • @odes Thanks for the info, but unfortunately, I don't have a VR set. Yet. But would it be able to handle a +/- 50% difference as max?
  • Oh, just to make it clear, I'm not talking about a giant giant, but just having a really, really tall female (300cm/10 feet)
  • It would be cool to have an option to disable limits. Like having a checkbox in the options menu, with a warning dialog after enabling it that informs the user that "Using parameters outside the limits may result in graphical glitches, broken physics and even crash the game. Use at your own risk. Do not file bug reports for problems using Yiffalicious while this option is enabled".
    Then, you can do absurd modifications to most parameters.
    All online downloads would have the tag "limits disabled" so they would be filtered out from the browser if you don't have this "feature" enabled.
    It may result in some interesting creations, and also a lot of hilarious ones.
  • I second @Horsie's idea. Being able to change things to absurd sizes or whatnot, similar to SFM, would always be fun.
  • odesodes Administrator
    edited January 2019
    @Onebiglotus

    Thanks!

    @DrunkDragon
    Oh, just to make it clear, I'm not talking about a giant giant, but just having a really, really tall female (300cm/10 feet)
    Ah ok, I wrongly assumed you meant giant giants.

    Hm, possibly. I mean it's certainly possible from a mesh and rendering perspective, the issue is how such a big character would behave in terms of simulation. Essentially, scaling something up acts as a magnifying glass, so any issues would become more easily noticeable, not to mention the penetration mechanics whose relative "resolution" would be lowered the more you scale the character up. It's a question that requires more testing before we can give a proper answer. It's better to keep parameters strict and gradually increase them as we gain confidence in the limits of our technology, rather than promise too much and break people's characters with restrictions later on.

    @Horsie
    warning dialog
    I'm not convinced. I mean, sure, we would essentially absolve ourselves of responsibility, but at the same time, is it really worth allowing characters that are going to cause the quality of the experience to go down?
  • @odes
    "is it really worth allowing characters that are going to cause the quality of the experience to go down?"

    Ah, but isn't it the user's prerogative to decide exactly what constitutes a decline in quality? Having been around this fandom for a decade and a half, I know for a fact that many with more... peculiar interests, shall we say, are willing to tolerate phenomenally poor quality of work if it means their interests get catered to. Slightly wonky physics simulation is nothing compared to that, in my estimation.

    I've also got a bit of a tangentially related experience to share. I recently played through Saints Row 2 again, partly to scratch the customizable character itch I've developed waiting for your project's demo. I was actually surprised by the amount of clothing customization options available, having largely forgotten about that since my last playthrough many years ago. Well, once I was done with that, I fired up Saints Row 3. There's a much greater variety of options for customizing the character's body (shiny silver skin, iridescent hair and completely white eyes, oh my) but the clothing options are considerably more limited. Gone are the texture variants for individual pieces, and you can no longer mix and match to the same degree due to there being fewer individual categories to fill (no differentiation between under- and overshirts, for example) and, to top it all off, there are a ton of clothes NPCs can wear that just aren't available for the player character. I'm guessing that last one is due to a fatal error of design judgement: having a separate mechanism for generating the player character and the NPCs, so the clothing isn't interchangeable, unlike in SR2. A similar issue presents itself in the customization of cars, but that's a whole 'nother can of worms and I've ranted enough as-is.

    The point here is that it's a bad idea to restrict player freedom and choice, especially in a very core part of gameplay. SR2 and 3 have their open world and insane gangland shenanigans to fall back on, but Yiffalicious 2 will be all about the user-generated characters and their interactions. Consider every restriction you impose on it with great care.

    Speaking of iridescent colours and clothing, by the by, are there any plans for those?
  • odesodes Administrator
    @Brownmane
    That's some good points, thanks for sharing.

    Clothing is definitely something we're looking at.

    Iridescent colors - possibly. We have a system where you're able to select which shader you want to use for each object's materials, so it's mostly a matter of actually having an iridescent shader. This is something I know HDRP is capable of, but right now we're on the built-in pipeline and will have to remain there until HDRP becomes production ready. But I know HDRP does support iridescent colors.


  • @odes
    allowing characters that are going to cause the quality of the experience to go down
    I'm sure no limitations can ensure there won't be awful creations.
    But, I perfectly understand not wanting the users of Yiffalicious to create content out of spec and/or based on glitches.
    So maybe the most diplomatic solution would be to put that in a (closed?) beta and see what happens. If the online browser overflows with grossly deformed content, purge the tagged content and disable the option.
  • I agree that 'unlimited' or less-limited options should be available for users to mess around with, even if things may bend or break in different ways. If the side effects of more freedom with customization are too negative or jarring, then I think it would be fine to limit the 'unlimited' creations to not being uploadable or something (if possible). Even if those creations couldn't be shared, the community would likely still have many emergent and creative findings and creations with fewer limits on them.
  • edited January 2019
    @odes
    Will you be able to change eye size and head proportions? I'm just curious, for creating MLP characters and other characters like Sonic characters.
  • odesodes Administrator
    @thereaper35
    You will be able to scale body parts, including heads. Eyes, however, will not be scalable, since they need to fit into the carefully designed eye sockets.
  • @odes
    Can the eye sockets be scaled, then? If not, it may be worth your while to find a way to make them scalable; eyes are arguably the most important part of the face, so maximum customizability of them gets you the biggest bang for your buck.

    Have you planned to make other parts of the face customizable, by the way? To use the horse head Dogson's made (pretty good job on that, by the way, I know from experience it's a hard thing to get to look good without compromising too much on the distinctive shape, so kudos) as an example, I'd personally want to slim the jaw down considerably so it would look better from the frontal angle (something I often end up doing with human faces in games, too, come to think of it...) but that's just me and my tastes. Not saying you should have it so that you can fiddle with every square inch of the mesh, but sliders for certain key areas are something I'd consider vital for any character customization system.

    Speaking of character deformation, any plans for making it possible to do so mid-scene? Would be handy for those of us who like a bit of transformation. I think a lot of it could be done with some clever use of separate characters that represent different stages of change and some fancy camerawork to conceal the fact that they're technically not changing in real time, but methinks having something change before the viewer's very eyes (like the colours/textures or limb proportions/shape) would go a long way to sell the illusion to the audience.

  • odesodes Administrator
    edited January 2019
    Can the eye sockets be scaled, then? If not, it may be worth your while to find a way to make them scalable; eyes are arguably the most important part of the face, so maximum customizability of them gets you the biggest bang for your buck.
    There are plenty of options for eye customization - scaling is just not one of them. While we do want to incorporate as much customization as we're able to, we also want to balance that with an art direction. Vastly different eye-to-head ratios for characters of the same species is one of those things that would break the art direction, imo. That said, we may offer some form of eye sizing at some point, but it's not a priority, and if it's ever implemented it will probably be subtle.

    Have you planned to make other parts of the face customizable, by the way? To use the horse head Dogson's made (pretty good job on that, by the way, I know from experience it's a hard thing to get to look good without compromising too much on the distinctive shape, so kudos) as an example, I'd personally want to slim the jaw down considerably so it would look better from the frontal angle (something I often end up doing with human faces in games, too, come to think of it...) but that's just me and my tastes. Not saying you should have it so that you can fiddle with every square inch of the mesh, but sliders for certain key areas are something I'd consider vital for any character customization system.
    For starters, it will only be a masculinity-femininity slider. More sliders may be implemented later on.

    Speaking of character deformation, any plans for making it possible to do so mid-scene? Would be handy for those of us who like a bit of transformation. I think a lot of it could be done with some clever use of separate characters that represent different stages of change and some fancy camerawork to conceal the fact that they're technically not changing in real time, but methinks having something change before the viewer's very eyes (like the colours/textures or limb proportions/shape) would go a long way to sell the illusion to the audience.
    As far as character shapes go, inflation is the only thing we can promise will be possible to change in real time. Other forms of change are not completely ruled out - it's just not a priority.
  • @odes when will be the next patron update? I mean here in the forum
  • odesodes Administrator
    @blain
    We're late with the Jan update even for patrons. It's not out yet. Next update will be like a sum of both Jan and Feb. Once we have posted it on patreon, it will show up here around 14 days later.
  • @odes does this mean that patrons should expect the next update before march?

  • odesodes Administrator
    edited February 2019
    @Horsie
    Hopefully yeah. It's hard to say, since this update depends a lot on Dogson's and my coordinative success, which is very much a back and forth process. You never know how many iterations it's going to take.
  • @odes
    Ok, thank you. I don't want to nag or anything, but maybe you should try to give a bit more frequent updates, even if there isn't too much to tell. If you're concerned about being spammy, maybe you can ask the community what it thinks about it?
    I'm just saying this because I hate when there's "radio silence", but I also don't like to ask for how it's going, it makes me feel like I'm pressuring, and it's not my intention.
  • odesodes Administrator
    edited February 2019
    @Horsie
    Naturally, we had not intended to delay things in this fashion. We try to write a major update every month, but these past weeks have been exceptionally problematic. I couldn't have imagined we would run into this many problems and that things would be this delayed. If I had known, I would have written something sooner. Now we're most likely just days away from posting this major update, so I feel it's kinda pointless writing something at this instant.

    In the end, what matters is that we're making progress, and in this regard... I can't wait to show you all the stuffs. :)
  • @odes Ah ok, I thought that January's "short update" post was the update for January - that's why I was wondering why it was taking so long.
  • edited February 2019
    @odes Oof, no facial feature customization via slider morphs is a huge deal breaker for me. I was hoping for something similar to Sims 4 CAS, just with sliders instead of dragging and pulling facial and body parts around and forming them like clay. My hype for YL2 has suddenly shrunk to almost non-existence. :|
  • Think about working with Fek (rack, rack 2), your character and ik animation systems + his gameplay systems... Think about it.
  • odesodes Administrator
    @Ttsar
    Unlikely. We both have our own way of doing this. Besides, my understanding of fek is that he works alone.

    Gameplay systems is precisely the reason why Yiffalicious exists - to get rid of them. If running around in an environment and playing mini-games is what you want from a porn game, our apps are not for you.
  • @Oozaru @odes I too hope that there will be sliders for the facial appearance.
  • @odes
    I know that what's most important is making progress, and I wasn't talking exclusively about this delay, but in general: I think having only one update a month is too little.
    I know you compensate by doing very large, detailed, and well written updates with lots of videos and images, and I really enjoy them a lot.
    So what about making several small updates every month, plus the big one you are already doing right now? And by small, I mean one or two paragraphs, no need for images or videos... the stuff one would publish on a Twitter account.
  • odesodes Administrator
    edited February 2019
    @Horsie
    The problem with updates is that not only does it take time for the results of our work to condense into something worth sharing, but it also takes time to prepare said results to get them into an adequate state. It doesn't really matter how short an update is, if what we share simply isn't ready yet. Increasing the frequency of updates would inevitably force us to spend more time on sharing preparation than actual work.