Working on a project of this magnitude definitely feels a bit like putting together a puzzle. In the beginning, it was hard to figure out how all the pieces were supposed to fit together, or even what all the pieces were in the first place. We had to do a ton of research and testing before we got something that started to make sense. I'm sure there are more pieces left to find, but with each piece that we manage to place, others become easier to find and place as a result. I think the finding and placing of these pieces will accelerate the closer we get to the finish line, and that we're close to, if not already at, the point where it all starts coming together.
But enough talk! Let's get into it!
Hierarchical state manager
Starting with the boring stuff. You can skip this whole part if you're not interested in programming mumbo jumbo.
Projects such as YL2 are very complex in nature, and as time goes on, they have a tendency to grow even more complex. Of all the challenges that development endeavors are faced with, it is my belief that complexity is the biggest one, at least over the duration of time. If not taken seriously and treated properly, complexity will eat away and eventually kill your project.
In YL2 I'm treating complexity with tremendous respect, and take every precaution I can think of to keep it in check. This is by no means an easy task, as it means developing in such a way that you consider things that you may not have even thought of yet. That might sound like a contradiction or impossibility, but with the right development methods, your project can become robust enough to adapt to unexpected changes.
The art of software development is truly at its core about making abstractions, so that complex problems and their relationships become easier to understand and handle. The real challenge is to figure out how to make these abstractions. To accomplish this, there are a large number of methods (or design patterns) you can employ for any one specific problem. In the end what these solution patterns are usually about is solving the distribution of responsibilities, so that the code doesn't become an entangled mess. Instead, each part of the code is contained in a small decoupled module that has a very clearly defined purpose/responsibility, and that doesn't know any more about the overall design of the application than it needs to. If you're able to make abstractions to arrive at such small modules, then the robustness of the project is greatly increased, the risk for bugs much lower and its probability of survival much greater.
So far, we have developed perhaps 80% of all the modules that will be used in the character creator. However, these are just separate modules - raw code that isn't yet reachable by the user. To make them reachable, we needed to create another layer of code that communicates with these modules and interfaces them with the user.
To do this, we have developed an object-oriented, hierarchical state manager, that helps us manage the complexity that inevitably arises when trying to glue all these systems together. Basically the way this works is that we have separately defined states with very clearly defined purposes, that can be transitioned into and out of at will. What's more, each one of these states can have their own behavior stacks inside of them, and those their own etc. It becomes a little bit like the movie Inception, where we can push into deeper and deeper states, and then pop back out again to the previous module's states, without any one of these states knowing anything about the other ones. This keeps the code from becoming a monolithic mess that is impossible to maintain in the long run.
(Each rectangle is a state that can be pushed and popped from the state manager's stack. Furthermore, each state can have its own behavior stack.)
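To make the idea more concrete, here's a minimal sketch of such a hierarchical state manager. All names (`State`, `StateManager`, `enter`/`exit`) are made up for illustration - the actual YL2 implementation isn't public - but the core idea is the same: a stack of states, where each state owns a nested stack of its own.

```python
# Hypothetical sketch of a hierarchical state manager.
# Each state owns its own nested behavior stack, so states can be
# pushed "Inception-style" without knowing about each other.

class State:
    """A state with a clearly defined purpose and its own nested stack."""
    def __init__(self, name):
        self.name = name
        self.behaviors = StateManager()  # each state nests its own stack

    def enter(self):
        pass  # e.g. register shortcuts, swap materials, show UI

    def exit(self):
        pass  # undo whatever enter() did


class StateManager:
    """A stack of states; only the top state is active."""
    def __init__(self):
        self._stack = []

    @property
    def current(self):
        return self._stack[-1] if self._stack else None

    def push(self, state):
        state.enter()
        self._stack.append(state)

    def pop(self):
        state = self._stack.pop()
        state.exit()
        return self.current  # control returns to the level above
```

For example, selecting a mask inside the texture builder would push a mask-editing state onto the texture builder's own behavior stack; deselecting it pops that state, and the level above resumes without ever having known about it.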
A simple example of such a transition that we've been working on is when you select a mask inside a texture builder. When that happens, the program transitions into a new state that changes the interface, the keyboard shortcuts and the behavior to accommodate that mask's needs. For example, in the case of the RadialGradient mask, we start showing its source point (which also becomes editable with the transform gizmo), we change the material of the model to one showing the mask selection, and keyboard shortcuts and mouse gestures are registered for increasing and decreasing the mask's influence. Also, a message at the bottom pops up, informing the user of these keyboard shortcuts. When the editing is done, the state manager nicely pops out of this state and enters the one in the level above.
(When the mask object is selected, another state is pushed onto the behavior stack, which defines behavior specifically made for the mask. When the object is deselected, the behavior stack returns to the behavior in the level above (the previous one).)
(Note, it's not the radial gradient that we're showing off here - that's something we already did before. The difference now is that instead of being a separate test case, the behavior that governs the radial gradient has been integrated as part of the app that is now reachable through the interface, with help of the hierarchical state manager.)
Characters in YL2 can differ vastly in size and shape. This means there are a couple of challenges regarding how our skin systems should work while the character is being changed. For example, our masking layers oftentimes use locations as input when generating their output - the radial gradient mask takes a point and a radius as input when filling its texture. So the question is: how should this point be expressed, so that it remains in roughly the same location even as the character changes in shape and/or size [during character authoring]?
Well, an obvious answer could be that the point is expressed as a local position inside a certain bone's own transformation matrix. However, this is not necessarily always a good way to express things, as the matrix and point themselves don't know how they relate to others. A local point of, let's say, [0,0,1] isn't necessarily the same distance away from its child bone as it would be in another model shape/size. Furthermore, the offset from bone to surface can change a lot depending on what shapes you're applying to the model.
To solve this problem, we have created a new system that can handle all of these cases. We call this system OmniPoint.
An OmniPoint can be expressed in 4 different ways - as a world point, as a local point, as a percentage between two bones, and finally as a surface point.
Here are a few examples:
Expressed as world point
Expressed as local point
Expressed as point along bone (+ offset)
(When pressing and holding LeftShift, the point snaps to the closest bone and the OmniPoint switches to PointAlongBone mode.)
Expressed as surface point (+ offset)
(When pressing and holding LeftControl, the point snaps to the closest surface point and the OmniPoint switches to SurfacePoint mode.)
Using the OmniPoint system, the point is ensured to stay in the same place even as the character is changed during authoring.
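As a rough sketch, the four modes above could look something like this. The class and field names are assumptions for illustration (only the four representations themselves come from the text), and the math is heavily simplified - a real implementation would use full bone matrices rather than head positions:

```python
# Hedged sketch of the OmniPoint idea: one point, four interchangeable
# representations, each resolving to a world-space position.
from dataclasses import dataclass

def add(a, b):     return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b):     return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def scale(a, s):   return (a[0] * s, a[1] * s, a[2] * s)
def lerp(a, b, t): return add(a, scale(sub(b, a), t))

@dataclass
class OmniPoint:
    mode: str   # "world" | "local" | "along_bone" | "surface"
    data: dict  # mode-specific parameters

    def world_position(self, rig):
        if self.mode == "world":
            return self.data["point"]
        if self.mode == "local":       # offset in a bone's space (simplified)
            bone = rig[self.data["bone"]]
            return add(bone["head"], self.data["offset"])
        if self.mode == "along_bone":  # percentage between two joints + offset
            bone = rig[self.data["bone"]]
            p = lerp(bone["head"], bone["tail"], self.data["t"])
            return add(p, self.data.get("offset", (0.0, 0.0, 0.0)))
        if self.mode == "surface":     # mesh point + offset along its normal
            v, n = self.data["vertex_pos"], self.data["normal"]
            return add(v, scale(n, self.data.get("offset", 0.0)))
        raise ValueError(self.mode)
```

The key property is that modes like "along bone" and "surface point" resolve against the *current* rig and mesh, so the resulting world position follows along as the character's shape and proportions change.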
(Btw, don't mind the head and feet. They're missing UVs so that's why they're not affected by the gradient.)
You might already have noticed in the local point image above that we have implemented a list selection system. It's a generic system that we can show for all sorts of selections the user may want to make. It's very similar to the Unity list selection, and is even filterable. This will come in handy when selecting references to other objects, for example when selecting what mask to use for a layer in the texture builder, or when selecting what characters are to be used in a connection (for the sex systems later on). (In case you didn't know, characters' poses and connections between them will be separated, so you can define connections after posing the characters.)
Creating a rig for a model can be a challenging process as it is, so as you might imagine, creating a single universal rig and bone mapping (skinning) that is supposed to work across a vast amount of different body types and species is a very hard nut to crack. Bone deformation can cause artifacts whose only remedy is custom-made blend shapes. However, with the vast amount of different shapes and combinations we intend to support in the character creator, creating such pre-authored blend shapes can be next to impossible. For this reason, we have tried to look beyond conventional methods to face this issue.
DeltaMush is a tech that has been around for years in 3D authoring software, but that has yet to become anywhere near a standard in real-time rendering. DeltaMush acts as a post bone deformation filter, that can smooth out artifacts while maintaining details. In Blender, you might know this filter as the "Corrective smooth" modifier.
To give you a better idea of how it works, here's a step-by-step demonstration of the algorithm.
DeltaMush works in two steps - a pre-calculation step (deltas), and then the actual filter step. The pre-calculation is done by smoothing the mesh when it is in its "rest pose", i.e. the pose the model is in when no bone deformation has been applied:
Then, it undoes this smoothing by calculating the difference between the rest pose mesh and the smoothed one:
This difference, the "deltas", is stored in an array. Now when we have these deltas, we're ready to perform the actual filter.
So given a pose:
We smooth the mesh in its posed state:
Lastly, instead of simply moving the vertices back to the pre-smoothed pose (which would make the whole thing useless), we apply the stored deltas (by transforming them by the pose) to offset the vertices.
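The two steps above can be sketched in a few lines. This is a toy version on a 1-D chain of vertices with a simple neighbor-average smooth - a real DeltaMush stores each delta in a per-vertex tangent frame so it rotates with the pose (that's the "transforming them by the pose" part), which is omitted here for clarity:

```python
# Toy sketch of the two DeltaMush steps: precompute deltas from the
# rest pose, then smooth the posed mesh and re-apply the deltas.

def smooth(verts, iterations=1):
    """Blend each interior vertex toward its chain neighbors."""
    v = list(verts)
    for _ in range(iterations):
        v = [v[0]] + [
            tuple((a + 2 * b + c) / 4 for a, b, c in zip(v[i - 1], v[i], v[i + 1]))
            for i in range(1, len(v) - 1)
        ] + [v[-1]]
    return v

def precompute_deltas(rest_verts):
    """Step 1: deltas = rest-pose mesh minus smoothed rest-pose mesh."""
    smoothed = smooth(rest_verts)
    return [tuple(r - s for r, s in zip(rv, sv))
            for rv, sv in zip(rest_verts, smoothed)]

def delta_mush(posed_verts, deltas):
    """Step 2: smooth the posed mesh, then add back the stored deltas."""
    smoothed = smooth(posed_verts)
    return [tuple(s + d for s, d in zip(sv, dv))
            for sv, dv in zip(smoothed, deltas)]
```

A nice sanity check falls out of the construction: running the filter on the unchanged rest pose reproduces the rest pose exactly, since the deltas undo precisely the smoothing that was just applied. Only posed vertices actually get corrected.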
This month we've been working on a custom real-time implementation of delta mush, to see if it can help us in our pursuit of creating models that are as artifact-free as possible. This came with a variety of challenges. DeltaMush is a very computationally heavy task, so doing it in real-time every frame while maintaining high framerates definitely took some thought. Furthermore, since the mesh can change shape during the course of the interaction, we must be able to rebind the delta mush filter on the fly to adapt to the new shape (this would be the equivalent of Blender's "bind" option for the rest source in the Corrective smooth modifier settings). The only way to do this was to off-load these computations to the GPU using ComputeShaders, which is a feature that requires DX11 hardware to work.
Below are results of our research:
The results didn't turn out quite as well as I was hoping they would, but they still make a visible difference - usually for the better. Our real-time DeltaMush implementation seems particularly efficient at eliminating artifacts in the groin area when the shape is fully applied, as you can see here:
It also seems very good at smoothing vertices affected by long chains of bones, such as those in tails. Here's a test we did on dragoness:
It remains to be seen if we will choose to keep this tech. It does make things a fair bit more complex, as the vertices are offset from their posed locations, which means that parts that have been applied on the mesh (for example fluff) would have to be adapted as well. But regardless if we choose to keep this or not, our understanding of GPU processing has been greatly increased, and I can see how we could potentially make the penetration mechanics much more fleshed out in YL2 compared to Yiffalicious by utilizing the computational power of GPUs, so I don't think this time was wasted.
Delta Mush & posing tech demo
We have put together a small tech demo showing off our work on delta mush. You can grab it here if you're a $12 patron. This demo only works on DX11 systems (i.e. systems that have hardware support for DX11). If your system doesn't have DX11 hardware support, then don't pledge to get access to it since the demo will not work on your system!
Some other things to note about this demo:
In addition to showing off the delta mush tech, you can also pose the model similarly to how you can pose models in Yiffalicious. There are also other parts of the framework that are accessible in this demo. For example, it has an undo-redo system, there's some simple serialization and de-serialization with the pose saving and loading, and the GenericList is also present as you might discover when you load a pose. The adaptive rig tech is active as well, as you can see if you play around with the shape influence slider which will also move the bones around and rebind the mesh. The Properties inspector and Outliner are there too, as are the menu options. We've tried to write detailed explanations for all the properties in the demo, so try hovering the mouse over each information icon to see what they do.
New mouse pivot tech
The middle-click-to-place camera pivot has also had a rework (since Yiffalicious). Instead of relying on raycasts and simple physical representations, the depth is fetched from a camera at the pixel location of the mouse, which gives a much higher-precision result. Press middle mouse to place the pivot at the cursor location:
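The core of the depth-based approach is unprojection: sample the depth buffer at the cursor's pixel, then invert the camera's view-projection transform to recover the world-space point. Here's a sketch of that math, assuming an OpenGL-style NDC cube ([-1,1] on all axes); engine specifics (Unity's depth texture API, reversed-Z, etc.) are left out:

```python
# Sketch of depth-based pivot placement: unproject (pixel, depth)
# back to a world-space point via the inverse view-projection matrix.
import numpy as np

def pivot_from_depth(mouse_px, depth, screen_size, view_proj):
    """Return the world-space point under the cursor.

    mouse_px:    (x, y) cursor position in pixels
    depth:       depth buffer value at that pixel, in [0, 1]
    screen_size: (width, height) in pixels
    view_proj:   4x4 combined view-projection matrix
    """
    x = (mouse_px[0] / screen_size[0]) * 2.0 - 1.0  # pixel -> NDC x
    y = (mouse_px[1] / screen_size[1]) * 2.0 - 1.0  # pixel -> NDC y
    z = depth * 2.0 - 1.0                           # [0,1] depth -> NDC z
    clip = np.array([x, y, z, 1.0])
    world = np.linalg.inv(view_proj) @ clip
    return world[:3] / world[3]                     # perspective divide
```

Because the point comes straight from the rendered depth at that exact pixel, it lands on the visible surface with sub-raycast precision, with no need for physics colliders to approximate the mesh.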
Last month we revealed we intend to create a body type selection system for our character creator. This is due to us wanting to implement a wide array of different body types that simply wouldn't be possible if we only used sliders. However, rather than just being able to select a single body type and its influence, this month we've been working on a shape layering system that uses masks when applying the shape. Using such a system, you could, for example, take a muscular body type as a base for the model, and then cherry-pick the belly area from one of the fatter body types.
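The layering idea above boils down to per-vertex blend-shape offsets, each layer weighted by its own mask. Here's a minimal sketch under that assumption - the function and parameter names are made up, only the concept (base shape plus mask-weighted shape layers) comes from the text:

```python
# Hypothetical sketch of masked shape layering: each layer contributes
# per-vertex offsets, scaled by a per-vertex mask and a global influence.

def apply_shape_layers(base_verts, layers):
    """
    base_verts: list of (x, y, z) vertices (e.g. the muscular base).
    layers:     list of (offsets, mask, influence) tuples, where
                offsets matches base_verts, mask holds per-vertex
                weights in [0, 1], and influence is the layer slider.
    """
    out = [list(v) for v in base_verts]
    for offsets, mask, influence in layers:
        for i, off in enumerate(offsets):
            w = mask[i] * influence  # vertex weight for this layer
            for axis in range(3):
                out[i][axis] += off[axis] * w
    return [tuple(v) for v in out]
```

So cherry-picking a belly would mean taking the offsets of a fatter body type as one layer, with a mask that is 1 over the belly vertices and falls off to 0 elsewhere.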
The "backend" of this system, i.e. the raw modular code, has been defined. What remains now is to integrate it into the character creator and interface it with the user (in addition to creating and preparing content for it). For this we need the hierarchical state manager that we described above.
Developing a universal mesh that's supposed to work across so many different body types and species is definitely a scary affair. Such a mesh becomes really sensitive to changes, because if we notice an error in it, all the other models that interface with it (body parts, essentially) will potentially have to be altered as well. This does make things a bit difficult, because we need to think things through really hard before we can work on other parts with confidence.
So far we've made several iterations that have profoundly changed the base mesh in some way, due to us finding problems with it, or simply because we came up with a better way to do things. In order to keep these potential changes down, it is tempting to take a simple path to make things easier. Previously, we had intended to keep the hands unalterable, since that would have made things less complex. But as we continued down that path, it didn't feel right, so now we have decided to cut them out along with the rest (feet and head). The idea is to eventually offer different types of hands (just as we offer different types of feet/legs).
When it comes to creating a universal mesh such as this, a big question is where we should make these cuts. Because if we cut too close to the separate body part, it means we lose versatility, but if we cut too far, it means maps' seams will become harder to conceal and the baking of maps much more complex (as we would have to bake maps per body part per shape to fit with the body). So it's a question of the lesser of two evils.
Originally we had the idea of creating pre-made texture masks that would work across all the body types and species. For example, there could be a mask that adds some flame-like design to the legs and feet. However, this is something we're starting to realize will be impossibly complex to do if it is to fit all the species and body types. Instead, we were thinking of creating a mask import feature as well as a model export feature, so you would be able to import the character into your favorite texture authoring software, create your masks there, and then import them back into the app. That way we bypass the problem of fitting a pre-made mask texture on all body types and species [in the seam area], since the user will have a final body-type and species selection when exporting the model. The problem with this way of doing things, though, is size, because textures take a shit ton of space. So we're thinking maybe 1 or 2 channels will be imported from the custom texture, then compressed and stored within the character file. That way it still won't weigh more than about 2 MB, so it'll be quick to download.
We will still create generic masks - like selections for noses, eyes, nails etc - as well as generic patterns for the body. But the idea of creating patterns that work across all body types and species in the areas connecting the body with separate parts is something that is perhaps best handled by the community itself. I think custom importable masks will offer far more diversity in the long run too. Everybody wins?
This month we've been developing systems to handle much of the complexity that arises when combining our different systems together and interfacing them to the user. We have also been looking into technologies to increase the quality of the mesh deformation for our character models. There is a tech demo available for $12 patrons regarding this research.
Creating a universal mesh is a challenge, but after several iterations we think we're getting the hang of it and feel more confident about creating content for it and integrating that into the character editor.