Unity coder? I have questions


    I have been hired to develop content for an app, and that's going on at the moment. I don't know coding, and that's proving to be a problem, because I have a sneaking suspicion that the coders on this job simply aren't all that good. When they tell me something can't be done, or that it can be done but it's a major problem and would take weeks, I have my doubts. There is one issue in particular I'd love to run by someone, though I'll probably have a few 'can this be done?' follow-on questions. I have browsed Unity boards, and the truth is I'm out of my depth there, so I thought I'd ask here.

    The issue is related to a system of animation we're using in cut scenes and how states can be altered for lip sync, but before I type out the problem itself I'd better check whether there's anyone who might be able to help. So: do you know Unity, and are you familiar with using animation in it? If so, let me know, as I'd love a view on my problem.

    #2
    Is the issue related to the Unity-specific backend or to FaceFX itself (which is a middleware solution for lip syncing that's also used in other engines like Unreal)?

      #3
      Hmmm... that's a good question. Nobody has mentioned using FaceFX, so I don't think it's that. The animation is 3D but done to look 2D (the project started before Unity did 2D, apparently, so poor timing there), and as a result there are a few oddities to the methods, which are part of the main issue I'm dealing with. It's animated in 3ds Max and then put together in Unity. The issue isn't so much the lip sync itself; it's about assigning different sets of mouths to different parts of dialogue - for example, happy mouths, sad mouths and so on. Because they're using an odd 3D-for-2D method, it's a little different. If you reckon you might have a suggestion, or know whether what I'm thinking is actually impossible, I can elaborate on the whole thing.
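      If it helps frame the question: "different sets of mouths for different parts of dialogue" can be modelled as a two-level lookup, where the dialogue line carries an emotion tag and the lip-sync phoneme picks a mouth within that set. A minimal sketch of that idea, in Python with entirely invented names (this is not any actual Unity API, just the selection logic):

      ```python
      # Hypothetical sketch: a dialogue line's emotion tag selects a mouth *set*;
      # the current lip-sync phoneme then selects a mouth *within* that set.
      MOUTH_SETS = {
          "happy": {"ee": "happy_ee", "oh": "happy_oh", "closed": "happy_closed"},
          "sad":   {"ee": "sad_ee",   "oh": "sad_oh",   "closed": "sad_closed"},
      }

      def pick_mouth(emotion, phoneme):
          # Fall back to the closed mouth if a set has no shape for this phoneme.
          mouth_set = MOUTH_SETS[emotion]
          return mouth_set.get(phoneme, mouth_set["closed"])
      ```

      Whether this is easy in your project depends on how the existing animation system is wired up, but the logic itself is small.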

        #4
        Hmm, not sure I can be that much help, I'm afraid. But from what you describe, that sounds like pretty standard morph-based animation, which is normal for facial animation in games. I would have thought the state logic should be pretty simple for the use case you describe - simply defining interpolation for the blending between each of your morph states and then keyframing those at the various transition points within the dialogue. Is the 3D-to-2D look done in post-processing to create a cel-shaded style?
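        To make "interpolation between morph states" concrete: each mouth state is just a set of blendshape weights, and a transition is a blend between two such sets over time. A language-agnostic sketch in Python (in Unity you'd be driving blendshape weights on a skinned mesh instead; all names here are invented):

        ```python
        # Hypothetical sketch of morph-state blending: each "mouth state" is a
        # dict of blendshape weights, and transitions keyed in the dialogue
        # timeline interpolate linearly between states.
        def lerp(a, b, t):
            return a + (b - a) * t

        def blend_states(state_a, state_b, t):
            """Linearly blend two morph states (dicts of blendshape weights)."""
            keys = set(state_a) | set(state_b)
            return {k: lerp(state_a.get(k, 0.0), state_b.get(k, 0.0), t)
                    for k in keys}

        happy_oh = {"mouth_open": 1.0, "smile": 0.8}
        sad_ee   = {"mouth_wide": 1.0, "frown": 0.6}

        # Halfway through a keyed transition between the two states:
        mid = blend_states(happy_oh, sad_ee, 0.5)
        ```

        The keyframes in the dialogue would just pick which two states to blend and sweep t from 0 to 1 across the transition.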

          #5
          No, the characters are completely flat. Sort of like PaRappa the Rapper, only they're using a piece-switching system to replicate some 2D methods, which I think is where it gets weird. So if a character is going from an 'eeee' mouth to an 'oh' mouth, they're not morphing a mesh; they're actually bringing out a whole new mouth piece. The pieces are hidden behind each other in three dimensions, so it looks like 2D animation when seen straight on.
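          For anyone following along, the piece-switching described above is simple state logic: exactly one pre-made mouth piece is visible at a time, and "changing mouths" means hiding the current piece and showing another. A sketch of that logic in Python (in Unity this would presumably be toggling child objects' visibility; the class and names here are invented for illustration):

          ```python
          # Hypothetical sketch of piece-switching: rather than morphing a mesh,
          # exactly one whole mouth piece is visible at any moment, and a swap
          # hides the old piece and shows the new one.
          class MouthSwitcher:
              def __init__(self, piece_names):
                  self.visible = {name: False for name in piece_names}

              def show(self, name):
                  # Show the requested piece and hide every other one.
                  for key in self.visible:
                      self.visible[key] = (key == name)

          mouth = MouthSwitcher(["ee", "oh", "ah", "closed"])
          mouth.show("ee")   # character holds an 'eeee' mouth
          mouth.show("oh")   # swap the whole piece for 'oh'
          ```

          Layering the pieces behind each other in depth, as described, is then just how the swap is hidden from a straight-on camera.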

            #6
            That certainly sounds like a different way of doing things. As far as I know, the usual practices for (non-post-processed) 2D tend to be either (i) traditional sprites rendered onto simple one- or two-poly plane objects, or (ii) meshes where all the points are co-planar (though I think that would only work with vector-style graphics without issues with strange texture warping). Hope it all goes well; sorry I haven't been of much use.

              #7
              No problem. Thanks for asking about it, just in case. Yeah, it's an odd approach that was chosen to replicate a very specific 2D style in a system not designed for it at all.
