Analysis of Torso Movement for Signing Avatar Using Deep Learning
Seventh International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual (SLTAT 2022)
Marseille, France, 24 June 2022
Abstract for Book of Abstracts
Avatars are virtual or on-screen representations of a human, used in various roles for sign language display, including translation and educational tools. Although the ability of avatars to portray acceptable sign language with believable, human-like motion has improved in recent years, many still lack the naturalness and supporting motions of human signing. Such details are generally not included in linguistic annotation, yet they are essential to displaying lifelike, communicative animations. This paper presents a deep learning model for use in a signing avatar. The study focuses on coordinating torso movement with the movements of other body parts: the proposed model automatically computes torso rotation from the avatar's wrist positions. The resulting motion can improve the user experience and engagement with the avatar.
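The abstract does not specify the network architecture, so as an illustration only, the wrist-to-torso mapping it describes could be sketched as a small feed-forward regressor: two wrist positions in, three torso rotation angles out. All names, dimensions, and the randomly initialised weights below are hypothetical stand-ins, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3D positions of both wrists (6 values)
# mapped to torso rotation as Euler angles (3 values).
IN_DIM, HIDDEN, OUT_DIM = 6, 32, 3

# Randomly initialised weights stand in for trained parameters.
W1 = rng.normal(0.0, 0.1, (IN_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT_DIM))
b2 = np.zeros(OUT_DIM)

def torso_rotation(wrist_positions: np.ndarray) -> np.ndarray:
    """Map concatenated left/right wrist positions (shape (6,))
    to torso rotation angles (shape (3,), radians) with a
    one-hidden-layer feed-forward network."""
    h = np.tanh(wrist_positions @ W1 + b1)  # hidden layer
    return h @ W2 + b2                       # yaw, pitch, roll

# Example: both wrists slightly forward and to the signer's right.
wrists = np.array([0.25, 1.1, 0.3, 0.4, 1.0, 0.35])
angles = torso_rotation(wrists)
print(angles.shape)
```

In a real system the weights would be learned from motion-capture data of human signers, so that the torso follows the wrists the way a human torso does rather than staying rigid.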