GarTemFormer: Temporal transformer-based for optimizing virtual garment animation
| Main Authors: | Jiazhe Miao, Tao Peng, Fei Fang, Xinrong Hu, Li Li |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-12-01 |
| Series: | Graphical Models |
| Subjects: | Simulation; Physical constraints; Virtual try-on; Motion sequences |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1524070324000237 |
| _version_ | 1846138599036157952 |
|---|---|
| author | Jiazhe Miao; Tao Peng; Fei Fang; Xinrong Hu; Li Li |
| author_facet | Jiazhe Miao; Tao Peng; Fei Fang; Xinrong Hu; Li Li |
| author_sort | Jiazhe Miao |
| collection | DOAJ |
| description | Virtual garment animation and deformation constitute a pivotal research direction in computer graphics, with extensive applications in domains such as computer games, animation, and film. Traditional physics-based methods can simulate the physical characteristics of garments, such as elasticity and gravity, to generate realistic deformation effects. However, the computational complexity of such methods hinders real-time animation generation. Data-driven approaches, on the other hand, learn from existing garment deformation data, enabling rapid animation generation. Nevertheless, animations produced this way often lack realism and struggle to capture subtle variations in garment behavior. We propose an approach that balances realism and speed: by considering both spatial and temporal dimensions, we leverage real-world videos to capture human motion and garment deformation, thereby producing more realistic animation effects. We address the complexity of spatiotemporal attention by aligning input features and calculating spatiotemporal attention at each spatial position in a batch-wise manner. For garment deformation, garment segmentation techniques are employed to extract garment templates from videos. Subsequently, leveraging our Transformer-based temporal framework, we capture the correlation between garment deformation and human body shape features, as well as frame-level dependencies. Furthermore, we use a feature fusion strategy to merge shape and motion features, and we address penetration between clothing and the human body through post-processing, generating collision-free garment deformation sequences. Qualitative and quantitative experiments demonstrate the superiority of our approach over existing methods, efficiently producing temporally coherent and realistic dynamic garment deformations. (Illustrative code sketches of the batch-wise attention, feature fusion, and collision post-processing steps appear after the record table below.) |
| format | Article |
| id | doaj-art-4e2041350ff646049f6d9355024fb6fd |
| institution | Kabale University |
| issn | 1524-0703 |
| language | English |
| publishDate | 2024-12-01 |
| publisher | Elsevier |
| record_format | Article |
| series | Graphical Models |
| spelling | doaj-art-4e2041350ff646049f6d9355024fb6fd; 2024-12-07T08:25:15Z; eng; Elsevier; Graphical Models; 1524-0703; 2024-12-01; Vol. 136, Art. 101235; GarTemFormer: Temporal transformer-based for optimizing virtual garment animation. Jiazhe Miao (School of Computer and Artificial Intelligence, Wuhan Textile University, No. 1 Sunshine Avenue, Jiangxia District, Wuhan, 430200, Hubei Province, China; Engineering Research Center of Hubei Province for Clothing Information, Wuhan, 430200, Hubei Province, China); Tao Peng (School of Computer and Artificial Intelligence, Wuhan Textile University; Engineering Research Center of Hubei Province for Clothing Information; China National Textile and Apparel Council Key Laboratory of Intelligent Perception and Computing, Wuhan, 430200, Hubei Province, China; corresponding author); Fei Fang (School of Computer and Artificial Intelligence, Wuhan Textile University; Engineering Research Center of Hubei Province for Clothing Information); Xinrong Hu (School of Computer and Artificial Intelligence, Wuhan Textile University; China National Textile and Apparel Council Key Laboratory of Intelligent Perception and Computing); Li Li (School of Computer and Artificial Intelligence, Wuhan Textile University; Engineering Research Center of Hubei Province for Clothing Information). http://www.sciencedirect.com/science/article/pii/S1524070324000237; Simulation; Physical constraints; Virtual try-on; Motion sequences |
| spellingShingle | Jiazhe Miao; Tao Peng; Fei Fang; Xinrong Hu; Li Li; GarTemFormer: Temporal transformer-based for optimizing virtual garment animation; Graphical Models; Simulation; Physical constraints; Virtual try-on; Motion sequences |
| title | GarTemFormer: Temporal transformer-based for optimizing virtual garment animation |
| title_full | GarTemFormer: Temporal transformer-based for optimizing virtual garment animation |
| title_fullStr | GarTemFormer: Temporal transformer-based for optimizing virtual garment animation |
| title_full_unstemmed | GarTemFormer: Temporal transformer-based for optimizing virtual garment animation |
| title_short | GarTemFormer: Temporal transformer-based for optimizing virtual garment animation |
| title_sort | gartemformer temporal transformer based for optimizing virtual garment animation |
| topic | Simulation; Physical constraints; Virtual try-on; Motion sequences |
| url | http://www.sciencedirect.com/science/article/pii/S1524070324000237 |
| work_keys_str_mv | AT jiazhemiao gartemformertemporaltransformerbasedforoptimizingvirtualgarmentanimation AT taopeng gartemformertemporaltransformerbasedforoptimizingvirtualgarmentanimation AT feifang gartemformertemporaltransformerbasedforoptimizingvirtualgarmentanimation AT xinronghu gartemformertemporaltransformerbasedforoptimizingvirtualgarmentanimation AT lili gartemformertemporaltransformerbasedforoptimizingvirtualgarmentanimation |
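
The description above says the cost of spatiotemporal attention is reduced by aligning input features and computing attention at each spatial position in a batch-wise manner. The paper's code is not part of this record, so the following is only a minimal PyTorch sketch of that general idea; the module name, tensor layout, and dimensions are all assumptions, not the authors' implementation.

```python
# Hedged sketch: temporal attention computed per spatial position by folding
# positions into the batch axis. Names and shapes are illustrative only.
import torch
import torch.nn as nn

class BatchwiseTemporalAttention(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, positions, d_model) -- aligned input features.
        b, t, p, c = x.shape
        # Fold the spatial positions into the batch dimension so a single
        # batched attention call covers every position's temporal sequence.
        x = x.permute(0, 2, 1, 3).reshape(b * p, t, c)
        out, _ = self.attn(x, x, x)  # self-attention over the frame axis
        return out.reshape(b, p, t, c).permute(0, 2, 1, 3)
```

Computed this way, each layer costs on the order of p·t² attention operations rather than the (p·t)² of full spatiotemporal attention, which is consistent with the complexity motivation stated in the abstract.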
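The feature fusion step that merges shape and motion features is likewise not specified in this record; a common realization is concatenation followed by a small MLP. The sketch below assumes that design, with invented dimensions and names.

```python
import torch
import torch.nn as nn

class ShapeMotionFusion(nn.Module):
    """Hypothetical fusion of per-frame shape and motion features by
    concatenation plus a two-layer MLP; not the paper's confirmed design."""

    def __init__(self, d_shape: int = 128, d_motion: int = 128, d_out: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_shape + d_motion, d_out),
            nn.GELU(),
            nn.Linear(d_out, d_out),
        )

    def forward(self, shape_feat: torch.Tensor, motion_feat: torch.Tensor) -> torch.Tensor:
        # shape_feat: (batch, frames, d_shape); motion_feat: (batch, frames, d_motion)
        return self.mlp(torch.cat([shape_feat, motion_feat], dim=-1))
```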
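Finally, the record states that penetrations between clothing and the body are resolved in post-processing to yield collision-free sequences. A standard way to do this is to push any garment vertex that falls behind the body surface back out along the local body normal; the NumPy sketch below assumes that strategy, and every name in it is hypothetical since the record does not give the actual procedure.

```python
import numpy as np

def resolve_penetrations(garment_v, body_v, body_n, eps=2e-3):
    """Push garment vertices detected inside the body back outside.

    Generic post-processing sketch (assumed, not taken from the paper):
    each garment vertex is paired with its nearest body vertex; if it lies
    behind that vertex's outward unit normal, it is projected back to the
    surface plus a small safety margin `eps` (in meters).
    """
    # Nearest body vertex for every garment vertex (brute force for clarity).
    d2 = ((garment_v[:, None, :] - body_v[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    # Signed distance of each garment vertex along the local body normal.
    depth = ((garment_v - body_v[nearest]) * body_n[nearest]).sum(-1)
    inside = depth < eps
    fixed = garment_v.copy()
    # Move penetrating vertices to the surface plus the margin.
    fixed[inside] += (eps - depth[inside])[:, None] * body_n[nearest[inside]]
    return fixed
```

Applied per frame after the network's prediction, a step like this enforces the collision-free property the abstract claims for the output sequences.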