TY - GEN
T1 - Texture Generation Using a Graph Generative Adversarial Network and Differentiable Rendering
AU - Dharma, K. C.
AU - Morrison, Clayton T.
AU - Walls, Bradley
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
AB - Novel photo-realistic texture synthesis is an important task for generating novel scenes, including asset generation for 3D simulations. However, to date, these methods predominantly generate textured objects in 2D space. If we rely on 2D object generation, then we need to make a computationally expensive forward pass each time we change the camera viewpoint or lighting. Recent work that can generate textures in 3D requires 3D component segmentation that is expensive to acquire. In this work, we present a novel conditional generative architecture that we call a graph generative adversarial network (GGAN) that can generate textures in 3D by learning object component information in an unsupervised way. In this framework, we do not need an expensive forward pass whenever the camera viewpoint or lighting changes, and we do not need expensive 3D part information for training, yet the model can generalize to unseen 3D meshes and generate appropriate novel 3D textures. We compare this approach against state-of-the-art texture generation methods and demonstrate that the GGAN obtains significantly better texture generation quality (according to Fréchet inception distance). We release our model source code as open source (https://github.com/ml4ai/ggan).
KW - 3D texture synthesis
KW - Differentiable rendering
KW - Graph neural networks
UR - http://www.scopus.com/inward/record.url?scp=85147995861&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147995861&partnerID=8YFLogxK
DO - 10.1007/978-3-031-25825-1_28
M3 - Conference contribution
AN - SCOPUS:85147995861
SN - 978-3-031-25824-4
T3 - Lecture Notes in Computer Science
SP - 388
EP - 401
BT - Image and Vision Computing - 37th International Conference, IVCNZ 2022, Revised Selected Papers
A2 - Yan, Wei Qi
A2 - Nguyen, Minh
A2 - Stommel, Martin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 37th International Conference on Image and Vision Computing New Zealand, IVCNZ 2022
Y2 - 24 November 2022 through 25 November 2022
ER -